14,410,746
https://en.wikipedia.org/wiki/Neuropeptides%20B/W%20receptor%202
Neuropeptides B/W receptor 2, also known as NPBW2, is a human protein encoded by the NPBWR2 gene. The protein encoded by this gene is an integral membrane protein and G protein-coupled receptor. The encoded protein is similar in sequence to another G protein-coupled receptor (GPR7), and it is structurally similar to opioid and somatostatin receptors. This protein binds neuropeptides B and W. This gene is intronless and is expressed primarily in the frontal cortex of the brain. See also Neuropeptide B/W receptor References Further reading External links G protein-coupled receptors
Neuropeptides B/W receptor 2
[ "Chemistry" ]
139
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,911
https://en.wikipedia.org/wiki/Sodium%20maleonitriledithiolate
Sodium maleonitriledithiolate is the chemical compound described by the formula . The name refers to the cis compound, structurally related to maleonitrile (). Maleonitriledithiolate is often abbreviated mnt. It is a "dithiolene", i.e. a chelating alkene-1,2-dithiolate. It is a prototypical non-innocent ligand in coordination chemistry. Several complexes are known, such as . The salt is synthesized by treating carbon disulfide with sodium cyanide to give the cyanodithioformate salt, which eliminates elemental sulfur in aqueous solution: The compound was first described in 1958. References Thiolates Alkene derivatives Sodium compounds Nitriles Substances discovered in the 1950s
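The reaction equations referred to in the synthesis description above did not survive extraction. A plausible reconstruction of the commonly cited two-step route (an assumption based on the standard preparation, not text recovered from the article) is:

```latex
% Step 1: sodium cyanide adds to carbon disulfide, giving the cyanodithioformate salt
\[ \mathrm{NaCN + CS_2 \longrightarrow NaS_2C{-}CN} \]
% Step 2: in aqueous solution the cyanodithioformate dimerizes with loss of
% elemental sulfur, giving sodium maleonitriledithiolate (Na2mnt)
\[ \mathrm{2\,NaS_2C{-}CN \longrightarrow Na_2[S_2C_2(CN)_2] + 2\,S} \]
```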
Sodium maleonitriledithiolate
[ "Chemistry" ]
164
[ "Thiolates", "Nitriles", "Functional groups" ]
14,411,227
https://en.wikipedia.org/wiki/Critical%20state%20soil%20mechanics
Critical state soil mechanics is the area of soil mechanics that encompasses the conceptual models representing the mechanical behavior of saturated remoulded soils based on the critical state concept. At the critical state, the relationship between the forces applied to the soil (stress) and the resulting deformation (strain) becomes constant. The soil will continue to deform, but the stress will no longer increase. Forces are applied to soils in a number of ways, for example when they are loaded by foundations, or unloaded by excavations. The critical state concept is used to predict the behaviour of soils under various loading conditions, and geotechnical engineers use the critical state model to estimate how soil will behave under different stresses. The basic concept is that soil and other granular materials, if continuously distorted until they flow as a frictional fluid, will come into a well-defined critical state. In practical terms, the critical state can be considered a failure condition for the soil. It is the point at which the soil cannot sustain any additional load without undergoing continuous deformation, in a manner similar to the behaviour of fluids. Certain properties of the soil, like porosity, shear strength, and volume, reach characteristic values. These properties are intrinsic to the type of soil and its initial conditions. Formulation The Critical State concept is an idealization of the observed behavior of saturated remoulded clays in triaxial compression tests, and it is assumed to apply to undisturbed soils. It states that soils and other granular materials, if continuously distorted (sheared) until they flow as a frictional fluid, will come into a well-defined critical state. At the onset of the critical state, shear distortions occur without any further changes in mean effective stress p', deviatoric stress q (or yield stress, σ_y, in uniaxial tension according to the von Mises yielding criterion), or specific volume v = 1 + e, where e is the void ratio; that is, ∂p'/∂ε_s = ∂q/∂ε_s = ∂v/∂ε_s = 0, with ε_s the shear strain. For triaxial conditions the invariants are p' = (σ'_1 + 2σ'_3)/3 and q = σ'_1 − σ'_3. All critical states, for a given soil, form a unique line called the Critical State Line (CSL), defined by the following equations in (p', q, v) space: q = M p' and v = Γ − λ ln p', where M, Γ, and λ are soil constants. The first equation determines the magnitude of the deviatoric stress q needed to keep the soil flowing continuously as the product of a frictional constant M and the mean effective stress p'. The second equation states that the specific volume v occupied by a unit volume of flowing particles will decrease as the logarithm of the mean effective stress increases. History In an attempt to advance soil testing techniques, Kenneth Harry Roscoe of Cambridge University, in the late forties and early fifties, developed a simple shear apparatus in which his successive students attempted to study the changes in conditions in the shear zone both in sand and in clay soils. In 1958 a study of the yielding of soil, based on some Cambridge data from the simple shear apparatus tests and on much more extensive data from triaxial tests at Imperial College London from research led by Professor Sir Alec Skempton, led to the publication of the critical state concept. Roscoe obtained his undergraduate degree in mechanical engineering, and his experiences trying to create tunnels to escape while held as a prisoner of war by the Nazis during WWII introduced him to soil mechanics. Subsequent to this 1958 paper, concepts of plasticity were introduced by Schofield and published in his textbook. 
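The two CSL relations can be checked numerically. The following is a minimal sketch in which the soil constants M, Γ (Gamma) and λ (lam) take illustrative assumed values rather than values from the article; it simply evaluates the critical state deviatoric stress and specific volume over a range of mean effective stresses.

```python
import math

# Illustrative soil constants (assumed values, not from the article)
M = 0.95       # slope of the CSL in q-p' space (frictional constant)
Gamma = 2.06   # specific volume on the CSL at p' = 1 kPa
lam = 0.16     # slope of the CSL in v-ln(p') space

def csl_deviatoric_stress(p_eff):
    """Deviatoric stress q on the critical state line: q = M * p'."""
    return M * p_eff

def csl_specific_volume(p_eff):
    """Specific volume v on the critical state line: v = Gamma - lam * ln(p')."""
    return Gamma - lam * math.log(p_eff)

for p_eff in (10.0, 50.0, 100.0, 400.0):  # mean effective stress in kPa
    print(f"p' = {p_eff:6.1f} kPa  ->  q = {csl_deviatoric_stress(p_eff):6.1f} kPa, "
          f"v = {csl_specific_volume(p_eff):5.3f}")
```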
Schofield was taught at Cambridge by Prof. John Baker, a structural engineer who was a strong believer in designing structures that would fail "plastically". Prof. Baker's theories strongly influenced Schofield's thinking on soil shear. Prof. Baker's views were developed from his pre-war work on steel structures and further informed by his wartime experiences assessing blast-damaged structures and with the design of the "Morrison Shelter", an air-raid shelter which could be located indoors . Original Cam-Clay Model The name cam clay asserts that the plastic volume change typical of clay soil behaviour is due to mechanical stability of an aggregate of small, rough, frictional, interlocking hard particles. The Original Cam-Clay model is based on the assumption that the soil is isotropic, elasto-plastic, deforms as a continuum, and it is not affected by creep. The yield surface of the Cam clay model is described by the equation where is the equivalent stress, is the pressure, is the pre-consolidation pressure, and is the slope of the critical state line in space. The pre-consolidation pressure evolves as the void ratio () (and therefore the specific volume ) of the soil changes. A commonly used relation is where is the virgin compression index of the soil. A limitation of this model is the possibility of negative specific volumes at realistic values of stress. An improvement to the above model for is the bilogarithmic form where is the appropriate compressibility index of the soil. Modified Cam-Clay Model Professor John Burland of Imperial College who worked with Professor Roscoe is credited with the development of the modified version of the original model. The difference between the Cam Clay and the Modified Cam Clay (MCC) is that the yield surface of the MCC is described by an ellipse and therefore the plastic strain increment vector (which is perpendicular to the yield surface) for the largest value of the mean effective stress is horizontal, and hence no incremental deviatoric plastic strain takes place for a change in mean effective stress (for purely hydrostatic states of stress). This is very convenient for constitutive modelling in numerical analysis, especially finite element analysis, where numerical stability issues are important (as a curve needs to be continuous in order to be differentiable). The yield surface of the modified Cam-clay model has the form where is the pressure, is the equivalent stress, is the pre-consolidation pressure, and is the slope of the critical state line. Critique The basic concepts of the elasto-plastic approach were first proposed by two mathematicians Daniel C. Drucker and William Prager (Drucker and Prager, 1952) in a short eight page note. In their note, Drucker and Prager also demonstrated how to use their approach to calculate the critical height of a vertical bank using either a plane or a log spiral failure surface. Their yield criterion is today called the Drucker-Prager yield criterion. Their approach was subsequently extended by Kenneth H. Roscoe and others in the soil mechanics department of Cambridge University. Critical state and elasto-plastic soil mechanics have been the subject of criticism ever since they were first introduced. The key factor driving the criticism is primarily the implicit assumption that soils are made of isotropic point particles. Real soils are composed of finite size particles with anisotropic properties that strongly determine observed behavior. 
Consequently, models based on a metals based theory of plasticity are not able to model behavior of soils that is a result of anisotropic particle properties, one example of which is the drop in shear strengths post peak strength, i.e., strain-softening behavior. Because of this elasto-plastic soil models are only able to model "simple stress-strain curves" such as that from isotropic normally or lightly over consolidated "fat" clays, i.e., CL-ML type soils constituted of very fine grained particles. Also, in general, volume change is governed by considerations from elasticity and, this assumption being largely untrue for real soils, results in very poor matches of these models to volume changes or pore pressure changes. Further, elasto-plastic models describe the entire element as a whole and not specifically conditions directly on the failure plane, as a consequence of which, they do not model the stress-strain curve post failure, particularly for soils that exhibit strain-softening post peak. Finally, most models separate out the effects of hydrostatic stress and shear stress, with each assumed to cause only volume change and shear change respectively. In reality, soil structure, being analogous to a "house of cards," shows both shear deformations on the application of pure compression, and volume changes on the application of pure shear. Additional criticisms are that the theory is "only descriptive," i.e., only describes known behavior and lacking the ability to either explain or predict standard soil behaviors such as, why the void ratio in a one dimensional compression test varies linearly with the logarithm of the vertical effective stress. This behavior, critical state soil mechanics simply assumes as a given. For these reasons, critical-state and elasto-plastic soil mechanics have been subject to charges of scholasticism; the tests to demonstrated its validity are usually "conformation tests" where only simple stress-strain curves are demonstrated to be modeled satisfactorily. The critical-state and concepts surrounding it have a long history of being "scholastic," with Sir Alec Skempton, the “founding father” of British soil mechanics, attributed the scholastic nature of CSSM to Roscoe, of whom he said: “…he did little field work and was, I believe, never involved in a practical engineering job.”.In the 1960s and 1970s, Prof. Alan Bishop at Imperial College used to routinely demonstrate the inability of these theories to match the stress-strain curves of real soils. Joseph (2013) has suggested that critical-state and elasto-plastic soil mechanics meet the criterion of a “degenerate research program” a concept proposed by the philosopher of science Imre Lakatos, for theories where excuses are used to justify an inability of theory to match empirical data. Response The claims that critical state soil mechanics is only descriptive and meets the criterion of a degenerate research program have not been settled. Andrew Jenike used a logarithmic-logarithmic relation to describe the compression test in his theory of critical state and admitted decreases in stress during converging flow and increases in stress during diverging flow. Chris Szalwinski has defined a critical state as a multi-phase state at which the specific volume is the same in both solid and fluid phases. Under his definition the linear-logarithmic relation of the original theory and Jenike's logarithmic-logarithmic relation are special cases of a more general physical phenomenon. 
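The Modified Cam-Clay yield surface discussed earlier, together with the split of a stress state into volumetric and distortional parts used in the stress tensor formulations below, can be illustrated with a minimal numerical sketch. The elliptical yield function is written here in the commonly quoted form f = (q/M)^2 + p'(p' − p_c); the constants and the trial triaxial stress state are illustrative assumptions, not values from the article.

```python
import numpy as np

def stress_invariants(sigma_eff):
    """Mean effective stress p' and deviatoric stress q from a 3x3 effective stress tensor."""
    p = np.trace(sigma_eff) / 3.0                  # volumetric (hydrostatic) part
    s = sigma_eff - p * np.eye(3)                  # deviatoric (distortional) part
    q = np.sqrt(1.5 * np.tensordot(s, s))          # von Mises deviatoric stress
    return p, q

def mcc_yield(p, q, M=0.95, p_c=200.0):
    """Modified Cam-Clay yield function: f < 0 inside the ellipse, f = 0 on it."""
    return (q / M) ** 2 + p * (p - p_c)

# Drained triaxial state: axial stress 250 kPa, cell pressure 100 kPa (illustrative)
sigma = np.diag([250.0, 100.0, 100.0])
p, q = stress_invariants(sigma)
print(f"p' = {p:.1f} kPa, q = {q:.1f} kPa, f = {mcc_yield(p, q):.1f}")
```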
Stress tensor formulations [This section presented the plane-stress, plane-strain and triaxial states of stress in matrix form, separating each stress state matrix into its distortional and volumetric parts for drained and undrained conditions; the matrices were lost in extraction.] Example solution in matrix form The following data were obtained from a conventional triaxial compression test on a saturated (B = 1), normally consolidated simple clay (Ladd, 1964). The cell pressure was held constant at 10 kPa, while the axial stress was increased to failure (axial compression test). [The step-by-step matrix solution was lost in extraction.] Notes References Soil mechanics
Critical state soil mechanics
[ "Physics" ]
2,287
[ "Soil mechanics", "Applied and interdisciplinary physics" ]
14,411,733
https://en.wikipedia.org/wiki/Madelung%20equations
In theoretical physics, the Madelung equations, or the equations of quantum hydrodynamics, are Erwin Madelung's equivalent alternative formulation of the Schrödinger equation for a spinless non relativistic particle, written in terms of hydrodynamical variables, similar to the Navier–Stokes equations of fluid dynamics. The derivation of the Madelung equations is similar to the de Broglie–Bohm formulation, which represents the Schrödinger equation as a quantum Hamilton–Jacobi equation. History In the fall of 1926, Erwin Madelung reformulated Schrödinger's quantum equation in a more classical and visualizable form resembling hydrodynamics. His paper was one of numerous early attempts at different approaches to quantum mechanics, including those of Louis de Broglie and Earle Hesse Kennard. The most influential of these theories was ultimately de Broglie's through the 1952 work of David Bohm now called Bohmian mechanics Equations The Madelung equations are quantum Euler equations: where is the flow velocity, is the mass density, is the Bohm quantum potential, is the potential from the Schrödinger equation. The Madelung equations answer the question whether obeys the continuity equations of hydrodynamics and, subsequently, what plays the role of the stress tensor. The circulation of the flow velocity field along any closed path obeys the auxiliary quantization condition for all integers . Derivation The Madelung equations are derived by first writing the wavefunction in polar form with and both real and the associated probability density. Substituting this form into the probability current gives: where the flow velocity is expressed as However, the interpretation of as a "velocity" should not be taken too literal, because a simultaneous exact measurement of position and velocity would necessarily violate the uncertainty principle. Next, substituting the polar form into the Schrödinger equation and performing the appropriate differentiations, dividing the equation by and separating the real and imaginary parts, one obtains a system of two coupled partial differential equations: The first equation corresponds to the imaginary part of Schrödinger equation and can be interpreted as the continuity equation. The second equation corresponds to the real part and is also referred to as the quantum Hamilton-Jacobi equation. Multiplying the first equation by and calculating the gradient of the second equation results in the Madelung equations: with quantum potential Alternatively, the quantum Hamilton-Jacobi equation can be written in a form similar to the Cauchy momentum equation: with an external force defined as and a quantum pressure tensor The integral energy stored in the quantum pressure tensor is proportional to the Fisher information, which accounts for the quality of measurements. Thus, according to the Cramér–Rao bound, the Heisenberg uncertainty principle is equivalent to a standard inequality for the efficiency of measurements. Quantum energies The thermodynamic definition of the quantum chemical potential follows from the hydrostatic force balance above: According to thermodynamics, at equilibrium the chemical potential is constant everywhere, which corresponds straightforwardly to the stationary Schrödinger equation. Therefore, the eigenvalues of the Schrödinger equation are free energies, which differ from the internal energies of the system. The particle internal energy is calculated as and is related to the local Carl Friedrich von Weizsäcker correction. 
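The equations referenced in the sections above were lost in extraction. For completeness, a standard textbook form of the polar decomposition, the Madelung equations and the circulation condition is given below; the notation is an assumption consistent with the surrounding description, not recovered from the article.

```latex
% Polar (Madelung) form of the wavefunction; rho = |psi|^2 is the probability density
\[ \psi = \sqrt{\rho}\,e^{iS/\hbar}, \qquad \rho = |\psi|^2, \qquad \mathbf{u} = \frac{\nabla S}{m} \]
% Continuity equation (imaginary part of the Schroedinger equation)
\[ \partial_t \rho + \nabla\cdot(\rho\,\mathbf{u}) = 0 \]
% Quantum Euler (momentum) equation, from the gradient of the real part,
% with the Bohm quantum potential Q
\[ \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{m}\nabla\!\left(Q + V\right), \qquad Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}} \]
% Circulation quantization around any closed path
\[ \oint_{\Gamma} \mathbf{u}\cdot d\boldsymbol{\ell} = \frac{2\pi\hbar}{m}\,n, \qquad n \in \mathbb{Z} \]
```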
See also Quantum potential Quantum hydrodynamics Bohmian quantum mechanics Pilot wave theory Notes References Partial differential equations Quantum mechanics
Madelung equations
[ "Physics" ]
707
[ "Theoretical physics", "Quantum mechanics" ]
14,412,240
https://en.wikipedia.org/wiki/3-Mercapto-3-methylbutan-1-ol
3-Mercapto-3-methylbutan-1-ol, also known as MMB, is a common odorant found in food and cat urine. The aromas ascribed to MMB include catty, roasty, broth-like, meaty, and savory, or similar to cooked leeks. MMB is an organosulfur compound with the formula C5H12OS. Its structure consists of isopentane with a primary alcohol group and a tertiary thiol group attached to a β-carbon relative to the alcohol. MMB is found in the urine of leopards and domestic cats, and is considered an important semiochemical in male scent-marking. MMB is also a common odorant in food, including coffee, passionfruit juice, and Sauvignon Blanc wines. As a tertiary thiol, MMB is structurally similar to other "catty" thiols, including 3-mercapto-3-methyl-2-pentanone, 4-mercapto-4-methyl-2-pentanone, 8-mercapto-p-menthan-3-one, and 2-mercapto-2-methylbutane. Synthesis The compound can be produced through many methods. The most well-known reaction sequence begins with ethyl acetate, which is activated with lithium bis(trimethylsilyl)amide at the α-position and coupled with acetone to form ethyl 3-hydroxy-3-methylbutyrate. The 3-hydroxy-3-methylbutyrate is then brominated, treated with thiourea, and hydrolyzed to form 3-mercapto-3-methylbutyric acid. The compound is then reduced with lithium aluminum hydride to form 3-mercapto-3-methylbutanol. Since MMB is most often synthesized for use as a standard in isotope dilution assays, most instances of MMB synthesis in the chemical literature involve acetone-d6, giving deuterium-labelled 3-mercapto-3-methylbutanol. MMB in food and drink Besides its occurrence as a felinine degradation product, MMB was first identified in the plant kingdom in Vitis vinifera L. cv. Sauvignon Blanc. 3-Mercapto-3-methylbutan-1-ol has been used commercially as an aroma compound for certain foods such as Sauvignon Blanc wine and coffee. MMB has also been identified in passion fruit juice along with its acetate; there, MMB was formed by the action of a bacterial extract on CESFPs of passion fruit juice. One study looked at the influence of human whole saliva on odor-active thiols, specifically that of salivary enzymes breaking down MMB. The degradation of MMB is thought to be induced by enzymatic activity, and it proceeds much faster than for other volatile thiols. Another study found that the perception threshold of 3-mercapto-3-methylbutan-1-ol is 1500 ng/L. This study found that MMB had a "catty" odor, had an orthonasal odor threshold of 2 μg/L in water, and was found in concentrations from 150–1500 μg/kg in coffee. The formation of MMB in wine is brought about by the fermentation process. The pathway of formation for the aromatic precursors involves four important steps: enzymatic oxidation, metabolic processing of unsaturated fatty acids, cysteinylated or glutathionylated conjugation to aldehydes, and a β-lyase cleavage during alcoholic fermentation to release MMB. In this fermentation process, certain strains are characterized by an extracellular α-arabinofuranosidase, influencing the content of desirable varietal aromas; in particular, Metschnikowia pulcherrima releases varietal thiols including MMB. For Sauvignon Blanc, the contribution of volatile thiols to varietal aroma is quite significant, as the levels in wine usually exceed the threshold of detection. Unlike most aroma compounds found in wine, volatile thiols are unique in that they exist only in trace amounts in the berries. 
The intense passion fruit-type aroma of New Zealand Sauvignon Blanc wines is attributed to high concentrations of the varietal thiols. One vintner study found that the aromas caused by MMB in wine diminish rapidly over just a year in the bottle. Odorant in cat urine MMB synthesis, within the biological system of a cat's bladder, is regulated by many different factors including cauxin, age, and sex. Cauxin is an enzyme that acts as a nonspecific carboxylesterase; it is abundant in feline urine and converts 3-methylbutanol-cysteinylglycine (3MBCG) to felinine, with glycine as a side product. Upon formation, felinine gradually degrades into MMB. Prior to sexual maturation of cats, cauxin and MMB are not produced at significant levels, since this is a testosterone-dependent process, although the specific role of testosterone is not well understood. With little testosterone in the body in the first three months of their life, the concentrations of cauxin and 3-methylbutanol-cysteinylglycine are too low for proper reaction conditions. Biologically, this is logical, as the ability to utilize pheromones such as MMB for territory marking and finding mates is not needed during kittenhood. The testosterone dependence also explains why female cats do not have nearly as much cauxin and MMB as male cats, and in turn, why their urine does not have a species-specific odor. It also explains why the urine of neutered males, which produce far less testosterone than their intact counterparts, lacks this odor. After a cat reaches sexual maturity, a positive correlation is found between age and MMB production, due to an increase in cauxin production and its release in urine together with 3-methylbutanol-cysteinylglycine, allowing the reaction to occur more frequently. This is advantageous for older cats, as it allows them to leave a potent scent trail for female cats to follow and for rival male cats to stay away from. Studies have also demonstrated some wider predator-prey responses to MMB, cementing this molecule's role in wider ecological relationships. In wildlife, one study established that African wildcats respond to MMB dispensers, marking the territory near them at higher rates than near dispensers without MMB, establishing that they recognize the scent. Small mammals have also demonstrated recognition of the scent, rolling around where the wildcats have urinated in order to use the scent of MMB to their advantage, disguising themselves in the scent of a large predator to ward off their own predators. Laboratory experiments have demonstrated that MMB does not have the expected repelling effect on mice, which are natural prey of cats; this complicates the picture, since small mammals use it as a repellent defensive mechanism. Further investigation into this dynamic is needed. See also Cat pheromone References Thiols Primary alcohols Pheromones
3-Mercapto-3-methylbutan-1-ol
[ "Chemistry" ]
1,525
[ "Organic compounds", "Pheromones", "Thiols", "Chemical ecology" ]
14,413,016
https://en.wikipedia.org/wiki/Track%20automation
Track automation, or sometimes simply automation, refers to the recording or handling of time-based controlling data in time-based computer applications such as digital audio workstations, video editing software and computer animation software. Examples Multitrack audio software In modern DAWs virtually every parameter can usually be automated, be it a track's volume setting, an applied filter or a software synthesizer. Example automations in this context include: The volume of a track can change at specific points or continuously (fade-in/out/over) The panning of a sound might change A filter sweep (the filter becomes more or less intensive, or its frequency limits change) To achieve automation, the user either turns knobs or faders on a physical controller connected to the computer, sets keyframes with the mouse, between which the computer interpolates, or draws entire data curves. Animation software The user sets keyframes for, e.g., the position/rotation/size of an object or the position/angle/focus of a camera, and this movement data can be altered over time. Video editing software Blending between two clips. The track automation curve affects how one image changes into the other, be it slow or fast, with or without acceleration, perhaps even back and forth if a sine-like wave is used. See also Control voltage MIDI References Audio engineering
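The keyframe-and-interpolation behaviour described above can be illustrated with a small sketch. The keyframe times and gain values below are made-up examples, and linear interpolation is only one of the ramp shapes a DAW might offer.

```python
# Minimal sketch of track automation: linear interpolation between volume keyframes.
# Keyframes are (time_in_seconds, gain) pairs, e.g. a fade-in followed by a fade-out.
keyframes = [(0.0, 0.0), (2.0, 1.0), (8.0, 1.0), (10.0, 0.0)]

def automated_gain(t, keys):
    """Return the interpolated gain at time t for a sorted list of keyframes."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)   # position between the two keyframes
            return v0 + frac * (v1 - v0)  # linear ramp; a DAW may also offer curves

for t in (0.0, 1.0, 5.0, 9.0, 10.0):
    print(f"t = {t:4.1f} s -> gain {automated_gain(t, keyframes):.2f}")
```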
Track automation
[ "Engineering" ]
279
[ "Electrical engineering", "Audio engineering" ]
14,413,905
https://en.wikipedia.org/wiki/Debtor%20days
The debtor days ratio measures how quickly cash is being collected from debtors. The longer it takes for a company to collect, the greater the number of debtor days. Debtor days can also be referred to as the debtor collection period. Another common ratio is the creditor days ratio. Definition Debtor days = (trade receivables / annual credit sales) × 365, or, equivalently, Debtor days = trade receivables / (annual credit sales / 365) when sales are expressed per day. References Financial ratios
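As a worked illustration of the ratio defined above, the sketch below computes debtor days from year-end trade receivables and annual credit sales; the figures are invented for the example.

```python
def debtor_days(trade_receivables, annual_credit_sales, days=365):
    """Debtor days = (trade receivables / annual credit sales) * days in the period."""
    return trade_receivables / annual_credit_sales * days

# Example: 150,000 owed by customers against 1,200,000 of credit sales in the year
print(round(debtor_days(150_000, 1_200_000), 1))  # -> 45.6 days
```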
Debtor days
[ "Mathematics" ]
66
[ "Financial ratios", "Quantity", "Metrics" ]
14,414,683
https://en.wikipedia.org/wiki/Mass%20vaccination
Mass vaccination is a public policy effort to vaccinate a large number of people, possibly the entire population of the world or of a country or region, within a short period of time. This policy may be directed during a pandemic, when there is a localized outbreak or scare of a disease for which a vaccine exists, or when a new vaccine is invented. Under normal circumstances, vaccines are provided as part of an individual's medical care starting from birth and given as part of routine checkups. But there are times when there is a need to quickly vaccinate the population at large and provide easy access to the service. When this occurs, temporary clinics may be established around communities that can efficiently handle the many people within at once. Challenges of a mass vaccination effort include vaccine supply, logistics, storage, finding vaccinators and other necessary staff, vaccine safety and public outreach. Historic mass vaccinations Smallpox Early successes in eradication (prior to 1950) In 1947, after a man traveled from Mexico to New York City and developed smallpox, Dr. Israel Weinstein announced to the residents of New York the need to get vaccinated. Vaccine clinics were established throughout the city and within less than a month, 6,350,000 residents were vaccinated. This was enabled by improvements in vaccine production and storage. Prior to new developments, transportation represented a major issue and hindered mass vaccinations. Because smallpox vaccination requires a live virus, it originally required a sample to be transferred from person-to-person or animal-to-person directly. The creation of a liquid vaccine stored in capillary tubes marked a major advancement for the smallpox vaccine. This method involved the use of glycerol as a preservative and was significant for storage and transportation. In addition to these benefits, it enabled mass production through the use of animals, and ensured long term viability at temperatures below freezing. However, this method was insufficient to enable widespread vaccination in tropical regions of the world, and thus was largely restricted to temperate countries. Compulsory vaccinations were used throughout the beginning of the 20th century in a most of these countries, which led to the decline of smallpox. For countries such as the United States, Canada, the United Kingdom, and some other European countries, outbreaks were quickly shut down by strong public health policies. Soon, the more deadly Variola Major smallpox variant steadily declined, and endemics were only brought on by travelers from countries that lacked control over smallpox outbreaks. It's important to note that the milder Variola Minor smallpox variant remained prevalent until the mid-20th century, as it often didn't warrant hospital visits or was misdiagnosed. The success of health policies in controlling and eliminating smallpox by 1950s in many countries led some to believe that the world eradication of smallpox would be possible. Interest in worldwide eradication (1950-1959) The creation of a heat stable, freeze-dried vaccine occurred in the 1950s. Further improvements in freeze-drying technology allowed for the mass production of the vaccine at a commercial level. The Health Assembly, a group within the World Health organization (WHO), began discussing the possibility of eliminating smallpox between 1950 and 1955. The idea was ultimately rejected, as many viewed it an impossible task to take on. 
In 1958, a professor from the USSR, acting as a Health Assembly delegate, once again pushed the idea of smallpox being an issue for all countries, whether or not the disease was still endemic in them. He presented a report to the Eleventh World Health Assembly, which argued that worldwide eradication of the disease was possible, as shown by the success of countries that managed to eliminate it through health policy. This was particularly significant as the professor, Viktor Zhdanov, had come to the conclusion on his own, without knowledge of arguments from previous World Health Assemblies. In this Zhdanov Report, he used the USSR as an example, arguing that the success of mandatory vaccinations throughout his country proved that it was possible to eliminate the disease in any country. Zhdanov offered the support of the USSR, and backed the legitimacy of the report through the donation of millions of vaccine doses and previous offers of support to central-Asian countries. The method of eradication that was proposed involved the use of the newly developed freeze-dried vaccines and mandatory vaccination. Surveillance-containment programs were also mentioned, and these actually came to dominate in the later years of the eradication campaign. Over the course of the next year, resolutions were made coordinating the start of the program and aiming to ensure its success. During the Twelfth World Health Assembly in 1959, the proposal of an eradication campaign for smallpox was voted through successfully. Smallpox eradication program (1960-1966) The eradication of smallpox seemed to be easier and less costly than that of other diseases previously targeted for eradication. Smallpox had no vectors, as humans were the only reservoirs carrying the disease. Furthermore, the elimination of the disease would rely mostly on mass vaccination and did not require vector control. Directed by Donald Henderson, this first effort involved the use of mass vaccinations with a goal of having 80% of every country's population immunized. Although the program was brought forth by WHO, implementation would largely depend on individual governments. WHO would be responsible for supporting the programs through vaccine production and training of staff. Each country would be required to cover most of the costs and actual functions of the program. A lack of universal commitment from countries hindered this campaign, allowing smallpox to remain prevalent almost a decade later. This was particularly a problem in developing countries. The WHO was not designed to provide considerable material support and close collaboration between countries on a wide scale. Over the first few years after the program's initiation, a lack of donations of vaccines and money hindered its success. The WHO created the Expert Committee on Smallpox in 1964 due to the lack of progress. A report was released giving a clearer strategy to be implemented, in the form of different phases. Based on outbreaks that occurred in India in regions that claimed to have more than 80% vaccination rates, the committee determined that 100% of the population would need to be vaccinated in the first mass vaccination phase. After this, they would focus on stopping subsequent cases and investigating them. This was not well received during the Seventeenth World Health Assembly, in which many expressed doubts over its prospects of success, especially with extreme vaccine shortages following a lack of donations. 
It wasn't until 1965 that the USA increased commitment to the cause, yet not out of interest but because they were already starting a measles eradication campaign and felt this could be added on. This along with continued support from the USSR led the WHO to develop an intensified program for smallpox eradication, however many members still lacked confidence in this new programs success. Intensified smallpox eradication program (1967-1980) From 1967, the Intensified Smallpox Program now called for surveillance reporting and investigation in addition to mass vaccination. Teams were directed to find alternative or unique solutions in their regions. In the years following the initiation of this plan, the WHO saw an increase in qualified volunteers, contributions from countries and participation in their campaign. They worked on increasing training of staff and publicizing the program worldwide. Improvements in procedures and technology had a significant effect on advancing the program. Particularly, the invention of the bifurcated needle made administration of vaccines in the field more practical than the previously used jet-injectors. The number of outbreaks, instead of the percent of population vaccinated, became the new focus. By 1973, smallpox only remained a problem in five countries. Improved methods of surveillance and containment, as well as a large increase in support, was a critical part of finally eradicating smallpox. The regions would contain the spread out smallpox through vaccinating anyone exposed to an infected person; this was the method of ring vaccination. It would not be until May 8, 1980, during the World Health Assembly that smallpox was announced as officially eradicated. Criticism of mass vaccination Vaccination policies were not met without resistance, as countries that had mandatory vaccination policies saw a rise in antivaccination movements. In Brazil, compulsory vaccination was met with riots. The lack of control led to large outbreaks and many deaths. Other countries had more success in vaccination, which led to Variola Minor replacing Variola Major as the cause of smallpox outbreaks in these countries. Antivaccinationists rejected vaccination policy more, as this more mild form was not seen as significant. This was particularly an issue in the United States as only some states had compulsory vaccination, while others banned or lacked laws for it. Polio Poliomyelitis is a disease which causes lower body paralysis through the damage of motor neurons caused by three strains of the poliovirus. Only 1% of polio cases actually result in paralysis. In 1916, the United States experienced a polio epidemic which paralyzed over 27,000 people and lead to 6,000 deaths. These outbreaks gradually became worse and worse as it spread throughout the Americas and to Europe. Jonas Salk developed the first inactivated polio vaccine (IPV) in 1953 which was tested in a clinical trial that enrolled 1.6 million children in Canada, Finland and the United States. With the distribution of Salk's vaccine, cases decreased from 13.9 to 0.8 cases per 100,000 in a period of only 7 years from 1954 to 1961. By 1956, Albert Sabin had created the live-attenuated vaccine also known as the oral polio vaccine (OPV) which contained three types of wild polio strains. After almost two decades in 1972, Sabin decided to donate his vaccine strains to the World Health Organization (WHO) which greatly increased the distribution and accessibility of the vaccine across the world. 
In the years following the development of the vaccines, from 1977 to 1995, the proportion of children fully vaccinated with all three doses of OPV rose from 5% to 80%. In 1988, the World Health Assembly decided to make efforts to completely eradicate polio by the year 2000, with a large amount of the progress occurring before the target date. This effort was titled the Global Polio Eradication Initiative and has seen great success, with a 99% decrease in cases worldwide by 2018. When the global campaign began in 1988, there were over 125 polio-endemic countries, compared to only 20 by the year 2000. Wealthier countries with better infrastructure were able to use more resources and introduce better health strategies to achieve herd immunity early on. The WHO Region of the Americas declared itself polio-free in 1994. Following this enormous achievement, other WHO regions quickly followed, with the Western Pacific Region declared polio-free in 2000, the European Region in 2002 and the South-East Asia Region in 2014. Mass vaccination strategies such as National Immunization Days were key to the success of the oral polio vaccine (OPV). In South America, transmission rates severely declined in the mid-1980s following the invention and widespread use of the OPV. With such a high number of vaccinations within a short time frame, the overall incidence of polio decreased. Other countries, such as India, were able to vaccinate over 120 million children in large-scale vaccination days, which became a regular occurrence. Polio campaigns in America Several famous Americans helped pave the way for the acceptance of the polio vaccine in the United States. Franklin D. Roosevelt, one of the most famous polio patients in the world, created the National Foundation for Infantile Paralysis in 1938, which eventually became known as the March of Dimes. The March of Dimes funded a large portion of the polio research throughout the epidemic, funding that eventually resulted in the development of the vaccine by Jonas Salk. In the years following its invention and distribution, polio cases decreased from tens of thousands to only a handful per year. With the help of Elvis Presley, who took the vaccine publicly, the acceptance of the polio vaccine increased even further. This act embodied three of the most important pillars of a behavioral change campaign: social influence, social norms and examples. Elvis Presley used his social influence to normalize getting the polio vaccine, which increased vaccination rates among American youth to over 80% in just under 6 months. These types of campaigns were at the heart of the mass vaccination efforts in America. Barriers to eradication Despite the global efforts to vaccinate and eradicate polio, the virus still causes outbreaks every year. As of 2021, only wild poliovirus type 1 (WPV1) remains in circulation, and it is localized in Afghanistan and Pakistan. The circulating vaccine-derived poliovirus (cVDPV) caused outbreaks in 32 countries in 2020. The cVDPV is a result of the live oral poliovirus vaccine reverting to an infectious form after extended circulation. This prompted an update to the Global Polio Eradication Initiative (GPEI) Strategy for the years 2022–2026. In the most recent update, in August 2020, the WHO African Region was declared polio-free, leaving only one of the six WHO regions with polio. The GPEI's new initiatives focused on eradicating WPV1 in both Afghanistan and Pakistan while also combating the new outbreaks of cVDPV. 
The difficulty is that the world must eliminate not only the wild-type poliovirus but also the vaccine-derived form, making eradication even more complex. While both the live and inactivated polio vaccines were widely successful in saving the world from the historic epidemics, there are still drawbacks to each of the vaccines. The OPV can revert to an infectious strain, which led to the rise of the cVDPV. While the inactivated polio vaccine (IPV) protected the host, it was not strong enough to generate intestinal mucosal immunity and therefore did not prevent the transmission of the virus. These weaknesses suggest that more innovative vaccines, or a combination of the two, are needed to completely eradicate polio. Swine flu vaccination In 1918, the deadly H1N1 influenza virus infected approximately 500 million people around the world and resulted in the deaths of 50 to 100 million people (3% to 5% of the world population). New York City created two major mass immunization programs: the first was the smallpox immunization program initiated in 1947, and the second was the swine influenza program in 1976. For the first mass immunization campaign in 1947, the New York City Department of Health contained the outbreak within a period of 29 days and successfully vaccinated 6.35 million people. Weinstein and colleagues established vaccination clinics at many locations, including the Department of Health's 125 Worth Street headquarters, the 21 district health centers, 60 child health clinics, and 13 municipal hospitals, in order to accommodate the high demand for vaccination. The smallpox vaccination effort was announced to be officially terminated on May 3, 1947. By contrast, it was rather surprising that the second mass immunization campaign in 1976, which was a national immunization effort, was only able to vaccinate 639,000 people against swine influenza over a period of 60 days. It was also noted that in 1976, the mass swine flu vaccination programme was discontinued after 362 cases of Guillain–Barré syndrome were identified among 45 million vaccinated people. The vast difference between the numbers of people vaccinated in 1947 and 1976, despite the outbreaks, mainly reflects the public's skepticism and perception of swine flu as posing minimal severity and a low threat. Swine flu, also known as the H1N1 influenza A virus, is a type of infectious respiratory disease that has caused a high economic and medical burden around the world every year. There are important lessons to be learned from the recent 'swine flu' pandemic. Decreasing the spread of infection, both in the community and within hospitals, means improving infection control and hygiene, including the use of masks, alcohol hand rubs and so on. A worldwide study was conducted which comprehensively analyzed adamantane resistance in H1N1 influenza viruses from 1918 to 2019 and showed that 77.32% of H1N1 influenza variants demonstrated resistance to adamantanes. This study emphasizes the importance of global surveillance, especially in developing countries, and of tracking the evolution of drug-resistant H1N1 influenza variants, in an effort to prevent another pandemic. Contemporary usage COVID-19 The introduction of multiple COVID-19 vaccines throughout the pandemic, such as the Pfizer, Moderna, Johnson & Johnson, and the newly approved Novavax vaccines, has helped large portions of the population get vaccinated. 
When COVID-19 was identified in December 2019, there were no vaccines readily available to vaccinate mass populations. By December 2020, the Pfizer vaccine was the first to receive emergency use authorization from the Food and Drug Administration. Under normal circumstances, vaccines can take up to 10–15 years to be made and approved. Without worldwide collaboration, funding for research, and rigorous guidelines for clinical trials, a vaccine could not have been developed so quickly. The types of vaccines that are available are messenger RNA, vector, and protein subunit. Messenger RNA vaccines work by giving cells specific instructions to make the S protein found on the surface of the COVID-19 virus. They do not infect recipients of the vaccine with the virus but allow the body to detect and fight the COVID-19 virus. Both the Pfizer and Moderna COVID-19 vaccines fall into the messenger RNA category. Vector vaccines also deliver instructions on how to make the S protein found on the surface of the virus, and they likewise do not cause the recipient to become infected with the virus after vaccination. The Johnson & Johnson vaccine falls into the vector category. Lastly, the subunit vaccine contains only the part of the virus needed to create an immune response: the S protein is the harmless subunit that will allow for an immune response when the COVID-19 virus is detected. The Novavax vaccine falls into the protein subunit category. When vaccinating large populations, an action plan must be created to organize which groups will receive the vaccination first. The California Department of Public Health created an action plan to vaccinate by population group: first immunocompromised groups, second the unvaccinated or not fully vaccinated, third those under 12, fourth boosters for those 65 and older, and lastly boosters for ages 12–64. Mass vaccination centers established at many locations, such as stadiums, also led to many people getting vaccinated. In the United States, NFL commissioner Roger Goodell offered the league's 30 stadiums as mass vaccination sites. As of April 2021, NFL stadiums had administered more than 2 million doses. By December 2021, more than 100,000 people had received vaccinations at Indianapolis Motor Speedway. Pharmacists have also played an important role in getting mass populations vaccinated, since they are a skilled and trained workforce able to help increase vaccination rates. Many people can turn to drug or convenience stores to get vaccinated, since these can be quick and easy places to access. Pharmacies have played a larger role in mass vaccination now more than ever due to the pandemic. Prior to the pandemic, some states did not allow pharmacists to vaccinate or administer flu vaccines. Now, pharmacies are contracting with state and federal governments, since they have become key players in vaccinations. Without the involvement of pharmacies, mass vaccination would be difficult to achieve. In most communities, 90% of people live within five miles of a pharmacy. Pharmacists can often be the quickest access to a healthcare provider, making pharmacies a desirable option for the public to come and get vaccinated. Not only have pharmacists been involved in COVID-19 vaccinations, but pharmacy technicians have as well. Pharmacy technicians have helped alleviate the workload on pharmacists amid the large increase in demand for vaccinations. They can also create more opportunities to interact with people who are hesitant about getting the COVID-19 vaccines. 
Pharmacy technicians can support pharmacists, which allows more vaccination services to be accommodated efficiently and safely. These efforts allow for an increase in vaccinations and help vaccinate large groups at a time. During the pandemic, pharmacists have had a fundamental role in sharing information about COVID-19 vaccines. Pharmacists are a quick resource for information and can help relieve some common concerns about reactions to the vaccines or misinformation about them. They are also advocates for getting vaccinated, since they are educators and vaccine administrators. Sharing information with the public about COVID-19 vaccines can help increase vaccination rates. Since pharmacists are easily accessible in the community setting, they can help motivate or encourage getting vaccinated, helping decrease preventable infections or diseases such as COVID-19. Mass vaccination with COVID-19 vaccines is important to help stop the spread of the coronavirus and eventually end the pandemic. Individual governments have been allocating billions of dollars to increase production of vaccines to help with the current global manufacturing need. Countries such as the United States, Canada, and Australia were able to receive many vaccines early on because they are wealthier countries. They received enough doses to vaccinate their own populations, but this left other lower-income countries with a limited supply of the vaccines. With some countries receiving more vaccines than others, distribution is inequitable, which can increase the risk of new outbreaks. Without proper global vaccine distribution, it will be more difficult to end the pandemic and carry out mass vaccination as a global effort. Amid new strains of the coronavirus such as the omicron variant, scientists and healthcare officials have raised concern about reduced effectiveness of the available vaccines. In response to this concern, countries have encouraged booster shots for most of their population. The World Health Organization would like to prioritize unvaccinated people over booster doses so that more of the population will have received their initial dose. References Vaccination
Mass vaccination
[ "Biology" ]
4,614
[ "Vaccination" ]
4,086,059
https://en.wikipedia.org/wiki/Thermoplastic%20olefin
Thermoplastic olefin, thermoplastic polyolefin (TPO), or olefinic thermoplastic elastomers refer to polymer/filler blends usually consisting of some fraction of a thermoplastic, an elastomer or rubber, and usually a filler. Outdoor applications such as roofing frequently contain TPO because it does not degrade under solar UV radiation, a common problem with nylons. TPO is used extensively in the automotive industry. Materials Thermoplastics Thermoplastics may include polypropylene (PP), polyethylene (PE), block copolymer polypropylene (BCPP), and others. Fillers Common fillers include, though are not restricted to, talc, fiberglass, carbon fiber, wollastonite, and MOS (Metal Oxy Sulfate). Elastomers Common elastomers include ethylene propylene rubber (EPR), EPDM (EP-diene rubber), ethylene-octene (EO), ethylbenzene (EB), and styrene ethylene butadiene styrene (SEBS). Currently there is a great variety of commercially available rubbers and BCPPs. They are produced using regioselective and stereoselective catalysts known as metallocenes. The metallocene catalyst becomes embedded in the polymer and cannot be recovered. Creation Components for TPO are blended together at 210–270 °C under high shear. A twin screw extruder or a continuous mixer may be employed to achieve a continuous stream, or a Banbury compounder may be employed for batch production. A higher degree of mixing and dispersion is achieved in the batch process, but the superheated batch must immediately be processed through an extruder to be pelletized into a transportable intermediate. Thus batch production essentially adds an additional cost step. Structure The geometry of the metallocene catalyst will determine the sequence of chirality in the chain (i.e. atactic, syndiotactic, or isotactic), as well as the average block length, molecular weight and distribution. These characteristics will in turn govern the microstructure of the blend. As in metal alloys, the properties of a TPO product depend greatly upon controlling the size and distribution of the microstructure. PP and PE form lamellar crystallites separated by amorphous regions that can grow into a variety of microstructures, ranging from single crystals from dilute solution crystallization to fibrous crystals and shish-kebab structures. Thin films from quiescent melts can form spherulitic impinging structures that display cylindrically symmetric birefringence. The PP and PE components of a blend constitute the "crystalline phase", and the rubber and branched PE chains and PE/PP end groups give the "amorphous phase". If PP and PE are the dominant components of a TPO blend, then the rubber fraction will be dispersed into a continuous matrix of "crystalline" polypropylene. If the fraction of rubber is greater than 40%, phase inversion may be possible when the blend cools, resulting in an amorphous continuous phase and a crystalline dispersed phase. This type of material is non-rigid, and is sometimes called TPR for ThermoPlastic Rubber. To increase the rigidity of a TPO blend, fillers exploit a surface tension phenomenon. By selecting a filler with a higher surface area per weight, a higher flexural modulus can be achieved. The specific density of TPO blends ranges from 0.92 to 1.1. Application TPO is easily processed by injection molding, profile extrusion, and thermoforming. However, TPO cannot be blown, or sustain a film thickness less than 1/4 mil (about 6 micrometers). References Thermoplastic elastomers Materials science Polymer physics
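The quoted density range of 0.92–1.1 depends on how much dense filler the blend carries. As a rough illustration only (an inverse rule-of-mixtures sketch with assumed component densities, not a formula or data from the article), the density of a PP/EPDM/talc blend can be estimated from its mass fractions:

```python
# Rough inverse rule-of-mixtures estimate of TPO blend density (illustrative only).
# Assumed component densities in g/cm^3: PP ~0.905, EPDM rubber ~0.86, talc ~2.75.
densities = {"PP": 0.905, "EPDM": 0.86, "talc": 2.75}

def blend_density(mass_fractions):
    """Density of a blend from mass fractions, assuming ideal mixing with no voids."""
    specific_volume = sum(w / densities[name] for name, w in mass_fractions.items())
    return 1.0 / specific_volume

print(round(blend_density({"PP": 0.65, "EPDM": 0.20, "talc": 0.15}), 3))  # about 0.99 g/cm^3
```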
Thermoplastic olefin
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
830
[ "Polymer physics", "Applied and interdisciplinary physics", "Materials science", "nan", "Polymer chemistry" ]
4,086,644
https://en.wikipedia.org/wiki/NGC%205921
NGC 5921 is a barred spiral galaxy located approximately 65 million light-years from the Solar System in the constellation Serpens Caput. It was discovered by William Herschel on 1 May 1786. In February 2001 a type II supernova (SN 2001X) was discovered in NGC 5921. It is a member of the Virgo III Groups, a series of galaxies and galaxy clusters strung out to the east of the Virgo Supercluster of galaxies. See also Spiral Galaxy NGC 1300 List of NGC objects (5001–6000) References External links SEDS NOAO: NGC 5921 Barred spiral galaxies Serpens 5921 09824 54849 Discoveries by William Herschel
NGC 5921
[ "Astronomy" ]
143
[ "Constellations", "Serpens" ]
4,086,741
https://en.wikipedia.org/wiki/Kudu%20dung-spitting
Kudu dung-spitting (Bokdrol Spoeg in Afrikaans) is a sport practiced by the Afrikaner community in South Africa. In the competition small, hard pellets of dung from the kudu antelope, are spat, with the farthest distance reached being the winner. Kudu dung-spitting is popular enough to have an annual world championship competition, with the formal sport beginning in 1994. Contests are held at some community bazaars, game festivals or tourism shows in the bushveld, Natal and Eastern Cape. Unlike many similar sports, the distance is measured from the marker to the place the dung pellet comes to rest, rather than where it initially hit the ground. The world record in the sport is a distance of set by Shaun van Rensburg of Addo. The male UK record is held by Dave Marshall at 11.7m in 2022 with his daughter Harri holding the women's record at 8.3m again in 2022. It is said that hunters began using the pellets in spitting competitions to "retaliate" at their prey, as the kudu is a notoriously difficult animal to hunt, and infamous for leaving a trail of dung pellets while managing to elude the hunter. "Similar" sports Records are also kept for cherry pit spitting, watermelon seed spitting, prune pit spitting, brown cricket spitting and tobacco juice spitting. In 2015, a sheep dung-spitting competition was introduced to Northern Ireland's Lady of The Lake Festival in County Fermanagh. References Feces Individual sports Sport in Africa
Kudu dung-spitting
[ "Biology" ]
326
[ "Feces", "Excretion", "Animal waste products", "Spitting" ]
4,086,759
https://en.wikipedia.org/wiki/NGC%207217
NGC 7217 is an unbarred spiral galaxy in the constellation Pegasus. Features NGC 7217 is a gas-poor system whose main features are the presence of several rings of stars concentric to its nucleus: three main ones – the outermost being the most prominent and the one that features most of the gas and star formation of this galaxy – plus several others inside the innermost one, discovered with the help of the Hubble Space Telescope; a feature that suggests NGC 7217's central regions have undergone several starbursts. There is also a very large and massive spheroid that extends beyond its disk. Other noteworthy features of this galaxy are the presence of a number of stars rotating around the galaxy's center in the opposite direction to most of the others, and two distinct stellar populations: one of intermediate age in its innermost regions and a younger, metal-poor one in its outermost regions. It has been suggested that these features were caused by a merger with another galaxy; in fact, computer simulations show that NGC 7217 could have been a large lenticular galaxy that merged with one or two smaller gas-rich galaxies of late Hubble type, becoming the spiral galaxy we see today. However, at present this galaxy is isolated in space, with no nearby major companions. More recent research presents a somewhat different scenario, in which NGC 7217's massive bulge and halo would have formed in a merger and the disk formed later (and is still growing), either by accreting gas from the intergalactic medium or from smaller gas-rich galaxies, or most likely from a previously existing reserve. See also Spiral Galaxy NGC 1512 NGC 7742, a very similar galaxy in the same constellation References External links NGC 7217 http://atlas.zevallos.com.br/ Unbarred spiral galaxies Ring galaxies Pegasus (constellation) 7217 11914 68096
NGC 7217
[ "Astronomy" ]
381
[ "Pegasus (constellation)", "Constellations" ]
4,086,786
https://en.wikipedia.org/wiki/Andy%20Carvin
Andy Carvin is an American blogger and a former senior product manager for online communities at National Public Radio (NPR). Carvin was the founding editor and former coordinator of the Digital Divide Network. He is now senior fellow and managing editor for the Atlantic Council's Digital Forensic Research Lab. Early life and education Carvin was born in Boston and raised in Indialantic, Florida alongside his older brother. His parents both worked for the Harris Corporation: father worked as a systems engineer, while his mother worked as a manager. He graduated from Melbourne High School in 1989, and from Northwestern University in 1993. Career When he was working for the Corporation for Public Broadcasting in 1994, he authored the website EdWeb, one of the first websites to advocate the use of the World Wide Web in education. In 1999, he was hired by the Benton Foundation to help develop Helping.org, a philanthropic website that eventually became known as Networkforgood.org. At the December 1999 US National Digital Divide Summit in Washington DC, President Bill Clinton announced the launch of the Digital Divide Network, a spin-off of Helping.org edited by Carvin. In 2001, he organized an email forum called SEPT11INFO, an emergency discussion forum in response to the September 11 attacks. Following the Boxing Day tsunami in 2004, he created the RSS aggregator Tsunami-Info.org, and served as a contributor to the TsunamiHelp collaborative blog. He also joined Global Voices Online at the end of 2004. In January 2005, Carvin began advocating mobile phone podcasting as a tool for citizen journalism and human rights monitoring; he called the concept "mobcasting". Utilizing free online tools including FeedBurner, Blogger and Audioblogger, Carvin demonstrated the potential of mobcasting at a February 2005 Harvard blogging conference and at The Gates, the Central Park art installation created by the artist Christo. He later demonstrated mobcasting as part of a collaborative blog called Katrina Aftermath, which allowed members of the public to post multimedia content regarding Hurricane Katrina. In May 2006, Carvin began serving as host on a blog called Learning.now on PBS, which explored "how new technology and Internet culture affect how educators teach and children learn". In September 2006, Andy Carvin became a staff member at NPR as their senior product manager for online communities. He founded NPR's social media desk in 2008, and stayed with the organization until 2013. Carvin accepted a position at First Look Media in February 2014. He also launched Reported.ly, an initiative that focused on reporting on issues related to social justice and human rights. He later worked at NowThis and the UBC Graduate School of Journalism in Vancouver. In 2019, Carvin was named a senior fellow to the Atlantic Council’s Digital Forensic Research Lab, which investigates online misinformation. Twitter journalism In late 2010, Carvin began sharing information about the popular revolution in Tunisia on Twitter, curating Twitter feeds and articles for an English-speaking audience. Carvin had traveled extensively in Tunisia, had many contacts there, and was able to develop others. In March 2011, Andy Carvin and his Twitter followers utilized crowdsourced research to debunk false stories that Israeli weapons were being used against the people of Libya. 
By April 2011, The Columbia Journalism Review dubbed Carvin a "living, breathing real-time verification system" and suggested his might be the best Twitter account to follow in the world. The Washington Post called him "a one-man Twitter news bureau". A few days before a foreign policy speech on the Middle East by President Barack Obama in mid-May 2011, the White House contacted Carvin and asked him to co-host a Twitter interview chat with a White House official. Although NPR had refused to allow the White House to specify particular reporters in the past, Mark Stencel, NPR's managing editor for digital news, granted the request, saying that Carvin was "uniquely suited" for the role. In late June 2011, Carvin traveled to Egypt, where he covered protests in Tahrir Square in Cairo. On August 21, 2011, as armed fighters rolled into the city of Tripoli, Libya, in a bid to oust Muammar Gaddafi from his 42-year rule of the country, cable news stations in the U.S. appeared unprepared to cover the breaking news event, but Carvin tweeted over 800 times, "recording the oral history in real time". He was profiled in Britain's The Guardian newspaper as "the man who tweets revolutions". Carvin donated the iPhone he used to tweet during the Arab Spring to the American History Museum. Awards and nominations For Carvin's work on mobcasting and the digital divide, he received a 2005 TR35 award from Technology Review, awarded annually to the 35 leading technology innovators under the age of 35. Carvin has also been honored as one of the top education technology advocates in eSchool News magazine and District Administration magazine. In July 2011, Carvin received a Special Distinction Award in the Knight-Batten Awards for Innovations in Journalism for his Twitter reporting. The Daily Dot recognized Carvin as second only to the online hacktivist group Anonymous in his influence on Twitter in the year 2011. In its writeup of Carvin, the Dot compared him to Edward R. Murrow, whose radio coverage of the London Blitz established him as a household name in the United States during World War II. In 2011 and 2012, Carvin's Twitter feed was included on Time Magazine's list of the year's 140 Best Twitter Feeds. Writing In 2013, Carvin published Distant Witness, a book covering his journalistic coverage of the Arab Spring. Carvin has written for The Atlantic and Politico. Personal life Carvin lives in Silver Spring, Maryland with his wife and two children. Notes and references External links Andy Carvin's personal website Digital Divide Network PBS learning.now (blog) EdWeb: Exploring Technology & School Reform The Gates @ Central Park Mobcasting (blog) Katrina Aftermath (blog) TsunamiHelp (blog) Mind the Gap: The Digital Divide as the Civil Rights Issue of the New Millennium 1999 essay by Andy Carvin, Multimedia Schools magazine Andy Carvin's Twitter feed 1971 births Living people 20th-century American male writers 20th-century American non-fiction writers 21st-century American male writers 21st-century American non-fiction writers American bloggers American male bloggers American reporters and correspondents Digital divide activists Journalists from Boston Journalists from Florida Non-profit technology Northwestern University alumni NPR personalities People from Indialantic, Florida People from Silver Spring, Maryland Shorty Award winners American video bloggers Writers from Boston Writers from Florida Writers from Maryland YouTubers from Boston YouTubers from Florida YouTubers from Maryland
Andy Carvin
[ "Technology" ]
1,391
[ "Information technology", "Non-profit technology" ]
4,086,824
https://en.wikipedia.org/wiki/Selective%20soldering
Selective soldering is the process of selectively soldering components to printed circuit boards and molded modules that could be damaged by the heat of a reflow oven or wave soldering in a traditional surface-mount technology (SMT) or through-hole technology assembly processes. This usually follows an SMT oven reflow process; parts to be selectively soldered are usually surrounded by parts that have been previously soldered in a surface-mount reflow process, and the selective-solder process must be sufficiently precise to avoid damaging them. Processes Assembly processes used in selective soldering include: Selective aperture tooling over wave solder: These tools mask off areas previously soldered in the SMT reflow soldering process, exposing only those areas to be selectively soldered in the tool's aperture or window. The tool and printed circuit board (PCB) assembly are then passed over wave soldering equipment to complete the process. Each tool is specific to a PCB assembly. Mass selective dip solder fountain: A variant of selective-aperture soldering in which specialized tooling (with apertures to allow solder to be pumped through it) represent the areas to be soldered. The PCB is then presented over the selective-solder fountain; all selective soldering of the PCB is soldered simultaneously as the board is lowered into the solder fountain. Each tool is specific to a PCB assembly. Miniature wave selective solder : This typically uses a round miniature pumped solder wave, similar to the end of a pencil or crayon, to sequentially solder the PCB. The process is slower than the two previous methods, but more accurate. The PCB may be fixed, and the wave solder pot moved underneath the PCB; alternately, the PCB may be articulated over a fixed wave or solder bath to undergo the selective-soldering process. Unlike the first two examples, this process is toolless. Laser Selective Soldering System: A new system, able to import CAD-based board layouts and use that data to position a laser to directly solder any point on the board. Its advantages are the elimination of thermal stress, its non-contact quality, consistent high-quality solder joints and flexibility. Soldering time averages one second per joint; stencils and solder masks may be eliminated from the circuit board to reduce manufacturing costs. Less-common selective soldering processes include: Hot-iron solder with wire-solder feed Induction solder with paste-solder, solder-laden pads or preforms and hot gas (including hydrogen), with a number of methods of presenting the solder Other selective soldering applications are non-electronic, such as lead-frame attachment to ceramic substrates, coil-lead attachment, SMT attachment (such as LEDs to PCBs) and fire sprinklers (where the fuse is low-temperature solder alloys). Regardless of the selective soldering equipment used, there are two types of selective flux applicators: spray and dropjet fluxers. The spray fluxer applies atomized flux to a specific area, while the dropjet fluxer is more precise; the choice depends on the circumstances surrounding the soldering application. Miniature wave selective solder fountain The miniature wave selective solder fountain type is widely used, yielding good results if the PCB design and manufacturing process are optimized. 
Key requirements for selective fountain type soldering are: Process Nozzle diameter selection according to solder-joint geometry, nearby component clearance, component lead height and wettable or non-wettable nozzle Solder temperature: Set value or actual value on plated through-hole part Contact time Preheating Flux type: No-clean, organic-based; method of fluxing (spray or dropjet) Soldering: Drag, dip or angle method Design Temperature requirement (for soldered part) and component selection Nearby SMD through-hole component clearance Ratio of component pin diameter to plated through-hole Component lead length Thermal decoupling Solder masking (green masking) distance from component pad Drop-Jet The Drop-Jet is an electromechanical device capable of depositing a droplet of flux on demand onto a surface such as a printed circuit board and/or a component pin. Thermal profiling The thermal profile of the selective process is critical, as with other common automated soldering techniques. Topside temperature measurements within the pre-heat stage must be verified, as with a conventional flow solder machine; additionally, flux activation must be verified as sufficient. A number of miniature profiling dataloggers are now available to make the process simpler, such as the Solderstar Pro units. Selective solder optimization A number of fixtures are available to allow daily checking of the selective solder process; these instruments allow the verification of machine parameters to be performed on a periodic basis. Parameters such as contact time, X/Y speeds, nozzle wave height and profile temperature can all be measured. Use of nitrogen atmosphere Selective soldering is normally undertaken in a nitrogen atmosphere. This prevents oxidation of the fountain surface and results in better wetting. Less flux is needed, with less left-over residue. The use of nitrogen results in clean, shiny joints without the need for PCB cleaning or brushing. References Printed circuit board manufacturing Soldering
Selective soldering
[ "Engineering" ]
1,082
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
4,086,864
https://en.wikipedia.org/wiki/NGC%204567%20and%20NGC%204568
NGC 4567 and NGC 4568 (nicknamed the Butterfly Galaxies or Siamese Twins) are a set of unbarred spiral galaxies about 60 million light-years away in the constellation Virgo. They were both discovered by William Herschel in 1784. They are part of the Virgo Cluster of galaxies. These galaxies are in the process of colliding and merging with each other, as studies of their distributions of neutral and molecular hydrogen show, with the highest star-formation activity in the part where they overlap. However, the system is still in an early phase of interaction. In about 500 million years the galaxies will coalesce into a single elliptical galaxy. Supernovae Four supernovae have been observed in the Butterfly Galaxies: SN 1990B (type Ib, mag. 16) was discovered by Saul Perlmutter and Carlton Pennypacker on 20 January 1990. SN 2004cc (type Ic, mag. 17.5) was discovered by the Lick Observatory Supernova Search (LOSS) on 10 June 2004. SN 2020fqv (type IIb, mag. 19) was discovered by the Automatic Learning for the Rapid Classification of Events (ALeRCE) on 31 March 2020. SN 2023ijd (type II, mag. 16.8) was discovered by ASAS-SN on 14 May 2023. Naming controversy The two galaxies were nicknamed "Siamese Twins" because they appear to be connected. On August 5, 2020, NASA announced that they would not use that nickname in an effort to avoid systemic discrimination in their terminology. See also Antennae Galaxies Eyes Galaxies Notes References External links Kopernik Space Images, Spiral Galaxies NGC 4568 and NGC 4567 aka "The Siamese Twins" : Supernova 2004cc, George Normandin (29 June 2004) Skyhound, The Siamese Twins SIMBAD, VCC 1673 : NGC 4567 -- Galaxy in Pair of Galaxies SIMBAD, VCC 1676 : NGC 4568 -- Galaxy in Pair of Galaxies NED, VV 219 NED, NGC 4567 NED, NGC 4568 Virgo (constellation) Virgo Cluster 4567 42064 Interacting galaxies Unbarred spiral galaxies Overlapping galaxies
NGC 4567 and NGC 4568
[ "Astronomy" ]
456
[ "Virgo (constellation)", "Constellations" ]
4,087,041
https://en.wikipedia.org/wiki/NGC%202736
NGC 2736 (also known as the Pencil Nebula) is a small part of the Vela Supernova Remnant, located near the Vela Pulsar in the constellation Vela. The nebula's linear appearance triggered its popular name. It resides about 815 light-years (250 parsecs) away from the Solar System. It is thought to be formed from part of the shock wave of the larger Vela Supernova Remnant. The Pencil Nebula is moving at roughly 644,000 kilometers per hour (400,000 miles per hour). History On 1 March 1835, John Herschel discovered this object at the Cape of Good Hope and described it as "eeF, L, vvmE; an extraordinary long narrow ray of excessively feeble light; position 19 ±. At least 20' long, extending much beyond the limits of the field...". This agrees perfectly with the ESO-Uppsala listing N2736 = E260-N14, a nebula with dimensions 30'x7', position angle of 20 and notes "Luminous filament". Harold Corwin adds that on the ESO IIIa-F film this nebula is the brightest patch of a huge supernova remnant (Gum Nebula) whose delicate wisps cover the field. A relatively bright star is immersed in N2736 (mentioned by Herschel). See also List of photographs considered the most important References External links Space.com article about NGC 2736 NASA feature Gum Nebula 2736 Vela (constellation)
NGC 2736
[ "Astronomy" ]
312
[ "Nebula stubs", "Vela (constellation)", "Astronomy stubs", "Constellations" ]
4,087,212
https://en.wikipedia.org/wiki/Aleksandar%20Totic
Aleksandar Totic is one of the original developers of the Mosaic browser. He was a co-founder of and partner at Netscape Communications Corporation. He was born in Belgrade, Serbia, on 23 September 1966. He moved to America after his degree from Kuwait was not recognized by the Yugoslav government, and currently lives in Palo Alto, California, in the San Francisco Bay Area. External links Mosaic - The First Global Web Browser Software engineers Serbian computer scientists Computer programmers Living people Year of birth missing (living people) Place of birth missing (living people)
Aleksandar Totic
[ "Technology", "Engineering" ]
108
[ "Software engineering", "Computing stubs", "Computer specialist stubs", "Software engineers" ]
4,087,321
https://en.wikipedia.org/wiki/Latent%20learning
Latent learning is the subconscious retention of information without reinforcement or motivation. In latent learning, behavior changes only when sufficient motivation appears later, after the information has already been subconsciously retained. Latent learning occurs when the observation of something, rather than the direct experience of it, affects later behavior. Observational learning can take many forms. A human observes a behavior and later repeats that behavior at another time (not as direct imitation), even though no one is rewarding them for doing so. In social learning theory, humans observe others receiving rewards or punishments, which invokes feelings in the observer and motivates them to change their behavior. In latent learning particularly, there is no observation of a reward or punishment. Latent learning is simply animals observing their surroundings with no particular motivation to learn their geography; however, at a later date, they are able to exploit this knowledge when there is motivation – such as the biological need to find food or escape trouble. The lack of reinforcement, association, or motivation attached to a stimulus is what differentiates this type of learning from other learning theories such as operant conditioning or classical conditioning. Comparison to other types of learning Classical conditioning Classical conditioning occurs when an animal eventually, and subconsciously, anticipates a biological stimulus such as food when it experiences a seemingly unrelated stimulus, owing to the repeated pairing of the two. One significant example of classical conditioning is Ivan Pavlov's experiment, in which dogs showed a conditioned response to a bell the experimenters had purposely tried to associate with feeding time. After the dogs had been conditioned, they no longer salivated only for the food, which was a biological need and therefore an unconditioned stimulus. The dogs began to salivate at the sound of a bell, the bell being a conditioned stimulus and the salivating now being a conditioned response to it. They salivated at the sound of a bell because they were anticipating food. On the other hand, latent learning occurs when an animal learns something even though it has no motivation and no stimulus associating a reward with the learning. Animals are therefore able simply to be exposed to information for its own sake and still take it in. One significant example of latent learning is rats subconsciously creating mental maps and later using that information to find a biological stimulus such as food faster once a reward is present. These rats already knew the map of the maze, even though there had been no motivation to learn the maze before the food was introduced. Operant conditioning Operant conditioning is the ability to tailor an animal's behavior using rewards and punishments. Latent learning, by contrast, is the tailoring of an animal's behavior by giving it time to create a mental map before a stimulus is introduced. Social learning theory Social learning theory suggests that behaviors can be learned through observation, but through actively cognizant observation. In this theory, observation leads to a change in behavior more often when rewards or punishments associated with specific behaviors are observed. Latent learning theory is similar in the observation aspect, but again it differs in the lack of reinforcement needed for learning. Early studies In a classic study by Edward C.
Tolman, three groups of rats were placed in mazes and their behavior observed each day for more than two weeks. The rats in Group 1 always found food at the end of the maze; the rats in Group 2 never found food; and the rats in Group 3 found no food for 10 days, but then received food on the eleventh. The Group 1 rats quickly learned to rush to the end of the maze; Group 2 rats wandered in the maze but did not preferentially go to the end. Group 3 acted the same as the Group 2 rats until food was introduced on Day 11; then they quickly learned to run to the end of the maze and did as well as the Group 1 rats by the next day. This showed that the Group 3 rats had learned about the organisation of the maze, but without the reinforcement of food. Until this study, it was largely believed that reinforcement was necessary for animals to learn such tasks. Other experiments showed that latent learning can happen in shorter durations of time, e.g. 3–7 days. Among other early studies, it was also found that animals allowed to explore the maze and then detained for one minute in the empty goal box learned the maze much more rapidly than groups not given such goal orientation. In 1949, John Seward conducted studies in which rats were placed in a T-maze with one arm coloured white and the other black. One group of rats had 30 mins to explore this maze with no food present, and the rats were not removed as soon as they had reached the end of an arm. Seward then placed food in one of the two arms. Rats in this exploratory group learned to go down the rewarded arm much faster than another group of rats that had not previously explored the maze. Similar results were obtained by Bendig in 1952 where rats were trained to escape from water in a modified T-maze with food present while satiated for food, then tested while hungry. Upon being returned to the maze while food deprived, the rats learned where the food was located at a rate that increased with the number of pre-exposures given the rat in the training phase. This indicated varying levels of latent learning. Most early studies of latent learning were conducted with rats, but a study by Stevenson in 1954 explored this method of learning in children. Stevenson required children to explore a series of objects to find a key, and then he determined the knowledge the children had about various non-key objects in the set-up. The children found non-key objects faster if they had previously seen them, indicating they were using latent learning. Their ability to learn in this way increased as they became older. In 1982, Wirsig and co-researchers used the taste of sodium chloride to explore which parts of the brain are necessary for latent learning in rats. Decorticate rats were just as able as normal rats to accomplish the latent learning task. More recent studies Latent learning in infants The human ability to perform latent learning seems to be a major contributor to why infants can use knowledge they learned while they did not have the skills to use them. For example, infants do not gain the ability to imitate until they are 6 months. In one experiment, one group of infants was exposed to hand puppets A and B simultaneously at the age of three-months. Another control group, the same age, was only presented to with puppet A. All of the infants were then periodically presented with puppet A until six-months of age. At six-months of age, the experimenters performed a target behavior on the first puppet while all the infants watched. 
Then, all the infants were presented with puppets A and B. The infants that had seen both puppets at three months of age imitated the target behavior on puppet B at a significantly higher rate than the control group, which had not seen the two puppets paired. This suggests that the pre-exposed infants had formed an association between the puppets without any reinforcement. This exhibits latent learning in infants, showing that infants can learn by observation even when they do not show any indication that they are learning until they are older. The impact of different drugs on latent learning Many drugs abused by humans imitate dopamine, the neurotransmitter that gives humans motivation to seek rewards. It has been shown that zebra-fish can still latently learn about rewards while lacking dopamine if they are given caffeine. If they were given caffeine before learning, then they could use the knowledge they learned to find the reward when they were given dopamine at a later time. Alcohol may impede latent learning. Some zebra-fish were exposed to alcohol before exploring a maze, then continued to be exposed to alcohol when a reward was introduced into the maze. It took these zebra-fish much longer to find the reward in the maze than the control group that had not been exposed to alcohol, even though they showed the same amount of motivation. However, it was shown that the longer the zebra-fish were exposed to alcohol, the less of an effect it had on their latent learning. Another experimental group consisted of zebra-fish representing alcohol withdrawal. The zebra-fish that performed the worst were those that had been exposed to alcohol for a long period and then had it removed before the reward was introduced. These fish lacked motivation, showed motor dysfunction, and seemed not to have latently learned the maze. Other factors impacting latent learning Though the specific area of the brain responsible for latent learning may not have been pinpointed, it was found that patients with medial temporal amnesia had particular difficulty with a latent learning task which required representational processing. Another study, conducted with mice, found intriguing evidence that the absence of a prion protein disrupts latent learning and other memory functions in the water maze latent learning task. A lack of phencyclidine was also found to impair latent learning in a water finding task. References Ethology Learning methods Memory
Latent learning
[ "Biology" ]
1,853
[ "Behavioural sciences", "Ethology", "Behavior" ]
4,087,426
https://en.wikipedia.org/wiki/Instinctive%20drift
Instinctive drift, alternately known as instinctual drift, is the tendency of an animal to revert to unconscious and automatic behaviour that interferes with learned behaviour from operant conditioning. The term instinctive drift was coined by Keller and Marian Breland, former students of B.F. Skinner at the University of Minnesota, who described the phenomenon as "a clear and utter failure of conditioning theory." B.F. Skinner was an American psychologist and father of operant conditioning (or instrumental conditioning), which is a learning strategy that teaches the performance of an action through reinforcement or punishment. It is the association between the behaviour and the reward or consequence that follows that determines whether an animal will maintain a behaviour or whether that behaviour will become extinct. Instinctive drift is a phenomenon in which such conditioning erodes and an animal reverts to its natural behaviour. B.F. Skinner B.F. Skinner was an American behaviourist inspired by John Watson's philosophy of behaviorism. Skinner was captivated by the idea of systematically controlling behaviour to produce desirable or beneficial outcomes. This passion led Skinner to become the father of operant conditioning. Skinner made significant contributions to the research concepts of reinforcement, punishment, schedules of reinforcement, behaviour modification and behaviour shaping. The mere existence of the instinctive drift phenomenon challenged Skinner's initial beliefs about operant conditioning and reinforcement. Operant conditioning Skinner described operant conditioning as strengthening behaviour through reinforcement. The consequences used in operant conditioning can consist of positive reinforcement, in which a desirable stimulus is added; negative reinforcement, in which an undesirable stimulus is taken away; positive punishment, in which an undesirable stimulus is added; and negative punishment, in which a desirable stimulus is taken away. Through these practices, animals shape their behaviour and are motivated to perform the learned behaviour to optimally benefit from rewards or to avoid punishment. It was through operant conditioning that the presence of instinctive drift was discovered. The Brelands The term instinctive drift was coined by the married couple Keller and Marian Breland Bailey, former psychology graduate students of B.F. Skinner at the University of Minnesota. Keller and Marian were recruited to work with B.F. Skinner on a project to train pigeons to pilot bombs towards targets to aid with World War II efforts. This project was terminated when the development of the atom bomb took precedence. The Brelands, however, still enthralled with the application of animal behaviour, adopted Skinner's principles and began a life of training animals. They profited from these animals performing complex and amusing behaviours for the public's entertainment. They named their successful business "Animal Behaviour Enterprises" in 1943. Their business soon gained nationwide attention and even had a partnership with General Mills to train chickens, via operant conditioning, for business promotion. Discovery Keller and Marian Breland were the discoverers of instinctive drift. They first noted this behavioural pattern when animals they had been training for years interrupted their learned behaviours to satisfy innate patterns of feeding behaviour. This discovery debunked the once-assumed ideas that animals are a "tabula rasa" prior to purposeful training and that all responses are equally conditionable.
The Brelands described their first exposure to this phenomenon when working with their chickens that had been trained to appear as if they were turning on a jukebox and subsequently dancing. The breakdown in operant conditioning appeared when over half of the chickens they had trained to stand on a platform developed an unplanned scratching or pecking pattern. The scratching pattern was subsequently used to create the "dancing chicken" performance. In raccoons The Brelands had their second, and more perplexing, encounter with instinctive drift when working with raccoons. They were training raccoons to perform a captivating sequence of events to aid with the advertisement of a bank. This project involved teaching raccoons to deposit money into a bank slot. The Brelands succeeded at yet another animal training project, as the raccoons initially performed the task of depositing coins into the bank very well. The Brelands then noticed that, over time and as the reinforcement schedule was spaced out, the raccoons began to dip the coins in and out of the bank and rub them with their paws rather than depositing them. They concluded that this was an instinct that was interfering with the raccoons' performance on the task. In nature, raccoons dip their food in water several times in order to wash it. This is an instinct which was seemingly triggered by the similar action sequence involved in retrieving and depositing coins into a bank. Instinctive behaviour is usually automatic and unplanned and is a natural reaction which is often preferred by the animal over learned and unnatural actions. This instinctual drift was successfully avoided when they instead taught the raccoons to place a basketball into a basket. Because of the size of the ball and the different body position involved in this action, the raccoons did not experience instinctual drift (they did not dip the balls in and out of the basket). In pigs A similar training regimen was applied to pigs, animals which are known to condition rapidly. These pigs were trained to insert wooden coins into a piggy bank. Over time, the pigs stopped depositing the coins and instead began to drop them in the dirt, push them down with their noses, drag them back out, and fling them into the air. This is a series of actions which are part of a behaviour known as rooting. It is an instinctual pattern of behaviour which pigs use to dig for food and to communicate. The pigs chose to engage in rooting rather than performing their trained action (depositing the coin), and this is therefore yet another clear example of instinctive drift interfering with operant conditioning. Nature vs. nurture The nature vs. nurture controversy is a major topic discussed in psychology and pertains to animal training as well. Both sides of the nature vs. nurture debate have valid points, and it remains one of the most debated controversies in psychology. A common question asked today by many experts in various fields is whether behaviour is due to life experiences or whether it is predisposed in DNA. Today, partial credit is given to both sides, and in many cases nature and nurture are given equal weight. With animal training it is often questioned whether the training and shaping are the cause of a behaviour exhibited by an animal (nurture), or whether the behaviour is actually innate to the species (nature). Instinctive drift centers on the role of nature in behaviour more than on learning being the sole cause of a behaviour.
Species are obviously capable of learning behaviours; this is not denied by instinctive drift. Instinctive drift holds that animals often revert to innate (nature) behaviours that can interfere with conditioned responses (nurture). Relationship with evolution Instinctive drift can be discussed in association with evolution. Evolution is commonly described as change occurring over a period of time. Instinctive drift holds that animals will behave in accordance with evolutionary contingencies, as opposed to the operant contingencies of their specific training. Instinct thus has evolutionary roots. The evolution of traits and behaviours occurs over time, and it is by means of evolution and natural selection that adaptive traits and behaviours are passed on to the next generation and maladaptive traits are weeded out. It is these adaptive traits, accumulated by species over time, that are exhibited in instinctive drift and that species revert to when instinct interferes with operant conditioning. Much knowledge on the topic of evolution and natural selection can be credited to Charles Darwin. Darwin developed and proposed the theory of evolution, and it was through this knowledge that other subjects, such as instinctive drift, could be better understood. References Behavior Ethology
Instinctive drift
[ "Biology" ]
1,572
[ "Behavioural sciences", "Ethology", "Behavior" ]
4,087,645
https://en.wikipedia.org/wiki/Vela%20Pulsar
The Vela Pulsar (PSR J0835-4510 or PSR B0833-45) is a radio-, optical-, X-ray- and gamma-ray-emitting pulsar associated with the Vela Supernova Remnant in the constellation of Vela. Its parent Type II supernova exploded approximately 11,000–12,300 years ago (and was about 800 light-years away). Characteristics Vela is the brightest pulsar (at radio frequencies) in the sky and spins 11 times per second (i.e. a period of 89.33 milliseconds—the shortest known at the time of its discovery), and the remnant from the supernova explosion is estimated to be expanding outwards. It has the third-brightest optical component of all known pulsars (V = 23.6 mag), which pulses twice for every single radio pulse. The Vela pulsar is the brightest persistent object in the high-energy gamma-ray sky. Pulsed emission up to 20 TeV has been detected from the Vela pulsar, and together with the Crab pulsar (detected at 1.5 TeV) they are the only two known pulsars with emission in this energy range. Glitches Glitches are sudden spin-ups in the rotation of pulsars. Vela is the best known of all the glitching pulsars, with glitches occurring on average every three years. Glitches are currently not predictable. On 12 December 2016, Vela was observed to glitch live for the first time with a radio telescope (the 26 m telescope at the Mount Pleasant Radio Observatory) large enough to see individual pulses. This observation showed that the pulsar nulled (i.e. did not pulse) for one pulse, with the pulse prior being very broad and the two following pulses featuring low linear polarization. It also appeared that the glitch process took under five seconds to occur, and the observation allowed physical properties of the pulsar to be estimated. On 22 July 2021, a new glitch occurred. As a result, the period of the pulsar decreased by about 1 part in a million. Statistically, nearly 1% of the long-term spin-down of the pulsar is reversed in spin-up glitches, a fraction that is also observed in other monitored pulsars. Careful estimation of the glitch activity and its uncertainty requires statistical tools beyond simple linear regression. Research campaigns The association of the Vela pulsar with the Vela Supernova Remnant, made by astronomers at the University of Sydney in 1968, was direct observational proof that supernovae form neutron stars. Studies conducted by Kellogg et al. with the Uhuru spacecraft in 1970–71 showed the Vela pulsar and Vela X to be separate but spatially related objects. The term Vela X was used to describe the entirety of the supernova remnant. Weiler and Panagia established in 1980 that Vela X was actually a pulsar wind nebula, contained within the fainter supernova remnant and driven by energy released by the pulsar. Nomenclature The pulsar is occasionally referred to as Vela X, but this phenomenon is separate from either the pulsar or the Vela X nebula. A radio survey of the Vela-Puppis region was made with the Mills Cross Telescope in 1956–57 and identified three strong radio sources: Vela X, Vela Y, and Vela Z. These sources are observationally close to the Puppis A supernova remnant, which is also a strong X-ray and radio source. Neither the pulsar nor either of the associated nebulae should be confused with Vela X-1, an observationally close but unrelated high-mass X-ray binary system. In music The emissions of Vela and the pulsar PSR B0329+54 were converted into audible sound by French composer Gérard Grisey and used in the piece Le noir de l'étoile (1989–90).
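As a rough, illustrative aside (not part of the original article), the following Python sketch shows the arithmetic behind two of the figures quoted above: converting the 89.33-millisecond rotation period into a spin frequency of roughly 11 rotations per second, and the effect of a glitch that changes the period by about one part in a million. The glitch fraction applied here is simply the value quoted above, used as an assumed, idealised example.

```python
# Illustrative sketch only: converts a pulsar rotation period to a spin
# frequency and applies a hypothetical glitch of the size quoted above.

def spin_frequency_hz(period_s: float) -> float:
    """Rotation frequency in hertz for a given rotation period in seconds."""
    return 1.0 / period_s

period = 89.33e-3                      # seconds, the period quoted for Vela
nu = spin_frequency_hz(period)         # roughly 11.2 rotations per second

# Hypothetical spin-up glitch: the period drops by about 1 part in a million,
# so the spin frequency rises by about the same fraction.
fractional_glitch = 1e-6
period_after = period * (1.0 - fractional_glitch)
nu_after = spin_frequency_hz(period_after)

print(f"Spin frequency before glitch: {nu:.3f} Hz")
print(f"Fractional frequency change:  {(nu_after - nu) / nu:.2e}")
```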
Gallery References External links Vela Pulsar at SIMBAD Vela Pulsar at NASA/IPAC Extragalactic Database Gum Nebula Pulsars Vela (constellation) Optical pulsars Articles containing video clips Velorum, HU
Vela Pulsar
[ "Astronomy" ]
885
[ "Vela (constellation)", "Constellations" ]
4,087,668
https://en.wikipedia.org/wiki/Oldest%20dated%20rocks
The oldest dated rocks formed on Earth, as an aggregate of minerals that have not been subsequently broken down by erosion or melted, are more than 4 billion years old, formed during the Hadean Eon of Earth's geological history, and mark the start of the Archean Eon, which is defined to start with the formation of the oldest intact rocks on Earth. Archean rocks are exposed on Earth's surface in very few places, such as in the geologic shields of Canada, Australia, and Africa. The ages of these felsic rocks are generally between 2.5 and 3.8 billion years. The approximate ages have a margin of error of millions of years. In 1999, the oldest known rock on Earth was dated to 4.031 ±0.003 billion years, and is part of the Acasta Gneiss of the Slave Craton in northwestern Canada. Researchers at McGill University found a rock with a very old model age for extraction from the mantle (3.8 to 4.28 billion years ago) in the Nuvvuagittuq greenstone belt on the coast of Hudson Bay, in northern Quebec; the true age of these samples is still under debate, and they may actually be closer to 3.8 billion years old. Older than these rocks are crystals of the mineral zircon, which can survive the disaggregation of their parent rock and be found and dated in younger rock formations. In January 2019, NASA scientists reported the discovery of the oldest known Earth rock, found on the Moon. Apollo 14 astronauts returned several rocks from the Moon and, later, scientists determined that a fragment from a rock nicknamed Big Bertha, which had been chosen by astronaut Alan Shepard, contained "a bit of Earth from about 4 billion years ago". The rock fragment contained quartz, feldspar, and zircon, all common on Earth, but highly uncommon on the Moon. Pre-solar grains in meteorites are older than the Solar System, with some grains extracted from the Murchison meteorite claimed to be 7 billion years old. Oldest rocks by category Oldest terrestrial material The oldest material of terrestrial origin that has been dated is a zircon mineral of 4.404 ±0.008 Ga enclosed in a metamorphosed sandstone conglomerate in the Jack Hills of the Narryer Gneiss Terrane of Western Australia. The 4.404 ±0.008 Ga zircon is a slight outlier, with the oldest consistently dated zircon falling closer to 4.35 Ga. This zircon is part of a population of zircons within the metamorphosed conglomerate, which is believed to have been deposited about 3.060 Ga, which is the age of the youngest detrital zircon in the rock. Recent developments in atom-probe tomography have led to a further constraint on the age of the oldest continental zircon, with the most recent age quoted as 4.374 ±0.006 Ga. The discovery of the oldest known Earth rock, found on the Moon, was reported in January 2019 by NASA scientists. Apollo 14 astronauts returned several rocks from the Moon and, later, scientists determined that a fragment from one of the rocks, nicknamed Big Bertha, contained "a bit of Earth from about 4 billion years ago". The rock fragment contained quartz, feldspar, and zircon, all common on Earth, but highly uncommon on the Moon. Earth's oldest rock formation The oldest outcropping rock formation is, depending on the latest research, either part of the Isua Greenstone Belt, Narryer Gneiss Terrane, Nuvvuagittuq Greenstone Belt, Napier Complex, or the Acasta Gneiss (on the Slave Craton). 
The difficulty in assigning the title to one particular block of gneiss is that the gneisses are all extremely deformed, and the oldest rock may be represented by only one streak of minerals in a mylonite, representing a layer of sediment or an old dike. This may be difficult to find or map; hence, the oldest dates yet resolved are as much generated by luck in sampling as by understanding the rocks themselves. It is thus premature to claim that any of these rocks, or indeed any other formation of Hadean gneisses, is the oldest formation or rock on Earth; doubtless, new analyses will continue to change conceptions of the structure and nature of these ancient continental fragments. Nevertheless, the oldest cratons on Earth include the Kaapvaal Craton, the Western Gneiss Terrane of the Yilgarn Craton (~2.9 – >3.2 Ga), the Pilbara Craton (~3.4 Ga), and portions of the Canadian Shield (~2.4 – >3.6 Ga). Parts of the Dharwar Craton in India are greater than 3.0 Ga. The oldest dated rocks of the Baltic Shield are 3.5 Ga old. Other old formations include the Saglek Gneiss Complex, dated at 3.8-3.9 Ga; the Anshan Area, dated at 3.8 Ga; the Itsaq (Isua) Gneiss Complex, dated at 3.7-3.8 Ga; and the Ancient Gneiss Complex, dated at 3.6 Ga. Oldest rock on Earth The Acasta Gneiss in the Canadian Shield in the Northwest Territories, Canada, is composed of the Archaean igneous and gneissic cores of ancient mountain chains that have been exposed in a glacial peneplain. Analyses of zircons from a felsic orthogneiss with presumed granitic protolith returned an age of 4.031 ±0.003 Ga. On September 25, 2008, researchers from McGill University, Carnegie Institution for Science and UQAM announced that a rock formation, the Nuvvuagittuq greenstone belt, exposed on the eastern shore of Hudson Bay in northern Quebec, had a Sm–Nd model age for extraction from the mantle of 4.28 billion years. However, it is argued that the actual age of formation of this rock, as opposed to the extraction of its magma from the mantle, is likely closer to 3.8 billion years, according to Simon Wilde of the Institute for Geoscience Research in Australia. Hadean zircons from the Jack Hills of Western Australia The zircons from the Western Australian Jack Hills returned an age of 4.404 billion years, interpreted to be the age of crystallization. These zircons also show another feature: their oxygen isotopic composition has been interpreted to indicate that more than 4.4 billion years ago there was already water on the surface of Earth. The importance and accuracy of these interpretations are currently the subject of scientific debate. It may be that the oxygen isotopes and other compositional features (the rare-earth elements) record more recent hydrothermal alteration of the zircons rather than the composition of the magma at the time of their original crystallization. In a paper published in the journal Earth and Planetary Science Letters, a team of scientists suggest that rocky continents and liquid water existed at least 4.3 billion years ago and were subjected to heavy weathering by an acrid climate.
Using an ion microprobe to analyze isotope ratios of the element lithium in zircons from the Jack Hills in Western Australia, and comparing these chemical fingerprints to lithium compositions in zircons from continental crust and primitive rocks similar to Earth's mantle, they found evidence that the young planet already had the beginnings of continents, relatively cool temperatures and liquid water by the time the Australian zircons formed. Non-terrestrial rocks One of the oldest Martian meteorites found on Earth, Allan Hills 84001 has been measured to have crystallized from molten rock 4.091 billion years ago. The Genesis Rock (Lunar sample 15415), obtained from the Moon by astronauts during Apollo 15 mission, has been dated at 4.08 billion years. During Apollo 16, older rocks, including Lunar sample 67215, dated at 4.46 billion years, were brought back. Some types of meteorite are older than the Earth, having formed in the early Solar System, before the planet formation process was completed. The meteorite Northwest Africa 11119 (NWA 11119) has been dated to 4.5648 ± 0.0003 billion years. Some solid inclusions within meteorites are older than the surrounding rock. Calcium-aluminium rich inclusions (CAIs) in meteorites are the oldest solids that formed in the Solar System, so are conventionally used to set its formation date as 4567.30 ± 0.16 Myr. Pre-solar grains are even older; they formed in the interstellar medium and pre-date the formation of the Solar System. Some pre-solar grains extracted from the Murchison meteorite have been claimed to be 7 billion years old. See also Nature timeline References Bibliography Zircons are Forever Bowring, S.A., and Williams, I.S., 1999. Priscoan (4.00–4.03 Ga) orthogneisses from northwestern Canada. Contributions to Mineralogy and Petrology, v. 134, 3–16. Stern, R.A., Bleeker, W., 1998. Age of the world's oldest rocks refined using Canada's SHRIMP. the Acasta gneiss complex, Northwest Territories, Canada. Geoscience Canada, v. 25, pp. 27–31 Yu A., Lee C-D and Halliday, A. N.Lutetium-Hafnium and Uranium-Lead Systematics of Early-Middle Archean Single Zircon Grains, Ninth Annual Goldschmidt Conference. 2 External links Very old Australian zircons with a story to tell On the Acasta Gneiss Abstract and full text of the results from O'Neil's research, published by Science Petrology Radiometric dating Oldest things
Oldest dated rocks
[ "Chemistry" ]
2,083
[ "Radiometric dating", "Radioactivity" ]
4,087,843
https://en.wikipedia.org/wiki/Hemoglobin%20electrophoresis
Hemoglobin electrophoresis is a blood test that can detect different types of hemoglobin. The test can detect hemoglobin S, the form associated with sickle cell disease, as well as other abnormal types of hemoglobin, such as hemoglobin C. It can also be used to investigate thalassemias, which are disorders caused by defective hemoglobin production. Procedure The test uses the principles of gel electrophoresis to separate out the various types of hemoglobin and is a type of native gel electrophoresis. After the sample has been treated to release the hemoglobin from the red cells, it is introduced into a porous gel (usually made of agarose or cellulose acetate) and subjected to an electrical field, most commonly in an alkaline medium. Different hemoglobins have different charges, and according to those charges, they move at different speeds in the gel and eventually form discrete bands (see electrophoretic migration patterns). A quality control sample containing hemoglobins A, F, S, and C is run along with the patient sample to aid in identifying the different bands. The relative amounts of each type of hemoglobin can be estimated by measuring the optical density of the bands, though this method is not reliable for hemoglobins that are present in low quantities. Because hemoglobins exhibit different migration patterns depending on the pH level, testing the same sample at both an acid and an alkaline pH can help to identify some abnormal hemoglobins that would otherwise be impossible to distinguish from others. Clinical significance Adult human blood normally contains three types of hemoglobin: hemoglobin A, which makes up approximately 95% of the total; hemoglobin A2, which accounts for less than 3.5%; and a minute amount of hemoglobin F. If abnormal hemoglobin variants such as hemoglobin S (which occurs in sickle cell disease), C or E are present, they will appear as unexpected bands on electrophoresis (provided they do not migrate to the same place as other hemoglobins). Hemoglobin electrophoresis can also be used to investigate thalassemias, which are caused by decreased production of subunits of the hemoglobin molecule. Hemoglobin A2 levels are typically elevated in beta-thalassemia minor and hemoglobin F may be slightly increased. In beta-thalassemia major, hemoglobin A is decreased (or in some cases absent) and hemoglobin F is markedly elevated; A2 levels are variable. In hemoglobin H disease, a form of alpha-thalassemia, an abnormal band of hemoglobin H can be detected, and sometimes a band of Hemoglobin Barts; but in the milder alpha-thalassemia trait, electrophoresis results are effectively normal. History Linus Pauling is credited with the invention of hemoglobin electrophoresis in 1949. Newer alternatives to conventional hemoglobin electrophoresis include isoelectric focusing, capillary zone electrophoresis, and high-performance liquid chromatography. References Blood tests
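As a hedged illustration of the quantification step described above (not part of the original article), the following Python sketch normalises hypothetical band optical densities to percentages of total hemoglobin and compares the hemoglobin A2 fraction with the approximate 3.5% figure mentioned in the text. The density readings and the decision rule are illustrative assumptions, not laboratory reference values.

```python
# Illustrative sketch only: estimate relative hemoglobin fractions from
# hypothetical band optical densities, as described in the procedure above.

def band_percentages(densities: dict[str, float]) -> dict[str, float]:
    """Normalise raw band optical densities to percentages of the total."""
    total = sum(densities.values())
    return {hb: 100.0 * d / total for hb, d in densities.items()}

# Hypothetical densitometry readings for one sample (arbitrary units).
raw = {"HbA": 9.55, "HbA2": 0.31, "HbF": 0.04}
percent = band_percentages(raw)

for hb, pct in percent.items():
    print(f"{hb}: {pct:.1f}%")

# An HbA2 fraction above roughly 3.5% would prompt consideration of
# beta-thalassemia minor, per the figures quoted above (illustrative only).
print("Raised HbA2?", percent["HbA2"] > 3.5)
```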
Hemoglobin electrophoresis
[ "Chemistry" ]
687
[ "Blood tests", "Chemical pathology" ]
4,087,965
https://en.wikipedia.org/wiki/Genetic%20analysis
Genetic analysis is the overall process of study and research in fields of science that involve genetics and molecular biology. A number of applications have been developed from this research, and these are also considered parts of the process. The base system of analysis revolves around general genetics. Basic studies include the identification of genes and inherited disorders. This research has been conducted for centuries, both on the basis of large-scale physical observation and on a more microscopic scale. Genetic analysis can be used generally to describe methods both used in and resulting from the sciences of genetics and molecular biology, or to refer to applications resulting from this research. Genetic analysis may be done to identify genetic/inherited disorders and also to make a differential diagnosis in certain somatic diseases such as cancer. Genetic analyses of cancer include detection of mutations, fusion genes, and DNA copy number changes. History Much of the practice that set the foundation of genetic analysis began in prehistoric times. Early humans found that they could practice selective breeding to improve crops and animals. They also identified inherited traits in humans that were eliminated over the years. The many forms of genetic analysis gradually evolved over time. Mendelian research Modern genetic analysis began in the mid-1800s with research conducted by Gregor Mendel. Mendel, who is known as the "father of modern genetics", was inspired to study variation in plants. Between 1856 and 1863, Mendel cultivated and tested some 29,000 pea plants (i.e., Pisum sativum). This study showed that one in four pea plants had purebred recessive alleles, two out of four were hybrid and one out of four were purebred dominant. His experiments led him to make two generalizations, the Law of Segregation and the Law of Independent Assortment, which later became known as Mendel's Laws of Inheritance. Lacking a basic understanding of heredity, Mendel observed various organisms and first utilized genetic analysis to find that traits were inherited from parents and that those traits could vary between offspring. Later, it was found that units within each cell are responsible for these traits. These units are called genes. Each gene is defined by a sequence of nucleotides that encodes the proteins responsible for genetic traits. Types Genetic analyses include molecular technologies such as PCR, RT-PCR, DNA sequencing, and DNA microarrays, and cytogenetic methods such as karyotyping and fluorescence in situ hybridisation. DNA sequencing DNA sequencing is essential to the applications of genetic analysis. This process is used to determine the order of nucleotide bases. Each molecule of DNA is made from adenine, guanine, cytosine and thymine, the order of which determines what function the genes will possess. Practical sequencing methods were first developed during the 1970s. DNA sequencing encompasses biochemical methods for determining the order of the nucleotide bases, adenine, guanine, cytosine, and thymine, in a DNA oligonucleotide. By generating a DNA sequence for a particular organism, one determines the patterns that make up genetic traits and, in some cases, behaviors. Sequencing methods have evolved from relatively laborious gel-based procedures to modern automated protocols based on dye labelling and detection in capillary electrophoresis that permit rapid large-scale sequencing of genomes and transcriptomes.
Knowledge of DNA sequences of genes and other parts of the genome of organisms has become indispensable for basic research studying biological processes, as well as in applied fields such as diagnostic or forensic research. The advent of DNA sequencing has significantly accelerated biological research and discovery. Cytogenetics Cytogenetics is a branch of genetics that is concerned with the study of the structure and function of the cell, especially the chromosomes. (The polymerase chain reaction, described below, is concerned instead with the amplification of DNA.) Because of the close analysis of chromosomes in cytogenetics, abnormalities are more readily seen and diagnosed. Karyotyping A karyotype is the number and appearance of chromosomes in the nucleus of a eukaryotic cell. The term is also used for the complete set of chromosomes in a species, or an individual organism. Karyotypes describe the number of chromosomes, and what they look like under a light microscope. Attention is paid to their length, the position of the centromeres, banding pattern, any differences between the sex chromosomes, and any other physical characteristics. Karyotyping uses a system of studying chromosomes to identify genetic abnormalities and evolutionary changes in the past. DNA microarrays A DNA microarray is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. When a gene is expressed in a cell, it generates messenger RNA (mRNA). Overexpressed genes generate more mRNA than underexpressed genes. This can be detected on the microarray. Since an array can contain tens of thousands of probes, a microarray experiment can accomplish many genetic tests in parallel. Therefore, arrays have dramatically accelerated many types of investigations. PCR The polymerase chain reaction (PCR) is a biochemical technique in molecular biology used to amplify a single or a few copies of a piece of DNA across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence. PCR is now a common and often indispensable technique used in medical and biological research labs for a variety of applications. These include DNA cloning for sequencing, DNA-based phylogeny, or functional analysis of genes; the diagnosis of hereditary diseases; the identification of genetic fingerprints (used in forensic sciences and paternity testing); and the detection and diagnosis of infectious diseases. Applications Cancer Numerous practical advancements have been made in the field of genetics and molecular biology through the processes of genetic analysis. One of the most prevalent advancements during the late 20th and early 21st centuries is a greater understanding of cancer's link to genetics. By identifying which genes in cancer cells are working abnormally, doctors can better diagnose and treat cancers. Research Research has been able to identify genetic mutations, fusion genes and changes in DNA copy numbers, and advances are made in the field every day. Many of these applications have led to new types of sciences that use the foundations of genetic analysis. Reverse genetics uses these methods to determine what is missing in a genetic code or what can be added to change that code. Genetic linkage studies analyze the spatial arrangements of genes and chromosomes.
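As a small, hedged illustration of the amplification arithmetic behind the PCR section above (not part of the original article), the following Python sketch computes the expected copy number after a given number of cycles. The per-cycle efficiency is an illustrative assumption, since real reactions rarely double perfectly on every cycle.

```python
# Illustrative sketch only: expected DNA copy number after n PCR cycles.

def pcr_copies(initial_copies: float, cycles: int, efficiency: float = 1.0) -> float:
    """Expected copy number after a given number of PCR cycles.

    efficiency = 1.0 corresponds to perfect doubling on every cycle.
    """
    return initial_copies * (1.0 + efficiency) ** cycles

for n in (10, 20, 30):
    print(f"{n} cycles: ~{pcr_copies(1, n):,.0f} copies from a single template")
# With perfect doubling: ~1,024 copies after 10 cycles, ~1,048,576 after 20,
# and ~1,073,741,824 after 30 - several orders of magnitude, as stated above.
```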
There have also been studies to determine the legal and social and moral effects of the increase of genetic analysis. References Analysis
Genetic analysis
[ "Biology" ]
1,392
[ "Genetics" ]
4,088,091
https://en.wikipedia.org/wiki/Main%20battery
A main battery is the primary weapon or group of weapons around which a warship is designed. As such, a main battery was historically a naval gun or group of guns used in volleys, as in the broadsides of cannon on a ship of the line. Later, this came to be turreted groups of similar large-caliber naval rifles. With the evolution of technology the term has come to encompass guided missiles and torpedoes as a warship's principal offensive weaponry, deployed both on surface ships and submarines. A main battery features common parts, munitions, and a fire control system shared across the weapons which it comprises. Description In the age of cannon at sea, the main battery was the principal group of weapons around which a ship was designed, usually its heaviest weapons. With the coming of naval rifles and subsequent revolving gun turrets, the main battery became the principal group of the heaviest guns, regardless of how many turrets they were placed in. As missiles displaced guns both above and below the water, their principal group became a vessel's main battery. Between the age of sail, with its cannon, and the dreadnought era of large iron warships, fighting ships' weapons deployments lacked standardization, with a variety of naval rifles of mixed breech and caliber scattered throughout vessels. Dreadnoughts resolved this in favor of a main battery of large guns, supported by largely defensive secondary batteries of smaller guns of standardized form, further augmented on large warships such as battleships and cruisers with still smaller tertiary batteries. As air superiority became all-important early in World War II, weight of broadside fell by the wayside as a vessel's principal fighting asset. Anti-aircraft batteries of scores of small-caliber rapid-fire weapons came to supplant big guns even on large warships assigned to protect vital fast carrier task forces. At sea, ships such as the small, fast destroyers assigned to convoy protection, essential to transporting the enormous stock of materials required for the land war, particularly in the European Theater, came to rely more on depth charge projectors. The terms main battery and secondary battery fell out of favor as ships were designed to carry surface-to-air missiles and anti-ship missiles with greater range and heavier warheads than their guns. Such ships often referred to their remaining guns as simply the gun battery and to the missiles as the missile battery. Ships with more than one type of missile might refer to the batteries by the name of the missile. One such ship, for example, had a Talos battery and a Tartar battery. Examples The German battleship carried a main battery of eight 15-inch (380 mm) guns, along with a secondary battery of twelve 5.9-inch (150 mm) guns for defense against destroyers and torpedo boats, and an anti-aircraft battery of various guns ranging in caliber from 4.1 inch (105 mm) down to 20 mm. Many later ships during World War II used dual-purpose guns to combine the secondary battery and the heavier guns of the anti-aircraft battery for increased flexibility and economy. The United States Navy battleship had a main battery of nine guns arranged in three turrets, two forward and one aft. The secondary battery consisted of 5-inch dual-purpose guns, allowing use against both ships and aircraft. A dedicated anti-aircraft battery was composed of light Bofors 40 mm guns and Oerlikon 20 mm cannon. References Notes Weapons platforms Shipbuilding Naval warfare Naval artillery
Main battery
[ "Engineering" ]
678
[ "Shipbuilding", "Marine engineering" ]
4,088,299
https://en.wikipedia.org/wiki/Autostereoscopy
Autostereoscopy is any method of displaying stereoscopic images (adding binocular perception of 3D depth) without the use of special headgear, glasses, something that affects vision, or anything for eyes on the part of the viewer. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye-tracking, and multiple views so that the display does not need to sense where the viewer's eyes are located. Examples of autostereoscopic displays technology include lenticular lens, parallax barrier, and integral imaging. Volumetric and holographic displays are also autostereoscopic, as they produce a different image to each eye, although some do make a distinction between those types of displays that create a vergence-accommodation conflict and those that do not. Autostereoscopic displays based on parallax barrier and lenticular methodologies have been known for about 100 years. Technology Many organizations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercial products, and using a range of different technologies. The method of creating autostereoscopic flat panel video displays using lenses was mainly developed in 1985 by Reinhard Boerner at the Heinrich Hertz Institute (HHI) in Berlin. Prototypes of single-viewer displays were already being presented in the 1990s, by Sega AM3 (Floating Image System) and the HHI. Nowadays, this technology has been developed further mainly by European and Japanese companies. One of the best-known 3D displays developed by HHI was the Free2C, a display with very high resolution and very good comfort achieved by an eye tracking system and a seamless mechanical adjustment of the lenses. Eye tracking has been used in a variety of systems in order to limit the number of displayed views to just two, or to enlarge the stereoscopic sweet spot. However, as this limits the display to a single viewer, it is not favored for consumer products. Currently, most flat-panel displays employ lenticular lenses or parallax barriers that redirect imagery to several viewing regions; however, this manipulation requires reduced image resolutions. When the viewer's head is in a certain position, a different image is seen with each eye, giving a convincing illusion of 3D. Such displays can have multiple viewing zones, thereby allowing multiple users to view the image at the same time, though they may also exhibit dead zones where only a non-stereoscopic or pseudoscopic image can be seen, if at all. Parallax barrier A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic image or multiscopic image without the need for the viewer to wear 3D glasses. The principle of the parallax barrier was independently invented by Auguste Berthier, who published first but produced no practical results, and by Frederic E. Ives, who made and exhibited the first known functional autostereoscopic image in 1901. About two years later, Ives began selling specimen images as novelties, the first known commercial use. In the early 2000s, Sharp developed the electronic flat-panel application of this old technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens. These displays are no longer available from Sharp but are still being manufactured and further developed from other companies. 
Similarly, Hitachi has released the first 3D mobile phone for the Japanese market under distribution by KDDI. In 2009, Fujifilm released the FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD measuring diagonal. The Nintendo 3DS video game console family uses a parallax barrier for 3D imagery. On a newer revision, the New Nintendo 3DS, this is combined with an eye tracking system to allow for wider viewing angles. Integral photography and lenticular arrays The principle of integral photography, which uses a two-dimensional (X–Y) array of many small lenses to capture a 3-D scene, was introduced by Gabriel Lippmann in 1908. Integral photography is capable of creating window-like autostereoscopic displays that reproduce objects and scenes life-size, with full parallax and perspective shift and even the depth cue of accommodation, but the full realization of this potential requires a very large number of very small high-quality optical systems and very high bandwidth. Only relatively crude photographic and video implementations have yet been produced. One-dimensional arrays of cylindrical lenses were patented by Walter Hess in 1912. By replacing the line and space pairs in a simple parallax barrier with tiny cylindrical lenses, Hess avoided the light loss that dimmed images viewed by transmitted light and that made prints on paper unacceptably dark. An additional benefit is that the position of the observer is less restricted, as the substitution of lenses is geometrically equivalent to narrowing the spaces in a line-and-space barrier. Philips solved a significant problem with electronic displays in the mid-1990s by slanting the cylindrical lenses with respect to the underlying pixel grid. Based on this idea, Philips produced its WOWvx line until 2009, running up to 2160p (a resolution of 3840×2160 pixels) with 46 viewing angles. Lenny Lipton's company, StereoGraphics, produced displays based on the same idea, citing a much earlier patent for the slanted lenticulars. Magnetic3d and Zero Creative have also been involved. Compressive light field displays With rapid advances in optical fabrication, digital processing power, and computational models for human perception, a new generation of display technology is emerging: compressive light field displays. These architectures explore the co-design of optical elements and compressive computation while taking particular characteristics of the human visual system into account. Compressive display designs include dual and multilayer devices that are driven by algorithms such as computed tomography and non-negative matrix factorization and non-negative tensor factorization. Autostereoscopic content creation and conversion Tools for the instant conversion of existing 3D movies to autostereoscopic were demonstrated by Dolby, Stereolabs and Viva3D. Other Dimension Technologies released a range of commercially available 2D/3D switchable LCDs in 2002 using a combination of parallax barriers and lenticular lenses. SeeReal Technologies has developed a holographic display based on eye tracking. CubicVue exhibited a color filter pattern autostereoscopic display at the Consumer Electronics Association's i-Stage competition in 2009. There are a variety of other autostereo systems as well, such as volumetric display, in which the reconstructed light field occupies a true volume of space, and integral imaging, which uses a fly's-eye lens array. 
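The parallax-barrier principle described earlier can be made concrete with a small geometric sketch. The snippet below is an added illustration, not part of the article: the interocular distance, pixel pitch, and viewing distance are assumed, typical values, and the viewing distance is measured from the barrier plane. It uses the standard similar-triangle construction for a two-view barrier.

```python
def parallax_barrier_design(pixel_pitch_mm, viewing_distance_mm, eye_separation_mm=65.0):
    """Two-view parallax barrier geometry from similar triangles (illustrative sketch).

    - Barrier-to-pixel gap: two adjacent pixel columns (left/right view) seen through
      one slit must land on the two eyes, so
          eye_separation / viewing_distance = pixel_pitch / gap.
    - Barrier pitch: slightly less than twice the pixel pitch, so every slit projects
      the repeating pixel pairs to the same eye positions,
          barrier_pitch = 2 * pixel_pitch * viewing_distance / (viewing_distance + gap).
    """
    gap = pixel_pitch_mm * viewing_distance_mm / eye_separation_mm
    barrier_pitch = 2.0 * pixel_pitch_mm * viewing_distance_mm / (viewing_distance_mm + gap)
    return gap, barrier_pitch

if __name__ == "__main__":
    # Assumed values: 0.1 mm pixel pitch and a 400 mm viewing distance (handheld display).
    g, b = parallax_barrier_design(0.1, 400.0)
    print(f"barrier-to-pixel gap: {g:.3f} mm, barrier pitch: {b:.5f} mm")
```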
The term automultiscopic display has been introduced as a shorter synonym for the lengthy "multi-view autostereoscopic 3D display", as well as for the earlier, more specific "parallax panoramagram". The latter term originally indicated a continuous sampling along a horizontal line of viewpoints, e.g., image capture using a very large lens or a moving camera and a shifting barrier screen, but it later came to include synthesis from a relatively large number of discrete views. Sunny Ocean Studios, located in Singapore, has been credited with developing an automultiscopic screen that can display autostereo 3D images from 64 different reference points. A fundamentally new approach to autostereoscopy called HR3D has been developed by researchers from MIT's Media Lab. It would consume half as much power, doubling the battery life if used with devices like the Nintendo 3DS, without compromising screen brightness or resolution; other advantages include a larger viewing angle and maintaining the 3D effect when the screen is rotated. Movement parallax: single view vs. multi-view systems Movement parallax refers to the fact that the view of a scene changes with movement of the head. Thus, different images of the scene are seen as the head is moved from left to right, and from up to down. Many autostereoscopic displays are single-view displays and are thus not capable of reproducing the sense of movement parallax, except for a single viewer in systems capable of eye tracking. Some autostereoscopic displays, however, are multi-view displays, and are thus capable of providing the perception of left–right movement parallax. Eight and sixteen views are typical for such displays. While it is theoretically possible to simulate the perception of up–down movement parallax, no current display systems are known to do so, and the up–down effect is widely seen as less important than left–right movement parallax. One consequence of not including parallax about both axes becomes more evident as objects increasingly distant from the plane of the display are presented: as the viewer moves closer to or farther away from the display, such objects will more obviously exhibit the effects of perspective shift about one axis but not the other, appearing variously stretched or squashed to a viewer not positioned at the optimal distance from the display. Vergence-accommodation conflict Autostereoscopic displays display stereoscopic content without matching focal depth, thereby exhibiting vergence-accommodation conflict. References External links Tridelity Viva3D VisuMotion Explanation of 3D Autostereoscopic Monitors Overview of different Autostereoscopic LCD displays Rendering for an Interactive 360° Light Field Display, a demonstration of Autostereoscopy using a spinning mirror, a holographic diffuser, and a high speed video projector demonstrated at SIGGRAPH 2007 Behind-the-scenes video about production for autostereoscopic displays 3D Without Glasses - The Future of 3D Technology? Diffraction Influence on the Field of View and Resolution of Three-Dimensional Integral Imaging Stereoscopy 3D imaging Display technology Photographic techniques
Autostereoscopy
[ "Engineering" ]
2,010
[ "Electronic engineering", "Display technology" ]
4,088,382
https://en.wikipedia.org/wiki/Square-free%20word
In combinatorics, a square-free word is a word (a sequence of symbols) that does not contain any squares. A square is a word of the form , where is not empty. Thus, a square-free word can also be defined as a word that avoids the pattern . Finite square-free words Binary alphabet Over a binary alphabet , the only square-free words are the empty word , and . Ternary alphabet Over a ternary alphabet , there are infinitely many square-free words. It is possible to count the number of ternary square-free words of length . This number is bounded by , where . The upper bound on can be found via Fekete's Lemma and approximation by automata. The lower bound can be found by finding a substitution that preserves square-freeness. Alphabet with more than three letters Since there are infinitely many square-free words over three-letter alphabets, this implies there are also infinitely many square-free words over an alphabet with more than three letters. The following table shows the exact growth rate of the -ary square-free words, rounded off to 7 digits after the decimal point, for in the range from 4 to 15: 2-dimensional words Consider a map from to , where is an alphabet and is called a 2-dimensional word. Let be the entry . A word is a line of if there exists such that , and for . Carpi proves that there exists a 2-dimensional word over a 16-letter alphabet such that every line of is square-free. A computer search shows that there are no 2-dimensional words over a 7-letter alphabet, such that every line of is square-free. Generating finite square-free words Shur proposes an algorithm called R2F (random-t(w)o-free) that can generate a square-free word of length over any alphabet with three or more letters. This algorithm is based on a modification of entropy compression: it randomly selects letters from a k-letter alphabet to generate a -ary square-free word. algorithm R2F is input: alphabet size , word length output: a -ary square-free word of length . choose in uniformly at random set to followed by all other letters of in increasing order set the number of iterations to 0 while do choose in uniformly at random append to the end of update shifting the first elements to the right and setting increment by if ends with a square of rank then delete the last letters of return Every (k+1)-ary square-free word can be the output of Algorithm R2F, because on each iteration it can append any letter except for the last letter of . The expected number of random k-ary letters used by Algorithm R2F to construct a -ary square-free word of length isNote that there exists an algorithm that can verify the square-freeness of a word of length in time. Apostolico and Preparata give an algorithm using suffix trees. Crochemore uses partitioning in his algorithm. Main and Lorentz provide an algorithm based on the divide-and-conquer method. A naive implementation may require time to verify the square-freeness of a word of length . Infinite square-free words There exist infinitely long square-free words in any alphabet with three or more letters, as proved by Axel Thue. Examples First difference of the Thue–Morse sequence One example of an infinite square-free word over an alphabet of size 3 is the word over the alphabet obtained by taking the first difference of the Thue–Morse sequence. That is, from the Thue–Morse sequence one forms a new sequence in which each term is the difference of two consecutive terms of the Thue–Morse sequence. The resulting square-free word is . 
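The construction just described, taking the first difference of the Thue–Morse sequence, is easy to reproduce and check computationally. The sketch below is an added illustration, not taken from the article: it builds a prefix of the difference word (over the alphabet {-1, 0, 1}) and verifies with a naive quadratic-factor test that the prefix contains no square.

```python
def thue_morse(n):
    """Thue-Morse sequence t(0..n-1): t(i) is the parity of the number of 1s in binary i."""
    return [bin(i).count("1") % 2 for i in range(n)]

def first_difference(seq):
    """Consecutive differences; over {0, 1} the values lie in {-1, 0, 1}."""
    return [b - a for a, b in zip(seq, seq[1:])]

def has_square(word):
    """Naive check for a factor of the form xx with x non-empty."""
    n = len(word)
    for start in range(n):
        for half in range(1, (n - start) // 2 + 1):
            if word[start:start + half] == word[start + half:start + 2 * half]:
                return True
    return False

if __name__ == "__main__":
    w = first_difference(thue_morse(200))
    # Thue's classical result: this ternary word is square-free, so the check prints False.
    print(w[:12], "square found?", has_square(w))
```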
Leech's morphism Another example found by John Leech is defined recursively over the alphabet . Let be any square-free word starting with the letter . Define the words recursively as follows: the word is obtained from by replacing each in with , each with , and each with . It is possible to prove that the sequence converges to the infinite square-free word Generating infinite square-free words Infinite square-free words can be generated by square-free morphism. A morphism is called square-free if the image of every square-free word is square-free. A morphism is called k–square-free if the image of every square-free word of length k is square-free. Crochemore proves that a uniform morphism is square-free if and only if it is 3-square-free. In other words, is square-free if and only if is square-free for all square-free of length 3. It is possible to find a square-free morphism by brute-force search. algorithm square-free_morphism is output: a square-free morphism with the lowest possible rank . set while True do set k_sf_words to the list of all square-free words of length over a ternary alphabet for each in k_sf_words do for each in k_sf_words do for each in k_sf_words do if then break from the current loop (advance to next ) if and then if is square-free for all square-free of length then return increment by Over a ternary alphabet, there are exactly 144 uniform square-free morphisms of rank 11 and no uniform square-free morphisms with a lower rank than 11. To obtain an infinite square-free words, start with any square-free word such as , and successively apply a square-free morphism to it. The resulting words preserve the property of square-freeness. For example, let be a square-free morphism, then as , is an infinite square-free word. Note that, if a morphism over a ternary alphabet is not uniform, then this morphism is square-free if and only if it is 5-square-free. Letter combinations in square-free words Avoid two-letter combinations Over a ternary alphabet, a square-free word of length more than 13 contains all the square-free two-letter combinations. This can be proved by constructing a square-free word without the two-letter combination . As a result, is the longest square-free word without the combination and its length is equal to 13. Note that over a more than three-letter alphabet there are square-free words of any length without an arbitrary two-letter combination. Avoid three-letter combinations Over a ternary alphabet, a square-free word of length more than 36 contains all the square-free three-letter combinations. However, there are square-free words of any length without the three-letter combination . Note that over a more than three-letter alphabet there are square-free words of any length without an arbitrary three-letter combination. Density of a letter The density of a letter in a finite word is defined as where is the number of occurrences of in and is the length of the word. The density of a letter in an infinite word is where is the prefix of the word of length . The minimal density of a letter in an infinite ternary square-free word is equal to . The maximum density of a letter in an infinite ternary square-free word is equal to . Notes References . Formal languages Combinatorics on words
Square-free word
[ "Mathematics" ]
1,535
[ "Formal languages", "Mathematical logic", "Combinatorics on words", "Combinatorics" ]
4,088,449
https://en.wikipedia.org/wiki/Refractometer
A refractometer is a laboratory or field device for the measurement of an index of refraction (refractometry). The index of refraction is calculated from the observed refraction angle using Snell's law. For mixtures, the index of refraction then allows the concentration to be determined using mixing rules such as the Gladstone–Dale relation and Lorentz–Lorenz equation. Refractometry Standard refractometers measure the extent of light refraction (as part of a refractive index) of transparent substances in either a liquid or a solid state; this is then used to identify a sample, analyze its purity, and determine the amount or concentration of dissolved substances within it. As light passes from the air into the liquid it slows down and appears to 'bend'; the severity of the 'bend' depends on the amount of substance dissolved in the liquid, for example the amount of sugar in a glass of water. Types There are four main types of refractometers: traditional handheld refractometers, digital handheld refractometers, laboratory or Abbe refractometers (named for their inventor, Ernst Abbe, and based on his original critical-angle design) and inline process refractometers. There is also the Rayleigh refractometer, typically used for measuring the refractive indices of gases. In laboratory medicine, a refractometer is used to measure the total plasma protein in a blood sample and urine specific gravity in a urine sample. In drug diagnostics, a refractometer is used to measure the specific gravity of human urine. In gemology, the gemstone refractometer is one of the fundamental pieces of equipment used in a gemological laboratory. Gemstones are transparent minerals and can therefore be examined using optical methods. Refractive index is a material constant, dependent on the chemical composition of a substance. The refractometer is used to help identify gem materials by measuring their refractive index, one of the principal properties used in determining the type of a gemstone. Due to the dependence of the refractive index on the wavelength of the light used (i.e. dispersion), the measurement is normally taken at the wavelength of the sodium D-line (NaD), ~589 nm. This is either filtered out from daylight or generated with a monochromatic light-emitting diode (LED). Certain stones such as rubies, sapphires, tourmalines and topaz are optically anisotropic. They demonstrate birefringence based on the polarisation plane of the light. The two different refractive indices are distinguished using a polarisation filter. Gemstone refractometers are available both as classic optical instruments and as electronic measurement devices with a digital display. In marine aquarium keeping, a refractometer is used to measure the salinity and specific gravity of the water. In the automobile industry, a refractometer is used to measure the coolant concentration. In the machine industry, a refractometer is used to measure the amount of coolant concentrate that has been added to the water-based coolant for the machining process. In homebrewing, a brewing refractometer is used to measure the specific gravity before fermentation to determine the amount of fermentable sugars which will potentially be converted to alcohol. Brix refractometers are often used by hobbyists for making preserves including jams, marmalades and honey. In beekeeping, a Brix refractometer is used to measure the amount of water in honey. 
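The relationship between the observed angle and the sample's refractive index mentioned above follows directly from Snell's law. The snippet below is an illustrative sketch only; the prism index and the example angle are assumed values, not figures from the article. At the critical angle of total reflection the refracted ray grazes the interface, so the sample index is the prism index times the sine of that angle, provided the sample is optically less dense than the prism.

```python
import math

def sample_index_from_critical_angle(critical_angle_deg, n_prism):
    """Refractive index of the sample from the observed critical angle of total reflection.

    Snell's law at the prism/sample interface: n_prism * sin(theta_c) = n_sample * sin(90 deg),
    hence n_sample = n_prism * sin(theta_c).  Valid only when n_sample < n_prism.
    """
    return n_prism * math.sin(math.radians(critical_angle_deg))

if __name__ == "__main__":
    # Assumed example: a measuring prism of index 1.75 and an observed critical angle of 49.5 degrees.
    n = sample_index_from_critical_angle(49.5, n_prism=1.75)
    print(f"estimated sample refractive index: {n:.4f}")  # roughly 1.33, i.e. water-like
```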
Automatic Automatic refractometers automatically measure the refractive index of a sample. The automatic measurement of the refractive index of the sample is based on the determination of the critical angle of total reflection. A light source, usually a long-life LED, is focused onto a prism surface via a lens system. An interference filter guarantees the specified wavelength. Due to focusing light to a spot at the prism surface, a wide range of different angles is covered. As shown in the figure "Schematic setup of an automatic refractometer" the measured sample is in direct contact with the measuring prism. Depending on its refractive index, the incoming light below the critical angle of total reflection is partly transmitted into the sample, whereas for higher angles of incidence the light is totally reflected. This dependence of the reflected light intensity from the incident angle is measured with a high-resolution sensor array. From the video signal taken with the CCD sensor the refractive index of the sample can be calculated. This method of detecting the angle of total reflection is independent on the sample properties. It is even possible to measure the refractive index of optically dense strongly absorbing samples or samples containing air bubbles or solid particles . Furthermore, only a few microliters are required and the sample can be recovered. This determination of the refraction angle is independent of vibrations and other environmental disturbances. Influence of wavelength The refractive index of a given sample varies with wavelength for all materials. This dispersion relation is nonlinear and is characteristic for every material. In the visible range, a decrease of the refractive index comes with increasing wavelength. In glass prisms very little absorption is observable. In the infrared wavelength range several absorption maxima and fluctuations in the refractive index appear. To guarantee a high quality measurement with an accuracy of up to 0.00002 in the refractive index the wavelength has to be determined correctly. Therefore, in modern refractometers the wavelength is tuned to a bandwidth of +/-0.2 nm to ensure correct results for samples with different dispersions. Influence of temperature Temperature has a very important influence on the refractive index measurement. Therefore, the temperature of the prism and the temperature of the sample have to be controlled with high precision. There are several subtly-different designs for controlling the temperature; but there are some key factors common to all, such as high-precision temperature sensors and Peltier devices to control the temperature of the sample and the prism. The temperature control of these devices should be designed so that the variation in sample temperature is small enough that it will not cause a detectable refractive-index change. External water baths were used in the past but are no longer needed. Extended possibilities of automatic refractometers Automatic refractometers are microprocessor-controlled electronic devices. This means they can have a high degree of automation and also be combined with other measuring devices Flow cells There are different types of sample cells available, ranging from a flow cell for a few microliters to sample cells with a filling funnel for fast sample exchange without cleaning the measuring prism in between. The sample cells can also be used for the measurement of poisonous and toxic samples with minimum exposure to the sample. 
Micro cells require only a few microliters volume, assure good recovery of expensive samples and prevent evaporation of volatile samples or solvents. They can also be used in automated systems for automatic filling of the sample onto the refractometer prism. For convenient filling of the sample through a funnel, flow cells with a filling funnel are available. These are used for fast sample exchange in quality control applications. Automatic sample feeding Once an automatic refractometer is equipped with a flow cell, the sample can either be filled by means of a syringe or by using a peristaltic pump. Modern refractometers have the option of a built-in peristaltic pump. This is controlled via the instrument's software menu. A peristaltic pump opens the way to monitor batch processes in the laboratory or perform multiple measurements on one sample without any user interaction. This eliminates human error and assures a high sample throughput. If an automated measurement of a large number of samples is required, modern automatic refractometers can be combined with an automatic sample changer. The sample changer is controlled by the refractometer and assures fully automated measurements of the samples placed in the vials of the sample changer for measurements. Multiparameter measurements Today's laboratories do not only want to measure the refractive index of samples, but several additional parameters like density or viscosity to perform efficient quality control. Due to the microprocessor control and a number of interfaces, automatic refractometers are able to communicate with computers or other measuring devices, e.g. density meters, pH meters or viscosity meters, to store refractive index data and density data (and other parameters) into one database. Software features Automatic refractometers do not only measure the refractive index, but offer a lot of additional software features, like Instrument settings and configuration via software menu Automatic data recording into a database User-configurable data output Export of measuring data Statistical functions Predefined methods for different kinds of applications Automatic checks and adjustments Check if sufficient amount of sample is on the prism Data recording only if the results are plausible Pharma documentation and validation Refractometers are often used in pharmaceutical applications for quality control of raw intermediate and final products. The manufacturers of pharmaceuticals have to follow several international regulations like FDA 21 CFR Part 11, GMP, Gamp 5, USP<1058>, which require a lot of documentation work. The manufacturers of automatic refractometers support these users providing instrument software fulfills the requirements of 21 CFR Part 11, with user levels, electronic signature and audit trail. Furthermore, Pharma Validation and Qualification Packages are available containing Qualification Plan (QP) Design Qualification (DQ) Risk Analysis Installation Qualification (IQ) Operational Qualification (OQ) Check List 21 CFR Part 11 / SOP Performance Qualification (PQ) Scales typically used Brix Oechsle scale Plato scale Baumé scale See also Ernst Abbe Refractive index Gemology Must weight Winemaking Harvest (wine) Gravity (beer) High-fructose corn syrup Cutting fluid German inventors and discoverers High refractive index polymers References Further reading External links Refractometer – Gemstone Buzz uses, procedure & limitations. 
Rayleigh Refractometer: Operational Principles Refractometers and refractometry explains how refractometers work. Measuring instruments Scales Beekeeping tools Food analysis
Refractometer
[ "Chemistry", "Technology", "Engineering" ]
2,132
[ "Refractometers", "Food analysis", "Food chemistry", "Measuring instruments" ]
4,088,497
https://en.wikipedia.org/wiki/Traditional%20handheld%20refractometer
A traditional handheld refractometer is an analog instrument for measuring a liquid's refractive index. It works on the critical angle principle by which lenses and prisms project a shadow line onto a small glass reticle inside the instrument, which is then viewed by the user through a magnifying eyepiece. In use, a sample is placed between a measuring prism and a small cover plate. Light traveling through the sample is either passed through to the reticle or totally internally reflected. The net effect is that a shadow line forms between the illuminated area and the dark area. It is where this shadow line crosses the scale that a reading is taken. Because refractive index is very temperature dependent, it is important to use a refractometer with automatic temperature compensation. Compensation is accomplished through the use of a small bi-metallic strip that moves a lens or prism in response to temperature changes. This design was invented by Emanuel Goldberg. There are many types of refractometers and the most common types are Abbe's refractometer, Pulfrich refractometer, Immersion refractometer. Refractometers
Traditional handheld refractometer
[ "Technology", "Engineering" ]
236
[ "Refractometers", "Measuring instruments" ]
4,088,703
https://en.wikipedia.org/wiki/Digital%20handheld%20refractometer
A digital handheld refractometer is an instrument for measuring the refractive index of materials. Principle of operation Most operate on the same general critical angle principle as a traditional handheld refractometer. The difference is that light from an LED light source is focused on the underside of a prism element. When a liquid sample is applied to the measuring surface of the prism, some of the light is transmitted through the solution and lost, while the remaining light is reflected onto a linear array of photodiodes creating a shadow line. The refractive index is directly related to the position of the shadow line on the photodiodes. Once the position of the shadow line has been automatically determined by the instrument, the internal software will correlate the position to refractive index, or to another unit of measure related to refractive index, and display a digital readout on an LCD or LED scale. The more elements there are in the photodiode array, the more precise the readings will be, and the easier it will be to obtain readings for emulsions and other difficult-to-read fluids that form fuzzy shadow lines. Digital handheld refractometers are generally more precise than traditional handheld refractometers, but less precise than most benchtop refractometers. They may also require a slightly larger amount of sample to read from since the sample is not spread thinly against the prism. The result may be displayed in one of various units of measurement: Brix, freezing point, boiling point, concentration, etc. Nearly all digital refractometers feature Automatic Temperature Compensation (for Brix at least). Most have a metal sample well around the prism, which makes it easier to clean sticky samples, and some instruments offer software to prevent extreme ambient light from interfering with readings (shading the prism area also prevents this). Some instruments are available with multiple scales, or the ability to input a special scale using known conversion information. There are some digital handheld refractometers that are IP65 (IP Code) water-resistant, and thus washable under a running faucet. See also Refractometer Types Traditional handheld refractometer Abbe refractometer Inline process refractometer References Refractometers
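The shadow-line readout described above can be illustrated with a small sketch. The code below is hypothetical: the threshold, the calibration points, and the array size are assumptions for the example and are not specifications of any real instrument. It locates the light/dark transition on a linear photodiode array and maps that position to a refractive index through two calibration points.

```python
def shadow_line_position(intensities, threshold=0.5):
    """Index of the first photodiode whose normalized intensity falls below the threshold."""
    for i, value in enumerate(intensities):
        if value < threshold:
            return i
    raise ValueError("no shadow line found on the array")

def index_from_position(position, cal_low=(100, 1.3330), cal_high=(900, 1.5000)):
    """Linear interpolation between two assumed (pixel, refractive index) calibration points."""
    (p0, n0), (p1, n1) = cal_low, cal_high
    return n0 + (n1 - n0) * (position - p0) / (p1 - p0)

if __name__ == "__main__":
    # Simulated array: bright up to pixel 400, dark afterwards.
    array = [1.0] * 400 + [0.1] * 624
    pos = shadow_line_position(array)
    print(f"shadow line at pixel {pos}, estimated refractive index {index_from_position(pos):.4f}")
```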
Digital handheld refractometer
[ "Technology", "Engineering" ]
466
[ "Refractometers", "Measuring instruments" ]
4,088,765
https://en.wikipedia.org/wiki/Comparison%20function
In applied mathematics, comparison functions are several classes of continuous functions, which are used in stability theory to characterize the stability properties of control systems, such as Lyapunov stability, uniform asymptotic stability, etc. Let $C(\mathbb{R}_+, \mathbb{R}_+)$ be the space of continuous functions acting from $\mathbb{R}_+$ to $\mathbb{R}_+$. The most important classes of comparison functions are: class $\mathcal{P}$ (continuous $\gamma$ with $\gamma(0)=0$ and $\gamma(r)>0$ for $r>0$), class $\mathcal{K}$ (class $\mathcal{P}$ functions that are strictly increasing), class $\mathcal{K}_\infty$ (class $\mathcal{K}$ functions that are unbounded), class $\mathcal{L}$ (continuous, strictly decreasing functions converging to zero), and class $\mathcal{KL}$ (functions $\beta(r,t)$ of class $\mathcal{K}$ in $r$ for each fixed $t$ and of class $\mathcal{L}$ in $t$ for each fixed $r$). Functions of class $\mathcal{P}$ are also called positive-definite functions. One of the most important properties of comparison functions is given by Sontag's $\mathcal{KL}$-Lemma, named after Eduardo Sontag. It says that for each $\beta \in \mathcal{KL}$ and any $\lambda > 0$ there exist $\alpha_1, \alpha_2 \in \mathcal{K}_\infty$ such that $\alpha_1(\beta(r,t)) \le \alpha_2(r)\, e^{-\lambda t}$ for all $r, t \ge 0$. Many further useful properties of comparison functions can be found in the references. Comparison functions are primarily used to obtain quantitative restatements of stability properties such as Lyapunov stability, uniform asymptotic stability, etc. These restatements are often more useful than the qualitative definitions of stability properties given in $\varepsilon$-$\delta$ language. As an example, consider an ordinary differential equation $\dot{x}(t) = f(x(t))$, where $f$ is locally Lipschitz. Then: it is globally stable if and only if there is a $\sigma \in \mathcal{K}_\infty$ so that for any initial condition $x_0$ and for any $t \ge 0$ it holds that $|x(t)| \le \sigma(|x_0|)$; it is globally asymptotically stable if and only if there is a $\beta \in \mathcal{KL}$ so that for any initial condition $x_0$ and for any $t \ge 0$ it holds that $|x(t)| \le \beta(|x_0|, t)$. The comparison-functions formalism is widely used in input-to-state stability theory. References Types of functions Stability theory
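As a concrete worked example (added here for illustration; it is not part of the original article), the characterization above can be checked for the simplest scalar system with an explicit class-$\mathcal{KL}$ bound:

```latex
% Illustrative worked example (added; not from the article).
% For the scalar system \dot{x} = -x the solutions are x(t) = x_0 e^{-t}, so
\[
  |x(t)| \le \beta(|x_0|, t) \quad \text{for all } t \ge 0,
  \qquad \beta(r,t) := r\, e^{-t}.
\]
% \beta is of class KL: for each fixed t, the map r \mapsto r e^{-t} is continuous,
% strictly increasing and zero at zero (class K); for each fixed r > 0, the map
% t \mapsto r e^{-t} is continuous, strictly decreasing and tends to 0 (class L).
% By the characterization above, the system is therefore globally asymptotically stable.
```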
Comparison function
[ "Mathematics" ]
270
[ "Functions and mappings", "Mathematical objects", "Stability theory", "Mathematical relations", "Types of functions", "Dynamical systems" ]
4,088,767
https://en.wikipedia.org/wiki/Abbe%20refractometer
An Abbe refractometer is a bench-top device for the high-precision measurement of an index of refraction. Details Ernst Abbe (1840–1905), working for Carl Zeiss AG in Jena, Germany in the late 19th century, was the first to develop a laboratory refractometer. These first instruments had built-in thermometers and required circulating water to control instrument and fluid temperatures. They also had adjustments for eliminating the effects of dispersion and analog scales from which the readings were taken. In the Abbe refractometer the liquid sample is sandwiched into a thin layer between an illuminating prism and a refracting prism. The refracting prism is made of a glass with a high refractive index (e.g., 1.75) and the refractometer is designed to be used with samples having a refractive index smaller than that of the refracting prism. A light source is projected through the illuminating prism, the bottom surface of which is ground (i.e., roughened like a ground-glass joint), so each point on this surface can be thought of as generating light rays traveling in all directions. A detector placed on the back side of the refracting prism would show a light and a dark region. Over a century after Abbe's work, the usefulness and precision of refractometers has improved, although their principle of operation has changed very little. They are also possibly the easiest device to use for measuring the refractive index of solid samples, such as glass, plastics, and polymer films. Some modern Abbe refractometers use a digital display for measurement, eliminating the need for discerning between small graduations. However, the user still has to adjust the view to get a final reading. The first truly digital laboratory refractometers began appearing in the late 1970s and early 1980s, and no longer depended on the user's eye to determine the reading. They still required the use of circulating water baths to control instrument and fluid temperature. They did, however, have the ability to electronically compensate for the temperature differences of many fluids where there is a known concentration-to-refractive-index conversion. Most digital laboratory refractometers, while much more accurate and versatile than their analog Abbe counterparts, are incapable of readings on solid samples. In the late 1990s, Abbe refractometers became available with the capability of measurements at wavelengths other than the standard 589 nanometers. These instruments use special filters to reach the desired wavelength, and can extend measurements well into the near infrared (though a special viewer is required to see the infrared rays). Multi-wavelength Abbe refractometers can be used to easily determine a sample's Abbe number. The most advanced instruments of today use solid-state Peltier effect devices to heat and cool the instrument and the sample, eliminating the need for an external water bath. The software on most of current instruments offers features such as programmable user-defined scales and a history function that recalls the last several measurements. Several manufacturers offer easily usable controls, with the ability to use from and export readings to a linked computer. See also Refractive index Traditional handheld refractometer Digital handheld refractometer Inline process refractometer Further reading External links refractometer after Ernst Abbe by Carl Zeiss made in 1904 improved Abbe refractometer by Carl Zeiss made in 1928 Abbe refractometer theory and operating instructions Refractometers German inventions
Abbe refractometer
[ "Technology", "Engineering" ]
733
[ "Refractometers", "Measuring instruments" ]
4,088,810
https://en.wikipedia.org/wiki/Inline%20process%20refractometer
Inline process refractometers are a type of refractometer designed for the continuous measurement of a fluid flowing through a pipe or inside a tank. First patented by Carl A. Vossberg Jr. US2807976A - Refractometer US2549402A, these refractometers typically consist of a sensor, placed inline with the fluid flow, coupled to a control box. The control box usually provides a digital readout as well as 4-20 mA analog outputs and relay outputs for controlling pumps and valves. Instead of placing the sensor inline of the process, it can be placed in a bypass, attached by a thin tube. This measurement has been an important element in the process control of the chemical and refining, pulp and paper, food, sugar and pharmaceutical industries for more than a century. For instance, the in-line concentration measurement can be used as a real-time predictive tool for the final concentration. A quick and accurate response is needed to optimize production. Cost reduction is possible by reducing the variation of mean average of the product concentration. The cost saving is related to the value of the component being measured. A digital inline process refractometer sensor measures the refractive index and the temperature of the processing medium. The measurement is based on the refraction of light in the process medium, i.e. the critical angle of refraction using a light source. The measured refractive index and temperature of the process medium are sent to the control box. It calculates the concentration of the process liquid based on the refractive index and temperature, taking pre-defined process conditions into account. The output is typically a 4 to 20mA DC output or, increasingly, an Ethernet signal proportional to process solution concentration, liquid density, Brix or other scale that has been selected for the instrument. The inline process refractometer consists of three primary components: the inline sensing head, the electronics console, and the process adapter. The inline sensing head is mounted on the adapter and contains a prism that scans the process solution through a transparent window and outputs a value relative to the refractive index of the solution. The electronics console houses all control circuitry, microprocessors, digital displays and calibration points and conditions the sensing head signal. The process adapter is the mechanical connection between the inline sensing head and the process piping, and is designed specifically to accommodate the pipe size and application. Inline process refractometers are used primarily in the pulp and paper industry; the food and beverage industry, the pharmaceutical industry, and the chemical industry as a means to assure consistency and quality. In the pulp and paper industry, inline process refractometers are used in the energy recovery from black liquor recovery boilers by accurately measuring solids in the black liquor. In the food and beverage industry, inline process refractometers are used to measure dissolved solids, most often as sugar content, measured in degrees Brix. In the pharmaceutical industry they are used to monitor and control concentration levels during supersaturation, a critical process in crystallization. In the chemical industry they are used in Hydrochloric Acid applications, Sulphuric Acid applications, and boiler cleaning chemicals processes. References Refractometers
Inline process refractometer
[ "Technology", "Engineering" ]
669
[ "Refractometers", "Measuring instruments" ]
4,088,899
https://en.wikipedia.org/wiki/Epitoky
Epitoky is a process that occurs in many species of polychaete marine worms wherein a sexually immature worm (the atoke) is modified or transformed into a sexually mature worm (the epitoke). Epitokes are pelagic morphs capable of sexual reproduction. Unlike the immature form, which is typically benthic (lives on the bottom), epitokes are specialized for swimming as well as reproducing. The primary benefit to epitoky is increased chances of finding other members of the same species for reproduction. There are two methods in which epitoky can occur: schizogamy and epigamy. Schizogamy Many species go through schizogamy, where the atoke uses asexual reproduction to produce buds from its posterior end. Each bud develops into an epitoke and, once fully formed, will then break off from the atoke and become free-swimming. Many genetically identical epitokes are formed in this way, thus allowing a higher chance of finding a mate of the same species and subsequent passing of genes to the next generation. Atokes may then live through another season to form more epitokes. Epigamy Epigamy is another common way to form epitokes. For species that use this method, the atoke undergoes physiological and morphological modifications as it transforms into the epitoke. Typically, male worms undergo a more pronounced transformation from atoke to epitoke. Modifications may include an increase in size of parapodia and the development of paddle-like chaetae for enhanced swimming ability, atrophy of the gut, filling of the body cavity with gametes (eggs or sperm), the development of large eyes, and the musculature may even change to perform swimming movements instead of feeding movements. The majority of species that undergo epigamy are unable to revert to the atoke form and die after reproducing. Male and female epitokes are produced and swim to the water's surface only at certain times of the year and are often synchronized with moon cycles in a behavior called swarming. Swarming brings individuals of the same species together so that there is an increased rate of fertilization. Some polychaete species have been found to use bioluminescence, presumably to compact and maintain swarms. Both schizogamous and epigamous epitokes are non-feeding individuals that die once gametes have been released into the water. In the past, epitokes were thought to be a separate group of polychaete marine worms, because epitokes may look very different than atokes. For instance, the atokes of Platynereis dumerilii are yellowish-brown, while the female epitokes are yellow because of the eggs they contain, and the male epitokes are white in the front part due to sperm and red in the hind part due to blood vessels (see pictures). References Reproduction in animals Asexual reproduction
Epitoky
[ "Biology" ]
617
[ "Reproduction in animals", "Behavior", "Asexual reproduction", "Reproduction" ]
4,089,175
https://en.wikipedia.org/wiki/Logical%20constant
In logic, a logical constant or constant symbol of a language $\mathcal{L}$ is a symbol that has the same semantic value under every interpretation of $\mathcal{L}$. Two important types of logical constants are logical connectives and quantifiers. The equality predicate (usually written '=') is also treated as a logical constant in many systems of logic. One of the fundamental questions in the philosophy of logic is "What is a logical constant?"; that is, what special feature of certain constants makes them logical in nature? Some symbols that are commonly treated as logical constants are: T ("true"), F ("false"), ¬ ("not"), ∧ ("and"), ∨ ("or"), → ("if...then"), ↔ ("if and only if"), ∀ ("for all"), and ∃ ("there exists"). Many of these logical constants are sometimes denoted by alternate symbols (for instance, the use of the symbol "&" rather than "∧" to denote the logical and). Defining logical constants is a major part of the work of Gottlob Frege and Bertrand Russell. Russell returned to the subject of logical constants in the preface to the second edition (1937) of The Principles of Mathematics, noting that logic becomes linguistic: "If we are to say anything definite about them, [they] must be treated as part of the language, not as part of what the language speaks about." The text of this book uses relations R, their converses and complements as primitive notions, also taken as logical constants in the form aRb. See also Logical connective Logical value Non-logical symbol References External links Stanford Encyclopedia of Philosophy entry on logical constants Concepts in logic Logic symbols Logical truth Philosophical logic Syntax (logic) Constants
Logical constant
[ "Mathematics" ]
307
[ "Mathematical logic", "Symbols", "Mathematical symbols", "Logic symbols", "Logical truth" ]
4,089,752
https://en.wikipedia.org/wiki/Road%20Emergency%20Services%20Communications%20Unit
The Road Emergency Services Communications Unit (RESCU) is a traffic management system used by the City of Toronto on city managed highways. The system is used to monitor traffic on: Gardiner Expressway from the Queen Elizabeth Way to the Don Valley Parkway - 28 cameras Don Valley Parkway from the Gardiner Expressway to Ontario Highway 401 - 17 cameras Lake Shore Boulevard from near Parkside Drive to Leslie Street - 17 cameras Allen Road from Finch Avenue West to Eglinton Avenue - 9 cameras Various intersections through the GTA are also monitored including Woodbine Avenue and Steeles Avenue / Ontario Highway 404, Black Creek Drive and Lawrence Avenue West, Don Mills Road and Overlea Boulevard, and Warden Avenue and Ellesmere Road. The system consists of: 70+ traffic cameras 635 vehicle sensors 5 overhead changeable message signs - Gardiner Expressway and Don Valley Parkway 4 portable signs 121 detector stations (650 loops) Remote Traffic Information System (RTIS) - website Queue End Warning System - reminder for drivers that Allen Road ends at Eglinton Avenue West by using flashing light/sign and display screen on southbound Allen Road at Glengrove Avenue and south of Viewmount Avenue; advisory overhead sign at Flemingdon Road. See also RESCU is linked to the Ontario Ministry of Transportation's Freeway Traffic Management System or COMPASS. References RESCU Transport in the Greater Toronto Area Transport in Ontario Intelligent transportation systems
Road Emergency Services Communications Unit
[ "Technology" ]
273
[ "Information systems", "Warning systems", "Intelligent transportation systems", "Transport systems" ]
4,090,318
https://en.wikipedia.org/wiki/Vela%20Supernova%20Remnant
The Vela supernova remnant is a supernova remnant in the southern constellation Vela. Its source Type II supernova exploded approximately 11,000 years ago (and was about 900 light-years away). The association of the Vela supernova remnant with the Vela pulsar, made by astronomers at the University of Sydney in 1968, was direct observational evidence that supernovae form neutron stars. The Vela supernova remnant includes NGC 2736. Viewed from Earth, the Vela supernova remnant overlaps the Puppis A supernova remnant, which is four times more distant. Both the Puppis and Vela remnants are among the largest and brightest features in the X-ray sky. The Vela supernova remnant is one of the closest known to us. The Geminga pulsar is closer (and also resulted from a supernova), and in 1998 another near-Earth supernova remnant was discovered, RX J0852.0-4622, which from our point of view appears to be contained in the southeastern part of the Vela remnant. This remnant was not seen earlier because when viewed in most wavelengths, it is lost in the Vela remnant. See also CG 4 List of supernova remnants List of supernovae References External links Gum Nebula (annotated) Bill Blair's Vela Supernova Remnant page Gum Nebula Supernova remnants Vela (constellation)
Vela Supernova Remnant
[ "Astronomy" ]
290
[ "Vela (constellation)", "Constellations" ]
4,091,297
https://en.wikipedia.org/wiki/Potassium%20humate
Potassium humate is the potassium salt of humic acid. It is manufactured commercially by alkaline extraction of brown coal (lignite) leonardite and is used mainly as a soil conditioner. Extraction The extraction is performed in water with the addition of potassium hydroxide (KOH), sequestering agents, and hydrotropic surfactants. Heat is used to increase the solubility of humic acids and hence more potassium humate can be extracted. The resulting liquid is dried to produce the amorphous crystalline like product which can then be added as a granule to fertilizer. The potassium humate granules by way of chemical extraction lose their hydrophobic properties and are now soluble. Quality Depending on the source material, product quality varies. High quality oxidized lignite (brown coal), usually referred to as leonardite, is the best source material for extraction of large quantities of potassium humate. The less oxidized the coal, the less potassium humate extracted. Sources low in ash produce the best quality. Less oxidized brown coal contains a higher proportion of the insoluble humin fraction and along with peat which is lower in humic acid content and usually high in ash content requires separation by filtration or centrifugation to remove ash and humin. Peat is also high in non-humified organic matter that needs to be reduced to produce a high quality product. The benefit of peat is that it is usually 2-3 times higher in fulvic acid content, which are the low molecular weight fractions of humic acid that are high in oxygen containing functional groups and soluble at a low pH of <1. Fulvic acids have a higher cation exchange capacity and therefore have a higher chemical interaction with fertilizers and are able to form soluble chelates of trace metals. Uses Potassium humate is used in agriculture as a fertilizer additive to increase the efficiency of fertilizers especially nitrogen- and phosphorus-based fertilizer inputs. Other salts of humic acid are manufactured, mainly sodium humate, which is used in animal health supplements. It also can be used in aquaculture. References Soil chemistry Potassium compounds
Potassium humate
[ "Chemistry" ]
452
[ "Soil chemistry" ]
4,091,828
https://en.wikipedia.org/wiki/Dipsogen
A dipsogen is an agent that causes thirst. (From Greek: δίψα (dipsa), "thirst" and the suffix -gen, "to create".) They include Angiotensin II. References External links 'Fluid Physiology' by Kerry Brandis (from http://www.anaesthesiamcq.com) Physiology
Dipsogen
[ "Biology" ]
76
[ "Physiology" ]
4,091,942
https://en.wikipedia.org/wiki/Potassium%202-ethylhexanoate
Potassium 2-ethylhexanoate, also known as potassium iso-octanoate, is a chemical used to convert the tert-butylammonium salt of clavulanic acid into potassium clavulanate (clavulanate potassium). It is also used as a corrosion inhibitor in automotive antifreeze and as a catalyst for polyurethane systems. References Potassium compounds Ethylhexanoates
Potassium 2-ethylhexanoate
[ "Chemistry" ]
90
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
4,092,101
https://en.wikipedia.org/wiki/Christian%20CND
Christian CND (CCND) is a 'Specialist Section' of CND, the Campaign for Nuclear Disarmament and has existed since 1960. CCND is made up of individual Christians of various denominations who oppose nuclear weapons and who campaign for peace. The organisation has an elected executive of ten members, has an office in London and publishes a journal called 'Ploughshare' four times a year. Its symbol combines the original CND sign (commonly referred to as the 'Peace' sign) with images of a cross and a dove holding an olive branch. Christian CND is a member of the Network of Christian Peace Organisations. NOTE. College CND also used the abbreviation CCND. History Founded in 1960, chaired by Sidney Hinkes from 1964. In 1981 it was expanded and reorganised on a more permanent basis with its own membership, newsletter and administration, and considerable autonomy in forming its own policies. It organised many conferences at local and national level as well as acts of protest, liturgies and services at bases and government sites. Its members were also involved in letter writing, lobbying and educating for peace and disarmament. There were also several Christian CND local groups around the UK. References External links Christian CND website Catalogue of the CCND archives, held at the Modern Records Centre, University of Warwick Campaign for Nuclear Disarmament Anti-nuclear organizations Christian organisations based in the United Kingdom Organizations established in 1960 1960 in politics Christian political organizations 1960 in British politics 1960 in Christianity
Christian CND
[ "Engineering" ]
306
[ "Nuclear organizations", "Anti-nuclear organizations" ]
4,092,580
https://en.wikipedia.org/wiki/Software%20Communications%20Architecture
The Software Communications Architecture (SCA) is an open architecture framework that defines a standard way for radios to instantiate, configure, and manage waveform applications running on their platform. The SCA separates waveform software from the underlying hardware platform, facilitating waveform software portability and re-use to avoid costs of redeveloping waveforms. The latest version is SCA 4.1. Overview The SCA is published by the Joint Tactical Networking Center (JTNC). This architecture was developed to assist in the development of Software Defined Radio (SDR) communication systems, capturing the benefits of recent technology advances which are expected to greatly enhance interoperability of communication systems and reduce development and deployment costs. The architecture is also applicable to other embedded, distributed-computing applications such as Communications Terminals or Electronic Warfare (EW). The SCA has been structured to: Provide for portability of applications software between different SCA implementations, Leverage commercial standards to reduce development cost, Reduce software development time through the ability to reuse design modules, and Build on evolving commercial frameworks and architectures. The SCA is deliberately designed to meet commercial application requirements as well as those of military applications. Since the SCA is intended to become a self-sustaining standard, a wide cross-section of industry has been invited to participate in the development and validation of the SCA. The SCA is not a system specification but an implementation-independent set of rules that constrain the design of systems to achieve the objectives listed above. Core Framework The Core Framework (CF) defines the essential "core" set of open software interfaces and profiles that provide for the deployment, management, interconnection, and intercommunication of software application components in an embedded, distributed-computing communication system. In this sense, all interfaces defined in the SCA are part of the CF. Standard Waveform Application Programming Interfaces (APIs) The Standard Waveform APIs define the key software interfaces that allow the waveform application and radio platform to interact. The SCA uses these APIs to separate waveform software from the underlying hardware platform, facilitating waveform software portability and re-use and avoiding the costs of redeveloping waveforms. Development Tools VIStology SCA-Pass - SCA 4.1 Waveform Conformance Verification IDE Reservoir Labs' R-Check - SCA Compliance Testing NordiaSoft eCo Suite - SCA 4.1 Integrated Development Environment and Core Framework ADLINK Spectra CX4 - SCA 4.1 Model Driven Tools Top News Software Communications Architecture v4.1 entered into the Department of Defense (DoD) Information Technology (IT) Standards Registry (DISR) as a mandated standard External links Software Communications Architecture Homepage Introduction to SCA Part I (Video) Introduction to SCA Part II (Video) SCA 4.1 Release Webinar SCA 2.2.2 Migration to SCA 4.1 (Video) Cobham Development Platform SCA and FACE Alignment SCA 4.1 Required in Major U.S. Navy Acquisition Navy Requires Open Architecture Wireless Innovation Forum - International Consortium Adoption by Germany Adoption by India Increasing Flexibility in Wireless SDR Systems R&S SDTR Link protocols Military radio systems Mobile telecommunications standards Radio technology
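To illustrate the separation the Core Framework aims for, the following is a purely hypothetical Python sketch; the class and method names are invented for illustration and are not the CORBA-based interfaces actually defined by the SCA. It only shows the general idea of a framework instantiating, configuring, and managing a waveform application without the waveform touching the hardware directly.

from abc import ABC, abstractmethod

class WaveformComponent(ABC):
    # Hypothetical stand-in for a portable waveform application component.
    @abstractmethod
    def configure(self, properties: dict) -> None: ...

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

class FMWaveform(WaveformComponent):
    # The waveform only sees abstract configuration properties, never the
    # underlying radio hardware, which is what makes it portable.
    def configure(self, properties: dict) -> None:
        self.frequency_hz = properties["frequency_hz"]

    def start(self) -> None:
        print(f"transmitting at {self.frequency_hz} Hz")

    def stop(self) -> None:
        print("stopped")

class CoreFramework:
    # Hypothetical framework that deploys and manages waveform components.
    def __init__(self):
        self.deployed = []

    def instantiate(self, component_cls, properties: dict) -> WaveformComponent:
        component = component_cls()
        component.configure(properties)
        self.deployed.append(component)
        return component

framework = CoreFramework()
waveform = framework.instantiate(FMWaveform, {"frequency_hz": 100_000_000})
waveform.start()
waveform.stop()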
Software Communications Architecture
[ "Technology", "Engineering" ]
662
[ "Information and communications technology", "Telecommunications engineering", "Radio technology", "Mobile telecommunications standards", "Mobile telecommunications" ]
4,092,681
https://en.wikipedia.org/wiki/Air-to-cloth%20ratio
The air-to-cloth ratio is the volumetric flow rate of air (m3/minute; in SI units, m3/second) flowing through a dust collector's inlet duct divided by the total cloth area (m2) of the filters. The result is expressed in units of velocity. The air-to-cloth ratio is typically between 1.5 and 3.5 metres per minute, depending mainly on the dust loading concentration. External links Details on how to calculate air-to-cloth ratio Filters Engineering ratios
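As a rough worked example of the calculation described above (a minimal sketch; the flow rate, bag count, and bag dimensions are invented illustrative values, not data for any real installation):

import math

# Illustrative values only: a baghouse handling 300 m3/min of air through
# 100 cylindrical filter bags, each 0.15 m in diameter and 3 m long.
air_flow_m3_per_min = 300.0
bag_count = 100
bag_diameter_m = 0.15
bag_length_m = 3.0

# Cloth area of one cylindrical bag (lateral surface only), then the total.
bag_area_m2 = math.pi * bag_diameter_m * bag_length_m
total_cloth_area_m2 = bag_count * bag_area_m2

air_to_cloth_ratio = air_flow_m3_per_min / total_cloth_area_m2
print(f"total cloth area: {total_cloth_area_m2:.1f} m2")
print(f"air-to-cloth ratio: {air_to_cloth_ratio:.2f} m/min")
# With these numbers the ratio is about 2.1 m/min, which falls inside the
# typical 1.5 to 3.5 m/min range quoted above.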
Air-to-cloth ratio
[ "Chemistry", "Mathematics", "Engineering" ]
104
[ "Metrics", "Engineering ratios", "Chemical equipment", "Quantity", "Filters", "Filtration", "Fluid dynamics stubs", "Fluid dynamics" ]
4,092,733
https://en.wikipedia.org/wiki/Dust%20collector
A dust collector is a system used to enhance the quality of air released from industrial and commercial processes by collecting dust and other impurities from air or gas. Designed to handle high-volume dust loads, a dust collector system consists of a blower, dust filter, a filter-cleaning system, and a dust receptacle or dust removal system. It is distinguished from air purifiers, which use disposable filters to remove dust. History The father of the dust collector was from Lübeck. In 1921, he patented three filter designs that he had pioneered to remove dust from air. Uses Dust collectors are used in many processes to either recover valuable granular solid or powder from process streams, or to remove granular solid pollutants from exhaust gases prior to venting to the atmosphere. Dust collection is an online process for collecting any process-generated dust from the source point on a continuous basis. Dust collectors may be of single unit construction, or a collection of devices used to separate particulate matter from the process air. They are often used as an air pollution control device to maintain or improve air quality. Mist collectors remove particulate matter in the form of fine liquid droplets from the air. They are often used for the collection of metal working fluids, and coolant or oil mists. Mist collectors are often used to improve or maintain the quality of air in the workplace environment. Fume and smoke collectors are used to remove sub-micrometer-size particulates from the air. They effectively reduce or eliminate particulate matter and gas streams from many industrial processes such as welding, rubber and plastic processing, high speed machining with coolants, tempering, and quenching. Process Dust collection systems work on the basic formula of capture, convey and collect. First, the dust must be captured or extracted. This is accomplished with devices such as capture hoods to catch dust at its source of origin. Many times, the machine producing the dust will have a port to which a duct can be directly attached. Second, the dust must be conveyed. This is done via a ducting system, properly sized and manifolded to maintain a consistent minimum air velocity required to keep the dust in suspension for conveyance to the collection device. A duct of the wrong size can lead to material settling in the duct system and clogging it. Finally, the dust is collected. This is done via a variety of means, depending on the application and the dust being handled. It can be as simple as a basic pass-through filter, a cyclonic separator, or an impingement baffle. It can also be as complex as an electrostatic precipitator, a multistage baghouse, or a chemically treated wet scrubber or stripping tower. Smaller dust collection systems use a single-stage vacuum unit to create suction and perform air filtration, where the waste material is drawn into an impeller and deposited into a container such as a bag, barrel, or canister. Air is recirculated into the shop after passing through a filter to trap smaller particulate. Larger systems utilize a two-stage system, which separates larger particles from fine dust using a pre-collection device, such as a cyclone or baffled canister, before drawing the air through the impeller. Air from these units can then be exhausted outdoors or filtered and recirculated back into the work space. 
Dust collection systems are often part of a larger air quality management program that also includes large airborne particle filtration units mounted to the ceiling of shop spaces and mask systems to be worn by workers. Air filtration units are designed to process large volumes of air to remove fine particles (2 to 10 micrometres) suspended in the air. Masks are available in a variety of forms, from simple cotton face masks to elaborate respirators with tanked air — the need for which is determined by the environment in which the worker is operating. In industry, round or rectangular ducts are used to prevent buildup of dust in processing equipment. Types Inertial separators Inertial separators separate dust from gas streams using a combination of forces, such as centrifugal, gravitational, and inertial. These forces move the dust to an area where the forces exerted by the gas stream are minimal. The separated dust is moved by gravity into a hopper, where it is temporarily stored. The three primary types of inertial separators are: Settling chambers Baffle chambers Centrifugal collectors Neither settling chambers nor baffle chambers are commonly used in the minerals processing industry. However, their principles of operation are often incorporated into the design of more efficient dust collectors. Settling chamber A settling chamber (or stiveroom) consists of a large box installed in the ductwork. The increase in cross-sectional area at the chamber reduces the speed of the dust-filled airstream and heavier particles settle out. Settling chambers are simple in design and can be manufactured from almost any material. However, they are seldom used as primary dust collectors because of their large space requirements and low efficiency. A practical use is as precleaners for more efficient collectors. Advantages: 1) simple construction and low cost; 2) maintenance free; 3) collects particles without need of water. Disadvantages: 1) low efficiency; 2) large space required. Baffle chamber Baffle chambers use a fixed baffle plate that causes the conveying gas stream to make a sudden change of direction. Large-diameter particles do not follow the gas stream but continue into a dead air space and settle. Baffle chambers are used as precleaners. Centrifugal collectors Centrifugal collectors use cyclonic action to separate dust particles from the gas stream. In a typical cyclone, the dust gas stream enters at an angle and is spun rapidly. The centrifugal force created by the circular flow throws the dust particles toward the wall of the cyclone. After striking the wall, these particles fall into a hopper located underneath. Cyclone separators are found in all types of power and industrial applications, including pulp and paper plants, cement plants, steel mills, petroleum coke plants, metallurgical plants, saw mills and other kinds of facilities that process dust. Single-cyclone separators create a dual vortex to separate coarse from fine dust. The main vortex spirals downward and carries most of the coarser dust particles. The inner vortex, created near the bottom of the cyclone, spirals upward and carries finer dust particles. Multiple-cyclone separators consist of a number of small-diameter cyclones operating in parallel and having a common gas inlet and outlet, and operate on the same principle as single cyclone separators—creating an outer downward vortex and an ascending inner vortex. 
Multiple-cyclone separators remove more dust than single cyclone separators because the individual cyclones have a greater length and smaller diameter. Secondary-air-flow separators use a secondary air flow, injected into the cyclone, to accomplish several things. The secondary air flow increases the speed of the cyclonic action making the separator more efficient; it intercepts the particulate before it reaches the interior walls of the unit; and it forces the separated particulate toward the collection area. The secondary air flow protects the separator from particulate abrasion and allows the separator to be installed horizontally because gravity is not depended upon to move the separated particulate downward. Fabric filters Commonly known as baghouses, fabric collectors use filtration to separate dust particulates from dusty gases. They are one of the most efficient and cost-effective types of dust collectors available, and can achieve a collection efficiency of more than 99% for very fine particulates. Dust-laden gases enter the baghouse and pass through fabric bags that act as filters. The bags can be of woven or felted cotton, synthetic, or glass-fiber material in either a tube or envelope shape. Wet scrubbers Dust collectors that use liquid are known as wet scrubbers. In these systems, the scrubbing liquid (usually water) comes into contact with a gas stream containing dust particles. Greater contact of the gas and liquid streams yields higher dust removal efficiency. There is a large variety of wet scrubbers; however, all have one of three basic configurations: gas-humidification, gas-liquid contact, or gas-liquid separation. Regardless of the contact mechanism used, as much liquid and dust as possible must be removed. Once contact is made, dust particulates and water droplets combine to form agglomerates. As the agglomerates grow larger, they settle into a collector. Spray-tower scrubber Wet scrubbers may be categorized by pressure drop as follows: Low-energy scrubbers (0.5 to 2.5 inches water gauge - 124.4 to 621.9 Pa) Low- to medium-energy scrubbers (2.5 to 6 inches water gauge - 0.622 to 1.493 kPa) Medium- to high-energy scrubbers (6 to 15 inches water gauge - 1.493 to 3.731 kPa) High-energy scrubbers (greater than 15 inches water gauge - greater than 3.731 kPa) Due to the large number of commercial scrubbers available, it is not possible to describe each individual type here. However, the following sections provide examples of typical scrubbers in each category. Electrostatic precipitators (ESP) Electrostatic precipitators use electrostatic forces to separate dust particles from exhaust gases. A number of high-voltage, direct-current discharge electrodes are placed between grounded collecting electrodes. The contaminated gases flow through the passage formed by the discharge and collecting electrodes. Electrostatic precipitators operate on the same principle as home "Ionic" air purifiers. The airborne particles receive a negative charge as they pass through the ionized field between the electrodes. These charged particles are then attracted to a grounded or positively charged electrode and adhere to it. Unit collectors Unlike central collectors, unit collectors control contamination at its source. They are small and self-contained, consisting of a fan and some form of dust collector. They are suitable for isolated, portable, or frequently moved dust-producing operations, such as bins and silos or remote belt-conveyor transfer points. 
Advantages of unit collectors include small space requirements, the return of collected dust to main material flow, and low initial cost. However, their dust-holding and storage capacities, servicing facilities, and maintenance periods have been sacrificed. A number of designs are available, with capacities ranging from 200 to 2,000 ft3/min (90 to 900 L/s). There are two main types of unit collectors: Fabric collectors, with manual shaking or pulse-jet cleaning - normally used for fine dust Cyclone collectors - normally used for coarse dust Fabric collectors are frequently used in minerals processing operations because they provide high collection efficiency and uninterrupted exhaust airflow between cleaning cycles. Cyclone collectors are used when coarser dust is generated, as in woodworking, metal grinding, or machining. The following points should be considered when selecting a unit collector: Cleaning efficiency must comply with all applicable regulations. The unit maintains its rated capacity while accumulating large amounts of dust between cleanings. Simple cleaning operations do not increase the surrounding dust concentration. Has the ability to operate unattended for extended periods of time (for example, 8 hours). Automatic discharge or sufficient dust storage space to hold at least one week's accumulation. If renewable filters are used, they should not have to be replaced more than once a month. Durable. Quiet. Use of unit collectors may not be appropriate if the dust-producing operations are located in an area where central exhaust systems would be practical. Dust removal and servicing requirements are expensive for many unit collectors and are more likely to be neglected than those for a single, large collector. Selecting a dust collector Dust collectors vary widely in design, operation, effectiveness, space requirements, construction, and capital, operating, and maintenance costs. Each type has advantages and disadvantages. However, the selection of a dust collector should be based on the following general factors: Dust concentration and particle size – For minerals processing operations, the dust concentration can range from 0.1 to 5.0 grains of dust per cubic foot of air (0.23 to 11.44 grams per cubic meter), and the particle size can vary from 0.5 to 100 micrometres (μm) in diameter. Degree of dust collection required – The degree of dust collection required depends on its potential as a health hazard or public nuisance, the plant location, the allowable emission rate, the nature of the dust, its salvage value, and so forth. The selection of a collector should be based on the efficiency required and should consider the need for high-efficiency, high-cost equipment, such as electrostatic precipitators; high-efficiency, moderate-cost equipment, such as baghouses or wet scrubbers; or lower cost, primary units, such as dry centrifugal collectors. Characteristics of airstream – The characteristics of the airstream can have a significant impact on collector selection. For example, cotton fabric filters cannot be used where air temperatures exceed 180 °F (82 °C). Also, condensation of steam or water vapor can blind bags. Various chemicals can attack fabric or metal and cause corrosion in wet scrubbers. Characteristics of dust – Moderate to heavy concentrations of many dusts (such as dust from silica sand or metal ores) can be abrasive to dry centrifugal collectors. Hygroscopic material can blind bag collectors. Sticky material can adhere to collector elements and plug passages. 
Some particle sizes and shapes may rule out certain types of fabric collectors. The combustible nature of many fine materials rules out the use of electrostatic precipitators. Methods of disposal – Methods of dust removal and disposal vary with the material, plant process, volume, and type of collector used. Collectors can unload continuously or in batches. Dry materials can create secondary dust problems during unloading and disposal that do not occur with wet collectors. Disposal of wet slurry or sludge can be an additional material-handling problem; sewer or water pollution problems can result if wastewater is not treated properly. Choosing the right size dust collector depends on the airflow volume and the air-to-cloth ratio, which determine the efficiency of the system. Optimal dust collecting equipment improves employee retention and preserves equipment, which helps lower maintenance and replacement costs. Choosing an oversized, undersized, or otherwise unsuitable dust collector can cause many issues that impact performance and maintenance costs. Hence, the dust collector should be chosen to suit the company's specific workplace. It must provide a safe and healthy work environment for the employees. Moreover, employee efficiency and production should not be ignored. Fan and motor The fan and motor system supplies mechanical energy to move contaminated air from the dust-producing source to a dust collector. Types of fans There are two main kinds of industrial fans: Centrifugal fans Axial-flow fans Centrifugal fans Centrifugal fans consist of a wheel or a rotor mounted on a shaft that rotates in a scroll-shaped housing. Air enters at the eye of the rotor, makes a right-angle turn, and is forced through the blades of the rotor by centrifugal force into the scroll-shaped housing. The centrifugal force imparts static pressure to the air. The diverging shape of the scroll also converts a portion of the velocity pressure into static pressure. There are three main types of centrifugal fans: Radial-blade fans - Radial-blade fans are used for heavy dust loads. Their straight, radial blades do not get clogged with material, and they withstand considerable abrasion. These fans have medium tip speeds and medium noise factors. Backward-blade fans - Backward-blade fans operate at higher tip speeds and thus are more efficient. Since material may build up on the blades, these fans should be used after a dust collector. Although they are noisier than radial-blade fans, backward-blade fans are commonly used for large-volume dust collection systems because of their higher efficiency. Forward-curved-blade fans - These fans have curved blades that are tipped in the direction of rotation. They have low space requirements, low tip speeds, and a low noise factor. They are usually used against low to moderate static pressures. Axial-flow fans Axial-flow fans are used in systems that have low resistance levels. These fans move the air parallel to the fan's axis of rotation. The screw-like action of the propellers moves the air in a straight-through parallel path, causing a helical flow pattern. The three main kinds of axial fans are: Propeller fans - These fans are used to move large quantities of air against very low static pressures. They are usually used for general ventilation or dilution ventilation and are good in developing up to 0.5 in. wg (124.4 Pa). Tube-axial fans - Tube-axial fans are similar to propeller fans except they are mounted in a tube or cylinder. 
Therefore, they are more efficient than propeller fans and can develop up to 3 to 4 in. wg (743.3 to 995 Pa). They are best suited for moving air containing substances such as condensible fumes or pigments. Vane-axial fans - Vane-axial fans are similar to tube-axial fans except air-straightening vanes are installed on the suction or discharge side of the rotor. They are easily adapted to multistaging and can develop static pressures as high as 14 to 16 in. wg (3.483 to 3.98 kPa). They are normally used for clean air only. Electric motors Electric motors are used to supply the necessary energy to drive the fan. Motors are selected to provide sufficient power to operate fans over the full range of process conditions (temperature and flow rate). Configurations Dust collectors can be configured into one of five common types: Ambient units - Ambient units are free-hanging systems for use when applications limit the use of source-capture arms or ductwork. Collection booths - Collection booths require no ductwork, and allow the worker greater freedom of movement. They are often portable. Downdraft tables - A downdraft table is a self-contained portable filtration system that removes harmful particulates and returns filtered air back into the facility with no external ventilation required. Source collector or Portable units - Portable units are for collecting dust, mist, fumes, or smoke at the source. Stationary units - An example of a stationary collector is a baghouse. Parameters involved in specifying dust collectors Important parameters in specifying dust collectors include airflow, the velocity of the air stream created by the vacuum producer; system power, the power of the system motor, usually specified in horsepower; storage capacity for dust and particles; and minimum particle size filtered by the unit. Other considerations when choosing a dust collection system include the temperature, moisture content, and the possibility of combustion of the dust being collected. Systems for fine removal may only contain a single filtration system (such as a filter bag or cartridge). However, most units utilize a primary and secondary separation/filtration system. In many cases the heat or moisture content of dust can negatively affect the filter media of a baghouse or cartridge dust collector. A cyclone separator or dryer may be placed before these units to reduce heat or moisture content before reaching the filters. Furthermore, some units may have third and fourth stage filtration. All separation and filtration systems used within the unit should be specified. A baghouse is an air pollution abatement device used to trap particulate by filtering gas streams through large fabric bags. The bags are typically made of glass fibers or fabric. A cyclone separator is an apparatus for the separation, by centrifugal means, of fine particles suspended in air or gas. Electrostatic precipitators are a type of air cleaner, which charges particles of dust by passing dust-laden air through a strong (50-100 kV) electrostatic field. This causes the particles to be attracted to oppositely charged plates so that they can be removed from the air stream. An impinger system is a device in which particles are removed by impinging the aerosol particles into a liquid. Modular media type units combine a variety of specific filter modules in one unit. These systems can provide solutions to many air contaminant problems. 
A typical system incorporates a series of disposable or cleanable pre-filters, a disposable vee-bag or cartridge filter. HEPA or carbon final filter modules can also be added. Various models are available, including free-hanging or ducted installations, vertical or horizontal mounting, and fixed or portable configurations. Filter cartridges are made out of a variety of synthetic fibers and are capable of collecting sub-micrometre particles without creating an excessive pressure drop in the system. Filter cartridges require periodic cleaning. A wet scrubber, or venturi scrubber, is similar to a cyclone but it has an orifice unit that sprays water into the vortex in the cyclone section, collecting all of the dust in a slurry system. The water media can be recirculated and reused to continue to filter the air. Eventually the solids must be removed from the water stream and disposed of. Filter cleaning methods Online cleaning – automatically timed filter cleaning which allows for continuous, uninterrupted dust collector operation for heavy dust operations. Offline cleaning – filter cleaning accomplished during dust collector shut down. Practical whenever the dust loading in each dust collector cycle does not exceed the filter capacity. Allows for maximum effectiveness in dislodging and disposing of dust. On-demand cleaning – filter cleaning initiated automatically when the filter is fully loaded, as determined by a specified drop in pressure across the media surface. Reverse-pulse/Reverse-jet cleaning – Filter cleaning method which delivers blasts of compressed air from the clean side of the filter to dislodge the accumulated dust cake. Impact/Rapper cleaning – Filter cleaning method in which high-velocity compressed air forced through a flexible tube results in an arbitrary rapping of the filter to dislodge the dust cake. Especially effective when the dust is extremely fine or sticky. Dangers of neglect Proper dust collection and air filtration is important in any work space. Repeated exposure to wood dust can cause chronic bronchitis, emphysema, "flu-like" symptoms, and cancer. Wood dust also frequently contains chemicals and fungi, which can become airborne and lodge deeply in the lungs, causing illness and damage. Another concern is the possibility of dust explosions. See also Axial fan design References External links EPA Air Pollutants and Control Techniques Additional information on various wet scrubber topologies and techniques Dust Air filters Particulate control Solid-gas separation
Dust collector
[ "Chemistry" ]
4,694
[ "Air filters", "Filters", "Separation processes by phases", "Solid-gas separation" ]
4,092,803
https://en.wikipedia.org/wiki/Rosette%20%28design%29
A rosette is a round, stylized flower design. Origin The rosette derives from the natural shape of the botanical rosette, formed by leaves radiating out from the stem of a plant and visible even after the flowers have withered. History The rosette design is used extensively in sculptural objects from antiquity, appearing in Mesopotamia, and in funeral steles' decoration in Ancient Greece. The rosette was an important symbol of Ishtar which had originally belonged to Inanna, along with the Star of Ishtar. It was adopted later in Romanesque and Renaissance architecture, and it was also common in the art of Central Asia, spreading as far as India, where it is used as a decorative motif in Greco-Buddhist art. Ancient origins One of the earliest appearances of the rosette in ancient art is in early fourth millennium BC Egypt. Another early Mediterranean occurrence of the rosette design derives from Minoan Crete; among other places, the design appears on the Phaistos Disc, recovered from the eponymous archaeological site in southern Crete. Modern use The formalised flower motif is often carved in stone or wood to create decorative ornaments for architecture and furniture, and in metalworking, jewelry design and the applied arts to form a decorative border or at the intersection of two materials. Rosette decorations have been used for formal military awards. They also appear in modern, civilian clothes, and are often worn prominently in political or sporting events. Rosettes sometimes decorate musical instruments, such as around the perimeter of sound holes of guitars. Gallery See also Six petal rosette Footnotes Ornaments (architecture) Decorative arts Ornaments Visual motifs
Rosette (design)
[ "Mathematics" ]
326
[ "Symbols", "Visual motifs" ]
4,092,891
https://en.wikipedia.org/wiki/Shakedown%20%28testing%29
A shakedown is a period of testing or a trial journey undergone by a ship, aircraft or other craft and its crew before being declared operational. Statistically, a proportion of the components will fail after a relatively short period of use, and those that survive this period can be expected to last for a much longer, and more importantly, predictable life-span. For example, if a bolt has a hidden flaw introduced during manufacturing, it will not be as reliable as other bolts of the same type. Example procedures Racing cars Most racing cars require a "shakedown" test before being used at a race meeting. For example, on May 3, 2006, Luca Badoer performed shakedowns on all three of Ferrari's Formula One cars at the Fiorano Circuit, in preparation for the European Grand Prix at the Nürburgring. Badoer was the Ferrari F1 team's test driver at the time, while the main drivers were Michael Schumacher and Felipe Massa. Aircraft Aircraft shakedowns check avionics, flight controls, all systems, and the general airframe's airworthiness. In aircraft there are two forms of shakedown testing: shakedown testing of the design as a whole with flight-tests, and shakedown testing of individual aircraft. Shakedown testing of an aircraft design involves test flights of the prototypes, a process that actually starts months or years before first flight with simulator flights and hardware testing. This process often incorporates an iron bird test rig in which all the flight control systems are brought together in an engineering lab, while test-articles of the physical structure will be subjected to stress and fatigue loads beyond anything the aircraft is likely to encounter in service (sometimes, although not necessarily, testing one or more articles to destruction). The aircraft systems are gradually commissioned on board the prototypes; first on external power, then, once engines are fitted, on internal power, progressing to taxi trials and eventually first flight. Flight-testing proceeds conservatively, demonstrating that each test condition can be safely achieved before proceeding to the next. Prototype aircraft are generally heavily instrumented in order to support these flight-test objectives by capturing large amounts of data for both live analysis (which on larger aircraft such as airliners may happen at dedicated flight-test engineer stations on board) and for analysis post-flight. The ultimate aim of testing is to demonstrate the aircraft can operate safely throughout its flight envelope and that all regulatory requirements of the relevant civil aviation authorities have been met, allowing the design to receive its Certificate of Airworthiness. Shakedown testing of production aircraft is a simplified version of prototype testing. The design has been demonstrated to be safe and the objective is to now demonstrate that the components on an individual aircraft operate appropriately. Shakedown now comprises the general power-on trials, followed by one or more pre-delivery test flights carried out by the aircraft builder's personnel, and generally culminating in a final acceptance test also involving the purchaser's own flight crew and engineering personnel. Ship A shakedown for a ship is generally referred to as a sea trial. The maiden voyage takes place after a successful shakedown. 
However, for warships, the shakedown period extends post-commissioning as the new crew familiarise themselves with the ship and with operating together as a single unit, raising their proficiency until the warship can be considered operational. Hiking A shakedown hike is when a backpacker, in preparation for a long hike such as the Appalachian Trail, Pacific Crest Trail or the Continental Divide Trail, takes their selection of equipment on a shorter backpacking trip with the intention of testing its trail worthiness. A related term, the pack shakedown, is when a novice hiker has a more experienced hiker suggest changes to the novice's equipment, often simply suggesting things to leave out. See also Bathtub curve, the engineering concept behind shakedowns Demonstration and Shakedown Operation, tests performed by the United States Navy for submarine certification Burn-in References Transport operations
Shakedown (testing)
[ "Physics" ]
801
[ "Physical systems", "Transport", "Transport operations" ]
4,093,697
https://en.wikipedia.org/wiki/Monte%20Carlo%20localization
Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot. Basic description Consider a robot with an internal map of its environment. When the robot moves around, it needs to know where it is within this map. Determining its location and rotation (more generally, the pose) by using its sensor observations is known as robot localization. Because the robot may not always behave in a perfectly predictable way, it generates many random guesses of where it is going to be next. These guesses are known as particles. Each particle contains a full description of a possible future state. When the robot observes the environment, it discards particles inconsistent with this observation, and generates more particles close to those that appear consistent. In the end, hopefully most particles converge to where the robot actually is. State representation The state of the robot depends on the application and design. For example, the state of a typical 2D robot may consist of a tuple (x, y, θ) for position and orientation. For a robotic arm with 10 joints, it may be a tuple containing the angle at each joint: (θ1, θ2, ..., θ10). The belief, which is the robot's estimate of its current state, is a probability density function distributed over the state space. In the MCL algorithm, the belief at a time t is represented by a set of M particles X_t = {x_t[1], x_t[2], ..., x_t[M]}. Each particle contains a state, and can thus be considered a hypothesis of the robot's state. Regions in the state space with many particles correspond to a greater probability that the robot will be there—and regions with few particles are unlikely to be where the robot is. The algorithm assumes the Markov property that the current state's probability distribution depends only on the previous state (and not any ones before that), i.e., x_t depends only on x_{t-1}. This only works if the environment is static and does not change with time. Typically, on start up, the robot has no information on its current pose so the particles are uniformly distributed over the configuration space. Overview Given a map of the environment, the goal of the algorithm is for the robot to determine its pose within the environment. At every time t the algorithm takes as input the previous belief X_{t-1}, an actuation command u_t, and data received from sensors z_t; and the algorithm outputs the new belief X_t. 
Algorithm MCL(X_{t-1}, u_t, z_t):
    X_temp = empty set; X_t = empty set
    for m = 1 to M:
        x_t[m] = motion_update(u_t, x_{t-1}[m])
        w_t[m] = sensor_update(z_t, x_t[m])
        add (x_t[m], w_t[m]) to X_temp
    endfor
    for m = 1 to M:
        draw x_t[m] from X_temp with probability proportional to w_t[m]
        add x_t[m] to X_t
    endfor
    return X_t
Example for 1D robot Consider a robot in a one-dimensional circular corridor with three identical doors, using a sensor that returns either true or false depending on whether there is a door. After a few iterations of motion and sensing, most of the particles converge on the actual position of the robot, as desired. Motion update During the motion update, the robot predicts its new location based on the actuation command given, by applying the simulated motion to each of the particles. For example, if a robot moves forward, all particles move forward in their own directions no matter which way they point. If a robot rotates 90 degrees clockwise, all particles rotate 90 degrees clockwise, regardless of where they are. However, in the real world, no actuator is perfect: it may overshoot or undershoot the desired amount of motion. When a robot tries to drive in a straight line, it inevitably curves to one side or the other due to minute differences in wheel radius. Hence, the motion model must compensate for noise. Inevitably, the particles diverge during the motion update as a consequence. This is expected since a robot becomes less sure of its position if it moves blindly without sensing the environment. Sensor update When the robot senses its environment, it updates its particles to more accurately reflect where it is. For each particle, the robot computes the probability that, had it been at the state of the particle, it would perceive what its sensors have actually sensed. It assigns a weight w_t[m] to each particle, proportional to that probability. Then, it randomly draws new particles from the previous belief, with probability proportional to w_t[m]. Particles consistent with sensor readings are more likely to be chosen (possibly more than once) and particles inconsistent with sensor readings are rarely picked. As such, particles converge towards a better estimate of the robot's state. This is expected since a robot becomes increasingly sure of its position as it senses its environment. Properties Non-parametricity The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation. Some other Bayesian localization algorithms, such as the Kalman filter (and variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal. For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at. In such situations, the particle filter can give better performance than parametric filters. Another non-parametric approach to Markov localization is the grid-based localization, which uses a histogram to represent the belief distribution. Compared with the grid-based approach, the Monte Carlo localization is more accurate because the state represented in samples is not discretized. Computational requirements The particle filter's time complexity is linear with respect to the number of particles. Naturally, the more particles, the better the accuracy, so there is a compromise between speed and accuracy and it is desired to find an optimal value of M. 
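A minimal Python sketch of the update loop described above is given here, using the 1D corridor example; the corridor map, the noise level, and the simple door sensor model are illustrative assumptions for the sketch rather than part of the algorithm's definition:

import random

# Illustrative 1D circular corridor of length 10 with three doors.
CORRIDOR_LENGTH = 10.0
DOORS = [2.0, 5.0, 8.0]
M = 1000  # number of particles

def near_door(x, tol=0.5):
    # True if position x is within tol of any door on the circular corridor.
    for d in DOORS:
        dist = abs(x - d)
        if min(dist, CORRIDOR_LENGTH - dist) < tol:
            return True
    return False

def motion_update(u, x):
    # Shift the particle by the commanded motion u plus Gaussian noise.
    return (x + u + random.gauss(0.0, 0.1)) % CORRIDOR_LENGTH

def sensor_update(z, x):
    # Weight: likelihood of the boolean door reading z given state x.
    return 0.9 if near_door(x) == z else 0.1

def mcl_step(particles, u, z):
    # Motion update and sensor update for every particle, then resampling
    # with probability proportional to the weights.
    moved = [motion_update(u, x) for x in particles]
    weights = [sensor_update(z, x) for x in moved]
    return random.choices(moved, weights=weights, k=len(particles))

# Start with a uniform belief over the corridor, then run a few update steps.
particles = [random.uniform(0.0, CORRIDOR_LENGTH) for _ in range(M)]
for command, reading in [(1.0, True), (1.0, False), (1.0, True)]:
    particles = mcl_step(particles, command, reading)
print("crude point estimate of position:", sum(particles) / len(particles))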
One strategy to select M is to continuously generate additional particles until the next pair of command and sensor reading has arrived. This way, the greatest possible number of particles is obtained while not impeding the function of the rest of the robot. As such, the implementation is adaptive to available computational resources: the faster the processor, the more particles can be generated and therefore the more accurate the algorithm is. Compared to grid-based Markov localization, Monte Carlo localization has reduced memory usage since memory usage only depends on number of particles and does not scale with size of the map, and can integrate measurements at a much higher frequency. The algorithm can be improved using KLD sampling, as described below, which adapts the number of particles to use based on how sure the robot is of its position. Particle deprivation A drawback of the naive implementation of Monte Carlo localization occurs in a scenario where a robot sits at one spot and repeatedly senses the environment without moving. Suppose that the particles all converge towards an erroneous state, or that an occult hand picks up the robot and moves it to a new location after particles have already converged. As particles far away from the converged state are rarely selected for the next iteration, they become scarcer on each iteration until they disappear altogether. At this point, the algorithm is unable to recover. This problem is more likely to occur for a small number of particles and when the particles are spread over a large state space. In fact, any particle filter algorithm may accidentally discard all particles near the correct state during the resampling step. One way to mitigate this issue is to randomly add extra particles on every iteration. This is equivalent to assuming that, at any point in time, the robot has some small probability of being kidnapped to a random position in the map, thus causing a fraction of random states in the motion model. By guaranteeing that no area in the map is totally deprived of particles, the algorithm is now robust against particle deprivation. Variants The original Monte Carlo localization algorithm is fairly simple. Several variants of the algorithm have been proposed, which address its shortcomings or adapt it to be more effective in certain situations. KLD sampling Monte Carlo localization may be improved by sampling the particles in an adaptive manner based on an error estimate using the Kullback–Leibler divergence (KLD). Initially, it is necessary to use a large M due to the need to cover the entire map with a uniformly random distribution of particles. However, when the particles have converged around the same location, maintaining such a large sample size is computationally wasteful. KLD–sampling is a variant of Monte Carlo localization where, at each iteration, a sample size M_x is calculated. The sample size M_x is calculated such that, with probability 1 - δ, the error between the true posterior and the sample-based approximation is less than ε. The variables ε and δ are fixed parameters. The main idea is to create a grid (a histogram) overlaid on the state space. Each bin in the histogram is initially empty. At each iteration, a new particle is drawn from the previous (weighted) particle set with probability proportional to its weight.
Instead of the resampling done in classic MCL, the KLD–sampling algorithm draws particles from the previous, weighted, particle set and applies the motion and sensor updates before placing the particle into its bin. The algorithm keeps track of the number of non-empty bins, k. If a particle is inserted into a previously empty bin, the value of M_x is recalculated, which increases mostly linearly in k. This is repeated until the sample size is the same as M_x. It is easy to see KLD–sampling culls redundant particles from the particle set, by only increasing M_x when a new location (bin) has been filled. In practice, KLD–sampling consistently outperforms and converges faster than classic MCL. References Robot navigation Monte Carlo methods
Monte Carlo localization
[ "Physics" ]
2,113
[ "Monte Carlo methods", "Computational physics" ]
4,093,822
https://en.wikipedia.org/wiki/System%20in%20a%20package
A system in a package (SiP) or system-in-package is a number of integrated circuits (ICs) enclosed in one chip carrier package or encompassing an IC package substrate that may include passive components and perform the functions of an entire system. The ICs may be stacked using package on package, placed side by side, and/or embedded in the substrate. The SiP performs all or most of the functions of an electronic system, and is typically used when designing components for mobile phones, digital music players, etc. Dies containing integrated circuits may be stacked vertically on the package substrate. They are internally connected by fine wires that are bonded to the package substrate. Alternatively, with flip chip technology, solder bumps are used to join stacked chips together and to the package substrate, or even both techniques can be used in a single package. SiPs are like systems on a chip (SoCs) but less tightly integrated and not on a single semiconductor die. SiPs can be used to reduce the size of a system, improve performance, or reduce costs. The technology evolved from multi chip module (MCM) technology, the difference being that SiPs also use die stacking, which stacks several chips or dies on top of each other. Technology SiP dies can be stacked vertically or tiled horizontally, with techniques like chiplets or quilt packaging. SiPs connect the dies with standard off-chip wire bonds or solder bumps, unlike slightly denser three-dimensional integrated circuits which connect stacked silicon dies with conductors running through the die using through-silicon vias. Many different 3D packaging techniques have been developed for stacking many fairly standard chip dies into a compact area. SiPs can contain several chips or dies—such as a specialized processor, DRAM, flash memory—combined with passive components—resistors and capacitors—all mounted on the same substrate. This means that a complete functional unit can be built in a single package, so that few external components need to be added to make it work. This is particularly valuable in space constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design. Despite its benefits, this technique decreases the yield of fabrication since any defective chip in the package will result in a non-functional packaged integrated circuit, even if all other modules in that same package are functional. SiPs are in contrast to the common system on a chip (SoC) integrated circuit architecture which integrates components based on function into a single circuit die. An SoC will typically integrate a CPU, graphics and memory interfaces, hard-disk and USB connectivity, random-access and read-only memories, and secondary storage and/or their controllers on a single die. In comparison an SiP would connect these modules as discrete components in one or more chip packages or dies. An SiP resembles the common traditional motherboard-based PC architecture, as it separates components based on function and connects them through a central interfacing circuit board. An SiP has a lower grade of integration in comparison to an SoC. 
Hybrid integrated circuits (HICs) are somewhat similar to SiPs; however, they tend to handle analog signals whereas SiPs usually handle digital signals. Because of this, HICs use older or less advanced technology: they tend to use single-layer circuit boards or substrates, do not use die stacking, do not use flip chip or BGA for connecting components or dies, use only wire bonding for connecting dies or small-outline integrated circuit packages, and use dual in-line or single in-line packages instead of BGA for interfacing outside the hybrid IC. SiP technology is primarily being driven by early market trends in wearables, mobile devices and the internet of things, which do not demand unit volumes as high as in the established consumer and business SoC market. As the internet of things becomes more of a reality and less of a vision, there is innovation going on at the system on a chip and SiP level so that microelectromechanical (MEMS) sensors can be integrated on a separate die and control the connectivity. SiP solutions may require multiple packaging technologies, such as flip chip, wire bonding, wafer-level packaging, through-silicon vias (TSVs), chiplets and more. Suppliers Advanced Micro Devices Amkor Technology Atmel AMPAK Technology Inc. NANIUM, S.A. ASE Group CeraMicro ChipSiP Technology Cypress Semiconductor STATS ChipPAC Ltd Toshiba Renesas SanDisk Samsung Silicon Labs Octavo Systems Nordic Semiconductor JCET Desay Sip Universal Scientific Industrial (USI) See also Advanced packaging (semiconductors) Multi-chip module System on a chip (SoC) Hybrid integrated circuit (HIC) References Packaging (microfabrication) Integrated circuits Electronic design Microtechnology Computer systems
System in a package
[ "Materials_science", "Technology", "Engineering" ]
971
[ "Computer engineering", "Packaging (microfabrication)", "Microtechnology", "Electronic design", "Materials science", "Computer systems", "Computer science", "Electronic engineering", "Design", "Computers", "Integrated circuits" ]
4,094,117
https://en.wikipedia.org/wiki/Negative%20stain
In microscopy, negative staining is an established method, often used in diagnostic microscopy, for contrasting a thin specimen with an optically opaque fluid. In this technique, the background is stained, leaving the actual specimen untouched, and thus visible. This contrasts with positive staining, in which the actual specimen is stained. Bright field microscopy For bright-field microscopy, negative staining is typically performed using a black ink fluid such as nigrosin and India ink. The specimen, such as a wet bacterial culture spread on a glass slide, is mixed with the negative stain and allowed to dry. When viewed with the microscope the bacterial cells, and perhaps their spores, appear light against the dark surrounding background. An alternative method has been developed using an ordinary waterproof marking pen to deliver the negative stain. Transmission electron microscopy In the case of transmission electron microscopy, opaqueness to electrons is related to the atomic number, i.e., the number of protons. Some suitable negative stains include ammonium molybdate, uranyl acetate, uranyl formate, phosphotungstic acid, osmium tetroxide, osmium ferricyanide and auroglucothionate. These have been chosen because they scatter electrons strongly and also adsorb to biological matter well. The structures which can be negatively stained are much smaller than those studied with the light microscope. Here, the method is used to view viruses, bacteria, bacterial flagella, biological membrane structures and proteins or protein aggregates, which all have a low electron-scattering power. Some stains, such as osmium tetroxide and osmium ferricyanide, are very chemically active. As strong oxidants, they cross-link lipids mainly by reacting with unsaturated carbon-carbon bonds, and thereby both fix biological membranes in place in tissue samples and simultaneously stain them. The choice of negative stain in electron microscopy can be very important. An early study of plant viruses using negatively stained leaf dips from a diseased plant showed only spherical viruses with one stain and only rod-shaped viruses with another. The verified conclusion was that this plant suffered from a mixed infection by two separate viruses. Negative staining at both light microscope and electron microscope level should never be performed with infectious organisms unless stringent safety precautions are followed. Negative staining is usually a very mild preparation method and thus does not reduce the possibility of operator infection. Other applications Negative staining transmission electron microscopy has also been successfully employed for study and identification of aqueous lipid aggregates like lamellar liposomes (le), inverted spherical micelles (M) and inverted hexagonal HII cylindrical (H) phases (see figure). References External links Electron microscopy stains Electron microscopy Microscopy Staining
Negative stain
[ "Chemistry", "Biology" ]
569
[ "Electron", "Electron microscopy", "Staining", "Microbiology techniques", "Microscopy", "Cell imaging" ]
4,094,126
https://en.wikipedia.org/wiki/MAFA
MAFA (Mast cell function-associated antigen) is a type II membrane glycoprotein, first identified on the surface of rat mucosal-type mast cells of the RBL-2H3 line. More recently, human and mouse homologues of MAFA have been discovered that are also (or only) expressed by NK and T-cells. MAFA is closely linked with the type 1 Fcɛ receptors not only in mucosal mast cells of humans and mice but also in the serosal mast cells of these same organisms. It has the ability both to function as a channel for calcium ions and to interact with other receptors to inhibit certain cell processes. Its function is based on its specialized structure, which contains many specialized motifs and sequences that allow its functions to take place. Discovery Experimental discovery MAFA was initially discovered by Enrique Ortega and Israel Pecht in 1988 while studying the type 1 Fcɛ receptors (FcɛRI) and the unknown Ca2+ channels that allowed these receptors to work in the cellular membrane. Ortega and Pecht experimented using a series of monoclonal antibodies on the RBL-2H3 line of rat mast cells. While they were searching for a specific antibody that would raise a response, the G63 monoclonal antibody was found to inhibit the cellular secretions linked to the FcɛRI receptors in these rat mucosal mast cells. The G63 antibody attached to a specific membrane receptor protein that caused the inhibition process to occur. Specifically, the inhibition occurred by the G63 antibody and glycoprotein cross-linking so that the processes of inflammation mediator formation, Ca2+ intake into the cell, and the hydrolysis of phosphatidylinositides were all stopped. This caused biochemical inhibition of the normal FcɛRI response. The identified receptor protein was then isolated and studied, where it was found that when cross-linked, the protein actually had a conformational change that localized the FcɛRI receptors. Based on these results, both Ortega and Pecht named this newly discovered protein Mast cell function-associated antigen, or MAFA for short. Structure and coding Protein structure General structure MAFA is said to be a type II membrane glycoprotein, which means that its N-terminus will face the cytosol while its C-terminus will face the extracellular environment. The protein is 188 amino acids in length and has both hydrophobic and hydrophilic regions within these amino acids. The MAFA protein weighs between 28 and 40 kilodaltons and can exist as either a monomer or a homodimer in various species, as seen by the SDS-PAGE results that show two broad bands based on these two forms. The MAFA core polypeptide sequence weighs about 19 kilodaltons; however, a large amount of the weight comes from the N-linked oligosaccharides that are attached onto the protein. This heavy glycosylation is a common occurrence among type II membrane glycoproteins and is a key part of both their structure and function. The variation in glycosylation levels plays an important role in the properties of MAFA proteins, so the protein must be properly made and modified in order to have full functionality. CRD region The C-terminus of MAFA contains 114 amino acids and has a distinct region called the carbohydrate recognition domain, or CRD for short. This region, as implied in the name, is where various carbohydrates and signaling molecules are recognized and attach to the protein. This CRD is present in many other glycoproteins present in higher level eukaryotes. 
The CRD is distinguished by a conserved sequence of 15 amino acids: two glycine residues, two leucine residues, five tryptophan residues, and six cysteine residues. Through their interactions, these residues help to form various motifs, including the WIGL and CYYF motifs. Intracellular domain Along with specialized sequences on both the N-terminus and C-terminus, the intracellular domain of this protein contains a specialized sequence called the SIYSTL sequence, named after the one-letter abbreviations of its amino acid residues. All of the amino acids in this sequence are polar in nature and form part of an immunoreceptor tyrosine-based inhibitory motif (ITIM). This ITIM means that the MAFA receptor protein is classified not only as a type II glycoprotein but also as an inhibitory receptor. Genetic coding Transcriptional and translational coding As with other proteins, MAFA undergoes transcription followed by translation and post-translational modification in the ER and Golgi. The genomic coding region of this protein spans about 13 kilobases and contains five exons separated by four introns. Of these five exons, three help to encode the CRD region mentioned above. The gene is also regulated through a promoter region located 664 base pairs upstream of the first nucleotide of the coding sequence. Like other genes, it is transcribed from multiple start points and the resulting transcript is processed into an mRNA. Alternative splicing Once transcribed into mRNA, the MAFA transcript was also found to undergo alternative splicing, which allows various forms of the MAFA protein to be translated and accounts for many of the variations discussed above. One splice form lacks the transmembrane portion of the MAFA protein and produces a soluble version; this finding, at the time unique to this protein, has led scientists to look for similar alternative splicing in other mast cell transmembrane proteins as well. Once translated, the protein moves through the normal cellular pathway from the ER to the Golgi and eventually to the cell membrane, where it is integrated and becomes functional. Function Channel functionality As discovered by Ortega and Pecht, one of the main functions of MAFA is to act as a Ca2+ channel, as seen in their experiment in which Ca2+ uptake was inhibited when the G63 antibody was bound to the MAFA receptor region. Additionally, because it is a type II membrane glycoprotein and can change conformation to allow varying amounts of calcium to enter the cell, MAFA also functions as a receptor molecule and can inhibit various processes in mast cells. Specifically, this inhibition is due in part to the SIYSTL motif in the intracellular domain of the protein. This motif contains tyrosine residues, some of which are phosphorylated, and phosphorylation of these residues plays the primary role in allowing MAFA to inhibit different biochemical processes. Clustering FcɛRI MAFA also interacts extensively with FcɛRI receptors through the formation of aggregates and lipid rafts within the cell membrane. Forming these aggregate structures changes the conformation of MAFA so that it can fully interact with the FcɛRI receptors; as a result it can no longer bind the G63 monoclonal antibody and is prevented from allowing diffusion across the membrane. 
Along with inhibition of MAFA function, the FcɛRI receptor is also inhibited, meaning that even if a stimulus were bound to it, the FcɛRI would not trigger the hydrolysis of phosphatidylinositides as it normally does. Therefore, the formation of these large clusters inhibits the function of both MAFA and the FcɛRI receptors and can lead to further inhibition of cell signaling processes within the cell. Even when MAFA is not induced to interact heavily with FcɛRI, natural interactions between these two receptors in the mast cell membrane produce small amounts of MAFA-FcɛRI complexes without large changes to the function of either. The specific mechanism by which MAFA and FcɛRI interact and aggregate has yet to be discovered. Cell cycle Along with interacting with other proteins, MAFA can form aggregates consisting only of itself, induced either by the monoclonal antibody G63, which was involved in its discovery, or by F(ab')2 fragments of the antibody binding to its extracellular domain. Formation of these MAFA clusters was found to inhibit cell cycle processes and to prevent mitosis and DNA replication from occurring. Specifically, this clustering causes an increase in the tyrosine phosphorylation of various cyclins and proteins involved in the cell cycle. The two main proteins phosphorylated are p62DOK and the inositol phosphatase SHIP, and this in turn alters the downstream processes in which these proteins are involved. For p62DOK, phosphorylation increases its binding to RasGAP, which inhibits Ras by stimulating its GTPase activity so that GDP remains bound, shutting off Ras function. With Ras inhibited, downstream promotion of DNA transcription, including that of some cell cycle proteins, is also halted. For the inositol phosphatase SHIP, phosphorylation increases its binding to Shc, which is normally bound to Sos1 during cell cycling. Sos1 and SHIP bind Shc competitively, and because phosphorylated SHIP has an increased affinity for Shc, Sos1 binding decreases greatly. This relationship suggests that decreased Sos1 binding is also associated with halting of the cell cycle, although the mechanism by which this inhibition occurs has not been discovered. Alternative forms MAFA can also exist in multiple forms due to alternative splicing; one of these forms is a soluble version of the protein in which the transmembrane portion is not translated. This form of MAFA can diffuse away from the cell membrane into the extracellular matrix without being degraded or broken down by lysosomes, which suggests that it serves a function within human cells. The degree of glycosylation and the specific function of these soluble proteins are still unknown, but they are hypothesized to play an important role in helping to maintain calcium levels and in limiting the formation of inflammation mediators within mast cells. Much about these alternative forms remains to be discovered. References Cell biology
MAFA
[ "Biology" ]
2,160
[ "Cell biology" ]
4,094,572
https://en.wikipedia.org/wiki/Spent%20nuclear%20fuel
Spent nuclear fuel, occasionally called used nuclear fuel, is nuclear fuel that has been irradiated in a nuclear reactor (usually at a nuclear power plant). It is no longer useful in sustaining a nuclear reaction in an ordinary thermal reactor and, depending on its point along the nuclear fuel cycle, it will have different isotopic constituents than when it started. Nuclear fuel rods become progressively more radioactive (and less thermally useful) due to neutron activation as they are fissioned, or "burnt", in the reactor. A fresh rod of low enriched uranium pellets (which can be safely handled with gloved hands) will become a highly lethal gamma emitter after 1–2 years of core irradiation, unsafe to approach unless under many feet of water shielding. This makes their invariable accumulation and safe temporary storage in spent fuel pools a prime source of high-level radioactive waste and a major ongoing issue for future permanent disposal. Nature of spent fuel Nanomaterial properties In the oxide fuel, intense temperature gradients exist that cause fission products to migrate. The zirconium tends to move to the centre of the fuel pellet where the temperature is highest, while the lower-boiling fission products move to the edge of the pellet. The pellet is likely to contain many small bubble-like pores that form during use; the fission product xenon migrates to these voids. Some of this xenon will then decay to form caesium, hence many of these bubbles contain a large concentration of caesium. In the case of mixed oxide (MOX) fuel, the xenon tends to diffuse out of the plutonium-rich areas of the fuel, and it is then trapped in the surrounding uranium dioxide. The neodymium tends not to be mobile. Metallic particles of an Mo-Tc-Ru-Pd alloy also tend to form in the fuel. Other solids form at the boundary between the uranium dioxide grains, but the majority of the fission products remain in the uranium dioxide as solid solutions. A paper describing a method of making a non-radioactive "uranium active" simulation of spent oxide fuel has been published. Fission products About 3% of the mass of spent nuclear fuel consists of the fission products of 235U and 239Pu (as well as indirect products in their decay chains); these are considered radioactive waste or may be separated further for various industrial and medical uses. The fission products include every element from zinc through to the lanthanides; much of the fission yield is concentrated in two peaks, one in the second transition row (Zr, Mo, Tc, Ru, Rh, Pd, Ag) and the other later in the periodic table (I, Xe, Cs, Ba, La, Ce, Nd). Many of the fission products are either non-radioactive or only short-lived radioisotopes, but a considerable number are medium- to long-lived radioisotopes such as 90Sr, 137Cs, 99Tc and 129I. Research has been conducted by several different countries into segregating the rare isotopes in fission waste, including the "fission platinoids" (Ru, Rh, Pd) and silver (Ag), as a way of offsetting the cost of reprocessing; this is not currently done commercially. The fission products can modify the thermal properties of the uranium dioxide; the lanthanide oxides tend to lower the thermal conductivity of the fuel, while the metallic nanoparticles slightly increase it. Table of chemical data Plutonium About 1% of the mass is 239Pu and 240Pu resulting from conversion of 238U, which may be considered either as a useful byproduct or as dangerous and inconvenient waste. 
One of the main concerns regarding nuclear proliferation is to prevent this plutonium from being used by states, other than those already established as nuclear weapons states, to produce nuclear weapons. If the reactor has been used normally, the plutonium is reactor-grade, not weapons-grade: it contains more than 19% 240Pu and less than 80% 239Pu, which makes it not ideal for making bombs. If the irradiation period has been short then the plutonium is weapons-grade (more than 93%). Uranium 96% of the mass is the remaining uranium: most of the original 238U and a little 235U. Usually 235U would be less than 0.8% of the mass along with 0.4% 236U. Reprocessed uranium will contain 236U, which is not found in nature; this is one isotope that can be used as a fingerprint for spent reactor fuel. If using a thorium fuel to produce fissile 233U, the SNF (Spent Nuclear Fuel) will have 233U, with a half-life of 159,200 years (unless this uranium is removed from the spent fuel by a chemical process). The presence of 233U will affect the long-term radioactive decay of the spent fuel. If compared with MOX fuel, the activity around one million years in the cycles with thorium will be higher due to the presence of the not fully decayed 233U. For natural uranium fuel, fissile component starts at 0.7% 235U concentration in natural uranium. At discharge, total fissile component is still 0.5% (0.2% 235U, 0.3% fissile 239Pu, 241Pu). Fuel is discharged not because fissile material is fully used-up, but because the neutron-absorbing fission products have built up and the fuel becomes significantly less able to sustain a nuclear reaction. Some natural uranium fuels use chemically active cladding, such as Magnox, and need to be reprocessed because long-term storage and disposal is difficult. Minor actinides Spent reactor fuel contains traces of the minor actinides. These are actinides other than uranium and plutonium and include neptunium, americium and curium. The amount formed depends greatly upon the nature of the fuel used and the conditions under which it was used. For instance, the use of MOX fuel (239Pu in a 238U matrix) is likely to lead to the production of more 241Am and heavier nuclides than a uranium/thorium based fuel (233U in a 232Th matrix). For highly enriched fuels used in marine reactors and research reactors, the isotope inventory will vary based on in-core fuel management and reactor operating conditions. Spent fuel decay heat When a nuclear reactor has been shut down and the nuclear fission chain reaction has ceased, a significant amount of heat will still be produced in the fuel due to the beta decay of fission products. For this reason, at the moment of reactor shutdown, decay heat will be about 7% of the previous core power if the reactor has had a long and steady power history. About 1 hour after shutdown, the decay heat will be about 1.5% of the previous core power. After a day, the decay heat falls to 0.4%, and after a week it will be 0.2%. The decay heat production rate will continue to slowly decrease over time. Spent fuel that has been removed from a reactor is ordinarily stored in a water-filled spent fuel pool for a year or more (in some sites 10 to 20 years) in order to cool it and provide shielding from its radioactivity. Practical spent fuel pool designs generally do not rely on passive cooling but rather require that the water be actively pumped through heat exchangers. 
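As a rough cross-check of the decay-heat fractions quoted above, a commonly used engineering estimate, the Way-Wigner approximation, can be sketched in Python. The formula and the assumed one-year operating history are illustrative additions, not part of the article, and the approximation is only good to within a few tens of percent.

```python
def decay_heat_fraction(t_after_shutdown_s: float, t_operating_s: float) -> float:
    """Way-Wigner approximation: decay heat as a fraction of the prior core power.

    P/P0 = 0.066 * (tau**-0.2 - (tau + T)**-0.2), with tau (time since shutdown)
    and T (operating time at power) both in seconds. Rough estimate only.
    """
    tau, big_t = t_after_shutdown_s, t_operating_s
    return 0.066 * (tau ** -0.2 - (tau + big_t) ** -0.2)

ONE_YEAR = 365.0 * 24 * 3600  # hypothetical long, steady operating history
for label, t in [("1 hour", 3600.0), ("1 day", 86400.0), ("1 week", 7 * 86400.0)]:
    frac = decay_heat_fraction(t, ONE_YEAR)
    print(f"{label} after shutdown: about {100 * frac:.2f}% of prior core power")
```

With these inputs the estimate gives on the order of 1% after an hour, 0.5% after a day and 0.25% after a week, the same order of magnitude as the figures quoted above, and it illustrates why pools must be actively cooled for years after discharge.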
If there is a prolonged interruption of active cooling due to emergency situations, the water in the spent fuel pools may therefore boil off, possibly resulting in radioactive elements being released into the atmosphere. Fuel composition and long term radioactivity The use of different fuels in nuclear reactors results in different SNF composition, with varying activity curves. Long-lived radioactive waste from the back end of the fuel cycle is especially relevant when designing a complete waste management plan for SNF. When looking at long-term radioactive decay, the actinides in the SNF have a significant influence due to their characteristically long half-lives. Depending on what a nuclear reactor is fueled with, the actinide composition in the SNF will be different. An example of this effect is the use of nuclear fuels with thorium. Th-232 is a fertile material that can undergo a neutron capture reaction and two beta minus decays, resulting in the production of fissile U-233. Its radioactive decay will strongly influence the long-term activity curve of the SNF around a million years. A comparison of the activity associated to U-233 for three different SNF types can be seen in the figure on the top right. The burnt fuels are Thorium with Reactor-Grade Plutonium (RGPu), Thorium with Weapons-Grade Plutonium (WGPu) and Mixed Oxide fuel (MOX, no thorium). For RGPu and WGPu, the initial amount of U-233 and its decay around a million years can be seen. This has an effect in the total activity curve of the three fuel types. The initial absence of U-233 and its daughter products in the MOX fuel results in a lower activity in region 3 of the figure on the bottom right, whereas for RGPu and WGPu the curve is maintained higher due to the presence of U-233 that has not fully decayed. Nuclear reprocessing can remove the actinides from the spent fuel so they can be used or destroyed (see Long-lived fission product#Actinides). Spent fuel corrosion Noble metal nanoparticles and hydrogen According to the work of corrosion electrochemist David W. Shoesmith, the nanoparticles of Mo-Tc-Ru-Pd have a strong effect on the corrosion of uranium dioxide fuel. For instance his work suggests that when hydrogen (H2) concentration is high (due to the anaerobic corrosion of the steel waste can), the oxidation of hydrogen at the nanoparticles will exert a protective effect on the uranium dioxide. This effect can be thought of as an example of protection by a sacrificial anode, where instead of a metal anode reacting and dissolving it is the hydrogen gas that is consumed. Storage, treatment, and disposal Spent nuclear fuel is stored either in spent fuel pools (SFPs) or in dry casks. In the United States, SFPs and casks containing spent fuel are located either directly on nuclear power plant sites or on Independent Spent Fuel Storage Installations (ISFSIs). ISFSIs can be adjacent to a nuclear power plant site, or may reside away-from-reactor (AFR ISFSI). The vast majority of ISFSIs store spent fuel in dry casks. The Morris Operation is currently the only ISFSI with a spent fuel pool in the United States. Nuclear reprocessing can separate spent fuel into various combinations of reprocessed uranium, plutonium, minor actinides, fission products, remnants of zirconium or steel cladding, activation products, and the reagents or solidifiers introduced in the reprocessing itself. 
If these constituent portions of spent fuel were reused, and if the additional wastes produced as a byproduct of reprocessing were limited, reprocessing could ultimately reduce the volume of waste that needs to be disposed of. Alternatively, the intact spent nuclear fuel can be directly disposed of as high-level radioactive waste. The United States has planned disposal in deep geological formations, such as the Yucca Mountain nuclear waste repository, where it has to be shielded and packaged to prevent its migration to humans' immediate environment for thousands of years. On March 5, 2009, however, Energy Secretary Steven Chu told a Senate hearing that "the Yucca Mountain site no longer was viewed as an option for storing reactor waste." Geological disposal has been approved in Finland, using the KBS-3 process. In Switzerland, the Federal Council approved in 2008 the plan for a deep geological repository for radioactive waste. Remediation Algae have shown selectivity for strontium in studies, whereas most plants used in bioremediation do not discriminate between calcium and strontium and often become saturated with calcium, which is present in greater quantities in nuclear waste. Strontium-90 is a radioactive byproduct of the reactors used in nuclear power and is a component of nuclear waste and spent nuclear fuel. Its half-life is long, around 30 years, and it is classified as high-level waste. Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (algae) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium by S. spinosus, suggesting that it may be suitable for treating nuclear wastewater. A study of the pond alga Closterium moniliferum using non-radioactive strontium found that varying the ratio of barium to strontium in water improved strontium selectivity. Risks Spent nuclear fuel remains a radiation hazard for extended periods of time, with constituent isotopes having half-lives as long as 24,000 years. For example, 10 years after removal from a reactor, the surface dose rate for a typical spent fuel assembly still exceeds 10,000 rem/hour—far greater than the fatal whole-body dose for humans of about 500 rem received all at once. There is debate over whether spent fuel stored in a pool is susceptible to incidents such as earthquakes or terrorist attacks that could potentially result in a release of radiation. In the rare occurrence of a fuel failure during normal operation, the primary coolant can enter the element. Visual techniques are normally used for the post-irradiation inspection of fuel bundles. Since the September 11 attacks the Nuclear Regulatory Commission has instituted a series of rules mandating that all fuel pools be impervious to natural disaster and terrorist attack. As a result, used fuel pools are encased in a steel liner and thick concrete, and are regularly inspected to ensure resilience to earthquakes, tornadoes, hurricanes, and seiches. See also Nuclear power Spent nuclear fuel shipping cask Nuclear meltdown References Nuclear fuels Nuclear reprocessing Radioactive waste
Spent nuclear fuel
[ "Chemistry", "Technology" ]
2,928
[ "Environmental impact of nuclear power", "Radioactive waste", "Hazardous waste", "Radioactivity" ]
4,094,667
https://en.wikipedia.org/wiki/List%20of%20plants%20in%20the%20Bible
This article lists plants referenced in the Bible, ordered alphabetically by English common/colloquial name. For plants whose identities are unconfirmed or debated the most probable species is listed first. Plants named in the Old Testament (Hebrew Bible or Tenakh) are given with their Hebrew name, while those mentioned in the New Testament are given with their Greek names. A B–E F–I J–M N–R S T–Z Notes See also Animals in the Bible Figs in the Bible References Sources Post, G.E. Bible Dictionary Contributions Zohary, Michael (1982) Plants of the Bible. New York: Cambridge University Press. External links Biblical Gardens Plants of the Bible, Missouri Botanical Garden Project "Bibelgarten im Karton" (biblical garden in a cardboard box) of a social and therapeutic horticultural group (handicapped persons) named "Flowerpower" from Germany List of biblical gardens in Europe Biblical Botanical Gardens Society USA Bible Plants
List of plants in the Bible
[ "Biology" ]
197
[ "Lists of biota", "Lists of plants", "Plants" ]
4,094,719
https://en.wikipedia.org/wiki/TEK%20search%20engine
TEK is an email-based search engine developed by the TEK project at the Massachusetts Institute of Technology. The search engine enables users to search the Web using only email. It is intended for people with poor internet connectivity (for example, expensive or low-bandwidth connections in developing countries). TEK stands for "Time Equals Knowledge", reflecting the trade-off it offers: users exchange the extra time taken by email round trips for access to Web search results. To perform a web search, a user sends a query via email to a server (which is located at MIT). The server then performs the search using existing search engines, downloads the actual pages, and emails a subset of those pages back to the user. References Levison L, Thies B, Amarasinghe S. The TEK Search Engine. Development by design workshop, MIT, Boston, MA. July 2001. http://tek.sourceforge.net/papers/tek-dyd01.pdf External links TEK Project Internet search engines
TEK search engine
[ "Technology" ]
208
[ "Computing stubs", "World Wide Web stubs" ]
4,094,864
https://en.wikipedia.org/wiki/Link%20Layer%20Discovery%20Protocol
The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices for advertising their identity, capabilities, and neighbors on a local area network based on IEEE 802 technology, principally wired Ethernet. The protocol is formally referred to by the IEEE as Station and Media Access Control Connectivity Discovery, specified in IEEE 802.1AB with additional support in IEEE 802.3 section 6 clause 79. LLDP performs functions similar to several proprietary protocols, such as Cisco Discovery Protocol, Foundry Discovery Protocol, Nortel Discovery Protocol and Link Layer Topology Discovery. Information gathered Information gathered with LLDP can be stored in the device management information base (MIB) and queried with the Simple Network Management Protocol (SNMP) as specified in . The topology of an LLDP-enabled network can be discovered by crawling the hosts and querying this database. Information that may be retrieved includes: System name and description Port name and description VLAN name IP management address System capabilities (switching, routing, etc.) MAC/PHY information MDI power Link aggregation Applications The Link Layer Discovery Protocol may be used as a component in network management and network monitoring applications. One such example is its use in data center bridging requirements. The Data Center Bridging Capabilities Exchange protocol (DCBX) is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of data center bridging features between neighbors to ensure consistent configuration across the network. LLDP is used to advertise power over Ethernet capabilities and requirements and to negotiate power delivery. Media endpoint discovery extension Media Endpoint Discovery is an enhancement of LLDP, known as LLDP-MED, that provides the following facilities: Auto-discovery of LAN policies (such as VLAN, Layer 2 Priority and Differentiated services (Diffserv) settings) enabling plug and play networking. Device location discovery to allow creation of location databases and, in the case of Voice over Internet Protocol (VoIP), Enhanced 911 services. Extended and automated power management of Power over Ethernet (PoE) end points. Inventory management, allowing network administrators to track their network devices, and determine their characteristics (manufacturer, software and hardware versions, serial or asset number). The LLDP-MED protocol extension was formally approved and published as the standard ANSI/TIA-1057 by the Telecommunications Industry Association (TIA) in April 2006. System Capability Codes Frame structure LLDP information is sent by devices from each of their interfaces at a fixed interval, in the form of an Ethernet frame. Each frame contains one LLDP Data Unit (LLDPDU). Each LLDPDU is a sequence of type–length–value (TLV) structures. The Ethernet frame used in LLDP typically has its destination MAC address set to a special multicast address that 802.1D-compliant bridges do not forward. Other multicast and unicast destination addresses are permitted. The EtherType field is set to 0x88cc. Each LLDP frame starts with the following mandatory TLVs: Chassis ID, Port ID, and Time-to-Live. The mandatory TLVs are followed by any number of optional TLVs. The frame optionally ends with a special TLV, named end of LLDPDU, in which both the type and length fields are 0. Accordingly, an Ethernet frame containing an LLDPDU consists of the Ethernet header followed by the mandatory TLVs, any optional TLVs, and the optional end-of-LLDPDU TLV. Each of the TLV components has the same basic structure: a 7-bit type field and a 9-bit length field packed into two octets, followed by a value field of 0 to 511 octets. Custom TLVs are supported via a TLV type 127. 
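To illustrate the TLV layout just described, the following is a minimal sketch in Python of how the two-octet TLV header and the three mandatory TLVs could be packed. It illustrates the frame structure only and is not a full LLDP implementation; the example field values (the MAC address, the interface name "eth0", and the 120-second TTL) are hypothetical.

```python
import struct

LLDP_ETHERTYPE = 0x88CC                    # EtherType used for LLDP frames
LLDP_MULTICAST = "01:80:c2:00:00:0e"       # nearest-bridge multicast address, not forwarded by 802.1D bridges

def tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack one TLV: 7-bit type and 9-bit length in two octets, then the value."""
    assert 0 <= tlv_type < 128 and len(value) < 512
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

# Mandatory TLVs at the start of every LLDPDU (example/hypothetical field values):
chassis_id = tlv(1, b"\x04" + bytes.fromhex("aabbccddeeff"))  # subtype 4 = MAC address
port_id    = tlv(2, b"\x05" + b"eth0")                        # subtype 5 = interface name
ttl        = tlv(3, struct.pack("!H", 120))                   # time-to-live in seconds
end        = tlv(0, b"")                                      # optional end-of-LLDPDU TLV

lldpdu = chassis_id + port_id + ttl + end
print(lldpdu.hex())
```

A complete frame would prepend the destination and source MAC addresses and the 0x88cc EtherType to the resulting LLDPDU before transmission.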
The value of a custom TLV starts with a 24-bit organizationally unique identifier (OUI) and a 1-byte organizationally specific subtype, followed by the organizationally defined data. An organizationally specific TLV therefore consists of the type 127 TLV header, the OUI, the subtype, and the organization-defined information string. According to IEEE Std 802.1AB, §9.6.1.3, "The Organizationally Unique Identifier shall contain the organization's OUI as defined in IEEE Std 802-2001." Each organization is responsible for managing its subtypes. Notes References External links Tutorial on the Link Layer Discovery Protocol on EE Times 802.1AB - Station and Media Access Control Connectivity Discovery on IEEE 802.1 Link Layer Discovery Protocol on The Wireshark Wiki Device discovery protocols Ethernet standards IEEE standards Link protocols Logical link control
Link Layer Discovery Protocol
[ "Technology" ]
866
[ "Computer standards", "IEEE standards" ]
9,253,629
https://en.wikipedia.org/wiki/International%20Medical%20Commission%20on%20Bhopal
The International Medical Commission on Bhopal (IMCB) was established in 1993 to organise medical responses to the 1984 Bhopal disaster (India). Background The immediate scientific and medical response to the 1984 Bhopal disaster constituted an extraordinary pulling together of hospitals, medical personnel and social services in the area. Coping with a disaster of this scale was unheard of anywhere in the world, and there was widespread admiration for those who responded, often risking their own lives in the process. However, when the long term after effects began to appear, it was obvious that the social and legal climate was inadequate since there was little experience in dealing with a major environmental disaster. Scientific and medical personnel needed access to accident-related and toxicologic information to understand the causes and potential consequences of the disaster. Union Carbide, the primary repository of this information, faced with lawsuits and the prospect of bankruptcy, closed down its channels of communication. On the other hand, the extreme sensitivities of the local and national government bodies towards all aspects of the disaster, coupled with the lack of expertise and funds, resulted in an inadequate response on India's part to meet the urgent health care and social recovery needs of the community. Whereas local health professionals and the interested scientific community abroad expected a flood of information from a disaster of this magnitude, only a trickle resulted. These transnational political and legal ramifications threw a veil of secrecy around the disaster and obstructed the discovery of vital medical and toxicologic information. The medical community was often frustrated in its attempts to understand the links between gas exposure and health and devise appropriate treatment strategies. As an example, ignorance about whether the main poison, methyl isocyanate, could decompose to deadly cyanide gas, led to years of acrimonious debate on the merits of treating the gas victims for cyanide poisoning. Recognising the dire need of the gas victims, the Permanent Peoples' Tribunal met in 1992 and recommended that an international medical commission provide an in-depth independent assessment of the situation in Bhopal. In 1993, the Bhopal Group for Information and Action (BGIA) made a proposal. The International Medical Commission on Bhopal (IMCB) was thus constituted with 14 professionals from 12 countries who were chosen on the basis of their medical expertise and experience in environmental health, toxicology, neurology, immunology, respiratory medicine and family medicine. Drs. Rosalie Bertell and Gianni Tognoni served as the co-chairpersons of the IMCB. At the request of Carbide gas victim organisations, the IMCB conducted a humanitarian visit to India in January 1994 to contribute in any way possible to the relief of the victims and to suggest ways to in which such catastrophic accidents could be prevented in the future or their effects mitigated. During their stay, the IMCB met with government officials, various disaster experts, hospitals, research teams, local private physicians, biochemists, botanists, various survivor groups, environmental activists and veterinarians. 
Goals The main goals of the IMCB were: Betterment of the lives of the victims with rational diagnostic methods and treatment Clarification of the place and form of international medical assistance and documentation after a catastrophic accident Recommending legislation to protect humans from military and industrial pollution Mobilisation of international assistance in response to the request of survivors rather than waiting for government invitation. Provide guidelines for planning health research on the impact of major accidents Establishment of a precedent for international protection for medical research against interference from conflicted corporations or governments Legitimisation of the voices of survivor organisations and their participation in relevant decisions Promoting ethical and scientific standards for information collection and communication to victims Coordination of medical, research, and legal information to assist victims in claims Alerting the Government of India to the need for full disclosure of potential hazards and environmental impact studies prior to allowing any hazardous industry to set up in India The commissioners divided their work in various groups: Community & clinical studies: survey of the population followed by clinical testing of selected groups Assessment of availability and quality of medical care, including level of medical resources available. Examination of the adequacy and equity of laws and regulations relating to claims and the distribution of compensation; Evaluation of drug therapy by examination of prescriptions routinely given to survivors; Accident analysis; Review of studies and published literature on the disaster. The IMCB committed itself to a) provide a full report of its findings and recommendations to the Governments of India and Madhya Pradesh, victims' organisations, and all other interested parties; b) stand ready to assist the government of India and medical colleagues to implement the recommendations of the commission; c) enlist the National Advisory Committee to follow up the initiatives of the commission; d) recommend research studies to be undertaken in India on the long-term effects of the gas exposure, and e) assure the wide circulation of its experience and findings in the professional literature. Findings Union Carbide The IMCB publicly condemned Union Carbide and reiterated the company's full liability not only for responsibility in causing the deadly gas leak, but also for the confounding role of its behaviour with respect to pre-accident preventive and exposure mitigating efforts, and the timely and effective application of the appropriate medical measures at the time of the accident. This included the lack of transparency about the composition of the gases released, resulting in the absence of rational methods of care and planning resulting in loss of sight and in some cases life, and creation of suspicion and conflict among professionals and the population. There was also a lack of emergency preparation which would have made the public and professionals aware of the potential toxins inside the plant and how to respond to an accident. Indian government The government of India also was faulted since no clear guidelines were laid down to determine compensation to the victims resulting in undue delays and aggravation of their health status and/or economic survival. 
The secrecy surrounding the health studies undertaken by the Indian Council of Medical Research (ICMR) may initially have been instituted to protect the litigation process, but in reality made the rational medical treatment and establishment of claims almost impossible. In hindsight, it is clear that the secrecy served no purpose whatsoever and has resulted in non-publication of the information. Moreover, because of the secrecy about the accident itself and the chemicals released, it was difficult for the survivors to document their claims. The commission also noted an excessive fear among government personnel of bogus claims. In fulfilling its commitment, results of the community studies conducted by the IMCB have been communicated to the affected population in the form of public meetings, which provided a forum for the victims to ask questions and provide comments. The studies have also been published in various national and international journals so that the scientific community has access to this information. Recommendations of the IMCB The IMCB made the following recommendations: Reorganisation of the health system to establish a network of community-based primary care clinics; The gas-related disease categories need to be broadened to include central nervous system and psychological (PTD) injury; A conference to determine best practice rehabilitation medicine, including both Western and Indian expertise, must be undertaken to develop rational treatments and prescription drugs for survivors. Health data collected by the ICMR should be communicated to the population and submitted for publication in professional journals. Gas victims to have the right of access to their medical records; Victim organisations should be adequately represented in the national and state commissions dealing with the disaster; Criteria for compensation should include medical, economic and social damage to the victims Allocation of resources for economic and social rehabilitation of people and their communities should be made. Thorough examination of the impact of the toxic waste buried on the Union Carbide site and its potential for further damage to public health needs to be researched. Long-term effects It is now well known that persistent and chronic gas-related health effects are present in the Bhopal population. However, the full spectrum of effects is yet to be defined, especially in those exposed as children or in utero, and as manifested in survivor reproductive health. There has been a lack of systematic collection of relevant information in these reproductive effects, and also with respect to cancer development or other chronic illnesses as sequelae of the gas exposure. Recent investigations have shown that local well water has become contaminated by the improper storage of a large amount of hazardous waste in the facility, or on its grounds. This toxic waste is especially hazardous to those still suffering the effects of direct exposure to the gas. As of 2007, the prospects for learning the sequelae of this disaster do not appear to be bright. What is sorely needed is an independent body to coordinate the health care, research, rehabilitation of gas victims, and care for potential effects in their offspring. Instead of the non-directive symptomatic medical treatment that currently exists, clear guidelines and criteria need to be formulated for specific medical conditions such as damage to bronchial tubes, sleep apnea, neuron destruction, etc. 
Such an effort could be implemented through India's existing health care pyramid. Community-level health units should be developed to serve a maximum of 5000 people each. Local hospitals with multiple departments can be used to provide secondary care. A specialised medical centre dedicated to treatment and research of the more serious problems arising from the gas leak should be established. The IMCB believes it is a mistake to simply increase the number of hospital beds in Bhopal. The community has need for more neighbourhood clinics, non-drug respiratory therapy, clean air and water, and sheltered workshops, not for more hospital beds. Need for long-term monitoring The IMCB has recommended that long-term monitoring of the community for illness and response to treatment be done for several decades. This would include the study of exposed and unexposed areas to observe patterns of illness and death as well as to detect the occurrence of related chronic diseases and the appearance of new diseases. Such an approach needs to be one in which the health professionals involve the community of gas victims as active partners in investigation, provide them with feedback on community health, ensure that their health risks are properly communicated, and thereby enabling an increase in their consciousness, autonomy and self-determination. Dhara and Acquilla critiqued aspects of post-disaster epidemiology which served as obstacles to the conduct of scientific and valid epidemiological investigation. The original cohort of registered by the Indian Council of Medical research (ICMR) was chosen on the basis of health effects rather than any true measure of gas exposure. Instead, the cohort of 96,000 was selected based on severe, moderate, and mildly affected areas based on death rates. Prior versions of the technical report characterised the area as 'affected' but later versions contained confusing and contradictory terms such as 'exposed and affected', 'exposed but unaffected' and the term 'affected' was used interchangeably with 'exposed'. In epidemiological studies, it is well known that not all subjects in an exposed area are affected. As early as 1987, a dispersion model was available which delineated the exposed areas but was never used. The selected cohort ratio was heavily skewed toward the severely affected area (75%) and such a selection would have introduced bias in the results and an incomplete understanding of the health effects in the population. The non-random cluster of deaths sample selection approach instead of randomised selection using a sampling frame had the potential for interviewer bias due to prior knowledge of potential health effects. Persons migrating out were excluded rather than treated as lost to follow-up thereby shrinking the sample size available for analysis. Operational problems with such a large cohort included inadequate staffing and equipment – only 20 research assistants were available for monitoring the 96,000 person cohortiv and we estimate that one research assistants would have the herculean task of interviewing 40 families daily. The six monthly morbidity and mortality prevalence data has not been consistently published since the cohort was formed. There may be some internal reports but these are not available to the wider scientific community or even the general public. Timely publication of epidemiological data is vital to understanding the spectrum of gas-related disease and provision of health services. 
ICMR's first comprehensive reports appeared more than twenty years after the disaster, thus rendering them merely an academic exercise. The Bhopal medical community was faced with 1) the urgent health care needs of the affected community, 2) the non-availability of toxicological and accident-related information, 3) the extreme sensitivity of local and national government bodies toward all aspects of the disaster, 4) lack of expertise, and 5) the lack of funds available to independent researchers to conduct investigations. Faced with lawsuits and the prospect of bankruptcy, Union Carbide's efforts to keep open channels of communication were highly inadequate to address these issues and were considered by many to be a major human rights violation. In addition, the transnational political and legal ramifications served to throw a veil of secrecy around the disaster, thus impeding the discovery of essential pieces of information. Medical, toxicological, and accident-analysis data were not made public, thereby frustrating the efforts of the medical community to understand the linkage between exposures and health effects and devise appropriate treatment strategies. As an example, the lack of information about whether MIC could thermally decompose to hydrocyanic acid led to years of contentious debate on the merits of treating the gas victims for cyanide poisoning and an unfortunate violation of patient confidentiality. Koplan et al. indicated that post-disaster epidemiologic studies should accurately estimate exposure to enable correct dose-response relationship modeling. These data are needed for a) identifying ill and exposed persons, b) determining long-term effects, and c) linking exposure and effects for use in litigation and to determine compensation. In the absence of the above modeling, studies on Bhopal victims will suffer from the limitation that the link between exposure and health effects cannot be easily made. Working with other agencies Recognising that Bhopal is a tragic model of an industrial epidemic, members of IMCB have expressed willingness to organise international teams when requested, to provide technical assistance and evaluation of other environmental disasters. Rather than the provision of emergency relief functions, for which there are other organisations such as Medecins sans Frontieres and the Red Cross/Red Crescent, the IMCB envisioned three levels: response to communities who appeal on the basis of chronic disability due to a disaster, after its acute phase is over; represent victims at the international level, for example, the World Health Agency, to recommend legislative changes required to implement the International Bill of Rights relevant to health and safety, and working to define the appropriate public health investigations to serve the needs of the injured community rather than use the victim community to merely serve the needs of science. The International Bill of Rights includes: The Universal Declaration of Human Rights, proclaimed on 10 Dec 1948; The International Covenant on Economic, Social, and Cultural Rights (1976), and the International Covenant on Civil and Political Rights, 1976. 
The steps to be taken to achieve the full realisation of this right shall include: provision for the reduction of infant deaths and for healthy development of the child; improvement of all aspects of environmental and industrial hygiene; prevention, treatment, and control of epidemic, endemic, occupational and other diseases; creation of conditions which would assure to all people medical service and medical attention in the event of sickness, assuring the victims a living, work and social environment conducive to healing of its injuries. To protect these rights, an international body, free of industry and government pressures, and competent to advise on health and safety standards, is required to be able to mediate just and equitable resolution and compensation of damage in the case of unanticipated disasters. Members of the IMCB Rosalie Bertell (Canada), Gianni Tognoni (Italy) Thomas Callendar (USA) Jerry Havens (USA) V. Ramana Dhara (USA) Birger Heinzow (Germany) Marinus Verweij (Netherlands) Sushma Acquilla (UK) Paul Cullinan (UK) Wang Zhengang (China) Jerzy Jaskowski (Poland) Leonid Titov (Belarus) Ingrid Eckerman (Sweden), C. Sathyamala (India/UK) Carbide gas victims' organisations which worked with IMCB Bhopal Gas Peedit Mahila Stationery Karmachari Sangh Bhopal Gas Peedit Mahila Udyog Sanghatana Bhopal Group for Information & Action Nirashvrit Pension Bhogi Karmachari Sangh Zahreeli Gas Kand Sangharsh Morcha Bhopal Gas Peedith Sangharsh Sahayog Samiti Continuing Bhopal-type disasters in India Data from the National Disaster Management Agency (NDMA) show that 130 significant chemical accidents occurred in the last decade. The accidents resulted in 259 deaths and 563 major injuries. Almost 38 years after Bhopal, accidents continue to occur in the small, medium, and large-scale industries in the public, private, and transnational corporation sectors. Numerous environmental laws, and state and national disaster management agencies were created, with thousands of people being trained in environmental protection. The recurring accidents, though, indicate that industrial safety and protection of the occupational health of its labor force is still wanting. To correct the imbalance of power between citizens and corporations, the IMCB has recommended that vital stakeholders like citizen bodies be represented in the local, state, and national agencies dealing with disasters. Victims should be compensated for medical, economic, and social harm, and communities should be rehabilitated from the damage caused. Further reading Dhara VR. Findings of the International Medical Commission on Bhopal. The Hindu – Survey of Environment; 2003. References Publications of the IMCB members Multiple articles in International Perspectives in Public Health; 1996; Vols 11&12. International Institute of Concern for Public Health. Dhara V. Ramana. The Bhopal Gas Leak: Lessons from studying the impact of a disaster in a developing nation. Doctoral thesis-Univ. of Massachusetts Lowell. 2000. Dhara V.R, Gassert TH. The Bhopal Syndrome:persistent questions on toxicity & management. IJOEH 2002;8:380–386 Dhara VR & Acquilla S. Correspondence on 2012 Bhopal lung function study in Indian Journal Medical Research. Dec 2012. Response by De, S. http://www.icmr.nic.in/ijmr/2012/december/Correspondence.pdf Dhara VR & Acquilla S. Further observations on post-disaster epidemiology in Indian Journal Medical Research. Aug 2013. http://icmr.nic.in/ijmr/2013/august/0816.pdf Dhara VR. Six Blind Men & The Elephant: Healing Bhopal. 
TEDMED @ Centers for Disease Control & Prevention, Atlanta, GA, USA; Sep 2013. https://www.youtube.com/watch?v=SQ4Qbx8czfc Mfsrep.Int:4,1995 MPH 2001:24 Publications of other authors Bhopal Group for Information and Action (BGIA). Letter July 16, 1993 Health Effects of the toxic gas leak from the Union Carbide methyl isocyanate plant in Bhopal. Indian Council of Medical Research. Technical report on population based long term epidemiological studies (1996-2010);2013. https://webdrive.service.emory.edu/users/vdhara/www.BhopalPublications/Health%20Effects%20&%20Epidemiology/Bhopal%20ICMR%202013%20tech%20report%201996-2010.pdf Media Dhara VR. Six Blind Men & The Elephant: Healing Bhopal. TEDMED @ Centers for Disease Control & Prevention, Atlanta, GA, USA; Sep 2013. https://www.youtube.com/watch?v=SQ4Qbx8czfc Bhopal disaster 1984 industrial disasters International medical and health organizations Organizations established in 1993 Toxicology organizations
International Medical Commission on Bhopal
[ "Environmental_science" ]
4,135
[ "Toxicology organizations", "Toxicology" ]
9,254,545
https://en.wikipedia.org/wiki/38%20Lyncis
38 Lyncis is a multiple star system in the northern constellation of Lynx. It is located about 125 light-years from the Sun, based on parallax. When viewed through a moderate telescope, two components—a brighter blue-white star of magnitude 3.9 and a fainter star of magnitude 6.1 that has been described as lilac as well as blue-white—can be seen. The pair have an angular separation of and an estimated period of . The fainter component is itself a close binary which can only be resolved using speckle interferometry. The two were separated by in 1993 and in 2008, and have an estimated orbital period of . A further faint star, component E away, is a proper-motion companion. Two other faint companions, listed in multiple star catalogues as components C and D, are unrelated background objects. 38 Lyncis was given as a standard star for the spectral class A3 V when the Morgan-Keenan classification system was first defined in 1943, apparently for the two components combined. The primary star, component A, is a class A main-sequence star around twice the mass of the Sun. An effective temperature of and a radius of mean that it is over thirty times more luminous than the Sun. It has been listed as a λ Boötis star, although it is no longer considered to be a member. The fainter of the pair, component B, has been given a spectral class of A4V, although it consists of two very close stars. Their properties are poorly known; even the difference in their apparent magnitudes can only be estimated, at approximately 2. Based on this, their masses are estimated to be and respectively. Component E is a 15th-magnitude star, a red dwarf with an approximate spectral type of M2, an estimated mass of , and a temperature of . References A-type main-sequence stars 4 Lynx (constellation) Durchmusterung objects Lyncis, 38 080081 045688 3690 M-type main-sequence stars
38 Lyncis
[ "Astronomy" ]
432
[ "Lynx (constellation)", "Constellations" ]
9,254,714
https://en.wikipedia.org/wiki/Monkland%20and%20Kirkintilloch%20Railway
The Monkland and Kirkintilloch Railway was an early mineral railway running from a colliery at Monklands to the Forth and Clyde Canal at Kirkintilloch, Scotland. It was the first railway to use a rail ferry, the first public railway in Scotland, and the first in Scotland to use locomotive power successfully, and it had a great influence on the successful development of the Lanarkshire iron industry. It opened in 1826. It was built to enable the cheaper transport of coal to market, breaking the monopoly of the Monkland Canal. It connected with the Forth and Clyde Canal at Kirkintilloch, giving onward access not only to Glasgow, but to Edinburgh as well. The development of good ironstone deposits in the Coatbridge area made the railway successful, and the ironstone pits depended at first on the railway. Horse traction was used at first, but steam locomotive operation was later introduced: the first successful such use in Scotland. Passengers were later carried, and briefly the M&KR formed a section of the principal passenger route between Edinburgh and Glasgow. In 1848 the company merged with two adjoining railway lines to become the Monkland Railways; which in turn were absorbed by the Edinburgh and Glasgow Railway. A short length of the original route remains in use in the Coatbridge area. Formation of the railway In the first decades of the 19th century, the City of Glasgow had a large and increasing requirement for coal, for domestic and industrial use, and after the cessation of coal extraction from local pits, this was chiefly supplied from the Lanarkshire coal field, centred near Airdrie, in Monkland. There was also some extraction of iron ore in the area. The Monkland Canal had opened in 1794, and provided a considerable stimulus to the coalpits in Monkland, and early iron workings were encouraged also. However, before the era of a proper road network, the canal had a virtual monopoly of transport, and it set its prices accordingly; so successful was its exploitation of the situation that it "for many years yielded a dividend of Cent. per cent ... arising solely on its tolls on coal". A group of interested businessmen promoted the Monkland & Kirkintilloch Railway to link the coal pits and iron works to the Forth and Clyde Canal at Kirkintilloch. If coal and minerals were transshipped there, they could reach not only Glasgow, escaping the monopoly of the Monkland Canal but also Edinburgh. The scheme obtained authority on 17 May 1824 by the (5 Geo. 4. c. xlix), with share capital of £32,000 and powers to raise a further £10,000 by additional shares or by borrowing. Construction started by contract the following month. The engineer for the scheme was Thomas Grainger, in his first large undertaking; he had previously been chiefly engaged in road schemes. When he became engaged on the construction of the railway, he took as his assistant John Miller, and a year later the two men formed a partnership, Grainger & Miller, which was to be heavily involved in Scottish railway schemes. It was built to the track gauge of , and as other 'coal railways' opened up in the area in connection with the line, this track gauge became established for their use. It is not known why Grainger chose this gauge. He must have been aware of the huge success of the Stockton and Darlington Railway, built to a gauge of 4 ft 8½ in. 
The convention for specifying gauge had not settled down at this early date; as late as 1845 Captain Coddington of the Railway Inspectorate was describing another railway and wrote: "Gauge of rails 4 ft 8½ in from centre of rail to centre of rail, and 4 feet 6 inches from inside to inside of rail." and it is not impossible that Grainger intended to imitate the Stockton line but mistook the parameter. Whatever the reason, he inadvertently caused huge disadvantage to the M&KR and several other coal railways in Central Scotland. Opening The line is quoted to have opened in October 1826, but the section north of Gartsherrie at least must have opened during May 1826, although the earliest "opening" may have been for trial runs only. On 1 June 1826 a coal merchant, James Shillinglaw advertised coal from Gartsherrie for sale in Edinburgh: At first the railway did not own wagons or horses to pull them, and independent hauliers operated over the line, paying the company a toll for the privilege. It was only from 1835 that the company started to acquire its own fleet of wagons. The M&KR operated successfully from its opening: revenue was £704 in 1826, £2,020 in 1827, and £2,837 in 1828. The price of coal in Glasgow fell markedly, as much due to the weakening of the cartel previously in force. However the bulk of coal arrived in Glasgow by the Canal—about 89% in 1830. Route George Buchanan, writing in 1832, described the route: The railway commences at Palacecraig and Cairnhill Collieries about a mile south-west of Airdrie, and nine or ten miles east of Glasgow in a straight line. From this point it runs about a mile westwards, passing close to the north of the [William Dixon's] Calder Iron Works; it then turns to the north-west, and about half a mile farther on, crosses the Edinburgh and Glasgow road by Airdrie, at the same point this road crosses the Monkland Canal by the Coat Bridge. This is nine miles from the Cross at Glasgow, and two miles west of Airdrie. From this point the line continues nearly due north for a quarter of a mile, and here a small branch goes off eastwards about three-fourths of a mile, to the Colliery of Kipps [near Archibald Frew's Kippsbyre Colliery]. The line then advances northwards for about a mile, passing to the east of Gartsherrie Coal and Iron Works, and on to Gargill Colliery, where it turns nearly north for two miles; and turning again to the north-westwards with several turns till it reaches the canal opposite Kirkintilloch. The whole length from Cairnhill Bridge to the Forth and Clyde Canal is ten miles and the fall about 127 feet. In some parts it is quite level, and in others runs with a gentle but variable declination. ... It was originally laid with a single line of rails and passing places. Ground was taken, however, to lay a double line of way and this has since been gradually carrying into effect. The expense of the original line was about L. 3700 (i.e. £3,700) a mile. Priestley said that the fall from Palacecraig to the canal was , and from Kipps Colliery to the canal of . Gradients were moderate, with the steepest on the main line being 1 in 120 to the east of Bedlay, and 1 in 80 on the Kipps branch. The line crossed Main Street and Bank Street on the level at what is now the roundabout for Sunnyside Street, a little to the east of the later high level line, now which crosses Bank Street on a bridge. The canal passes under the road at this point. 
The level crossing at this important road junction was eliminated when the high level lattice girder bridge was built in 1872. Grainger had been instructed "to fit the road for locomotive engines", although the railway passed under the Cumbernauld Road near Bedlay in a low tunnel, with only headroom. Birkinshaw rails The M&KR used stone block sleepers with Birkinshaw's patent malleable iron rails. At this early date the technology of rail configuration had not matured, but John Birkinshaw had secured a patent in 1820 for a T-section fish-bellied edge rail of malleable iron. His patent specified that they should be formed by passing through rollers—as they were fish-bellied, presumably only the head was shaped in the rollers. Cast iron, as used until then, is brittle and ill-suited to heavy railway use; malleable iron is heat treated after casting and is able to withstand shocks. The Stockton and Darlington Railway was the first public railway to adopt Birkinshaw's rail; the Monkland and Kirkintilloch Railway was either the second or the third public railway. Other railways, friendly and competing Ballochney Railway As the first railway in a rapidly developing industrial area, the M&KR soon found that industry was springing up just beyond its reach. While some short branches and extensions were built (see below), other railways took the challenge and connected the new works. First to follow the M&KR was the Ballochney Railway, opened in 1828, and running eastwards from the end of the Kipps branch to "that part of the Monkland Coal Field to the North and East of Airdrie". The Ballochney was dependent on the M&KR for onwards conveyance of the minerals, and relations were friendly. Garnkirk and Glasgow Railway The success of mineral railways throughout Great Britain was apparent, and before the M&KR was opened, businessmen in Glasgow were proposing a direct railway: after all, transport over the M&KR involved transshipment to canal at Kirkintilloch, and was by no means direct. Support for the idea quickly gained strength, and the direct line, to be called the Garnkirk and Glasgow Railway (G&GR) gained Parliamentary authority in May 1826. At first it was to diverge from the M&KR near Bedlay and run more or less directly to Townhead, but its proprietors had second thoughts and changed the point of junction to Gartsherrie Bridge. The G&GR opened in 1831, and at first relations with the M&KR were friendly; the G&GR was dependent on its mineral traffic originating on the M&KR. From the M&KR point of view, they had a wharf on the deep water Forth and Clyde Canal, reached by seagoing vessels, whereas the G&GR was to terminate on the shallow "cut of junction" (the connection at Townhead between the Monkland Canal and the Forth and Clyde Canal). As technology and trade developed this relationship changed; the M&KR was a feeder railway, dependent on canals, and the G&GR, for onward conveyance; its locomotives were technically less advanced; and the G&GR seemed to flirt with extensions and alliances that threatened to cut the M&KR out. Gradually the G&GR became more of a competitor and less of an ally. Slamannan Railway In 1840 the Slamannan Railway opened between a point on the Ballochney Railway at Arbuckle and a wharf on the Union Canal at Causewayend. While the promoters suggested that traffic would arise from coal pits actually on their line of route, the obvious objective was to convey Monklands coal to Edinburgh direct, by-passing the Forth and Clyde Canal and much of the transit over the M&KR. 
However a long route over unpopulated terrain, ending in a rope-worked incline and a transshipment to the canal, seriously limited its potential. The opening of the Slamannan line did give rise to a faster passenger journey from Edinburgh to Glasgow, by canal from Edinburgh to Causewayend, and then successively by the Slamannan, Ballochney, Monkland and Kirkintilloch and Garnkirk & Glasgow Railways; the journey took four hours. Passing through remote moorland with few mineral deposits actually being worked the Slamannan line was never a success, and the opening of the better engineered Edinburgh and Glasgow Railway in 1842 dealt it a near-fatal blow. Industrial development Technical development in iron production had a massive influence in the Coatbridge area. Iron ores had been extracted in the area since the beginning of the century, but David Mushet discovered blackband ironstone which had a much richer iron content coupled with carboniferous material and in 1828 James Beaumont Neilson invented the hot blast process of iron smelting. In the third and fourth decades of the nineteenth century the iron industry expanded hugely in the Coatbridge area. There were 17 blast furnaces in 1826 and 53 in 1843. The hot blast process consumed large quantities of local coal; the processes previously in use had required coke, for the production of which the local coals were unsuitable. This encouraged further coal production, as well as ironstone extraction. The smelting process also required limestone, conveyed at first by horse and cart from the Cumbernauld area; and fireclay, available in the Gartsherrie and Garnkirk areas, for manufacturing refractory bricks for lining the kilns and withstanding high temperatures. The M&KR found itself straddling the centre of the iron smelting industry, but aligned and engineered for carrying coal to Kirkintilloch, and not connected to the developing ironstone and coal pits. This generated huge potential, but also considerable challenges as the needs of the dominant industries developed. Other local mineral railways were constructed to access pits and works, and as the M&KR was located at the centre of the iron industry, they worked in collaboration with it, and adopted the same track gauge. As inter-city railways developed elsewhere, they adopted what had become the standard gauge of , and they quickly became the dominant transport medium. The Edinburgh and Glasgow Railway opened in 1842. The local lines in Monkland could not transfer their wagons to those other lines and, operating with horses and technically primitive locomotives, on stone block sleepered track, they found themselves at an enormous disadvantage. Growth and development It soon became clear to the promoters of the railway that much work had to be done on the line after opening, due to the heavy use of the line. In February 1830 it was reported that of track had been doubled, and that a further would be doubled during the subsequent Spring, and the decision was taken to raise the additional £10,000 of share capital, authorised in the 1824 act of Parliament. Thomas and Paterson imply that this work was "making the line fit to receive" locomotives. The first locomotive ran from 1831 (see below). In 1833 the M&KR was seeking parliamentary authority for two new branches, and additional capital, the additional capital from 1830 having been used up. Operating methods were revealed when Charles Tennant used the hearing to press for an altered method of working. 
The M&KR was operating between The Howes (Coatbridge) and Gargill with locomotives; the G&GR was permitting hauliers to operate with horse traction, continuing on to the M&KR. The M&KR had been operating this section as two single lines, one for steam traction and one for horses; the M&KR said that the drivers of horses had been "taking off their horses [i.e. coasting downhill] and allowing their waggons to come in contact and collision with the steam carriages". Tennent demanded conversion to ordinary double track working, and there was much manoeuvring in the parliamentary stages. Finally the M&KR got its way, retaining segregation of horse and locomotive haulage, and the M&KR got its act of Parliament, the (3 & 4 Will. 4. c. cxiv), on 24 July 1833. The additional capital authorised was £20,000 and this was obtained as a bank loan. In 1834 another connection was made at Whifflet, when the Wishaw and Coltness Railway made a junction there, bringing pits at Coltness into the M&KR network. In 1834-5 a basin was constructed by the Forth and Clyde Canal at Kirkintilloch on M&KR land; originally the transshipment point had been a simple canalside wharf. The new basin was opened on 28 February 1835. In 1835 the Forth and Clyde Canal acquired a 14-ton iron boat equipped with rails and turntables to carry railway wagons. The plan was to load wagons from the M&KR for onward conveyance to any point on the Canal; as well as factory sidings this apparently included transfer to seagoing vessels at Grangemouth, and possibly Bowling. At small locations, individual wagons were probably manoeuvred onto hard standing, not necessarily to siding tracks, and the arrangement avoided two transshipments of the material carried. In December 1835 the M&KR expended £81 for new wagons and for cutting rails, i.e. making the approach to the loading point at Kirkintilloch. In 1836 the "coal waggon boat" earned £540. A branch was opened in 1837 from Whifflat Junction (the present spelling is Whifflet) to Rosehall, passing through a short tunnel. There was a colliery there, and several tramways were built to connect pits in the area to the M&KR. The line was leased for 30 years from Whit Sunday 1838 to Addie and Millar [or Miller] and worked by them. In 1839 the company secured authority in the (2 & 3 Vict. c. lxx) for a substantial increase in its capital, to £124,000, "for the purpose of re-laying the line with heavy rails, and otherwise providing for the augmented traffic". In July 1843 further lines were authorised by the (6 & 7 Vict. c. lxxix), with capital further increased to £210,000. In 1842 the M&KR responded to the continuing growth in traffic by acquiring five new locomotives and tenders and 232 new wagons. In 1846 the alignment around Sunnyside Junction at Coatbridge was altered. Gartsherrie Iron Works had been contained between the Monkland Canal and the railway, and the line was shifted eastwards, close to Sunnyside Street, to enable the ironworks to be expanded. Locomotives When the line opened, the motive power was horses, owned by independent hauliers. However the technical developments achieved on the Liverpool and Manchester Railway were noted, and it had been announced that the Garnkirk and Glasgow Railway would use locomotive power. The M&K company decided to purchase a locomotive; it was designed by George Dodds, the company's own superintendent, and it was constructed by Murdoch, Aitken and Company of Glasgow. Locomotive no. 
1 (as it was designated) was delivered on 10 May 1831: It was taken from the workshop, Hill Street, on Tuesday morning, and being started on the railway below Chryston, it passed several miles along the railway, sometimes going at the rate of , although the company's engines are not required to move, when loaded, at a greater speed than an hour. The locomotive was the first to operate successfully on a commercial basis in Scotland. The M&KR expended £5,925 on strengthening the track for locomotive operation. The same makers delivered No. 2 on 10 September 1831. These locomotives were of the "Killingworth" type, considered even at this date rather old-fashioned: Dodds had a conservative outlook and had specified this type in preference to the technically progressive English designs of Robert Stephenson. They had two vertical cylinders, and the pistons had piston rings; the boiler was long by diameter, with 62 copper tubes diameter; working pressure was . The wheels were in diameter. The locomotives were reported to have been very reliable. When the second locomotive was acquired, the two units operated on either side of Bedlay tunnel which had inadequate clearance; horses were used through the tunnel. In January 1832 through working was started, the line having been doubled, and the tunnel opened out. The location in question is at Bedlay, on a sharp curve immediately south of the Stirling Road, now the A80. A key factor in the ability to run locomotives at this early date was the use of Birkinshaw patent 'malleable' wrought iron rails, rolled by machine to lengths. These were strong enough to bear the weight of locomotives, unlike the plateways (such as the Kilmarnock and Troon Railway, where the first, unsuccessful attempt to operate locomotives in Scotland took place) or ordinary cast iron rails, which were brittle and prone to fracture under heavy unsprung loads. In 1837 the Company built a workshop for the locomotive purposes at Kipps. Locomotives Nos. 3 and 4 were made by the Company itself in 1834 and 1838 respectively. Locomotives named Zephyr, Atlas and Orion were operating in the 1840s. Passengers Early days A trial of a railway coach took place on 8 July 1828; it was reported in The Scotsman newspaper: Railway Coach. The first railway coach constructed in Scotland for the conveyance of passengers, made a trial journey in the neighbourhood of Airdrie on Tuesday. It is dragged by one horse, and is to ply on the Kirkintilloch Railway, in carrying passengers to boats on the canal. It is meant to carry 24 passengers, but started in high style with no less than forty, within and without.M E Quick, Railway Passenger Stations in England Scotland and Wales—A Chronology, The Railway and Canal Historical Society, 2002 Tomlinson infers that a regular passenger service started on that day, and later authors have followed him, but it is not certain that the trial immediately led to a regular run. If and when it did, it must have been from Leaend (on the Ballochney Railway) to Kirkintilloch. In any case, it seems to have been very short-lived. When the Garnkirk and Glasgow Railway opened its line, it operated a service between Leaend and its Townhead terminus in Glasgow, running over part of the Ballochney Railway and the M&KR Kipps branch, and calling at The Howes in Coatbridge. There were four trains each way daily. 
As the M&KR operated like a toll road at this time, there was nothing surprising about another business, or in this case another railway company, operating passenger trains over the M&KR line. At first, from 1 June 1831 this was a horse-drawn service, but a few weeks later the G&GR put a locomotive, called St Rollox on, running as far as Gartsherrie. The M&KR would not allow the locomotive over their line, so horse traction took over from there to Leaend. This service continued until 1843. Bradshaw's Guide shows, in a section headed Garnkirk and Glasgow Railway, passenger trains from Glasgow to Airdrie, &c, at 7½ and 10½ a.m., 1½ and 4½ p.m. Airdrie to Glasgow, &c, 8¾ and 11¾ a.m., 2¾ and 5¾ p.m. Fares, Glasgow and Airdrie 1s. 0d.-- 6d. In June 1831 there was a horse-drawn service from Calder Iron Works (on the M&KR) to Gartsherrie, connecting there with the Leaend to Glasgow service, and in the summer of 1832 the G&GR advertised a service from Cairnhill Bridge (near the Calder Iron Works) to Glasgow, and also from "Clarkston": the Clarkston Wester Moffat location on the Ballochney Railway, via Kipps: A Railway Carriage starts from Clarkston and Cairnhill Bridge every Wednesday at a quarter to 8 o'clock A.M. and returns with the evening train from the Railway Depot. Another advertisement dated 15 October 1832 announced that the Clarkston and Cairnhill carriages were "now discontinued". M&KR passenger services and stations Summarising passenger stations during M&KR days is difficult, and many sources are contradictory. In the earliest days horse drawn trains probably stopped wherever someone wanted to board or alight, without formal station premises. On 1 June 1831 a passenger service was run from Airdrie Leaend to the Howes (in Coatbridge) and on to Gargill. This was later extended to Glasgow Townhead over the Garnkirk and Glasgow Railway, operated in collaboration with that line. There was a short lived connecting service about this time from Calder Iron Works to Gartsherrie. In the summer of 1832 weekly services were advertised from Cairnhill Bridge and Clarkston to connect with the Airdire to Glasgow trains. From late 1839 there was service from Leaend to the Howes, Chryston and Kirkintilloch Basin, but this probably ceased in 1840 or soon after. There was a passenger service from Airdrie Hallcraig Street to Kirkintilloch (the exchange station on the E&GR) from 26 December 1844. There was a connecting service from Kirkintilloch to Kirkintilloch Basin until 23 March 1846. The main passenger service was suspended on 26 July 1847 while the track gauge was altered. On 28 July 1847 the service resumed, now running through to Glasgow Queen Street. On 1 December 1847 the service was diverted to Glasgow over the Garnkirk line from Gartsherrie, but on 10 December 1849 it was reverted to Queen Street. A connecting service was run southwards to Cairnhill Bridge, but this was shortened back to terminate at Whifflat (now called Whifflet) on 1 December 1850. The whole passenger service was suspended on 10 December 1851. A passenger service was reinstated from Glasgow Buchanan Street to Airdrie Hallcraig Street in August 1852. Taking the stations from north to south, they were as follows. 
Kirkintilloch Basin, open from 1828 at the northern terminus by the canal; closed as a passenger station by 23 March 1846; there was a nearby North British Railway station, opened in 1848, and which continued until 1964; Woodley; the Edinburgh and Glasgow Railway had a station called Kirkintilloch, by the modern Easter Garngaber Road; the M&KR opened Woodley station a little to the north on its line in 1844; connecting passengers would have had a 100 yards walk; but in the same year the M&KR constructed a spur to exchange sidings alongside the E&GR at Garngaber and there was passenger exchange there also; (the M&KR and the E&GR had different track gauges at this date); the station was also referred to as Kirkintilloch and also Kirkintilloch Junction; it closed on 26 July 1847; Bridgend, at Gartferry Road, opened about 1839 and closed 10 December 1851; Bedlay, opened 1849 (i.e. after the end of the M&KR's independent existence) and closed again the same year; Garnqueen, probably at Main Street, opened 10 December 1849; closed on 10 December 1851; Gartsherrie, at the junction with the G&GR, also known as Gargill; opened 1 June 1831; and closed 10 December 1851; adjacent Caledonian station continued 1940; The Howes: probably located at the site of Sunnyside Junction; it opened in 1831 and closed in 1851; also known as South End; Coatbridge: opened 10 December 1849; closed 10 December 1851, on the M&KR line alongside the present-day Coatbridge Central station (which is on the parallel GG&CR line); when the M&KR main line was rebuilt at a higher level in North British Railway days (to cross over Bank Street and eliminate the level crossing), a station at the same location was opened in 1871 on the new high level line; Whifflat; (nowadays spelt Whifflet); opened 10 December 1849; closed 10 December 1851; the NBR later (1871) opened a station at this location); Calder Iron Works; opened June 1831; soon closed; Cairnhill Bridge; opened summer 1832; closed by mid October 1832; reopened by Monklands Railway 10 December 1849; closed 1 January 1850. The location of the early Coatbridge "Howes" station is especially difficult to determine, and it is not shown on any available mapping; when the M&KR opened, Coatbridge was not yet an established community. At times it was called "The Howes", or just "Howes". Cobb places it at Kipps, but this must be wrong. Martin says that there was a stationary engine in 1836 for the accommodation of the traffic coming from the Wishaw Railway, and quotes accounts for the purchase of winding apparatus and ropes. This must have been to rope-haul trains coming from the south up to what became Sunnyside Junction, from the level crossing at Bank Street. There is a "Howes Basin" immediately south-west of the Sunnyside Junction location on the 1864 Ordnance Survey Map. As trains to Leaend stopped at The Howes, it must have been at the junction where the M&KR main line and the Kipps branch diverged. Later developments The opening of the Slamannan Railway in 1840 gave rise to a brief inter-city passenger traffic between Edinburgh and Glasgow, by the Union Canal to Causewayend, thence by the Slamannan line to Arbuckle, and from there by the Ballochney Railway to Kipps, the M&KR to Gartsherrie and the Garnkirk and Glasgow Railway to Townhead. The journey time was four hours or more. The Union Canal basin was not in the centre of Edinburgh, and there were three rope-worked inclines on the route. The stage coach journey time was similar. 
At first a success, the passenger traffic soon waned, and the opening of the Edinburgh and Glasgow Railway in 1842 put an end to it. None of the stations is referred to in the 1843 Bradshaw or the 1850 Bradshaw There does not seem to have been much attempt at passenger business in later years, nor on the Kirkintilloch main line. However, on and from 26 December 1844, four trains ran each way daily from the Hallcraig Street station at Airdrie (newly opened on the Ballochney line) to an exchange station at the point of intersection with the Edinburgh and Glasgow Railway, somewhat to the east of the present Lenzie station. This used a new spur to an exchange station; the two railways had different track gauges at this time. The Ballochney company purchased seven second-hand coaches from the Midland Railway for the service. Also in December 1844, a horse-drawn passenger conveyance ran from Kirkintilloch to the Bothlin Viaduct, at the point of intersection, (i.e. over the northern extremity of the M&KR) for connectional purposes, but this seems to have been short-lived. The M&KR altered its track gauge to standard on 26 and 27 July 1847, and on the following day the Airdrie service was able to run through to Glasgow (Queen Street) via Bishopbriggs; the journey now took 45 minutes. There must have been a lower scale of fares over the E&GR portion (presumably proportionate to fares charged over the G&GR to Glasgow, and undercutting the E&GR's own fares); in November the E&GR gave notice that it would charge the full rate for its portion of the journey. This meant that the fares were now equal to much longer journeys on the M&KR itself, so the M&KR transferred the train service back to the G&GR route from 1 December 1847. However they returned to the Bishopbriggs route from 10 December 1849. It ran until 10 December 1851, from when it was discontinued. Losing the competitive race At the time the Monkland and Kirkintilloch line was being built, there was a huge acceleration in the rate of technological change, and the pioneer lines—the coal railways—found themselves left behind by more advanced railways; the Edinburgh and Glasgow Railway (E&GR) opened its line, engineered for fast locomotive hauled trains, in 1842. The Monkland and Kirkintilloch line found itself left behind; it had a track gauge that prevented through working with the developing networks; to convey minerals to Glasgow, it relied on either transshipment to a canal or transfer to another line (the G&GR); to get to Edinburgh was even worse: access was over the Ballochney Railway, with two rope-worked inclined planes, and then the Slamannan line, with another rope-worked inclined plane, and transshipment at Causewayend to the Union Canal. This competitive disadvantage was equally keenly felt by the other coal railways with which the M&K collaborated—the Ballochney and Slamannan companies. They decided that their interests lay in collaboration, and they formed a joint working arrangement from 29 March 1845. In 1844 the M&KR had built a short spur to transshipment sidings with the E&GR at Garngaber, a little east of the present-day Lenzie station. The inconvenience of the transshipment emphasised the fact that their now non-standard track gauge prevented easy transfer of traffic to the developing railway network. Working together, they decided to change the track gauge to standard gauge; they got parliamentary authority, and effected the change together on 26 July and 27 July 1847. 
The Garnkirk and Glasgow had originally been one of the "coal railways", but its management was more progressive; at first a close ally, it gradually became a competitor. It changed its gauge to standard a few weeks after the M&KR, and it opened its new line by-passing the M&KR in 1845, becoming the Glasgow, Garnkirk and Coatbridge Railway (GG&CR), and allying itself more closely with the Wishaw and Coltness Railway (W&CR) to the south. At this time, promoters were forwarding the idea of a railway from Carlisle, connecting with the developing English network, and they needed a route from the Southern Uplands into Glasgow. This was neatly provided by the W&CR and the GG&CR, who now transformed from coal railways to elements of an intercity main line. The new Caledonian Railway opened its trunk line from Carlisle to Glasgow (over the Garnkirk line) in 1848, emphasising the isolation of the M&KR. At the same time the Caledonian opened a route to Greenhill, to connect with the Scottish Central Railway there. The Caledonian used a short section of the Monkland and Kirkintilloch route, between Gartsherrie and Garnqueen South Junction, to get access to its onward route. In later decades the traffic from Motherwell to Perth adopted this route, and in the twentieth century, Caledonian express passenger trains used what had become a section of the North British Railway; from 1923, as their respective successors, the London Midland and Scottish Railway used London and North Eastern Railway tracks. Amalgamation The joint working arrangement between the M&KR, the Ballochney Railway and the Slamannan Railway was working well, while the competitive pressures were increasing, and the three companies decided to amalgamate: they did so on 14 August 1848, forming a new company called the Monkland Railways. The new company took steps to consolidate its business, building a number of branches to collieries; a longer branch to Bathgate and a new chemical works there; a branch to Bo'ness; and eventually the "New Line" joining Coatbridge and Airdrie to Bathgate direct, via Armadale. The Monkland Railways company was absorbed by the Edinburgh and Glasgow Railway by an act of Parliament, dated 5 July 1865, effective from 31 July 1865. A day later (on 1 August 1865) the Edinburgh and Glasgow Railway was absorbed into the North British Railway. After 1865 The Monkland and Kirkintilloch Railway had been built as a north-south line connecting to a canal, and the other coal railways had equally obsolescent origins. For the time being the mineral traffic was dominant, but more direct access to Glasgow was required, provided from 1871 by the Coatbridge to Glasgow line. This enabled a much more direct passenger access from the Monklands area to Glasgow, and a through route from Edinburgh to Glasgow, on an east-west axis and running briefly over the M&KR route. Trains on the north-south axis, from Motherwell towards Stirling, used the short section of the M&KR route between Gartsherrie and Garnqueen as already described. Passing through Coatbridge, the line crossed Main Street and Bank Street at their junction by a level crossing. This was an exceedingly inconvenient arrangement, and in 1871 the North British Railway rebuilt the line at a higher level, crossing Bank Street by a large lattice iron bridge. There had been a mineral siding running eastward from the level crossing location to a pit and iron works near the present day Coatbank Street roundabout.
When the main line was reconstructed, a new connection to the siding was formed from a junction a short distance further south, and a Sheepford goods station was provided. There was an extension to Rochsholloch Iron works and a tube works. In 1878 the North British Railway sponsored a nominally independent railway, the Glasgow, Bothwell, Hamilton and Coatbridge Railway, to develop colliery access; this connected to the Rosehall colliery line near Whifflet, giving direct access towards the ironworks of Coatbridge. In 1895 a spur was opened from Bridgend Junction to Waterside Junction, enabling through running from the M&KR route northbound to the E&GR route eastbound, avoiding reversal at Lenzie. In the twentieth century the best days of the Monklands iron industry were past, and gradual decline set in, and the duplication of access to the remaining pits and works was damaging for the former M&KR line. In 1959 the two connections to the E&GR main line (at Lenzie and Waterside Junction) closed. The Sheepford section had already shut down in 1951, and the Bothwell line closed in 1955. The Cairnhill and Palacecraig section closed by the 1950s. On 2 April 1966 the main line to Kirkintilloch Basin Goods Station closed north of Bedlay Colliery. That too closed in 1969, leaving a stub to Leckethill, but in 1982 the line north of Gartsherrie closed completely. Present day (2008) Two very short sections of the M&KR line remain: the north-south line from Sunnyside Junction to Whifflet (although this section of route was rebuilt at a higher level in 1871/2 to eliminate the level crossing at Bank Street); and the eastward section from Sunnyside Junction to Greenside Junction, now part of the Glasgow – Airdrie line, electrified in 1960. Links to other lines and modes of transportation The Ballochney Railway at Kipps. The Caledonian Railway Main Line at Garnqueen South Junction and Gartsherrie North Junction. The Forth and Clyde Canal at Kirkintilloch. The Garnkirk and Glasgow Railway at Gartsherrie Junction. The Slamannan Railway. The Wishaw and Coltness Railway. Notes References Further reading A Short History of the railways of Coatbridge and Airdrie. North British Railway Closed railway lines in Scotland Mining railways Early Scottish railway companies Pre-grouping British railway companies Railway companies established in 1824 Railway lines opened in 1826 Railway companies disestablished in 1848 Standard gauge railways in Scotland 4 ft 6 in gauge railways in Scotland Horse-drawn railways 1824 establishments in Scotland Kirkintilloch British companies established in 1824 British companies disestablished in 1848 Coal in Scotland
Monkland and Kirkintilloch Railway
[ "Engineering" ]
8,111
[ "Mining equipment", "Mining railways" ]
9,255,172
https://en.wikipedia.org/wiki/Repeat%20unit
A repeat unit or repeating unit (or mer) is a part of a polymer whose repetition would produce the complete polymer chain (except for the end-groups) by linking the repeat units together successively along the chain, like the beads of a necklace. A repeat unit is sometimes called a mer (or mer unit) in polymer chemistry. "Mer" originates from the Greek word meros, which means "a part". The word polymer derives its meaning from this, which means "many mers". A repeat unit (mer) is not to be confused with the term monomer, which refers to the small molecule from which a polymer is synthesized. Overview One of the simplest repeat units is that of the addition polymer polyvinyl chloride, -[CH2-CHCl]n-, whose repeat unit is -[CH2-CHCl]-. In this case the repeat unit has the same atoms as the monomer vinyl chloride CH2=CHCl. When the polymer is formed, the C=C double bond in the monomer is replaced by a C-C single bond in the polymer repeat unit, which links by two new bonds to adjoining repeat units. In condensation polymers (see examples below), the repeat unit contains fewer atoms than the monomer or monomers from which it is formed. The subscript "n" denotes the degree of polymerisation, that is, the number of units linked together. The molecular mass of the repeat unit, MR, is simply the sum of the atomic masses of the atoms within the repeat unit. The molecular mass of the chain is just the product nMR. Other than monodisperse polymers, there is normally a molar mass distribution caused by chains of different length. In copolymers there are two or more types of repeat unit, which may be arranged in alternation, or at random, or in other more complex patterns. Other vinyl polymers Polyethylene may be considered either as -[CH2-CH2-]n- with a repeat unit of -[CH2-CH2]-, or as [-CH2-]n-, with a repeat unit of -[CH2]-. Chemists tend to consider the repeat unit as -[CH2-CH2]- since this polymer is made from the monomer ethylene (CH2=CH2). More complex repeat units can occur in vinyl polymers -[CH2-CHR]n-, if one hydrogen in the ethylene repeat unit is substituted by a larger fragment R. Polypropylene -[CH2-CH(CH3)]n- has the repeat unit -[CH2-CH(CH3)]. Polystyrene has a chain where the substituent R is a phenyl group (C6H5), corresponding to a benzene ring minus one hydrogen: -[CH2-CH(C6H5)]n-, so the repeat unit is -[CH2-CH(C6H5)]-. Condensation polymers: repeat unit and structural units In many condensation polymers, the repeat unit contains two structural units related to the comonomers which have been polymerized. For example, in polyethylene terephthalate (PET or "polyester"), the repeat unit is -CO-C6H4-CO-O-CH2-CH2-O-. The polymer is formed by the condensation reaction of the two monomers terephthalic acid (HOOC-C6H4-COOH) and ethylene glycol (HO-CH2-CH2-OH), or their chemical derivatives. The condensation involves loss of water, as an H is lost from each HO- group in the glycol, and an OH from each HOOC- group in the acid. The two structural units in the polymer are then considered to be -CO-C6H4-CO- and -O-CH2-CH2-O-. References Polymer chemistry
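The relationship between a repeat unit's mass and the chain mass above lends itself to a quick worked example. The short Python sketch below sums approximate atomic masses for the PVC repeat unit -[CH2-CHCl]- and multiplies by an assumed degree of polymerisation n; the atomic-mass values and the choice n = 1000 are illustrative assumptions rather than figures from the text, and real samples show a molar mass distribution rather than a single chain length.

```python
# Minimal sketch: molecular mass of a repeat unit (MR) and of a chain (n * MR).
# Atomic masses (g/mol) are approximate standard values; n is an arbitrary example.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Cl": 35.45}

def repeat_unit_mass(composition):
    """Sum atomic masses over the atoms in one repeat unit.

    composition: dict mapping element symbol -> number of atoms in the repeat unit.
    """
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())

# PVC repeat unit -[CH2-CHCl]- contains 2 C, 3 H and 1 Cl.
pvc_repeat_unit = {"C": 2, "H": 3, "Cl": 1}
m_r = repeat_unit_mass(pvc_repeat_unit)

n = 1000  # assumed degree of polymerisation for the example
chain_mass = n * m_r  # ignores the end-groups, as noted in the text

print(f"MR(PVC) = {m_r:.2f} g/mol")                     # ~62.50 g/mol
print(f"Chain mass for n={n}: {chain_mass:.0f} g/mol")  # ~62500 g/mol
```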
Repeat unit
[ "Chemistry", "Materials_science", "Engineering" ]
849
[ "Materials science", "Polymer chemistry" ]
9,257,247
https://en.wikipedia.org/wiki/National%20Centers%20for%20Biomedical%20Computing
The National Centers for Biomedical Computing (NCBCs) are part of the U.S. National Institutes of Health plan to develop and implement the core of a universal computing infrastructure that is urgently needed to speed progress in biomedical research. Their mission is to create innovative software programs and other tools that will enable the biomedical community to integrate, analyze, model, simulate, and share data on human health and disease. Recognizing the potential benefits to human health that can be realized from applying and advancing the field of biomedical computing, the Biomedical Information Science and Technology Initiative (BISTI) was launched at the NIH in April 2000. This initiative is aimed at making optimal use of computer science and technology to address problems in biology and medicine. The full text of the original BISTI Report is available. As of April 2016, the web site for the National Centers for Biomedical Computing (http://www.ncbcs.org) is no longer managed by that organization, though many of the centers still are supported. Current Centers Center for Computational Biology National Center for Biomedical Ontology Simbios: Physics-based Simulation of Biological Structures, Stanford University National Center for Integrative Biomedical Informatics National Center for Multi-Scale Study of Cellular Networks National Alliance for Medical Imaging Computing See also Biositemaps Biomedical Computation Review, a quarterly magazine created by Simbios to help build community among the diverse disciplines that participate in the field. External links NIH Roadmap National Centers for Biomedical Computing, archive.org National Center for Multi-Scale Study of Cellular Networks National Alliance for Medical Imaging Computing Genomics Proteomics Medical research institutes in the United States Bioinformatics organizations National Institutes of Health References
National Centers for Biomedical Computing
[ "Biology" ]
340
[ "Bioinformatics", "Bioinformatics organizations" ]
9,257,264
https://en.wikipedia.org/wiki/FLUXNET
FLUXNET is a global network of micrometeorological tower sites that use eddy covariance methods to measure the exchanges of carbon dioxide, water vapor, and energy between the biosphere and atmosphere. FLUXNET is a global 'network of regional networks' that serves to provide an infrastructure to compile, archive and distribute data for the scientific community. The most recent FLUXNET data product, FLUXNET2015, is hosted by the Lawrence Berkeley National Laboratory (USA) and is publicly available for download.  Currently there are over 1000 active and historic flux measurement sites. FLUXNET works to ensure that different flux networks are calibrated to facilitate comparison between sites, and it provides a forum for the distribution of knowledge and data between scientists. Researchers also collect data on site vegetation, soil, trace gas fluxes, hydrology, and meteorological characteristics at the tower sites. History and Background FLUXNET started in 1997 and has grown from a handful of sites in North America and Europe to a current population exceeding 260 registered sites world-wide.  Today, FLUXNET consists of regional networks in North America (AmeriFlux, Fluxnet-Canada, NEON), South America (LBA), Europe (CarboEuroFlux, ICOS), Australasia (OzFlux), Asia (China Flux, and Asia Flux) and Africa (AfriFlux).   At each tower site, the eddy covariance flux measurements are made every 30 minutes and are integrated on daily, monthly and annual time scales.  The spatial scale of the footprint at each tower site reaches between 200 m and a kilometer. An overarching intent of FLUXNET, and its regional partners, is to provide data that can be used to validate terrestrial carbon fluxes derived from sensors on NASA satellites, such as TERRA and AQUA, and from biogeochemical models.   To achieve this overarching goal, the objectives and priorities of FLUXNET have evolved as the network has grown and matured.  During the initial stages of FLUXNET, the priority of our research was to develop value-added products, such as gap-filled data sets of net ecosystem productivity, NEP, evaporation, energy exchange and meteorology.  The rationales for this undertaking were: 1) to compute daily, monthly and annual sums of net carbon, water and energy exchange; and 2) to produce continuous datasets for the execution and testing of a variety of biogeochemical/biophysical/ecosystem dynamic models and satellite-based remote sensing algorithms.   During the second stage of FLUXNET the research priority involved the decomposition of NEE measurements into component fluxes such as GPP and ecosystem respiration, Reco.  This step is required for FLUXNET to be a successful tool for validating MODIS-based estimate of terrestrial carbon exchange; algorithms driven by satellite-based remote sensing instruments are unable to assess NEE directly, and instead compute GPP or NPP. In the intervening years, FLUXNET scientists have used the flux-component datasets (GPP, Reco) to assess how canopy photosynthesis and ecosystem respiration vary as a function of: 1) season; 2) plant functional type; and 3) environmental drivers. While these initial studies have contributed significantly towards understanding the physiology of whole ecosystems, they only represent an initial step towards the future evolution and productivity of FLUXNET.   For example, the majority of the early work was produced with a subset of field sites, which was heavily biased towards coniferous and deciduous forests.   
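As a simplified illustration of the bookkeeping described above (half-hourly eddy covariance fluxes integrated to daily totals, and NEE decomposed into GPP and Reco), the Python sketch below aggregates a toy day of half-hourly NEE values and applies the identity NEE = Reco - GPP. The sign convention, the unit conversion, the constant night-time-based respiration estimate and the made-up numbers are all assumptions chosen for illustration; this is not the actual FLUXNET or ONEFlux processing chain.

```python
# Schematic sketch, not the FLUXNET processing chain: aggregate half-hourly NEE
# (umol CO2 m-2 s-1) to a daily total (g C m-2 day-1) and split it into GPP and Reco.
# Sign convention assumed here: NEE < 0 means net uptake by the ecosystem.
# Reco is taken as the mean night-time NEE (GPP ~ 0 at night) held constant over the
# day, which is a deliberate oversimplification of real partitioning methods.

UMOL_TO_G_C = 12.011e-6          # one umol of CO2 contains 12.011e-6 g of carbon
SECONDS_PER_HALF_HOUR = 1800

def daily_sum(nee_half_hourly):
    """Integrate 48 half-hourly NEE values (umol m-2 s-1) to g C m-2 day-1."""
    return sum(f * SECONDS_PER_HALF_HOUR * UMOL_TO_G_C for f in nee_half_hourly)

def partition(nee_half_hourly, is_night):
    """Crude split of NEE into ecosystem respiration (Reco) and GPP."""
    night = [f for f, n in zip(nee_half_hourly, is_night) if n]
    reco = sum(night) / len(night)             # assumed constant respiration rate
    gpp = [reco - f for f in nee_half_hourly]  # GPP = Reco - NEE under this convention
    return reco, gpp

# Toy day: 24 night-time records releasing CO2, 24 daytime records taking it up.
nee = [2.0] * 24 + [-8.0] * 24
night_flags = [True] * 24 + [False] * 24

print(f"Daily NEE: {daily_sum(nee):.2f} g C m-2 day-1")
reco, gpp = partition(nee, night_flags)
print(f"Assumed Reco rate: {reco:.1f} umol m-2 s-1, midday GPP: {gpp[30]:.1f} umol m-2 s-1")
```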
With the continued growth and extended duration of the network, many new opportunities, relating to the spatial/temporal aspects of carbon dioxide exchange, remain to be explored.  First, FLUXNET has expanded to include broader representation of vegetation types and climates.  The network now includes numerous tower sites over tropical and alpine forests, savanna, chaparral, tundra, grasslands, wetlands and an assortment of agricultural crops. Second, the scope of many studies over deciduous and conifer forests has expanded. Several contributing research groups are conducting chronosequence studies associated with disturbance by fire and logging.   From this work, scientists are learning that information on disturbance needs to be incorporated into model schemes that rely on climate drivers and plant functional type to upscale of tower fluxes to landscapes and regions—adding another level of complexity. Third, FLUXNET is partnering with other groups that are measuring the changes in phenology with networks of digital cameras, soil moisture and methane fluxes. Today, with many datasets extending beyond two decades, FLUXNET has the opportunity to provide data that is necessary to assess the impacts of climate and ecosystem factors on inter-annual variations and trends of carbon dioxide and water vapor fluxes. The sharing of data has also been instrumental in developing techniques that use machine learning methods and combine data streams from FLUXNET, remote sensing and gridded data products to produce maps of carbon and water fluxes. References Further reading Pastorello, G., D. Papale, H. Chu, C. Trotta, D. Agarwal, E. C. Canfora, D. Baldocchi, and M. Torn (2016), The FLUXNET2015 Dataset: The longest record of global carbon, water, and energy fluxes is updated, Eos Trans. AGU. Pastorello, G., et al. (2020), The FLUXNET2015 dataset and the ONEFlux processing pipeline for eddy covariance data, Scientific Data, 7(1), 225, doi:10.1038/s41597-020-0534-3. External links FLUXNET FLUXNET2015 Dataset (2015) FLUXNET LaThuile Dataset (2007) FLUXNET Marconi Dataset (2000) Historical Interactive Map of Fluxnet Sites Historical FLUXNET at ORNL Historical Fluxdata.org Fluxnet on NOSA Regional FLUXNET websites AmeriFlux AsiaFlux CarboEurope Chinaflux European Fluxes Database Fluxnet-Canada KoFlux OzFlux Urban Flux Network Applied and interdisciplinary physics Meteorological data and networks
FLUXNET
[ "Physics" ]
1,241
[ "Applied and interdisciplinary physics" ]
9,257,963
https://en.wikipedia.org/wiki/The%20enemy%20of%20my%20enemy%20is%20my%20friend
"The enemy of my enemy is my friend" is an ancient proverb which suggests that two parties can or should work together against a common enemy. The exact meaning of the modern phrase was first expressed in the Latin phrase "Amicus meus, inimicus inimici mei" ("my friend, the enemy of my enemy"), which had become common throughout Europe by the early 18th century, while the first recorded use of the current English version came in 1884. Examples Rajamandala A Sanskrit treatise on statecraft, the Arthashastra of Kautilya states: A neighboring power would be the first to dispute control of territory, and therefore Kautilya finds neighboring kings to be natural enemies of any conqueror. A king whose territories border those of the enemy would also have this relationship with them, and therefore be a natural ally. This system of relationships was termed Rajamandala (meaning circle of kings) and informed the foreign policy of Chandragupta's Empire. This early theory of geopolitics is still recognized today as the Mandala theory of foreign relations. World War II The idea that "the enemy of my enemy is my friend" functioned in various guises as foreign policy by the Allies during World War II. In Europe, tension was common between the Western Allies and the Soviet Union. Despite their inherent differences, they recognized a need to work together to meet the threat of Nazi aggression under the leadership of Adolf Hitler. Both U.S. President Franklin D. Roosevelt and British Prime Minister Winston Churchill were wary of the Soviet Union under the leadership of Joseph Stalin. However, both developed policies with an understanding that Soviet cooperation was necessary for the Allied war effort to succeed. There is a quote from Winston Churchill made to his personal secretary John Colville on the eve of Germany's invasion of the Soviet Union (Operation Barbarossa). He was quoted as saying, "if Hitler invaded Hell, I would make at least a favourable reference to the Devil in the House of Commons." Stalin reciprocated these feelings towards his Western allies. He was distrustful and feared that they would negotiate a separate peace with Nazi Germany. However, he also viewed their assistance as critical in resisting the Nazi invasion. The doctrine of "the enemy of my enemy is my friend" was employed by nation states in regions outside of the European theater as well. In the Second Sino-Japanese War, within the Pacific theater, an alliance was formed between Chinese Communists and Chinese Nationalists. Leading up to this, these forces had battled each other throughout the Chinese Civil War. However, they formed an alliance, the Second United Front in response to the mutual threat of Japanese aggression. Similarly, the Malayan Communist Party and the British Empire agreed a truce for the Malayan campaign and subsequent Japanese Occupation. Cold War The doctrine was also used extensively during the Cold War between Western Bloc nations and the Soviet Union. The Soviets and the Chinese aided North Korea during the Korean War as well as the Viet Cong/North Vietnamese during the Vietnam War to oppose American foreign policy goals. Likewise, the United States and its allies supported the Afghan mujahideen after the Soviet invasion in the hopes of thwarting the spread of Communism. In the Third World, both superpowers supported regimes whose values were at odds with the ideals espoused by their governments. 
These ideals were capitalism and liberal democracy in the case of the United States, and the Marxist–Leninist interpretation of Communism and proletarian democracy in the case of the Soviet Union. In order to oppose the spread of Communism, the United States government supported dictatorial regimes, such as Mobutu Sese Seko in Zaire, Suharto in Indonesia, and Augusto Pinochet in Chile. The support provided by the Soviet Union towards nations with overtly anti-Communist governments, such as Gamal Abdul Nasser in Egypt, in order to oppose American influence, is another example of "the enemy of my enemy is my friend" as policy on an international scale. The Soviets also backed India to counter both the pro-American Pakistani government and the People's Republic of China (following the Sino-Soviet split), despite the fact that India had a democratic government. Similarly, China, following the split, lent support to nations and factions that embraced an anti-Soviet, often Maoist form of Communism, but whose governments nonetheless embraced Sinophobic policies at home, such as the Khmer Rouge-ruled regime of Democratic Kampuchea. Middle East In an example of this doctrine at work in Middle Eastern foreign policy, the United States backed the Iraqi government under Saddam Hussein during the Iran–Iraq War, as a strategic response to the anti-American Iranian Revolution of 1979. A 2001 study of international relations in the Middle East used the proverb as the basis of its main thesis, examining how enmity between adverse nations evolve and alliances develop in response to common threats. Balance theory In mathematical sociology, a signed graph may be used to represent a social network that may or may not be balanced, depending upon the signs found along cycles. Fritz Heider considered a pair of friends with a common enemy as a balanced triangle. The full spectrum of changes induced by unbalanced networks was described by Anatol Rapoport: The hypothesis implies roughly that attitudes of the group members will tend to change in such a way that one's friends' friends will tend to become one's friends and one's enemies' enemies also one's friends, and one's enemies' friends and one's friends' enemies will tend to become one's enemies, and moreover, that these changes tend to operate even across several removes (one's friends' friends' enemies' enemies tend become friends by an iterative process). Frank Harary described how balance theory can predict coalition formation in international relations: One can draw the signed graph of a given state of events and examine it for balance. If it is balanced there will be a tendency for the status quo. If it is not balanced, one should question each of the bonds between pairs of nations in a cycle with regard to relative strength in the situation. One might then predict that the weakest such bond will change sign. Harary illustrated the method as a gloss on some events in the Middle East using several signed graphs, one of which represented eight nations. See also Frenemy You are either with us, or against us Law of excluded middle Lesser of two evils principle Unholy alliance (geopolitical) References Adages International relations Proverbs Friendship Interpersonal relationships Political terminology Power (social and political) concepts
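To make the balance criterion above concrete, the small Python sketch below evaluates the sign product around a cycle of a signed graph, using the rule that a cycle is balanced when the product of its edge signs is positive. The three-party triangle is a hypothetical example, not a reconstruction of Harary's Middle East graphs: two friends with a common enemy come out balanced, while flipping a single tie produces an unbalanced triangle under pressure to change.

```python
# Minimal sketch of structural balance on a signed graph.
# Edges carry +1 (friendly) or -1 (hostile); a cycle is balanced when the
# product of the signs around it is positive (Heider / Harary criterion).

def cycle_sign(edges, cycle):
    """Multiply the signs along a closed cycle given as a list of nodes."""
    sign = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= edges[frozenset((a, b))]
    return sign

def is_balanced_cycle(edges, cycle):
    return cycle_sign(edges, cycle) > 0

# Hypothetical triangle: A and B are friends, both are enemies of C.
edges = {
    frozenset(("A", "B")): +1,
    frozenset(("A", "C")): -1,
    frozenset(("B", "C")): -1,
}
print(is_balanced_cycle(edges, ["A", "B", "C"]))  # True: friends with a common enemy

# Flip one tie: A and C reconcile while B and C stay hostile -> unbalanced.
edges[frozenset(("A", "C"))] = +1
print(is_balanced_cycle(edges, ["A", "B", "C"]))  # False: some tie will tend to change
```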
The enemy of my enemy is my friend
[ "Biology" ]
1,344
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
9,258,009
https://en.wikipedia.org/wiki/Digital%20media%20player
A digital media player (also known as a streaming device or streaming box) is a type of consumer electronics device designed for the storage, playback, or viewing of digital media content. They are typically designed to be integrated into a home cinema configuration, and attached to a television or AV receiver or both. The term is most synonymous with devices designed primarily for the consumption of content from streaming media services such as internet video, including subscription-based over-the-top content services. These devices usually have a compact form factor (either as a compact set-top box, or a dongle designed to plug into an HDMI port), and contain a 10-foot user interface with support for a remote control and, in some cases, voice commands, as control schemes. Some services may support remote control on digital media players using their respective mobile apps, while Google's Chromecast ecosystem is designed around integration with the mobile apps of content services. A digital media player's operating system may provide a search engine for locating content available across multiple services and installed apps. Many digital media players offer internal access to digital distribution platforms, where users can download or purchase content such as films, television episodes, and apps. In addition to internet sources, digital media players may support the playback of content from other sources, such as external media (including USB drives or memory cards), or streamed from a computer or media server. Some digital media players may also support video games, though their complexity (which can range from casual games to ports of larger games) depends on operating system and hardware support, and besides those marketed as microconsoles, are not usually promoted as the device's main function. Digital media players do not usually include a tuner for receiving terrestrial television, nor disc drives for Blu-rays or DVD. Some devices, such as standalone Blu-ray players, may include similar functions to digital media players (often in a reduced form), as well as recent generations of video game consoles, while smart TVs integrate similar functions into the television itself. Some TV makers have, in turn, licensed operating system platforms from digital media players as middleware for their smart TVs—such as Android TV, Amazon Fire TV, and Roku—which typically provide a similar user experience to their standalone counterparts, but with TV-specific features and settings reflected in their user interface. Overview In the 2010s, with the popularity of portable media players and digital cameras, as well as fast Internet download speeds and relatively cheap mass storage, many people came into possession of large collections of digital media files that cannot be played on a conventional analog HiFi without connecting a computer to an amplifier or television. The means to play these files on a network-connected digital media player that is permanently connected to a television is seen as a convenience. The rapid growth in the availability of online content has made it easier for consumers to use these devices and obtain content. YouTube, for instance, is a common plug-in available on most networked devices. Netflix has also struck deals with many consumer-electronics makers to make their interface available in the device's menus, for their streaming subscribers. 
This symbiotic relationship between Netflix and consumer electronics makers has helped propel Netflix to become the largest subscription video service in the U.S. using up to 20% of U.S. bandwidth at peak times. Media players are often designed for compactness and affordability, and tend to have small or non-existent hardware displays other than simple LED lights to indicate whether the device is powered on. Interface navigation on the television is usually done with an infrared remote control, while more-advanced digital media players come with high-performance remote controls which allow control of the interface using integrated touch sensors. Some remotes also include accelerometers for air mouse features which allow basic motion gaming. Most digital media player devices are unable to play physical audio or video media directly, and instead require a user to convert these media into playable digital files using a separate computer and software. They are also usually incapable of recording audio or video. In the 2010s, it is also common to find digital media player functionality integrated into other consumer-electronics appliances, such as DVD players, set-top boxes, smart TVs, or even video game consoles. Terminology Digital media players are also commonly referred to as a digital media extender, digital media streamer, digital media hub, digital media adapter, or digital media receiver (which should not be confused with AV receiver). Digital media player manufacturers use a variety of names to describe their devices. Some more commonly used alternative names include: Connected DVD Connected media player Digital audio receiver Digital media adapter Digital media connect Digital media extender Digital media hub Digital media player Digital media streamer Digital media receiver Digital media renderer Digital video receiver Digital video streamer HD Media Player HDD media player Media Extender Media Regulator Net connected media player Network connected media player Network media player Networked Digital Video Disc Networked entertainment gateway OTT player Over-the-Top player Smart Television media player Smart Television player Streaming media box Streaming media player Streaming video player Wireless Media Adapter YouTube Player Support History By November 2000, an audio-only digital media player was demonstrated by a company called SimpleDevices, which was awarded two patents covering this invention in 2006. Developed under the SimpleFi name by Motorola in late 2001, the design was based on a Cirrus Arm-7 processor and the wireless HomeRF networking standard which pre-dated 802.11b in the residential markets. Other early market entrants in 2001 included the Turtle Beach AudioTron, Rio Receiver and SliMP3 digital media players. An early version of a video-capable digital media player was presented by F.C. Jeng et al. in the International Conf. on Consumer Electronics in 2002. It included a network interface card, a media processor for audio and video decoding, an analog video encoder (for video playback to a TV), an audio digital to analog converter for audio playback, and an IR (infrared receiver) for remote-control-interface. A concept of a digital media player was also introduced by Intel in 2002 at the Intel Developer Forum as part of their Extended Wireless PC Initiative. Intel's digital media player was based on an Xscale PXA210 processor and supported 802.11b wireless networking. 
Intel was among the first to use the Linux embedded operating system and UPnP technology for its digital media player. Networked audio and DVD players were among the first consumer devices to integrate digital media player functionality. Examples include the Philips Streamium-range of products that allowed for remote streaming of audio, the GoVideo D2730 Networked DVD player which integrated DVD playback with the capability to stream Rhapsody audio from a PC, and the Buffalo LinkTheater which combined a DVD player with a digital media player. More recently, the Xbox 360 gaming console from Microsoft was among the first gaming devices that integrated a digital media player. With the Xbox 360, Microsoft also introduced the concept of a Windows Media Center Extender, which allows users to access the Media center capabilities of a PC remotely, through a home network. More recently, Linksys, D-Link, and HP introduced the latest generation of digital media players that support 720p and 1080p high resolution video playback and may integrate both Windows Extender and traditional digital media player functionality. Typical features A digital media player can connect to the home network using either a wireless (IEEE 802.11a, b, g, and n) or wired Ethernet connection. Digital media players includes a user interface that allows users to navigate through their digital media library, search for, and play back media files. Some digital media players only handle music; some handle music and pictures; some handle music, pictures, and video; while others go further to allow internet browsing or controlling Live TV from a PC with a TV tuner. Some other capabilities which are accomplished by digital media players include: Play, catalog, and store local hard disk, flash drive, or memory card music CDs and view CD album art, view digital photos, and watch DVD and Blu-ray or other videos. Stream movies, music, photos (media) over the wired or wireless network using technologies like DLNA View digital pictures (one by one or as picture slideshows) Stream online video to a TV from services such as Netflix and YouTube. Play video games. Browse the Web, check email and access social networking services through downloadable Internet applications. Video conference by connecting a webcam and microphone. In the 2010s, there were stand-alone digital media players on the market from AC Ryan, Asus, Apple (e.g., Apple TV), NetGear (e.g., NTV and NeoTV models), Dune, iOmega, Logitech, Pivos Group, Micca, Sybas (Popcorn Hour), Amkette EvoTV, D-Link, EZfetch, Fire TV, Android TV, Pinnacle, Xtreamer, and Roku, just to name a few. The models change frequently, so it is advisable to visit their web sites for current model names. Processors These devices come with low power consumption processors or SoC (System on Chip) and are most commonly either based on MIPS or ARM architecture processors combined with integrated DSP GPU in a SoC (or MPSoC) package. They also include RAM-memory and some type of built-in type of non-volatile computer memory (Flash memory). Internal hard-drive capabilities HD media player or HDD media player (HDMP) is a consumer product that combines digital media player with a hard drive (HD) enclosure with all the hardware and software for playing audio, video and photos to a television. All these can play computer-based media files to a television without the need for a separate computer or network connection, and some can even be used as a conventional external hard drive. 
These types of digital media players are sometimes sold as empty shells to allow the user to fit their own choice of hard drive (some can manage unlimited hard disk capacity and others only a certain capacity, e.g. 1 TB, 2 TB, 3 TB, or 4 TB), and the same model is sometimes sold with or without an internal hard drive already fitted. Formats, resolutions and file systems Digital media players can usually play H.264 (SD and HD), MPEG-4 Part 2 (SD and HD), MPEG-1, MPEG-2 .mpg, MPEG-2 .TS, VOB and ISO image video, with PCM, MP3 and AC3 audio tracks. They can also display images (such as JPEG and PNG) and play music files (such as FLAC, MP3 and Ogg). Operating system While most media players have traditionally been running proprietary or open-source software frameworks based on Linux as their operating systems, many newer network connected media players are based on the Android platform, which gives them an advantage in terms of applications and games from the Google Play store. Even without Android some digital media players still have the ability to run applications (sometimes available via an app store), interactive on-demand media, personalized communications, and social networking features. Connections There are two ways to connect an extender to its central media server - wired, or wireless. Streaming and communication protocols While early digital media players used proprietary communication protocols to interface with media servers, today most digital media players either use standard-based protocols such as SMB/CIFS/SAMBA or NFS, or rely on some version of UPnP (Universal Plug and Play) and DLNA (Digital Living Network Alliance) standards. DLNA compliance of digital media players and media servers is meant to guarantee a minimum set of functionality and proper interoperability among digital media players and servers regardless of the manufacturer, but unfortunately not every manufacturer follows the standards perfectly, which can lead to incompatibility. Media server Some digital media players will only connect to specific media server software installed on a PC to stream music, pictures and recorded or live TV originating from the computer. Apple iTunes can, for example, be used this way with the Apple TV hardware that connects to a TV. Apple has developed a tightly integrated device and content management ecosystem with their iTunes Store, personal computers, iOS devices, and the AppleTV digital media receiver. The most recent version of the AppleTV has lost the hard-drive that was included in its predecessor and fully depends on either streaming internet content, or another computer on the home network for media. Connection ports Television connection is usually done via composite, SCART, component, or HDMI video, with optical audio (TOSLINK/SPDIF); players connect to the local network and broadband internet using either a wired Ethernet or a wireless Wi-Fi connection, and some also have built-in Bluetooth support for remotes and game-pads or joysticks. Some players come with USB (USB 2.0 or USB 3.0) ports which allow local media content playback. Use Market impact on traditional television services The convergence of content, technology, and broadband access allows consumers to stream television shows and movies to their high-definition television in competition with pay television providers. The research company SNL Kagan expects 12 million households, roughly 10%, to go without cable, satellite or telco video service by 2015 using Over The Top services. 
This represents a new trend in the broadcast television industry, as the list of options for watching movies and TV over the Internet grows at a rapid pace. Research also shows that even as traditional television service providers are trimming their customer base, they are adding broadband Internet customers. Nearly 76.6 million U.S. households get broadband from leading cable and telephone companies, although only a portion have sufficient speeds to support quality video streaming. Convergence devices for home entertainment will likely play a much larger role in the future of broadcast television, effectively shifting traditional revenue streams while providing consumers with more options. According to a report from the researcher NPD In-Stat, only about 12 million U.S. households have either their Web-capable TVs or digital media players connected to the Internet, although In-Stat estimates about 25 million U.S. TV households own a set with built-in network capability. Also, In-Stat predicts that 100 million homes in North America and western Europe will own digital media players and television sets that blend traditional programs with Internet content by 2016. Use for illegal streaming Since at least 2015, dealers have marketed digital media players, often running the Android operating system and branded as being "fully-loaded", that are promoted as offering free streaming access to copyrighted media content, including films and television programs, as well as live feeds of television channels. These players are commonly bundled with the open source media player software Kodi, which is in turn pre-loaded with plug-ins enabling access to services streaming this content without the permission of their respective copyright holders. These "fully-loaded" set-top boxes are often sold through online marketplaces such as Amazon.com and eBay, as well as local retailers. The spread of these players has been attributed to their low cost and ease of use, with user experiences similar to legal subscription services such as Netflix. "Fully-loaded" set-top boxes have been subject to legal controversies, particularly because their user experiences make them accessible to end-users who may not always realize that they are actually streaming pirated content. In the United Kingdom, the Federation Against Copyright Theft (FACT) has taken court actions on behalf of rightsholders against those who market digital media players pre-loaded with access to copyrighted content. In January 2017, an individual seller pleaded not guilty to charges of marketing and distributing devices that circumvent technological protection measures. In March 2017, the High Court of Justice ruled that BT Group, Sky plc, TalkTalk, and Virgin Media must block servers that had been used on such set-top boxes to illegally stream Premier League football games. Later in the month, Amazon UK banned the sale of "certain media players" that had been pre-loaded with software to illegally stream copyrighted content. On 26 April 2017, the European Court of Justice ruled that the distribution of set-top boxes with access to unauthorized streams of copyrighted works violated the exclusive rights to communicate them to the public. In September 2017, a British seller of such boxes pleaded guilty to violations of the Copyright, Designs and Patents Act for selling devices that can circumvent effective technical protection measures. 
In Canada, it was initially believed that these set-top boxes fell within a legal grey area, as the transient nature of streaming content did not necessarily mean that the content was being downloaded in violation of Canadian copyright law. However, on 1 June 2016, a consortium of Canadian media companies (BCE Inc., Rogers Communications, and Videotron) obtained a temporary federal injunction against five retailers of Android-based set-top boxes, alleging that their continued sale was causing "irreparable harm" to their television businesses, and that the devices' primary purpose was to facilitate copyright infringement. The court rejected an argument by one of the defendants, who stated that they were only marketing a hardware device with publicly available software, ruling that the defendants were "deliberately encourag[ing] consumers and potential clients to circumvent authorized ways of accessing content." Eleven additional defendants were subsequently added to the suit. The lawyer of one of the defendants argued that retailers should not be responsible for the actions of their users, as any type of computing device could theoretically be used for legal or illegal purposes. In April 2017, the Federal Court of Appeal blocked an appeal requesting that the injunction be lifted pending the outcome of the case. Although the software is free to use, the developers of Kodi have not endorsed any add-on or Kodi-powered device intended for facilitating copyright infringement. Nathan Betzen, president of the XBMC Foundation (the non-profit organization which oversees the development of the Kodi software), argued that the reputation of Kodi had been harmed by third-party retailers who "make a quick buck modifying Kodi, installing broken piracy add-ons, advertising that Kodi lets you watch free movies and TV, and then vanishing when the user buys the box and finds out that the add-on they were sold on was a crummy, constantly breaking mess." Betzen stated that the XBMC Foundation was willing to enforce its trademarks against those who use them to promote Kodi-based products which facilitate copyright infringement. Following a lawsuit by Dish Network against TVAddons, a website that offered streaming add-ons that were often used with Kodi and on such devices, the group shut down its add-ons and website in June 2017. A technology analyst speculated that the service could eventually re-appear under a different name in the future, as have torrent trackers. In June, the service's operator was also sued by the Bell/Rogers/Videotron consortium for inducing copyright infringement. In June 2017, Televisa was granted a court order banning the sale of all Roku products in Mexico, as it was alleged that third parties had been operating subscription television services for the devices that contain unlicensed content. The content is streamed through unofficial apps that are added to the devices through hacking. Roku objected to the allegations, stating that these services were not certified by the company or part of its official Channels platform, whose terms of service require that they have rights to stream the content that they offer. Roku also stated that it actively cooperates with reports of channels that infringe copyrights. The ruling was overturned in October 2018 after Roku took additional steps to remove channels with unauthorized content from the platform. 
In May 2018, the Federal Communications Commission sent letters to the CEOs of Amazon.com and eBay, asking for their help in removing such devices from their marketplaces. The letter cited malware risks, fraudulent use of FCC certification marks, and how their distribution through major online marketplaces may incorrectly suggest that they are legal and legitimate products. In Saudi Arabia, the practice of using digital media players for pirated television content first became popular during the Qatar diplomatic crisis, after Qatari pay television network beIN Sports was banned from doing business in the country. The pirate subscription television service BeoutQ operated a satellite television service featuring repackaged versions of the beIN Sports channels, but its Android-based satellite boxes also included a pre-loaded app store offering apps for multiple streaming and subscription services dealing primarily in copyrighted media. See also Comparison of digital media players Cord-cutting Digital Living Network Alliance Digital video recorder List of smart TV platforms Second screen Streaming media System on a chip Tivoization Tekpix References External links HP MediaSmart Connect Wins Popular Mechanics Editor's Choice Award at CES 2008 CNET Editors' Best Network Music Players Universal remote codes IPTV Smarters PC Magazine Media Hub & Receiver Finder AudioFi Reviews of wireless players PC World's Future Gear: PC on the HiFi, and the TV Consumer electronics Media players Networking hardware Television technology Multimedia Android (operating system) devices Digital audio
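The UPnP/DLNA discovery step mentioned under "Streaming and communication protocols" above can be illustrated with a short sketch. This is a generic example of the standard SSDP multicast search used by UPnP control points to find DLNA media servers on a home network; it is not the implementation of any particular player, and the timeout and search target are arbitrary choices.

```python
import socket

# SSDP M-SEARCH request asking UPnP/DLNA MediaServer devices to announce themselves.
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",                                          # maximum response delay in seconds
    "ST: urn:schemas-upnp-org:device:MediaServer:1",  # search target: media servers
    "", "",
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))       # standard UPnP multicast address and port
try:
    while True:
        data, addr = sock.recvfrom(4096)
        # Each responder sends an HTTP-like reply whose LOCATION header points to
        # its device description XML; here only the status line is printed.
        print(addr[0], data.split(b"\r\n")[0].decode(errors="replace"))
except socket.timeout:
    pass
```

A control point would then fetch the LOCATION URL from each reply in order to browse the server's content directory.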
Digital media player
[ "Technology", "Engineering" ]
4,327
[ "Information and communications technology", "Television technology", "Computer networks engineering", "Networking hardware", "Multimedia" ]
9,258,044
https://en.wikipedia.org/wiki/MEDCIN
Medcin is a system of standardized medical terminology, a proprietary medical vocabulary developed by Medicomp Systems, Inc. MEDCIN is a point-of-care terminology, intended for use in Electronic Health Record (EHR) systems, and it includes over 280,000 clinical data elements encompassing symptoms, history, physical examination, tests, diagnoses and therapy. This clinical vocabulary represents over 38 years of research and development and provides the capability to cross-map to leading codification systems such as SNOMED CT, CPT, ICD-9-CM/ICD-10-CM, DSM, LOINC, CDT, CVX, and the Clinical Care Classification (CCC) System for nursing and allied health. The MEDCIN coding system is marketed for point-of-care documentation. Several Electronic Health Record (EHR) systems embed MEDCIN, which allows them to produce structured and numerically codified patient charts. Such structuring enables the aggregation, analysis, and mining of clinical and practice management data related to a disease, a patient or a population. History MEDCIN was initially developed by Peter S. Goltra, founder of Medicomp Systems, "as an intelligent clinical database for documentation at the time of care". The first few years of development were spent designing the structure of a knowledge engine that would enable the population of relationships between clinical events. Since 1978, the MEDCIN database engine has been continuously refined and expanded to include concepts from clinical histories, tests, physical examination, therapies and diagnoses to enable coding of complete patient encounters, with the collaboration of physicians and teaching institutions such as Cornell, Harvard, and Johns Hopkins. Features Multiple Hierarchical Structure MEDCIN data elements are organized in multiple clinical hierarchies, where users can easily navigate to a medical term by following down the tree of clinical propositions. The clinical propositions define unique intellectual clinical content. For example, the similar propositions "wheezing which is worse during cold weather" and "wheezing which is worse with a cold" differ significantly in meaning to clinicians, and distinguishing them enables the software to present relevant items to clinical users. This hierarchy provides an inheritance of clinical properties between data elements, which greatly enhances the capabilities of EHR systems while also providing logical presentation structures for clinical users. The linkage of MEDCIN data elements to the many diagnoses described in the diagnostic index creates multiple hierarchies. The MEDCIN engine uses Intelligent Prompting and navigation tools to enable clinicians to select specific clinical terms that they need rather than having to create new terms for rapid documentation. Enhances EHR usability MEDCIN has been designed to work as an interface terminology, including components that make EHRs more usable when used in conjunction with proprietary physician and nursing documentation tools. According to Rosenbloom et al. (2006), investigators such as Chute et al., McDonald et al., Rose et al. and Campbell et al. have defined clinical interface terminologies as "a systematic collection of health care-related phrases (term)" (p. 277) that supports the capturing of patient-related clinical information entered by clinicians into software programs such as clinical note capture and decision support tools. 
For an interface terminology to be clinically usable, it has to be able to describe any clinical presentation with speed, ease of use, and accuracy for clinicians to accomplish the intended tasks (e.g. documenting patient care) when using the medical terminology. In addition, the terms in a medical terminology must have medical relationships. MEDCIN's presentation engine meets these usability criteria by using its Intelligent Prompting capabilities to present a relevant list of MEDCIN clinical terms for rapid clinical documentation. Another usability feature that the MEDCIN presentation engine provides is the medical relationships of clinical terms through multiple clinical hierarchies for each MEDCIN term. Support for ICD-10-CM coding In August 2012, Medicomp Systems released an updated version of the software embedded with ICD-10-CM (International Classification of Diseases, 10th Revision, Clinical Modification) mappings and functionality to comply with the transition from ICD-9-CM to ICD-10-CM as mandated by the US Department of Health and Human Services. This new version is specially designed to make the ICD-10 more usable in EHR systems by providing clinicians with easier access to bi-directional mappings, accurate data and codes through their EHR products. The ICD-10 is published by the World Health Organization (WHO) to enable the systematic collection of morbidity and mortality data from different countries for statistical analysis. Integration with most EHRs and legacy systems The MEDCIN terminology engine can be easily integrated into existing EHRs and legacy systems to enable mapping of existing terminologies and other coding systems such as ICD, DSM, CPT, LOINC, SNOMED CT and the Clinical Care Classification (CCC) System to generate seamless codified data at point of care. MEDCIN's interoperability features enable easy access and sharing of patient data between health care facilities. Interface with Electronic Health Record (EHR) systems MEDCIN has been implemented in several commercial EHR systems as an interface terminology to support integrated care, clinical documentation, health maintenance monitoring and disease management, and the care planning functions of physicians, nurses and allied health professionals. Such commercial EHR systems include EHRs from EPIC, Allscripts, Pulse, McKesson, and the United States Department of Defense's (DoD) EHR system, the Armed Forces Health Longitudinal Technology Application (AHLTA). AHLTA AHLTA is an EHR system developed for the US Department of Defense. This application uses Medicomp's MEDCIN terminology engine for clinical documentation purposes. Figure 1 shows an example of the MEDCIN terminology where the physician can search for the correct terms for input into the patient note. MEDCIN Nursing Plan of Care The Nursing Plan of Care (POC) was developed by Medicomp Systems for the Clinical Care Classification (CCC) System. The CCC System is a standardized, coded nursing terminology that provides a unique framework and coding structure for accessing, classifying and documenting patient care by nurses and other allied health professionals. The CCC is directly linked in the MEDCIN nursing POC to medical terminology with the purpose of creating a patient plan of care by extracting a pool of documentation from the EHR history. 
The CCC nursing terminology is integrated into the MEDCIN clinical database through a contextual hierarchical tree, providing an array of terminology standards and concepts with the Intelligent Prompting capabilities of the MEDCIN engine. See also Clinical Care Classification System Current Procedural Terminology (CPT) International Classification of Diseases Revision 9 (ICD-9) LOINC National Drug Code (NDC) SNOMED Health Level 7 (HL7) Health informatics References External links Medicomp home page Further reading Goltra, Peter S. MEDCIN: A New Nomenclature for Clinical Medicine. Ann Arbor, MI: Springer-Verlag, 1997. Medical classification Electronic health records Health informatics
MEDCIN
[ "Technology", "Biology" ]
1,463
[ "Electronic health records", "Information technology", "Health informatics", "Medical technology" ]
9,258,361
https://en.wikipedia.org/wiki/Ruppeiner%20geometry
Ruppeiner geometry is thermodynamic geometry (a type of information geometry) using the language of Riemannian geometry to study thermodynamics. George Ruppeiner proposed it in 1979. He claimed that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model. This geometrical model is based on the inclusion of the theory of fluctuations into the axioms of equilibrium thermodynamics, namely, there exist equilibrium states which can be represented by points on a two-dimensional surface (manifold) and the distance between these equilibrium states is related to the fluctuation between them. This concept is associated with probabilities, i.e. the less probable a fluctuation between states, the further apart they are. This can be recognized if one considers the metric tensor gij in the distance formula (line element) between two equilibrium states, where the matrix of coefficients gij is the symmetric metric tensor called the Ruppeiner metric, defined as the negative Hessian of the entropy function with respect to the internal energy (mass) U of the system and the extensive parameters Na of the system. Mathematically, the Ruppeiner geometry is one particular type of information geometry and it is similar to the Fisher–Rao metric used in mathematical statistics. The Ruppeiner metric can be understood as the thermodynamic limit (large systems limit) of the more general Fisher information metric. For small systems (systems where fluctuations are large), the Ruppeiner metric may not exist, as second derivatives of the entropy are not guaranteed to be non-negative. The Ruppeiner metric is conformally related to the Weinhold metric by a factor of 1/T, where T is the temperature of the system under consideration. The conformal relation can be proved easily by writing down the first law of thermodynamics (dU = TdS + ...) in differential form and performing a few manipulations. The Weinhold geometry is also considered a thermodynamic geometry. It is defined as the Hessian of the internal energy with respect to entropy and other extensive parameters. It has long been observed that the Ruppeiner metric is flat for systems with noninteracting underlying statistical mechanics such as the ideal gas. Curvature singularities signal critical behaviors. In addition, it has been applied to a number of statistical systems including the Van der Waals gas. Recently the anyon gas has been studied using this approach. Application to black hole systems This geometry has been applied to black hole thermodynamics, with some physically relevant results. The most physically significant case is for the Kerr black hole in higher dimensions, where the curvature singularity signals thermodynamic instability, as found earlier by conventional methods. The entropy of a black hole is given by the well-known Bekenstein–Hawking formula, where kB is the Boltzmann constant, c is the speed of light, G is the Newtonian constant of gravitation and A is the area of the event horizon of the black hole. Calculating the Ruppeiner geometry of the black hole's entropy is, in principle, straightforward, but it is important that the entropy should be written in terms of extensive parameters, where M is the ADM mass of the black hole, the Na are the conserved charges, and a runs from 1 to n. The signature of the metric reflects the sign of the hole's specific heat. 
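In common notation, the key relations discussed above can be written as follows. This is a sketch only; sign conventions and the choice of coordinates vary between references, and the coordinates xi stand for the extensive variables (U, Na) in the Ruppeiner case and (S, Na) in the Weinhold case.

```latex
ds^2 = g_{ij}\,dx^i\,dx^j ,\qquad
g_{ij} = -\,\frac{\partial^2 S(U,N^a)}{\partial x^i\,\partial x^j}
\quad\text{(Ruppeiner metric)}

g^{\mathrm{R}}_{ij} = \frac{1}{T}\, g^{\mathrm{W}}_{ij},\qquad
g^{\mathrm{W}}_{ij} = \frac{\partial^2 U(S,N^a)}{\partial x^i\,\partial x^j}
\quad\text{(conformal relation to the Weinhold metric)}

S_{\mathrm{BH}} = \frac{k_{\mathrm{B}}\, c^3 A}{4\, G\, \hbar}
\quad\text{(Bekenstein–Hawking entropy)}
```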
For a Reissner–Nordström black hole, the Ruppeiner metric has a Lorentzian signature which corresponds to the negative heat capacity it possesses, while for the BTZ black hole, we have a Euclidean signature. This calculation cannot be done for the Schwarzschild black hole, because its entropy is a function of the mass alone, which renders the metric degenerate. References Riemannian geometry Thermodynamics New College of Florida faculty Mathematical physics
Ruppeiner geometry
[ "Physics", "Chemistry", "Mathematics" ]
795
[ "Applied mathematics", "Theoretical physics", "Thermodynamics", "Mathematical physics", "Dynamical systems" ]
9,259,009
https://en.wikipedia.org/wiki/Ilya%20Lifshitz
Ilya Mikhailovich Lifshitz (January 13, 1917 – October 23, 1982) was a leading Soviet theoretical physicist, brother of Evgeny Lifshitz. He is known for his work in solid-state physics, the electron theory of metals, disordered systems, and the theory of polymers. Work Ilya Lifshitz was born into a Ukrainian Jewish family in Kharkov, Kharkov Governorate, Russian Empire (now Kharkiv, Ukraine). Together with Arnold Kosevich, in 1954 Lifshitz established the connection between the oscillations of the magnetic characteristics of metals observed in de Haas–van Alphen experiments and the shape of the electronic Fermi surface (the Lifshitz–Kosevich formula). Lifshitz was one of the founders of the theory of disordered systems. He introduced some of the basic notions, such as self-averaging, and discovered what are now called Lifshitz tails and the Lifshitz singularity. In perturbation theory, Lifshitz introduced the notion of the spectral shift function, which was later developed by Mark Krein. A phase transition involving topological changes of the material's Fermi surface is called a Lifshitz phase transition. From the late 1960s, Lifshitz turned to problems in the statistical physics of polymers. Together with his students Alexander Yu. Grosberg and Alexei R. Khokhlov, Lifshitz proposed a theory of the coil–globule transition in homopolymers and derived the formula for the conformational entropy of a polymer chain that is referred to as the Lifshitz entropy. References External links Page at KPI Moscow university site 1917 births 1982 deaths Scientists from Kharkiv People from Kharkov Governorate Academic staff of the National University of Kharkiv Foreign associates of the National Academy of Sciences Full Members of the USSR Academy of Sciences Kharkiv Polytechnic Institute alumni National University of Kharkiv alumni Recipients of the Lenin Prize Recipients of the Order of the Red Banner of Labour Theoretical physicists Jewish physicists Soviet Jews Soviet physicists Burials at Kuntsevo Cemetery
Ilya Lifshitz
[ "Physics" ]
449
[ "Theoretical physics", "Theoretical physicists" ]
9,261,687
https://en.wikipedia.org/wiki/Histoplasma%20capsulatum
Histoplasma capsulatum is a species of dimorphic fungus. Its sexual form is called Ajellomyces capsulatus. It can cause pulmonary and disseminated histoplasmosis. Histoplasma capsulatum is "distributed worldwide, except in Antarctica, but most often associated with river valleys" and occurs chiefly in the "Central and Eastern United States" followed by "Central and South America, and other areas of the world". It is most prevalent in the Ohio and Mississippi River valleys. It was discovered by Samuel Taylor Darling in 1906. Growth and morphology Histoplasma capsulatum is an ascomycetous fungus closely related to Blastomyces dermatitidis. It is potentially sexual, and its sexual state, Ajellomyces capsulatus, can readily be produced in culture, though it has not been directly observed in nature. H. capsulatum groups with B. dermatitidis and the South American pathogen Paracoccidioides brasiliensis in the recently recognized fungal family Ajellomycetaceae. It is dimorphic and switches from a mould-like (filamentous) growth form in the natural habitat to a small, budding yeast form in the warm-blooded animal host. Like B. dermatitidis, H. capsulatum has two mating types, "+" and "–". The great majority of North American isolates belongs to a single genetic type, but a study of multiple genes suggests a recombining, sexual population. A recent analysis has suggested that the prevalent North American genetic type and a less common type should be considered separate phylogenetic species, distinct from H. capsulatum isolates obtained in Central and South America and other parts of the world. These entities are temporarily designated NAm1 (the rare type, which includes a famous experimental isolate designated "the Downs strain") and NAm2 (the common type). As yet, no well-established clinical or geographic distinction is seen between these two genetic groups. In its asexual form, the fungus grows as a colonial microfungus strongly similar in macromorphology to B. dermatitidis. A microscopic examination shows a marked distinction: H. capsulatum produces two types of conidia, globose macroconidia, 8–15 μm, with distinctive tuberculate or finger-like cell wall ornamentation, and ovoid microconidia, 2–4 μm, which appear smooth or finely roughened. Whether either of these conidial types is the principal infectious particle is unclear. They form on individual short stalks and readily become airborne when the colony is disturbed. Ascomata of the sexual state are 80–250 μm, and are very similar in appearance and anatomy to those described above for B. dermatitidis. The ascospores are similarly minute, averaging 1.5 μm. The budding yeast cells formed in infected tissues are small (about 2–4 μm) and are characteristically seen forming in clusters within phagocytic cells, including histiocytes and other macrophages, as well as monocytes. An African phylogenetic species, H. duboisii, often forms larger yeast cells to 15 μm. Geographic distribution Histoplasma capsulatum is "distributed worldwide, except in Antarctica, but most often associated with river valleys" and occurs chiefly in the "Central and Eastern United States" followed by "Central and South America, and other areas of the world" It is most prevalent in the Ohio and Mississippi river valleys. The enzootic and endemic zones of H. 
capsulatum can be roughly divided into core areas, where the fungus occurs widely in soil or on vegetation contaminated by bird droppings or equivalent organic inputs, and peripheral areas, where the fungus occurs relatively rarely in association with soil, but is still found abundantly in heavy accumulations of bat or bird guano in enclosed spaces such as caves, buildings, and hollow trees. The principal core area for this species includes the valleys of the Mississippi, Ohio, and Potomac Rivers in the USA, as well as a wide span of adjacent areas extending from Kansas, Illinois, Indiana, and Ohio in the north to Mississippi, Louisiana, and Texas in the south. In some areas, such as Kansas City, skin testing with the histoplasmin antigen preparation shows that 80–90% of the resident population have an antibody reaction to H. capsulatum, probably indicating prior subclinical infection. Northern U.S. states such as Minnesota, Michigan, New York and Vermont are peripheral areas for histoplasmosis, but have scattered counties where 5–19% of lifetime residents show exposure to H. capsulatum. One New York county, St. Lawrence county (across the St. Lawrence River from the Cornwall– Preston – Brockville area of Ontario, Canada) shows exposures over 20%. The distribution of H. capsulatum in Canada is not as well documented as in the US. The St. Lawrence Valley is probably the best known endemic region based both on case reports and on a number of skin test reaction studies that were done between 1945 and 1970. The Montreal area is a particularly well documented endemic focus, not just in the agricultural regions surrounding the city but also within the city itself. The Mount Royal area in central Montreal, especially the north and east sides of Mt. Royal Park, showed exposure rates between 20 and 50% in schoolchildren and locally lifetime-resident university students. A particularly high rate of 79.3% exposure was shown in St. Thomas, Ontario, south of London, Ontario, after 7 local residents had died of histoplasmosis in 1957. Based on numerous small regional studies, histoplasmin skin test reactors form ca. 10–50 % of the population in much of southern Ontario and in Quebec’s St. Lawrence Valley, ca. 5% in southern Manitoba and some northerly parts of Quebec (e.g., Abitibi-Témiscamingue), and ca. 1% in Nova Scotia. Exposure of aboriginal Canadians occurs remarkably far north in Quebec, but has not been reported in similar boreal biogeoclimatic zones in many other parts of Canada. Recently and remarkably, a cluster of four indigenously acquired cases of histoplasmosis was shown to be associated with a golf course in suburban Edmonton, Alberta. Examination suggested that local soil was the source. Spectrum of disease Histoplasmosis is usually a subclinical infection that does not come to the attention of the person involved. The organism tends to remain alive in the scattered pulmonary calcifications; therefore, some cases are detected by emergence of serious infection when a patient becomes immunocompromised, perhaps decades later. Frank cases are most often seen as acute pulmonary histoplasmosis, a disease that resembles acute pneumonia but is usually self-limited. It is most often seen in children newly exposed to H. capsulatum or in heavily exposed individuals. Erythematous skin conditions arising from antigen reactions may complicate the disease, as may myalgias, arthralgias, and rarely, arthritic conditions. 
Emphysema sufferers may contract chronic cavitary pulmonary histoplasmosis as a disease complication; eventually the cavity formed may be occupied by an Aspergillus fungus ball (aspergilloma), potentially leading to massive hemoptysis. Another uncommon form of histoplasmosis is a slowly progressing condition known as granulomatous mediastinitis, in which the lymph nodes in the mediastinal cavity between the lungs become inflamed and ultimately necrotic; the swollen nodes or draining fluid may ultimately affect the bronchi, the superior vena cava, the esophagus or the pericardium. A particularly dangerous condition is mediastinal fibrosis, in which a subset of individuals with granulomatous mediastinitis develop an uncontrolled fibrotic reaction that may press on the lungs or the bronchi, or may cause right heart failure. There are a number of other rare pulmonary manifestations of histoplasmosis. Histoplasmosis, like blastomycosis, may disseminate haematogenously to infect internal organs and tissues, but it does so in a very low proportion of cases, and half or more of these dissemination cases involve immunocompromisation. Unlike blastomycosis, histoplasmosis is a recognized AIDS-defining illness in people with HIV infection; disseminated histoplasmosis affects approximately 5% of AIDS patients with CD4+ cell counts <150 cells/μL in highly endemic areas. The incidence of this condition dropped significantly after introduction of current anti-HIV therapies. Other conditions very uncommonly associated with H. capsulatum include endocarditis and peritonitis. Ecology and epidemiology Histoplasma capsulatum appears to be strongly associated with the droppings of certain bird species as well as bats. A mixture of these droppings and certain soil types is particularly conducive to proliferation. In highly endemic areas there is a strong association with soil under and around chicken houses, and with areas where soil or vegetation has become heavily contaminated with faecal material deposited by flocking birds such as starlings and blackbirds. Bird roosting areas that are Histoplasma-free appear to be lower in nitrogen, phosphorus, organic matter and moisture than contaminated roosting areas. The guano of gulls and other colonially nesting water-associated birds is rarely connected to histoplasmosis. Bat dwellings, including caves, attics and hollow trees, are classic H. capsulatum habitats. Histoplasmosis outbreaks are typically associated with cleaning guano accumulations or clearing guano-covered vegetation, or with exploration of bat caves. In addition, however, outbreaks may be associated with wind-blown dust liberated by construction projects in endemic areas: a classic outbreak is one associated with intense construction activity, including subway construction, in Montreal in 1963. As with blastomycosis, a good understanding of the precise ecological affinities of H. capsulatum is greatly complicated by the difficulty of isolating the fungus directly from nature. Again, the mouse passage procedure originally devised by Emmons must be used. A direct PCR technique for detection of H. capsulatum in soil has been published. H. capsulatum appears particularly likely to cause clinical disease in young children, persons working in sites contaminated by conducive bird or bat droppings, persons exposed to construction dust raised from contaminated sites, immunocompromised patients, and emphysema sufferers. 
Elimination of the agent from contaminated soils typically involves the use of toxic fumigants with limited success. Etymology In 1905, Samuel Taylor Darling serendipitously identified a protozoan-like microorganism in an autopsy specimen while trying to understand malaria, which was prevalent during the construction of the Panama Canal. He named this microorganism Histoplasma capsulatum because it invaded the cytoplasm (plasma) of histiocyte-like cells (Histo) and had a refractive halo mimicking a capsule (capsulatum), a misnomer. Additional images See also Blastomyces Coccidioides References Onygenales Fungi described in 1906 Fungi and humans Fungal pathogens of humans Fungus species
Histoplasma capsulatum
[ "Biology" ]
2,381
[ "Fungi and humans", "Fungi", "Fungus species", "Humans and other species" ]
9,261,777
https://en.wikipedia.org/wiki/Solarimeter
A solarimeter is a pyranometer, a type of measuring device used to measure combined direct and diffuse solar radiation. An integrating solarimeter measures energy developed from solar radiation based on the absorption of heat by a black body. The principle on which this instrument was designed was first developed by the Italian priest Father Angelo Bellani, who invented the actinometric method, which is based on physical and chemical techniques. References Meteorological instrumentation and equipment
Solarimeter
[ "Technology", "Engineering" ]
89
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
9,262,446
https://en.wikipedia.org/wiki/Allysine
Allysine is a derivative of lysine that features a formyl group in place of the terminal amine. The free amino acid does not exist, but the allysine residue does. It is produced by aerobic oxidation of lysine residues by the enzyme lysyl oxidase. The transformation is an example of a post-translational modification. The semialdehyde form exists in equilibrium with a cyclic derivative. Allysine is involved in the production of elastin and collagen. Increased allysine concentration in tissues has been correlated with the presence of fibrosis. Allysine residues react with sodium 2-naphthol-6-sulfonate to produce a fluorescent bis-naphthol-allysine product. In another assay, allysine-containing proteins are reduced with sodium borohydride to give a peptide containing the 6-hydroxynorleucine (6-hydroxy-2-aminocaproic acid) residue, which (unlike allysine) is stable to proteolysis. See also Saccharopine References Alpha-Amino acids Aldehydes Aldehydic acids
Allysine
[ "Chemistry", "Biology" ]
246
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
9,262,565
https://en.wikipedia.org/wiki/Isodesmosine
Isodesmosine is a lysine derivative found in elastin. It is an isomeric pyridinium-based amino acid resulting from the condensation of four lysine residues between elastin proteins by lysyl oxidase. These cross-links represent ideal biomarkers for monitoring elastin turnover because they are only found in mature elastin in mammals. See also Desmosine References Alpha-Amino acids
Isodesmosine
[ "Chemistry", "Biology" ]
99
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
9,262,614
https://en.wikipedia.org/wiki/DocBook%20XSL
The DocBook XSL stylesheets are a set of XSLT stylesheets for the XML-based DocBook language. Purpose DocBook is a semantic markup language. That is, it specifies the meaning of the elements in a document, not how they are intended to be presented to the end user. It provides separation between the content of the document and the visual representation. While DocBook is a readable markup language, it is not intended to be read by end-users in its DocBook form. The purpose of DocBook XSL is to provide a standard set of transformations from DocBook to several presentational formats. Output formats DocBook XSL provides for transforms into the following formats: HTML, both as single pages and in a "chunked" format that outputs sections to different pages. XHTML XSL-FO, and from there, usually PDF Man Pages WebHelp Web help Webhelp is a chunked HTML output format in the DocBook xslt stylesheets that was introduced in version 1.76.1. The documentation for web help also provides an example of web help and is part of the DocBook xsl distribution. Its major features include CSS-based page layout without frameset, multilingual full content search, Table of contents (TOC) pane with collapsible TOC tree, Auto-synchronization of content pane and TOC. This web help format was originally implemented by Kasun Gajasinghe and David Cramer as part of the Google Summer of Code 2010 program. DocBook XSL also has transformations to slide-like formats for HTML and XSL-FO. EPUB support is currently experimental. Configuration DocBook XSL's stylesheets are highly configurable. Each of the different formats has a number of XSLT parameters available for simple customization. For example, the XSL-FO transforms allow the user to define the size of the pages. Additionally, the XSLT documents themselves are modular; it is possible for the user to add, change, or replace particular levels of functionality. This can allow DocBook XSL to process new documentation tags added to the standard DocBook, or to simply change how the XSLTs generate the resulting format. References External links DocBook Project - SourceForge project maintaining the DocBook XSL and DSSSL transforms. DocBook XSL Reference - Reference documentation for DocBook XSL transforms. - HTML edition of book explaining the use of DocBook XSL. Docbkx Maven Plugin - A Maven plugin based on the DocBook XSL Stylesheets, packaging everything required to target multiple output formats. ant4docbook - an Ant task for DocBook. Typesetting software DocBook
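As an illustration of the parameter-based customization described above, the following sketch drives the XSL-FO stylesheet from Python by shelling out to xsltproc, the libxslt command-line processor commonly used with DocBook XSL. The stylesheet path and file names are placeholders that depend on the local installation; paper.type and body.font.master are two of the stock customization parameters.

```python
import subprocess

# Placeholder paths -- adjust to the local DocBook XSL installation and document.
DOCBOOK_FO_XSL = "/usr/share/xml/docbook/stylesheet/docbook-xsl/fo/docbook.xsl"
SOURCE = "book.xml"   # DocBook source document
OUTPUT = "book.fo"    # XSL-FO result, to be fed to an FO processor to produce PDF

subprocess.run(
    [
        "xsltproc",
        "--output", OUTPUT,
        "--stringparam", "paper.type", "A4",         # page-size parameter
        "--stringparam", "body.font.master", "11",   # base font size in points
        DOCBOOK_FO_XSL,
        SOURCE,
    ],
    check=True,
)
```

The same pattern works for the HTML and chunked-HTML stylesheets; only the stylesheet path and the relevant parameters change.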
DocBook XSL
[ "Technology" ]
576
[ "Computing stubs", "Digital typography stubs" ]
9,262,725
https://en.wikipedia.org/wiki/Fenoprop
Fenoprop, also called 2,4,5-TP, is the organic compound 2-(2,4,5-trichlorophenoxy)propionic acid. It is a phenoxy herbicide and a plant growth regulator, an analog of 2,4,5-T in which the latter's acetic acid sidechain is replaced with a propionate group (with an extra CH3). The addition of this extra methyl group creates a chiral centre in the molecule and useful biological activity is found only in the (2R)-isomer. The compound's mechanism of action is to mimic the auxin growth hormone indoleacetic acid (IAA). When sprayed on plants it induces rapid, uncontrolled growth. As with 2,4,5-T, fenoprop is toxic to shrubs and trees. The name Silvex was used in the USA but it has been banned from use there since 1985. According to the Environmental Protection Agency its greatest use was as a postemergence herbicide for control of woody plants, and broadleaf herbaceous weeds in rice and bluegrass turf, in sugarcane, in rangeland improvement programs and on lawns. Fenoprop and some of its esters were in use from 1945 but are now obsolete. See also Phenoxy herbicides 2,4,5-Trichlorophenoxyacetic acid References Auxinic herbicides Carboxylic acids Chlorobenzene derivatives Phenol ethers
Fenoprop
[ "Chemistry" ]
322
[ "Carboxylic acids", "Functional groups" ]
9,263,122
https://en.wikipedia.org/wiki/Lead%20scandium%20tantalate
Lead scandium tantalate (PST) is a mixed oxide of lead, scandium, and tantalum. It has the formula Pb(Sc0.5Ta0.5)O3. It is a ceramic material with a perovskite structure, where the Sc and Ta atoms at the B site have an arrangement that is intermediate between ordered and disordered configurations, and can be fine-tuned with thermal treatment. It is ferroelectric at temperatures below , and is also piezoelectric. Like structurally similar lead zirconate titanate and barium strontium titanate, PST can be used for manufacture of uncooled focal plane array infrared imaging sensors for thermal cameras. References Lead(II) compounds Scandium compounds Tantalates Ceramic materials Ferroelectric materials Piezoelectric materials Infrared sensor materials Perovskites
Lead scandium tantalate
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
182
[ "Physical phenomena", "Inorganic compounds", "Ferroelectric materials", "Tantalates", "Inorganic compound stubs", "Salts", "Materials", "Electrical phenomena", "Ceramic materials", "Ceramic engineering", "Piezoelectric materials", "Hysteresis", "Matter" ]
9,263,732
https://en.wikipedia.org/wiki/Commercial%20Standard%20Digital%20Bus
The Commercial Standard Digital Bus (CSDB) is a multidrop bus, formerly known as the Collins Standard Digital Bus. The maximum speed is 50 kbit/s. Most civilian aircraft use one of 3 serial buses: the Commercial Standard Digital Bus (CSDB), ARINC 429, or AS-15531. The Commercial Standard Digital Bus is a two-wire asynchronous broadcast data transmission bus. Data is transmitted over an interconnecting cable by devices that comply with Electronic Industries Association (EIA) RS-422A. The physical layer is EIA-422. Messages on the CSDB consist of one address byte followed by any number of data bytes. References Avionics Serial buses
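As a rough illustration of the message layout just described (one address byte followed by data bytes), the following sketch assembles such a frame. It is illustrative only; real CSDB traffic involves additional framing and timing details defined by the specification that are not modelled here, and the address and data values are hypothetical.

```python
def csdb_frame(address: int, data: bytes) -> bytes:
    """Build a CSDB-style message: one address byte followed by data bytes."""
    if not 0 <= address <= 0xFF:
        raise ValueError("address must fit in a single byte")
    return bytes([address]) + bytes(data)

# Hypothetical example: address 0xA3 carrying three data bytes.
frame = csdb_frame(0xA3, b"\x01\x02\x03")
print(frame.hex())  # -> a3010203
```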
Commercial Standard Digital Bus
[ "Technology" ]
147
[ "Avionics", "Aircraft instruments" ]
9,264,025
https://en.wikipedia.org/wiki/Terrain%20awareness%20and%20warning%20system
In aviation, a terrain awareness and warning system (TAWS) is generally an on-board system aimed at preventing unintentional impacts with the ground, termed "controlled flight into terrain" accidents, or CFIT. The specific systems currently in use are the ground proximity warning system (GPWS) and the enhanced ground proximity warning system (EGPWS). The U.S. Federal Aviation Administration (FAA) introduced the generic term TAWS to encompass all terrain-avoidance systems that meet the relevant FAA standards, which include GPWS, EGPWS and any future system that might replace them. As of 2007, 5% of the world's commercial airlines still lacked a TAWS. A study by the International Air Transport Association examined 51 accidents and incidents and found that pilots did not adequately respond to a TAWS warning in 47% of cases. Several factors can still place aircraft at risk for CFIT accidents: older TAWS systems, deactivation of the EGPWS system, or ignoring TAWS warnings when an airport is not in the TAWS database. History Beginning in the early 1970s, a number of studies looked at the occurrence of CFIT accidents, where a properly functioning airplane under the control of a fully qualified and certificated crew is flown into terrain (or water or obstacles) with no apparent awareness on the part of the crew. In the 1960s and 70s, there was an average of one CFIT accident per month, and CFIT was the single largest cause of air travel fatalities during that time. C. Donald Bateman, an engineer at Honeywell, is credited with developing the first ground proximity warning system (GPWS); in an early test, conducted after the 1971 crash of Alaska Airlines Flight 1866, the device provided sufficient warning for a small plane to avoid the terrain, but not enough for the larger Boeing 727 jetliner involved. Bateman's earliest devices, developed in the 1960s, used radio waves to measure altitude and triggered an alarm when the aircraft was too low, but it was not aimed forward and could not provide sufficient warning of steeply rising terrain ahead. Early GPWS mandates Findings from these early studies indicated that many such accidents could have been avoided if a GPWS had been used. As a result of these studies and recommendations from the U.S. National Transportation Safety Board (NTSB), in 1974 the FAA required all (Part 121) certificate holders (that is, those operating large turbine-powered airplanes) and some (Part 135) certificate holders (that is, those operating large turbojet airplanes) to install TSO-approved GPWS equipment. In 1978, the FAA extended the GPWS requirement to Part 135 certificate holders operating smaller airplanes: turbojet-powered airplanes with ten or more passenger seats. These operators were required to install TSO-approved GPWS equipment or alternative ground proximity advisory systems that provide routine altitude callouts whether or not there is any imminent danger. This requirement was considered necessary because of the complexity, size, speed, and flight performance characteristics of these airplanes. The GPWS equipment was considered essential in helping the pilots of these airplanes to regain altitude quickly and avoid what could have been a CFIT accident. Installation of GPWS or alternative FAA-approved advisory systems was not required on turbo-propeller powered (turboprop) airplanes operated under Part 135 because, at that time, the general consensus was that the performance characteristics of turboprop airplanes made them less susceptible to CFIT accidents. 
For example, it was thought that turboprop airplanes had a greater ability to respond quickly in situations where altitude control was inadvertently neglected, as compared to turbojet airplanes. However, later studies, including investigations by the NTSB, analyzed CFIT accidents involving turboprop airplanes and found that many of these accidents could have been avoided if GPWS equipment had been used. Some of these studies also compared the effectiveness of the alternative ground proximity advisory system to the GPWS. GPWS was found to be superior in that it would warn only when necessary, provide maximum warning time with minimal unwanted alarms, and use command-type warnings. Based on these reports and NTSB recommendations, in 1992 the FAA amended §135.153 to require GPWS equipment on all turbine-powered airplanes with ten or more passenger seats. Evolution to EGPWS & TAWS After these rules were issued, advances in terrain mapping technology permitted the development of a new type of ground proximity warning system that provides greater situational awareness for flight crews. The FAA has approved certain installations of this type of equipment, known as the enhanced ground proximity warning system (EGPWS). However, in the proposed final rule, the FAA is using the broader term "terrain awareness and warning system" (TAWS) because the FAA expects that a variety of systems may be developed in the near future that would meet the improved standards contained in the proposed final rule. The breakthrough that enabled successful EGPWS came after the dissolution of the Soviet Union in 1991; the USSR had created detailed terrain maps of the world, and Bateman convinced his director of engineering to purchase them after the political chaos made them available, enabling earlier terrain warnings. The TAWS improves on existing GPWS systems by providing the flight crew much earlier aural and visual warning of impending terrain, forward looking capability, and continued operation in the landing configuration. These improvements provide more time for the flight crew to make smoother and gradual corrective action. United Airlines was an early adopter of the EGPWS technology. The CFIT of American Airlines Flight 965 in 1995 convinced that carrier to add EGPWS to all its aircraft; although the Boeing 757 was equipped with the earlier GPWS, the terrain warning was issued only 13 seconds before the crash. In 1998, the FAA issued Notice No. 98-11, Terrain Awareness and Warning System, proposing that all turbine-powered U.S.-registered airplanes type certificated to have six or more passenger seats (exclusive of pilot and copilot seating), be equipped with an FAA-approved terrain awareness and warning system. On March 23, 2000, the FAA issued Amendments 91–263, 121–273, and 135-75 (Correction 135.154). These amendments amended the operating rules to require that all U.S. registered turbine-powered airplanes with six or more passenger seats (exclusive of pilot and copilot seating) be equipped with an FAA-approved TAWS. The mandate only affects aircraft manufactured after March 29, 2002. By 2006, aircraft upset accidents had overtaken CFIT as the leading cause of aircraft accident fatalities, credited to the widespread deployment of TAWS. On March 7, 2006, the NTSB called on the FAA to require all U.S.-registered turbine-powered helicopters certified to carry at least 6 passengers to be equipped with a terrain awareness and warning system. 
The technology had not yet been developed for the unique flight characteristics of helicopters in 2000. A fatal helicopter crash in the Gulf of Mexico, involving an Era Aviation Sikorsky S-76A++ helicopter with two pilots transporting eight oil service personnel, was one of many crashes that prompted the decision. President Barack Obama awarded the National Medal of Technology and Innovation to Bateman in 2010 for his invention of GPWS and its later evolution into EGPWS/TAWS. Workings A modern TAWS works by using digital elevation data and airplane instrumental values to predict if a likely future position of the aircraft intersects with the ground. The flight crew is thus provided with "earlier aural and visual warning of impending terrain, forward looking capability, and continued operation in the landing configuration." TAWS types Class A TAWS includes all the requirements of Class B TAWS, below, and adds the following additional three alerts and display requirements of: Excessive closure rate to terrain alert Flight into terrain when not in landing configuration alert Excessive downward deviation from an ILS glideslope alert Required: Class A TAWS installations shall provide a terrain awareness display that shows either the surrounding terrain or obstacles relative to the airplane, or both. Class B TAWS is defined by the U.S. FAA as: A class of equipment that is defined in TSO-C151b and RTCA DO-161A. As a minimum, it will provide alerts for the following circumstances: Reduced required terrain clearance Imminent terrain impact Premature descent Excessive rates of descent Negative climb rate or altitude loss after takeoff Descent of the airplane to 500 feet above the terrain or nearest runway elevation (voice callout "Five Hundred") during a non-precision approach. Optional: Class B TAWS installation may provide a terrain awareness display that shows either the surrounding terrain or obstacles relative to the airplane, or both. Class C defines voluntary equipment intended for small general aviation airplanes that are not required to install Class B equipment. This includes minimum operational performance standards intended for piston-powered and turbine-powered airplanes, when configured with fewer than six passenger seats, excluding any pilot seats. Class C TAWS equipment shall meet all the requirements of a Class B TAWS with the small aircraft modifications described by the FAA. The FAA has developed Class C to make voluntary TAWS usage easier for small aircraft. Effects and statistics Prior to the development of GPWS, large passenger aircraft were involved in 3.5 fatal CFIT accidents per year, falling to 2 per year in the mid-1970s. A 2006 report stated that from 1974, when the U.S. FAA made it a requirement for large aircraft to carry such equipment, until the time of the report, there had not been a single passenger fatality in a CFIT crash by a large jet in U.S. airspace. After 1974, there were still some CFIT accidents that GPWS was unable to help prevent, due to the "blind spot" of those early GPWS systems. More advanced systems were developed. Older TAWS, or deactivation of the EGPWS, or ignoring its warnings when airport is not in its database, still leave aircraft vulnerable to possible CFIT incidents. In April 2010, a Polish Air Force Tupolev Tu-154M aircraft crashed near Smolensk, Russia, in a possible CFIT accident killing all passengers and crew, including the Polish President. The aircraft was equipped with TAWS made by Universal Avionics Systems of Tucson. 
According to the Russian Interstate Aviation Committee TAWS was turned on. However, the airport where the aircraft was going to land (Smolensk (XUBS)) is not in the TAWS database. In January 2008 a Polish Air Force Casa C-295M crashed in a CFIT accident near Mirosławiec, Poland, despite being equipped with EGPWS; the investigation found the EGPWS warning sounds had been disabled, and the pilot-in-command was not properly trained with EGPWS. See also Index of aviation articles List of aviation, avionics, aerospace and aeronautical abbreviations Airborne collision avoidance system Controlled flight into terrain (CFIT) Digital fly-by-wire Ground proximity warning system / enhanced GPWS Runway Awareness and Advisory System References External links Honeywell Enhanced Ground Proximity Warning System (EGPWS) FAR Sec. 121.354 – Terrain awareness and warning system Terrain Awareness and Warning System; Final Rule TSO-C151b Terrain Avoidance and Warning System PDF , TSO-C151b Web Page TAWS - FAA Mandates A New Proximity to Safety by Gary Picou Avionics Aircraft collision avoidance systems
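The forward-looking prediction described under "Workings" above can be sketched as a toy calculation: project the aircraft's position and altitude ahead along its track and compare the predicted altitude against a terrain elevation model. This is only an illustration of the idea; real TAWS/EGPWS equipment uses certified terrain and obstacle databases and far more elaborate alert envelopes, and every value and function name below is hypothetical.

```python
import math

def predicted_conflict(lat, lon, alt_ft, track_deg, gs_kt, vs_fpm,
                       terrain_elev_ft, look_ahead_s=60, step_s=5,
                       margin_ft=300.0):
    """Project the aircraft forward along its track and return (True, t) if the
    predicted altitude falls below terrain elevation plus a clearance margin
    within the look-ahead window, otherwise (False, None)."""
    for t in range(step_s, look_ahead_s + 1, step_s):
        dist_nm = gs_kt * t / 3600.0                   # ground distance covered in t seconds
        d_lat = (dist_nm / 60.0) * math.cos(math.radians(track_deg))
        d_lon = ((dist_nm / 60.0) * math.sin(math.radians(track_deg))
                 / max(math.cos(math.radians(lat)), 1e-6))
        alt = alt_ft + vs_fpm * t / 60.0               # predicted altitude
        if alt < terrain_elev_ft(lat + d_lat, lon + d_lon) + margin_ft:
            return True, t
    return False, None

# Toy run: descending at 800 ft/min toward flat terrain at 2,000 ft elevation.
alert, seconds = predicted_conflict(lat=47.0, lon=8.0, alt_ft=2500, track_deg=90,
                                    gs_kt=180, vs_fpm=-800,
                                    terrain_elev_ft=lambda la, lo: 2000.0)
print(alert, seconds)  # -> True 20 (conflict predicted about 20 seconds ahead)
```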
Terrain awareness and warning system
[ "Technology" ]
2,314
[ "Avionics", "Aircraft collision avoidance systems", "Aircraft instruments" ]
7,119,900
https://en.wikipedia.org/wiki/Hall%27s%20conjecture
In mathematics, Hall's conjecture is an open question on the differences between perfect squares and perfect cubes. It asserts that a perfect square y² and a perfect cube x³ that are not equal must lie a substantial distance apart. This question arose from consideration of the Mordell equation in the theory of integer points on elliptic curves. The original version of Hall's conjecture, formulated by Marshall Hall, Jr. in 1970, says that there is a positive constant C such that for any integers x and y for which y² ≠ x³, |y² − x³| > C|x|^(1/2). Hall suggested that perhaps C could be taken as 1/5, which was consistent with all the data known at the time the conjecture was proposed. Danilov showed in 1982 that the exponent 1/2 on the right side (that is, the use of |x|^(1/2)) cannot be replaced by any higher power: for no δ > 0 is there a constant C such that |y² − x³| > C|x|^(1/2 + δ) whenever y² ≠ x³. In 1965, Davenport proved an analogue of the above conjecture in the case of polynomials: if f(t) and g(t) are nonzero polynomials over the complex numbers C such that g(t)³ ≠ f(t)² in C[t], then deg(g(t)³ − f(t)²) ≥ (1/2)·deg g(t) + 1. The weak form of Hall's conjecture, stated by Stark and Trotter around 1980, replaces the square root on the right side of the inequality by any exponent less than 1/2: for any ε > 0, there is some constant c(ε) depending on ε such that for any integers x and y for which y² ≠ x³, |y² − x³| > c(ε)|x|^(1/2 − ε). The original, strong, form of the conjecture with exponent 1/2 has never been disproved, although it is no longer believed to be true and the term Hall's conjecture now generally means the version with the ε in it. For example, in 1998, Noam Elkies found the example 447884928428402042307918² − 5853886516781223³ = −1641843, for which compatibility with Hall's conjecture would require C to be less than 0.0214 ≈ 1/50, so roughly 10 times smaller than the original choice of 1/5 that Hall suggested. The weak form of Hall's conjecture would follow from the ABC conjecture. A generalization to other perfect powers is Pillai's conjecture, though it is also known that Pillai's conjecture would be true if Hall's conjecture held for any specific 0 < ε < 1/2. The table below displays the known cases with . Note that y can be computed as the nearest integer to x^(3/2). This list is known to contain all examples with (the first 44 entries in the table) but may be incomplete past that point. References Elkies, N.D., "Rational points near curves and small nonzero |x³ − y²| via lattice reduction", http://arxiv.org/abs/math/0005139 Danilov, L.V., "The Diophantine equation x³ − y² = k and Hall's conjecture", Math. Notes Acad. Sci. USSR 32 (1982), 617–618. Gebel, J., Pethö, A., and Zimmer, H.G., "On Mordell's equation", Compositio Math. 110 (1998), 335–367. I. Jiménez Calvo, J. Herranz and G. Sáez Moreno, "A new algorithm to search for small nonzero |x³ − y²| values", Math. Comp. 78 (2009), pp. 2435–2444. S. Aanderaa, L. Kristiansen and H. K. Ruud, "Search for good examples of Hall's conjecture", Math. Comp. 87 (2018), 2903–2914. External links a page on the problem by Noam Elkies Conjectures Unsolved problems in number theory Abc conjecture
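The remark that y can be taken as the nearest integer to x^(3/2) suggests a simple numerical check. The sketch below merely evaluates one candidate x; it is not the lattice-reduction search of the cited papers, and the function name is invented for this illustration.

    from math import isqrt

    def hall_bound(x):
        """For y = nearest integer to x**1.5, return |y**2 - x**3| and |y**2 - x**3| / sqrt(x),
        the latter being the largest C consistent with this pair under the strong conjecture."""
        y = isqrt(x**3)                          # floor of x^(3/2), exact integer arithmetic
        if (y + 1)**2 - x**3 < x**3 - y**2:      # round to the nearest integer
            y += 1
        diff = abs(y**2 - x**3)
        return diff, diff / x**0.5

    # Elkies' 1998 example: prints (1641843, 0.0214...)
    print(hall_bound(5853886516781223))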
Hall's conjecture
[ "Mathematics" ]
878
[ "Unsolved problems in mathematics", "Unsolved problems in number theory", "Conjectures", "Abc conjecture", "Mathematical problems", "Number theory" ]
7,119,987
https://en.wikipedia.org/wiki/Index%20of%20robotics%20articles
Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots. Robotics is related to the sciences of electronics, engineering, mechanics, and software. The word "robot" was introduced to the public by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920. The term "robotics" was coined by Isaac Asimov in his 1941 science fiction short story "Liar!" Articles related to robotics include: 0–9 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z References External links Robotics Robotics
Index of robotics articles
[ "Technology" ]
144
[ "Indexes of computer topics", "Computers" ]
7,120,484
https://en.wikipedia.org/wiki/Glass%20ionomer%20cement
A glass ionomer cement (GIC) is a dental restorative material used in dentistry as a filling material and luting cement, including for orthodontic bracket attachment. Glass-ionomer cements are based on the reaction of silicate glass-powder (calciumaluminofluorosilicate glass) and polyacrylic acid, an ionomer. Occasionally water is used instead of an acid, altering the properties of the material and its uses. This reaction produces a powdered cement of glass particles surrounded by matrix of fluoride elements and is known chemically as glass polyalkenoate. There are other forms of similar reactions which can take place, for example, when using an aqueous solution of acrylic/itaconic copolymer with tartaric acid, this results in a glass-ionomer in liquid form. An aqueous solution of maleic acid polymer or maleic/acrylic copolymer with tartaric acid can also be used to form a glass-ionomer in liquid form. Tartaric acid plays a significant part in controlling the setting characteristics of the material. Glass-ionomer based hybrids incorporate another dental material, for example resin-modified glass ionomer cements (RMGIC) and compomers (or modified composites). Non-destructive neutron scattering has evidenced GIC setting reactions to be non-monotonic, with eventual fracture toughness dictated by changing atomic cohesion, fluctuating interfacial configurations and interfacial terahertz (THz) dynamics. It is on the World Health Organization's List of Essential Medicines. Background Glass ionomer cement is primarily used in the prevention of dental caries. This dental material has good adhesive bond properties to tooth structure, allowing it to form a tight seal between the internal structures of the tooth and the surrounding environment. Dental caries are caused by bacterial production of acid during their metabolic actions. The acid produced from this metabolism results in the breakdown of tooth enamel and subsequent inner structures of the tooth, if the disease is not intervened by a dental professional, or if the carious lesion does not arrest and/or the enamel re-mineralises by itself. Glass ionomer cements act as sealants when pits and fissures in the tooth occur and release fluoride to prevent further enamel demineralisation and promote remineralisation. Fluoride can also hinder bacterial growth, by inhibiting their metabolism of ingested sugars in the diet. It does this by inhibiting various metabolic enzymes within the bacteria. This leads to a reduction in the acid produced during the bacteria's digestion of food, preventing a further drop in pH and therefore preventing caries. There is evidence that when using sealants, only 6% of people develop tooth decay over a 2-year period, in comparison to 40% of people when not using a sealant. However, it is recommended that the use of fluoride varnish alongside glass ionomer sealants should be applied in practice to further reduce the risk of secondary dental caries. Resin-modified glass ionomers The addition of resin to glass ionomers improves them significantly, allowing them to be more easily mixed and placed. Resin-modified glass ionomers allow equal or higher fluoride release and there is evidence of higher retention, higher strength and lower solubility. Resin-based glass ionomers have two setting reactions: an acid-base setting and a free-radical polymerisation. The free-radical polymerisation is the predominant mode of setting, as it occurs more rapidly than the acid-base mode. 
Only the material properly activated by light will be optimally cured. The presence of resin protects the cement from water contamination. Due to the shortened working time, it is recommended that placement and shaping of the material occur as soon as possible after mixing. History Dental sealants were first introduced as part of the preventative programme in the late 1960s, in response to increasing cases of caries in the pits and fissures of occlusal surfaces. This led to glass ionomer cements being introduced in 1972 by Wilson and Kent as a derivative of the silicate cements and the polycarboxylate cements. The glass ionomer cements incorporated the fluoride-releasing properties of the silicate cements with the adhesive qualities of polycarboxylate cements. This incorporation allowed the material to be stronger, less soluble and more translucent (and therefore more aesthetic) than its predecessors. Glass ionomer cements were initially intended to be used for the aesthetic restoration of anterior teeth and were recommended for restoring Class III and Class V cavity preparations. There have now been further developments in the material's composition to improve properties. For example, the addition of metal or resin particles into the sealant is favoured due to the longer working time and the material being less sensitive to moisture during setting. When glass ionomer cements were first used, they were mainly used for the restoration of abrasion/erosion lesions and as a luting agent for crown and bridge reconstructions. However, this has now been extended to occlusal restorations in deciduous dentition, restoration of proximal lesions, and cavity bases and liners. This is made possible by the ever-increasing new formulations of glass ionomer cements. One of the early commercially successful GICs, employing G338 glass and developed by Wilson and Kent, served its purpose as a non-load-bearing restorative material. However, this glass resulted in a cement too brittle for use in load-bearing applications such as in molar teeth. The properties of G338 have been shown to be related to its phase composition, specifically the interplay between its three amorphous phases (Ca/Na-Al-Si-O, Ca-Al-F and Ca-P-O-F), as characterised by mechanical testing, differential scanning calorimetry (DSC) and X-ray diffraction (XRD), as well as quantum chemical modelling and ab initio molecular dynamics simulations. Glass ionomer versus resin-based sealants When the two dental sealants are compared, there has long been disagreement as to which material is more effective in caries reduction. Therefore, there are claims against replacing resin-based sealants, the current gold standard, with glass ionomer. Advantages Glass ionomer sealants are thought to prevent caries through a steady fluoride release over a prolonged period, and the fissures remain more resistant to demineralization even after the visible loss of sealant material. However, a systematic review found no difference in caries development when GIC was used as a fissure-sealing material compared with conventional resin-based sealants; in addition, it has less retention to the tooth structure than the resin-based sealants. These sealants have hydrophilic properties, allowing them to be an alternative to the hydrophobic resins in the generally wet oral cavity. Resin-based sealants are easily destroyed by saliva contamination.
They chemically bond with both enamel and dentin and do not necessarily require preparation/mechanical retention and can therefore be applied without harming existing tooth structure. This makes them ideal in many situations when tooth preservation is foremost and with minimally invasive techniques, particularly Class V fillings where there is a larger area of exposed dentin with only a thin ring of enamel. This often results in longer retention and service life than resin Class V fillings. They chemically bond to enamel and dentin leaving a smaller gap for bacteria to enter. Particularly when paired with silver diamine fluoride this can arrest caries and harden active caries and prevent further damage. They can be placed and cured outside of clinical settings and do not require a curing light. Chemically curable glass ionomer cements are considered safe from allergic reactions but a few have been reported with resin-based materials. Nevertheless, allergic reactions are very rarely associated with both sealants. Disadvantages The main disadvantage of glass ionomer sealants or cements has been inadequate retention or simply lack of strength, toughness, and limited wear resistance. For instance, due to its poor retention rate, periodic recalls are necessary, even after 6 months, to eventually replace the lost sealant. Different methods have been used to address the physical shortcomings of the glass ionomer cements such as thermo-light curing (polymerization), or addition of the zirconia, hydroxyapatite, N-vinyl pyrrolidone, N-vinyl caprolactam, and fluoroapatite to reinforce the glass ionomer cements. Clinical applications Glass ionomers are widely used due to their versatile properties and ease of use. Prior to procedures, starter materials for glass ionomers are supplied as a powder and liquid or as a powder mixed with water. These materials can be mixed and encapsulated. Preparation of the material should involve following manufacture instructions. A paper pad or cool dry glass slab may be used for mixing the raw materials though it is important to note that the use of the glass slab will retard the reaction and hence increase the working time. The raw materials in liquid and powder form should not be dispensed onto the chosen surface until the mixture is required in the clinical procedure the glass ionomer is being used for, as a prolonged exposure to the atmosphere could interfere with the ratio of chemicals in the liquid. At the stage of mixing, a spatula should be used to rapidly incorporate the powder into the liquid for a duration of 45–60 seconds depending on manufacture instructions and the individual products. Once mixed together to form a paste, an acid-base reaction occurs which allows the glass ionomer complex to set over a certain period of time and this reaction involves four overlapping stages: Dissolution Gelation Hardening (3–6 min) Maturation (24 hr – 1 yr) It is important to note that glass ionomers have a long setting time and need protection from the oral environment in order to minimize interference with dissolution and prevent contamination. The type of application for glass ionomers depends on the cement consistency as varying levels of viscosity from very high viscosity to low viscosity, can determine whether the cement is used as luting agents, orthodontic bracket adhesives, pit and fissure sealants, liners and bases, core build-ups, or intermediate restorations. 
Clinical uses The different clinical uses of glass ionomer compounds as restorative materials include; Cermets, which are essentially metal reinforced, glass ionomer cements, used to aid in restoring tooth loss as a result of decay or cavities to the tooth surfaces near the gingival margin, or the tooth roots, though cermets can be incorporated at other sites on various teeth, depending on the function required. They maintain adhesion to enamel and dentine and have an identical setting reaction to other glass ionomers. The development of cermets is an attempt to improve the mechanical properties of glass ionomers, particularly brittleness and abrasion resistance by incorporating metals such as silver, tin, gold and titanium. The use of these materials with glass ionomers appears to increase the value of compressive strength and fatigue limit as compared to conventional glass ionomer, however there is no marked difference in the flexural strength and resistance to abrasive wear as compared to glass ionomers. Dentine surface treatment, which can be performed with glass ionomer cements as the cement has adhesive characteristics which may be useful when placed in undercut cavities. The surfaces on which the glass cement ionomers are placed would be adequately prepared by removing the precipitated salivary proteins, present from saliva as this would greatly reduce the receptiveness of the glass ionomer cement and dentine surface, to bond formation. A number of different substances can be used to remove this element, such as citric acid, however the most effective substance seems to be polyacrylic acid, which is applied to the tooth surface for 30 seconds before it is washed off. The tooth is then dried to ensure the surface is receptive to bond formation but care is taken to ensure desiccation does not occur. Matrix techniques with glass ionomers, which are used to aid in proximal cavity restorations of anterior teeth. Between the teeth that are adjacent to the cavity, the matrix is inserted, commonly before any dentine surface conditioning. Once the material is inserted in excess, the matrix is placed around the tooth root and kept in place with the help of firm digital pressure while the material sets. Once set, the matrix can be carefully removed using a sharp probe or excavator. Fissure sealants, which involve the use of glass ionomers as the materials can be mixed to achieve a certain fluid consistency and viscosity that allows the cement to sink into fissures and pits located in posterior teeth and fill these spaces which pose as a site for caries risk, thereby reducing the risk of caries manifesting. Orthodontic brackets, which can involve the use of glass ionomer cements as an adhesive cement that forms strong chemical bonds between the enamel and the many metals which are used in orthodontic brackets such as stainless steel. Fluoride varnishes have been combined with sealant application in the prevention of dental caries. There is low certainty evidence that the combined usage of both increases the overall effectiveness as compared to using fluoride varnish alone. Chemistry and setting reaction All GICs contain a basic glass and an acidic polymer liquid, which set by an acid-base reaction. The polymer is an ionomer, containing a small proportion – some 5 to 10% – of substituted ionic groups. These allow it to be acid decomposable and clinically set readily. 
The glass filler is generally a calcium alumino fluorosilicate powder, which upon reaction with a polyalkenoic acid gives a glass polyalkenoate-glass residue set in an ionised, polycarboxylate matrix. The acid base setting reaction begins with the mixing of the components. The first phase of the reaction involves dissolution. The acid begins to attack the surface of the glass particles, as well as the adjacent tooth substrate, thus precipitating their outer layers but also neutralising itself. As the pH of the aqueous solution rises, the polyacrylic acid begins to ionise, and becoming negatively charged it sets up a diffusion gradient and helps draw cations out of the glass and dentine. The alkalinity also induces the polymers to dissociate, increasing the viscosity of the aqueous solution. The second phase is gelation, where as the pH continues to rise and the concentration of the ions in solution to increase, a critical point is reached and insoluble polyacrylates begin to precipitate. These polyanions have carboxylate groups whereby cations bind them, especially Ca2+ in this early phase, as it is the most readily available ion, crosslinking into calcium polyacrylate chains that begin to form a gel matrix, resulting in the initial hard set, within five minutes. Crosslinking, H bonds and physical entanglement of the chains are responsible for gelation. During this phase, the GIC is still vulnerable and must be protected from moisture. If contamination occurs, the chains will degrade and the GIC lose its strength and optical properties. Conversely, dehydration early on will crack the cement and make the surface porous. Over the next twenty four hours maturation occurs. The less stable calcium polyacrylate chains are progressively replaced by aluminium polyacrylate, allowing the calcium to join the fluoride and phosphate and diffuse into the tooth substrate, forming polysalts, which progressively hydrate to yield a physically stronger matrix. The incorporation of fluoride delays the reaction, increasing the working time. Other factors are the temperature of the cement, and the powder to liquid ratio – more powder or heat speeding up the reaction. GICs have good adhesive relations with tooth substrates, uniquely chemically bonding to dentine and, to a lesser extend, to enamel. During initial dissolution, both the glass particles and the hydroxyapatite structure are affected, and thus as the acid is buffered the matrix reforms, chemically welded together at the interface into a calcium phosphate polyalkenoate bond. In addition, the polymer chains are incorporated into both, weaving cross links, and in dentine the collagen fibres also contribute, both linking physically and H-bonding to the GIC salt precipitates. There is also microretention from porosities occurring in the hydroxyapatite. Works employing non-destructive neutron scattering and terahertz (THz) spectroscopy have evidenced that GIC's developing fracture toughness during setting is related to interfacial THz dynamics, changing atomic cohesion and fluctuating interfacial configurations. Setting of GICs is non-monotonic, characterised by abrupt features, including a glass–polymer coupling point, an early setting point, where decreasing toughness unexpectedly recovers, followed by stress-induced weakening of interfaces. Subsequently, toughness declines asymptotically to long-term fracture test values. 
Glass ionomer cement as a permanent material Fluoride release and remineralisation The pattern of fluoride release from glass ionomer cement is characterised by an initial rapid release of appreciable amounts of fluoride, followed by a taper in the release rate over time.  An initial fluoride “burst” effect is desirable to reduce the viability of remaining bacteria in the inner carious dentin, hence, inducing enamel or dentin remineralization.  The constant fluoride release during the following days are attributed to the fluoride ability to diffuse through cement pores and fractures. Thus, continuous small amounts of fluoride surrounding the teeth reduces demineralization of the tooth tissues. A study by Chau et al. shows a negative correlation between acidogenicity of the biofilm and the fluoride release by GIC, suggestive that enough fluoride release may decrease the virulence of cariogenic biofilms.  In addition, Ngo et al. (2006) studied the interaction between demineralised dentine and Fuji IX GP which includes a strontium – containing glass as opposed to the more conventional calcium-based glass in other GICs. A substantial amount of both strontium and fluoride ions was found to cross the interface into the partially demineralised dentine affected by caries. This promoted mineral depositions in these areas where calcium ion levels were low. Hence, this study supports the idea of glass ionomers contributing directly to remineralisation of carious dentine, provided that good seal is achieved with intimate contact between the GIC and partly demineralised dentine. This, then raises a question, “Is glass ionomer cement a suitable material for permanent restorations?” due to the desirable effects of fluoride release by glass ionomer cement. Glass Ionomer Cement in Primary Teeth Numerous studies and reviews have been published with respect to GIC used in primary teeth restorations. Findings of a systematic review and meta-analysis suggested that conventional glass ionomers were not recommended for Class II restorations in primary molars.  This material showed poor anatomical form and marginal integrity, and composite restorations were shown to be more successful than GIC when good moisture control could be achieved.  Resin modified glass ionomer cements (RMGIC) were developed to overcome the limitations of the conventional glass ionomer as a restorative material. A systematic review supports the use of RMGIC in small to moderate sized class II cavities, as they are able to withstand the occlusal forces on primary molars for at least one year.  With their desirable fluoride releasing effect, RMGIC may be considered for Class I and Class II restorations of primary molars in high caries risk population. Glass Ionomer Cement in Permanent Teeth With regard to permanent teeth, there is insufficient evidence to support the use of RMGIC as long term restorations in permanent teeth. Despite the low number of randomised control trials, a meta- analysis review by Bezerra et al. [2009] reported significantly fewer carious lesions on the margins of glass ionomer restorations in permanent teeth after six years as compared to amalgam restorations.  In addition, adhesive ability and longevity of GIC from a clinical standpoint can be best studied with restoration of non- carious cervical lesions. A systematic review shows GIC has higher retention rates than resin composite in follow up periods of up to 5 years. 
Unfortunately, reviews for Class II restorations in permanent teeth with glass ionomer cement are scarce, with high bias or short study periods. However, a study [2003] of the compressive strength and the fluoride release was done on 15 commercial fluoride-releasing restorative materials. A negative linear correlation was found between the compressive strength and fluoride release (r² = 0.7741), i.e., restorative materials with high fluoride release have lower mechanical properties. References Further reading Dental materials Glass chemistry World Health Organization essential medicines
Glass ionomer cement
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,437
[ "Glass engineering and science", "Glass chemistry", "Dental materials", "Materials", "Matter" ]
7,120,525
https://en.wikipedia.org/wiki/Great%20Lives
Great Lives is a BBC Radio 4 biography series, produced in Bristol. It has been presented by Joan Bakewell, Humphrey Carpenter, Francine Stock and currently (since April 2006) Matthew Parris. A distinguished guest is asked to nominate the person they feel is truly deserving of the title "Great Life". The presenter and a recognised expert (a biographer, family member or fellow practitioner) are on hand to discuss the life. The programmes are 28 minutes long, originally broadcast on Fridays at 23:00, more recently at 16:30 on Tuesday with a repeat at 23:00 on Friday. Programmes Series 0, August–November 2001 Series 1, May–August 2002 Series 2, October–December 2002 Series 3, April–June 2003 Series 4, October–December 2003 Series 5, April–June 2004 Series 6, October–December 2004 1The programme originally scheduled was by the guest film-maker David Puttnam (who nominated the Irish nationalist leader Michael Collins). It was withdrawn due to "production quality". Hogmanay Special, 31 December 2004 Carpenter died on 4 January 2005, this was his last Great Lives programme 1 Series 7, April–June 2005 Series 8, October 2005 – February 2006 Series 9, April–June 2006 Series 10, August–September 2006 Series 11, December 2006 – January 2007 Series 12, April–May 2007 Series 13, August–October 2007 Series 14, December 2007 – January 2008 Series 15, April–May 2008 Series 16, August–September 2008 Series 17, December 2008 – February 2009 Series 18, April–May 2009 Series 19, August–September 2009 Series 20, December 2009 – February 2010 Series 21, April–May 2010 Series 22, August–September 2010 Series 23, November 2010 – January 2011 Garvey was previously nominated by Yvonne Brown in Series 7 Programme 7 1 Series 24, April–May 2011 Series 25, August–September 2011 Series 26, December 2011 – January 2012 Series 27, April–May 2012 Series 28, July–September 2012 Series 29, December 2012 – January 2013 Series 30, April–May 2013 Series 31, August–October 2013 Series 32, December 2013 – January 2014 Series 33, April–May 2014 Series 34, August–October 2014 Series 35, December 2014 – January 2015 Series 36, April–May 2015 Series 37, August–September 2015 Series 38, December 2015 – January 2016 Series 39, April–May 2016 Series 40, August–September 2016 Series 41, December 2016 – January 2017 Series 42, April–May 2017 Series 43, August–September 2017 Series 44, December 2017 - January 2018 Series 45, April–May 2018 Series 46, July–September 2018 Series 47, December 2018 – January 2019 Series 48, April–May 2019 Series 49, July–September 2019 Series 50, December 2019 – January 2020 Series 51, April–June 2020 Series 52, August–September 2020 Series 53, December 2020 – January 2021 Series 54, April–June 2021 Series 55, August–September 2021 Series 56, December 2021 – January 2022 Series 57, April–May 2022 Series 58, May–September 2022 Series 59, December 2022 – January 2023 Series 60, April–May 2023 Series 61, June–September 2023 Series 62, November 2023 – January 2024 Series 63, April 2024 – May 2024 Series 64, August 2024 – References External links Detailed list of Great Lives episodes 2001 radio programme debuts BBC Radio 4 programmes Biographical works Cultural depictions of Friedrich Nietzsche Cultural depictions of Henri Matisse Cultural depictions of H. G. 
Wells Cultural depictions of Alfred the Great Cultural depictions of Franz Schubert Cultural depictions of Alexander the Great Cultural depictions of Louis Armstrong Cultural depictions of Lord Byron Cultural depictions of Mother Teresa Cultural depictions of Baruch Spinoza Cultural depictions of James Cook Cultural depictions of Robert Falcon Scott Cultural depictions of Bob Marley Cultural depictions of David Lloyd George Cultural depictions of Elizabeth I Cultural depictions of Charles Dickens Cultural depictions of Niccolò Machiavelli Cultural depictions of Pyotr Ilyich Tchaikovsky Cultural depictions of Thomas Paine Cultural depictions of Horatio Nelson Cultural depictions of Lyndon B. Johnson Cultural depictions of Benjamin Disraeli Cultural depictions of George Orwell Cultural depictions of Robert Burns Cultural depictions of Genghis Khan Cultural depictions of George Sand Cultural depictions of Wolfgang Amadeus Mozart Cultural depictions of Ronald Reagan Cultural depictions of Arthur Wellesley, 1st Duke of Wellington Cultural depictions of Mae West Cultural depictions of Sigmund Freud Cultural depictions of Leon Trotsky Cultural depictions of Eleanor Roosevelt Cultural depictions of Charles Darwin Cultural depictions of Albert Einstein Cultural depictions of Pope John Paul II Cultural depictions of Marie Curie Cultural depictions of Billie Holiday Cultural depictions of Julius Caesar Cultural depictions of George Bernard Shaw Cultural depictions of Elvis Presley Cultural depictions of Rembrandt Cultural depictions of George Washington Cultural depictions of Katherine Mansfield Cultural depictions of Henry VII of England Cultural depictions of William Hogarth Cultural depictions of Luciano Pavarotti Cultural depictions of Robert F. Kennedy Cultural depictions of Napoleon Cultural depictions of Carl Jung Cultural depictions of Frank Sinatra Cultural depictions of Fred Astaire Cultural depictions of Harry Houdini Cultural depictions of Henry V of England Cultural depictions of Nero Cultural depictions of Pablo Picasso Cultural depictions of John Lennon Cultural depictions of Sappho Cultural depictions of Richard Nixon Cultural depictions of Golda Meir Cultural depictions of Winston Churchill Cultural depictions of Walt Disney Cultural depictions of Sammy Davis Jr. 
Cultural depictions of Thomas Edison Cultural depictions of Simone de Beauvoir Cultural depictions of Lewis Carroll Cultural depictions of Emily Dickinson Cultural depictions of William Shakespeare Cultural depictions of Ludwig II of Bavaria Cultural depictions of Dylan Thomas Cultural depictions of Oscar Wilde Cultural depictions of Gertrude Stein Cultural depictions of Francisco Goya Cultural depictions of Joséphine de Beauharnais Cultural depictions of Laurel & Hardy Cultural depictions of the Marx Brothers Cultural depictions of Grigori Rasputin Cultural depictions of Galileo Galilei Cultural depictions of David Livingstone Cultural depictions of Arthur Conan Doyle Cultural depictions of Salvador Dalí Cultural depictions of Florence Nightingale Cultural depictions of Rabindranath Tagore Cultural depictions of Bernard Montgomery Cultural depictions of Hank Williams Cultural depictions of Dante Alighieri Cultural depictions of Ernest Hemingway Cultural depictions of James Brown Cultural depictions of Erwin Rommel Cultural depictions of Richard III of England Cultural depictions of Marlon Brando Cultural depictions of Katharine Hepburn Cultural depictions of Abraham Lincoln Cultural depictions of Alfred Hitchcock Cultural depictions of Lucrezia Borgia Cultural depictions of Lenny Bruce Cultural depictions of Richard I of England Cultural depictions of Virginia Woolf Cultural depictions of Neville Chamberlain Cultural depictions of C. S. Lewis Cultural depictions of Pope John XXIII Cultural depictions of Sitting Bull Cultural depictions of Steve Jobs Cultural depictions of Andy Kaufman Cultural depictions of Muhammad Ali Cultural depictions of Mahatma Gandhi Cultural depictions of Orson Welles Cultural depictions of Catherine the Great Cultural depictions of David Bowie Cultural depictions of Charlie Chaplin Cultural depictions of Freddie Mercury Cultural depictions of Catherine de' Medici Cultural depictions of Jane Austen Cultural depictions of Jim Morrison Cultural depictions of Benito Mussolini Cultural depictions of Frida Kahlo Cultural depictions of Edward III of England Cultural depictions of Hans Christian Andersen Cultural depictions of Hypatia Cultural depictions of J. R. R. Tolkien Cultural depictions of Franklin D. Roosevelt Cultural depictions of Judy Garland Cultural depictions of Kurt Cobain Cultural depictions of Frederick the Great Cultural depictions of Thomas Jefferson Cultural depictions of Nikola Tesla Cultural depictions of Catherine Parr
Great Lives
[ "Astronomy" ]
1,459
[ "Cultural depictions of Hypatia", "Cultural depictions of astronomers", "Cultural depictions of Galileo Galilei" ]
7,121,345
https://en.wikipedia.org/wiki/Alloyant
Metallurgy
Alloyant
[ "Chemistry", "Materials_science", "Engineering" ]
5
[ "Metallurgy", "Materials science", "nan" ]
7,122,398
https://en.wikipedia.org/wiki/Protein%20c-Fos
Protein c-Fos is a proto-oncogene that is the human homolog of the retroviral oncogene v-fos. It is encoded in humans by the FOS gene. It was first discovered in rat fibroblasts as the transforming gene of the FBJ MSV (Finkel–Biskis–Jinkins murine osteogenic sarcoma virus) (Curran and Tech, 1982). It is a part of a bigger Fos family of transcription factors which includes c-Fos, FosB, Fra-1 and Fra-2. It has been mapped to chromosome region 14q21→q31. c-Fos encodes a 62 kDa protein, which forms heterodimer with c-jun (part of Jun family of transcription factors), resulting in the formation of AP-1 (Activator Protein-1) complex which binds DNA at AP-1 specific sites at the promoter and enhancer regions of target genes and converts extracellular signals into changes of gene expression. It plays an important role in many cellular functions and has been found to be overexpressed in a variety of cancers. Structure and function c-Fos is a 380 amino acid protein with a basic leucine zipper region for dimerisation and DNA-binding and a transactivation domain at C-terminus, and, like Jun proteins, it can form homodimers. In vitro studies have shown that Jun–Fos heterodimers are more stable and have stronger DNA-binding activity than Jun–Jun homodimers. A variety of stimuli, including serum, growth factors, tumor promoters, cytokines, and UV radiation induce their expression. The c-fos mRNA and protein is generally among the first to be expressed and hence referred to as an immediate early gene. It is rapidly and transiently induced, within 15 minutes of stimulation. Its activity is also regulated by posttranslational modification caused by phosphorylation by different kinases, like MAPK, CDC2, PKA or PKC which influence protein stability, DNA-binding activity and the trans-activating potential of the transcription factors. It can cause gene repression as well as gene activation, although different domains are believed to be involved in both processes. It is involved in important cellular events, including cell proliferation, differentiation and survival; genes associated with hypoxia; and angiogenesis; which makes its dysregulation an important factor for cancer development. It can also induce a loss of cell polarity and epithelial-mesenchymal transition, leading to invasive and metastatic growth in mammary epithelial cells. The importance of c-fos in biological context has been determined by eliminating endogenous function by using anti-sense mRNA, anti-c-fos antibodies, a ribozyme that cleaves c-fos mRNA or a dominant negative mutant of c-fos. The transgenic mice thus generated are viable, demonstrating that there are c-fos dependent and independent pathways of cell proliferation, but display a range of tissue-specific developmental defects, including osteoporosis, delayed gametogenesis, lymphopenia and behavioral abnormalities. Clinical significance The AP-1 complex has been implicated in transformation and progression of cancer. In osteosarcoma and endometrial carcinoma, c-Fos overexpression was associated with high-grade lesions and poor prognosis. Also, in a comparison between precancerous lesion of the cervix uteri and invasive cervical cancer, c-Fos expression was significantly lower in precancerous lesions. c-Fos has also been identified as independent predictor of decreased survival in breast cancer. 
It was found that overexpression of c-fos from class I MHC promoter in transgenic mice leads to the formation of osteosarcomas due to increased proliferation of osteoblasts whereas ectopic expression of the other Jun and Fos proteins does not induce any malignant tumors. Activation of the c-Fos transgene in mice results in overexpression of cyclin D1, A and E in osteoblasts and chondrocytes, both in vitro and in vivo, which might contribute to the uncontrolled growth leading to tumor. Human osteosarcomas analyzed for c-fos expression have given positive results in more than half the cases and c-fos expression has been associated with higher frequency of relapse and poor response to chemotherapy. Several studies have raised the idea that c-Fos may also have tumor-suppressor activity, that it might be able to promote as well as suppress tumorigenesis. Supporting this is the observation that in ovarian carcinomas, loss of c-Fos expression correlates with disease progression. This double action could be enabled by differential protein composition of tumour cells and their environment, for example, dimerisation partners, co-activators and promoter architecture. It is possible that the tumor suppressing activity is due to a proapoptotic function. The exact mechanism by which c-Fos contributes to apoptosis is not clearly understood, but observations in human hepatocellular carcinoma cells indicate that c-Fos is a mediator of c-myc-induced cell death and might induce apoptosis through the p38 MAP kinase pathway. Fas ligand (FASLG or FasL) and the tumour necrosis factor-related apoptosis-inducing ligand (TNFSF10 or TRAIL) might reflect an additional apoptotic mechanism induced by c-Fos, as observed in a human T-cell leukaemia cell line. Another possible mechanism of c-Fos involvement in tumour suppression could be the direct regulation of BRCA1, a well established factor in familial breast and ovarian cancer. In addition, the role of c-fos and other Fos family proteins has also been studied in endometrial carcinoma, cervical cancer, mesotheliomas, colorectal cancer, lung cancer, melanomas, thyroid carcinomas, esophageal cancer, hepatocellular carcinomas, etc. Cocaine, methamphetamine, morphine, and other psychoactive drugs have been shown to increase c-Fos production in the mesocortical pathway (prefrontal cortex) as well as in the mesolimbic reward pathway (nucleus accumbens), as well as display variability depending on prior sensitization. c-Fos repression by ΔFosB's AP-1 complex within the D1-type medium spiny neurons of the nucleus accumbens acts as a molecular switch that enables the chronic induction of ΔFosB, thus allowing it to accumulate more rapidly. As such, the c-Fos promoter finds utilization in drug addiction research in general, as well as with context-induced relapse to drug-seeking and other behavioral changes associated with chronic drug taking. An increase in c-Fos production in androgen receptor-containing neurons has been observed in rats after mating. Applications Expression of c-fos is an indirect marker of neuronal activity because c-fos is often expressed when neurons fire action potentials. Up-regulation of c-fos mRNA in a neuron is considered a marker for activity. The c-fos promoter has also been utilized for drug abuse research. Scientists use this promoter to turn on transgenes in rats, allowing them to manipulate specific neuronal ensembles to assess their role in drug-related memories and behavior. 
TetTag mice have been created to reactivate or silence cFos-expressing neurons with optogenetic tools or with DREADDs. Interactions c-Fos has been shown to interact with: BCL3, COBRA1, CSNK2A1, CSNK2A2, DDIT3, JUN, NCOA1, NCOR2, RELA, RUNX1, RUNX2, SMAD3, and TBP. See also Leptomycin c-Jun Egr-1 References Further reading External links Drosophila kayak - The Interactive Fly Oncogenes Transcription factors
Protein c-Fos
[ "Chemistry", "Biology" ]
1,721
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
7,122,953
https://en.wikipedia.org/wiki/Plasma%20contactor
Plasma contactors are devices used on spacecraft in order to prevent accumulation of electrostatic charge through the expulsion of plasma (often Xenon). An electrical contactor is an electrically controlled switch which closes a power or high voltage electrical circuit. A plasma contactor changes the electrically insulating vacuum into a conductor by providing movable electrons and positive gas ions. This conductive path closes a phantom loop circuit to discharge or neutralize the static electricity that can build up on a spacecraft. Space contains regions with varying concentrations of charged particles such as the plasma sheet, and a static charge builds up as the spacecraft moves between these regions, or as the electrical potential varies within such a region. Static electricity may also build up on a spacecraft as a result of space radiation, including sunlight, depending on the materials used on the surfaces of the spacecraft. A plasma contactor is mounted on the Z1 segment of the International Space Station Integrated Truss Structure. References Michael J. Patterson "Cathodes Delivered for Space Station Plasma Contactor System," Research And Technology 1998, NASA Technical Report TM-1999-208815, NASA Glenn Research Center (retrieved 30 November 2012) Plasma technology and applications Spacecraft components
Plasma contactor
[ "Physics", "Astronomy" ]
241
[ "Plasma technology and applications", "Astronomy stubs", "Spacecraft stubs", "Plasma physics" ]
7,123,267
https://en.wikipedia.org/wiki/Static%20web%20page
A static web page, sometimes called a flat page or a stationary page, is a web page that is delivered to a web browser exactly as stored, in contrast to dynamic web pages which are generated by a web application. Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so. However, a webpage's JavaScript can introduce dynamic functionality which may make the static web page dynamic. Overview Static web pages are often HTML documents, stored as files in the file system and made available by the web server over HTTP (nevertheless URLs ending with ".html" are not always static). However, loose interpretations of the term could include web pages stored in a database, and could even include pages formatted using a template and served through an application server, as long as the page served is unchanging and presented essentially as stored. The content of static web pages remain stationary irrespective of the number of times it is viewed. Such web pages are suitable for the contents that rarely need to be updated, though modern web template systems are changing this. Maintaining large numbers of static pages as files can be impractical without automated tools, such as static site generators. Any personalization or interactivity has to run client-side, which is restricting. Advantages Provide improved security over dynamic websites (dynamic websites are at risk to web shell attacks if a vulnerability is present) Improved performance for end users compared to dynamic websites Fewer or no dependencies on systems such as databases or other application servers Cost savings from utilizing cloud storage, as opposed to a hosted environment Security configurations are easy to set up, which makes it more secure Disadvantages Dynamic functionality must be performed on the client side Static site generators Static site generators are applications that compile static websites - typically populating HTML templates in a predefined folder and file structure, with content supplied in a format such as Markdown or AsciiDoc. Examples of static site generators include: Ruby programming language: Jekyll (powers GitHub Pages) Middleman Go programming language: Hugo JavaScript programming language: Next.js Astro.build Python programming language: Pelican Julia programming language: Franklin References External links The definitive listing of Static Site Generators, a community-curated list of static site generators. Web 1.0 Static website generators Web development
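As an illustration of the compile step described above, the sketch below turns a folder of plain-text pages into standalone HTML files. It does not reproduce the behaviour of any generator listed here; the content/ and site/ folder names and the page template are assumptions for the example, and real generators add Markdown conversion, templating languages and asset handling.

    import html
    from pathlib import Path
    from string import Template

    PAGE = Template("<html><head><title>$title</title></head><body><pre>$body</pre></body></html>")

    def build(content_dir="content", output_dir="site"):
        out = Path(output_dir)
        out.mkdir(exist_ok=True)
        for src in Path(content_dir).glob("*.txt"):
            page = PAGE.substitute(title=html.escape(src.stem),
                                   body=html.escape(src.read_text(encoding="utf-8")))
            (out / f"{src.stem}.html").write_text(page, encoding="utf-8")  # one static page per source file

    if __name__ == "__main__":
        build()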
Static web page
[ "Engineering" ]
505
[ "Software engineering", "Web development" ]
7,124,184
https://en.wikipedia.org/wiki/Sulfuryl%20chloride%20fluoride
Sulfuryl chloride fluoride is a chemical compound with the formula SO2ClF. It is a colorless, easily condensed gas. It is a tetrahedral molecule. Liquified sulfuryl chloride fluoride is employed as a solvent for highly oxidizing compounds. Preparation The laboratory-scale synthesis begins with the preparation of potassium fluorosulfite: SO2 + KF → KSO2F This salt is then chlorinated to give sulfuryl chloride fluoride KSO2F + Cl2 → SO2ClF + KCl Further heating (180 °C) of potassium fluorosulfite with the sulfuryl chloride fluoride gives sulfuryl fluoride. KSO2F + SO2ClF → SO2F2 + KCl + SO2 Alternatively, sulfuryl chloride fluoride can be prepared without using gases as starting materials by treating sulfuryl chloride with ammonium fluoride or potassium fluoride in trifluoroacetic acid. SO2Cl2 + NH4F → SO2ClF + NH4Cl References Sulfuryl compounds Sulfur oxohalides Nonmetal halides Inorganic solvents
Sulfuryl chloride fluoride
[ "Chemistry" ]
241
[ "Sulfuryl compounds", "Functional groups" ]
7,124,240
https://en.wikipedia.org/wiki/Live%20blood%20analysis
Live blood analysis (LBA), live cell analysis, Hemaview or nutritional blood analysis is the use of high-resolution dark field microscopy to observe live blood cells. Live blood analysis is promoted by some alternative medicine practitioners, who assert that it can diagnose a range of diseases. There is no scientific evidence that live blood analysis is reliable or effective, and it has been described as a fraudulent means of convincing people that they are ill and should purchase dietary supplements. Live blood analysis is not accepted in laboratory practice and its validity as a laboratory test has not been established. There is no scientific evidence for the validity of live blood analysis, it has been described as a pseudoscientific, bogus and fraudulent medical test, and its practice has been dismissed by the medical profession as quackery. The field of live blood microscopy is unregulated, there is no training requirement for practitioners and no recognised qualification, no recognised medical validity to the results, and proponents have made false claims about both medical blood pathology testing and their own services, which some have refused to amend when instructed by the Advertising Standards Authority. It has its origins in the now-discarded theories of pleomorphism promoted by Günther Enderlein, notably in his 1925 book Bakterien-Cyklogenie. In January 2014 prominent live blood proponent and teacher Robert O. Young was arrested and charged for practising medicine without a license, and in March 2014 Errol Denton, a former student of his, a UK live blood practitioner, was convicted on nine counts in a rare prosecution under the Cancer Act 1939, followed in May 2014 by another former student, Stephen Ferguson. Overview Proponents claim that live blood analysis provides information "about the state of the immune system, possible vitamin deficiencies, amount of toxicity, pH and mineral imbalance, areas of concern and weaknesses, fungus and yeast." Some even claim it can "spot cancer and other degenerative immune system diseases up to two years before they would otherwise be detectable" or say they can diagnose "lack of oxygen in the blood, low trace minerals, lack of exercise, too much alcohol or yeast, weak kidneys, bladder or spleen." Practitioners include alternative medicine providers such as nutritionists, herbologists, naturopaths, and chiropractors. Dark field microscopy is useful to enhance contrast in unstained samples, but live blood analysis is not proven to be useful for any of its claimed indications. Two journal articles published in the alternative medical literature found that darkfield microscopy seemed unable to detect cancer, and that live blood analysis lacked reliability, reproducibility, and sensitivity and specificity. Edzard Ernst, professor of complementary medicine at the University of Exeter and University of Plymouth, notes: "No credible scientific studies have demonstrated the reliability of LBA for detecting any of the above conditions." Ernst describes live blood analysis as a "fraudulent" means of convincing patients to buy dietary supplements. Quackwatch has been critical of live blood analysis, noting dishonesty in the claims brought forward by its proponents. The alternative medicine popularizer Andrew Weil dismissed live blood analysis as "completely bogus", writing: "Dark-field microscopy combined with live blood analysis may sound like cutting-edge science, but it's old-fashioned hokum. Don't buy into it." 
Common diagnoses There are several common diagnoses by the LBA practitioners that are actually based on observation of artifacts normally found in microscopy, and ignorance of basic biological science: Acid in the blood: When the red blood cells stack on top of one another and appear like stacks of coins, it is called 'rouleaux' formation. By observation of the rouleaux, the LBA practitioners diagnose 'acid in the blood', while other practitioners suggest a weak pancreas. Rouleaux of red blood cells under the microscope is an artifact which occurs when the blood sample at the edge of the coverslip starts to dry out; where a large number of red blood cells clump together; or when the blood starts to clot when contacted with the glass. These artifacts are observed in only small, selected areas on the slide, while near the center of the slide the red blood cells are free floating. Blood acidosis is a severe illness and can not be diagnosed by observation of blood, nor treated by dietary supplements. Uric acid crystals and/or cholesterol plaques: Microscopic splinters of glass are often present when the slide is not cleaned thoroughly. Observation of such shards is claimed by the LBA practitioners to be uric acid crystals or cholesterol plaques, and thus to be indicative of 'acid imbalance, stress or poor lymphatic circulation' among other vague ailments. Uric acid crystals and cholesterol plaques, if present, are not visible in the blood samples. Parasites: Particles of dirt and debris, commonly found on glass slides not cleaned thoroughly, or slightly deformed red blood cells are mistaken to be parasites. Patients with parasites in the blood stream would be very sick and in need of immediate medical care, not by nutritional or herbal supplements or life style change as often recommended by LBA practitioners. Bacteria and yeast: LBA practitioners observe small irregular shape on the red blood cell membrane, a common artifact, and claim it represents bacteria or yeasts budding off the edge of the cell membrane. This claim violates the basic principle of biology that each living organism is unique and can not be transformed from one into another. Presence of bacteria or yeasts in the blood indicates the patient is in danger of developing sepsis, a life-threatening condition. Fermentations: Light spots on some red blood cells are identified by LBA practitioners as fermentations caused by high sugar content in the blood. Fermentation is a chemical reaction of breaking down sugar into alcohol and carbon dioxide catalyzed by enzymes produced in yeast. The red blood cells are not yeasts and cannot ferment sugar. Regulatory issues In 1996, the Pennsylvania Department of Laboratories informed three Pennsylvania chiropractors that Infinity2's "Nutritional Blood Analysis" could not be used for diagnostic purposes unless they maintain a laboratory that has both state and federal certification for complex testing. In 2001, the Health and Human Services Office of the Inspector General issued a report on regulation of "unestablished laboratory tests" that focused on live blood cell analysis and the difficulty of regulating unestablished tests and laboratories. In 2002, an Australian naturopath was convicted and fined for falsely claiming that he could diagnose illness using live blood analysis after the death of a patient. He was acquitted of manslaughter. He subsequently changed his name and was later banned from practice for life. 
In 2005, the Rhode Island Department of Health ordered a chiropractor to stop performing live blood analysis. An attorney for the State Board of Examiners in Chiropractic Medicine described the test as "useless" and a "money-making scheme... The point of it all is apparently to sell nutritional supplements." A state medical board official said that live blood analysis has no discernible value, and that the public "should be very suspicious of any practitioner who offers this test." In 2011, the UK General Medical Council suspended a doctor's licence to practise after he used live blood analysis to diagnose patients with Lyme disease. The doctor accepted he had been practising "bad medicine". In 2013, following several Advertising Standards Authority adjudications against claims made by LBA practitioners, the Committee of Advertising Practice added new guidelines to their AdviceOnline database advising what LBA marketers may claim in their advertising material. These state that "CAP is yet to see any evidence for the efficacy of this therapy which, without rigorous evidence to support it, should be advertised on an availability-only platform." One of these practitioners, Errol Denton, who practised out of a serviced office in Harley Street, was prosecuted in December 2013 under the Cancer Act 1939, and chose to use a Freeman on the Land defence. On March 20, 2014, he was convicted on nine counts under the Cancer Act 1939 and fined £9,000 plus around £10,000 in costs. In April 2018, Denton was further convicted of two counts of "engaging in unfair commercial practice" and one of "selling food not of the quality demanded", for selling a bottle of colloidal silver drink to an undercover trading standards officer in February 2016, after examining a drop of her blood and from it claiming that she had dislocated her shoulder. He was made the subject of a Criminal Behaviour Order, fined £2,250, and ordered pay £15,000 in costs. See also List of ineffective cancer treatments References Alternative cancer treatments Alternative medical diagnostic methods Alternative medicine Blood tests Health fraud Microscopy Pseudoscience Laboratory fraud
Live blood analysis
[ "Chemistry" ]
1,810
[ "Blood tests", "Chemical pathology", "Microscopy" ]
7,124,500
https://en.wikipedia.org/wiki/Cracovian
In astronomical and geodetic calculations, Cracovians are a clerical convenience introduced in 1925 by Tadeusz Banachiewicz for solving systems of linear equations by hand. Such systems can be written as Ax = b in matrix notation, where x and b are column vectors and the evaluation of b requires the multiplication of the rows of A by the vector x. Cracovians introduced the idea of using the transpose of A, A^T, and multiplying the columns of A^T by the column x. This amounts to the definition of a new type of matrix multiplication denoted here by '∧'. Thus x ∧ A^T = Ax = b. The Cracovian product of two matrices, say A and B, is defined by A ∧ B = B^T A, where B^T and A are assumed compatible for the common (Cayley) type of matrix multiplication. Since (A ∧ B) ∧ C = (BC)^T A while A ∧ (B ∧ C) = B^T C A, the products (A ∧ B) ∧ C and A ∧ (B ∧ C) will generally be different; thus, Cracovian multiplication is non-associative. Cracovians are an example of a quasigroup. Cracovians adopted a column-row convention for designating individual elements as opposed to the standard row-column convention of matrix analysis. This made manual multiplication easier, as one needed to follow two parallel columns (instead of a vertical column and a horizontal row in the matrix notation). It also sped up computer calculations, because both factors' elements were used in a similar order, which was more compatible with the sequential access memory in computers of those times — mostly magnetic tape memory and drum memory. Use of Cracovians in astronomy faded as computers with bigger random access memory came into general use. Any modern reference to them is in connection with their non-associative multiplication. They are named in recognition of the City of Cracow. In programming In R the desired effect can be achieved via the crossprod() function. Specifically, the Cracovian product of matrices A and B can be obtained as crossprod(B, A). References Banachiewicz, T. (1955). Vistas in Astronomy, vol. 1, issue 1, pp. 200–206. Herget, Paul (1948, reprinted 1962). The computation of orbits, University of Cincinnati Observatory (privately published). Asteroid 1751 is named after the author. Kocinski, J. (2004). Cracovian Algebra, Nova Science Publishers. Astrometry History of astronomy Matrix theory
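The R crossprod() remark above carries over directly to other array libraries. The NumPy sketch below simply restates the definition A ∧ B = B^T A and checks non-associativity numerically; the helper name cracovian is made up for this example.

    import numpy as np

    def cracovian(a, b):
        """Cracovian product a ∧ b = b.T @ a, matching R's crossprod(b, a)."""
        return b.T @ a

    rng = np.random.default_rng(0)
    A, B, C = rng.standard_normal((3, 3, 3))      # three random 3x3 matrices
    # Non-associativity: (A ∧ B) ∧ C and A ∧ (B ∧ C) generally differ.
    print(np.allclose(cracovian(cracovian(A, B), C), cracovian(A, cracovian(B, C))))  # almost surely False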
Cracovian
[ "Astronomy" ]
480
[ "Astrometry", "Astronomical sub-disciplines", "History of astronomy" ]
7,125,022
https://en.wikipedia.org/wiki/Multibeam%20echosounder
A multibeam echosounder (MBES) is a type of sonar that is used to map the seabed. It emits acoustic waves in a fan shape beneath its transceiver. The time it takes for the sound waves to reflect off the seabed and return to the receiver is used to calculate the water depth. Unlike other sonars and echo sounders, MBES uses beamforming to extract directional information from the returning soundwaves, producing a swathe of depth soundings from a single ping. History and progression Multibeam sonar sounding systems, also known as swathe (British English) or swath (American English) systems, originated for military applications. The concept originated in a radar system that was intended for the Lockheed U-2 high altitude reconnaissance aircraft, but the project was derailed when the aircraft flown by Gary Powers was brought down by a Soviet missile in May 1960. A proposal for using the "Mills Cross" beamforming technique adapted for use with bottom mapping sonar was made to the US Navy. Data from each ping of the sonar would be automatically processed, making corrections for ship motion, transducer depth, sound velocity and refraction effects, but at the time there was insufficient digital data storage capacity, so the data would be converted into a depth contour strip map and stored on continuous film. The Sonar Array Sounding System (SASS) was developed in the early 1960s by the US Navy, in conjunction with General Instrument, to map large swathes of the ocean floor to assist the underwater navigation of its submarine force. SASS was tested aboard the USS Compass Island (AG-153). The final array system, composed of sixty-one one-degree beams with a swathe width of approximately 1.15 times water depth, was then installed on the USNS Bowditch (T-AGS-21), USNS Dutton (T-AGS-22) and USNS Michelson (T-AGS-23). At the same time, a Narrow Beam Echo Sounder (NBES) using 16 narrow beams was also developed by Harris ASW and installed on the Survey Ships Surveyor, Discoverer and Researcher. This technology would eventually become Sea Beam. Only the vertical centre beam data was recorded during surveying operations. Starting in the 1970s, companies such as General Instrument (now SeaBeam Instruments, part of L3 Klein) in the United States, Krupp Atlas (now Atlas Hydrographic) and Elac Nautik (now part of the Wärtsilä Corporation) in Germany, Simrad (now Kongsberg Discovery) in Norway and RESON (now Teledyne RESON A/S) in Denmark developed systems that could be mounted to the hull of large ships, as well as on small boats (as technology improved, multibeam echosounders became more compact and lighter, and operating frequencies increased). The first commercial multibeam is now known as the SeaBeam Classic and was put in service in May 1977 on the Australian survey vessel HMAS Cook. This system produced up to 16 beams across a 45-degree arc. The (retronym) term "SeaBeam Classic" was coined after the manufacturer developed newer systems such as the SeaBeam 2000 and the SeaBeam 2112 in the late 1980s. The second SeaBeam Classic installation was on the French Research Vessel Jean Charcot. The SB Classic arrays on the Charcot were damaged in a grounding and the SeaBeam was replaced with an EM120 in 1991. Although it seems that the original SeaBeam Classic installation was not used much, the others were widely used, and subsequent installations were made on many vessels. 
SeaBeam Classic systems were subsequently installed on the US academic research vessels (Scripps Institution of Oceanography, University of California), the (Lamont–Doherty Earth Observatory of Columbia University) and the (Woods Hole Oceanographic Institution). As technology improved in the 1980s and 1990s, higher-frequency systems which provided higher resolution mapping in shallow water were developed, and today such systems are widely used for shallow-water hydrographic surveying in support of navigational charting. Multibeam echosounders are also commonly used for geological and oceanographic research, and since the 1990s for offshore oil and gas exploration and seafloor cable routing. More recently, multibeam echosounders are also used in the renewable energy sector, such as for offshore windfarms. In 1989, Atlas Electronics (Bremen, Germany) installed a second-generation deep-sea multibeam called Hydrosweep DS on the German research vessel Meteor. The Hydrosweep DS (HS-DS) produced up to 59 beams across a 90-degree swath, which was a vast improvement and was inherently ice-strengthened. Early HS-DS systems were installed on the (Germany), the (Germany), the (US) and the (India) in 1989 and 1990 and subsequently on a number of other vessels including the (US) and (Japan). As multibeam acoustic frequencies have increased and the cost of components has decreased, the worldwide number of multibeam swathe systems in operation has increased significantly. The required physical size of an acoustic transducer used to develop multiple high-resolution beams decreases as the multibeam acoustic frequency increases. Consequently, increases in the operating frequencies of multibeam sonars have resulted in significant decreases in their weight, size and volume characteristics. The older and larger, lower-frequency multibeam sonar systems, which required considerable time and effort to mount onto a ship's hull, used conventional tonpilz-type transducer elements, which provided a usable bandwidth of approximately 1/3 octave. The newer and smaller, higher-frequency multibeam sonar systems can easily be attached to a survey launch or to a tender vessel. Shallow water multibeam echosounders, like those from Teledyne Odom, R2Sonic and Norbit, which can incorporate sensors for measuring transducer motion and sound speed local to the transducer, are allowing many smaller hydrographic survey companies to move from traditional single beam echosounders to multibeam echosounders. Small low-power multibeam swathe systems are also now suitable for mounting on an Autonomous Underwater Vehicle (AUV) and on an Autonomous Surface Vessel (ASV). Multibeam echosounder data may include bathymetry, acoustic backscatter, and water column data. (Gas plumes now commonly identified in midwater multibeam data are termed flares.) Type 1-3 piezo-composite transducer elements are being employed in a multispectral multibeam echosounder to provide a usable bandwidth that is in excess of 3 octaves. Consequently, multispectral multibeam echosounder surveys are possible with a single sonar system which, during every ping cycle, collects multispectral bathymetry data, multispectral backscatter data, and multispectral water column data in each swathe. Theory of operation A multibeam echosounder is a device typically used by hydrographic surveyors to determine the depth of water and the nature of the seabed. 
Most modern systems work by transmitting a broad, fan-shaped acoustic pulse from a specially designed transducer across the full swathe acrosstrack, with a narrow alongtrack beamwidth, and then forming multiple receive beams (beamforming) that are much narrower in the acrosstrack (around 1 degree depending on the system). From this narrow beam, a two-way travel time of the acoustic pulse is then established utilizing a bottom detection algorithm. If the speed of sound in water is known for the full water column profile, the depth and position of the return signal can be determined from the receive angle and the two-way travel time. In order to determine the transmit and receive angle of each beam, a multibeam echosounder requires accurate measurement of the motion of the sonar relative to a Cartesian coordinate system. The measured values are typically heave, pitch, roll, yaw, and heading. To compensate for signal loss due to spreading and absorption, a time-varied gain circuit is designed into the receiver. For deep water systems, a steerable transmit beam is required to compensate for pitch. This can also be accomplished with beamforming. References Further reading Louay M.A. Jalloul and Sam. P. Alex, "Evaluation Methodology and Performance of an IEEE 802.16e System", Presented to the IEEE Communications and Signal Processing Society, Orange County Joint Chapter (ComSig), December 7, 2006. Available at: https://web.archive.org/web/20110414143801/http://chapters.comsoc.org/comsig/meet.html B. D. V. Veen and K. M. Buckley. Beamforming: A versatile approach to spatial filtering. IEEE ASSP Magazine, pages 4–24, Apr. 1988. H. L. Van Trees, Optimum Array Processing, Wiley, NY, 2002. "A Primer on Digital Beamforming" by Toby Haynes, March 26, 1998 "What Is Beamforming?" by Greg Allen. "Two Decades of Array Signal Processing Research" by Hamid Krim and Mats Viberg in IEEE Signal Processing Magazine, July 1996 External links A Note on Fifty Years of Multi-beam Sounding Pole to Sea Beam (NOAA History) MB-System open source software for processing multibeam data News and application articles of multibeam equipment on Hydro International Memorial website for USNS Bowditch, USNS Dutton and USNS Michelson {First application of Multibeam} Oceanography Sonar
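The depth calculation described above reduces, in the simplest case, to slant-range geometry. The sketch below (Python; the function name beam_sounding and the default sound speed of 1500 m/s are illustrative assumptions) uses a single constant sound speed rather than a full water-column profile, so it ignores refraction and assumes the motion of the transducer has already been compensated.

import math

def beam_sounding(twtt_s, rx_angle_deg, sound_speed_mps=1500.0):
    # Simplified sounding geometry: constant sound speed (no ray bending),
    # receive angle measured from the vertical (nadir), motion already compensated.
    slant_range = sound_speed_mps * twtt_s / 2.0              # one-way distance along the beam
    depth = slant_range * math.cos(math.radians(rx_angle_deg))
    across_track = slant_range * math.sin(math.radians(rx_angle_deg))
    return depth, across_track

# Example: a 0.133 s two-way travel time on a beam received 45 degrees off nadir
print(beam_sounding(0.133, 45.0))   # roughly 70 m depth and 70 m across-track offset

In practice, ray tracing through the measured sound speed profile replaces the single constant value, and the heave, pitch, roll and heading measurements are applied to each beam before the sounding is georeferenced.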
Multibeam echosounder
[ "Physics", "Environmental_science" ]
1,957
[ "Hydrography", "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
7,125,109
https://en.wikipedia.org/wiki/Nuclear%20transport
Nuclear transport refers to the mechanisms by which molecules move across the nuclear membrane of a cell. The entry and exit of large molecules from the cell nucleus is tightly controlled by the nuclear pore complexes (NPCs). Although small molecules can enter the nucleus without regulation, macromolecules such as RNA and proteins require association with transport factors known as nuclear transport receptors, like karyopherins called importins to enter the nucleus and exportins to exit. Nuclear import Proteins that must be imported into the nucleus from the cytoplasm carry nuclear localization signals (NLS) that are bound by importins. An NLS is a sequence of amino acids that acts as a tag. They are most commonly hydrophilic sequences containing lysine and arginine residues, although diverse NLS sequences have been documented. Proteins, transfer RNA, and assembled ribosomal subunits are exported from the nucleus due to association with exportins, which bind signaling sequences called nuclear export signals (NES). The ability of both importins and exportins to transport their cargo is regulated by the Ran small G-protein. G-proteins are GTPase enzymes that bind to a molecule called guanosine triphosphate (GTP), which they then hydrolyze to create guanosine diphosphate (GDP) and release energy. The Ran enzymes exist in two nucleotide-bound forms: GDP-bound and GTP-bound. In its GTP-bound state, Ran is capable of binding importins and exportins. Importins release cargo upon binding to RanGTP, while exportins must bind RanGTP to form a ternary complex with their export cargo. The dominant nucleotide binding state of Ran depends on whether it is located in the nucleus (RanGTP) or the cytoplasm (RanGDP). Nuclear export Nuclear export roughly reverses the import process; in the nucleus, the exportin binds the cargo and Ran-GTP and diffuses through the pore to the cytoplasm, where the complex dissociates. Ran-GTP binds GAP and hydrolyzes GTP, and the resulting Ran-GDP complex is restored to the nucleus where it exchanges its bound ligand for GTP. Hence, whereas importins depend on RanGTP to dissociate from their cargo, exportins require RanGTP in order to bind to their cargo. A specialized mRNA exporter protein moves mature mRNA to the cytoplasm after post-transcriptional modification is complete. This translocation process is actively dependent on the Ran protein, although the specific mechanism is not yet well understood. Some particularly commonly transcribed genes are physically located near nuclear pores to facilitate the translocation process. Export of tRNA is also dependent on the various modifications it undergoes, thus preventing export of improperly functioning tRNA. This quality control mechanism is important due to tRNA's central role in translation, where it is involved in adding amino acids to a growing peptide chain. The tRNA exporter in vertebrates is called exportin-t. Exportin-t binds directly to its tRNA cargo in the nucleus, a process promoted by the presence of RanGTP. Mutations that affect tRNA's structure inhibit its ability to bind to exportin-t and, consequently, to be exported, providing the cell with another quality control step. As described above, once the complex has crossed the envelope it dissociates and releases the tRNA cargo into the cytosol. Protein shuttling Many proteins are known to have both NESs and NLSs and thus shuttle constantly between the nucleus and the cytosol. 
In certain cases one of these steps (i.e., nuclear import or nuclear export) is regulated, often by post-translational modifications. Nuclear import limits the propagation of large proteins expressed in skeletal muscle fibers and possibly other syncytial tissues, maintaining localized gene expression in certain nuclei. Combining both NESs and NLSs promotes propagation of large proteins to more distant nuclei in muscle fibers. Protein shuttling can be assessed using a heterokaryon fusion assay. References External links Nuclear Transport animations Nuclear Transport illustrations Cell biology
Nuclear transport
[ "Biology" ]
853
[ "Cell biology" ]
7,125,377
https://en.wikipedia.org/wiki/Myoepithelial%20cell
Myoepithelial cells (sometimes referred to as myoepithelium) are cells usually found in glandular epithelium as a thin layer above the basement membrane but generally beneath the luminal cells. These may be positive for alpha smooth muscle actin and can contract and expel the secretions of exocrine glands. They are found in the sweat glands, mammary glands, lacrimal glands, and salivary glands. Myoepithelial cells in these cases constitute the basal cell layer of an epithelium that harbors the epithelial progenitor. In the case of wound healing, myoepithelial cells reactively proliferate. The presence of myoepithelial cells in a hyperplastic tissue proves the benignity of the gland, while their absence indicates cancer. Only rare cancers like adenoid cystic carcinomas contain myoepithelial cells as one of the malignant components. They can be found in endoderm or ectoderm. Markers Myoepithelial cells are true epithelial cells positive for keratins, not to be confused with myofibroblasts, which are true mesenchymal cells positive for vimentin. These cells are generally positive for alpha smooth muscle actin (αSMA), cytokeratin 5/6 and other high molecular weight cytokeratins, p63 and caldesmon. Myoepithelial cells are stellate in shape and are also known as basket cells. They lie between the basement membrane and glandular epithelium. Each cell consists of a cell body from which 4-8 processes radiate and embrace the secretory unit. Myoepithelial cells have contractile functions. They help in expelling secretions from the lumen of secretory units and facilitate the movement of saliva in salivary ducts. Other Cancers Involving This Type of Cell Myoepithelioma of the head and neck - A (usually) benign tumor of the head/neck consisting solely of myoepithelial cells. Epithelial-myoepithelial carcinoma (of the salivary glands) - A low-grade malignant tumor composed of both neoplastic epithelial and neoplastic myoepithelial cells (a biphasic tumor). Epithelial-myoepithelial carcinoma of the lung - A malignant tumor composed of both epithelial and myoepithelial tissues whose pathology resembles salivary cells. Adenomyoepithelioma of the breast - A (usually) benign tumor of the breast composed of myoepithelial and adeno (glandular) cells. Myoepithelioma of the breast - A usually benign or exceedingly rare malignant tumor of the breast which mimics IDC but with cells resembling adenoid cysts. If malignant, this is also known as a myoepithelial carcinoma. See also List of distinct cell types in the adult human body References External links - "Axillary Sweat Gland: Myoepithelium" - "thick skin" "Simple Tubular Coiled" Cell biology Epithelial cells
Myoepithelial cell
[ "Biology" ]
666
[ "Cell biology" ]
7,125,481
https://en.wikipedia.org/wiki/Monsoon%20trough
The monsoon trough is a portion of the Intertropical Convergence Zone in the Western Pacific, as depicted by a line on a weather map showing the locations of minimum sea level pressure, and as such, is a convergence zone between the wind patterns of the southern and northern hemispheres. Westerly monsoon winds lie in its equatorward portion while easterly trade winds exist poleward of the trough. Right along its axis, heavy rains can be found which usher in the peak of a location's respective rainy season. The monsoon trough plays a role in creating many of the world's rainforests. The term monsoon trough is most commonly used in monsoonal regions of the Western Pacific such as Asia and Australia. The migration of the ITCZ/monsoon trough into a landmass heralds the beginning of the annual rainy season during summer months. Depressions and tropical cyclones often form in the vicinity of the monsoon trough, with each capable of producing a year's worth of rainfall in a matter of days. Movement and strength Monsoon troughing in the western Pacific reaches its zenith in latitude during the late summer when the wintertime surface ridge in the opposite hemisphere is the strongest. It can reach as far as the 40th parallel in East Asia during August and the 20th parallel in Australia during February. Its poleward progression is accelerated by the onset of the summer monsoon which is characterized by the development of lower air pressure over the warmest part of the various continents. In the Southern Hemisphere, the monsoon trough associated with the Australian monsoon reaches its most southerly latitude in February, oriented along a west-northwest/east-southeast axis. North-south-oriented mountain barriers, like the Rockies and the Andes, and large massifs, such as the Plateau of Tibet, also influence atmospheric flow. Effect of wind surges Increases in the relative vorticity, or spin, with the monsoon trough are normally a product of increased wind convergence within the convergence zone of the monsoon trough. Wind surges can lead to this increase in convergence. A strengthening or equatorward movement in the subtropical ridge can cause a strengthening of a monsoon trough as a wind surge moves towards the location of the monsoon trough. As fronts move through the subtropics and tropics of one hemisphere during their winter, normally as shear lines when their temperature gradient becomes minimal, wind surges can cross the equator in oceanic regions and enhance a monsoon trough in the other hemisphere's summer. A key way of detecting whether a wind surge has reached a monsoon trough is the formation of a burst of thunderstorms within the monsoon trough. Monsoon depressions If a circulation forms within the monsoon trough, it is able to compete with the neighboring thermal low over the continent, and a wind surge will occur at its periphery. Such a circulation which is broad in nature within a monsoon trough is known as a monsoon depression. In the Northern Hemisphere, monsoon depressions are generally asymmetric, and tend to have their strongest winds on their eastern periphery. Light and variable winds cover a large area near their center, while bands of showers and thunderstorms develop within their area of circulation. The presence of an upper level jet stream poleward and west of the system can enhance its development by leading to increased diverging air aloft over the monsoon depression, which leads to a corresponding drop in surface pressure. 
Even though these systems can develop over land, the outer portions of monsoon depressions are similar to tropical cyclones. In India, for example, 6 to 7 monsoon depressions move across the country yearly, and their numbers within the Bay of Bengal increase during July and August of El Niño events. Monsoon depressions are efficient rainfall producers, and can generate a year's worth of rainfall when they move through drier areas, such as the outback of Australia. Some tropical cyclones recognised by Regional Specialized Meteorological Centres have characteristics of a monsoon depression throughout their lifetime. The Joint Typhoon Warning Center (JTWC) added monsoon depression as a category in 2015, and Cyclone Komen was the first system fully recognised as a monsoon depression by the JTWC. Roles In rainy season Since the monsoon trough is an area of convergence in the wind pattern, and an elongated area of low pressure at the surface, the trough focuses low level moisture and is defined by one or more elongated bands of thunderstorms when viewing satellite imagery. Its abrupt movement to the north between May and June is coincident with the beginning of the monsoon regime and rainy seasons across South and East Asia. This convergence zone has been linked to prolonged heavy rain events along the Yangtze River as well as in northern China. Its presence has also been linked to the peak of the rainy season in locations within Australia. In tropical cyclogenesis A monsoon trough is a significant genesis region for tropical cyclones. Vorticity-rich low level environments, with significant low level spin, lead to a better than average chance of tropical cyclone formation due to their inherent rotation. This is because a pre-existing near-surface disturbance with sufficient spin and convergence is one of the six requirements for tropical cyclogenesis. There appears to be a 15- to 25-day cycle in thunderstorm activity associated with the monsoon trough, which is roughly half the wavelength of the Madden–Julian oscillation, or MJO. This mirrors tropical cyclone genesis near these features, as genesis clusters in 2–3 weeks of activity followed by 2–3 weeks of inactivity. Tropical cyclones can form in outbreaks around these features under special circumstances, tending to follow the next cyclone to its poleward and west. Whenever the monsoon trough on the eastern side of the summertime Asian monsoon is in its normal orientation (oriented east-southeast to west-northwest), tropical cyclones along its periphery will move with a westward motion. If it reverses its orientation, orienting southwest to northeast, tropical cyclones will move more poleward. Tropical cyclone tracks with S-shapes tend to be associated with reverse-oriented monsoon troughs. The South Pacific convergence zone and South Atlantic convergence zones are generally reverse oriented. The failure of the monsoon trough, or the ITCZ, to move south of the equator in the eastern Pacific Ocean and Atlantic Ocean during the southern hemisphere summer is considered one of the reasons tropical cyclones do not normally form in those regions. It has also been noted that when the monsoon trough lies near 20 degrees north latitude in the Pacific, the frequency of tropical cyclones is 2 to 3 times greater than when it lies closer to 10 degrees north. References Tropical meteorology Atmospheric dynamics
Monsoon trough
[ "Chemistry" ]
1,312
[ "Atmospheric dynamics", "Fluid dynamics" ]
7,125,835
https://en.wikipedia.org/wiki/Lugduname
Lugduname (from the Latin Lugdunum, for Lyon) is one of the most potent sweetening agents known. Lugduname has been estimated to be between 220,000 and 300,000 times as sweet as sucrose (table sugar), with estimates varying between studies. It was developed at the University of Lyon, France, in 1996. Lugduname is part of a family of potent sweeteners which contain acetic acid functional groups attached to guanidine. See also Carrelame Sucrononic acid References External links Sugar substitutes Guanidines Benzonitriles Benzodioxoles Anilines Alpha-Amino acids
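To put the quoted potency range in concrete terms, taking a mid-range figure of roughly 250,000 times the sweetness of sucrose (an assumption within the 220,000–300,000 band above), matching the sweetness of 10 g of table sugar would require on the order of 10 g / 250,000 = 4 × 10⁻⁵ g, i.e. about 40 micrograms of lugduname.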
Lugduname
[ "Chemistry" ]
139
[ "Guanidines", "Functional groups" ]
7,125,851
https://en.wikipedia.org/wiki/Turning%20radius
The turning radius (alternatively, turning diameter or turning circle) of a vehicle defines the minimum dimension (typically the radius or diameter) of available space required for that vehicle to make a semi-circular U-turn without skidding. The Oxford English Dictionary describes turning circle as "the smallest circle within which a ship, motor vehicle, etc., can be turned round completely". The term thus refers to a theoretical minimal circle in which for example an aeroplane, a ground vehicle or a watercraft can be turned around. The terms (radius, diameter, or circle) can have different meanings; refer to the Alternative nomenclature section below. Definition On wheeled vehicles with the common type of front wheel steering (i.e. one, two or even four wheels at the front capable of steering), the vehicle's turning diameter measures the minimum space needed to turn the vehicle around while the steering is set to its maximum displacement from the central 'straight ahead' position - i.e. either extreme left or right. If a marker pen were placed on the point of the vehicle furthest from the center of the turn, the diameter of the circle traced during the turn defines the value of that vehicle's turning diameter. Mathematically, the turning radius would be half of the turning diameter. The curb-to-curb turning radius, which considers the chassis and wheels only without body protrusions, can be expressed as a simplified function of the wheelbase, tire width, and steering angle: approximately, radius ≈ wheelbase / sin(steering angle) + tire width / 2. Aircraft have a similar minimum turning circle concept, generally associated with a standard rate turn, in which an aircraft enters a coordinated turn which changes its heading at a rate of 3° per second, or 180° in one minute. In this case, the turning radius depends on the true airspeed (in knots) as: radius ≈ true airspeed / (60π) nautical miles, since a full 360° turn at 3° per second takes 120 seconds. Turning diameter is sometimes used in everyday language as a generalized term rather than with numerical figures. For example, a wheeled vehicle with a very small turning circle may be described as having a "tight turning radius", meaning that it is easier to turn around very tight corners. Wheeled vehicles with four-wheel steering will have a smaller turning radius than vehicles that steer wheels on one axle. Exceptions Technically, the minimum possible turning circle for a vehicle would be where it does not move either forwards or backwards while turning and simply pivots on its central axis. For a rectangular vehicle capable of doing this, the smallest turning circle would be equal to the diagonal length of the vehicle. As an example, some boats can be turned in this way, generally by using azimuth thrusters. Some wheeled vehicles are designed to spin around their central axis by making all wheels steerable, such as certain lawnmowers and wheelchairs, as they do not follow a circular path as they turn. In this case the vehicle is referred to as a "zero turning radius" vehicle. Some camera dollies used in the film industry have a "round" mode which allows them to spin around their z axis by allowing synchronized inverse rotation of their left and right wheel sets, effectively giving them "zero" turning radius. Many conventionally steerable vehicles (only one axle with steerable wheels) can reverse the direction of travel in a space smaller than the stated turning radius by executing a specialized maneuver, such as a J-turn or similar skid, or in a discontinuous motion such as a three-point turn. Alternative nomenclature Other terms are sometimes used synonymously for turning diameter, which can lead to confusion. 
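A small numeric sketch of the two relations just given (Python; the function and parameter names are illustrative). The road-vehicle estimate assumes the simplified single-track ("bicycle") geometry implied above, and the aircraft estimate assumes a standard rate (3° per second) coordinated turn.

import math

def curb_to_curb_radius_m(wheelbase_m, tire_width_m, steer_angle_deg):
    # Simplified estimate: radius swept by the outer edge of the outer front tire.
    return wheelbase_m / math.sin(math.radians(steer_angle_deg)) + tire_width_m / 2.0

def standard_rate_turn_radius_nm(true_airspeed_kt):
    # Standard rate turn: 3 deg/s, so a full circle takes 120 s = 1/30 hour.
    # Circumference = TAS * (1/30) NM, hence radius = TAS / (60 * pi) NM.
    return true_airspeed_kt / (60.0 * math.pi)

print(round(curb_to_curb_radius_m(2.82, 0.225, 35.0), 1))   # about 5.0 m for a mid-size car
print(round(standard_rate_turn_radius_nm(120.0), 2))        # about 0.64 NM at 120 knots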
Turning radius and diameter The automotive term turning radius has been used as equivalent and interchangeable with the turning diameter. For example, the 2017 Audi A4 is specified by the manufacturer as having a turning diameter (curb-to-curb) of 11.6 m. Mathematically, the radius of a circle is half the diameter, so the correct turning radius in this example would be 11.6 / 2 = 5.8 m. However, another source lists the turning radius of the same vehicle as also being 11.6 m, which is the turning diameter. In practice, the values of turning diameter tend to be listed more frequently in vehicle specifications, so the term turning diameter will therefore be more correct in most cases. The turning diameter will always give a higher number for a given vehicle, and the turning diameter measurement is usually preferred by automotive manufacturers. Such mixing of terms can lead to confusion among consumers. Turning circle The term turning circle is also sometimes used synonymously for the turning diameter. Some argue that turning circle is less ambiguous than turning radius, but "turning circle" may introduce its own ambiguities since the same circle can be defined by multiple measurements, including the radius r, the diameter (d = 2r, twice as big), or the circumference (c = 2πr, about 6.28 times as big). For example, Motor Trend refers to a "curb-to-curb turning circle" of a 2008 Cadillac CTS as , but the terminology is not yet settled. AutoChannel.com refers to the "turning radius" of the same car as . Turning circle is also sometimes used to refer to the path swept in the manoeuvre, i.e. the arc, or the circle's circumference in the case when the manoeuvre makes a complete turn. Different measurement methods There are two methods for measuring the vehicle turning diameter which will give slightly different results. These two methods are called wall-to-wall and curb-to-curb (US spelling), or alternatively kerb-to-kerb (UK spelling). The wall-to-wall turning circle is the minimum distance between two walls, both of which exceed the height of the vehicle, in which the vehicle can make a U-turn. The kerb-to-kerb turning circle is the minimum distance between two raised curbs, both of which are lower than the lowest body protrusions, in which the vehicle can make a U-turn. The wall-to-wall turning circle is greater than the kerb-to-kerb measure for the same vehicle because of the front and rear body overhangs. One can find these two ways of measuring the turning circle used in auto specifications, for example, a van might be listed as having a turning circle (in meters) of 12.1 (C) / 12.4 (W). Curb-to-curb A curb or curb-to-curb turning circle will show the straight-line distance from one side of the circle to the other, through the center. The name "curb-to-curb" indicates that a street would have to be this wide before this car can make a U-turn and not hit a street curb with a wheel. If you took the street curb and built it higher, as high as the car, and tried to make a U-turn in the street, parts of the car (bumper) would hit the wall. The kerb-to-kerb turning circle can be smaller than the turning circle as it refers to only a partial circle (~180°) with the vehicle alongside one kerb to start with. To perform a U-turn in a forward direction only, the centre of the turn is not coincident with the centre of the road - thus a complete circle would not be possible (without driving onto the pavement to complete the manoeuvre). 
It also does not take into account that part of the vehicle that overhangs the wheels, whereas 'turning circle' does. Wall-to-wall The name wall or wall-to-wall turning circle denotes how far apart the two walls would have to be to allow a U-turn without scraping the walls. Legal requirements for road vehicles European Union and Switzerland Road vehicles must be able to carry out a 360-degree turn on an annulus with an outer radius of 12.5 m and an inner radius of , measured wall-to-wall. In addition, when entering this annulus, no part of the vehicle can overreach a tangent by more than ; this tangent is drawn at the outer, 12.5 m limit of the annulus. New Zealand New Zealand requires that road vehicles can perform a 360-degree turn on a circle with a diameter, measured wall-to-wall. The only parts of the vehicle that may reach over this limitation are collapsible mirrors. Common uses Aeroplanes Watercraft Wheeled vehicles See also Breakover angle Dubins path Minimum railway curve radius Overhang (automotive) Ride height U-turn (maneuver) References External links Vehicle Turning Radius explanation + visuals Grounds Maintenance Magazine Article about Zero Radius Lawn Mowers Vehicle technology Engineering concepts Radii
Turning radius
[ "Engineering" ]
1,736
[ "Vehicle technology", "Mechanical engineering by discipline", "nan" ]
15,925,282
https://en.wikipedia.org/wiki/Mobility%20aid
A mobility aid is a device that helps individuals with mobility impairments to walk or improve their overall mobility. These aids range from walking aids, which assist those with limited walking capabilities, to wheelchairs and mobility scooters, which are used for severe disabilities or longer distances that would typically be covered on foot. For individuals who are blind or visually impaired, white canes and guide dogs have been long-standing resources. Additional aids are designed to facilitate mobility and transfers within buildings, including navigating between different floor levels. The term "mobility aid" traditionally refers to mechanical devices and is commonly used in government documents such as documents related to tax concessions. It refers to devices that provide a level of mobility comparable to unaided walking or standing from a seated position. Advancements in technology are likely to expand the functionalities of these devices through the integration of sensors and the provision of audio or tactile feedback. Walking aids Walking aids are devices designed to assist individuals with mobility impairments in maintaining upright ambulation. These aids include assistive canes, crutches, walkers, and more specialized devices such as gait trainers and upright walkers. Each type of aid is designed to support users in different ways, which include improving stability, reducing lower-limb loading, and facilitating movement. Improving Stability Walking aids enhance stability by providing additional points of contact, which expand the user's range of stable center of gravity positioning. Reducing Lower-Limb Loading By transferring the load to the arms, walking aids significantly reduce the impact and static forces exerted on the affected limbs, alleviating stress and potential pain. Canes The cane or walking stick is the simplest form of walking aid. It is held in the hand and transmits loads to the floor through a shaft. The load which can be applied through a cane is transmitted through the user's hands and wrists and limited by these. Crutches A crutch also transmits loads to the ground through a shaft, but has two points of contact with the arm, at the hand and either below the elbow or below the armpit. This allows significantly greater loads to be exerted through a crutch in comparison with a cane. Canes, Crutches, and forearm crutch combinations Devices on the market today include a number of combinations for canes, crutches, and forearm crutches. These crutches have bands that encircle the forearms and handles for the patient to hold and rest their hands on to support the body weight. The forearm crutch typically gives a user the support of the cane but with additional forearm support to assist in mobility. The forearm portion helps increase balance, lateral stability and also reduces the load on the wrist. Walkers A walker (also known as a Zimmer frame) is the most stable walking aid and consists of a freestanding metal framework with three or more points of contact which the user places in front of them and then grips during movement. The points of contact may be either fixed rubber ferrules as with crutches and canes, or wheels, or a combination of both. Wheeled walkers are also known as rollators. 
Many of these walkers also come with an inbuilt seat so that the user may rest during use and with metal pouches to carry personal belongings. Walker cane hybrid A walker cane hybrid was introduced in 2012, designed to bridge the gap between a cane and a walker. The hybrid has two legs which provide lateral (side-to-side) support which a cane does not. It can be used with two hands in front of the user, similar to a walker, and provides an increased level of support compared with a cane. It can be adjusted for use with either one or two hands, at the front and at the side, as well as serving as a stair-climbing assistant. The hybrid is not designed to replace a walker, which normally has four legs and provides 4-way support using both hands. Gait trainers Another device to assist walking that has entered the market in recent years is the gait trainer. This is a mobility aid that is more supportive than the standard walker. It typically offers support that assists weight-bearing and balance. The accessories or product parts that attach to the product frame provide unweighting support and postural alignment to enable walking practice. Seated walking scooter The Walk Aid Scooter allows a user with normal balance and foot, knee or hip conditions to unload the lower extremities. The two-wheeled scooter has a bicycle-type seat and handlebars, and is manually propelled with one or both feet like a balance bicycle. This walking aid scooter provides more support than a cane and is lighter, less bulky and easier to propel than a wheelchair. Wheelchairs and scooters Wheelchairs and mobility scooters substitute for walking by providing a wheeled device on which the user sits. Wheelchairs may be either manually propelled (by the user or by an aide) or electrically powered (commonly known as a "powerchair"). There are different types of wheelchair power add-ons that turn any manual wheelchair into a power-assisted one. Mobility scooters are electrically powered, as are motorized wheelchairs. Wheelchairs and scooters are normally recommended for individuals with significant mobility or balance impairment. A registered occupational therapist or, in a few cases, a physiotherapist is able to provide objective and clinical testing to ensure proper and safe device recommendations. Stairlifts and similar devices A stairlift is a mechanical device for lifting people and wheelchairs up and down stairs. Sometimes special purpose lifts are provided elsewhere to facilitate access for those with disabilities, for example at entrances to raised bus stops in Curitiba, Brazil. A wheelchair lift is specifically designed to carry the user and the wheelchair. This can either be through the floor or along the staircase. Others Mobility aids may also include adaptive technology such as sling lifts or other patient transfer devices that help transfer users between beds and chairs or lift chairs (and other sit-to-stand devices), transfer or convertible chairs. Knee scooters help some users. As people live longer, mobility is for many about reclaiming aspects of independence which were previously denied to them. See also Accessibility Handcycle Hobcart Recumbent bicycle Transport divide References Michael W. Whittle, R (2008). "Pathological and Other Abnormal gaits", Gait Analysis, An Introduction, Butterworth Heinemann & Elsevier, (122-130). 
External links Assistive Devices and Mobility Aids: Travelers with Disabilities and Medical Conditions Regulations for travelers from the US Transportation Security Administration Mobility device Accessibility
Mobility aid
[ "Engineering" ]
1,385
[ "Accessibility", "Design" ]
15,925,628
https://en.wikipedia.org/wiki/Hepatitis%20B
Hepatitis B is an infectious disease caused by the hepatitis B virus (HBV) that affects the liver; it is a type of viral hepatitis. It can cause both acute and chronic infection. Many people have no symptoms during an initial infection. For others, symptoms may appear 30 to 180 days after becoming infected and can include a rapid onset of sickness with nausea, vomiting, yellowish skin, fatigue, yellow urine, and abdominal pain. Symptoms during acute infection typically last for a few weeks, though some people may feel sick for up to six months. Deaths resulting from acute stage HBV infections are rare. An HBV infection lasting longer than six months is usually considered chronic. The likelihood of developing chronic hepatitis B is higher for those who are infected with HBV at a younger age. About 90% of those infected during or shortly after birth develop chronic hepatitis B, while less than 10% of those infected after the age of five develop chronic cases. Most of those with chronic disease have no symptoms; however, cirrhosis and liver cancer eventually develop in about 25% of those with chronic HBV. The virus is transmitted by exposure to infectious blood or body fluids. In areas where the disease is common, infection around the time of birth or from contact with other people's blood during childhood are the most frequent methods by which hepatitis B is acquired. In areas where the disease is rare, intravenous drug use and sexual intercourse are the most frequent routes of infection. Other risk factors include working in healthcare, blood transfusions, dialysis, living with an infected person, travel in countries with high infection rates, and living in an institution. Tattooing and acupuncture led to a significant number of cases in the 1980s; however, this has become less common with improved sterilization. The viruses cannot be spread by holding hands, sharing eating utensils, kissing, hugging, coughing, sneezing, or breastfeeding. The infection can be diagnosed 30 to 60 days after exposure. The diagnosis is usually confirmed by testing the blood for parts of the virus and for antibodies against the virus. It is one of five main hepatitis viruses: A, B, C, D, and E. During an initial infection, care is based on a person's symptoms. In those who develop chronic disease, antiviral medication such as tenofovir or interferon may be useful; however, these drugs are expensive. Liver transplantation is sometimes recommended for cases of cirrhosis or hepatocellular carcinoma. Hepatitis B infection has been preventable by vaccination since 1982. As of 2022, the hepatitis B vaccine is between 98% and 100% effective in preventing infection. The vaccine is administered in several doses; after an initial dose, two or three more vaccine doses are required at a later time for full effect. The World Health Organization (WHO) recommends infants receive the vaccine within 24 hours after birth when possible. National programs have made the hepatitis B vaccine available for infants in 190 countries as of the end of 2021. To further prevent infection, the WHO recommends testing all donated blood for hepatitis B before using it for transfusion. Using antiviral prophylaxis to prevent mother-to-child transmission is also recommended, as is following safe sex practices, including the use of condoms. In 2016, the WHO set a goal of eliminating viral hepatitis as a threat to global public health by 2030. 
Achieving this goal would require the development of therapeutic treatments to cure chronic hepatitis B, as well as preventing its transmission and using vaccines to prevent new infections. An estimated 296 million people, or 3.8% of the global population, had chronic hepatitis B infections as of 2019. Another 1.5 million developed acute infections that year, and 820,000 deaths occurred as a result of HBV. Cirrhosis and liver cancer are responsible for most HBV-related deaths. The disease is most prevalent in Africa (affecting 7.5% of the continent's population) and in the Western Pacific region (5.9%). Infection rates are 1.5% in Europe and 0.5% in the Americas. According to some estimates, about a third of the world's population has been infected with hepatitis B at one point in their lives. Hepatitis B was originally known as "serum hepatitis". Signs and symptoms Acute infection with virus is associated with acute viral hepatitis, an illness that begins with general ill-health, loss of appetite, nausea, vomiting, body aches, mild fever, and dark urine, and then progresses to development of jaundice. The illness lasts for a few weeks and then gradually improves in most affected people. A few people may have a more severe form of liver disease known as fulminant hepatic failure and may die as a result. The infection may be entirely asymptomatic and may go unrecognized. Chronic infection with virus may be asymptomatic or may be associated with chronic inflammation of the liver (chronic hepatitis), leading to cirrhosis over a period of several years. This type of infection dramatically increases the incidence of hepatocellular carcinoma (HCC; liver cancer). Across Europe, hepatitis B and C cause approximately 50% of hepatocellular carcinomas. Chronic carriers are encouraged to avoid consuming alcohol as it increases their risk for cirrhosis and liver cancer. virus has been linked to the development of membranous glomerulonephritis (MGN). Symptoms outside of the liver are present in 1–10% of HBV-infected people and include serum-sickness–like syndrome, acute necrotizing vasculitis (polyarteritis nodosa), membranous glomerulonephritis, and papular acrodermatitis of childhood (Gianotti–Crosti syndrome). The serum-sickness–like syndrome occurs in the setting of acute , often preceding the onset of jaundice. The clinical features are fever, skin rash, and polyarteritis. The symptoms often subside shortly after the onset of jaundice but can persist throughout the duration of acute . About 30–50% of people with acute necrotizing vasculitis (polyarteritis nodosa) are HBV carriers. HBV-associated nephropathy has been described in adults but is more common in children. Membranous glomerulonephritis is the most common form. Other immune-mediated hematological disorders, such as essential mixed cryoglobulinemia and aplastic anemia have been described as part of the extrahepatic manifestations of HBV infection, but their association is not as well-defined; therefore, they probably should not be considered etiologically linked to HBV. Cause Transmission Transmission of virus results from exposure to infectious blood or body fluids containing blood. HBV is 50 to 100 times more infectious than human immunodeficiency virus (HIV). HBV can be transmitted through several routes of infection. In vertical transmission, HBV is passed from mother to child (MTCT) during childbirth. Without intervention, a mother who is positive for HBsAg has a 20% risk of passing the infection to her offspring at the time of birth. 
This risk is as high as 90% if the mother is also positive for HBeAg. Early life horizontal transmission can occur through bites, lesions, certain sanitary habits, or other contact with secretions or saliva containing HBV. Adult horizontal transmission is known to occur through sexual contact, blood transfusions and transfusion with other human blood products, re-use of contaminated needles and syringes. Breastfeeding after proper immunoprophylaxis does not appear to contribute to mother-to-child-transmission (MTCT) of HBV. Virology Structure virus (HBV) is a member of the hepadnavirus family. The virus particle (virion) consists of an outer lipid envelope and an icosahedral nucleocapsid core composed of core protein. These virions are 30–42 nm in diameter. The nucleocapsid encloses the viral DNA and a DNA polymerase that has reverse transcriptase activity. The outer envelope contains embedded proteins that are involved in viral binding of, and entry into, susceptible cells. The virus is one of the smallest enveloped animal viruses. The 42 nm virions, which are capable of infecting liver cells known as hepatocytes, are referred to as "Dane particles". In addition to the Dane particles, filamentous and spherical bodies lacking a core can be found in the serum of infected individuals. These particles are not infectious and are composed of the lipid and protein that forms part of the surface of the virion, which is called the surface antigens (HBsAg), and is produced in excess during the life cycle of the virus. Genome The genome of HBV is made of circular DNA, but it is unusual because the DNA is not fully double-stranded. One end of the full length strand is linked to the HBV DNA polymerase. The genome is 3020–3320 nucleotides long (for the full-length strand) and 1700–2800 nucleotides long (for the short length-strand). The negative-sense (non-coding) is complementary to the viral mRNA. The viral DNA is found in the nucleus soon after infection of the cell. The partially double-stranded DNA is rendered fully double-stranded by completion of the (+) sense strand and removal of a protein molecule from the (−) sense strand and a short sequence of RNA from the (+) sense strand. Non-coding bases are removed from the ends of the (−) sense strand and the ends are rejoined. There are four known genes encoded by the genome, called C, X, P, and S. The core protein is coded for by gene C (HBcAg), and its start codon is preceded by an upstream in-frame AUG start codon from which the pre-core protein is produced. HBeAg is produced by proteolytic processing of the pre-core protein. In some rare strains of the virus known as hepatitis B virus precore mutants, no HBeAg is present. The DNA polymerase is encoded by gene P. Gene S is the gene that codes for the surface antigen (HBsAg). The HBsAg gene is one long open reading frame but contains three in frame "start" (ATG) codons that divide the gene into three sections, pre-S1, pre-S2, and S. Because of the multiple start codons, polypeptides of three different sizes called large (the order from surface to the inside: pre-S1, pre-S2, and S ), middle (pre-S2, S), and small (S) are produced. There is a myristyl group, which plays an important role in infection, on the amino-terminal end of the preS1 part of the large (L) protein. In addition to that, N terminus of the L protein have virus attachment and capsid binding sites. 
Because of that, the N termini of half of the L protein molecules are positioned outside the membrane and the other half positioned inside the membrane. The function of the protein coded for by gene X is not fully understood but it is associated with the development of liver cancer. It stimulates genes that promote cell growth and inactivates growth regulating molecules. Pathogenesis The life cycle of virus is complex. is one of a few known pararetroviruses: non-retroviruses that still use reverse transcription in their replication process. The virus gains entry into the cell by binding to NTCP on the surface and being endocytosed. Because the virus multiplies via RNA made by a host enzyme, the viral genomic DNA has to be transferred to the cell nucleus by host proteins called chaperones. The partially double-stranded, circular viral DNA is then made fully double stranded by HBV DNA polymerase, transforming the genome into covalently closed circular DNA (cccDNA). This cccDNA serves as a template for transcription of four viral mRNAs by host RNA polymerase. The largest mRNA, (which is longer than the viral genome), is used to make the new copies of the genome and to make the capsid core protein and the viral DNA polymerase. These four viral transcripts undergo additional processing and go on to form progeny virions that are released from the cell or returned to the nucleus and re-cycled to produce even more copies. The long mRNA is then transported back to the cytoplasm where the virion P protein (the DNA polymerase) synthesizes DNA via its reverse transcriptase activity. Serotypes and genotypes The virus is divided into four major serotypes (adr, adw, ayr, ayw) based on antigenic epitopes presented on its envelope proteins, and into eight major genotypes (A–H). The genotypes have a distinct geographical distribution and are used in tracing the evolution and transmission of the virus. Differences between genotypes affect the disease severity, course and likelihood of complications, and response to treatment and possibly vaccination. There are two other genotypes I and J but they are not universally accepted as of 2015. The diversity of genotypes is not shown equally in the world. For example, A, D, and E genotypes have been seen in Africa prevalently while B and C genotypes are observed in Asia as widespread. Genotypes differ by at least 8% of their sequence and were first reported in 1988 when six were initially described (A–F). Two further types have since been described (G and H). Most genotypes are now divided into subgenotypes with distinct properties. Mechanisms virus primarily interferes with the functions of the liver by replicating in hepatocytes. A functional receptor is NTCP. There is evidence that the receptor in the closely related duck hepatitis B virus is carboxypeptidase D. The virions bind to the host cell via the preS domain of the viral surface antigen and are subsequently internalized by endocytosis. HBV-preS-specific receptors are expressed primarily on hepatocytes; however, viral DNA and proteins have also been detected in extrahepatic sites, suggesting that cellular receptors for HBV may also exist on extrahepatic cells. During HBV infection, the host immune response causes both hepatocellular damage and viral clearance. Although the innate immune response does not play a significant role in these processes, the adaptive immune response, in particular virus-specific cytotoxic T lymphocytes(CTLs), contributes to most of the liver injury associated with HBV infection. 
CTLs eliminate HBV infection by killing infected cells and producing antiviral cytokines, which are then used to purge HBV from viable hepatocytes. Although liver damage is initiated and mediated by the CTLs, antigen-nonspecific inflammatory cells can worsen CTL-induced immunopathology, and platelets activated at the site of infection may facilitate the accumulation of CTLs in the liver. Diagnosis The tests, called assays, for detection of virus infection involve serum or blood tests that detect either viral antigens (proteins produced by the virus) or antibodies produced by the host. Interpretation of these assays is complex. The surface antigen (HBsAg) is most frequently used to screen for the presence of this infection. It is the first detectable viral antigen to appear during infection. However, early in an infection, this antigen may not be present and it may be undetectable later in the infection as it is being cleared by the host. The infectious virion contains an inner "core particle" enclosing viral genome. The icosahedral core particle is made of 180 or 240 copies of the core protein, alternatively known as core antigen, or HBcAg. During this 'window' in which the host remains infected but is successfully clearing the virus, IgM antibodies specific to the core antigen (anti-HBc IgM) may be the only serological evidence of disease. Therefore, most diagnostic panels contain HBsAg and total anti-HBc (both IgM and IgG). Shortly after the appearance of the HBsAg, another antigen called e antigen (HBeAg) will appear. Traditionally, the presence of HBeAg in a host's serum is associated with much higher rates of viral replication and enhanced infectivity; however, variants of the virus do not produce the 'e' antigen, so this rule does not always hold true. During the natural course of an infection, the HBeAg may be cleared, and antibodies to the 'e' antigen (anti-HBe) will arise immediately afterwards. This conversion is usually associated with a dramatic decline in viral replication. If the host is able to clear the infection, eventually the HBsAg will become undetectable and will be followed by IgG antibodies to the surface antigen and core antigen (anti-HBs and anti HBc IgG). The time between the removal of the HBsAg and the appearance of anti-HBs is called the window period. A person negative for HBsAg but positive for anti-HBs either has cleared an infection or has been vaccinated previously. Individuals who remain HBsAg positive for at least six months are considered to be carriers. Carriers of the virus may have chronic hepatitis B, which would be reflected by elevated serum alanine aminotransferase (ALT) levels and inflammation of the liver, if they are in the immune clearance phase of chronic infection. Carriers who have seroconverted to HBeAg negative status, in particular those who acquired the infection as adults, have very little viral multiplication and hence may be at little risk of long-term complications or of transmitting infection to others. However, it is possible for individuals to enter an "immune escape" with HBeAg-negative hepatitis. PCR tests have been developed to detect and measure the amount of HBV DNA, called the viral load, in clinical specimens. These tests are used to assess a person's infection status and to monitor treatment. Individuals with high viral loads, characteristically have ground glass hepatocytes on biopsy. Prevention Vaccine Vaccines for the prevention of hepatitis B have been routinely recommended for babies since 1991 in the United States. 
The first dose is generally recommended within a day of birth. The hepatitis B vaccine was the first vaccine capable of preventing cancer, specifically liver cancer. Most vaccines are given in three doses over a course of months. A protective response to the vaccine is defined as an anti-HBs antibody concentration of at least 10 mIU/ml in the recipient's serum. The vaccine is more effective in children, and 95 percent of those vaccinated have protective levels of antibody. This drops to around 90 percent at 40 years of age and to around 75 percent in those over 60 years. The protection afforded by vaccination is long lasting even after antibody levels fall below 10 mIU/ml.
For newborns of HBsAg-positive mothers: hepatitis B vaccine alone, hepatitis B immunoglobulin alone, or the combination of vaccine plus hepatitis B immunoglobulin all prevent hepatitis B occurrence. Furthermore, the combination of vaccine plus hepatitis B immunoglobulin is superior to vaccine alone. This combination prevents HBV transmission around the time of birth in 86% to 99% of cases. Tenofovir given in the second or third trimester can reduce the risk of mother-to-child transmission by 77% when combined with hepatitis B immunoglobulin and the hepatitis B vaccine, especially for pregnant women with high hepatitis B virus DNA levels. However, there is not sufficient evidence that the administration of hepatitis B immunoglobulin alone during pregnancy reduces transmission rates to the newborn infant. No randomized controlled trial has been conducted to assess the effects of hepatitis B vaccine during pregnancy for preventing infant infection.
All those with a risk of exposure to body fluids such as blood should be vaccinated, if not already. Testing to verify effective immunization is recommended, and further doses of vaccine are given to those who are not sufficiently immunized. In 10- to 22-year follow-up studies there were no cases of hepatitis B among those with a normal immune system who were vaccinated. Only rare chronic infections have been documented. Vaccination is particularly recommended for high-risk groups including: health workers, people with chronic kidney failure, and men who have sex with men. Both types of the hepatitis B vaccine, the plasma-derived vaccine (PDV) and the recombinant vaccine (RV), are of similar effectiveness in preventing infection in both healthcare workers and chronic kidney failure groups. One difference was noticed among the health worker group: the RV intramuscular route was significantly more effective compared with the RV intradermal route of administration.
Other
In assisted reproductive technology, sperm washing is not necessary for males with hepatitis B to prevent transmission, unless the female partner has not been effectively vaccinated. In females with hepatitis B, the risk of transmission from mother to child with IVF is no different from the risk in spontaneous conception. Those at high risk of infection should be tested, as there is effective treatment for those who have the disease. Screening is recommended for those who have not been vaccinated and who fall into one of the following groups: people from areas of the world where hepatitis B occurs in more than 2% of the population, those with HIV, intravenous drug users, men who have sex with men, and those who live with someone with hepatitis B. Screening during pregnancy is recommended in the United States.
Treatment
Acute infection does not usually require treatment and most adults clear the infection spontaneously.
Early antiviral treatment may be required in the fewer than 1% of people whose infection takes a very aggressive course (fulminant hepatitis) or who are immunocompromised. On the other hand, treatment of chronic infection may be necessary to reduce the risk of cirrhosis and liver cancer. Chronically infected individuals with persistently elevated serum alanine aminotransferase, a marker of liver damage, and HBV DNA levels are candidates for therapy. Treatment lasts from six months to a year, depending on medication and genotype. Treatment duration when medication is taken by mouth, however, is more variable and usually longer than one year.
Although none of the available medications can clear the infection, they can stop the virus from replicating, thus minimizing liver damage. As of 2024, there are eight medications licensed for the treatment of hepatitis B infection in the United States. These include the antiviral medications lamivudine, adefovir, tenofovir disoproxil, tenofovir alafenamide, telbivudine, and entecavir, and the two immune system modulators interferon alpha-2a and PEGylated interferon alpha-2a. In 2015, the World Health Organization recommended tenofovir or entecavir as first-line agents. Those with current cirrhosis are in most need of treatment.
The use of interferon, which requires injections daily or thrice weekly, has been supplanted by long-acting PEGylated interferon, which is injected only once weekly. However, some individuals are much more likely to respond than others, and this might be because of the genotype of the infecting virus or the person's heredity. The treatment reduces viral replication in the liver, thereby reducing the viral load (the amount of virus particles as measured in the blood). Response to treatment differs between the genotypes. Interferon treatment may produce an e antigen seroconversion rate of 37% in genotype A but only a 6% seroconversion in type D. Genotype B has similar seroconversion rates to type A, while type C seroconverts only in 15% of cases. Sustained e antigen loss after treatment is ~45% in types A and B but only 25–30% in types C and D.
It seems unlikely that the disease will be eliminated by 2030, the goal set in 2016 by the WHO. However, progress is being made in developing therapeutic treatments. In 2010, the Hepatitis B Foundation reported that 3 preclinical and 11 clinical-stage drugs were under development, based on largely similar mechanisms. In 2020, they reported that there were 17 preclinical- and 32 clinical-stage drugs under development, using diverse mechanisms.
Prognosis
Hepatitis B virus infection may be either acute (self-limiting) or chronic (long-standing). Persons with self-limiting infection clear the infection spontaneously within weeks to months. Children are less likely than adults to clear the infection. More than 95% of people who become infected as adults or older children will stage a full recovery and develop protective immunity to the virus. However, this drops to 30% for younger children, and only 5% of newborns that acquire the infection from their mother at birth will clear the infection. This population has a 40% lifetime risk of death from cirrhosis or hepatocellular carcinoma. Of those infected between the ages of one and six, 70% will clear the infection.
Hepatitis D (HDV) can occur only with a concomitant hepatitis B infection, because HDV uses the HBV surface antigen to form a capsid. Co-infection with hepatitis D increases the risk of liver cirrhosis and liver cancer. Polyarteritis nodosa is more common in people with hepatitis B infection.
Cirrhosis
A number of different tests are available to determine the degree of cirrhosis present. Transient elastography (FibroScan) is the test of choice, but it is expensive. Aspartate aminotransferase to platelet ratio index may be used when cost is an issue.
Reactivation
Hepatitis B virus DNA remains in the body after infection, and in some people, including those who do not have detectable HBsAg, the disease recurs. Although rare, reactivation is seen most often following alcohol or drug use, or in people with impaired immunity. HBV goes through cycles of replication and non-replication. Approximately 50% of overt carriers experience acute reactivation. Males with a baseline ALT of 200 IU/L are three times more likely to develop a reactivation than people with lower levels. Although reactivation can occur spontaneously, people who undergo chemotherapy have a higher risk. Immunosuppressive drugs favor increased HBV replication while inhibiting cytotoxic T cell function in the liver. The risk of reactivation varies depending on the serological profile; those with detectable HBsAg in their blood are at the greatest risk, but those with only antibodies to the core antigen are also at risk. The presence of antibodies to the surface antigen, which are considered to be a marker of immunity, does not preclude reactivation. Treatment with prophylactic antiviral drugs can prevent the serious morbidity associated with HBV disease reactivation.
Epidemiology
Approximately 254 million people had chronic HBV infection as of 2022. Another 1.2 million cases of acute HBV infection also occurred that year. Regional prevalences across the globe range from around 7.5% in Africa to 0.5% in the Americas. The primary method of HBV transmission and the prevalence of chronic HBV infection in specific regions often correspond with one another. In populations where HBV infection rates are 8% or higher, which are classified as high prevalence, vertical transmission (usually occurring during birth) is most common, though rates of early childhood transmission can also be significant among these populations. In 2021, 19 African countries had infection rates ranging between 8% and 19%, placing them in the high prevalence category. High prevalence of HBV also exists in Mongolia. In moderate prevalence areas, where 2–7% of the population is chronically infected, the disease is predominantly spread horizontally, often among children, but also vertically. China's HBV infection rate is at the higher end of the moderate prevalence classification, with an infection rate of 6.89% as of 2019. HBV prevalence in India is also moderate, with studies placing India's infection rate between 2% and 4%. Countries with low HBV prevalence include Australia (0.9%), those in the WHO European Region (which average 1.5%), and most countries in North and South America (which average 0.28%). In the United States, an estimated 0.26% of the population was living with HBV infection as of 2018.
History
Findings of HBV DNA in ancient human remains have shown that HBV has infected humans for at least ten millennia, both in Eurasia and in the Americas. This disproved the belief that hepatitis B originated in the New World and spread to Europe around the 16th century. Hepatitis B virus subgenotype C4 is exclusively present in Australian aborigines, suggesting an ancient origin as much as 50,000 years old.
However, analyses of ancient HBV genomes suggested that the most recent common ancestor of all known human HBV strains was dated to between 20,000 and 12,000 years ago, pointing to a more recent origin for all HBV genotypes. The evolution of HBV in humans was shown to reflect known events of human history, such as the first peopling of the Americas during the late Pleistocene and the Neolithic transition in Europe. Ancient DNA studies have also shown that some ancient hepatitis viral strains still infect humans, while other strains became extinct.
The earliest record of an epidemic caused by hepatitis B virus was made by Lurman in 1885. An outbreak of smallpox occurred in Bremen in 1883 and 1,289 shipyard employees were vaccinated with lymph from other people. After several weeks, and up to eight months later, 191 of the vaccinated workers became ill with jaundice and were diagnosed with serum hepatitis. Other employees who had been inoculated with different batches of lymph remained healthy. Lurman's paper, now regarded as a classical example of an epidemiological study, proved that contaminated lymph was the source of the outbreak. Later, numerous similar outbreaks were reported following the introduction, in 1909, of hypodermic needles that were used, and, more importantly, reused, for administering Salvarsan for the treatment of syphilis. The largest recorded outbreak of hepatitis B was the infection of up to 330,000 American soldiers during World War II. The outbreak has been blamed on a yellow fever vaccine made with contaminated human blood serum; after receiving the vaccinations, about 50,000 soldiers developed jaundice.
The virus was not discovered until 1966, when Baruch Blumberg, then working at the National Institutes of Health (NIH), discovered the Australia antigen (later known to be hepatitis B surface antigen, or HBsAg) in the blood of Aboriginal Australian people. Although a virus had been suspected since the research published by Frederick MacCallum in 1947, David Dane and others discovered the virus particle in 1970 by electron microscopy. In 1971, the FDA issued its first-ever blood supply screening order to blood banks. By the early 1980s the genome of the virus had been sequenced, and the first vaccines were being tested.
Society and culture
World Hepatitis Day, observed 28 July, aims to raise global awareness of hepatitis B and hepatitis C and encourage prevention, diagnosis, and treatment. It has been led by the World Hepatitis Alliance since 2007, and in May 2010 it received global endorsement from the World Health Organization.
See also
Infectious causes of cancer
Oncovirus
References
External links
Sexually transmitted diseases and infections Virus-related cutaneous conditions Infectious causes of cancer Vaccine-preventable diseases
Hepatitis B
[ "Biology" ]
6,654
[ "Vaccination", "Vaccine-preventable diseases" ]
15,925,983
https://en.wikipedia.org/wiki/Tablet%20press
A tablet press is a mechanical device that compresses powder into tablets of uniform size and weight. A tablet press can be used to manufacture tablets of a wide variety of materials, including pharmaceuticals, nutraceuticals, cleaning products, industrial pellets and cosmetics. To form a tablet, the granulated powder material must be metered into a cavity formed by two punches and a die, and then the punches must be pressed together with great force to fuse the material together. A tablet is formed by the combined pressing action of two punches and a die. In the first step of a typical operation, the bottom punch is lowered in the die creating a cavity into which the granulated feedstock is fed. The exact depth of the lower punch can be precisely controlled to meter the amount of powder that fills the cavity. The excess is scraped from the top of the die, and the lower punch is drawn down and temporarily covered to prevent spillage. Then, the upper punch is brought down into contact with the powder as the cover is removed. The force of compression is delivered by high pressure compression rolls which fuse the granulated material together into a hard tablet. After compression, the lower punch is raised to eject the tablet. Tablet tooling design is critical to ensuring a robust tablet compression process. Considerations when designing pharmaceutical tablet compression tool design include tooling set, head flat, top head angle, top head radius, head back angle, and punch shank. As well as ensuring a single dose of drug, the tablet tooling is also critical in ensuring the size, shape, embossing and other physical characteristics of the tablet that are required for identification. There are 2 types of tablet presses: single-punch and rotary tablet presses. Most high-speed tablet presses take the form of a rotating turret that holds any number of punches. As they rotate around the turret, the punches come into contact with cams which control the punch's vertical position. Punches and dies are usually custom made for each application, and can be made in a wide variety of sizes, shapes, and can be customized with manufacturer codes and scoring lines to make tablets easier to break. Depending on tablet size, shape, material, and press configuration, a typical modern press can produce from 250,000 to over 1,700,000 tablets an hour. See also Tablet (pharmacy) References Machines Drug manufacturing
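Since the dose is set purely by how far the lower punch drops before the die is scraped level, the required fill depth follows from the target tablet mass, the bulk density of the granulation, and the die diameter (mass = density × area × depth). The Python sketch below is a back-of-the-envelope illustration of that relationship, not press control software; the tablet mass, bulk density, die diameter, station count, and turret speed are hypothetical example values, and real presses are calibrated empirically because powders do not settle at a single fixed density.

import math

def fill_depth_mm(tablet_mass_mg, bulk_density_g_per_ml, die_diameter_mm):
    """Lower-punch drop needed so the die cavity holds the target mass.

    Treats the cavity as a cylinder filled at bulk density.
    1 g/ml equals 1 mg/mm^3, so the units work out directly.
    """
    area_mm2 = math.pi * (die_diameter_mm / 2.0) ** 2
    return tablet_mass_mg / (bulk_density_g_per_ml * area_mm2)

def tablets_per_hour(stations, turret_rpm, tips_per_punch=1):
    """Ideal rotary-press output: every station ejects one tablet per revolution."""
    return stations * turret_rpm * 60 * tips_per_punch

# Hypothetical example: 500 mg tablet, 0.6 g/ml granulation, 10 mm round die
print(f"fill depth ~ {fill_depth_mm(500, 0.6, 10):.1f} mm")    # ~10.6 mm
print(f"output ~ {tablets_per_hour(36, 60):,} tablets/hour")   # 129,600
# The higher outputs quoted in the article come from more stations, faster
# turrets, and multi-tip tooling.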
Tablet press
[ "Physics", "Technology", "Engineering" ]
481
[ "Physical systems", "Machines", "Mechanical engineering" ]
15,928,042
https://en.wikipedia.org/wiki/The%20E%20and%20B%20Experiment
The E and B Experiment (EBEX) was an experiment that measured the cosmic microwave background radiation of a part of the sky during two sub-orbital (high-altitude) balloon flights, taking large, high-fidelity images of the CMB polarization anisotropies with a telescope that flew at high altitude. The altitude of the telescope made it possible to reduce the atmospheric absorption of microwaves, which allowed a massive cost reduction compared to a satellite probe; however, only a small part of the sky could be scanned, and for a shorter duration than a typical satellite mission such as WMAP. The first flight was an engineering flight over North America in 2009. For the science flight, EBEX was launched on 29 December 2012, near McMurdo Station in Antarctica. It circled around the South Pole using the polar vortex winds before landing on 24 January 2013, some distance from McMurdo.
Instrumentation
EBEX consists of a 1.5 m Dragone-type telescope that provides a resolution of 8 arcminutes in frequency bands centered on 150, 250, and 410 GHz. Polarimetry is achieved with a continuously rotating achromatic half-wave plate supported by a superconducting magnetic bearing and a fixed wire grid polarizer. The wire grid is mounted at 45 degrees to the incoming light beam and transmits one polarization state while reflecting the other. Each polarization state is subsequently detected by its own focal plane with a 6 degree instantaneous field of view on the sky. Each of the focal planes contains up to 960 transition-edge sensors read out with frequency-domain SQUID multiplexing.
Temporary disappearance
The EBEX telescope was reported missing in May 2012, while in transit from the University of Minnesota to the NASA Columbia Scientific Balloon Facility in Palestine, Texas. The driver of the truck said that the trailer had been stolen while parked at a motel in Dallas. Scientists and employees of the trucking company searched the area and found the missing trailer parked at a truck wash near Hutchins, Texas. The trailer had been opened, but no scientific equipment had been stolen and the telescope was undamaged.
Flight
EBEX launched from Williams Field on the Antarctic coast on 29 December 2012.
See also
Cosmic microwave background experiments
Observational cosmology
References
External links
Main (UMN) Site
Miller CMB group
Physics experiments Radio astronomy Cosmic microwave background experiments Balloon-borne telescopes Astronomical experiments in the Antarctic
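As a rough plausibility check (not a statement about EBEX's actual optical design), the quoted 8 arcminute resolution can be compared with the simple diffraction limit 1.22λ/D for the 1.5 m aperture; the Python sketch below does that calculation. The real beam width also depends on how the aperture is illuminated, so treat the numbers as order-of-magnitude only.

import math

C = 299_792_458.0   # speed of light, m/s
APERTURE_M = 1.5    # primary aperture quoted in the article

def diffraction_limit_arcmin(freq_ghz, diameter_m=APERTURE_M):
    """Airy-pattern angular resolution 1.22 * lambda / D, in arcminutes."""
    wavelength_m = C / (freq_ghz * 1e9)
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 60.0

for band_ghz in (150, 250, 410):   # the three EBEX bands
    print(f"{band_ghz} GHz: ~{diffraction_limit_arcmin(band_ghz):.1f} arcmin")
# 150 GHz gives roughly 5.6 arcmin, the same order as the 8 arcmin figure above.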
The E and B Experiment
[ "Physics", "Astronomy" ]
492
[ "Radio astronomy", "Experimental physics", "Physics experiments", "Astronomical sub-disciplines" ]
15,928,103
https://en.wikipedia.org/wiki/Annonacin
Annonacin is a chemical compound with toxic effects, especially in the nervous system, found in some fruits such as the paw paw, custard apples, soursop, and others from the family Annonaceae. It is a member of the class of compounds known as acetogenins. Annonacin-containing fruit products are regularly consumed throughout the West Indies for their traditional medicine uses.
Traditional medicine
Historically, plants and fruits of Annonaceae (particularly Annona muricata and Annona squamosa) have been consumed in various forms throughout the West Indies, usually as hot water extracts of leaves. These annonacin-containing herbal teas are thought to be useful in folk medicine. On the Caribbean island of Guadeloupe, such teas are consumed mainly for their sedative qualities. Use of annonacin products in Guadeloupe often lasts from early childhood through old age, and daily consumption is not uncommon. It was discovered in Guadeloupe that atypical Parkinsonism was predominant in elderly males who regularly consumed annonacin-containing herbal teas. Of 87 people with Parkinsonism transferred to one clinic between 1996 and 1998, 25% had Parkinson's, while 36% had progressive supranuclear palsy and 39% had atypical Parkinsonism.
Neurotoxicity
Annonacin is a disabling and potentially lethal neurotoxin. Like other acetogenins, it is a mitochondrial complex I (NADH dehydrogenase) inhibitor. As NADH dehydrogenase is responsible for the conversion of NADH to NAD+ as well as the establishment of a proton gradient in the mitochondria, annonacin disables the ability of a cell to generate ATP through oxidative phosphorylation, leading to cell apoptosis or necrosis. The LC50 of annonacin for dopaminergic neurons is 0.018 μM, and it is the damage done to these neurons that results in the neurodegenerative effects of the toxin. Annonacin is 100 times more toxic than 1-methyl-4-phenylpyridinium (MPP+), another potent mitochondrial complex I inhibitor. Compared to MPP+, annonacin produces a wider and more dramatic loss of neurons, not only in the nigro-striatal system, but in the basal ganglia and brainstem nuclei as well.
Annonacin has been linked to the abnormally high incidence of progressive supranuclear palsy as well as atypical Parkinsonism on the Caribbean island of Guadeloupe, where consumption of fruits such as the soursop (Annona muricata) is common. An average-sized soursop fruit contains 15 mg of annonacin, while a can of commercial nectar contains 36 mg and a cup of infusion, 140 μg. Studies in rodents indicate that consumption of annonacin (3.8 and 7.6 mg per kg per day for 28 days) caused brain lesions consistent with Parkinson's disease. An adult who consumes a fruit or can of nectar daily over the course of a year is estimated to ingest the same amount of annonacin that induced brain lesions in the rodents receiving purified annonacin intravenously.
References
Furanones Tetrahydrofurans Polyols Polyketides Neurotoxins NADH dehydrogenase inhibitors Respiratory toxins Plant toxins
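The exposure comparison in the final paragraph above is straightforward arithmetic, and the short Python sketch below reproduces it using only the quantities quoted in the article. The 70 kg adult body mass and the naive linear scaling of the rodent dose by weight are simplifying assumptions added here for illustration; they ignore route of administration and interspecies differences and are not part of the cited studies.

# Quantities taken from the article text
ANNONACIN_PER_FRUIT_MG = 15.0       # average-sized soursop fruit
ANNONACIN_PER_NECTAR_CAN_MG = 36.0  # can of commercial nectar
RODENT_DOSE_MG_PER_KG_DAY = 3.8     # lower lesion-inducing dose in rodents
RODENT_STUDY_DAYS = 28

# Illustrative assumptions (not from the source)
ADULT_MASS_KG = 70.0
DAYS_PER_YEAR = 365

yearly_fruit_mg = ANNONACIN_PER_FRUIT_MG * DAYS_PER_YEAR         # about 5,475 mg
yearly_nectar_mg = ANNONACIN_PER_NECTAR_CAN_MG * DAYS_PER_YEAR   # about 13,140 mg

# Total rodent lesion-inducing dose, scaled linearly to a 70 kg adult
rodent_total_mg = RODENT_DOSE_MG_PER_KG_DAY * RODENT_STUDY_DAYS * ADULT_MASS_KG  # about 7,448 mg

print(f"One fruit per day for a year:      {yearly_fruit_mg:,.0f} mg")
print(f"One nectar can per day for a year: {yearly_nectar_mg:,.0f} mg")
print(f"Rodent lesion dose, 70 kg scaled:  {rodent_total_mg:,.0f} mg")

Under these assumptions the yearly dietary totals land in the same range as the scaled rodent dose, which is the basis of the comparison made in the text.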
Annonacin
[ "Chemistry" ]
721
[ "Biomolecules by chemical classification", "Cellular respiration", "Chemical ecology", "Natural products", "Respiratory toxins", "Plant toxins", "Polyketides", "Neurochemistry", "Neurotoxins" ]
15,928,363
https://en.wikipedia.org/wiki/Broadcast-safe
Broadcast-safe video (broadcast legal or legal signal) is a term used in the broadcast industry to define video and audio compliant with the technical or regulatory broadcast requirements of the target area or region the feed might be broadcasting to. In the United States, the Federal Communications Commission (FCC) is the regulatory authority; in most of Europe, standards are set by the European Broadcasting Union (EBU).
Broadcast safe video
Broadcast safe standard-definition video
Broadcast safe 625 video
Broadcast safe standards for 625 lines of standard-definition video (inaccurately referred to as PAL, a colour encoding that is usually used with such systems) are:
Common name = 625/50i (576i)
Commonly used digital SD SDI baseband signal = SMPTE 259M-C, 270 Mbit/s bitrate
Commonly used No. of Vertical Lines = 625 (576 visible active video)
Commonly used Frame rate = 25 Hz (25 interlaced frame/s)
Commonly used TV Resolution = 720 × 576 (576i)
Black levels = 0 mV or 0 IRE
White levels (Chrominance amplitude) = 700 mV p-p or 100 IRE at the 100% intensity setting, which corresponds to 100.0.100.0 SMPTE color bars; 75% intensity corresponds to 100.0.75.0 color bars, also referred to as EBU Bars.
Variants
Broadcast safe 525 video
Broadcast safe standards for 525 lines of standard-definition (System M, NTSC, NTSC-J, PAL-M) video are:
Common name = 525/60i (480i)
Commonly used digital SD SDI baseband signal = SMPTE 259M-C, 270 Mbit/s bitrate
Commonly used Frame rate = 30 frame/s black and white, 29.97 interlaced frame/s color
Black level = 7.5 IRE for NTSC, 0 IRE for NTSC-J in Japan and PAL-M in Brazil
Blanking level = 0 IRE
White levels = 100 IRE, 714 mV
Maximum signal level = 120 IRE
Minimum signal level = -20 IRE
Variants
Broadcast safe High Definition video
Digital broadcasting is very different from analog. The NTSC and PAL standards describe both transmission of the signal and how the electrical signal is converted into an image. In digital, there is a separation between the subject of how data is to be transmitted from tower to TV, and the subject of what content that data might contain. While data transmission is likely to be a fixed and consistent affair, the content could vary from High Definition video one moment, to SD multicasting the next, and even to non-video datacasting. For ATSC 1.0, 8VSB transmits the data, while MPEG-2 encodes pictures and sound.
Broadcast safe audio
Broadcast engineers in North America usually line up their audio gear to a nominal reference level of 0 dB on a VU meter aligned to +4 dBu or -20 dBFS, in Europe equating to roughly +4 dBm or -18 dBFS. Peak signal levels must not exceed the nominal level by more than +10 dB. Broadcast audio as a rule must be as free as possible of Gaussian noise; that is to say, it must be as far from the noise floor as is reasonably possible considering the storage or transmission medium. Broadcast audio must have a good signal-to-noise ratio, where speech or music is a bare minimum of 16 dB above the noise of the recording or transmission system. For audio that has a much poorer signal-to-noise ratio (like cockpit voice recorders), sonic enhancement is recommended.
Non-standard video
Although almost any video gear can create problems when broadcast, equipment aimed at consumers sometimes produces video signals which are not broadcast safe. Usually this is to reduce the cost of the gear, since a non-standard video signal in the home might not create the problems that one might find in a broadcast facility.
Potential flaws exist with:
VHS and 8 mm: Consumer devices generally lack time base correction, which may cause problems with genlock and sync with some analogue and digital broadcast equipment. Consumer analogue video systems have greater system noise and lower chrominance and luminance than is normal for standard definition TV. As a general broadcast engineering rule, all analogue videotape origin material should be genlocked before transmission, but this is not mandatory or necessary for all conditions. All analogue videotape by default is broadcast safe under normal playing conditions.
Older videogame systems: Video game consoles before the sixth generation and 8-bit home computers generated a video signal lacking the half scan line needed to make interlace happen. This subtle simplification caused NTSC sets to scan 240p/60 instead of 480i60, with similar results for PAL. While this actually improved picture quality for the kind of low-definition images that videogames of this era generated, such a signal modification could cause problems in a broadcast environment, as the signal behaviour is outside the original television system specifications. Genlocking—but not timebase correction—is the recommended broadcast engineering solution.
Computer video signals: Computer video can be set up to run at many different frame or field rates, ranging from 50 frame/s to more than 240 frame/s. Computer video is generally progressive by default, but many interlaced modes exist. A scan converter is typically needed to convert these signals to one of the many acceptable broadcast standards, such as 59.94 Hz or 50 Hz. This type of conversion typically degrades the quality of the broadcast image, usually resulting in either "motion artifacts" or a lower resolution. It is recommended that the display rate be set to equal the target television rate if possible.
In digital television only environments
In nations that have fully converted to digital television, broadcast safe analogue television takes on a slightly different meaning. All broadcasting systems will have been mostly converted to digital only outputs, leaving fewer entry points for analogue television signals. What this means is that all devices that feed to the television transmitter must take in and feed standard analogue television signals into the transmission chain. Mostly it is up to the switcher to notify the programmer if there is non-broadcast safe video. However, due to the limitations of many switchers for DTV and HDTV, it ultimately is up to the automation systems to alert the programmer of non-broadcast safe video inputs. As a matter of broadcast engineering practice, 4:3 analogue television signals will always pose the most problems with broadcast safe compliance. The use of portable and cheap timebase-genlock systems for analogue television inputs in the digital television studio will be clearly mandatory for the next 50 years.
See also
576i
480i
VU meter
Peak programme meter
SMPTE color bars
ProcAmp
Safe area
References
Broadcast engineering ITU-R recommendations Television technology Television terminology
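As a minimal illustration of what a "legal signal" check means in practice, the Python sketch below tests luminance samples (in IRE) against the 525-line excursion limits listed above (-20 IRE minimum, 120 IRE maximum) and then clamps anything out of range. Real broadcast legalizers also handle chroma amplitude, RGB gamut, and filtering, so this is only a toy example; the sample values are hypothetical.

# Limits taken from the 525-line table above
MIN_IRE = -20.0    # minimum signal level
MAX_IRE = 120.0    # maximum signal level

def check_levels(samples_ire):
    """Return (is_legal, out_of_range_samples) for luminance samples in IRE."""
    violations = [s for s in samples_ire if s < MIN_IRE or s > MAX_IRE]
    return (not violations, violations)

def clamp_levels(samples_ire):
    """Crude 'legalizer': clamp every sample into the allowed excursion range."""
    return [min(max(s, MIN_IRE), MAX_IRE) for s in samples_ire]

frame = [0.0, 7.5, 55.0, 100.0, 112.0, 131.0, -25.0]   # hypothetical samples
legal, bad = check_levels(frame)
print("legal" if legal else f"out of range: {bad}")
print("clamped:", clamp_levels(frame))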
Broadcast-safe
[ "Technology", "Engineering" ]
1,387
[ "Information and communications technology", "Broadcast engineering", "Electronic engineering", "Television technology" ]
15,928,614
https://en.wikipedia.org/wiki/Water%2C%20Sanitation%20and%20Hygiene%20Monitoring%20Program
The Water, Sanitation and Hygiene Monitoring Program or WaSH MP is a local initiative that is responsible for monitoring the enduring crisis in the water sector in the Palestinian territories (oPt).
Overview
In a region already suffering severe water stress, the ongoing political, economic and social crisis in the oPt has resulted in near catastrophic consequences for the water, sanitation and hygiene (WaSH) situation. Local and international non-governmental organizations (NGOs) working in the water sector, in tandem with the Palestinian Water Authority (PWA), are trying, under restrictive political conditions and within limited budgets, to ensure that all Palestinians are able to access sufficient water supplies and sanitation services.
History
The Palestinian Hydrology Group (PHG), as the leading Palestinian NGO working in the water sector, has undertaken the responsibility for initiating the program. In June 2002 the WaSH MP was launched in response to the urgent need for increased information, resources and action related to the deteriorating WaSH situation in the West Bank and Gaza Strip. The need for quantitative data to support ongoing advocacy and humanitarian initiatives by donors, international NGOs, and local NGOs working in the water sector in the oPt has been the main driving force behind the initiation and further development of the WaSH MP.
Objective
The main objective of the WaSH MP is to respond to the water crisis in the oPt by increasing local and international awareness of the WaSH situation while further encouraging mobilization around the emergency needs of the most vulnerable communities. Additionally, the hope is that this will help to stimulate political and environmental change through the use of timely and pertinent information (data) in order to help remedy the dire WaSH situation. These efforts are in line with achieving the UN Millennium Development Goals (MDGs) in relation to water and sanitation (UN MDG 7 – Target 10). As part of the WaSH MP, PHG has taken the responsibility for monitoring the extent to which Target 10 can be realized under these deteriorating conditions in the oPt, and for identifying the limitations in its achievement. In the process of identifying the main constraints facing the realization of this goal, and in addressing water issues and crises afflicting Palestinian communities in the West Bank and Gaza Strip, reliable water related data collection has become of paramount importance.
Program Scope
In the current phase of the program, the WaSH MP has collected data from 660 of the 708 Palestinian communities and has disseminated the information through annual, monthly and weekly reports. This program covers the overwhelming majority of communities in the oPt and spans all governorates throughout the West Bank and Gaza Strip. A total of 660 communities are now surveyed during each annual monitoring period. The remaining 48 communities which are not surveyed as part of the ongoing monitoring process are left out for various reasons, including: a religious locality with limited access, East Jerusalem communities (limited access to municipal information), no people due to Israeli planned transfer, a seasonal community, or a community with services that are an integral part of another locality.
Program Indicators
Information quality of WaSH MP data outputs is defined in terms of its "fitness for use" by the end user.
The six dimensions of data quality are relevance, accuracy, timeliness, accessibility, interpretability, and coherence. The information is collected from the field, by means of a standardized questionnaire, and is checked as part of a quality assurance process by Technical Field Monitors (TFMs) who are located in the West Bank and Gaza Strip. In order to provide this service the WaSH MP has further refined its data criteria by creating twelve indicators. The twelve indicators currently used are as follows: status of households with year-round access to a water source, per capita water supply, wastewater network coverage, cesspit and septic tank coverage, availability of a solid waste collection system, prevalence of water-borne diseases, status of operation and maintenance of water supply facilities, cost recovery for water supply services, unaccounted-for water within the water supply system, monthly household income spent on water supply, monthly household income spent on sanitation, as well as a compilation of major community problems and needs. The commitment on behalf of the WaSH MP to the continuous improvement of data will ensure that international organizations, donors, INGOs, NGOs, academics and all interested persons receive WaSH-related data that meets the growing needs of the Palestinian people, as well as the development and humanitarian sectors.
Program Deliverables
The WaSH MP has been alerting organizations working in the water sector in the oPt to problems that require urgent and longer-term attention. Furthermore, the WaSH MP has striven to educate those outside the Palestinian water sector, nationally and internationally, about the WaSH crisis in the oPt. The program also provides a direct channel for WaSH related information regarding the most vulnerable areas to be disseminated to the humanitarian sector and the international community in order for them to stay abreast of emergency needs. As part of the program deliverables, an annual report is published along with quarterly needs-assessment reports, monthly data sets, and pilot monitoring of selected Palestinian communities. Additionally, emergency alerts are posted on an as-needed basis and are identified as part of the continuous monitoring process. The WaSH MP website comprises an archive of all published material, including reports, alerts and photographs, to serve as a reference source.
Obstacles
In the past, WaSH related data collection in the oPt has been difficult to source, disparate in nature, and generally incomplete. The data available failed to describe, accurately, the reality on the ground for many communities — much less has it served as a tool for improving the dissemination of critical information. It also failed to provide a comprehensive indication of the vulnerability of numerous communities, and could not be relied upon to assess whether these communities had the capacity and coping mechanisms to solve WaSH related problems. This can be attributed to the difficulties in acquiring a consistent source of data while under occupation. The unpredictable nature of life in the oPt is primarily due to the lack of control Palestinians have over their own affairs. This creates numerous obstacles to planning and implementing a monitoring program.
Support
In the past, financial support has been awarded by Oxfam-GB and UNICEF.
References
1. United Nations Millennium Development Goals (MDGs).
2. UN MDG 7 – Target 10: United Nations Millennium Development Goal 7 aims to ensure environmental sustainability. Among the targets related to Goal 7 is Target 10: "To cut in half, by 2015, the proportion of people without sustainable access to safe drinking water and basic sanitation."
3. The Palestinian Central Bureau of Statistics (PCBS) identified 708 communities within the oPt as part of the only comprehensive census in 1997. PCBS.ORG
External links
Water, Sanitation and Hygiene Monitoring Program (WaSH MP) Website
Palestinian Central Bureau of Statistics (PCBS) Website
Hygiene Sewerage Waste management in the Palestinian Territories Water treatment
Water, Sanitation and Hygiene Monitoring Program
[ "Chemistry", "Engineering", "Environmental_science" ]
1,378
[ "Water treatment", "Water pollution", "Sewerage", "Environmental engineering", "Water technology" ]
15,929,035
https://en.wikipedia.org/wiki/Telavancin
Telavancin (trade name Vibativ by Cumberland Pharmaceuticals) is a bactericidal lipoglycopeptide for use in MRSA or other Gram-positive infections. Telavancin is a semi-synthetic derivative of vancomycin. The FDA approved the drug in September 2009 for complicated skin and skin structure infections (cSSSI), and in June 2013 for hospital-acquired and ventilator-associated bacterial pneumonia caused by Staphylococcus aureus.
History
On 19 October 2007, the US Food and Drug Administration (FDA) issued an approvable letter for telavancin. Its developer, Theravance, submitted a complete response to the letter, and the FDA assigned a Prescription Drug User Fee Act (PDUFA) target date of 21 July 2008. On 19 November 2008, an FDA anti-infective drug advisory committee concluded that they would recommend telavancin be approved by the FDA. The FDA approved the drug on 11 September 2009 for complicated skin and skin structure infections (cSSSI). Theravance also submitted telavancin to the FDA in a second indication, nosocomial pneumonia, sometimes referred to as hospital-acquired pneumonia, or HAP.
On 30 November 2012, an FDA advisory panel endorsed approval of a once-daily formulation of telavancin for nosocomial pneumonia when other alternatives are not suitable. However, telavancin did not win the advisory committee's recommendation as first-line therapy for this indication. The committee indicated that the trial data did not provide "substantial evidence" of telavancin's safety and efficacy in hospital-acquired pneumonia, including ventilator-associated pneumonia caused by the Gram-positive organisms Staphylococcus aureus and Streptococcus pneumoniae. On 21 June 2013, the FDA gave approval for telavancin to treat patients with hospital-acquired pneumonia, but indicated it should be used only when alternative treatments are not suitable. FDA staff had indicated telavancin has a "substantially higher risk for death" for patients with kidney problems or diabetes compared to vancomycin.
On 11 March 2013, Clinigen Group plc and Theravance, Inc. announced that they had entered into an exclusive commercialization agreement in the European Union (EU) and certain other countries located in Europe for VIBATIV® (telavancin) for the treatment of nosocomial pneumonia (hospital-acquired), including ventilator-associated pneumonia, known or suspected to be caused by methicillin-resistant Staphylococcus aureus (MRSA) when other alternatives are not suitable.
Mechanism of action
Like vancomycin, telavancin inhibits bacterial cell wall synthesis by binding to the D-Ala-D-Ala terminus of the peptidoglycan in the growing cell wall (see Pharmacology and chemistry of vancomycin). In addition, it disrupts bacterial membranes by depolarization.
Adverse effects
Common but harmless adverse effects include nausea, vomiting, constipation, and headache. Telavancin had a higher rate of kidney failure than vancomycin in two clinical trials. It showed teratogenic effects in animal studies.
Interactions
Telavancin inhibits the liver enzymes CYP3A4 and CYP3A5. No data regarding the clinical relevance are available.
References
Antibiotics Halogen-containing natural products Astellas Pharma Chloroarenes
Telavancin
[ "Biology" ]
717
[ "Antibiotics", "Biocides", "Biotechnology products" ]
15,929,459
https://en.wikipedia.org/wiki/Arabic%20mile
The Arab, Arabic, or Arabian mile (al-mīl) was a historical Arabic unit of length. Its precise length is disputed, with estimates ranging from roughly 1.67 km to about 2 km. It was used by medieval Arab geographers and astronomers. The predecessor of the modern nautical mile, it extended the Roman mile to fit an astronomical approximation of 1 minute of an arc of latitude measured along a north–south meridian. The distance between two pillars whose latitudes differed by 1 degree in a north–south direction was measured using sighting pegs along a flat desert plain.
There were 4,000 cubits in an Arabic mile. If al-Farghani used the legal cubit as his unit of measurement, then an Arabic mile was 1,995 meters long. If he used al-Ma'mun's surveying cubit, it was 1,925 meters long. During the Umayyad period (661–750), the "Umayyad mile" was roughly equivalent to about 2 biblical miles.
Al-Ma'mun's arc measurement
Around 830 AD, Caliph Al-Ma'mun commissioned a group of Muslim astronomers and Muslim geographers to perform an arc measurement from Tadmur (Palmyra) to Raqqa, in modern Syria. They found the cities to be separated by one degree of latitude and the corresponding meridian arc distance to be 66⅔ Arabic miles, and thus calculated the Earth's circumference to be 24,000 Arabic miles (66⅔ × 360). Using this measurement, and taking the Earth's meridional circumference as 40,007.863 km, the Arabic mile works out to a little more than 1,666.994 metres. With Firuzabadi stating in his famous dictionary that a mile equals 3000 old dhira (i.e. cubits), this makes the dhira about 0.5556647 metres, which is consistent with the tradition that the Kaaba's height is 27 dhira and with its current height of 15 metres.
Another estimate given by his astronomers was 56⅔ Arabic miles per degree, which corresponds to a circumference very close to the currently accepted value.
See also
Ancient Arabic units of measurement
mile
Biblical mile
Notes
Bibliography
Paul Lunde. "Al-Faraghani and the Short Degree." The Middle East and the Age of Discovery, Aramco World Magazine Exhibition Issue, 43:3, pp. 15–17.
Obsolete units of measurement Geography in the medieval Islamic world Units of length
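The conversions quoted above are simple arithmetic and can be checked directly; the Python sketch below reproduces them from the figures given in the text (66⅔ miles per degree, the modern meridional circumference, and Firuzabadi's 3000 dhira per mile, which is a different definition from the 4,000-cubit mile mentioned earlier). It is an illustrative back-of-the-envelope check, not authoritative metrology.

MERIDIONAL_CIRCUMFERENCE_M = 40_007_863.0  # modern value cited in the text, in metres
MILES_PER_DEGREE = 66 + 2 / 3              # al-Ma'mun's arc measurement result
DHIRA_PER_MILE = 3000                      # per Firuzabadi, as quoted above
KAABA_HEIGHT_DHIRA = 27

miles_per_circumference = MILES_PER_DEGREE * 360             # 24,000 Arabic miles
mile_m = MERIDIONAL_CIRCUMFERENCE_M / miles_per_circumference
dhira_m = mile_m / DHIRA_PER_MILE
kaaba_m = dhira_m * KAABA_HEIGHT_DHIRA

print(f"Arabic mile   ~ {mile_m:.3f} m")   # ~1666.994 m
print(f"Dhira (cubit) ~ {dhira_m:.4f} m")  # ~0.5557 m
print(f"Kaaba height  ~ {kaaba_m:.1f} m")  # ~15.0 m, matching the tradition cited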
Arabic mile
[ "Mathematics" ]
518
[ "Obsolete units of measurement", "Quantity", "Units of measurement", "Units of length" ]
5,451,893
https://en.wikipedia.org/wiki/20th%20Armoured%20Brigade%20Combat%20Team%20%28United%20Kingdom%29
The 20th Armoured Brigade Combat Team, previously the 20th Armoured Infantry Brigade, is an armoured infantry brigade formation of the British Army, currently headquartered at Wing Barracks, Bulford, Wiltshire, as part of the 3rd (United Kingdom) Division. History Cold War On 15 September 1950, the 20th Armoured Brigade was reformed in the UK for a strategic reserve role. However, the brigade was moved to Münster, Germany in December 1951 to supplement the British contribution to NATO forces in Europe, where it again came under the command of 6th Armoured Division, this time as part of the British Army of the Rhine (BAOR). The 1957 Defence White Paper announced the end of National Service, resulting in a number of reductions and changes across the armed forces. Part of this restructuring saw the disbandment of the 6th Armoured Division in April 1958. The Brigade survived as the new 20th Armoured Brigade Group, initially under the command of the 4th Infantry Division, and moved to Hobart Barracks, Detmold. It assumed the insignia of the old Division – the "Iron Fist" symbol that it wears to this day. The pattern of life was determined by the BAOR training cycle and the demands of higher formation exercises as politicians and military commanders considered how best to face the threat of a Soviet invasion. Brigade troops frequently found themselves supporting multi-national NATO exercises, often working alongside the fledgling Bundeswehr (German Army). In October 1961, the Brigade participated in Exercise Spearpoint which was designed to demonstrate that the BAOR was able to conduct large-scale intensive operations under both conventional and nuclear conditions. In September 1959, The Royal Corps of Signals reorganised all of their independent squadrons into a single numbering system from 200 upwards. This meant that when the Brigade's Signal Squadron adopted the title "200" in 1962 it automatically became the 'Senior Signal Squadron' in the British Army by precedence. Two years later it amalgamated with the brigade's Headquarters Squadron and took over responsibility for the administration and defence of the HQ and together the two separate units are designated as "20th Armoured Brigade Headquarters and Signal Squadron (200)". On 22 June 1974, 20th Armoured Brigade and the German 21st Panzer Brigade, based in Augustdorf, held a partnership parade to emphasise the confidence and understanding that exists between the allied forces of the NATO countries. BAOR experimented with a major restructuring towards the end of the 1970s as it reorganised into four divisions, each with two task force headquarters. These task forces could command any grouping of units from within their division and were designated sequentially Alpha through Hotel. As a result, on 1 December 1977, 20th Armoured Brigade was temporarily renamed "Task Force Hotel" under the command of the 4th Armoured Division. However, Task Force Hotel reverted to its brigade designation on 1 January 1980 and its units were realigned under the Brigade Headquarters. Further unit rotations continued throughout this period with many famous regiments and battalions of the British Army converting to the armoured role to serve within the brigade. Typical were the Life Guards and the Blues and Royals, who served on a four-year rotational plan. 
As the Life Guards Regiment moved to BAOR, it became a Tank Regiment for the first time in its history, only to re-role as an infantry battalion in order to deploy on three separate operational tours of Northern Ireland.
Post-Cold War
Following the fall of the Berlin Wall in November 1989 and the anticipated peace dividend at the end of the Cold War, the British government announced a series of cuts in defence spending under the 1990 "Options for Change" programme. As a result of the restructuring, in December 1992, the Brigade merged with the 33rd Armoured Brigade and moved its headquarters to Barker Barracks, Paderborn, where it came under the command of the 1st (UK) Armoured Division. By 1994, the overall troop strength in Germany had been halved and BAOR was replaced by British Forces Germany (BFG). Headquarters 20th Armoured Brigade, with some elements of the Brigade, deployed to the former SFR Yugoslavia (FRY) in April 1995 to take command of Sector South West under the United Nations mandate. Based at Gornji Vakuf in central Bosnia and Herzegovina, the commander was responsible for a large multi-national UN force as well as for all forces in FRY. The end of the tour coincided with a declaration of peace and a shift in emphasis to a larger NATO force. In October 1996, the Brigade returned to FRY as part of Multi-National Division (South-West). It was initially based at Šipovo, moving to Banja Luka in December 1996, whilst overseeing the transition from IFOR to SFOR and Operation RESOLUTE to Operation LODESTAR. The Brigade returned to Paderborn in April 1997. In August 1999, the Brigade again deployed to Banja Luka on Operation PALATINE. It returned to Paderborn in December 1999, and moved to Antwerp Barracks, Sennelager on 20 August 2001.
Operation Telic (Iraq)
In October 2003, the Brigade first deployed to southern Iraq on Operation Telic 3, where it was based at Basra Palace. The Brigade's first two months of the deployment were dominated by low-level battles against fanatical Fedayeen and foreign fighters infiltrating across the border with Iran, who were actively supported by Iranian Al Quds forces; post-operational reports also mention former Ba'athist regime loyalists. Before their deployment, United Nations Security Council Resolution 1511 was passed, which set the basis for rebuilding Iraq and establishing security. The aim was to eventually transfer authority from the Coalition Provisional Authority to an Iraqi Transitional National Assembly. The middle two months of the Brigade's tour were dominated by security sector reform; achieving this aim meant building capacity in the Iraqi Security Forces (especially the paramilitary Iraqi Civil Defence Corps and police) and civilian Iraqi institutions. Security sector reform would remain an objective for the rest of the tour. The final two months of Operation Telic 3 were dominated by high-intensity operations against resurgent Shia militias, notably the Jaish al Mahdi (JAM).
20th Armoured Brigade was awarded the Freedom of the City of Paderborn by the town council on 28 May 2005. The right to exercise the freedom was presented "as a contribution for consolidation of the Anglo-German friendship, the joint solidarity in NATO and a further element for the building of the joint house Europe". The Brigade returned to southern Iraq again in April 2006 during Operation TELIC 8, and was situated in Basra, Al Amarah and Al Muthanna Provinces.
During the seven month summer tour, the troops contributed to the successful handover of security in two of the four Iraqi Provinces within the Multinational Division (South East) [MND(SE)]. The Iron Fist returned to Basra for a third time in 2008 for Operation TELIC 13. It became the last British brigade to serve in Iraq at the end of the UK's six-year combat mission in the country on 30 April 2009. Operation Herrick (Afghanistan) 20th Armoured Brigade took over command of Task Force Helmand in Afghanistan from 3 Commando Brigade Royal Marines on 9 October 2011, officially marking the start of Operation Herrick 15. Future Under the Army's new 2020 structure, in January 2015 the Brigade was retitled to 20th Armoured Infantry Brigade incorporating three armoured infantry battle groups. In 2016 the Brigade began its high readiness training in preparation for becoming NATO's lead for the Very High Readiness Joint Task Force Land [VJTF(L)] in 2017. The Brigade Headquarters moved to Wing Barracks, Bulford, in 2019. Under the Future Soldier programme, the brigade has been redesignated as the 20th Armoured Brigade Combat Team, and in the future will control a reconnaissance regiment equipped with the General Dynamics Ajax. The current armoured regiment (QRH) will be re-equipped with the Challenger 3 MBT and the armoured infantry battalions with the Warrior IFV re-equipped with the Boxer AFV. Structure The brigade is based at Bulford Camp. It will form as part of the Reaction Force. The current organisation of the brigade under the Defence in a Competitive Age is: 20th Armoured Infantry Brigade Headquarters, at Wing Barracks, Bulford Camp Royal Dragoon Guards, in Warminster (Armoured Cavalry) Queen's Royal Hussars, at Assaye Barracks, Tidworth Camp (Armoured) 1st Battalion, Royal Regiment of Fusiliers, at Mooltan Barrack, Tidworth Camp (Armoured Infantry) 5th Battalion, Royal Regiment of Fusiliers, in Newcastle upon Tyne (Army Reserve – Armoured Infantry, paired with 1 R Fusiliers) 5th Battalion, The Rifles, at Ward Barracks, Bulford Garrison (Armoured Infantry) 7th Battalion, The Rifles, in Reading (Army Reserve – Armoured Infantry, paired with 5 Rifles) 3 Armoured Close Support Battalion, Royal Electrical and Mechanical Engineers, in Tidworth (Armoured Close Support) Alliances Germany - Panzerbrigade 21 (21st Panzer Brigade) Brigade commanders Recent commanders have included: 1954–1956 Brigadier John Hackett 1958–1961 Brigadier James d'Avigdor-Goldsmid 1963–1965 Brigadier Richard Ward 1965–1968 Brigadier Patrick Howard-Dobson 1969–1970 Brigadier John Stanier 1972–1973 Brigadier Richard Lawson 1973–1975 Brigadier Maurice Johnston 1978–1979 Brigadier Bernard Gordon Lennox 1979–1981 Brigadier John Stibbon 1985-1987 Brigadier Michael Regan 1987–1989 Brigadier Michael Walker 1992–1994 Brigadier Arthur Denaro 1994–1996 Brigadier Andrew Pringle 1996–1997 Brigadier David Leakey 1997–1999 Brigadier Nick Parker 1999–2001 Brigadier Jeffrey Cook 2001–2004 Brigadier David Rutherford-Jones 2004–2005 Brigadier Nick Carter 2005–2007 Brigadier James Everard 2007–2009 Brigadier Tom Beckett 2009–2012 Brigadier Patrick Sanders 2012–2014 Brigadier James Swift 2014–2016 Brigadier Ian Mortimer 2016–2018 Brigadier Michael Elviss 2018–2020 Brigadier Dominic Biddick 2020–2021 Brigadier Patrick Ginn 2021–Present Brigadier Carl Boswell See also Formation reconnaissance regiment Very High Readiness Joint Task Force Land 3rd (UK) Division British Forces Germany Notes References External links Official British Army website of 
20th Armoured Infantry Brigade (The Iron Fist) BAOR locations British Army Locations from 1945 20 Military units and formations established in 1950 Military units and formations disestablished in 1977 Military units and formations established in 1980 Future Soldier
20th Armoured Brigade Combat Team (United Kingdom)
[ "Engineering" ]
2,080
[ "Military projects", "Future Soldier" ]