32,148,163
https://en.wikipedia.org/wiki/ARID%20domain
In molecular biology, the ARID domain (AT-rich interaction domain; also known as the BRIGHT (B-cell Regulator of Ig Heavy chain Transcription) domain) is a protein domain that binds to DNA. ARID domain-containing proteins are found in fungi, plants, and invertebrate and vertebrate metazoans. ARID-encoding genes are involved in a variety of biological processes including embryonic development, cell lineage gene regulation and cell cycle control. Although the specific roles of this domain and of ARID-containing proteins in transcriptional regulation are yet to be elucidated, they include both positive and negative transcriptional regulation and a likely involvement in the modification of chromatin structure. The basic structure of the ARID domain appears to be a series of six alpha-helices separated by beta-strands, loops, or turns, but the structured region may extend to an additional helix at either or both ends of the basic six. Based on primary sequence homology, ARID proteins can be partitioned into three structural classes: minimal ARID proteins, which consist of a core domain formed by six alpha-helices; ARID proteins that supplement the core domain with an N-terminal alpha-helix; and extended-ARID proteins, which contain the core domain and additional alpha-helices at their N- and C-termini. The human SWI/SNF complex protein ARID1A is an ARID family member with non-sequence-specific DNA binding activity. The ARID consensus and other structural features are common to both ARID1A and yeast SWI1, suggesting that ARID1A is a human counterpart of SWI1. The approximately 100-residue ARID sequence is present in a series of proteins strongly implicated in the regulation of cell growth, development, and tissue-specific gene expression. Although about a dozen ARID proteins can be identified from database searches, to date only Bright (a regulator of B-cell-specific gene expression), Dead ringer (a Drosophila melanogaster gene product required for normal development), and MRF-2 (which represses expression from the cytomegalovirus enhancer) have been analyzed directly with regard to their DNA binding properties. Each binds preferentially to AT-rich sites. In contrast, ARID1A shows no sequence preference in its DNA binding activity, demonstrating that AT-rich binding is not an intrinsic property of ARID domains and that ARID family proteins may be involved in a wider range of DNA interactions. References Protein domains
ARID domain
Biology
505
3,722,964
https://en.wikipedia.org/wiki/Sadistic%20personality%20disorder
Sadistic personality disorder is an obsolete term for a proposed personality disorder defined by a pervasive pattern of sadistic and cruel behavior. People who fitted this diagnosis were thought to have a desire to control others and to have accomplished this through the use of physical or emotional violence. The diagnosis proposal appeared in the appendix of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R); however, it was never put to use in clinical settings, and later versions of the DSM (DSM-IV, DSM-IV-TR, and DSM-5) removed it. Among other reasons, psychiatrists believed it would be used to legally excuse sadistic behavior. Comorbidity with other personality disorders Sadistic personality disorder was thought to have been frequently comorbid with other personality disorders, primarily other types of psychopathological disorders. In contrast, sadism has also been found in patients who do not display other forms of psychopathic disorder. Conduct disorder in childhood and alcohol use disorder were also thought to be frequently comorbid with sadistic personality disorder. Researchers had difficulty distinguishing sadistic personality disorder from other personality disorders due to the high levels of comorbidity, which was another reason it was eventually removed. Diagnostic criteria According to the DSM-III-R, the diagnostic criteria were defined by a pervasive pattern of sadistic and cruel behavior that began in early adulthood, indicated by at least four of the following. Has used physical cruelty or violence for the purpose of establishing dominance in a relationship (not merely to achieve some noninterpersonal goal, such as striking someone in order to rob him/her). Humiliates or demeans people in the presence of others. Has treated or disciplined someone under his/her control unusually harshly. Is amused by, or takes pleasure in, the psychological or physical suffering of others (including animals). Has lied for the purpose of harming or inflicting pain on others (not merely to achieve some other goal). Gets other people to do what he/she wants by frightening them (through intimidation or even terror). Restricts the autonomy of people with whom he or she has a close relationship, e.g., will not let spouse leave the house unaccompanied or permit teenage daughter to attend social functions. Is fascinated by violence, weapons, injury, or torture. In addition, the behavior could not be better explained by sexual sadism disorder, and it had to have been directed towards more than one person. Differential diagnosis Millon's subtypes Theodore Millon claimed there were four subtypes of sadism, which he termed enforcing sadism, explosive sadism, spineless sadism, and tyrannical sadism. History Sadistic personality disorder was developed as forensic psychiatrists had noticed many patients with sadistic behavior. It was introduced to the DSM in 1987 and placed in the DSM-III-R as a way to facilitate further systematic clinical study and research. It was removed from the DSM for numerous reasons, including the fact that it could be used to legally excuse sadistic acts. Sadistic personality disorder also shared a high rate of comorbidity with other disorders, implying that it was not a distinct disorder on its own. Millon writes that "Physically abusive, sadistic personalities are most often male, and it was felt that any such diagnosis might have the paradoxical effect of legally excusing cruel behavior."
Researchers were also concerned about the stigmatizing nature of the disorder, and that it put patients at higher risk of abuse from prison guards. Theorists like Theodore Millon wanted to generate further study on SPD, and so proposed it to the DSM-IV Personality Disorder Work Group, which rejected it. Sub-clinical sadism in personality psychology There is renewed interest in studying sadism as a personality trait. Sadism joins subclinical psychopathy, narcissism, and Machiavellianism to form the so-called "dark tetrad" of personality. See also Antisocial personality disorder, a personality disorder characterized by a long-term pattern of disregard for, or violation of, the rights of others Bullying Evil Genes Malignant narcissism Psychopathy Sadism and masochism Schadenfreude Self-defeating personality disorder (masochistic personality disorder) Sexual sadism disorder Zoosadism Sociopathy References Blaney, P. H., & Millon, T. (2009). Oxford Textbook of Psychopathology. New York: Oxford University Press. Davis, R., & Millon, T. (2000). Personality Disorders in Modern Life. Canada: John Wiley & Sons, Inc. Livesley, J. (1995). The DSM-IV Personality Disorders. New York, NY: Guilford Press. Millon, T. (1996). Disorders of Personality: DSM-IV and Beyond. New York: Wiley-Interscience. Pacana, G. (2011, March 2). Sadists and sadistic personality disorder. External links "Provisional Psychological Profile of Washington, D.C.-Area Sniper" provides some theoretical descriptions of the sadistic personality; the author concluded that these traits, in addition to the shooter being a "white man", described the perpetrator of the D.C. sniper attacks. Criminology Forensic psychology Violence Obsolete terms for mental disorders
Sadistic personality disorder
Biology
1,101
31,907,173
https://en.wikipedia.org/wiki/Enterotoxin%20type%20B
In the field of molecular biology, enterotoxin type B, also known as staphylococcal enterotoxin B (SEB), is an enterotoxin produced by the gram-positive bacterium Staphylococcus aureus. It is a common cause of food poisoning, with severe diarrhea, nausea and intestinal cramping often starting within a few hours of ingestion. Being quite stable, the toxin may remain active even after the contaminating bacteria are killed. It can withstand boiling at 100 °C for a few minutes. Gastroenteritis occurs because SEB is a superantigen, causing the immune system to release a large amount of cytokines that lead to significant inflammation. Additionally, this protein is one of the causative agents of toxic shock syndrome. Function The function of this protein is to facilitate the infection of the host organism. It is a virulence factor designed to induce pathogenesis. One of the major virulence exotoxins is the toxic shock syndrome toxin (TSST), which is secreted by the organism upon successful invasion. It causes a major inflammatory response in the host via superantigenic properties, and is the causative agent of toxic shock syndrome, a multisystem illness with symptoms such as high fever, hypotension, dizziness, rash and peeling skin. It functions as a superantigen by activating a significant fraction of T-cells (up to 20%) through cross-linking MHC class II molecules with T-cell receptors. Structure All of these toxins share a similar two-domain fold (N- and C-terminal domains) with a long alpha-helix in the middle of the molecule, a characteristic beta-barrel known as the "oligosaccharide/oligonucleotide fold" at the N-terminal domain and a beta-grasp motif at the C-terminal domain. Each superantigen possesses slightly different binding modes when it interacts with MHC class II molecules or the T-cell receptor. N-terminal domain The N-terminal domain is also referred to as the OB-fold, or oligonucleotide-binding fold. It is a five-stranded beta-barrel, and it contains a low-affinity major histocompatibility complex class II (MHC II) binding site through which the toxin triggers an inflammatory response. C-terminal domain The beta-grasp domain has some structural similarities to the beta-grasp motif present in immunoglobulin-binding domains, ubiquitin, 2Fe-2S ferredoxin and translation initiation factor 3, as identified by the SCOP database. References Protein families Virulence factors Protein domains Inflammations Bacterial toxins Biological toxin weapons Superantigens Protein superfamilies
Enterotoxin type B
Chemistry,Biology
609
30,880,775
https://en.wikipedia.org/wiki/Anthony%20Stone
Anthony J. Stone is a British theoretical chemist and emeritus professor in the Department of Chemistry at the University of Cambridge. Education Stone studied Natural Sciences at Emmanuel College, Cambridge and obtained a Ph.D. in theoretical chemistry under H. Christopher Longuet-Higgins. Career and research In 1964 he took up a position in the Department of Chemistry at the University of Cambridge, where he remained until his retirement in 2006. He is known for the Stone–Wales defect of fullerene isomers. References British chemists Theoretical chemists Members of the University of Cambridge Department of Chemistry Fellows of Emmanuel College, Cambridge Living people Year of birth missing (living people) Alumni of Emmanuel College, Cambridge
Anthony Stone
Chemistry
137
2,993,470
https://en.wikipedia.org/wiki/Thyrse
A thyrse is a type of inflorescence in which the main axis grows indeterminately, and the subaxes (branches) have determinate growth. References Plant morphology
Thyrse
Biology
40
37,430,358
https://en.wikipedia.org/wiki/Light%20sheet%20fluorescence%20microscopy
Light sheet fluorescence microscopy (LSFM) is a fluorescence microscopy technique with an intermediate-to-high optical resolution, but good optical sectioning capabilities and high speed. In contrast to epifluorescence microscopy, only a thin slice (usually a few hundred nanometers to a few micrometers) of the sample is illuminated, perpendicularly to the direction of observation. For illumination, a laser light-sheet is used, i.e. a laser beam which is focused only in one direction (e.g. using a cylindrical lens). A second method uses a circular beam scanned in one direction to create the lightsheet. Because only the section actually being observed is illuminated, this method reduces the photodamage and stress induced on a living sample. The good optical sectioning capability also reduces the background signal and thus creates images with higher contrast, comparable to confocal microscopy. Because light sheet fluorescence microscopy scans samples by using a plane of light instead of a point (as in confocal microscopy), it can acquire images at speeds 100 to 1,000 times faster than those offered by point-scanning methods. This method is used in cell biology and for microscopy of intact, often chemically cleared, organs, embryos, and organisms. Starting in 1994, light sheet fluorescence microscopy was developed as orthogonal plane fluorescence optical sectioning microscopy or tomography (OPFOS) mainly for large samples, and later as selective/single plane illumination microscopy (SPIM), also with sub-cellular resolution. This introduced an illumination scheme into fluorescence microscopy which had already been used successfully for dark field microscopy under the name ultramicroscopy. Setup Basic setup In this type of microscopy, the illumination is done perpendicularly to the direction of observation (see schematic image at the top of the article). The expanded beam of a laser is focused in only one direction by a cylindrical lens, or by a combination of a cylindrical lens and a microscope objective, since objectives are available in better optical quality and with higher numerical aperture than cylindrical lenses. This way a thin sheet of light, or lightsheet, is created in the focal region that can be used to excite fluorescence only in a thin slice (usually a few micrometers thin) of the sample. The fluorescence light emitted from the lightsheet is then collected perpendicularly with a standard microscope objective and projected onto an imaging sensor (usually a CCD, electron-multiplying CCD or CMOS camera). To leave enough space for the excitation optics/lightsheet, a detection objective with a long working distance is used. In most light sheet fluorescence microscopes the detection objective and sometimes also the excitation objective are fully immersed in the sample buffer, so usually the sample and excitation/detection optics are embedded in a buffer-filled sample chamber, which can also be used to control the environmental conditions (temperature, carbon dioxide level, etc.) during the measurement. The sample mounting in light sheet fluorescence microscopy is described below in more detail. As both the excitation lightsheet and the focal plane of the detection optics have to coincide to form an image, focusing on different parts of the sample cannot be done by translating the detection objective; instead, the whole sample is usually translated and rotated.
Extensions of the basic idea In recent years, several extensions to this scheme have been developed: The use of two counter-propagating lightsheets helps to reduce typical selective plane illumination microscopy artifacts, like shadowing (see first z-stack above). In addition to counter-propagating lightsheets, a setup with detection from two opposing sides was proposed in 2012. This allows measurement of z- and rotation-stacks for a full 3D reconstruction of the sample more rapidly. The lightsheet can also be created by scanning a normal laser focus up and down. This also allows the use of self-reconstructing beams (such as Bessel beams or Airy beams) for the illumination, which improve the penetration of the lightsheet into thick samples, as the negative effect of scattering on the lightsheet is reduced. These self-reconstructing beams can be modified to counteract intensity losses using attenuation-compensation techniques, further increasing the signal collected from within thick samples. In oblique plane microscopy (OPM) the detection objective is used to also create the lightsheet: the lightsheet is now emitted from this objective at an angle of about 60°. Additional optics are used to tilt the focal plane used for detection by the same angle. Light sheet fluorescence microscopy has also been combined with two-photon (2P) excitation, which improves the penetration into thick and scattering samples. Use of 2P excitation at near-infrared wavelengths has been used to replace 1P excitation at blue-visible wavelengths in brain imaging experiments involving response to visual stimuli. Selective plane illumination microscopy can also be combined with techniques such as fluorescence correlation spectroscopy to allow spatially resolved mobility measurements of fluorescing particles (e.g. fluorescent beads, quantum dots or fluorescently labeled proteins) inside living biological samples. A combination of a selective plane illumination microscope with a gated image-intensifier camera has also been reported, which allowed measuring a map of fluorescence lifetimes (fluorescence lifetime imaging, FLIM). Light sheet fluorescence microscopy has been combined with super-resolution microscopy techniques to improve its resolution beyond the Abbe limit. A combination of stimulated emission depletion microscopy (STED) and selective plane illumination microscopy has also been published, which leads to a reduced lightsheet thickness due to the stimulated emission depletion effect. See also the section on the power of resolution of light sheet fluorescence microscopy below. Light sheet fluorescence microscopy was modified to be compatible with all objectives, even coverslip-based, oil-immersion objectives with high numerical aperture, to increase native spatial resolution and fluorescence detection efficiency. This technique involves tilting the light sheet relative to the detection objective at a precise angle to allow the light sheet to form on the surface of glass coverslips. Light sheet fluorescence microscopy was combined with adaptive optics techniques in 2012 to improve imaging depth in thick and inhomogeneous samples at a depth of 350 µm. A Shack–Hartmann wavefront sensor was positioned in the detection path, and guide stars were used in a closed feedback loop. In his thesis, the author discusses the advantage of having adaptive optics in both the illumination and detection paths of the light sheet fluorescence microscope to correct aberrations induced by the sample.
Sample mounting The separation of the illumination and detection beampaths in light sheet fluorescence microscopy (except in oblique plane microscopy) creates a need for specialized sample mounting methods. To date most light sheet fluorescence microscopes are built in such a way that the illumination and detection beampaths lie in a horizontal plane (see illustrations above); thus the sample usually hangs from the top into the sample chamber or rests on a vertical support inside the sample chamber. Several methods have been developed to mount all sorts of samples: Fixed (and potentially also cleared) samples can be glued to a simple support or holder and can stay in their fixing solution during imaging. Larger living organisms are usually sedated and mounted in a soft gel cylinder that is extruded from a (glass or plastic) capillary hanging from above into the sample chamber. Adherent cells can be grown on small glass plates that hang in the sample chamber. Plants can be grown in clear gels containing a growth medium. The gels are cut away at the position of imaging, so they do not reduce the lightsheet and image quality by scattering and absorption. Liquid samples (e.g. for fluorescence correlation spectroscopy) can be mounted in small bags made of thin plastic foil matching the refractive index of the surrounding immersion medium in the sample chamber. Some light sheet fluorescence microscopes have been developed in which the sample is mounted as in standard microscopy (e.g. cells grow horizontally on the bottom of a petri dish) and the excitation and detection optics are constructed in an upright plane from above. This also allows combining a light sheet fluorescence microscope with a standard inverted microscope and avoids the requirement for specialized sample mounting procedures. Image properties Typical imaging modes Most light sheet fluorescence microscopes are used to produce 3D images of the sample by moving the sample through the image plane. If the sample is larger than the field of view of the image sensor, the sample also has to be shifted laterally. An alternative approach is to move the image plane through the sample to create the image stack. Long experiments can be carried out, for example with stacks recorded every 10 s–10 min over a timespan of days. This allows the study of changes over time in 3D, or so-called 4D microscopy. After the image acquisition the different image stacks are registered to form one single 3D dataset. Multiple views of the sample can be collected, either by interchanging the roles of the objectives or by rotating the sample. Having multiple views can yield more information than a single stack; for example, occlusion of some parts of the sample may be overcome. Multiple views also improve 3D image resolution by overcoming the poor axial resolution described below. Some studies also use a selective plane illumination microscope to image only one slice of the sample, but at much higher temporal resolution. This allows, for example, real-time observation of the beating heart of a zebrafish embryo. Together with fast translation stages for the sample, high-speed 3D particle tracking has been implemented. Power of resolution The lateral resolution of a selective plane illumination microscope is comparable to that of a standard (epi)fluorescence microscope, as it is determined fully by the detection objective and the wavelength of the detected light (see Abbe limit).
For detection in the green spectral region around 525 nm, for example, a resolution of 250–500 nm can be reached (see the numeric sketch at the end of this entry). The axial resolution is worse than the lateral (by about a factor of 4), but it can be improved by using a thinner lightsheet, in which case nearly isotropic resolution is possible. Thinner light sheets are either thin only in a small region (for Gaussian beams) or else specialized beam profiles such as Bessel beams must be used (besides added complexity, such schemes add side lobes which can be detrimental). Alternatively, isotropic resolution can be achieved by computationally combining 3D image stacks taken from the same sample under different angles. Then the depth-resolution information lacking in one stack is supplied from another stack; for example, with two orthogonal stacks the (poor-resolution) axial direction in one stack is a (high-resolution) lateral direction in the other stack. The lateral resolution of light sheet fluorescence microscopy can be improved beyond the Abbe limit by using super-resolution microscopy techniques, e.g. using the fact that single fluorophores can be located with much higher spatial precision than the nominal resolution of the optical system used (see stochastic localization microscopy techniques). In structured illumination light sheet microscopy, structured illumination techniques have been applied to further improve the optical sectioning capacity of light sheet fluorescence microscopy. Stripe artifacts As the illumination typically penetrates the sample from one side, obstacles lying in the way of the lightsheet can disturb its quality by scattering and/or absorbing the light. This typically leads to dark and bright stripes in the images. If parts of the samples have a significantly higher refractive index (e.g. lipid vesicles in cells), they can also lead to a focusing effect resulting in bright stripes behind these structures. To overcome this artifact, the lightsheet can, for example, be "pivoted". That means that the lightsheet's direction of incidence is changed rapidly (at a rate of ~1 kHz) by a few degrees (~10°), so light also hits the regions behind the obstacles. Illumination can also be performed with two (pivoted) lightsheets (see above) to further reduce these artifacts. Alternatively, the Variational Stationary Noise Remover (VSNR) algorithm has been developed and is available as a free Fiji plugin. History At the beginning of the 20th century, R. A. Zsigmondy introduced the ultramicroscope as a new illumination scheme into dark-field microscopy. Here sunlight or a white lamp is used to illuminate a precision slit. The slit is then imaged by a condenser lens into the sample to form a lightsheet. Scattering (sub-diffractive) particles can be observed perpendicularly with a microscope. This setup allowed the observation of particles with sizes smaller than the microscope's resolution and led to a Nobel Prize for Zsigmondy in 1925. The first application of this illumination scheme to fluorescence microscopy was published in 1993 by Voie et al. under the name orthogonal-plane fluorescence optical sectioning (OPFOS) for imaging of the internal structure of the cochlea. The resolution at that time was limited to 10 µm laterally and 26 µm longitudinally, but at a sample size in the millimeter range. The orthogonal-plane fluorescence optical sectioning microscope used a simple cylindrical lens for illumination. Further development and improvement of the selective plane illumination microscope started in 2004. After this publication by Huisken et al.
the technique found wide application and is still adapted to new measurement situations today (see above). A first ultramicroscope with fluorescence excitation and limited resolution has been available commercially since 2010, and a first selective plane illumination microscope since 2012. A good overview of the development of selective plane illumination microscopy is given in ref. In 2012, open-source projects also began to appear that freely publish complete construction plans for light sheet fluorescence microscopes as well as the required software suites. Applications Selective plane illumination microscopy/light sheet fluorescence microscopy is often used in developmental biology, where it enables long-term (several days) observations of embryonic development (even with full lineage tree reconstruction). Selective plane illumination microscopy can also be combined with techniques like fluorescence correlation spectroscopy to allow spatially resolved mobility measurements of fluorescing particles (e.g. fluorescent beads, quantum dots or fluorescent proteins) inside living biological samples. Strongly scattering biological tissue such as brain or kidney has to be chemically fixed and cleared before it can be imaged in a selective plane illumination microscope. Special tissue clearing techniques have been developed for this purpose, e.g. 3DISCO, CUBIC and CLARITY. Depending on the index of refraction of the cleared sample, matching immersion fluids and special long-distance objectives must be used during imaging. References Further reading Review: Review of different light sheet fluorescence microscopy modalities and results in developmental biology: Review of light sheet fluorescence microscopy for imaging anatomic structures: Editorial: External links : The linked video shows the development of a fruit fly embryo, recorded over 20 hours. Two projections of the full 3D dataset are shown. The mesoSPIM Initiative. Open-source light-sheet microscopes for imaging cleared tissue. A practical guide to adaptive light-sheet microscopy Fluorescence techniques Cell imaging Laboratory equipment Optical microscopy techniques Articles containing video clips
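As a rough numeric illustration of the resolution figures quoted in the "Power of resolution" passage above (a sketch, not from the article: the Abbe formula d = λ/(2·NA) is standard, but the numerical aperture values are assumptions for the example):

```python
# Abbe lateral resolution limit: d = wavelength / (2 * NA).
# 525 nm matches the green-emission example in the text;
# the detection-objective NA values are assumed.
WAVELENGTH_NM = 525.0

for na in (0.5, 0.8, 1.0):
    d = WAVELENGTH_NM / (2.0 * na)
    print(f"NA = {na:.1f}: lateral resolution ~ {d:.0f} nm")
# NA 1.0 gives ~263 nm and NA 0.5 gives ~525 nm, roughly
# bracketing the 250-500 nm range quoted above.
```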
Light sheet fluorescence microscopy
Chemistry,Biology
3,109
2,063,902
https://en.wikipedia.org/wiki/Self-defeating%20prophecy
A self-defeating prophecy (self-destroying or self-denying in some sources) is the complementary opposite of a self-fulfilling prophecy: a prediction that prevents what it predicts from happening. This is also known as the prophet's dilemma. A self-defeating prophecy can be the result of rebellion against the prediction. If the audience of a prediction has an interest in seeing it falsified, and its fulfillment depends on their actions or inaction, their actions upon hearing it will make the prediction less plausible. If a prediction is made with this outcome specifically in mind, it is commonly referred to as reverse psychology or a warning. Also, when working to make a premonition come true, one can inadvertently change the circumstances so much that the prophecy cannot come true. It is important to distinguish a self-defeating prophecy from a self-fulfilling prophecy that predicts a negative outcome. If a prophecy of a negative outcome is made, and that negative outcome is achieved as a result of positive feedback, then it is a self-fulfilling prophecy. For example, if a group of people decide they will not be able to achieve a goal and stop working towards the goal as a result, their prophecy was self-fulfilling. Likewise, if a prediction of a negative outcome is made, but the outcome is positive because of negative feedback resulting from the rebellion, then that is a self-defeating prophecy. Examples If an economic crisis is predicted, then consumers, manufacturers and authorities will respond to avoid economic loss, breaking the chain of events that would lead to the crisis. The biblical prophet Jonah famously ran away and refused to deliver God's prophecy of Nineveh's destruction, lest the inhabitants repent and cause God to forgive them and not destroy the city. Indeed, when Jonah eventually did deliver the prophecy, the people mended their ways and caused the prophesied event not to happen. The Year 2000 problem has been cited as a self-defeating prophecy, in that fear of massive technology failures caused by computers' internal clocks "rolling over" encouraged the very changes needed to avoid those failures. Pre-announcing products in a way that discourages current sales (the Osborne effect) is also an example of a self-defeating prophecy. Predictions of environmental issues are sometimes corrected via legislation or behavior change and thus never happen. Epidemics with grim projections also encourage changes that can prevent those projections from coming true, which in turn leads people to question the necessity of those changes because the projections did not come true. See also Catch-22 (logic), a logical trap in which the participant is forced into the same outcome regardless of choice Preparedness paradox, a perception that successful planning means the planned-for danger was smaller than it was Self-refuting idea, a logical statement that negates itself Grandfather paradox, a similar paradox involving time travel References External links Attitude attribution Causality Cognitive biases
Self-defeating prophecy
Physics
613
40,066,218
https://en.wikipedia.org/wiki/Procyclin
Procyclins, also known as procyclic acidic repetitive proteins or PARP, are proteins developed in the surface coating of Trypanosoma brucei parasites while in their tsetse fly vector. The cell surface of the bloodstream form features a dense coat of variable surface glycoproteins (VSGs), which is replaced by an equally dense coat of procyclins when the parasite differentiates into the procyclic form in the tsetse fly midgut. There are six or seven procyclin genes that encode unusual proteins with extensive tandem repeat units of glutamic acid (E) and proline (P), referred to as EP repeats (EP1, EP1-2, EP2, EP2-1, EP3, EP3-2, EP3-4), and two genes that encode proteins with internal pentapeptide GPEET repeats (GPEET2). EP1 is a 141-amino-acid (AA) protein and EP2 is a 129 AA protein. Both proteins have their coding genes situated on chromosome 10. GPEET2 is a 114 AA protein and EP3-2 is a 123 AA protein, with genes situated on chromosome 6. See also Coat protein (disambiguation) References External links Kinetoplastid proteins Parasitic excavates
Procyclin
Chemistry
273
10,052,417
https://en.wikipedia.org/wiki/Polymer%20concrete
Polymer concrete is a type of concrete that uses a polymer to replace lime-type cements as a binder. One specific type is epoxy granite, where the polymer used is exclusively epoxy. In some cases the polymer is used in addition to portland cement to form polymer cement concrete (PCC) or polymer modified concrete (PMC). Polymers in concrete have been overseen by Committee 548 of the American Concrete Institute since 1971. Composition In polymer concrete, thermoplastic polymers are sometimes used, but thermosetting resins are more typical as the principal polymer component due to their high thermal stability and resistance to a wide variety of chemicals. Polymer concrete is also composed of aggregates that include silica, quartz, granite, limestone, or other material. The aggregate should be of good quality, free of dust and other debris, and dry. Failure to fulfill these criteria can reduce the bond strength between the polymer binder and the aggregate. Uses Polymer concrete may be used for new construction or repairing of old concrete. The adhesive properties of polymer concrete allow repair of both polymer and conventional cement-based concretes. The corrosion resistance and low permeability of polymer concrete allow it to be used in swimming pools, sewer structure applications, drainage channels, electrolytic cells for base metal recovery, and other structures that contain liquids or corrosive chemicals. It is especially suited to the construction and rehabilitation of manholes due to its ability to withstand the toxic and corrosive sewer gases and bacteria commonly found in sewer systems. Unlike traditional concrete structures, polymer concrete requires no coating or welding of PVC-protected seams. It can also be used as a bonded wearing course for asphalt pavement, for higher durability and higher strength upon a concrete substrate, and in skate parks, as it is a very smooth surface. Polymer concrete has historically not been widely adopted due to the high costs and difficulty associated with traditional manufacturing techniques. However, recent progress has led to significant reductions in cost, meaning that the use of polymer concrete is gradually becoming more widespread. Polymer concrete in the form of epoxy granite is becoming more widely used in the construction of machine tool bases (such as mills and metal lathes) in place of cast iron due to its superior mechanical properties and high chemical resistance. Properties The exact properties depend on the mixture, polymer, aggregate used, etc. Generally speaking, the following hold for the mixtures used:
The binder is more expensive than cement.
Significantly greater tensile strength than unreinforced Portland concrete (since polymer plastic is 'stickier' than cement and has reasonable tensile strength).
Similar or greater compressive strength to Portland concrete.
Faster curing.
Good adhesion to most surfaces, including to reinforcements.
Good long-term durability with respect to freeze and thaw cycles.
Low permeability to water and aggressive solutions.
Improved chemical resistance.
Good resistance against corrosion.
Lighter weight (slightly less dense than traditional concrete, depending on the resin content of the mix).
May be vibrated to fill voids in forms.
Allows use of regular form-release agents (in some applications).
The product is hard to machine with conventional tools such as drills and presses due to its density, so obtaining a pre-modified product from the manufacturer is recommended.
Small boxes are more costly than their precast concrete counterparts; however, the addition of stacking or steel covers to precast concrete quickly bridges the gap.
Specifications Following are some specification examples of the features of polymer concrete: References Further reading External links Concrete Pavements
Polymer concrete
Engineering
698
9,592,241
https://en.wikipedia.org/wiki/Bamboo%20%28unit%29
A bamboo is an obsolete unit of length in India and Myanmar. India In India, the unit was fixed by the reforms of Akbar the Great (1556–1605) at approximately 12.8 m (42 ft). After Metrication in India in the mid-20th century, the unit became obsolete. Myanmar In Myanmar (formerly Burma) it was approximately 3.912 meters (154 in, or 12.86 ft). It was also known as the dha. One thousand bamboos = one dain (A dain is sometimes referred to as a "Burmese league") One dain = 7 saundaungs See also List of customary units of measurement in South Asia References Units of length Customary units in India Obsolete units of measurement
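As a quick arithmetic check of the Myanmar figures quoted above (a sketch using only the values in this entry; the variable names are invented):

```python
# Convert the Burmese bamboo (dha) figures quoted above.
BAMBOO_M = 3.912          # one bamboo (dha) in metres, per the text
BAMBOOS_PER_DAIN = 1000   # one thousand bamboos = one dain

dain_m = BAMBOO_M * BAMBOOS_PER_DAIN
print(f"1 dain = {dain_m:,.0f} m ({dain_m / 1000:.3f} km)")
# -> 1 dain = 3,912 m (3.912 km)
```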
Bamboo (unit)
Mathematics
154
2,093,655
https://en.wikipedia.org/wiki/Sparklies
Sparklies is a form of interference on analogue satellite television transmissions. Sparklies are black or white 'hard' interference dots (as opposed to the 'soft' interference patterns of terrestrial television), caused either by too weak or too strong a signal. When within the satellite's rated reception footprint, sparklies are most likely to be caused by a misaligned dish, or LNBs which are too high- or too low-gain for the dish and receiver. The term "sparklies" is used by British Sky Broadcasting (BSkyB) and a number of hardware makers including Amstrad and Pace. Sparklies do not occur on digital satellite systems; similar problems with digital signals cause MPEG artifacts. See also Salt and pepper noise References Satellite broadcasting Television terminology
Sparklies
Engineering
158
34,192,383
https://en.wikipedia.org/wiki/Kippo
Kippo is a medium-interaction SSH honeypot written in Python. Kippo is used to log brute-force attacks and the entire shell interaction performed by an attacker. It is inspired by Kojoney. The source code is released under the New BSD License. Kippo is no longer under active development, and its maintainers recommend using the forked project Cowrie. Python dependencies Python 2.5+ (but less than 3.0), Twisted, Twisted Conch, python-dev, pyasn1, Python-OpenSSL, PyCrypto, MySQL-Python References External links Kippo at GitHub Kippo (old homepage) at Google Code Cowrie - Active Kippo Fork at GitHub Kojoney - A honeypot for the SSH Service Python (programming language) software
Kippo
Technology
170
30,295,850
https://en.wikipedia.org/wiki/L.%20R.%20G.%20Treloar
Professor Leslie Ronald George Treloar, OBE (30 July 1906 – 18 March 1985) was a leading figure in the science of rubber and elasticity, and writer of a number of influential texts. Leslie Treloar graduated in Physics from University College, Reading, in 1927 and subsequently joined GEC. He gained his PhD from the University of London (external degree) in 1938. After working for GEC he moved to the British Rubber Producers Research Association. He worked briefly at the Telecommunications Research Establishment during World War II. He moved to the British Rayon Research Association when it was set up in 1948. He was a colleague of John Wilson. He was awarded the Colwyn Medal "for outstanding services to the rubber industry of a scientific, technical or engineering character" in 1961, and the Swinburne Award for his "outstanding contribution to the advancement and knowledge of any field related to the science, engineering or technology of plastics" in 1970. He also was awarded the A. A. Griffith Medal and Prize in 1972. He became Professor of Polymer & Fibre Science in the University of Manchester Institute of Science and Technology in 1966 and retired in 1974. Bibliography Treloar published many texts and papers, of which the following is a selection: Books Papers References 1906 births 1985 deaths Alumni of the University of London Polymer scientists and engineers Officers of the Order of the British Empire Academics of the University of Manchester Institute of Science and Technology
L. R. G. Treloar
Chemistry,Materials_science
287
69,965,808
https://en.wikipedia.org/wiki/Anixia%20interrupta
Anixia interrupta is a species of fungus belonging to the genus Anixia. It was documented in 1832 by German-American mycologist Lewis David de Schweinitz. References Agaricomycetes Fungi described in 1832 Fungus species Taxa named by Lewis David de Schweinitz
Anixia interrupta
Biology
62
18,193,302
https://en.wikipedia.org/wiki/Photodissociation%20region
In astrophysics, photodissociation regions (or photon-dominated regions, PDRs) are predominantly neutral regions of the interstellar medium in which far-ultraviolet photons strongly influence the gas chemistry and act as the most important source of heat. They occur in any region of interstellar gas that is dense and cold enough to remain neutral, but that has too low a column density to prevent the penetration of far-UV photons from distant, massive stars. A typical and well-studied example is the gas at the boundary of a giant molecular cloud. PDRs are also associated with H II regions, reflection nebulae, active galactic nuclei, and planetary nebulae. All the atomic gas and most of the molecular gas in the galaxy is found in PDRs. The closest PDRs to the Sun are IC 59 and IC 63, near the bright Be star Gamma Cassiopeiae. History The study of photodissociation regions began from early observations of the star-forming regions Orion A and M17, which showed neutral areas bright in infrared radiation lying outside ionised H II regions. References Astrophysics Interstellar media
Photodissociation region
Physics,Astronomy
226
953,818
https://en.wikipedia.org/wiki/PSR%20B1829%E2%88%9210
PSR B1829−10 (often shortened to PSR 1829−10) is a pulsar approximately 30,000 light-years away in the constellation of Scutum. This pulsar has been a target of interest because of the mistaken identification of a planet around it. Andrew G. Lyne of the University of Manchester and Matthew Bailes claimed in July 1991 to have found "a planet orbiting the neutron star PSR 1829-10", but retracted the claim in 1992. They had failed to correctly take into account the ellipticity of Earth's orbit, and had incorrectly concluded that a planet with an orbital period of half a year existed around the pulsar. See also Exoplanet Hypothetical planet Pulsar planet Sources Further reading Pulsars Scutum (constellation)
PSR B1829−10
Astronomy
165
5,462,075
https://en.wikipedia.org/wiki/Decision%20tree%20pruning
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant for classifying instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by reducing overfitting. One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and generalizing poorly to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop, because it is impossible to tell whether the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information. Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement used to optimize performance. Techniques Pruning processes can be divided into two types: pre-pruning and post-pruning. Pre-pruning procedures prevent a complete induction of the training set by applying a stopping criterion in the induction algorithm (e.g., a maximum tree depth, or requiring the information gain of an attribute to exceed a minimum threshold). Pre-pruning methods are considered more efficient because they do not induce an entire tree; rather, trees remain small from the start. Pre-pruning methods share a common problem, the horizon effect, understood as the undesired premature termination of the induction by the stopping criterion. Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size but also improve the classification accuracy on unseen objects. The accuracy of the assignment on the training set may deteriorate, but the classification accuracy of the tree increases overall. The procedures are differentiated on the basis of their approach in the tree: top-down or bottom-up. Bottom-up pruning These procedures start at the last node in the tree (the lowest point). Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant sub-trees are lost with this method. These methods include reduced error pruning (REP), minimum cost complexity pruning (MCCP), and minimum error pruning (MEP). Top-down pruning In contrast to the bottom-up method, this method starts at the root of the tree. Following the structure below, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, it can happen that an entire sub-tree (regardless of its relevance) is dropped. One of these representatives is pessimistic error pruning (PEP), which gives quite good results with unseen items. Pruning algorithms Reduced error pruning One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class.
If the prediction accuracy is not affected, then the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed (see the sketch at the end of this entry). Cost complexity pruning Cost complexity pruning generates a series of trees T_0, ..., T_m, where T_0 is the initial tree and T_m is the root alone. At step i, the tree T_i is created by removing a subtree from tree T_{i-1} and replacing it with a leaf node whose value is chosen as in the tree-building algorithm. The subtree that is removed is chosen as follows: define the error rate of tree T over data set S as err(T, S). The subtree t that minimizes

(err(prune(T, t), S) − err(T, S)) / (|leaves(T)| − |leaves(prune(T, t))|)

is chosen for removal. The function prune(T, t) defines the tree obtained by pruning the subtree t from the tree T. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation. Examples Pruning can be applied in a compression scheme of a learning algorithm to remove redundant details without compromising the model's performance. In neural networks, pruning removes entire neurons or layers of neurons. See also Alpha–beta pruning Artificial neural network Null-move heuristic Pruning (artificial neural network) References Further reading MDL based decision tree pruning Decision tree pruning using backpropagation neural networks External links Fast, Bottom-Up Decision Tree Pruning Algorithm Introduction to Decision tree pruning Decision trees Machine learning
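A minimal sketch of reduced error pruning as described above (illustrative only: the Node structure, the validation-set format, and the helper functions are assumptions for the example, not from any particular library):

```python
from collections import Counter

class Node:
    """Simple decision-tree node; inner nodes split on one categorical feature.
    'labels' holds the training labels that reached this node (assumed non-empty)."""
    def __init__(self, feature=None, children=None, labels=None):
        self.feature = feature           # split feature; None for a leaf
        self.children = children or {}   # feature value -> child Node
        self.labels = labels or []

    def is_leaf(self):
        return not self.children

    @property
    def majority(self):
        # Most popular class at this node, as used by REP when collapsing.
        return Counter(self.labels).most_common(1)[0][0]

def predict(node, x):
    while not node.is_leaf():
        child = node.children.get(x.get(node.feature))
        if child is None:                # unseen feature value: use majority here
            break
        node = child
    return node.majority

def accuracy(tree, data):
    # data: list of (feature-dict, label) pairs
    return sum(predict(tree, x) == y for x, y in data) / len(data)

def reduced_error_prune(root, node, validation):
    """Bottom-up REP: tentatively replace each subtree with a leaf
    (majority class) and keep the change if validation accuracy
    does not decrease."""
    for child in node.children.values():
        reduced_error_prune(root, child, validation)
    if node.is_leaf():
        return
    before = accuracy(root, validation)
    saved, node.children = node.children, {}   # tentatively make this a leaf
    if accuracy(root, validation) < before:    # accuracy dropped: undo the prune
        node.children = saved

# Typical call: reduced_error_prune(tree, tree, validation_data)
```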
Decision tree pruning
Engineering
1,025
27,416,836
https://en.wikipedia.org/wiki/Mobile%20data%20offloading
Mobile data offloading is the use of complementary network technologies for delivering data originally targeted for cellular networks. Offloading reduces the amount of data being carried on the cellular bands, freeing bandwidth for other users. It is also used in situations where local cell reception may be poor, allowing the user to connect via wired services with better connectivity. Rules triggering the mobile offloading action can be set by either an end-user (mobile subscriber) or an operator. The code operating on the rules resides in an end-user device, in a server, or is divided between the two. End users perform data offloading for data service cost control and the availability of higher bandwidth. The main complementary network technologies used for mobile data offloading are Wi-Fi, femtocell and Integrated Mobile Broadcast. It is predicted that mobile data offloading will become a new industry segment due to the surge of mobile data traffic. Mobile data surge The increasing need for offloading solutions is caused by the explosion of Internet data traffic, especially the growing portion of traffic going through mobile networks. This has been enabled by smartphone devices possessing Wi-Fi capabilities together with large screens and different Internet applications, from browsers to video and audio streaming applications. In addition to smartphones, laptops with 3G access capabilities are also seen as a major source of mobile data traffic. Additionally, Wi-Fi is typically much less costly to build than cellular networks. It has been estimated that total Internet traffic would pass 235.7 exabytes per month in 2021, up from 73.1 exabytes per month in 2016. An annual growth rate of 50% is expected to continue, and it will keep outpacing the corresponding revenue growth. Alternatives Wi-Fi and femtocell technologies are the primary offload technologies used by the industry. In addition, WiMAX and terrestrial networks (LAN) are also candidates for offloading of 3G mobile data. Femtocells use standard cellular radio technologies, so any mobile device is capable of participating in the data offloading process, though some modification is needed to accommodate the different backhaul connection. On the other hand, cellular radio technologies are founded on the ability to do network planning within licensed spectrum. Hence, it may turn out to be difficult, both technically and business-wise, to mass-deploy femtocell access points. Self-organizing network (SON) is an emerging technology for tackling unplanned femtocell deployment (among other applications). Wi-Fi is a different radio technology from cellular, but most Internet-capable mobile devices now come with Wi-Fi capability. There are already millions of installed Wi-Fi networks, mainly in congested areas such as airports, hotels and city centers, and the number is growing rapidly. Wi-Fi networks are very fragmented, but recently there have been efforts to consolidate them. The consolidation of Wi-Fi networks is proceeding both through a community approach, Fon being the prime example, and by the consolidation of Wi-Fi network operators. Wi-Fi Wi-Fi offloading is an emerging business domain, with multiple companies entering the market with proprietary solutions. As standardization has focused on the degree of coupling between the cellular and Wi-Fi networks, the competing solutions can be classified based on the minimum needed level of network interworking.
Besides standardization, research communities have been exploring more open and programmable designs in order to fix the deployment dilemma. A further classification criterion is the initiator of the offloading procedure. Cellular and Wi-Fi network interworking Depending on the services to be offloaded and the business model, there may be a need for interworking standardization. Standardization efforts have focused on specifying tight or loose coupling between the cellular and the Wi-Fi networks, especially in a network-controlled manner. The 3GPP-based Enhanced Generic Access Network architecture applies tight coupling, as it specifies rerouting of cellular network signaling through Wi-Fi access networks. Wi-Fi is considered to be a non-3GPP WLAN radio access network (RAN). 3GPP has also specified an alternative loosely coupled solution for Wi-Fi. The approach is called the Interworking Wireless LAN (IWLAN) architecture, and it is a solution to transfer IP data between a mobile device and the operator's core network through Wi-Fi access. In the IWLAN architecture, a mobile device opens a VPN/IPsec tunnel from the device to a dedicated IWLAN server in the operator's core network to provide the user either access to the operator's walled-garden services or a gateway to the public Internet. With loose coupling between the networks, the only integration and interworking point is the common authentication architecture. The most straightforward way to offload data to Wi-Fi networks is to have a direct connection to the public Internet. This no-coupling alternative omits the need for interworking standardization. For the majority of web traffic there is no added value in routing the data through the operator core network. In this case the offloading can simply be carried out by switching the IP traffic to use the Wi-Fi connection in the mobile client instead of the cellular data connection. In this approach the two networks are in practice totally separated, and network selection is done by a client application. Studies show that a significant amount of data can be offloaded to Wi-Fi networks in this manner even when users are mobile. However, offloading does not always mean a reduction of resource consumption (required system capacity) in the network of the operator. Under certain conditions, and due to an increase in the burstiness of the non-offloaded traffic (i.e. traffic that eventually reaches the network of the operator in the regular way), the amount of network resources needed to offer a given level of QoS is increased. In this context, the distribution of offloading periods turns out to be the main design parameter for deploying effective offloading strategies in the networks of MNOs, making non-offloaded traffic less heavy-tailed and hence reducing the resources needed in the network of the operator. The energy consumption of offloading is another concern. Initiation of offloading procedure There are three main initiation schemes: WLAN-scanning initiation, user initiation and remotely managed initiation. In the WLAN-scanning-based initiation, the user device periodically performs WLAN scanning. When a known or an open Wi-Fi network is found, an offloading procedure is initiated (a toy code sketch of this scheme appears at the end of this entry). In the user-initiated mode, the user is prompted to select which network technology is used; this usually happens once per network access session. In the remotely managed approach, a network server initiates each offloading procedure by prompting the connection manager of a specific user device.
Operator-managed initiation is a subclass of the remotely managed approach. In the operator-managed approach, the operator monitors its network load and user behavior. In the case of forthcoming network congestion, the operator initiates the offloading procedure. ANDSF The access network discovery and selection function (ANDSF) is the most complete 3GPP approach to date for controlling offloading between 3GPP and non-3GPP access networks (such as Wi-Fi). The purpose of the ANDSF is to assist user devices in discovering access networks in their vicinity and to provide rules (policies) to prioritize and manage connections to all networks. ATSSS 3GPP has started to standardize the Access Traffic Steering, Switching & Splitting (ATSSS) function to enable 5G devices to use different types of access networks, including Wi-Fi. The ATSSS service leverages the Multipath TCP protocol to enable 5G devices to simultaneously utilize different access networks. Experience with Multipath TCP on iPhones has shown that the ability to simultaneously use Wi-Fi and cellular was key to supporting seamless handovers. The first version of the ATSSS specification leverages the 0-RTT convert protocol developed within the IETF. A prototype implementation of this service was demonstrated in August 2019. Operating system connection manager Many operating systems provide a connection manager that can automatically switch to a Wi-Fi network if it detects a known Wi-Fi network. Such functionality can be found in most modern operating systems (for example, all Windows versions beginning with XP SP3, Ubuntu, the Nokia N900, Android and the Apple iPhone). The connection managers use various heuristics to detect the best-performing network connections, such as performing DNS requests for known names over newly activated network interfaces or sending queries to specific servers. When both the Wi-Fi and the cellular interfaces are active, Android smartphones will usually prefer the Wi-Fi one, since it is usually unmetered. When such a smartphone decides to switch from one interface to another, all the active TCP connections need to be reestablished. Multipath TCP solves this handover problem in a clean way. With Multipath TCP, TCP connections can use both the Wi-Fi interface and the cellular one during the handover. This means that ongoing TCP connections are not stopped when the smartphone decides to switch from one network to another. As of January 2020, Multipath TCP is natively supported on iPhones, but less frequently used on Android smartphones except in South Korea. On iPhones since iOS 9, the Wi-Fi Assist subsystem monitors the quality of the underlying network connection. If the quality drops below a given threshold, Wi-Fi Assist may decide to move established Multipath TCP connections to another interface. Initially, this feature was used for the Siri application. Since iOS 12, any Multipath TCP-enabled application can benefit from this feature. Since iOS 13, Apple Maps and Apple Music can also be offloaded from Wi-Fi to cellular and vice versa without any interruption. Opportunistic offloading With the increasing availability of inter-device networks (e.g. Bluetooth or Wi-Fi Direct) there is also the possibility of offloading delay-tolerant data to the ad hoc network layer. In this case, the delay-tolerant data is sent to only a subset of data receivers via the 3G network, with the rest forwarded between devices in the ad hoc layer in a multi-hop fashion.
As a result, the traffic on the cellular network is reduced, or gets shifted to inter-device networks. See also LTE in unlicensed spectrum Generic Access Network References External links Global Wi-Fi Offload Summit Metropolitan area networks Network access Wi-Fi
Mobile data offloading
Technology,Engineering
2,144
5,642,583
https://en.wikipedia.org/wiki/Lorenz%20system
The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. The term "butterfly effect" in popular media may stem from the real-world implications of the Lorenz attractor, namely that tiny changes in initial conditions evolve to completely different trajectories. This underscores that chaotic systems can be completely deterministic and yet still be inherently impractical or even impossible to predict over longer periods of time. For example, even the small flap of a butterfly's wings could set the earth's atmosphere on a vastly different trajectory, in which for example a hurricane occurs where it otherwise would have not (see Saddle points). The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly. Overview In 1963, Edward Lorenz, with the help of Ellen Fetter, who was responsible for the numerical simulations and figures, and Margaret Hamilton, who helped in the initial numerical computations leading up to the findings of the Lorenz model, developed a simplified mathematical model for atmospheric convection. The model is a system of three ordinary differential equations now known as the Lorenz equations: dx/dt = σ(y - x), dy/dt = x(ρ - z) - y, dz/dt = xy - βz. The equations relate the properties of a two-dimensional fluid layer uniformly warmed from below and cooled from above. In particular, the equations describe the rate of change of three quantities with respect to time: x is proportional to the rate of convection, y to the horizontal temperature variation, and z to the vertical temperature variation. The constants σ, ρ, and β are system parameters proportional to the Prandtl number, the Rayleigh number, and certain physical dimensions of the layer itself. The Lorenz equations can arise in simplified models for lasers, dynamos, thermosyphons, brushless DC motors, electric circuits, chemical reactions and forward osmosis. Interestingly, the same Lorenz equations were also derived in 1963 by Sauermann and Haken for a single-mode laser. In 1975, Haken realized that their equations derived in 1963 were mathematically equivalent to the original Lorenz equations. Haken's paper thus started a new field called laser chaos or optical chaos. The Lorenz equations are often called Lorenz-Haken equations in optical literature. Later on, it was also shown that the complex version of the Lorenz equations has a laser equivalent as well. The Lorenz equations are also the governing equations in Fourier space for the Malkus waterwheel. The Malkus waterwheel exhibits chaotic motion where instead of spinning in one direction at a constant speed, its rotation will speed up, slow down, stop, change directions, and oscillate back and forth between combinations of such behaviors in an unpredictable manner. From a technical standpoint, the Lorenz system is nonlinear, aperiodic, three-dimensional and deterministic. The Lorenz equations have been the subject of hundreds of research articles, and at least one book-length study. Analysis One normally assumes that the parameters σ, ρ, and β are positive. Lorenz used the values σ = 10, β = 8/3 and ρ = 28. The system exhibits chaotic behavior for these (and nearby) values. If ρ < 1 then there is only one equilibrium point, which is at the origin. This point corresponds to no convection. All orbits converge to the origin, which is a global attractor, when ρ < 1; a quick numerical check of this is sketched below.
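The ρ < 1 statement is easy to verify numerically. The following is a minimal sketch in Python (plain forward Euler, as in the Python simulation later in this article); the step size, time horizon and initial conditions are arbitrary illustrative choices:

import numpy as np

def lorenz_rhs(v, sigma=10.0, rho=0.5, beta=8/3):
    # Right-hand side of the Lorenz equations, here with a subcritical rho < 1.
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(seed=1)
dt, n_steps = 0.001, 50_000  # integrate to t = 50 with forward Euler
for _ in range(5):
    v = rng.uniform(-10.0, 10.0, size=3)  # random initial condition
    for _ in range(n_steps):
        v = v + dt * lorenz_rhs(v)
    print(np.linalg.norm(v))  # tiny in every case: the orbit has collapsed onto the origin

Rerunning the same loop with rho = 28 instead shows the orbit wandering over the butterfly-shaped attractor rather than settling down.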
A pitchfork bifurcation occurs at ρ = 1, and for ρ > 1 two additional critical points appear at (√(β(ρ - 1)), √(β(ρ - 1)), ρ - 1) and (-√(β(ρ - 1)), -√(β(ρ - 1)), ρ - 1). These correspond to steady convection. This pair of equilibrium points is stable only if ρ < σ(σ + β + 3)/(σ - β - 1), which can hold only for positive ρ if σ > β + 1. At the critical value, both equilibrium points lose stability through a subcritical Hopf bifurcation. When σ = 10, β = 8/3 and ρ = 28, the Lorenz system has chaotic solutions (but not all solutions are chaotic). Almost all initial points will tend to an invariant set, the Lorenz attractor, which is a strange attractor, a fractal, and a self-excited attractor with respect to all three equilibria. Its Hausdorff dimension is estimated from above by the Lyapunov dimension (Kaplan-Yorke dimension) as 2.06 ± 0.01, and the correlation dimension is estimated to be 2.05 ± 0.01. The exact Lyapunov dimension formula of the global attractor can be found analytically under classical restrictions on the parameters: D = 3 - 2(σ + β + 1)/(σ + 1 + √((σ - 1)² + 4σρ)). The Lorenz attractor is difficult to analyze, but the action of the differential equation on the attractor is described by a fairly simple geometric model. Proving that this is indeed the case is the fourteenth problem on the list of Smale's problems. This problem was the first one to be resolved, by Warwick Tucker in 2002. For other values of ρ, the system displays knotted periodic orbits. For example, with ρ = 99.96 it becomes a T(3,2) torus knot. Connection to tent map In Figure 4 of his paper, Lorenz plotted the relative maximum value in the z direction achieved by the system against the previous relative maximum in the z direction. This procedure later became known as a Lorenz map (not to be confused with a Poincaré plot, which plots the intersections of a trajectory with a prescribed surface). The resulting plot has a shape very similar to the tent map. Lorenz also found that when the maximum value is above a certain cut-off, the system will switch to the next lobe. Combining this with the chaos known to be exhibited by the tent map, he showed that the system switches between the two lobes chaotically. A Generalized Lorenz System Over the past several years, a series of papers regarding high-dimensional Lorenz models have yielded a generalized Lorenz model, which can be simplified into the classical Lorenz model for three state variables, or into a five-dimensional Lorenz model for five state variables. The additional parameter of the five-dimensional model is chosen to be consistent with the choice of the other parameters; see the Shen papers cited in the references for details. Simulations Julia simulation

using Plots
# define the Lorenz attractor
@kwdef mutable struct Lorenz
    dt::Float64 = 0.02
    σ::Float64 = 10
    ρ::Float64 = 28
    β::Float64 = 8/3
    x::Float64 = 2
    y::Float64 = 1
    z::Float64 = 1
end

function step!(l::Lorenz)
    dx = l.σ * (l.y - l.x)
    dy = l.x * (l.ρ - l.z) - l.y
    dz = l.x * l.y - l.β * l.z
    l.x += l.dt * dx
    l.y += l.dt * dy
    l.z += l.dt * dz
end

attractor = Lorenz()

# initialize a 3D plot with 1 empty series
plt = plot3d(
    1,
    xlim = (-30, 30),
    ylim = (-30, 30),
    zlim = (0, 60),
    title = "Lorenz Attractor",
    marker = 2,
)

# build an animated gif by pushing new points to the plot, saving every 10th frame
@gif for i=1:1500
    step!(attractor)
    push!(plt, attractor.x, attractor.y, attractor.z)
end every 10

Maple simulation

deq := [diff(x(t), t) = 10*(y(t) - x(t)),
        diff(y(t), t) = 28*x(t) - y(t) - x(t)*z(t),
        diff(z(t), t) = x(t)*y(t) - 8/3*z(t)]:
with(DEtools):
DEplot3d(deq, {x(t), y(t), z(t)}, t = 0 .. 100,
         [[x(0) = 10, y(0) = 10, z(0) = 10]],
         stepsize = 0.01, x = -20 .. 20, y = -25 .. 25, z = 0 .. 50,
         linecolour = sin(t*Pi/3), thickness = 1,
         orientation = [-40, 80], title = `Lorenz Chaotic Attractor`);

Maxima simulation

[sigma, rho, beta]: [10, 28, 8/3]$
eq: [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]$
sol: rk(eq, [x, y, z], [1, 0, 0], [t, 0, 50, 1/100])$
len: length(sol)$
x: makelist(sol[k][2], k, len)$
y: makelist(sol[k][3], k, len)$
z: makelist(sol[k][4], k, len)$
draw3d(points_joined=true, point_type=-1, points(x, y, z), proportional_axes=xyz)$

MATLAB simulation

% Solve over time interval [0,100] with initial conditions [1,1,1]
% ''f'' is set of differential equations
% ''a'' is array containing x, y, and z variables
% ''t'' is time variable
sigma = 10;
beta = 8/3;
rho = 28;
f = @(t,a) [-sigma*a(1) + sigma*a(2); rho*a(1) - a(2) - a(1)*a(3); -beta*a(3) + a(1)*a(2)];
[t,a] = ode45(f, [0 100], [1 1 1]);  % Runge-Kutta 4th/5th order ODE solver
plot3(a(:,1), a(:,2), a(:,3))

Mathematica simulation

Standard way:

tend = 50;
eq = {x'[t] == σ (y[t] - x[t]),
      y'[t] == x[t] (ρ - z[t]) - y[t],
      z'[t] == x[t] y[t] - β z[t]};
init = {x[0] == 10, y[0] == 10, z[0] == 10};
pars = {σ -> 10, ρ -> 28, β -> 8/3};
{xs, ys, zs} = NDSolveValue[{eq /. pars, init}, {x, y, z}, {t, 0, tend}];
ParametricPlot3D[{xs[t], ys[t], zs[t]}, {t, 0, tend}]

Less verbose:

lorenz = NonlinearStateSpaceModel[{{σ (y - x), x (ρ - z) - y, x y - β z}, {}}, {x, y, z}, {σ, ρ, β}];
soln[t_] = StateResponse[{lorenz, {10, 10, 10}}, {10, 28, 8/3}, {t, 0, 50}];
ParametricPlot3D[soln[t], {t, 0, 50}]

Python simulation

import matplotlib.pyplot as plt
import numpy as np

def lorenz(xyz, *, s=10, r=28, b=2.667):
    """
    Parameters
    ----------
    xyz : array-like, shape (3,)
        Point of interest in three-dimensional space.
    s, r, b : float
        Parameters defining the Lorenz attractor.

    Returns
    -------
    xyz_dot : array, shape (3,)
        Values of the Lorenz attractor's partial derivatives at *xyz*.
    """
    x, y, z = xyz
    x_dot = s*(y - x)
    y_dot = r*x - y - x*z
    z_dot = x*y - b*z
    return np.array([x_dot, y_dot, z_dot])

dt = 0.01
num_steps = 10000

xyzs = np.empty((num_steps + 1, 3))  # Need one more for the initial values
xyzs[0] = (0., 1., 1.05)  # Set initial values
# Step through "time", calculating the partial derivatives at the current point
# and using them to estimate the next point
for i in range(num_steps):
    xyzs[i + 1] = xyzs[i] + lorenz(xyzs[i]) * dt

# Plot
ax = plt.figure().add_subplot(projection='3d')
ax.plot(*xyzs.T, lw=0.6)
ax.set_xlabel("X Axis")
ax.set_ylabel("Y Axis")
ax.set_zlabel("Z Axis")
ax.set_title("Lorenz Attractor")
plt.show()

R simulation

library(deSolve)
library(plotly)

# parameters
prm <- list(sigma = 10, rho = 28, beta = 8/3)

# initial values
varini <- c(X = 1, Y = 1, Z = 1)

Lorenz <- function (t, vars, prm) {
  with(as.list(vars), {
    dX <- prm$sigma*(Y - X)
    dY <- X*(prm$rho - Z) - Y
    dZ <- X*Y - prm$beta*Z
    return(list(c(dX, dY, dZ)))
  })
}

times <- seq(from = 0, to = 100, by = 0.01)

# call ode solver
out <- ode(y = varini, times = times, func = Lorenz, parms = prm)

# to assign color to points
gfill <- function (repArr, long) {
  rep(repArr, ceiling(long/length(repArr)))[1:long]
}

dout <- as.data.frame(out)
dout$color <- gfill(rainbow(10), nrow(dout))

# Graphics production with Plotly:
plot_ly(
  data = dout, x = ~X, y = ~Y, z = ~Z,
  type = 'scatter3d', mode = 'lines',
  opacity = 1,
  line = list(width = 6, color = ~color, reverscale = FALSE)
)

Applications Model for atmospheric convection As shown in Lorenz's original paper, the Lorenz system is a reduced version of a larger system studied earlier by Barry Saltzman.
The Lorenz equations are derived from the Oberbeck–Boussinesq approximation to the equations describing fluid circulation in a shallow layer of fluid, heated uniformly from below and cooled uniformly from above. This fluid circulation is known as Rayleigh–Bénard convection. The fluid is assumed to circulate in two dimensions (vertical and horizontal) with periodic rectangular boundary conditions. The partial differential equations modeling the system's stream function and temperature are subjected to a spectral Galerkin approximation: the hydrodynamic fields are expanded in Fourier series, which are then severely truncated to a single term for the stream function and two terms for the temperature. This reduces the model equations to a set of three coupled, nonlinear ordinary differential equations. A detailed derivation may be found, for example, in the appendices of standard nonlinear dynamics texts, or in Shen (2016), Supplementary Materials. Model for the nature of chaos and order in the atmosphere The scientific community accepts that the chaotic features found in low-dimensional Lorenz models could represent features of the Earth's atmosphere, yielding the statement that “weather is chaotic.” By comparison, based on the concept of attractor coexistence within the generalized Lorenz model and the original Lorenz model, Shen and his co-authors proposed a revised view that “weather possesses both chaos and order with distinct predictability”. The revised view, which builds on the conventional view, is used to suggest that “the chaotic and regular features found in theoretical Lorenz models could better represent features of the Earth's atmosphere”. Resolution of Smale's 14th problem Smale's 14th problem asks, 'Do the properties of the Lorenz attractor exhibit that of a strange attractor?'. The problem was answered affirmatively by Warwick Tucker in 2002. To prove this result, Tucker used rigorous numerical methods such as interval arithmetic and normal forms. First, Tucker defined a cross section that is cut transversely by the flow trajectories. From this, one can define the first-return map, which assigns to each point of the cross section the point where the trajectory starting from it first intersects the cross section again. Then the proof is split into three main points that are proved and together imply the existence of a strange attractor. The three points are: There exists a region of the cross section invariant under the first-return map, meaning the return map sends this region into itself. The return map admits a forward invariant cone field. Vectors inside this invariant cone field are uniformly expanded by the derivative of the return map. To prove the first point, we notice that the cross section is cut by two arcs formed by the intersection of the stable manifold of the origin with the cross section. Tucker covers the location of these two arcs by small rectangles; the union of these rectangles gives the invariant region. Now, the goal is to prove that for all points in this region, the flow will bring the points back into the region within the cross section. To do that, we take a plane below the cross section at a small distance; then, by taking the center of each rectangle and using the Euler integration method, one can estimate where the flow will bring that center in the lower plane, which gives us a new point. Then, one can estimate where the points of the rectangle will be mapped in the lower plane using a Taylor expansion; this gives us a new rectangle centered on the new point. Thus we know that all points in the original rectangle will be mapped into the new rectangle. The goal is to apply this method recursively until the flow comes back to the cross section, where we obtain a rectangle known to contain the image of the starting rectangle under the return map. The problem is that our estimate may become imprecise after several iterations, so what Tucker does is to split the starting rectangle into smaller rectangles and then apply the process recursively.
Another problem is that as we are applying this algorithm, the flow becomes more 'horizontal', leading to a dramatic increase in imprecision. To prevent this, the algorithm changes the orientation of the cross sections, becoming either horizontal or vertical. See also Eden's conjecture on the Lyapunov dimension Lorenz 96 model List of chaotic maps Takens' theorem Notes References Shen, B.-W. (2015-12-21). "Nonlinear feedback in a six-dimensional Lorenz model: impact of an additional heating term". Nonlinear Processes in Geophysics. 22 (6): 749–764. doi:10.5194/npg-22-749-2015. ISSN 1607-7946. Further reading External links Lorenz attractor by Rob Morris, Wolfram Demonstrations Project. Lorenz equation on planetmath.org Synchronized Chaos and Private Communications, with Kevin Cuomo. The implementation of Lorenz attractor in an electronic circuit. Lorenz attractor interactive animation (you need the Adobe Shockwave plugin) 3D Attractors: Mac program to visualize and explore the Lorenz attractor in 3 dimensions Lorenz Attractor implemented in analog electronic Lorenz Attractor interactive animation (implemented in Ada with GTK+. Sources & executable) Interactive web based Lorenz Attractor made with Iodide Chaotic maps Articles containing video clips Articles with example Python (programming language) code Articles with example MATLAB/Octave code Articles with example Julia code
Lorenz system
Mathematics
4,281
72,760
https://en.wikipedia.org/wiki/Square%20kilometre
The square kilometre (square kilometer in American spelling; symbol: km2) is a multiple of the square metre, the SI unit of area or surface area. 1 km2 is equal to: 1,000,000 square metres (m2) 100 hectares (ha) It is also approximately equal to: 0.3861 square miles 247.1 acres Conversely: 1 m2 = 0.000001 (10−6) km2 1 hectare = 0.01 (10−2) km2 1 square mile = about 2.59 km2 1 acre = about 0.004 km2 (these conversion factors are checked numerically in the short sketch below) The symbol "km2" means (km)2, square kilometre or kilometre squared, and not k(m2), kilo–square metre. For example, 3 km2 is equal to 3 × (1,000 m)2 = 3,000,000 m2, not 3,000 m2. Examples of areas of 1 square kilometre Topographical map grids Topographical map grids are worked out in metres, with the grid lines being 1,000 metres apart. 1:100,000 maps are divided into squares representing 1 km2, each square on the map being one square centimetre in area and representing 1 km2 on the surface of the Earth. For 1:50,000 maps, the grid lines are 2 cm apart. Each square on the map is 2 cm by 2 cm (4 cm2) and represents 1 km2 on the surface of the Earth. For 1:25,000 maps, the grid lines are 4 cm apart. Each square on the map is 4 cm by 4 cm (16 cm2) and represents 1 km2 on the surface of the Earth. In each case, the grid lines enclose one square kilometre. Medieval city centres The area enclosed by the walls of many European medieval cities was about one square kilometre. These walls are often either still standing or the route they followed is still clearly visible, such as in Brussels, where the wall has been replaced by a ring road, or in Frankfurt, where the wall has been replaced by gardens. The approximate area of the old walled cities can often be worked out by fitting the course of the wall to a rectangle or an oval (ellipse). Examples include: Delft, Netherlands The walled city of Delft was approximately rectangular. The approximate length of the rectangle was about 1.30 km. The approximate width of the rectangle was about 0.75 km. A perfect rectangle with these measurements has an area of 1.30 × 0.75 = 0.98 km2 Lucca (Italy) The medieval city is roughly rectangular with rounded north-east and north-west corners. The maximum distance from east to west is 1.36 km. The maximum distance from north to south is 0.80 km. A perfect rectangle of these dimensions would be 1.36 × 0.80 = 1.088 km2. Bruges (Belgium) The medieval city of Bruges, a major centre in Flanders, was roughly oval or elliptical in shape, with the longer (major) axis running north and south. The maximum distance from north to south (the major axis) is about 2.53 km. The maximum distance from east to west (the minor axis) is about 1.81 km. A perfect ellipse of these dimensions would be 2.53 × 1.81 × (π/4) = 3.597 km2. Chester, United Kingdom Chester is one of the smaller English cities that has a near-intact city wall. The distance from Northgate to Watergate is about 855 metres. The distance from Eastgate to Westgate is about 589 metres. A perfect rectangle of these dimensions would be (855/1000) × (589/1000) = 0.504 km2. Parks Parks come in all sizes; a few are almost exactly one square kilometre in area. Here are some examples: Riverside Country Park, UK. Brierley Forest Park, UK. Rio de Los Angeles State Park, California, US Jones County Central Park, Iowa, US. Kiest Park, Dallas, Texas, US Hole-in-the-Wall Park & Campground, Grand Manan Island, Bay of Fundy, New Brunswick, Canada Downing Provincial Park, British Columbia, Canada Citadel Park, Poznan, Poland Sydney Olympic Park, Sydney, Australia, contains 6.63 square kilometres of wetlands and waterways.
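The conversion factors listed at the top of this article can be checked with a few lines of Python; the only external facts used are the exact definition of the mile (1.609344 km) and the 640 acres in a square mile:

SQ_MILE_KM2 = 1.609344 ** 2   # one square mile ≈ 2.590 km2
ACRE_KM2 = SQ_MILE_KM2 / 640  # one acre ≈ 0.004047 km2
HECTARE_KM2 = 0.01            # one hectare is 0.01 km2 by definition

km2 = 1.0
print(km2 / SQ_MILE_KM2)  # ≈ 0.3861 square miles
print(km2 / ACRE_KM2)     # ≈ 247.1 acres
print(km2 / HECTARE_KM2)  # 100 hectares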
Golf courses Using the figures published by golf course architects Crafter and Mogford, a course should have a fairway width of 120 metres and 40 metres clear beyond the hole. Assuming an 18-hole course, an area of 80 hectares (0.8 square kilometre) needs to be allocated for the course itself. Examples of golf courses that are about one square kilometre include: Manchester Golf Club, UK Northop Country Park, Wales, UK The Trophy Club, Lebanon, Indiana, US Qingdao International Country Golf Course, Qingdao, Shandong, China Arabian Ranches Golf Club, Dubai Sharm el Sheikh Golf Courses: Sharm el Sheikh, South Sinai, Egypt Belmont Golf Club, Lake Macquarie, NSW, Australia Other areas of one square kilometre or thereabouts The Old City of Jerusalem is almost 1 square kilometre in area. Milton Science Park, Oxfordshire, UK. Mielec Industrial Park, Mielec, Poland The Guildford Campus of Guildford Grammar School, South Guildford, Western Australia Sardar Vallabhbhai National Institute of Technology (SVNIT), Surat, India Île aux Cerfs Island, near the east coast of Mauritius. Peng Chau Island, Hong Kong See also Conversion of units SI prefix for the precise meaning of the prefix "k" Square Kilometre Array, a proposed radio telescope in both South Africa and Australia, which is intended to have a collecting area of approximately 1 km2 Notes References Units of area SI derived units
Square kilometre
Mathematics
1,167
174,094
https://en.wikipedia.org/wiki/Microsoft%20Messenger%20service
Messenger (formerly MSN Messenger Service, .NET Messenger Service and Windows Live Messenger Service) was an instant messaging and presence system developed by Microsoft in 1999 for use with its MSN Messenger software. It was used by instant messaging clients including Windows 8, Windows Live Messenger, Microsoft Messenger for Mac, Outlook.com and Xbox Live. Third-party clients also connected to the service. It communicated using the Microsoft Notification Protocol, a proprietary instant messaging protocol. The service allowed anyone with a Microsoft account to sign in and communicate in real time with other people who were signed in as well. On January 11, 2013, Microsoft announced that it was retiring the existing Messenger service globally (except for mainland China, where Messenger would continue to be available) and replacing it with Skype. In April 2013, Microsoft merged the service into Skype; existing users were able to sign into Skype with their existing accounts and access their contact lists. As part of the merger, Skype's instant messaging functionality now runs on the backbone of the former Messenger service. Background Despite multiple name changes to the service and its client software over the years, the Messenger service is often referred to colloquially as "MSN", due to the history of MSN Messenger. The service itself was known as MSN Messenger Service from 1999 to 2001, at which time Microsoft changed its name to .NET Messenger Service and began offering clients that no longer carried the "MSN" name, such as the Windows Messenger client included with Windows XP, which was originally intended to be a streamlined version of MSN Messenger, free of advertisements and integrated into Windows. Nevertheless, the company continued to offer upgrades to MSN Messenger until the end of 2005, when all previous versions of MSN Messenger and Windows Messenger were superseded by a new program, Windows Live Messenger, as part of Microsoft's launch of its Windows Live online services. For several years, the official name for the service remained .NET Messenger Service, as indicated on its official network status web page, though Microsoft rarely used the name to promote the service. Because the main client used to access the service became known as Windows Live Messenger, Microsoft started referring to the entire service as the Windows Live Messenger Service in its support documentation in the mid-2000s. The service could integrate with the Windows operating system, automatically and simultaneously signing into the network as the user logged into their Windows account. Organizations could also integrate their Microsoft Office Communications Server and Active Directory with the service. In December 2011, Microsoft released an XMPP interface to the Messenger service. As part of a larger effort to rebrand many of its Windows Live services, Microsoft began referring to the service as simply Messenger in 2012.
Software Official clients Microsoft offered the following instant messaging clients that connected to the Messenger service: Windows Live Messenger, for users of Windows 7 and previous versions MSN Messenger was the former name of the client from 1999 to 2006 Windows Messenger is a scaled-down client that was included with Windows XP in 2001 Microsoft Messenger for Mac, for users of Mac OS X Outlook.com includes web browser-based functionality for instant messaging Hotmail, the predecessor to Outlook.com, includes similar functionality for Messenger Windows Live Web Messenger was a web-based program for use through Internet Explorer MSN Web Messenger was the former name of the web-based client Windows 8, includes a built-in Messaging client Xbox Live includes access to the Messenger service from within the Xbox Dashboard MSN TV (formerly WebTV) had a built-in messaging client available on the original WebTV/MSN TV and MSN TV 2 devices, which was originally introduced via a Summer 2000 software update Messenger on Windows Phone includes access to the Messenger service from within a phone running Windows Phone Windows Live Messenger for iPhone and iPod Touch includes access to the Messenger service from within an iPhone, iPod Touch or iPad Windows Live Messenger for Nokia includes access to the Messenger service from within a Nokia phone Messenger Play! includes access to the Messenger service from within an Android phone or tablet Windows Live Messenger for BlackBerry includes access to the Messenger service from within a BlackBerry Security concerns A 2007 analysis of Messenger's Microsoft Notification Protocol, which is unencrypted, concluded that its design "did not follow several principles of designing secure systems", resulting in a "plethora of security vulnerabilities"; these vulnerabilities were demonstrated by successfully spoofing a user's identity. See also Microsoft Notification Protocol Comparison of instant messaging protocols Comparison of cross-platform instant messaging clients References External links MSN Messenger protocol documentation MSNPiki (protocol wiki) Skype replaces Microsoft Messenger for online calls .NET Instant messaging protocols Windows communication and services
Microsoft Messenger service
Technology
942
45,168,186
https://en.wikipedia.org/wiki/Gliese%20625
GJ 625 (AC 54 1646-56) is a small red dwarf star with an exoplanetary companion in the northern constellation of Draco. The system is located at a distance of 21.1 light-years from the Sun based on parallax, but is drifting closer with a radial velocity of −13 km/s. It is too faint to be visible to the naked eye, having an apparent visual magnitude of 10.13 and an absolute magnitude of 11.06. This is an M-type main-sequence star with a stellar classification of M1.5V. It is spinning slowly with a rotation period of roughly 78 days, and has a low magnetic activity level. The star has about a quarter of the mass and size of the Sun, and its metal content is 40% of the abundance in the Sun's atmosphere. It is radiating just 1.5% of the luminosity of the Sun from its photosphere at an effective temperature of 3,557 K. Planetary system On May 18, 2017, a planet was detected orbiting GJ 625 with the HARPS-N spectrograph. The planet, GJ 625 b, orbits near the inner edge of the optimistic circumstellar habitable zone of its star, and the discoverers speculate it may support liquid water, depending on atmospheric conditions. Based on the habitable zone model of Kopparapu et al. 2013, the planet is not considered to be in the habitable zone, as it would likely experience a runaway greenhouse effect, similar to Venus. Since the star is considered quiescent (having a low X-ray emission and flare rate), the radio emission from the system may be auroral in nature and coming from a short-period planet. Further observations will be needed to confirm this. References M-type main-sequence stars Planetary systems with one confirmed planet Draco (constellation) 0625 080459
Gliese 625
Astronomy
394
53,091,932
https://en.wikipedia.org/wiki/NGC%20408
NGC 408 is a star located in the constellation Pisces. It was discovered on October 22, 1867, by Herman Schultz. It was described by Dreyer as "very faint, very small, (WH) II 220 eight seconds of time to east", WH II 220 being NGC 410. See also List of NGC objects (1–1000) Pisces (constellation) References External links SEDS 0408 18671022 Pisces (constellation) Discoveries by Herman Schultz (astronomer)
NGC 408
Astronomy
107
2,061
https://en.wikipedia.org/wiki/Automatic%20number%20announcement%20circuit
An automatic number announcement circuit (ANAC) is a component of a central office of a telephone company that provides a service to installation and service technicians to determine the telephone number of a telephone line. The facility has a telephone number that may be called to listen to an automatic announcement that includes the caller's telephone number. The ANAC facility is useful primarily during the installation of landline telephones to quickly identify one of multiple wire pairs in a bundle or at a termination point. Operation Using a connected test telephone set, a technician calls the local telephone number of the automatic number announcement service. This call is connected to equipment at the central office that automatically announces the telephone number of the line calling in. The main purpose of this system is to allow telephone company technicians to identify the telephone line they are connected to. Automatic number announcement systems are based on automatic number identification. Because they are intended for use by phone company technicians, ANAC systems bypass customer features such as unlisted numbers, caller ID blocking, and outgoing call blocking. Installers of multi-line business services, where outgoing calls from all lines display the company's main number on call display, can use ANAC to identify a specific line in the system, even if CID displays every line as "line one". Most ANAC systems are provider-specific in each wire center, while others are regional or state-/province- or area-code-wide. No official lists of ANAC numbers are published, as telephone companies guard against abuse that would interfere with availability for installers. Exchange prefixes for testing The North American Numbering Plan reserves the exchange (central office) prefixes 958 and 959 for plant testing purposes. Code 959 with three or four additional digits is dedicated for access to office test lines in local exchange carrier and interoffice carrier central offices. The specifications define several test features for line conditions, such as quiet line and busy line, and test tones transmitted to callers. Telephone numbers are assigned for ring-back to test the ringer when installing telephone sets, for milliwatt tone (a number that simply answers with a continuous test tone) and for loop-around (which connects a call to another inbound call to the same or another test number). ANAC services are typically installed in the 958 range, which is intended for communications between central offices. In some area codes, multiple additional prefixes may be reserved for test purposes. Many area codes reserved 999; 320 was also formerly reserved in Bell Canada territory. Other carrier-specific North American test numbers include 555-XXXX numbers (such as 555–0311 on Rogers Communications in Canada) or vertical service codes, such as *99 on Cablevision/Optimum Voice in the United States. MCI Inc. (United States) provides ANI information by dialing 800-444-4444. Telephone numbers Plant testing telephone numbers are carrier-specific; there is no comprehensive list of telephone numbers for ANAC services. In some communities, test numbers change relatively often. In others, a major incumbent carrier might assign a single number which provides test functions on its network across an entire numbering plan area, throughout an entire province or state, or system-wide. Some telecommunication carriers maintain toll-free numbers for ANAC facilities.
Some national toll-free numbers provide automatic number identification by speaking the telephone number of the caller, but these are not intended for use in identifying the customer's own phone number. They are used so that an agent in a call center can confirm the telephone number a customer is calling from, allowing the customer's account information to be displayed as a "screen pop" for the next available customer service representative. See also Plant test number Ringback number References Telephone numbers Telephony signals
Automatic number announcement circuit
Mathematics
762
856,319
https://en.wikipedia.org/wiki/HCL%20Sametime
HCL Sametime Premium (formerly IBM Sametime and IBM Lotus Sametime) is a client–server application and middleware platform that provides real-time, unified communications and collaboration for enterprises. Those capabilities include presence information, enterprise instant messaging, web conferencing, community collaboration, and telephony capabilities and integration. It is currently developed and sold by HCL Software, a division of the Indian company HCL Technologies; until 2019 it was developed by the Lotus Software division of IBM. Because HCL Sametime is middleware, it supports enterprise software and business process integration (Communication Enabled Business Process), either through an HCL Sametime plugin or by surfacing HCL Sametime capabilities through third-party applications. HCL Sametime integrates with a wide variety of software, including Lotus collaboration products, Microsoft Office productivity software, and portal and Web applications. Features HCL Sametime Premium Features: HCL Sametime Premium (v12.0.1) HCL Sametime chat-only support on Kubernetes MongoDB v6 support Podman support Safari browser support Meeting duration timer Network indicator added to meetings Improved Firefox browser experience Keyboard shortcuts Chat enhancements Chat interface improvements Web Chat contact list nickname support Sametime Client RTL (Bi-Di) language support Chat API (technical preview) updates Grafana dashboards for monitoring and statistics Push Proxy support Sametime Database Utility Outlook Calendar HCL Meetings Add-on updates HCL Sametime Premium (v12.0) Company branding Virtual backgrounds Meeting reports and recordings Click to Call File transfer Pinned and muted chats Microphone Background noise detection Meeting modes Member management Waiting room Mobile client policy improvements Video layout enhancements HCL Sametime Premium (v11.6 IF2) Apache Tomcat upgraded Open JDK updated APNS certificate renewed HCL Sametime Premium (v11.6) New modern look HCL Verse and iNotes enabled for Persistent Chat Click to meet feature New Web Chat client modern look and features New Mobile clients on iOS and Android Persistent chat and multi-device support New features for administrators 64-bit Community Server Simplified Proxy Server install Stand-alone Sametime Community Mux install Support for APNS HTTP/2 HCL Sametime Premium (v11.5) Instant meetings & persistent chat Personal meeting rooms Multiple screen-share per meeting Moderator Controls Video meeting options Desktop App, Web & Mobile Meeting Recording Calendar integrations Livestreaming capability Secure data Flexible deployment (Cloud, on-premises or hybrid) Admin policies at the user, group and server level Inbound/outbound telephony support Features - Previous Versions through 11.0: HCL Sametime is a client–server enterprise application that includes the HCL Sametime Connect client for end-users and the HCL Sametime Server for control and administration. HCL Sametime (pre v11.5) comes in four levels of functionality: HCL Sametime Limited Use (Old name HCL Sametime Entry) provides basic presence and instant messaging. HCL Sametime Standard provides additional functionality to HCL Sametime Entry, including: rich presence including location awareness rich-media chat, including point-to-point Voice-over-IP (VoIP) and video chat, timestamps, emoticons, and chat histories group and multi-way chat web conferencing contact business cards interoperability with public IM networks via the HCL Sametime Gateway, including AOL Instant Messenger, Yahoo!
Messenger, Google Talk and XMPP-based services. open APIs that allow integrations between HCL's own and other applications Sametime Audio/Video Services supports audio codecs (e.g. G.722.1) and video codecs (e.g. H.264) HCL Sametime Advanced provides additional real-time community collaboration and social networking functionality to HCL Sametime Standard, including: persistent chat rooms instant screen sharing geographic location services HCL Sametime Unified Telephony provides additional telephony functionality to HCL Sametime Standard or HCL Sametime Advanced, including: telephony presence softphone click-to-call and click-to-conference incoming call management call control with live call transfer connectivity to, and integration of, multiple telephone systems - both IP private branch exchange (IP-PBX) and legacy time-division multiplexing (TDM) systems HCL Sametime Gateway provides server-to-server interoperability between disparate communities with conversion services for different protocols, presence information awareness, and instant messaging. HCL Sametime Gateway connects HCL Sametime corporate instant messaging communities with external communities, including external HCL Sametime communities and public instant messaging communities, such as: AOL, AIM, ICQ, Yahoo, Google Talk, and XMPP. HCL Sametime Gateway replaces the Sametime Session Initiation Protocol (SIP) Gateway from earlier releases of HCL Sametime. The HCL Sametime Gateway platform is based on IBM WebSphere Application Server, which provides failover, clustering, and scalability for the HCL Sametime Gateway deployment. The product is shipped with the following connectors: Virtual Places, SIP, and XMPP. More protocol connectors may be added. Platform support, APIs and application integration Because HCL Sametime is middleware, it supports application and business process integration. Within the context of real-time communications, this is often referred to as Communications Enabled Business Processes. Sametime integrates in either of two ways: by surfacing the application into an HCL Sametime plug-in by surfacing HCL Sametime capabilities into the target application Some examples of integration between HCL Sametime and applications include: HCL's products including HCL Notes, HCL Domino applications, HCL Connections, HCL Quickr Microsoft office-productivity software including Microsoft Office, Microsoft Outlook, and Microsoft Sharepoint portal applications, including portals built with IBM WebSphere Portal web applications packaged enterprise applications embedded and client–server telephony applications HCL Sametime Connect, the client component of HCL Sametime, is built on the Eclipse platform, allowing developers familiar with the framework to easily write plug-ins for HCL Sametime. It uses a proprietary protocol named Virtual Places, but also offers support for standard protocols, including Session Initiation Protocol (SIP), SIMPLE, T.120, XMPP, and H.323. HCL Sametime Connect can run under Microsoft Windows, Linux, and macOS. Also available are a zero-download web client for Microsoft Internet Explorer, Mozilla Firefox and Apple Safari; mobile clients are also supported for Apple iPhone, Android, Microsoft Windows Mobile, RIM Blackberry, and Symbian. The HCL Sametime server runs on Microsoft Windows, IBM AIX, IBM i (formerly i5/OS), Linux and Solaris. Sametime can also be accessed using the free software Adium, Gaim, Pidgin, and Kopete clients.
History HCL Sametime became an IBM product in 1998 as the synthesis of technologies IBM acquired from two companies: an American company called Databeam provided the architecture to host T.120 dataconferencing (for web messaging) and H.323 multimedia conferencing; and Ubique, an Israeli company whose Virtual Places Chat software technology (also known as VPBuddy) provided the "presence awareness" functionality that allows people to detect which of their contacts are online and available for messaging or conferencing. The Sametime v3.1 client was part of the standard platform loaded by the IBM Standard Software Installer (ISSI) for many years, enabling communications over the corporate intranet by hundreds of thousands of IBM employees. The next major release was the Sametime v7.5 client, built on the Eclipse platform, enabling the use of the plug-in framework. In 2008 Gartner positioned IBM for the first time as a "leader" in Gartner's Unified Communications Magic Quadrant. References External links Apple Store Google Play Web conferencing Teleconferencing Sametime Windows Internet software Internet software for Linux Instant messaging server software Symbian instant messaging clients Videotelephony
HCL Sametime
Technology
1,671
55,131,637
https://en.wikipedia.org/wiki/European%20Federation%20of%20Clinical%20Chemistry%20and%20Laboratory%20Medicine
The European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) (formerly EFCC) is a federation of national member societies of clinical chemistry and laboratory medicine from Europe. EFLM has its registered office in Brussels and its administrative office in Milan. EFLM is the European Region member of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). History EFLM was created from the merger of two precursor organisations, the Federation of European Societies of Clinical Chemistry (FESCC), a European representative of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), and the European Community Confederation of Clinical Chemistry (EC4), at the EuroMedlab meeting in Amsterdam in 2007. Both precursor organizations arose in the 1970s. The increasing overlap between the European Union, represented by EC4, and geographical Europe, represented by FESCC, meant that a merger was appropriate. Operations EFLM has an Executive Board and a range of committees for science, education and training, quality and regulations, communication, and professional representation. Each committee has working groups with a Chair and three full members; there is also a Young Scientist member, and the groups may also have corresponding members, but only one member from each country is permitted. The Science Committee develops collaborative science in laboratory medicine between member organisations or individuals, and develops guidelines to set standards of practice that assist member societies in providing quality patient care. The output of the scientific working groups is scientific papers and presentations which contribute to the science of laboratory medicine internationally; a list of publications can be found on the EFLM web-site (link below). Typically publications are peer-reviewed and published in the journal Clinical Chemistry and Laboratory Medicine. The Working Groups are overseen by the Chair of the Science Committee and their activities are reviewed annually; scientific and clinically relevant output determines whether they continue. The Education and Training Committee runs educational activities, particularly for trainees and those who need to develop new skills, as well as running scientific and clinical conferences, webinars, etc. The major congress is EuroMedlab, held in conjunction with the IFCC; meetings are selected from bids by member societies at the annual General Meeting. These are a major forum for scientific and medical knowledge exchange and are allied with a large exhibition of in vitro diagnostic industry equipment. The Quality and Regulations Committee is particularly focussed on contributing to revisions of international standards such as ISO 15189 and ISO 22870, which govern standards in medical laboratories; it also contributes to European Regulations and Directives such as the IVD (In Vitro Diagnostics) Regulations. These regulations and standards, which are under regular international review, ensure that quality results are delivered for patient care, a key issue in the diagnosis and monitoring of health and disease. The Communications Committee provides information on EFLM activities to members and promotes awareness of EFLM. The Profession Committee is responsible for the Register of Specialists in Laboratory Medicine and for ensuring that applicants meet the standards set for eligibility. The need for comparable attainments of qualification, education and experience is a patient safety issue, as patients can freely move across borders. All offices are filled by election.
Member societies have national representatives who vote on behalf of their society. The only automatic office is that of President, who will previously have been elected as Vice President. Working group members are chosen from member society nominations. The journal Clinical Chemistry and Laboratory Medicine, published by De Gruyter, is the official EFLM journal. To date two strategic conferences have been held to advance the profession in Europe: the first, in Milan, Italy in 2015, addressed the issues of methodological harmonisation and traceability; the second, in Mannheim, Germany, considered the impact of the digital revolution and its transformation of service delivery for patients. References External links Official website Chemistry societies Clinical pathology International medical associations of Europe
European Federation of Clinical Chemistry and Laboratory Medicine
Chemistry
744
48,634,023
https://en.wikipedia.org/wiki/Susan%20G.%20Finley
Susan G. Finley, a native Californian, has been an employee of NASA's Jet Propulsion Laboratory (JPL) since January 1958, making her the longest-serving woman in NASA. Two days before Explorer 1 was launched, Finley began her career with the laboratory as a human computer, calculating rocket launch trajectories by hand. She now serves as a subsystem engineer for NASA's Deep Space Network (DSN). At JPL, she has participated in the exploration of the Moon, the Sun, all the planets, and other bodies in the Solar System. Life and education Education In 1955, Susan Finley began studying art and architecture at Scripps College in Claremont, California, with the intention of becoming an architect. She was talented in mathematics and computing courses; she attempted to learn art, but later realized that engineering was in her future. After three years at Scripps College, Finley claimed she "couldn't learn art." During her college experience, she majored in the humanities, which allowed her to be successful as a subsystem engineer. At the age of 21, she left Scripps College to become an engineer with a thermodynamics group at Convair in Pomona, California. Family life Early in her career, Finley made sacrifices for her family. Susan Finley married Pete Finley, and the couple had two sons. She left JPL twice in the first few years of employment in order to support her husband's education, and also took maternity leave for her two sons, returning permanently to JPL in 1969. Susan Finley divorced her husband Pete in the 1970s. According to Susan Finley, balancing her work and family lives was difficult because of the "lack of good child care options," although she believes that women still face these struggles today. One of her goals was to keep her work and home life separate, aiming to never bring her work home with her or "working late without making up that time at home." She cooked all the meals for her family, but did not spend much time on housework. Her husband, on the other hand, worked on the cars and the yard, as he was of the "generation that did not help with the house or children". Career Finley dropped out of Scripps College after three years of studying and applied for a filing clerk position at Convair in Pomona, California. Convair was an aircraft and rocket manufacturing company that no longer exists. Finley took a required typing test to compare her abilities with those of other candidates. After she completed the test, Convair informed her that the job had been filled. Instead, the company asked Finley her thoughts on working with numbers. Finley explained that her passion for numbers was greater than her passion for letters. She began her work as a human computer, hand-calculating rocket launch trajectories. In 1957, after her marriage, Finley moved to San Gabriel, California. She expressed her concerns to her husband about the commute to and from work each day. Finley worked for a year with Convair until she pursued a new opportunity that her husband suggested. In 1958, Susan Finley took a position at the Jet Propulsion Laboratory (JPL) in Pasadena, California, as a computer. NASA, the National Aeronautics and Space Administration, had officially formed in July 1958, as a response to the National Aeronautics and Space Act. This act was put into action shortly after the Soviet Union launched its satellite, Sputnik.
Prior to the National Aeronautics and Space Act of 1958, space exploration was deemed to be the responsibility of the military. Shortly thereafter, NASA assumed control of the Jet Propulsion Laboratory (JPL). In the 1950s, computers were mostly women who solved complex problems by hand. Many women did not have degrees. Rather, like Finley, they were good with numbers. Being a computer required Finley to perform "trajectory computations for rocket launches by hand". Two days after Finley was hired, the Jet Propulsion Laboratory launched the United States' first-ever satellite: Explorer 1. Finley's most-remembered contribution to the Jet Propulsion Laboratory (JPL) came in 1958. JPL launched Pioneer 3 on December 6, 1958. Its mission was to fly past the Moon and enter a solar orbit, but Pioneer 3 failed its mission. A digital computer failed to correctly calculate the probe's velocity data. Following the failure, the Jet Propulsion Laboratory asked Finley to calculate the velocity data of Pioneer 3, and she was successful in providing the sought-after information. On January 26, 1962, NASA launched Ranger 3. Ranger 3 was of great importance, as it was NASA's first attempt to hard-land a spacecraft on the surface of the Moon. Much to their disappointment, Ranger 3 missed the Moon by 22,000 miles, due to multiple malfunctions within the spacecraft's guidance system. It was a calculation Finley made that demonstrated to NASA that Ranger 3 had missed the Moon by such a large margin. Susan Finley took a leave of absence from the Jet Propulsion Laboratory (JPL) to allow her husband to begin his graduate degree at the University of California, Riverside. Between jobs, Susan Finley took a short course in FORTRAN in Riverside. FORTRAN is a computer programming language developed by IBM in the 1950s for scientific applications. Male engineers largely didn't want to do the programming themselves in the 1960s. The advent of electronic computers slowly changed what the all-female computations group did. It was still considered "women's work," not part of an engineer's job description. Finley returned to the Jet Propulsion Laboratory (JPL) in 1962 once her husband finished his master's degree. A year later, Finley left the Jet Propulsion Laboratory again to take care of her two sons. In 1969, Finley returned to the Jet Propulsion Laboratory, and would not leave again. This marked Susan Finley's transition from a human computer to a human programmer. Throughout her career, Finley provided both manual computation work and FORTRAN programs as part of JPL's missions to the Moon, Mars, Venus, Mercury, Jupiter, Saturn, Uranus, and Neptune, in the Ranger, Mariner, Pioneer, Viking, and Voyager programs. By the 1970s, teams of female programmers were integrated with the male engineers of the same mission. Prior to the 1970s, they were kept apart. Susan Finley went on record as saying "the men always, from the very beginning, treated us as equals. We were doing something they couldn't do, and that they needed to go forward with what they were doing." Later, in the 1980s, Finley switched to software testing and subsystem engineering for the NASA Deep Space Network (DSN). A systems engineer in the context of NASA "encourages the use of tools and methods to better comprehend and manage complexity in systems". The DSN is used to track and communicate with every deep space probe sponsored by NASA, as well as non-US space missions.
They would communicate by sending commands to the probes, transmitting software updates, and gathering data. The research group tracked the Russian spacecraft Vega, which carried a French balloon to Venus on its journey to Halley's Comet. Although working with the Russians was difficult during this Cold War time period, her team was able to collaborate well with the French, and they successfully delivered tracking data for the French balloon on its route toward the comet. Finley's contribution was a program that automated the movements and translations of the platform's antenna. More specifically, the antenna had to be aligned with the spacecraft; otherwise, no data would be received. Finley considers this project the most memorable of her many years at NASA. In the 1990s and 2000s, Finley contributed to JPL's further explorations of the Solar System. Finley worked with the Mars Exploration Rover missions and developed technology in which musical tones were sent at differing phases of descent through the Martian atmosphere and were transmitted back to the DSN. The program had the rover send musical tones back to the command center at each stage of the craft's descent. The engineers were then able to use this information to determine which landing stage the rocket was in at a given time. This process was utilized in 1997 for Pathfinder. However, the program was left out of the Climate Orbiter and Polar Lander Mars missions. Both the Climate Orbiter and the Polar Lander were lost in 1999. NASA attempted to find the causes of each of the failures. Fortunately for NASA, Finley's tones assisted in finding the issues with each of the two failed missions. The engineers used Finley's tones to follow the whereabouts of the platforms in their last minutes. Finley was stationed at the Goldstone and Tidbinbilla stations while the landings were taking place and was the first to hear the tones that confirmed the landers survived their trip to Mars. Unfortunately, her work went unrecognized in the media, because they reported from JPL's mission control only. In 2004, Finley's tones returned to the Martian landing process for different components. Susan Finley explained that all of the Mars missions that carried the tones were a success. It was not until the Mars Polar Lander (MPL) mission failed that mission designers recognized the value of the tones Finley was responsible for. In 2008, the Jet Propulsion Laboratory (JPL) reviewed its job listings and its employees' pay. Finley was demoted from being a salaried engineer to being paid by the hour; she was now an hourly engineering specialist. This was due to her lack of a bachelor's degree. Although Finley's overall pay did not change, she remained eligible for overtime. The only downside was that she now had to clock in and clock out. She continues to work full-time for JPL and is involved in DSN support for NASA's recent unmanned missions, including the Pluto flyby by the New Horizons spacecraft and the Juno mission to Jupiter. Susan Finley currently has no plans to retire. Awards and honors 2013 - NASA Group Achievement Award, NASA (nine certificates awarded to Susan Finley) 2018 - NASA Exceptional Public Service Medal Over the course of her career, Finley has won several NASA Group Achievement Awards. This certificate is "awarded to any combination of government and/or non-government individuals for an outstanding group accomplishment that has contributed substantially to NASA's mission".
In 2018, Finley was awarded a NASA Exceptional Public Service Medal: "This prestigious NASA medal is awarded to any non-Government individual or to an individual who was not a Government employee during the period in which the service was performed for sustained performance that embodies multiple contributions on NASA projects, programs, or initiatives." Her years of dedication and service to the National Aeronautics and Space Administration (NASA) have made her the longest-serving woman in the space agency. JPL is technically a division of Caltech, so JPL employees don't qualify for governmental individual awards. Publications 2004 "Tracking Capability for Entry, Descent and Landing and its support to NASA Mars Exploration Rovers," ResearchGate 2009 "Receiver filters and records IF analog signals," National Aeronautics and Space Administration 2012 "Spacecraft-to-earth communications for Juno and Mars Science Laboratory critical events," IEEE Xplore 2013 "Improved spacecraft tracking and navigation using a Portable Radio Science Receiver," IEEE Xplore 2013 "Sleuthing the MSL EDL performance from an X band carrier perspective," IEEE Xplore 2014 "Design and implementation of a Deep Space Communications Complex downlink array," IEEE Xplore 2016 "A comparison of atmospheric effects on differential phase for a two-element antenna array and nearby site test interferometer", IEEE Xplore See also Women in science Association for Women in Science List of organizations for women in science List of female scientists before the 20th century External links References Human computers NASA people Jet Propulsion Laboratory faculty Year of birth missing (living people) Living people Scripps College alumni Women systems engineers American women scientists 21st-century American women engineers 20th-century American women engineers 21st-century American engineers 20th-century American engineers
Susan G. Finley
Technology
2,460
31,339,697
https://en.wikipedia.org/wiki/Shmashana
A shmashana is a Hindu cremation ground, where dead bodies are brought to be burnt on a pyre. It is usually located near a river or body of water on the outskirts of a village or town; as shmashanas are usually located near river ghats, they are also regionally called smashan ghats. Etymology The word originates from Sanskrit: shma refers to shava ("corpse"), while shana refers to shanya ("bed"). Other Indian religions, such as Sikhism, Jainism and Buddhism, also use the shmashana for the last rites of the dead. Hinduism According to the Hindu rites of Nepal and India, the dead body is brought to the shmashana for the ritual of antyesti (last rites). At the cremation ground, the chief mourner has to obtain, for a fee, the sacred fire from one who resides by the shmashana, and light the funeral pyre (chita). Various Hindu scriptures also give details of how to select the site of a shmashana: it should be on the northern side of the village with land sloping towards the south, it should be near a river or a source of water, and it should not be visible from a distance. Dead bodies are traditionally cremated on a funeral pyre usually made of wood. Nowadays, however, many cities in India have electric or gas-based furnaces in indoor crematoria. Jainism The Jains also cremate the dead as soon as possible, to avoid the growth of micro-organisms. Ghee, camphor and sandalwood powder are sprinkled all over the body, and the eldest son of the deceased performs the last rituals, lighting the pyre in the shmashana while chanting the Namokar Mantra. After cremation, milk is sprinkled on the spot. The ashes are collected but, unlike Hindus, Jains do not immerse them in water; instead, they dig a pit, bury the ashes in it, and sprinkle salt over them. Early Buddhism In the Pali Canon discourses, Gautama Buddha frequently instructs his disciples to seek out a secluded dwelling (in a forest, under the shade of a tree, mountain, glen, hillside cave, charnel ground, jungle grove, in the open, or on a heap of straw). The Vinaya and Sutrayana traditions of the "Nine Cemetery Contemplations" (Pali: nava sīvathikā-manasikāra) described in the Satipatthana Sutta demonstrate that charnel ground and cemetery meditations were part of the ascetic practices in Early Buddhism. These "cemetery contemplations" are described in the Mahasatipatthana Sutta (DN 22) and the Satipaṭṭhāna Sutta (MN 10). Spiritual role A shmashana, also known as a cremation ground or burial ground, holds cultural, religious, and ritualistic significance in various Eastern spiritual traditions, including Hinduism and certain Tibetan Buddhist practices. The shmashana is said to be the abode of ghosts, evil spirits, fierce deities, and tantriks. People therefore generally prefer to avoid going near a shmashana at night. Women generally do not go to the shmashana; only males go there to perform the last rites. Followers of the modes of worship called Vamamarga, such as the Aghori, Kapalika, Kashmiri Shaivism and Kaula of the now scarce Indian Tantric traditions, perform sadhana (for example shava sadhana) and rituals at a shmashana to worship Kali, Tara, Bhairav, Bhairavi, Dakini, Vetal, etc., and to invoke occult powers within themselves. The shmashana is also used for similar purposes by followers of the Tibetan Buddhist traditions of Vajrayana and Dzogchen, for the sadhana of Chöd, Phowa, Zhitro, etc. The deity called Shmashana Adhipati is usually considered to be lord of the shmashana.
See also Shmashana Adhipati Charnel ground References Buddhism and death Burial monuments and structures Cremation Death and Hinduism Indian words and phrases Pali words and phrases
Shmashana
Chemistry
870
43,209,944
https://en.wikipedia.org/wiki/HD%20191760
HD 191760 is a star in the southern constellation of Telescopium. It has a yellow hue but is too dim to be visible to the naked eye, with an apparent visual magnitude of 8.26. The star is located at a distance of approximately 290 light-years from the Sun based on parallax, but is drifting closer with a radial velocity of −30 km/s. The stellar classification of G3IV/V is consistent with a star that is evolving onto the subgiant branch, having exhausted the supply of hydrogen at its core. It is roughly four billion years old with a modest projected rotational velocity of 2.3 km/s. The star is 28% more massive than the Sun and 62% larger. The metallicity, or abundance of heavier elements, is higher than in the Sun. The star is radiating 2.7 times the luminosity of the Sun from its photosphere at an effective temperature of 5,794 K. Companion Using the ESO HARPS instrument, in 2009 HD 191760 was found to have a brown dwarf at least 38 times as massive as Jupiter orbiting it with a period of 506 days. Such a companion lies in the sparsely populated regime that has been termed the 'brown dwarf desert'. The upper limit on the mass of this object is 28% of the mass of the Sun. In 2023, the true mass of this object was determined using Gaia astrometry. At about 106 times the mass of Jupiter, this is a low-mass star (presumably a red dwarf) and not a brown dwarf. References G-type subgiants Brown dwarfs Binary stars Telescopium CD-46 13445 191760 099661
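As a quick consistency sketch (not from the source), the quoted luminosity and effective temperature imply the quoted radius via the Stefan–Boltzmann relation, taking the solar effective temperature as roughly 5772 K:

$$\frac{R}{R_\odot} \;=\; \sqrt{\frac{L}{L_\odot}}\left(\frac{T_\odot}{T_\mathrm{eff}}\right)^{2} \;=\; \sqrt{2.7}\left(\frac{5772}{5794}\right)^{2} \;\approx\; 1.6,$$

i.e., a radius roughly 60% larger than the Sun's, in line with the figure above.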
HD 191760
Astronomy
357
57,234,248
https://en.wikipedia.org/wiki/HAT-P-50b
HAT-P-50b is an exoplanet orbiting the star HAT-P-50, located in the constellation Gemini. It was discovered in 2015. References Exoplanets discovered in 2015 Transiting exoplanets Gemini (constellation) Exoplanets discovered by HATNet
HAT-P-50b
Astronomy
59
70,734,221
https://en.wikipedia.org/wiki/Institut%20f%C3%BCr%20Kunststoffverarbeitung
The Institut für Kunststoffverarbeitung in Industrie und Handwerk (IKV), the Institute for Plastics Processing in Industry and Trade at the Rheinisch-Westfälische Technische Hochschule Aachen, Germany, is a teaching and research institute for the study of plastics technology. It stands for practice-oriented research, innovation and technology transfer. The focus of the IKV is the integrative view of product development across the materials, design and processing sectors, particularly in plastics and rubber. The sponsor is a non-profit association that currently includes around 300 companies from the plastics industry worldwide (as of December 2018) and through which the institute maintains a close connection between industry and science. In addition, the IKV is a member of the German Federation of Industrial Research Associations (AiF). The institute was founded in 1950 and, with around 350 employees, has become Europe's largest research and training institute in the field of plastics technology. The institute's first head was succeeded in 1959 by Alfred Hermann Henning. From 1965 to 1988 Georg Menges headed the institute, followed by Walter Michaeli until his retirement in 2011. Since 2011, the head of the institute, and at the same time managing director of the association, has been Christian Hopmann. He also holds the Chair of Plastics Processing within the Faculty of Mechanical Engineering at RWTH Aachen University. Tasks The tasks of the institute are: scientific and practice-oriented research in the field of plastics technology; the training of students to become qualified junior staff for the plastics industry; and the training of practitioners in the craft sector in the field of plastics technology. Structure The scientific departments (injection molding/PUR technology; extrusion and further processing; molded part design/materials engineering; and fibre-reinforced plastics) are the operative units of the institute. The Center for Plastics Analysis and Testing (KAP) at the IKV supports and advises the scientific departments and offers its services to industry for solving problems. The training and education department is responsible nationwide for technology transfer to the skilled trades sector. Since 1960, the institute has cooperated with the commercial development agency (GFA) in the training center of the Aachen Chamber of Crafts (HWK), which served and was certified as a training center for plastics technology for the IKV as well as for the German Welding Society (DVS) and the German Technical and Scientific Association for Gas and Water (DVGW). On the initiative of the then HWK president, this was transferred in 1983 to the BGE training center in Aachen's Tempelhofer Straße. Currently, about 130 employees, including some 80 scientists, work at the IKV in research, development and training. They are supported by about 220 student assistants. In addition to the tasks mentioned above, one of the goals of the IKV is to provide industry with solutions to practical problems. Individual projects, as well as joint research projects, often lead to high-quality product ideas and developments which, in the sense of the desired technology transfer, benefit not only larger companies but above all small and medium-sized enterprises. References External links Website of the Institute for Plastics Processing Research institutes in Germany Research institutes established in 1950 Plastics industry Plastics applications Materials science Engineering disciplines
Institut für Kunststoffverarbeitung
Physics,Materials_science,Engineering
637
4,107,511
https://en.wikipedia.org/wiki/Gum%20Nebula
The Gum Nebula (Gum 12) is an emission nebula that extends across 36° in the southern constellations Vela and Puppis. It lies approximately 450 parsecs from the Earth. Hard to distinguish, it was widely believed to be the greatly expanded (and still expanding) remains of a supernova that took place about a million years ago. More recent research suggests it may instead be an evolved H II region. It contains the 11,000-year-old Vela Supernova Remnant, along with the Vela Pulsar. The Gum Nebula contains about 32 cometary globules. These dense cloud cores are subject to such strong radiation from the O-type stars γ2 Vel and ζ Pup, and formerly from the progenitor of the Vela Supernova Remnant, that the cloud cores evaporate away from the hot stars into comet-like shapes. Like ordinary Bok globules, cometary globules are believed to be associated with star formation. A notable object inside one of these cometary globules is the Herbig-Haro object HH 46/47. The nebula is named after its discoverer, the Australian astronomer Colin Stanley Gum (1924–1960). Gum published the discovery of the Gum Nebula in 1952 in the journal The Observatory, and presented his findings in 1955 in a work called A study of diffuse southern H-alpha nebulae (see Gum catalog). The observations were made at the Commonwealth Observatory. The Gum Nebula was photographed during Apollo 16, while the command module was in the double umbra of the Sun and Earth, using high-speed Kodak film. Popular culture The Gum Nebula is explored by the crew of the Starship Titan in the Star Trek novel Orion's Hounds. Gallery See also CG 4 Barnard's Loop References External links APOD: Gum Nebula, with mouse over (2009.08.22) Galaxy Map: Entry for Gum 12 in the Gum Catalog Galaxy Map: Detail chart for the Gould Belt (showing the location of Gum 12 relative to the Sun) Encyclopedia of Science: Entry for the Gum Nebula (erroneously called Gum 56) SouthernSkyPhoto.com Emission nebulae Puppis Vela (constellation)
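As a rough sense of scale (a back-of-the-envelope conversion, not taken from the source), the quoted distance and angular extent together imply

$$450\ \mathrm{pc} \approx 450 \times 3.26\ \mathrm{ly} \approx 1{,}470\ \mathrm{ly}, \qquad 2 \times 450\ \mathrm{pc} \times \tan(18^\circ) \approx 290\ \mathrm{pc},$$

i.e., the nebula spans several hundred parsecs on the sky.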
Gum Nebula
Astronomy
449
56,785,760
https://en.wikipedia.org/wiki/Faustine%20Fotso
The Honorable Faustine Villanneau Chebou Kamdem Fotso (born 12 June 1965) is a computer scientist, environmentalist, and lawyer from Cameroon. Career In 2012, Fotso was First Deputy Mayor of Baham, a town in western Cameroon. In 2013, she was elected as a Member of Parliament in the National Assembly, representing the highlands of the Western Region. She sits on the Constitutional Laws Committee and belongs to the Cameroon People's Democratic Movement. Academic works In 2009, Fotso wrote the publication "Environmental Impact Study in French and Cameroonian Law" as part of the master's program in International and Environmental Law at the University of Limoges. Charitable works Fotso is the founder of the charitable association "Flame of Love, of Peace and Justice", which held its inaugural meeting on September 20, 2016. Awards On May 20, 2016, Fotso was awarded the civilian medal with the rank of Officer of the Order of Valour at the 44th National Day of Unity in Baham. Personal life Fotso is married to Lucas Fotso, regional director for Littoral of the Cameroonian electric company AES Sonel. Together they have five children. References 1965 births Living people Cameroon People's Democratic Movement politicians Cameroonian environmentalists Cameroonian lawyers Cameroonian women lawyers Members of the National Assembly (Cameroon) Computer scientists 21st-century women lawyers 21st-century Cameroonian women politicians 21st-century Cameroonian politicians
Faustine Fotso
Technology
294
43,785,137
https://en.wikipedia.org/wiki/EBeam
eBeam was an interactive whiteboard system developed by Luidia, Inc. that transformed any standard whiteboard or other surface into an interactive display and writing surface. Luidia's eBeam hardware and software products allowed text, images, and video to be projected onto a variety of surfaces, where an interactive stylus or marker could be used to add notes, access control menus, manipulate images, and create diagrams and drawings. The presentations, notes, and images could be saved and emailed to class or meeting participants, as well as shared in real-time either on local networks or over the Internet. History An eBeam demo was given at the Apple Expo 2002 in Paris, France. The production of eBeam hardware was discontinued in 2020. As of June 2022, Luidia has ceased all operations. Technology Luidia's eBeam technology was originally developed and patented by engineers at Electronics for Imaging Inc. (Nasdaq: EFII), a Foster City, California, developer of digital print server technology. Luidia was spun off from EFI in July 2003 with venture funding from Globespan Capital Partners and Silicom Ventures. See also Office equipment Display technology Educational technology References External links Electronics for Imaging (EFI) Luidia Inc. - eBeam Luidia website Google Books results Review in PC Mag Review in InfoWorld VEngineers Co. Ltd (Mauritius) Office equipment Display technology
EBeam
Engineering
288
31,294,056
https://en.wikipedia.org/wiki/Richacls
Richacls is a Linux implementation of NFSv4 ACLs, which have been extended by file masks to more easily fit the POSIX draft file permission model. They currently offer the most expressive permission model available for the ext4 file system in Linux. They are more complex than POSIX draft ACLs, which means it is not possible to convert from richacls back to Linux's implementation of the POSIX draft ACLs without losing information. Among their most important advantages is that they distinguish between write and append permissions, and between delete and delete-child permissions, and that they make ACL management access discretionary (as opposed to being restricted to root and the file owner). They are also designed to support Windows interoperability. Richacls use ext4 extended file attributes (xattrs) to store ACLs. References Computer access control
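A minimal sketch of the file-mask idea (an illustrative Python model only; the type names, simplified evaluation order, and permission-bit values are assumptions for exposition, not the kernel's richacl API):

from dataclasses import dataclass

# A few of the fine-grained permission bits richacls distinguish
WRITE_DATA   = 1 << 0   # modify existing file contents
APPEND_DATA  = 1 << 1   # append only (separate from write)
DELETE       = 1 << 2   # delete this object itself
DELETE_CHILD = 1 << 3   # delete entries inside a directory

@dataclass
class Ace:
    who: str      # e.g. "alice", "owner@", "everyone@"
    mask: int     # permission bits this entry covers
    allow: bool   # True = allow entry, False = deny entry

def permitted(aces, file_mask, who, wanted):
    """Walk ACEs in order (NFSv4 style); the file mask then caps
    whatever the matching allow entries would grant."""
    granted = 0
    for ace in aces:
        if ace.who not in (who, "everyone@"):
            continue
        if not ace.allow:
            if ace.mask & wanted:
                return False        # an earlier deny wins
            continue
        granted |= ace.mask
    granted &= file_mask            # richacl-style mask cut-down
    return (wanted & ~granted) == 0

acl = [Ace("alice", APPEND_DATA, True)]
print(permitted(acl, APPEND_DATA | WRITE_DATA, "alice", APPEND_DATA))  # True
print(permitted(acl, APPEND_DATA | WRITE_DATA, "alice", WRITE_DATA))   # False

The point of the final mask step is that whatever the access-control entries would grant can still be cut down by a chmod-style file mask, which is how richacls reconcile NFSv4 semantics with the POSIX permission bits.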
Richacls
Engineering
182
3,283,117
https://en.wikipedia.org/wiki/Fuddling%20cup
A fuddling cup is a three-dimensional puzzle in the form of a drinking vessel, made of three or more cups or jugs with interconnecting bodies, all linked together by holes and tubes, so that liquor poured into one cup drains away and reappears in another. The name "fuddling" carries two meanings: to confuse and to intoxicate. Fuddling cups were especially popular in 17th- and 18th-century England. In Archaeologia Cambrensis, a custom is described in which a cup was placed on the head of the village belle; the challenge of the puzzle was to drink from the vessel in such a way that no beverage spilled while the cup rested on the girl's head. To do this successfully, one must drink from the cups in a specific order: for the cups to be drained, the correct starting point has to be found. Juliet Fleming noted, in Graffiti and the Writing Arts of Early Modern England, that a fuddling cup was a "toy machine" meant for entertainment, not for a practical purpose. Many such cups had words inscribed on the side which showcased the fun nature of the game. It was one of the 'joke' drinking pots of the 17th and 18th centuries. Fuddling cups were made in graffito slipware at two potteries in Somerset from the mid-17th century to the late 18th, and in tin-glazed earthenware before that. See also Dribble glass Puzzle jug Pythagorean cup References External links Mechanical puzzles Drinkware
Fuddling cup
Mathematics
318
1,988,874
https://en.wikipedia.org/wiki/Kneeling
Kneeling is a basic human position where one or both knees touch the ground. According to Merriam-Webster, kneeling is defined as "to position the body so that one or both knees rest on the floor". Kneeling with only one knee, and not both, is called genuflection. Kneeling is a primate behavior used to convey deference by making the kneeling figure appear smaller than the other. Primates establish a dominance hierarchy (or "pecking order") which is important to the survival and behavior of the group. Chimpanzees, for example, have a complex social group that involves a dominant male and a corresponding female, to whom the other males and the juvenile chimps are submissive. Males who threaten the hierarchy are often severely injured or killed; in some instances, submissive behavior is necessary to ensure survival. Religion Humans have inherited the custom of submissive behavior, and kneeling has become prevalent in religious practices. It has been used as a form of prayer and a way to worship or revere deities and supernatural entities. Britannica defines the purpose of kneeling as placing the knees downwards towards "the realm of the underworld", while the purpose of raising one's hands in prayer is to reach upwards towards "the realm of the heavenly gods". Kneeling has taken on many different forms and styles as different cultures and institutions have adopted it. Judaism and Islam Kneeling is one way of praying in both Judaism and Islam; however, the more prevalent way to pray in Judaism is to stand up and perform the Amidah. Kneeling in Judaism is reserved for specific kneeling stones, which have become obsolete. Both faiths also perform a type of kneeling prostration that involves the entire body, including the head: Qidah in Hebrew and Sajdah in Arabic. This involves getting down on both knees and extending the hands on the ground until the forehead rests against the ground as well. Sometimes the disciples go so far as to lie down completely on the floor instead of just kneeling. In Islam, kneeling or prostrating (sujud) is usually performed on a dedicated prayer rug, which is treated with especial care. Though common in Islam, there is also a prayer rug in Judaism, used in conjunction with the holiday of Yom Kippur (Day of Atonement). This holiday is one of the few times in Judaism when it is customary to pray while kneeling on the rug. Christianity Kneeling and prostration have historically been signs of worship in Christianity. Passages in the Bible show that kneeling is preferred over other forms of prayer; it is mentioned in the New Testament that "whenever you pray, do not be like the hypocrites; for they love to stand and pray in the synagogues". The origin of this practice is within Sacred Scripture, which states: "Therefore God also highly exalted Him and gave Him the name that is above every name, so that at the name of Jesus every knee should bend, in heaven and on earth and under the earth" (NRSV). Some churches may use a kneeler in frequented areas in order to indicate where to kneel, as well as to provide some level of comfort during prayer. Marriage Proposals Kneeling is the position often associated with traditional, Western marriage proposals. This position typically involves the person proposing kneeling with one knee on the ground, a position sometimes referred to as genuflecting, while holding an engagement ring up to the person being proposed to.
Kneeling in a public space in front of an apparent significant other often suggests a forthcoming proposal, and kneeling is indeed typically the expected position when proposing publicly. While kneeling is considered a traditional style of proposing, there is little consensus on its historical origins, and it in fact appears to be a fairly modern custom. Connections have been made to kneeling in European feudal society, in which kneeling before a lord suggested servitude and surrender. Medieval imagery sometimes depicted a knight kneeling before a lady in an act of courtly love, suggesting that kneeling as a form of romantic expression mirrored the submission given to lords. Kneeling during a marriage proposal has been suggested as a similar form of submission. Christian wedding ceremonies In Christian wedding ceremonies, especially those of the Roman Catholic, Lutheran and Anglican denominations, it is customary for the couple to kneel before the altar following the Lord's Prayer. Catholic, Lutheran and Anglican wedding ceremonies are often conducted within a Mass, and as such the couple and wedding guests alike kneel at several points throughout the ceremony as the liturgy calls for it. Couples are sometimes given kneelers to rest on at the altar. During the ceremony, the couple may take part in the customary placing of the veil or the Lazo, a rope placed around the couple's shoulders. While the couple kneels with both knees on the ground, chosen sponsors place a veil over the head of the bride and the shoulders of the groom, or place the Lazo around them, in order to symbolize the binding of the marriage. Following the placement of the Lazo or veil, the couple remains kneeling as the priest conducts the Nuptial Blessing. Kneeling in sexual intercourse The kneeling position may be used in various ways in sexual intercourse. A member of a couple may take a kneeling position in front of their partner in order to perform oral sex. Other sexual positions involving kneeling include the position commonly referred to as "doggy style", in which one partner is crouched on all fours while the other kneels, usually with both knees on the ground, in order to penetrate the crouching partner from behind. There are also various riding positions, in which one partner kneels with both knees down above the other partner and is penetrated from below. BDSM BDSM, referring to bondage/discipline, dominance/submission and sadism/masochism, encompasses a complex variety of practices involving interpersonal relationships, typically of a sexual nature, centered on the creation of an unbalanced power dynamic. Consent is considered of the utmost importance within a BDSM relationship. Kneeling is commonplace in BDSM practices as a way to show or enforce submission to or by the dominant partner. The role of the dominant or the submissive is not exclusively gendered, though gender may influence the dynamics of a D/s relationship. Some men who serve as submissives to dominant women, for example, find comfort in the act of kneeling submissively for a woman. Kneeling in different cultural societies East Asia There are many forms of kneeling in East Asia, present in both daily life and ritual. This differs from Western culture and other religions in that these daily rituals are not necessarily tied to religion but rather to society and culture.
Japan There are two forms of kneeling or prostration in Japanese culture: Dogeza and Seiza. Dogeza is a traditional form of respectful bowing to acknowledge superiors. The practice has two steps: kneeling down onto the ground, then bending over to touch the ground with the head. It can also express apology, or an attempt to bless someone with one's good favor. It is mainly a formal and deeply emotional apology to someone of a higher rank in society. This older form of reverence has, however, largely fallen out of practice. Seiza is another Japanese kneeling position, referring to the traditional way of sitting down in Japan. This is a formal way of sitting, adopted in Japan after the Edo period. It has since become the traditional way of sitting within the household and at certain cultural events. Many culturally significant and traditional events in Japanese society, such as funerals or tea ceremonies, involve such sitting positions. This form of sitting is uncomfortable, however, for those who have not practiced it for a long time; therefore, people in Japan usually start practicing the posture at quite a young age. China In China, there is a form of prostration involving kneeling called the kowtow. In a kowtow, the participant kneels down and then bows to the ground so that their forehead touches the ground. This was a traditional way of showing respect in China. The literal translation of the Mandarin word is "knock head". The full process consisted of three kneelings and nine knockings of the head, nine being important since it was a number associated with the Emperor. Neither kowtowing nor kneeling in general is new to the Chinese, who sat kneeling for much of their history. Kneeling in Ancient China In ancient Chinese society, the kowtow, or kneeling-bowing, was a common way for students to express gratitude to their teachers. Before learning any skills or knowledge, students or apprentices had to kneel down and bow toward their teachers to show appreciation. The students first thank the teacher and then demonstrate their commitment to the apprenticeship. After the ritual, teachers express their willingness to teach and to impart knowledge as well as life wisdom. Besides students and teachers, the kowtow was common between children and parents as well: the younger generation also performs the kowtow toward their parents to show gratitude and appreciation. Although kneeling-bowing used to be seen as the highest expression of Confucianism in the master-apprentice and child-parent relationships, the behavior has caused controversy in the modern world. Genderization of kneeling in Greek ritual Kneeling can be a gendered behavior in Greek ritual. In classical Attic votive reliefs, almost all kneeling worshippers are female. Greek literature gives similar evidence: no male kneels in Greek tragedy, and in Greek comedy only slaves kneel. In that literature, people kneel when they are in a horrible situation, and kneeling is therefore related to supplication for change. In most cases, kneeling is considered a ritual act of last resort that usually takes place in front of a statue of a god. It is also seen as a sign of submission for those in victim positions. Thus, in Greek ritual, kneeling was considered appropriate only for females or slaves. Prehistoric Ecuador Kneeling behaviors occurred in prehistoric Ecuador as well.
Prehistoric skeletal samples from coastal Ecuador suggest habitual squatting and kneeling. Bone evidence from the metatarsophalangeal joints (between the metatarsal bones of the foot and the proximal phalanges) shows that the articular surface of the head of the first metatarsal bone (located behind the big toe) is extended onto the upper side. On the ventral side, the edge of the articular area is lifted away from the shaft (the midsection of the long bone) and is extended on the lateral side. All this bone evidence suggests a tendency toward prolonged hyperdorsiflexion, that is, an increased use of the muscles in the front part of the foot, which is associated with a habitual kneeling posture. In addition, the flattening of the ventral surface of the metatarsal implies pressure points related to kneeling. Health aspects of kneeling In East Asian cultures such as Chinese, Japanese, Korean and Vietnamese, postures with high flexion, including kneeling and squatting, are used more often in daily activities, while in North America people kneel or squat less frequently in daily activities, except for occupational, religious, or leisure practices. The favored style of these high-flexion postures also differs among ethnic groups: while Caucasians tend to flex the forefoot when kneeling or squatting, East Asians are more likely to keep the foot flat on the ground. In the two common styles of kneeling, the plantarflexed kneel and the dorsiflexed kneel, the lead leg may experience higher adduction and flexion moments, which are associated with increased knee joint loads. Emotional expression Grief People may kneel in grief, for their own losses or for others. In 1970, Willy Brandt, Chancellor of West Germany, visited Warsaw, Poland. After laying a wreath at the memorial to the 1943 Warsaw Ghetto Uprising, which had been brutally put down by the Nazi regime, he unexpectedly, and apparently spontaneously, fell to his knees for about half a minute, as a mark of humility and penance. The event is known in German as the Kniefall von Warschau (the Warsaw genuflection). Requests Kneeling is also used when making emotional requests, such as asking for forgiveness. Monarchs European knights since the Middle Ages Kneeling is viewed as a sign of submission when it is performed in a royal setting. One of the most common royal settings in which kneeling takes place is when a person is being knighted. When knighthood began in Europe in the Middle Ages, only men could become knights; when this was changed to include women, the knighted women were made dames. At the ritual portion of a religious knighthood ceremony in the Middle Ages, the man who would later be knighted knelt before a chapel altar with a sword placed on it. During the accolade, the man would kneel or bow before a knight, lord, or king to be dubbed with the flat side of a sword or with a hand. The gesture of kneeling before royalty to receive a knighthood is a way of proclaiming the person's dedication to serving and honoring their country or the Church. However, knighting ceremonies are not as rigid and demanding as they once were in the Middle Ages. There have been a few changes to the kneeling portion that have made the ceremony accessible to a more diverse range of people. Knighting ceremonies now usually take place at the investiture, a special day when those who have been awarded an honor from the Crown receive their award in person at a royal residence.
Those present to be knighted are no longer required to take part in all of the expansive sections of the knighthood ceremony from the Middle Ages. One of the sections that has been waived is the need for the individual to kneel at a chapel altar with a sword placed on it. The person is still required to kneel before the monarch during the accolade in order to be dubbed, but they do so on an investiture stool. While kneeling is seen as a sign of respect and humility in countries that have a monarch, it is not considered commonplace in countries where there is no monarch. Because of this cultural difference, kneeling is not required of individuals from these countries when they are knighted. For example, when General Herbert Norman Schwarzkopf Jr. received an honorary knighthood from Queen Elizabeth II in 1991, he was not expected to kneel to receive it because he was not a British subject. Other exceptions to kneeling before a monarch when being knighted are old age, physical inability, or health conditions. Interacting with royalty Kneeling is a sign of reverence and submission when done toward royalty upon meeting them. Properly acknowledging the Crown is a nerve-racking moment for some individuals who are meeting royalty for the first time or who come from a country without a monarch; they want to do what is appropriate. During the reign of Queen Elizabeth I, she valued the bend of an individual's knee to her, over a verbal commitment, as an act of loyalty. Those who complied, like one of her favorite courtiers Blanche Parry, were rewarded with proximity to the Crown and other political gifts. However, the British Royal Family and the Royal Household at Buckingham Palace no longer insist that individuals follow traditional codes of behavior when greeting a member of the Royal Family. They still accept people who wish to follow the traditional codes, but they understand if an individual is not comfortable with kneeling, among other gestures of submission. Sports Placing a single knee on the ground (taking a knee) may have different meanings in different sports and situations. In many sports, taking a knee is a sign of respect and solidarity when a player from either team, or an official, is injured such that they need or may need assistance to leave the playing area. In these cases, it is considered proper for all other players (but not the officials or any player assisting the injured person) to place one knee on the ground until the injured person is off the playing area. American football and Canadian football In American football and Canadian football, the quarterback kneel may be performed to quickly end a play and use up time on the clock at the cost of only minimal yardage. This is particularly useful when the offensive team is ahead by a few points and does not want to risk a fumble or other turnover. Also in American football and Canadian football, any player with the ball may take a knee to end a play or to indicate that they do not intend to advance further with the ball. See also Sitting in salah Kneeling chair Prayer mat References External links The Theology of Kneeling Catholic Encyclopedia entry on kneeling Knee Bowing Gestures of respect Human positions
Kneeling
Biology
3,403
40,410,456
https://en.wikipedia.org/wiki/Dothistroma%20septosporum
Dothistroma septosporum or Mycosphaerella pini is a fungus that causes the disease commonly known as red band needle blight. This fungal disease affects the needles of conifers, but is mainly found on pine. Over 60 species have been reported to be prone to infection, and Corsican pine (Pinus nigra ssp. laricio) is the most susceptible species in Great Britain. It was first recorded in Britain on Corsican pine in 1954, in a nursery in Dorset. The disease spread sporadically until 1966, after which there were no new reports up until the end of the 1990s. Between 1997 and 2005, the majority of reports were on Corsican pine in East Anglia, although it had been found in other parts of Britain. The precise origins of the disease are unknown: there are suggestions that it came from the pine forests of Nepal, in the Himalayas, while others place its origin in the high-altitude rain forests of South America. The general opinion is that the disease has been prevalent in the Southern Hemisphere for some length of time, and that there are now high levels of infection in the Northern Hemisphere, with unprecedented records of the disease in Asia, Europe, and the UK. Symptoms The symptoms give the disease its name. The first signs of infection that can be seen are yellow and brown spots that develop on the living needles, which soon turn red. The infection starts at the base of the crown on older needles, which turn a brownish red at the tip while the rest of the needle remains green. This can be seen clearly between the months of June and July, after which the needles begin to 'turn up', much like a lion's tail. The infection is then passed on to the following year's growth, and this continues year after year. This ongoing spread of infection weakens the tree over time, with larger percentages of crown infection leading to lower yields of timber and, in some cases, to the mortality of the tree. Life cycle Spread initially in moist conditions, the pathogen requires physical transport, either through mist and rain or by direct contact with other infected needles. Once the needles have been exposed and the fungus germinates, the pathogen penetrates the needle through the stomata. The ideal germination temperature is 12–18 °C, with high levels of humidity. The needles then begin to show signs of infection, and eventually the pathogen produces stromata, its fruiting bodies. These are formed in the spring and early summer, usually coinciding with above-average levels of rainfall. From these, the blight is then passed on to the following year's growth. The stromata can be seen as a clear or white mass exuding from red spots on the needle. Reproduction Dothistroma septosporum is able to reproduce asexually (in the anamorphic stage) as well as sexually (in the teleomorphic stage), but the teleomorphic stage is uncommonly found. Sexual reproduction of the disease holds a greater danger, as the division of cells that comes with meiosis allows far greater genetic variation, increasing the pathogen's ability to adapt to local climates and to resist various forms of control. The pathogen is thought to reproduce both asexually and sexually in the UK, although the teleomorph produced by full sexual reproduction, Mycosphaerella pini, has not itself been found in the UK. Damage The disease causes defoliation which increases year on year. This reduces the yield of timber growth and weakens the tree, serving as a predisposing factor for other diseases.
In several cases of infection, the disease can lead to the complete mortality of the tree. Infection may take several years to severely reduce yield: below 40% crown infection, the reduction in yield is directly proportional to the level of infection, and once crown infection has reached 80% there is no growth at all. Control Because this is a fungal disease, any intervention that increases airflow and reduces humidity will be beneficial. It has been observed that delays in the first thinning in East Anglia resulted in high mortality rates in the crop. The environmental and economic factors behind copper-based fungicide treatment of large-scale commercial crops make control difficult and inadvisable. References Fungal tree pathogens and diseases Mycosphaerellaceae Fungi described in 1957 Fungus species
Dothistroma septosporum
Biology
882
1,785,216
https://en.wikipedia.org/wiki/Semiconductor%20intellectual%20property%20core
In electronic design, a semiconductor intellectual property core (SIP core), IP core, or IP block is a reusable unit of logic, cell, or integrated circuit layout design that is the intellectual property of one party. IP cores can be licensed to another party or owned and used by a single party. The term comes from the licensing of the patent or source code copyright that exists in the design. Designers of systems on chip (SoC), application-specific integrated circuits (ASIC) and systems of field-programmable gate array (FPGA) logic can use IP cores as building blocks. History The licensing and use of IP cores in chip design came into common practice in the 1990s. There were many licensors and also many foundries competing on the market. In 2013, the most widely licensed IP cores were from Arm Holdings (43.2% market share), Synopsys Inc. (13.9% market share), Imagination Technologies (9% market share) and Cadence Design Systems (5.1% market share). Types of IP cores The use of an IP core in chip design is comparable to the use of a library in computer programming or of a discrete integrated circuit component in printed circuit board design. Each is a reusable component of design logic with a defined interface and behavior that has been verified by its creator and is integrated into a larger design. Soft cores IP cores are commonly offered as synthesizable RTL in a hardware description language such as Verilog or VHDL. These are analogous to low-level languages such as C in the field of computer programming. IP cores delivered to chip designers as RTL permit them to modify designs at the functional level, though many IP vendors offer no warranty or support for modified designs. IP cores are also sometimes offered as generic gate-level netlists. The netlist is a Boolean-algebra representation of the IP's logical function implemented as generic gates or process-specific standard cells. An IP core implemented as generic gates can be compiled for any process technology. A gate-level netlist is analogous to an assembly code listing in the field of computer programming. A netlist gives the IP core vendor reasonable protection against reverse engineering. See also Integrated circuit layout design protection. Both netlist and synthesizable cores are called soft cores, since both allow a synthesis, placement and routing (SPR) design flow. Hard cores Hard cores (or hard macros) are analog or digital IP cores whose function cannot be significantly modified by chip designers. These are generally defined as a lower-level physical description that is specific to a particular process technology. Hard cores usually offer better predictability of chip timing performance and area for their particular technology. Analog and mixed-signal logic is generally distributed as hard cores. Hence, analog IP (SerDes, PLLs, DACs, ADCs, PHYs, etc.) is provided to chip makers in transistor-layout format (such as GDSII). Digital IP cores are sometimes offered in layout format as well. Low-level transistor layouts must obey the target foundry's process design rules. Therefore, hard cores delivered for one foundry's process cannot be easily ported to a different process or foundry. Merchant foundry operators (such as IBM, Fujitsu, Samsung, and TI) offer various hard-macro IP functions built for their own foundry processes, helping to ensure customer lock-in. Sources of IP cores Licensed functionality Many of the best-known IP cores are soft microprocessor designs.
They range from small 8-bit processors, such as the 8051 and PIC, to 32-bit and 64-bit processors such as those of the ARM or RISC-V architectures. Such processors form the "brains" of many embedded systems. They usually implement RISC instruction sets rather than CISC instruction sets like x86, because less logic is required and the resulting designs are therefore smaller. Further, the x86 leaders Intel and AMD heavily protect their processor designs' intellectual property and do not use this business model for their x86-64 lines of microprocessors. IP cores are also licensed for various peripheral controllers, such as for PCI Express, SDRAM, Ethernet, LCD displays, AC'97 audio, and USB. Many of those interfaces require both digital logic and analog IP cores to drive and receive high-speed, high-voltage, or high-impedance signals outside of the chip. "Hardwired" (as opposed to software-programmable, like the soft microprocessors described above) digital logic IP cores are also licensed for fixed functions such as MP3 audio decoding, 3D GPUs, digital video encode/decode, and other DSP functions such as FFT, DCT, or Viterbi coding. Vendors IP core developers and licensors range in size from individuals to multi-billion-dollar corporations. Developers, as well as their chip-making customers, are located throughout the world. Silicon intellectual property (SIP, silicon IP) is a business model for a semiconductor company in which it licenses its technology to a customer as intellectual property. A company with such a business model is a fabless semiconductor company: it does not provide physical chips to its customers but merely facilitates the customers' development of chips by offering certain functional blocks. Typically, the customers are semiconductor companies or module developers with in-house semiconductor development. A company wishing to fabricate a complex device may license in the rights to use another company's well-tested functional blocks, such as a microprocessor, instead of developing its own design, which would require additional time and cost. The silicon IP industry has had stable growth for many years. The most successful silicon IP companies, often referred to as "star IP", include ARM Holdings and Synopsys. Gartner Group estimated the total value of sales related to silicon intellectual property at US $1.5 billion in 2005, with annual growth expected around 30%. IP hardening IP hardening is the process of reusing proven designs to generate fast-time-to-market, low-fabrication-risk implementations of design cores as intellectual property (IP) (or silicon intellectual property). For example, a digital signal processor (DSP) developed from soft cores in RTL format can be targeted to various technologies or different foundries to yield different implementations. IP hardening turns such soft cores into reusable hard (hardware) cores. A main advantage of hard IP is its predictable characteristics, since the IP has been pre-implemented, whereas soft cores offer flexibility. Hard IP may come with a set of simulation models for verification. Hardening soft IP requires taking into account the characteristics of the target technology, the goals of the design, and the methodology. The hard IP has been proven in the target technology and application. For example, a hard core in GDSII format is said to be clean in DRC (design rule checking) and LVS (see Layout versus schematic), i.e., it can pass all the rules required for manufacturing by the specific foundry.
Free and open-source Since around 2000, OpenCores.org has offered various soft cores, mostly written in VHDL and Verilog. All of these cores are provided under free and open-source software licenses such as the GNU General Public License or BSD-like licenses. Since 2010, initiatives such as RISC-V have caused a massive expansion in the number of IP cores available (almost 50 by 2019). This has helped to increase collaboration in developing secure and efficient designs. See also List of semiconductor IP core vendors Semiconductor Semiconductor fabrication plant (foundry) Mask work Fabless manufacturing Integrated circuit layout design protection References External links Open cores "design and publish core" (under LGPL Licence) Altera cores Free reference IP cores for FPGAs Open Source Semiconductor Core Licensing, 25 Harvard Journal of Law & Technology 131 (2011) Article analyzing the law, technology and business of open source semiconductor cores Semiconductor IP cores Electronic design automation Logic design Semiconductor device fabrication
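To make concrete the statement in the soft-cores section above that a gate-level netlist is a Boolean-algebra representation of a core's logic, here is a toy sketch (illustrative Python only; real netlist formats such as structural Verilog or EDIF look nothing like this dictionary encoding):

# A half adder described as a tiny "netlist": each named net is driven
# by a generic gate applied to other nets or primary inputs.
netlist = {
    "sum":   ("XOR", "a", "b"),
    "carry": ("AND", "a", "b"),
}

GATES = {
    "XOR": lambda x, y: x ^ y,
    "AND": lambda x, y: x & y,
    "OR":  lambda x, y: x | y,
}

def evaluate(netlist, inputs):
    """Resolve every net recursively from the primary inputs."""
    def net(name):
        if name in inputs:
            return inputs[name]
        gate, *operands = netlist[name]
        return GATES[gate](*(net(op) for op in operands))
    return {name: net(name) for name in netlist}

print(evaluate(netlist, {"a": 1, "b": 1}))  # {'sum': 0, 'carry': 1}

Because the description is pure Boolean structure, it can be mapped onto any process technology's standard cells, which is exactly the portability property the soft-core section describes.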
Semiconductor intellectual property core
Materials_science
1,653
20,845
https://en.wikipedia.org/wiki/Multiplication
Multiplication (often denoted by the cross symbol ×, by the mid-line dot operator ⋅, by juxtaposition, or, on computers, by an asterisk *) is one of the four elementary mathematical operations of arithmetic, the others being addition, subtraction, and division. The result of a multiplication operation is called a product. The multiplication of whole numbers may be thought of as repeated addition; that is, the multiplication of two numbers is equivalent to adding as many copies of one of them, the multiplicand, as the quantity of the other one, the multiplier; both numbers can be referred to as factors. For example, the expression 3 × 4, phrased as "3 times 4" or "3 multiplied by 4", can be evaluated by adding 3 copies of 4 together: 3 × 4 = 4 + 4 + 4 = 12. Here, 3 (the multiplier) and 4 (the multiplicand) are the factors, and 12 is the product. One of the main properties of multiplication is the commutative property, which states in this case that adding 3 copies of 4 gives the same result as adding 4 copies of 3: 3 × 4 = 4 + 4 + 4 = 3 + 3 + 3 + 3 = 4 × 3. Thus, the designation of multiplier and multiplicand does not affect the result of the multiplication. Systematic generalizations of this basic definition define the multiplication of integers (including negative numbers), rational numbers (fractions), and real numbers. Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have some given lengths. The area of a rectangle does not depend on which side is measured first, a consequence of the commutative property. The product of two measurements (or physical quantities) is a new type of measurement, usually with a derived unit. For example, multiplying the lengths (in meters or feet) of the two sides of a rectangle gives its area (in square meters or square feet). Such a product is the subject of dimensional analysis. The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Indeed, multiplication by 3, followed by division by 3, yields the original number. The division of a number other than 0 by itself equals 1. Several mathematical concepts expand upon the fundamental idea of multiplication. The product of a sequence, vector multiplication, complex numbers, and matrices are all examples where this can be seen. These more advanced constructs tend to affect the basic properties in their own ways, such as becoming noncommutative in matrices and some forms of vector multiplication, or changing the sign of complex numbers. Notation In arithmetic, multiplication is often written using the multiplication sign × between the terms (that is, in infix notation). For example, 2 × 3 = 6 ("two times three equals six"). There are other mathematical notations for multiplication: To reduce confusion between the multiplication sign × and the common variable x, multiplication is also denoted by dot signs, usually a middle-position dot (rarely a period): 5 ⋅ 2. The middle dot notation or dot operator, encoded in Unicode as U+22C5 ⋅ DOT OPERATOR, is now standard in the United States and other countries. When the dot operator character is not accessible, the interpunct (·) is used. In other countries that use a comma as a decimal mark, either the period or a middle dot is used for multiplication. Historically, in the United Kingdom and Ireland, the middle dot was sometimes used for the decimal to prevent it from disappearing in the ruled line, and the period/full stop was used for multiplication.
However, since the Ministry of Technology ruled to use the period as the decimal point in 1968, and the International System of Units (SI) standard has since been widely adopted, this usage is now found only in the more traditional journals such as The Lancet. In algebra, multiplication involving variables is often written as a juxtaposition (e.g., xy for x times y or 5x for five times x), also called implied multiplication. The notation can also be used for quantities that are surrounded by parentheses (e.g., 5(2) or (5)(2) for five times two). This implicit usage of multiplication can cause ambiguity when the concatenated variables happen to match the name of another variable, when a variable name in front of a parenthesis can be confused with a function name, or in the correct determination of the order of operations. In vector multiplication, there is a distinction between the cross and the dot symbols. The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as its result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. In computer programming, the asterisk (as in 5*2) is still the most common notation. This is because most computers historically were limited to small character sets (such as ASCII and EBCDIC) that lacked a multiplication sign (such as ⋅ or ×), while the asterisk appeared on every keyboard. This usage originated in the FORTRAN programming language. The numbers to be multiplied are generally called the "factors" (as in factorization). The number to be multiplied is the "multiplicand", and the number by which it is multiplied is the "multiplier". Usually, the multiplier is placed first, and the multiplicand is placed second; however, sometimes the first factor is considered the multiplicand and the second the multiplier. Also, as the result of multiplication does not depend on the order of the factors, the distinction between "multiplicand" and "multiplier" is useful only at a very elementary level and in some multiplication algorithms, such as long multiplication. Therefore, in some sources, the term "multiplicand" is regarded as a synonym for "factor". In algebra, a number that is the multiplier of a variable or expression (e.g., the 3 in 3xy²) is called a coefficient. The result of a multiplication is called a product. When one factor is an integer, the product is a multiple of the other or of the product of the others. Thus, 2π is a multiple of π, as is 5133 × 486 × π. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5 and is both a multiple of 3 and a multiple of 5. Definitions The product of two numbers or the multiplication between two numbers can be defined for common special cases: natural numbers, integers, rational numbers, real numbers, complex numbers, and quaternions. Product of two natural numbers The product of two natural numbers is defined as repeated addition: x × y = y + y + ⋯ + y (x copies of y). Product of two integers An integer can be either zero, a nonzero natural number, or minus a nonzero natural number. The product of zero and another integer is always zero. The product of two nonzero integers is determined by the product of their positive amounts, combined with the sign derived from the following rule: (−x) × (−y) = x × y and (−x) × y = x × (−y) = −(x × y). (This rule is a consequence of the distributivity of multiplication over addition, and is not an additional rule.)
In words: A positive number multiplied by a positive number is positive (product of natural numbers), A positive number multiplied by a negative number is negative, A negative number multiplied by a positive number is negative, A negative number multiplied by a negative number is positive. Product of two fractions Two fractions can be multiplied by multiplying their numerators and denominators: (a/b) × (c/d) = (a × c)/(b × d), which is defined when b ≠ 0 and d ≠ 0. Product of two real numbers There are several equivalent ways to define formally the real numbers; see Construction of the real numbers. The definition of multiplication is a part of all these definitions. A fundamental aspect of these definitions is that every real number can be approximated to any accuracy by rational numbers. A standard way for expressing this is that every real number is the least upper bound of a set of rational numbers. In particular, every positive real number is the least upper bound of the truncations of its infinite decimal representation; for example, π is the least upper bound of {3, 3.1, 3.14, 3.141, ...}. A fundamental property of real numbers is that rational approximations are compatible with arithmetic operations, and, in particular, with multiplication. This means that, if a and b are positive real numbers such that a = sup aₙ and b = sup bₙ for increasing sequences of positive rational numbers (aₙ) and (bₙ), then a × b = sup aₙbₙ. In particular, the product of two positive real numbers is the least upper bound of the term-by-term products of the sequences of their decimal representations. As changing the signs transforms least upper bounds into greatest lower bounds, the simplest way to deal with a multiplication involving one or two negative numbers is to use the rule of signs described above in Product of two integers. The construction of the real numbers through Cauchy sequences is often preferred in order to avoid consideration of the four possible sign configurations. Product of two complex numbers Two complex numbers can be multiplied by the distributive law and the fact that i² = −1, as follows: (a + bi)(c + di) = ac + adi + bci + bdi² = (ac − bd) + (ad + bc)i. The geometric meaning of complex multiplication can be understood by rewriting complex numbers in polar coordinates: a + bi = r(cos φ + i sin φ). Furthermore, cos φ₁ cos φ₂ − sin φ₁ sin φ₂ = cos(φ₁ + φ₂) and cos φ₁ sin φ₂ + sin φ₁ cos φ₂ = sin(φ₁ + φ₂), from which one obtains r₁(cos φ₁ + i sin φ₁) × r₂(cos φ₂ + i sin φ₂) = r₁r₂(cos(φ₁ + φ₂) + i sin(φ₁ + φ₂)). The geometric meaning is that the magnitudes are multiplied and the arguments are added. Product of two quaternions The product of two quaternions can be found in the article on quaternions. Note, in this case, that the products q₁q₂ and q₂q₁ are in general different. Computation Many common methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9). However, one method, the peasant multiplication algorithm, does not. The example below illustrates "long multiplication" (the "standard algorithm", "grade-school multiplication"):

      23958233
    ×     5830
———————————————
      00000000 ( =      23,958,233 ×     0)
     71874699  ( =      23,958,233 ×    30)
   191665864   ( =      23,958,233 ×   800)
+  119791165   ( =      23,958,233 × 5,000)
———————————————
  139676498390 ( = 139,676,498,390        )

In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:

23958233 · 5830
———————————————
   119791165
    191665864
      71874699
      00000000
———————————————
  139676498390

Multiplying numbers to more than a couple of decimal places by hand is tedious and error-prone. Common logarithms were invented to simplify such calculations, since adding logarithms is equivalent to multiplying. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy.
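As a sketch of the peasant multiplication method mentioned above (Python used here for illustration; the function name is mine):

def peasant_multiply(a: int, b: int) -> int:
    """Russian peasant multiplication: needs only halving, doubling,
    and addition -- no memorized multiplication table."""
    total = 0
    while a > 0:
        if a % 2 == 1:      # odd row: keep this copy of b
            total += b
        a //= 2             # halve one factor (discard remainder)
        b *= 2              # double the other
    return total

assert peasant_multiply(23958233, 5830) == 23958233 * 5830

Each halving/doubling step needs no multiplication table, which is why the method is singled out above; it is also the binary shift-and-add scheme on which hardware multipliers are built.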
Beginning in the early 20th century, mechanical calculators, such as the Marchant, automated multiplication of up to 10-digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand. Historical algorithms Methods of multiplication were documented in the writings of the ancient Egyptian and Chinese civilizations. The Ishango bone, dated to about 18,000 to 20,000 BC, may hint at a knowledge of multiplication in the Upper Paleolithic era in Central Africa, but this is speculative. Egyptians The Egyptian method of multiplication of integers and fractions, which is documented in the Rhind Mathematical Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 42, 84, and 168. The full product could then be found by adding the appropriate terms found in the doubling sequence: 13 × 21 = (1 + 4 + 8) × 21 = (1 × 21) + (4 × 21) + (8 × 21) = 21 + 84 + 168 = 273. Babylonians The Babylonians used a sexagesimal positional number system, analogous to the modern-day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n, 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table. Chinese In the mathematical text Zhoubi Suanjing, dated prior to 300 BC, and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed Rod calculus involving place value addition, subtraction, multiplication, and division. The Chinese were already using a decimal multiplication table by the end of the Warring States period. Modern methods The modern method of multiplication based on the Hindu–Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication, and division. Henry Burchard Fine, then a professor of mathematics at Princeton University, wrote the following: The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously. These place value decimal arithmetic algorithms were introduced to Arab countries by Al Khwarizmi in the early 9th century and popularized in the Western world by Fibonacci in the 13th century. Grid method Grid method multiplication, or the box method, is used in primary schools in England and Wales and in some areas of the United States to help teach an understanding of how multiple digit multiplication works. An example of multiplying 34 by 13 would be to lay the numbers out in a grid as follows:

   ×    30    4
  10   300   40
   3    90   12

and then add the entries. Computer algorithms The classical method of multiplying two n-digit numbers requires n² digit multiplications.
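Before turning to asymptotically faster algorithms, the successive-doubling scheme attributed above to the Egyptians (essentially peasant multiplication) can be sketched as follows; this is an illustrative Python reconstruction, not a transcription of the Rhind papyrus procedure.

def egyptian_multiply(a: int, b: int) -> int:
    """Multiply by successive doubling, as in the 13 x 21 example above.

    b is repeatedly doubled while a is halved; the doublings that
    correspond to odd values of a (the powers of two present in a's
    binary expansion) are added to the result.
    """
    result = 0
    while a > 0:
        if a % 2 == 1:   # this power of two appears in a
            result += b
        a //= 2          # halve a
        b += b           # double b (addition only, no multiplication)
    return result

assert egyptian_multiply(13, 21) == 273  # matches 21 + 84 + 168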
Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. Methods based on the discrete Fourier transform reduce the computational complexity to O(n log n log log n). In 2016, the log log n factor was replaced by a function that increases much slower, though still not constant. In March 2019, David Harvey and Joris van der Hoeven submitted a paper presenting an integer multiplication algorithm with a complexity of O(n log n). The algorithm, also based on the fast Fourier transform, is conjectured to be asymptotically optimal. The algorithm is not practically useful, as it only becomes faster for multiplying extremely large numbers (having more than 2^(1729^12) bits). Products of measurements One can only meaningfully add or subtract quantities of the same type, but quantities of different types can be multiplied or divided without problems. For example, four bags with three marbles each can be thought of as: [4 bags] × [3 marbles per bag] = 12 marbles. When two measurements are multiplied together, the product is of a type depending on the types of measurements. The general theory is given by dimensional analysis. This analysis is routinely applied in physics, but it also has applications in finance and other applied fields. A common example in physics is the fact that multiplying speed by time gives distance. For example: 50 kilometers per hour × 3 hours = 150 kilometers. In this case, the hour units cancel out, leaving the product with only kilometer units. Other examples of multiplication involving units include: 2.5 meters × 4.5 meters = 11.25 square meters 11 meters/second × 9 seconds = 99 meters 4.5 residents per house × 20 houses = 90 residents Product of a sequence Capital pi notation The product of a sequence of factors can be written with the product symbol ∏, which derives from the capital letter Π (pi) in the Greek alphabet (much in the same way that the summation symbol ∑ is derived from the Greek letter Σ (sigma)). The meaning of this notation is given by ∏_{i=1}^{4} (i+1) = (1+1)(2+1)(3+1)(4+1), which results in ∏_{i=1}^{4} (i+1) = 120. In such a notation, the variable i represents a varying integer, called the multiplication index, that runs from the lower value indicated in the subscript to the upper value given by the superscript. The product is obtained by multiplying together all factors obtained by substituting the multiplication index for an integer between the lower and the upper values (the bounds included) in the expression that follows the product operator. More generally, the notation is defined as ∏_{i=m}^{n} x_i = x_m · x_{m+1} · ⋯ · x_n, where m and n are integers or expressions that evaluate to integers. In the case where m = n, the value of the product is the same as that of the single factor x_m; if m > n, the product is an empty product whose value is 1—regardless of the expression for the factors. Properties of capital pi notation By definition, ∏_{i=1}^{n} x_i = x_1 · x_2 · ⋯ · x_n. If all factors are identical, a product of n factors is equivalent to exponentiation: ∏_{i=1}^{n} x = x^n. Associativity and commutativity of multiplication imply ∏_{i=1}^{n} (x_i y_i) = (∏_{i=1}^{n} x_i)(∏_{i=1}^{n} y_i) and (∏_{i=1}^{n} x_i)^a = ∏_{i=1}^{n} x_i^a if a is a non-negative integer, or if all x_i are positive real numbers, and ∏_{i=1}^{n} x^{a_i} = x^{a_1 + a_2 + ⋯ + a_n} if all a_i are non-negative integers, or if x is a positive real number. Infinite products One may also consider products of infinitely many terms; these are called infinite products. Notationally, this consists in replacing n above by the infinity symbol ∞. The product of such an infinite sequence is defined as the limit of the product of the first n terms, as n grows without bound. That is, ∏_{i=m}^{∞} x_i = lim_{n→∞} ∏_{i=m}^{n} x_i. One can similarly replace m with negative infinity, and define: ∏_{i=−∞}^{∞} x_i = (lim_{m→−∞} ∏_{i=m}^{0} x_i) · (lim_{n→∞} ∏_{i=1}^{n} x_i), provided both limits exist.
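The capital pi notation and the empty-product convention described above translate directly into code; the small Python sketch below uses the standard-library function math.prod (available since Python 3.8).

import math

# Product of i + 1 for i from 1 to 4, i.e. 2 * 3 * 4 * 5.
print(math.prod(i + 1 for i in range(1, 5)))   # 120

# When the lower bound exceeds the upper bound the iterable is empty,
# and the result is the empty product, 1, regardless of the factors.
print(math.prod(i + 1 for i in range(5, 5)))   # 1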
Exponentiation When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2×2×2) is "two raised to the third power", and is denoted by 2³, a two with a superscript three. In this example, the number two is the base, and three is the exponent. In general, the exponent (or superscript) indicates how many times the base appears in the expression, so that the expression aⁿ indicates that n copies of the base a are to be multiplied together. This notation can be used whenever multiplication is known to be power associative. Properties For real and complex numbers, which includes, for example, natural numbers, integers, and fractions, multiplication has certain properties: Commutative property The order in which two numbers are multiplied does not matter: x · y = y · x. Associative property Expressions solely involving multiplication or addition are invariant with respect to the order of operations: (x · y) · z = x · (y · z). Distributive property Holds with respect to multiplication over addition. This identity is of prime importance in simplifying algebraic expressions: x · (y + z) = x · y + x · z. Identity element The multiplicative identity is 1; anything multiplied by 1 is itself. This feature of 1 is known as the identity property: x · 1 = x. Property of 0 Any number multiplied by 0 is 0. This is known as the zero property of multiplication: x · 0 = 0. Negation −1 times any number is equal to the additive inverse of that number: (−1) · x = −x, where −1 times −1 is 1: (−1) · (−1) = 1. Inverse element Every number x, except 0, has a multiplicative inverse, 1/x, such that x · (1/x) = 1. Order preservation Multiplication by a positive number preserves the order: For a > 0, if b > c then a · b > a · c. Multiplication by a negative number reverses the order: For a < 0, if b > c then a · b < a · c. The complex numbers do not have an ordering that is compatible with both addition and multiplication. Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions. Hurwitz's theorem shows that for the hypercomplex numbers of dimension 8 or greater, including the octonions, sedenions, and trigintaduonions, multiplication is generally not associative. Axioms In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication: x · 0 = 0 and x · S(y) = (x · y) + x. Here S(y) represents the successor of y; i.e., the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, including induction. For instance, S(0), denoted by 1, is a multiplicative identity because x · 1 = x · S(0) = (x · 0) + x = 0 + x = x. The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x,y) as equivalent to x − y when x and y are treated as integers. Thus both (0,1) and (1,2) are equivalent to −1. The multiplication axiom for integers defined this way is (x₁, y₁) · (x₂, y₂) = (x₁·x₂ + y₁·y₂, x₁·y₂ + x₂·y₁). The rule that −1 × −1 = 1 can then be deduced from (0,1) · (0,1) = (0·0 + 1·1, 0·1 + 1·0) = (1,0). Multiplication is extended in a similar way to rational numbers and then to real numbers. Multiplication with set theory The product of non-negative integers can be defined with set theory using cardinal numbers or the Peano axioms. See below how to extend this to multiplying arbitrary integers, and then arbitrary rational numbers. The product of real numbers is defined in terms of products of rational numbers; see construction of the real numbers.
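The two Peano multiplication axioms given above can be transcribed almost literally into a recursive function. The following Python sketch (an illustration for non-negative integers only, with the successor S(y) realized as y + 1) also checks that S(0) behaves as the multiplicative identity.

def peano_mul(x: int, y: int) -> int:
    """Multiplication from the Peano axioms:
    x * 0    = 0
    x * S(y) = (x * y) + x
    """
    if y == 0:          # first axiom
        return 0
    # y = S(y - 1), so apply the second axiom
    return peano_mul(x, y - 1) + x

assert peano_mul(7, 0) == 0
assert peano_mul(7, 1) == 7   # S(0) acts as the multiplicative identity
assert peano_mul(6, 7) == 42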
Multiplication in group theory There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses. A simple example is the set of non-zero rational numbers. Here the identity element is 1, as opposed to groups under addition, where the identity is typically 0. Note that with the rationals, zero must be excluded because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. This example is an abelian group, but that is not always the case. To see this, consider the set of invertible square matrices of a given dimension over a given field. Here, it is straightforward to verify closure, associativity, and inclusion of identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, which shows that this group is non-abelian. Another fact worth noticing is that the integers under multiplication do not form a group—even if zero is excluded. This is easily seen by the nonexistence of an inverse for all elements other than 1 and −1. Multiplication in group theory is typically notated either by a dot or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated as a b or ab. When referring to a group via the indication of the set and operation, the dot is used. For example, our first example could be indicated by (Q ∖ {0}, ·). Multiplication of different kinds of numbers Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions). Integers M × N is the sum of N copies of M when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by M × (−N) = (−M) × N = −(M × N) and (−M) × (−N) = M × N. The same sign rules apply to rational and real numbers. Rational numbers Generalization to fractions is by multiplying the numerators and denominators, respectively: (A/B) × (C/D) = (A × C)/(B × D). This gives the area of a rectangle A/B high and C/D wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers. Real numbers Real numbers and their products can be defined in terms of sequences of rational numbers. Complex numbers Considering complex numbers z₁ and z₂ as ordered pairs of real numbers (a₁, b₁) and (a₂, b₂), the product z₁ × z₂ is (a₁ × a₂ − b₁ × b₂, a₁ × b₂ + a₂ × b₁). This is the same as for reals a₁ × a₂, when the imaginary parts b₁ and b₂ are zero. Equivalently, denoting √−1 as i, z₁ × z₂ = (a₁ + b₁i)(a₂ + b₂i) = (a₁a₂ − b₁b₂) + (a₁b₂ + b₁a₂)i. Alternatively, in trigonometric form, if z₁ = r₁(cos φ₁ + i sin φ₁) and z₂ = r₂(cos φ₂ + i sin φ₂), then z₁ × z₂ = r₁r₂(cos(φ₁ + φ₂) + i sin(φ₁ + φ₂)). Further generalizations See Multiplication in group theory, above, and multiplicative group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring that is not any of the above number systems is a polynomial ring (polynomials can be added and multiplied, but polynomials are not numbers in any usual sense). Division Often division, x/y, is the same as multiplication by an inverse, x(1/y). Multiplication for some types of "numbers" may have corresponding division, without inverses; in an integral domain x may have no inverse "1/x" but x/y may be defined.
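For numbers that do possess multiplicative inverses, the identification of division with multiplication by the reciprocal can be made concrete with exact rationals. The following sketch uses Python's standard fractions module; the particular values are arbitrary illustrations.

from fractions import Fraction

x = Fraction(3, 4)
y = Fraction(2, 5)

# Division is multiplication by the multiplicative inverse of y.
quotient = x * (1 / y)

assert 1 / y == Fraction(5, 2)        # the inverse element of y
assert quotient == x / y == Fraction(15, 8)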
In a division ring there are inverses, but x/y may be ambiguous in non-commutative rings since x(1/y) need not be the same as (1/y)x. See also Dimensional analysis Multiplication algorithm Karatsuba algorithm, for large numbers Toom–Cook multiplication, for very large numbers Schönhage–Strassen algorithm, for huge numbers Multiplication table Binary multiplier, how computers multiply Booth's multiplication algorithm Floating-point arithmetic Multiply–accumulate operation Fused multiply–add Wallace tree Multiplicative inverse, reciprocal Factorial Genaille–Lucas rulers Lunar arithmetic Napier's bones Peasant multiplication Product (mathematics), for generalizations Slide rule References Further reading External links Multiplication and Arithmetic Operations In Various Number Systems at cut-the-knot Modern Chinese Multiplication Techniques on an Abacus Elementary arithmetic Mathematical notation Articles containing proofs
Multiplication
Mathematics
5,349
15,758,126
https://en.wikipedia.org/wiki/Hofmann%E2%80%93L%C3%B6ffler%20reaction
In organic chemistry, the Hofmann–Löffler reaction (also referred to as Hofmann–Löffler–Freytag reaction, Löffler–Freytag reaction, Löffler–Hofmann reaction, as well as Löffler's method) is a cyclization reaction with remote C–H functionalization. In the reaction, thermal or photochemical decomposition of N-halogenated amine 1 in the presence of a strong acid (concentrated sulfuric acid or concentrated CF3CO2H) generates a nitrogen radical intermediate. The radical then abstracts an intramolecular hydrogen atom to give a cyclic amine 2 (pyrrolidine or, in some cases, piperidine). History In 1878, the structure of piperidine was still unknown, and A. W. Hofmann believed it to be unsaturated. Following standard analytical technique, Hofmann added hydrogen chloride or bromine to it in an attempt to induce hydrohalogenation. Instead, he produced N-haloamines and N-haloamides, whose reactions under acidic and basic conditions he investigated. 1-Bromo-2-propylpiperidine (3) and hot sulfuric acid, followed by basic work-up, formed a tertiary amine, later identified as δ-coniceine (4). No further examples of the reaction were reported for about 25 years, but in 1909, K. Löffler and C. Freytag extended the transformation to simple secondary amines and applied the process in their elegant synthesis of nicotine (6) from N-bromo-N-methyl-4-(pyridin-3-yl)butan-1-amine (5). The reaction mechanism only became clear around 1950, when S. Wawzonek investigated various N-haloamine cyclizations. Noting that hydrogen peroxide or ultraviolet light greatly improved yields, Wawzonek and Thelan suggested a free-radical mechanism. E. J. Corey et al. then examined several features of the reaction: stereochemistry, hydrogen isotope effect, initiation, inhibition, catalysis, intermediates and selectivity of hydrogen transfer. The results, presented below, conclusively supported Wawzonek and Thelan's hypothesis. Reaction mechanism According to Wawzonek and Thelan's 1949 proposal, an acid first protonates an N-chloroamine, which, in the presence of heat, light, or other initiators, homolyzes to ammonium and chlorine free radicals. The ammonium radical intramolecularly abstracts a sterically favored hydrogen atom to afford an alkyl radical which, in a chain reaction, abstracts chlorine from another N-chloroammonium ion to form an alkyl chloride and a new ammonium radical. The alkyl chloride later cyclizes during the basic work-up to the cyclic tertiary amine. Because the hydrogen abstraction is radical, any chiral configuration at the δ-carbon racemizes. The reaction also has a quite large hydrogen isotope effect: in the decomposition of 10, the ratio of 1,2-dimethylpyrrolidine 11 and 1,2-dimethylpyrrolidine-2-d 12 (determined by combustion and IR spectra) indicates a large kH/kD value. Comparable reactions at a primary carbon also show a large isotope effect, which strongly suggests that the breaking of the C–H bond proceeds to a rather considerable extent in the transition state. Initiation, inhibition, catalysis Molecular oxygen inhibits the reaction (trapping the radicals), but Fe2+ salts initiate it. Further investigations demonstrated that both the rate of the ultraviolet-catalyzed decomposition of dibutylchloroamine and the yield of newly formed pyrrolidine are strongly dependent on the acidity of the reaction medium – a faster and higher-yielding reaction was observed with increasing sulfuric acid concentration. An important question in discussing the role of the acid is whether the N-haloamine reacts in the free base or the salt form in the initiation step.
Based on the pKa values of the conjugate acids of 2° alkyl amines (which are generally in the range 10–11), it is evident that N-chloroamines exist largely as salts in a solution of high sulfuric acid concentration. As a result, in the case of chemical or thermal initiation, it is reasonable to assume that it is the N-chloroammonium ion which affords the ammonium free radical. The situation changes, however, when the reaction is initiated upon irradiation with UV light. The radiation must be absorbed and the quantum of the incident light must be large enough to dissociate the N–Cl bond in order for a photochemical reaction to occur. Because the conjugate acids of the N-chloroamines have no appreciable UV absorption above 225 nm, whereas the free N-chloroamines absorb UV light of sufficient energy to cause dissociation (λmax 263 nm, εmax 300), E. J. Corey postulated that in this case it is actually the small percentage of free N-chloroamine that is responsible for most of the initiation. It was also suggested that the newly generated neutral nitrogen radical is immediately protonated. However, it is important to realize that an alternative scenario might be in operation when the reaction is initiated with UV light; namely, the free N-haloamine might not undergo dissociation upon irradiation, but might function as a photosensitizer instead. While it was proposed that the higher acid concentration decreases the rate of the initiation step, the acid catalysis involves acceleration of the propagation steps and/or retardation of the chain termination. The influence of certain acidic solvents on the photolytic Hofmann–Löffler–Freytag reaction was also studied by Neale and co-workers. Intermediates Isolation of 4-chlorodibutylamine from the decomposition of dibutylchloroamine in H2SO4 confirmed the intermediacy of δ-chloroamines. When the acidic solution is made basic, the δ-chloroamine cyclizes to give a cyclic amine and a chloride ion. Selectivity of hydrogen transfer In order to determine the structural and geometrical factors affecting the intramolecular hydrogen atom transfer, a number of different N-chloroamines were examined in the Hofmann–Löffler–Freytag reaction. The systems were judiciously chosen in order to obtain data on the following points: relative migration tendencies of primary (1°), secondary (2°) and tertiary (3°) hydrogens; relative rates of 1,5- and 1,6-hydrogen rearrangements; and the facility of hydrogen rearrangements in cyclic systems of restricted geometry. Investigation of the free radical decomposition of N-chlorobutylamylamine 13 made it possible to compare 1° and 2° hydrogen migration. It was reported that only 1-n-butyl-2-methylpyrrolidine 14 was formed under the reaction conditions; no 1-n-amylpyrrolidine 15 was detected. This observation provided substantial evidence that the radical attack exhibits a strong preference for the 2° over the 1° hydrogen. The tendency for 3° vs. 1° hydrogen migration was studied with n-butylisohexylamine 16. When 16 was subjected to the standard reaction conditions, rapid disappearance of 16 was observed, but no pyrrolidine product could be isolated. This result suggested that there is a high selectivity for the 3° hydrogen, but that the intermediate tertiary chloro compound 17 is rapidly solvolyzed. Similarly, no cyclic amine was observed with the reaction of n-amylisohexylamine, which demonstrates the selectivity for the 3° vs. 2° hydrogen migration.
A qualitative study of products from the Hofmann–Löffler–Freytag reaction of N-chloromethyl-n-hexylamine 18 was performed in order to evaluate the relative ease of 1,5- and 1,6-hydrogen migration. UV-catalyzed decomposition of 18 followed by basification produced a 9:1 mixture of 1-methyl-2-ethylpyrrolidine 19 and 1,2-dimethylpiperidine 20, which demonstrates that the extent of formation of six-membered rings can be appreciable. In terms of the geometrical requirements in the intramolecular rearrangement of hydrogen, it was observed that under identical reaction conditions the UV light-catalyzed decomposition of methylcyclohexylchloroamine and N-chloroazacycloheptane proceeds far more slowly than that of dibutylchloroamine. These findings indicate that the prevailing geometries are unfavourable in these two cases for the rearrangement to occur, and the Cδ–H–N bond angle required for the intramolecular hydrogen transfer cannot be easily attained. Generally accepted mechanism It is generally accepted that the first step in the Hofmann–Löffler–Freytag reaction conducted in acidic medium is the protonation of the N-halogenated amine 21 to form the corresponding N-halogenated ammonium salt 22. In the case of thermal or chemical initiation of the free radical chain reaction, the N-halogenated ammonium salt 22 undergoes homolytic cleavage of the nitrogen–halogen bond to generate the nitrogen-centered radical cation 23. In contrast, it has been argued that the UV light-catalyzed initiation involves the free form of the N-haloamine and a rapid protonation of the newly generated neutral nitrogen radical (see the section devoted to mechanistic studies for arguments supporting this statement). Intramolecular 1,5-hydrogen atom transfer produces the carbon-centered radical 24, which subsequently abstracts a halogen atom from the N-halogenated ammonium salt 22. This affords the protonated δ-halogenated amine 25 and regenerates the nitrogen-centered radical cation 23, the chain carrier of the reaction. Upon treatment with base, 25 undergoes deprotonation followed by an intramolecular SN2 reaction to yield pyrrolidine 28 via intermediate 27. The preferential abstraction of the δ-hydrogen atom corresponds to a six-membered transition state, which can adopt the unstrained cyclohexane chair-type conformation 29. The Hofmann–Löffler–Freytag reaction is conceptually related to the well-known Barton reaction. General features of the reaction The starting material for the Hofmann–Löffler–Freytag reaction can be an N-chloro-, N-bromo-, or N-iodoamine. In the case of thermal initiation, the N-chloroamines give better yields of pyrrolidines because N-bromoamines are less stable thermally than the corresponding N-chloroamines. In contrast, when the initiation is carried out by irradiation, the N-bromoamines give higher yields of pyrrolidines. The Hofmann–Löffler–Freytag reaction was originally carried out under acidic conditions, but it has been demonstrated that neutral or even weakly basic conditions might also be successfully employed. The initially formed nitrogen-centered radical abstracts a H-atom mostly from the δ-position, and thus 5-membered rings are formed predominantly. Formation of 6-membered rings is also possible, but relatively rare, and in the majority of cases is observed in rigid cyclic systems. The reaction can be conducted under milder conditions provided that the alkyl radical experiences some form of extra stabilization, e.g. by an adjacent heteroatom.
The radical process may be initiated by heating, irradiation with light, or with radical initiators (e.g. peroxides, metal salts). Modifications and improvements Because the original strongly acidic reaction conditions are often not compatible with the sensitive functional and protecting groups of complex substrates, several modifications of the Hofmann–Löffler–Freytag reaction were introduced: M. Kimura and Y. Ban demonstrated that adjacent nitrogen atoms can stabilize radical species generated by H-atom abstraction and permit this step to take place under weakly basic conditions. They reported that far better yields are obtained on photoirradiation in the presence of triethylamine, which neutralizes the hydrogen chloride generated by cyclization. M. Kimura and Y. Ban applied the modified conditions of the Hofmann–Löffler–Freytag reaction to the synthesis of dihydrodeoxyepiallocernuine 35. It has been demonstrated that photolysis of N-haloamides proceeds efficiently under neutral conditions. Irradiation of N-bromoamide 36 (R=tBu) gave rise to bromomethyl-cyclohexane-amide 37 which, upon treatment with base in situ, afforded iminolactone 38 in 92% yield. Similarly, S. W. Baldwin and T. J. Doll examined a modification of the Hofmann–Löffler–Freytag reaction during their studies towards the synthesis of the alkaloid gelsemicine 41. The formation of the pyrrolidine ring of 40 was accomplished by irradiation of N-chloroamide 39. Another variation of the Hofmann–Löffler–Freytag reaction involves sulfonamides in place of N-haloamines. In the presence of persulphates and metal salts, sulfonamides can undergo intramolecular free-radical functionalization to produce γ- and δ-chloroalkenylsulfonamides under neutral conditions. For instance, upon treatment with Na2S2O8 and CuCl2, butylsulfonamide 42 was transformed to 4-chlorobutylsulfonamide 43 and 3-chlorobutylsulfonamide 44 in the absence of acid. The most important variation of the Hofmann–Löffler–Freytag reaction is the Suárez modification. In 1980, Suárez et al. reported a process using neutral conditions for the Hofmann–Löffler–Freytag reaction of N-nitroamides. Further developments of this transformation have led to the expansion of the substrate scope to N-cyanamides, N-phosphoramidates and carbamates. All these species react with hypervalent iodine reagents in the presence of iodine (I2) to generate nitrogen-centered radicals via homolytic fragmentation of a hypothetical iodoamide intermediate. The N-radicals thus formed can participate in an intramolecular 1,5-hydrogen abstraction reaction from unactivated carbons, the result being the formation of pyrrolidines. The great advantage of the Suárez modification is that the reaction can be performed under very mild neutral conditions compatible with the stability of the protective groups most frequently used in synthetic organic chemistry. Consequently, it permits the use of the Hofmann–Löffler–Freytag reaction with more sensitive molecules. Other notable features of this methodology are the following: (1) the unstable iodoamide intermediates are generated in situ; (2) the iodoamide homolysis proceeds thermally at low temperature (20–40 °C) or by irradiation with visible light, which obviates the need for a UV lamp. The Suárez modification has found numerous applications in synthesis (vide infra). Nagib and co-workers have employed a triiodide strategy that expands the scope of the Hofmann–Löffler–Freytag reaction via the Suárez modification to enable the amination of secondary C–H bonds.
This approach employs NaI, instead of I2, as a radical precursor to prevent undesired I2-mediated decomposition pathways. Other halide salts (e.g. NaCl and NaBr) afford the postulated intermediates of the interrupted Hofmann–Löffler–Freytag mechanism. Applications in synthesis The most prevalent synthetic utility of the Hofmann–Löffler–Freytag reaction is the assembly of the pyrrolidine ring. The Hofmann–Löffler–Freytag reaction under standard conditions The procedure for the Hofmann–Löffler–Freytag reaction traditionally requires strongly acidic conditions, which limits its appeal. Nonetheless, it has been successfully applied to the functionalization of a wide variety of structurally diverse molecules, as exemplified below. In 1980, J. P. Lavergne et al. used this methodology to prepare L-proline 49. P. E. Sonnet and J. E. Oliver employed classic Hofmann–Löffler–Freytag reaction conditions in the synthesis of potential ant sex pheromone precursors (i.e. octahydroindolizine 51). Another example of the construction of a bicyclic amine through the standard Hofmann–Löffler–Freytag methodology is Waegell's synthesis of the azabicyclo[3.2.1]octane derivative 53. The Hofmann–Löffler–Freytag reaction was employed to synthesize the bridged nitrogen structure of (±)-6,15,16-iminopodocarpane-8,11,13-triene 55, an intermediate useful for the preparation of the kobusine-type alkaloids, from a bicyclic chloroamine 54. Irradiation of 54 with a 400 W high-pressure mercury lamp in trifluoroacetic acid under a nitrogen atmosphere at room temperature for 5 h afforded a moderate yield of the product. Derivatives of adamantane have also been prepared using the Hofmann–Löffler–Freytag reaction. When N-chloroamine 56 was treated with sulfuric acid and heat, 2-adamantanone was formed, but photolysis of 56 in a sulfuric acid–acetic acid mixture, using a low-pressure mercury lamp at 25 °C for 1 hour, gave a good yield (85%) of the desired product 57. The cyclization of 57 presented considerable difficulties, but it was finally achieved in 34% yield under forcing conditions (heating at 290 °C for 10 min). Similarly, it has been demonstrated that derivatives of 2,6-diazaadamantane such as 60 might be formed under standard Hofmann–Löffler–Freytag reaction conditions; however, the yields are only moderate. R. P. Deshpande and U. R. Nayak reported that the Hofmann–Löffler–Freytag reaction is applicable to the synthesis of pyrrolidines containing a longifolene nucleus, e.g. 62. An outstanding application of the Hofmann–Löffler–Freytag reaction is found in the preparation of steroidal alkaloid derivatives. J. Hora and G. van de Woude used this procedure in their syntheses of the conessine derivatives shown below. In the case of 64 and 66, the five-membered nitrogen ring is formed by attack on the unactivated C-18 methyl group of the precursor (63 or 65, respectively) by a suitably placed nitrogen-centered radical at C-20. The ease of this reaction is due to the fact that in the rigid steroid framework the β-C-18 methyl group and the β-C-20 side chain carrying the nitrogen radical are suitably arranged in space to allow the 1,5-hydrogen abstraction to proceed via the six-membered transition state. The Hofmann–Löffler–Freytag reaction under mild conditions A number of examples of the Hofmann–Löffler–Freytag reaction under neutral conditions have been presented in the section devoted to modifications and improvements of the original reaction conditions.
Hence, the main focus of this section is the applications of the Suárez modification of the Hofmann–Löffler–Freytag reaction. The Suárez modification of the Hofmann–Löffler–Freytag reaction was the basis of the new synthetic method developed by H. Togo et al. The authors demonstrated that various N-alkylsaccharins (N-alkyl-1,2-benzisothiazoline-3-one-1,1-dioxides) 77 are easily prepared in moderate to good yields by the reaction of N-alkyl(o-methyl)arenesulfonamides 70 with PhI(OAc)2 in the presence of iodine under the irradiation of a tungsten lamp. 1,5-Hydrogen abstraction/iodination of the o-methyl group is repeated three times and is most likely followed by cyclization to the diiodo intermediate 76, which then undergoes hydrolysis. A very interesting transformation is observed when sulfonamides of primary amines bearing an aromatic ring at the γ-position are treated with various iodanes and iodine under irradiation with a tungsten lamp. The reaction leads to 1,2,3,4-tetrahydroquinoline derivatives and is a good preparative method for six-membered cyclic aromatic amines. For instance, sulfonamide 78 undergoes an intramolecular radical cyclization to afford 79 in relatively good yield. By the same procedure, 3,4-dihydro-2,1-benzothiazine-2,2-dioxides 81 are obtained from the N-alkyl 2-(aryl)ethanesulfonamides via the sulfonamidyl radical. E. Suárez et al. reported that the amidyl radical intermediates, produced by photolysis of medium-sized lactams, e.g. 82, in the presence of PhI(OAc)2 and iodine, undergo transannular hydrogen abstraction to afford intramolecularly functionalized compounds such as oxoindolizidines 83. E. Suárez and co-workers also applied their methodology in the synthesis of the chiral 8-oxa-6-azabicyclo[3.2.1]octane 85 and 7-oxa-2-azabicyclo[2.2.1]heptane 87 ring systems. This reaction can be considered to be an intramolecular N-glycosidation that goes through an intramolecular 1,5-hydrogen abstraction promoted by an N-amido radical, followed by oxidation of the transient C-radical intermediate to an oxycarbenium ion, which is subsequently trapped by an internal nucleophile. The utility of the Suárez modification of the Hofmann–Löffler–Freytag reaction was demonstrated by its application in the synthesis of a number of steroid and triterpene compounds. As illustrated below, the phosphoramidate-initiated functionalizations generally proceed in higher yields than the reactions involving N-nitro or N-cyanamides. In 2008, P. S. Baran et al. reported a new method for the synthesis of 1,3-diols using a variant of the Hofmann–Löffler–Freytag reaction. In 2017, Nagib et al. reported a new method for the synthesis of 1,2-amino-alcohols using a variant of the Hofmann–Löffler–Freytag reaction to promote β-selective C–H amination of alcohols. In 2020, an asymmetric variant was disclosed by the same team. See also Free radical reaction Barton reaction References Nitrogen heterocycle forming reactions Substitution reactions Name reactions
Hofmann–Löffler reaction
Chemistry
4,948
20,697,507
https://en.wikipedia.org/wiki/Dunford%E2%80%93Schwartz%20theorem
In mathematics, particularly functional analysis, the Dunford–Schwartz theorem, named after Nelson Dunford and Jacob T. Schwartz, states that the averages of powers of certain norm-bounded operators on L1 converge in a suitable sense. Statement Let T be a linear operator from L1 to L1 with ‖T‖₁ ≤ 1 and ‖T‖∞ ≤ 1. Then, for every f in L1, the averages (1/n) ∑_{k=0}^{n−1} T^k f converge almost everywhere as n → ∞. The statement is no longer true when the boundedness condition is relaxed to even ‖T‖∞ ≤ 1 + ε. Notes Theorems in functional analysis
Dunford–Schwartz theorem
Mathematics
71
38,649,565
https://en.wikipedia.org/wiki/Bug-A-Salt
Bug-A-Salt is the brand name of a plastic gun used to kill soft-bodied insects by hitting them with salt particles. Description The Bug-A-Salt device uses granular table salt as non-toxic projectiles to kill insects. The plastic gun is designed to spray up to 80 discharges of salt, which forms a conical spread pattern, similar to the blast pattern from a shotgun. Biologist Michael Dickinson of the California Institute of Technology says flies cannot dodge the tiny salt particles, but will be protected by their arthropod exoskeleton and will only be stunned. History Bug-A-Salt was created by Lorenzo Maggiore and patented in 2012. Maggiore invented the tool to kill houseflies at a distance, without creating a mess. The Skell Inc company launched its Bug-A-Salt product in 2012 on the Indiegogo platform. At the close of Skell's crowd-funding campaign on September 11, 2012, the company had sold more than 21,400 units of the original model of the Bug-A-Salt salt gun. See also Fly-killing device References External links Company website This self-made millionaire invented a ‘gun’ that shoots salt at flies Huffington Post - Bug-A-Salt Launch Article Insect control Pest control techniques Products introduced in 2012 Edible salt
Bug-A-Salt
Chemistry
270
64,906,453
https://en.wikipedia.org/wiki/Geochemical%20Journal
The Geochemical Journal is a peer-reviewed open-access scientific journal covering all aspects of geochemistry and cosmochemistry. It is published by the Geochemical Society of Japan and the editor-in-chief is Katsuhiko Suzuki. Abstracting and indexing The journal is abstracted and indexed in: CAB Abstracts Chemical Abstracts Service Current Contents/Physical, Chemical & Earth Sciences Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.561. References External links Academic journals established in 1966 English-language journals Geochemistry journals
Geochemical Journal
Chemistry
122
60,725,016
https://en.wikipedia.org/wiki/Scouring%20powder
Scouring powder is a household cleaning product consisting of an abrasive powder mixed with a dry soap or detergent, soda, and possibly a dry bleach. Scouring powder is used to clean encrusted deposits on hard surfaces such as ceramic tiles, pots and pans, baking trays, grills, porcelain sinks, bathtubs, toilet bowls and other bathroom fixtures. It is meant to be rubbed over the surface with a little water. The abrasive removes the dirt by mechanical action, and is eventually washed away, together with the powder, by rinsing with water. Scouring powders are similar to scouring soaps and scouring creams in general composition and mode of action, but differ somewhat in form (dry powder, instead of a bar or paste) and in the primary intended applications. Scouring powders compete in their intended uses with scouring pads and steel wool. Composition A typical scouring powder consists of an insoluble abrasive powder (about 80%), a soluble base (18%) and a detergent (2%). It may also include perfume and/or a dry bleaching agent. The abrasive can be silica (quartz, SiO2), feldspar (such as orthoclase), pumice, kaolinite, soapstone, talc, calcium carbonate (limestone, chalk), calcite, etc. The particles should have reasonably uniform size, less than 50 μm in diameter. Hard abrasives like silica and pumice can remove tougher stains but may also scratch glass, metal, and glazed ceramics. The soluble base is meant to break down fatty substances by saponification; it may be sodium carbonate (washing soda, Na2CO3). The detergent is usually an anionic surfactant. Its role is to help remove greasy material as an emulsion, and to keep the removed stain particles suspended in the abrasive paste. The dry bleach is usually a product that releases chlorine (more precisely hypochlorite, the classical household bleaching agent), such as trichloroisocyanuric acid. The Bar Keepers Friend scouring powder has oxalic acid instead of the base, which makes it effective against rust stains rather than grease and other organic dirt. History Abrasive powders have been used to wash off grease and other hard stains since antiquity. Bathing in ancient Greece and Rome started with rubbing the body with fine sand mixed with oil or other substances, and then scraping it off with a special curved spatula, a strigil. The plants in the genus Equisetum ("horsetails") are also called "scouring rushes" because of their microscopic silica scales (phytoliths). An early industrialized scouring powder was Bon Ami, launched in 1886 by the J.T. Robertson Soap Company as a gentler alternative to quartz-based scouring powders then available on store shelves. Another early commercial brand was Vim (1904), one of the first products created by William Lever. The abrasive was obtained from sandstone mined at the Cambrian Quarry at Gwernymynydd between 1905 and about 1950. The brand is still marketed in some countries, but Unilever has been replacing it with Jif and then Cif. Scouring powders have been marketed by many other companies and under brand names such as Ajax, Bon Ami, Radium, Comet, Sano, and Zud. References Cleaning products Powders
Scouring powder
Physics,Chemistry
736
26,157,534
https://en.wikipedia.org/wiki/Mateo%20Valero
Mateo Valero Cortés is a Spanish computer architect. His research encompasses different concepts within the field of computer architecture, a discipline in which he has published more than 700 papers in journals, conference proceedings, and books. Valero has received numerous awards, including the Eckert–Mauchly Award in 2007. He is the director of the Barcelona Supercomputing Center, which hosts the MareNostrum supercomputer. Early life and education Mateo Valero Cortés is from Alfamén, Aragon, Spain. At a young age he went to Zaragoza and then Madrid to study, before settling permanently in Barcelona. Valero graduated in telecommunications engineering from the Technical University of Madrid in 1974 and received his Ph.D. in telecommunications engineering from the Polytechnic University of Catalonia. Career Valero has combined his academic work with establishing and managing centres for high-performance computing research and technology transfer to businesses. Between 1990 and 1995, he first established and then directed the Barcelona European Parallelism Centre (CEPBA, after its initials in Spanish) to carry out fundamental and applied research in parallel computing. From 1995 to 2000, he was the director of C4, the Catalan Computing and Communications Centre, coordinating activities carried out by CEPBA and the Catalan Supercomputing Centre (CESCA, after its initials in Catalan). From October 2000 until 2004, he was the director of CIRI, the CEPBA-IBM Research Institute on parallel computers. In May 2004 he founded the Barcelona Supercomputing Center, and he remains its director. At these centres he has worked to drive forward different supercomputing networks both nationally and internationally, such as the Spanish Supercomputing Network (RES, after its initials in Spanish), the Partnership for Advanced Computing in Europe (PRACE) and the Latin American Supercomputing Network (RISC, after its initials in Spanish). In 2013 he won a European Research Council Advanced Grant to carry out the RoMoL project on new techniques to build multicore chips and the supercomputers of the future. Recognition and honours Individual awards 2024: Premio Innovación y Ciencia (Innovation and Science Award) at the Premios Vanguardia 2020: AUTELSI 2020 annual award, organised by the Asociación Española de Usuarios de Telecomunicaciones y Sociedad de la Información (AUTELSI), which recognises excellence, contributions and commitment to information technology 2019: Cénits Award for Research Excellence, given by the Extremadura Center for Research, Technological Innovation and Supercomputing to commemorate its 10th anniversary 2018: Mexican Order of the Aztec Eagle, the highest prize given by the Mexican government to a non-Mexican person 2017: MareNostrum 4, chosen as the most beautiful data centre in the world; the award, organised by DCDnews, was granted by popular vote 2017: Charles Babbage Award (IEEE Computer Society), for "his contributions to parallel computation through brilliant technical work, mentoring PhD students, and building on incredibly productive European research environment" 2017: Recognition for his outstanding career in scientific and technological development, given by the University of Guadalajara in Mexico and by the national committee of the ISUM international congress.
2016: Creu de Sant Jordi award (Catalan Government) 2015: Seymour Cray Award (IEEE Computer Society) for supercomputing, "in recognition of seminal contributions to vector, out-of-order, multithreaded, and VLIW (Very Long Instruction Word) architectures" 2015: Innovative Businesses Forum Award in the Innovative Researcher category 2014: Award of Honour (Catalan Telecommunication Engineers Association) 2013: Association for Computing Machinery (ACM) Distinguished Service Award "for extraordinary leadership of initiatives in high-performance computing research and education" 2009: Goode Award (IEEE Computer Society), for his contributions to vector, out-of-order, multithreaded, and VLIW architectures 2008: Featured in Hall of Fame (Innovate, Connect, Transform - ICT conference) 2007: Eckert–Mauchly Award (IEEE/ACM), for "extraordinary leadership in building a world class computer architecture research center, for seminal contributions in the areas of vector computing and multithreading, and for pioneering basic new approaches to instruction-level parallelism" (the highest international honour in the field of computer architecture) 2006: National Research Award for contributions to scientific and technological progress in Catalonia (Catalan Foundation for Research and Innovation) 2006: Leonardo Torres Quevedo Spanish National Research Award for engineering research (Spanish Ministry for Education and Science) 2005: Research Achievements Career Award (National Polytechnic Institute, Mexico) 2005: Aritmel National Award – Spanish IT Engineer (Spanish Scientific and IT Society) 2004: Engineer of the Year Award (Spanish Telecommunication Engineers Association) 2001: Julio Rey Pastor Spanish National Research Award in Mathematics, Information and Communication Technology (Spanish Ministry for Education and Science) 1997: Rey Jaime I Award for fundamental research (Rey Jaime I Awards Foundation) 1996: Salvà i Campillo Award (Catalan Telecommunication Engineers Association) 1994: Narcís Monturiol Award (Government of Catalonia) Joint awards 2011: First national award for partnership between research centres and businesses, awarded to BSC and IBM for their long and fruitful research collaboration (Catalan Foundation for Research and Innovation) 2011 and 2015: Severo Ochoa Centre of Excellence Award given to the Barcelona Supercomputing Center (Spanish Ministry of Science and Innovation) 1994: Barcelona City Award in Technology for the work of CEPBA (Barcelona City Council) 1992: Fundación Universidad-Empresa Award for the university department with the best European research projects (Fundación Universidad-Empresa) Other recognition Valero is a founding fellow of the Spanish Royal Academy of Engineering, fellow of the Barcelona Royal Academy of Sciences and Arts, fellow of Academia Europaea and corresponding member of the Spanish Royal Academy of Pure Sciences, Physics and Natural Sciences and of the Mexican Academy of Science. In 2018 he was elected a corresponding academic of the Academia de Ingeniería de México, an honorary fellow of the Real Academia Europea de Doctores and a fellow of the Academia de Gastronomía de Murcia. He has been awarded honorary doctorates by Chalmers University of Technology, the University of Belgrade, the University of Las Palmas de Gran Canaria, the University of Veracruz, the University of Zaragoza, the Complutense University of Madrid, the University of Cantabria and the University of Granada. He is also a fellow of the IEEE and ACM and an Intel Distinguished Fellow.
He is a member of the external Scientific Advisory Committee of the Universidad Complutense de Madrid and was the patron of the 2018 graduating class of the Universidad San Jorge de Zaragoza. As of 2017 he was a member of the committee for the IEEE Sidney Fernbach Award. Valero maintains strong links with his home town, Alfamén, which has bestowed a variety of honours upon him. In 1998 he was chosen as the municipality's "Favourite Son" and in 2005 a local school was given the name CEIP Mateo Valero. Aragon has also recognised Valero with a number of honours, such as the Aragon Award – also known as the San Jorge Award – which is considered the most important awarded by the provincial government (2008), and the Special Award for Aragonese Research by the Asociación. Publications Valero has published more than 700 papers in computing journals. References Year of birth missing (living people) Living people Spanish scientists Computer engineers Telecommunications engineers Technical University of Madrid alumni Polytechnic University of Catalonia alumni People from the Province of Zaragoza
Mateo Valero
Engineering
1,556
20,762,753
https://en.wikipedia.org/wiki/Glass%20crusher
A glass crusher provides for the pulverization of glass to a specified yield size or less. Recycling operations may range from simple, manually-fed, self-contained machines to elaborate crushing systems complete with screens, conveyors, crushers and separators. All non-glass contaminants must generally be removed from the glass prior to recycling. The processes used in glass crushing for recycling involve the same methods used by the aggregate industry for crushing rock into sand (rock crusher). Vertical shaft impactor (VSI) glass crushing The use of VSI crushers in large scale operations allows the production of up to 125 tons per hour of crushed glass cullet. VSI crushers use a high speed rotor with wear-resistant tips and a crushing chamber against whose surfaces the glass is 'thrown'. VSI crushers utilize velocity rather than surface force as the predominant force to break glass, as this allows the breaking force to be applied evenly both across the surface of the material and through the mass of the material. In its shattered state, glass has a jagged and uneven surface. Applying surface force (pressure) results in unpredictable and typically non-cubical particles. As glass is 'thrown' by a VSI rotor against a solid anvil, it fractures and breaks along fissures. Final particle size can be controlled by 1) the velocity at which the glass is thrown against the anvil and 2) the distance between the end of the rotor and the impact point on the anvil. The product resulting from VSI crushing is generally of a consistent cubical shape, which may optimize yield in consumptive applications such as the fabrication of fiberglass, ceramic ware, flux agents and abrasives. Due to the highly abrasive nature of the glass material, a VSI crushing process is generally preferred over horizontal shaft impactors and most other crushing methods, which incur higher maintenance costs and shorter wear-part lives. VSI crushers generally utilize a high speed spinning rotor at the center of the crushing chamber and an outer impact surface of either abrasion-resistant metal anvils or crushed glass (or rock in aggregate applications). Utilizing cast metal surfaces ('anvils') is traditionally referred to as a "shoe and anvil VSI". Utilizing crushed material on the outer walls of the crusher for new material to be crushed against is traditionally referred to as "rock on rock VSI". VSI Principle of Operation References External links Glass crusher for glass bottles recycling - Info and Video in Cogelme site Industrial equipment Recycling
Glass crusher
Engineering
517
21,869,208
https://en.wikipedia.org/wiki/Mucoromycotina
Mucoromycotina is a subphylum of uncertain placement in Fungi. It was considered part of the phylum Zygomycota, but recent phylogenetic studies have shown that Zygomycota was polyphyletic, and it was thus split into several groups; Mucoromycotina is now thought to be a paraphyletic grouping. Mucoromycotina is currently composed of 3 orders, 61 genera, and 325 species. Some common characteristics seen throughout the species include: development of coenocytic mycelium, saprotrophic lifestyles, and filamentous growth. History Zygomycete fungi were originally ascribed only to the phylum Zygomycota. Such classifications were based on physiological characteristics, with little genetic support. A genetic study of zygomycete fungi performed in 2016 showed that further classification of the group was possible, thus splitting it into Zoopagomycota, Entomophthoromycota, Kickxellomycotina, and Mucoromycotina. The study placed these groups as being sister to Dikarya, but without further research, their exact locations in Fungi remain unknown. Many of the questions regarding these groups stem from the difficulty of collecting and growing them in culture, so the current groupings are based on the few that have been successfully collected and which could undergo genomic testing with a certain level of accuracy. Taxonomy The exact placement of Mucoromycotina is currently unknown. It is currently incertae sedis, alongside Zoopagomycota, Entomophthoromycota, and Kickxellomycotina, whose placements are also currently unknown. These groups originally comprised Zygomycota alongside others that were assigned to Glomeromycota, which was elevated to phylum in 2001. These groups are sister to Dikarya, which contains Ascomycota and Basidiomycota. Studies have currently divided Mucoromycotina into 3 orders: Endogonales, Mucorales, and Mortierellales. All three orders contain species that are saprotrophic, with others forming relationships with other organisms. There are still many questions regarding Mucoromycotina and the organisms that compose it, owing to limited collected samples. Orders Endogonales This order currently contains 2 families (Endogonaceae and Densosporaceae), 7 genera, and 40 species. Not much is known about this order, other than readily noticeable characteristics. They produce subterranean sporocarps, which are ingested by small mammals attracted by the fetid odor they produce. Cultured specimens have shown that they produce coenocytic mycelium, and can be saprotrophic or mycorrhizal. This order was first described in 1931 by Jacz. & P.A.Jacz., after being monographed in 1922 by Thaxter. Mucorales Often referred to as pin molds, members of this order produce sporangia held up on hyphae, called sporangiophores. There are currently 13 families in this order, divided into 56 genera, and approximately 300 species. They can be parasitic or saprotrophic in nature and reproduce asexually. Much is known about this order since some of the species cause damage to stored food, with several others causing mycosis in immune-compromised individuals. The order was proposed in 1878 by van Tieghem, as the examined samples did not fit in with what was Entomophthorales at the time. Mortierellales Previously considered a family of Mucorales, it was suggested as its own order in 1998. At the time it only contained 2 genera, one of which remains. What is known is that species in this order can be parasitic or saprotrophic in nature.
Cultured specimens show that they produce a fine mycelium, with branched sporangia, and produce a garlic-like odor. They are widespread, showing up in soil samples from many different locations. The most studied genus in this order is Mortierella, which contains species that cause crown rot in strawberries. There are currently 6 families and 13 described genera, with more than 100 species. Mortierella polycephala was the first species described, in 1863 by Coemans, and was named after M. Du Mortier, the president of the Société de Botanique de Belgique. Dissophora decumbens, the second, was not described until 1914, and the most recent, Lobosporangium transversale, was described in 2004. Ecology The species described in this subphylum have evolved 3 main lifestyles: saprotrophic, mycorrhizal, or parasitic. Saprotrophic species are involved in the decomposition of organic matter, mycorrhizal species form symbiotic relationships with plants, and parasitic species form harmful symbiotic relationships with other organisms. Saprotrophs Saprotrophs break down decomposing matter into different components: proteins into amino acids, lipids into fatty acids and glycerol, and starches into disaccharides. The species responsible usually require excess water, oxygen, pH less than 7, and low temperatures. Parasitism Parasitic species seen in Mucorales and Mortierellales cause infections in crops and immune-compromised animals. A common infection of plants by some species in Mucorales is referred to as crown rot or stem rot; common symptoms are rotting near the soil line and rotting on one side or on lateral branches. Treatment is difficult if the disease is not caught in its early stages, and it usually results in the death of the plant. Crown rot is seen in cereal plants (wheat, barley), with experiments from 2015 showing crop losses of 0.01 t/ha or more per unit increase in crown rot index. In addition to cereal plants, crown rot is seen in strawberries and other such low-growing plants. Mycorrhizal Mycorrhizal, literally "fungus-root", interactions are symbioses between fungi and plants. Such interactions are based on nutrient acquisition and sharing: the fungus increases the range over which nutrients are gathered, and the plant provides materials that the fungus cannot produce. There are two main types of interactions: arbuscular endomycorrhizal and ectomycorrhizal. Arbuscular endomycorrhizal interactions are those in which the fungus is allowed to enter the plant and inhabit special cells. The fungi produce structures that look like trees, called "arbuscules," inside these cells. Ectomycorrhizal interactions are similar symbioses; however, the fungi are not allowed into any plant cells, though they may grow between them. Plant-microbe interactions Endogonales A new genus proposed in 2017, Jimgerdemannia, contains species with an ectomycorrhizal trophic mode. Further research is needed to understand these species. Several studies have observed fossils of some potential members forming mycorrhizal interactions with ancient plants. The genus Endogone is important in nutrient-deficient soils, such as sand dunes. The presence of species in this genus stabilizes the soil and provides some assistance to dune plants. Mucorales Some species in the genus Mucor are well known for causing crown rot in cereal plants and damage to stored foods. Mortierellales The majority of the species in this group are saprotrophic, and thus form no known relationships with plants.
They do, however, play a role in nutrient transfer through the breakdown of decaying organic matter. The few that are parasitic are parasites of animals, not plants. Evolution A genome study of Rhizophagus irregularis performed in 2013 supported the hypothesis that Glomeromycota was responsible for early plant-fungi symbiotic relationships. A paper released in 2015 suggests that a Mucoromycotina species formed a symbiotic relationship with liverworts during the Paleozoic era, which may have been the first plant-fungi symbiotic relationship. Phylogenetic studies have been unable to place Mucoromycotina in any definitive location within fungi; however, some research has suggested that the lineage is fairly old. Due to recent advancements allowing for better phylogenetic studies, species assigned to closely related groups are being reassigned to Mucoromycotina, one such species being Rhizophagus irregularis. Broader implications Phylogeny With the improvement of phylogenetic studies, the placement of several established groups in fungi has been called into question. There is some debate regarding the relationship between Mucoromycotina and Glomeromycota, with some species currently in Glomeromycota being moved to Mucoromycotina. Environment The genus Endogone in Endogonales contains species that grow in sand dunes, aiding the plants that grow in the nutrient-poor soils. The mycelium that is formed also plays a role in soil stabilization, preventing erosion. Other species produce fruiting bodies that are included in the diets of various small rodent species. Species found in Mortierella of Mortierellales have roles in decomposition of organic matter. Some species are among the first to colonize new roots, and others have shared a relationship with spruce trees, though the exact nature of the relationship is unknown. Disease Crown rot Crown rot is a plant disease caused by species in Mucorales. The disease is characterized by rotting tissue at or near where the stem meets the soil; its symptoms, host plants, and impact on crop yields are described under Parasitism above. Zygomycosis Zygomycosis is a fungal infection seen in animals with compromised immune systems, meaning the host is already ill before the fungus invades and inhabits the body. It is also referred to as mucormycosis, depending on the species involved. Uses A study examining the insecticidal properties of several fungal species included Mortierella. The study focused on species isolated from Antarctica, with the intention of identifying potentially useful adaptations. The Mortierella species examined was shown to have some insecticidal properties against wax moth and housefly larvae. Further research is needed to determine the process by which this is possible, and its potential usefulness. Problems A recurring problem with the study of this subphylum is the difficulty of culturing specimens. Many of the species identified and used in phylogenetic and other studies have been collected in the field, with few of them cultured in laboratories. This issue impacts the ability to produce extensive phylogenetic trees, resulting in the currently unresolved position of the subphylum within fungi. References Zygomycota
Mucoromycotina
Biology
2,253
46,872,870
https://en.wikipedia.org/wiki/Complex%20spacetime
Complex spacetime is a mathematical framework that combines the concepts of complex numbers and spacetime in physics. In this framework, the usual real-valued coordinates of spacetime are replaced with complex-valued coordinates. This allows for the inclusion of imaginary components in the description of spacetime, which can have interesting implications in certain areas of physics, such as quantum field theory and string theory. The notion is entirely mathematical, with no physics implied; it should be seen as a tool, as exemplified by the Wick rotation. Real and complex spaces Mathematics The complexification of a real vector space results in a complex vector space (over the complex number field). To "complexify" a space means extending ordinary scalar multiplication of vectors by real numbers to scalar multiplication by complex numbers. For complexified inner product spaces, the complex inner product on vectors replaces the ordinary real-valued inner product, an example of the latter being the dot product. In mathematical physics, when we complexify a real coordinate space R^n we create a complex coordinate space C^n, referred to in differential geometry as a "complex manifold". The space C^n can be related to R^2n, since every complex number consists of two real numbers. A complex spacetime geometry refers to the metric tensor being complex, not spacetime itself. Physics The Minkowski space of special relativity (SR) and general relativity (GR) is a 4-dimensional pseudo-Euclidean space. The spacetime underlying Albert Einstein's field equations, which mathematically describe gravitation, is a real 4-dimensional pseudo-Riemannian manifold. In quantum mechanics, wave functions describing particles are complex-valued functions of real space and time variables. The set of all wavefunctions for a given system is an infinite-dimensional complex Hilbert space. History The notion of spacetime having more than four dimensions is of mathematical interest in its own right. Its appearance in physics can be rooted in attempts to unify the fundamental interactions, originally gravity and electromagnetism. These ideas prevail in string theory and beyond. The idea of complex spacetime has received considerably less attention, but it has been considered in conjunction with the Lorentz–Dirac equation and the Maxwell equations. Other ideas include mapping real spacetime into a complex representation space of SL(2,C); see twistor theory. In 1919, Theodor Kaluza posted his 5-dimensional extension of general relativity to Albert Einstein, who was impressed with how the equations of electromagnetism emerged from Kaluza's theory. In 1926, Oskar Klein suggested that Kaluza's extra dimension might be "curled up" into an extremely small circle, as if a circular topology is hidden within every point in space. Instead of being another spatial dimension, the extra dimension could be thought of as an angle, which created a hyper-dimension as it spun through 360°. This 5-dimensional theory is named Kaluza–Klein theory. In 1932, Hsin P. Soh of MIT, advised by Arthur Eddington, published a theory attempting to unify gravitation and electromagnetism within a complex 4-dimensional Riemannian geometry. The line element ds² is complex-valued, so that the real part corresponds to mass and gravitation, while the imaginary part corresponds to charge and electromagnetism. The usual space x, y, z and time t coordinates themselves are real and spacetime is not complex, but tangent spaces are allowed to be.
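As a concrete aside, the Wick rotation mentioned at the start of this article can be written out explicitly; this is a standard identity, not a construction specific to Soh's theory. Substituting t = -iτ into the Minkowski line element gives a Euclidean one:

    \[
    ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
    \quad\xrightarrow{\;t \,=\, -i\tau\;}\quad
    ds^2 = c^2\,d\tau^2 + dx^2 + dy^2 + dz^2 ,
    \]

since dt^2 = (-i dτ)^2 = -dτ^2; the indefinite Minkowski form becomes a positive-definite Euclidean form on this complexified section.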
For several decades after Einstein published his general theory of relativity in 1915, he tried to unify gravity with electromagnetism to create a unified field theory explaining both interactions. In the latter years of World War II, Einstein began considering complex spacetime geometries of various kinds. In 1953, Wolfgang Pauli generalised the Kaluza–Klein theory to a six-dimensional space, and (using dimensional reduction) derived the essentials of an SU(2) gauge theory (applied in quantum mechanics to the electroweak interaction), as if Klein's "curled up" circle had become the surface of an infinitesimal hypersphere. In 1975, Jerzy Plebanski published "Some Solutions of Complex Einstein Equations". There have been attempts to formulate the Abraham–Lorentz force in complex spacetime by analytic continuation. See also Construction of a complex null tetrad Four-vector Hilbert space Twistor space Spherical basis Riemann–Silberstein vector References Further reading Spacetime Theory of relativity
Complex spacetime
Physics,Mathematics
899
26,891,264
https://en.wikipedia.org/wiki/List%20of%20institutions%20offering%20type%20design%20education
The following is a list of institutions offering type design education. Type design (also: typeface design, pop. font design), the art of creating typefaces, is taught at art and design colleges around the world. A small number of institutions offer a degree in type design; many others offer type design courses as part of their BA or MA curriculum in Graphic Design or Visual Communication. When no full type design course is offered, schools may invite professional type designers to give workshops; these one-off events are not listed in the overview below. Specialized type design degrees Argentina Universidad de Buenos Aires FADU/UBA, Secretaría de Posgrado MT-UBA, Maestría en Tipografía Degree: Brazil Centro Universitário Senac, São Paulo Pós-graduação Senac-SP Curso de Pós-graduação em Tipografia Degree: Lato sensu postgraduate France École Estienne, Paris Atelier de Création Typographique DSAA Design Typographique Degree: Master 1 École supérieure d’art et de design, Amiens EsadType Post-diplôme Ésad Amiens Degree: Post-master course Atelier National de Recherche Typographique, Nancy ANRT Degree: Post-master course Germany Hochschule für Grafik und Buchkunst Leipzig Class Type-Design Degree: Diplom Type-Design (MFA equivalent) Bauhaus-Universität Weimar Associate Professorship for Typography and Type Design Degree: Visual Communication (B.A.), Visual Communication (M.A.) Technische Hochschule Augsburg Professorship for Type Degree: Communication Design (B.A.), Identity Design (M.A.) Hochschule für Gestaltung Offenbach am Main Professorship for Typografie/Type Design Degree: Fine Arts (B.A.), Fine Arts (M.A.) Hochschule für Angewandte Wissenschaften Hamburg Professorship for Type Design Degree: Communication Design (B.A.), Communication Design (M.A.) 
Mexico Centro de Estudios Gestalt, Veracruz Maestría en Diseño Tipográfico The Netherlands Koninklijke Academie van Beeldende Kunsten (KABK), The Hague (Royal Academy of Art (The Hague)) Master of Design Type and Media Head of Program: Erik van Blokland Degree: MA Spain Tipo.g Laura Meseguer, Co-director Degree: Diploma in Typographic Creation University School of Design and Art of Barcelona (EINA) https://www.eina.cat/es/masters-y-postgrados/creacion-tipografica Degree: Diploma in Typographic Creation Switzerland École cantonale d'art de Lausanne (ECAL) (University of Applied Sciences Western Switzerland) MA Art Direction: Type Design Degree: MA Art Direction Zürcher Hochschule der Künste, (University of the Arts) ZHdK, Zürich CAS Schriftgestaltung (Certificate of Advanced Studies in Type Design) MAS Type Design and Typography (Master of Advanced Studies in Type Design and Typography) Degree: Certificate United Kingdom University of Reading Department of Typography & Graphic Communication MA in Typeface Design Degree: MA Typeface Design Degree: PhD Typography Anglia Ruskin University Degree: PHD Graphic Design & Typography United States The Cooper Union for the Advancement of Science and Art, New York City Postgraduate Certificate in Typeface Design (Type@Cooper) Degree: Certificate Letterform Archive, San Francisco Postgraduate Certificate in Typeface Design (Type West) Degree: Certificate School of Visual Arts, New York City SVA Type Lab A 4-week immersive program in Typeface Design Degree: CE Summer Residency Type design courses at design colleges and universities Australia University of Technology Sydney School of Design, Faculty of Design, Architecture & Building Bachelor of Design in Visual Communication Belgium Plantin Instituut voor Typografie, Antwerp Expert Class Type Design Degree: Certificate PXL-MAD, Hasselt Reading Type & Typography Degree: Bachelor and Master ENSAV La Cambre, Brussels Atelier de Typographie & Design Graphique Degree: Bachelor and Master Brazil Universidade Federal do Ceará (UFC) Bacharelado em Design (Undergraduate) Centro Universitário Senac Bacharelado em Design / Habilitação em Comunicação Visual (Undergraduate) Universidade de São Paulo, Faculdade de Arquitetura e Urbanismo Curso de Design (Undergraduate) Universidade Federal do Espírito Santo (UFES) Bacharelado em Desenho Industrial / Habilitação em Programação Visual (Undergraduate) Universidade Federal de Santa Maria (UFSm) Bacharelado em Desenho Industrial / Habilitação em Programação Visual (Undergraduate) Universidade Estadual Paulista 'Julio de Mesquita Filho' (UNesp, Bauru) Bacharelado em Design com habilitação em Design Gráfico (Undergraduate) Escola Superior de Propaganda e Marketing (ESPM) Bacharelado em Design com habilitação em Comunicação Visual e ênfase em Marketing (Undergraduate) Canada Emily Carr University of Art + Design, Vancouver Studio courses in typography Studio course in Type Design George Brown College SCHOOLOFDESIGN, Toronto Studio course: Experimental Typography (Instructor: Carl Shura) University of Guelph Humber, Toronto Course in Type Design Instructor: Patrick Griffin OCAD University, Toronto Course in The Art of Type York University, Toronto Course in Typeface Design Université du Québec à Montréal, Montreal École de design Course in Type design Instructors: Alessandro Colizzi, Étienne Aubert-Bonn Colombia Universidad Nacional de Colombia, Bogotá Studio course in Type Design Czech Republic UMPRUM (AKA VŠUP or AAAD), Prague (Academy of Arts, Architecture and Design) 
Studio of Type Design and Typography VUT: FaVU, Brno (Faculty of Fine Arts at the Brno University of Technology) Denmark Danish School of Media, Copenhagen The Danish Design School, Copenhagen Finland Aalto University School of Art and Design Department of Media KyAMK, Kymenlaakso University of Applied Sciences France École supérieure d’art et de design (ESAD), Amiens Post-diplôme « Typographie & language » diplome.esad-amiens.fr Post-diplôme Ésad Amiens ESAC, Pau Atelier de typographie Germany HAW Hochschule für Angewandte Wissenschaften, Hamburg BA / MA Kommunikationsdesign mit Schwerpunkt Type Design Designschule München (Berufsfachschule für Kommunikationsdesign) Fachhochschule Augsburg (University of Applied Science) Fachbereich Gestaltung, Studiengang Kommunikationsdesign Berliner technische Kunsthochschule BTK (University of Applied Science) Fachbereich Design, studiengangsübergreifend Fachhochschule Potsdam (University of Applied Science) Fachbereich Design, Modul Kommunikationsdesign Hochschule Darmstadt (University of Applied Science) Fachbereich Gestaltung, Studiengang Kommunikationsdesign Hochschule Niederrhein (University of Applied Science) Fachbereich Design, Faculty of Design Hochschule der Bildenden Künste Saar (Academy of Fine Arts Saar) Studiengang Kommunikationsdesign (BFA, Diplom, MFA) Kunsthochschule Weißensee, Berlin (Academy of Fine and Applied Arts Berlin-Weißensee) Studiengang Visuelle Kommunikation (BFA, Master) Muthesius Kunsthochschule, Kiel (Muthesius University of Fine Arts and Design) Department of Design, Communication Design, Typography, Typeface Design BA and MA courses Fachbereich Design, Studiengang Kommunikationsdesign, Typografie, Schriftgestaltung BA and MA courses Ireland National College of Art & Design, Dublin BA(Hons) Graphic Design Italy cfp Bauer, Milano Type design fundamentals Type design and Type in motion Isia Urbino, Urbino Typographic techniques Type design Politecnico di Milano, Milano Type Design (Communication Design BA) México Benemérita Universidad Autónoma de Puebla, Puebla Specialization course in Typeface Design Poland Akademia Sztuk Pięknych, Poznań Poznaniu Pracownia Znaku i Typografii Sign and Typography Studio Akademicki Kurs Typografii, Warszawa Akademicki Kurs Typografii Portugal Communication and Art Department, University of Aveiro Course Syllabus Typography Course blog Escola Superior de Arte e Design das Caldas da Rainha, Instituto Politécnico de Leiria Graphic Design Course Type Design studies Escola Superior de Arte e Design de Matosinhos Communication Design Course Type Design studies Fine Art Faculty of the University of Porto Course Syllabus Type Design course website (course description, blog, projects, results,…) Russia British Higher School of Art and Design, Moscow Course of Type & Typography Moscow State University of Printing Arts Type Design Workshop Spain Tipo.g Escuela de Tipografía de Barcelona Tipo.g Escuela de Tipografía de Barcelona Sweden Södertörn University, Stockholm Typsnittsdesign och typografi Typsnittsdesign och fontutveckling United States California Institute of the Arts Program in Graphic Design Maine College of Art Undergraduate program in Graphic Design (BFA) Massachusetts College of Art & Design Graphic Design undergraduate program (BFA) Parsons School of Design, New York Undergraduate Type Design Portland State University Intro Level Type Design Course Pratt Institute School of Art and Design Rhode Island School of Design 1 Undergraduate and Masters Introduction to Type Design course 
Savannah College of Art and Design 1 type face design undergraduate, 3 graduate level typeface design classes and 1 typeface marketing School of the Museum of Fine Arts at Tufts University, Boston Digital Type Founding: a fifteen-week course in type design Instructor: Charles Gibbons School of Visual Arts (SVA), New York Continuing Education course in Type Design University of Washington School of Art, Art History, & Design The Visual Communication Design Program Yale School of Art Letterform/Type Design (showcase of previous work) California College of the Arts One class on typeface design, offered as an investigative studio in junior year Type West at Letterform Archive A year-long postgraduate certificate in typeface design grounded in the Letterform Archive collection of over 50,000 specimens from type and design history California State University, Los Angeles Typeface Design Notes External links Education page of the ATypI website: Association Typographique Internationale, the worldwide association of typographers, typeface designers and manufacturers Education Wiki on Typophile Brazilian Typography Education Wiki (in Portuguese) Não conserte o que não está quebrado - Student experience during the Brazilian Postgraduate Studies in Typography (in Portuguese) Typography Type design education, Institutions offering Design
List of institutions offering type design education
Engineering
2,388
15,525,260
https://en.wikipedia.org/wiki/Human%20power
Human power is work or energy that is produced from the human body. It can also refer to the power (rate of work per unit time) of a human. Power comes primarily from muscles, but body heat is also used to do work like warming shelters, food, or other humans. World records of power performance by humans are of interest to work planners and work-process engineers. The average level of human power that can be maintained over a certain duration of time is of interest to engineers designing work operations in industry. Human-powered transport includes bicycles, rowing, skiing and many other forms of mobility. Human-powered equipment is occasionally used to generate, and sometimes to store, electrical energy for use where no other source of power is available. These include the Gibson girl survival radio, wind-up or (clockwork) radio and pedal radio. Available power Normal human metabolism produces heat at a basal metabolic rate of around 80 watts. During a bicycle race, an elite cyclist can produce around 440 watts of mechanical power over an hour, and track cyclists can produce over 2,500 watts in short bursts; modern racing bicycles have greater than 95% mechanical efficiency. An adult of good fitness is more likely to average between 50 and 150 watts for an hour of vigorous exercise. Over an 8-hour work shift, an average, healthy, well-fed and motivated manual laborer may sustain an output of around 75 watts of power. However, the potential yield of human electric power is decreased by the inefficiency of any generator device, since all real generators incur losses during the energy conversion process. It is possible to use exercise equipment for power generation, by attaching the moving parts to components of electric generators; some home gym equipment uses DC generators to power readouts and displays, and to control the amount of resistance offered by the machine. The amount of energy generated is so small compared to industrial power sources that the cost of conversion equipment makes it financially impractical. For example, supplying an average United States home solely with electricity generated from exercise equipment for one day would require more than a hundred people riding stationary bicycles throughout that day. Transport Several forms of transport utilize human power. They include the bicycle, wheelchair, walking, skateboard, wheelbarrow, rowing, skis, and rickshaw. Some forms may utilize more than one person. The historical galley was propelled by freemen or citizens in ancient times, and by slaves captured by pirates in more recent times. The MacCready Gossamer Condor was the first human-powered aircraft capable of controlled and sustained flight, making its first flight in 1977. In 2007, Jason Lewis of Expedition 360 became the first person to circumnavigate the globe at non-polar latitudes using only human power—walking, biking, and rollerblading across the landmasses; and swimming, kayaking, rowing, and using a 26-foot-long pedal-powered boat to cross the oceans. General devices and machines Treadwheels, also called treadmills, are engines or machines powered by humans. These may resemble a water wheel in appearance, and can be worked either by a human treading paddles set into the circumference (treadmill), or by a human standing inside the wheel (treadwheel). Some devices use human power. They may directly use mechanical power from muscles, or a generator may convert energy generated by the body into electrical power.
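The claim above about powering a home from exercise equipment can be sanity-checked with a short calculation. The 75 W output over an 8-hour shift comes from the "Available power" figures above; the household consumption and generator efficiency are assumed round numbers, so the result is an order of magnitude only:

    HOME_KWH_PER_DAY = 30.0  # assumed average US household consumption, kWh/day
    WATTS_PER_RIDER = 75.0   # sustainable output over a work shift (from above)
    SHIFT_HOURS = 8.0        # one 8-hour shift per rider per day (from above)
    GEN_EFFICIENCY = 0.8     # assumed generator/conversion efficiency

    kwh_per_rider = WATTS_PER_RIDER * SHIFT_HOURS * GEN_EFFICIENCY / 1000.0
    riders = HOME_KWH_PER_DAY / kwh_per_rider
    print(f"{kwh_per_rider:.2f} kWh per rider-day; about {riders:.0f} riders needed")
    # 0.48 kWh per rider-day; about 62 riders needed

With less optimistic assumptions (untrained riders, shorter stints, lossier generators), the figure easily exceeds one hundred, consistent with the statement above.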
Human-powered equipment primarily consists of electrical appliances which can be powered by electricity generated by human muscle power as an alternative to conventional sources of electricity such as disposable primary batteries and the electrical grid. Such devices contain electric generators or an induction system to recharge their batteries. Separate crank-operated generators are now available to recharge battery-powered portable electronic devices such as mobile phones. Others, such as mechanically powered flashlights, have the generator integrated within the device. Wristwatches can use muscle power to keep their mainsprings wound up. An alternative to rechargeable batteries for electricity storage is supercapacitors, now used in some devices such as mechanically powered flashlights. Devices that store the energy mechanically, rather than electrically, include clockwork radios with a mainspring, which is wound up by a crank and turns a generator to power the radio. An early example of regular use of human-powered electrical equipment is in early telephone systems; current to ring the remote bell was provided by a subscriber cranking a handle on the telephone, which turned a small magneto generator. Human-powered devices are useful as emergency equipment, when natural disaster, war, or civil disturbance make regular power supplies unavailable. They have also been seen as economical for use in poor countries, where batteries may be expensive and mains electricity unreliable or unavailable. They are also an environmentally preferable alternative to the use of disposable batteries, which are a wasteful source of energy and may introduce heavy metals into the environment. Communication is a common application for the relatively small amount of electric power that can be generated by a human turning a generator. Human-powered radio Survival radio The World War II-era Gibson girl survival radio used a hand-cranked generator to provide power; this avoided the unreliable performance of dry-cell batteries that might be stored for months before they were needed, although it had the drawback that the survivor had to be fit enough to turn the crank. Survival radios were invented and deployed by both sides during the war. The SCR-578 (and the similar post-war AN/CRT-3) survival radio transmitters carried by aircraft on over-water operations were given the nickname "Gibson Girl" because of their "hourglass" shape, which allowed them to be held stationary between the legs while the generator handle was turned. Military radio During World War II, U.S. troops sometimes employed hand crank generators, GN-35 and GN-45, to power Signal Corps Radio transmitter/receivers. The hand cranking was laborious, but generated sufficient current for smaller radio sets, such as the SCR-131, SCR-161, SCR-171, SCR-284, and SCR-694. Windup radio A windup radio or clockwork radio is a radio that is powered by human muscle power rather than batteries or the electrical grid. In the most common arrangement, an internal electric generator is run by a mainspring, which is wound by a hand crank on the case. Turning the crank winds the spring and a full winding will allow several hours of operation. Alternatively, the generator can charge an internal battery. Radios powered by hand-cranked generators are not new, but their market was previously seen as limited to emergency or military organizations.
The modern clockwork radio was designed and patented in 1991 by British inventor Trevor Baylis as a response to the HIV/AIDS crisis. He envisioned it as a radio for use by poor people in developing countries, especially in Africa, without access to batteries. In 1994, British accountant Chris Staines and his South African partner, Rory Stear, secured the worldwide license to the invention and cofounded Baygen Power Industries (now Freeplay Energy Ltd), which produced the first commercial model. The key to its design, which is no longer in use, was the use of a constant-velocity spring to store the potential energy. After Baylis lost control of his invention when Baygen became Freeplay, the Freeplay Energy units switched to disposable batteries charged by cheaper hand-crank generators. Like other self-powered equipment, windup radios were intended for camping, emergencies and for areas where there is no electrical grid and replacement batteries are hard to obtain, such as in developing countries or remote settlements. They are also useful where a radio is not used on a regular basis and batteries would deteriorate, such as at a vacation house or cabin. Windup radios designed for emergency use often include flashlights, blinking emergency lights, and emergency sirens. They also may include multiple alternate power sources, such as disposable or rechargeable batteries, cigarette lighter receptacles, and solar cells. Pedal-powered transmitter The pedal radio (or pedal wireless) was a radio transmitter-receiver powered by a pedal-driven generator. It was developed by South Australian engineer and inventor Alfred Traeger in 1929 as a way of providing radio communications to remote homesteads and cattle stations in the Australian outback. There were no mains or generator power available at the time and batteries to provide the power required would have been too expensive. It was a highly important invention, as it was this technology that enabled the Royal Flying Doctor Service, and later the School of the Air, linking people living remotely to emergency services and education. See also Manual labour Batteryless radio Bottle dynamo Crank (mechanism) Energy harvesting Micropower Pavegen References Renewable energy
Human power
Physics
1,804
61,506,065
https://en.wikipedia.org/wiki/C15H13FO2
The molecular formula C15H13FO2 (molar mass: 244.261 g/mol, exact mass: 244.089958 u) may refer to: Flurbiprofen Tarenflurbil (also called Flurizan) Molecular formulas
C15H13FO2
Physics,Chemistry
72
3,423,785
https://en.wikipedia.org/wiki/IEEE%201355
IEEE Standard 1355-1995, IEC 14575, or ISO 14575 is a data communications standard for Heterogeneous Interconnect (HIC). IEC 14575 is a low-cost, low-latency, scalable serial interconnection system, originally intended for communication between large numbers of inexpensive computers. IEC 14575 lacks many of the complexities of other data networks. The standard defined several different types of transmission media (including wires and optic fiber) to address different applications. Since the high-level network logic is compatible, inexpensive electronic adapters are possible. IEEE 1355 is often used in scientific laboratories. Promoters include large laboratories, such as CERN, and scientific agencies. For example, the ESA advocates a derivative standard called SpaceWire. Goals The protocol was designed for a simple, low-cost switched network made of point-to-point links. This network sends variable-length data packets reliably at high speed. It routes the packets using wormhole routing. Unlike Token Ring or other types of local area networks (LANs) with comparable specifications, IEEE 1355 scales beyond a thousand nodes without requiring higher transmission speeds. The network is designed to carry traffic from other types of networks, notably Internet Protocol and Asynchronous Transfer Mode (ATM), but does not depend on other protocols for data transfers or switching. In this, it resembles Multiprotocol Label Switching (MPLS). IEEE 1355 had goals similar to those of Futurebus and its derivatives Scalable Coherent Interface (SCI), and InfiniBand. The packet routing system of IEEE 1355 is also similar to VPLS, and uses a packet labeling scheme similar to MPLS. IEEE 1355 achieves its design goals with relatively simple digital electronics and very little software. This simplicity is valued by many engineers and scientists. Paul Walker (see external links) said that when implemented in an FPGA, the standard takes about a third the hardware resources of a UART (a standard serial port), and gives one hundred times the data transmission capacity, while implementing a full switching network and being easier to program. Historically, IEEE 1355 derived from the asynchronous serial networks developed for the Transputer model T9000 on-chip serial data interfaces. The Transputer was a microprocessor developed to inexpensively implement parallel computation. IEEE 1355 resulted from an attempt to preserve the Transputer's unusually simple data network. Its data strobe encoding scheme makes the links self-clocking, able to adapt automatically to different speeds. It was patented by Inmos under U.K. patent number 9011700.3, claim 16 (DS-Link bit-level encoding), and in 1991 under US patent 5341371, claim 16. The patent expired in 2011. Use IEEE 1355 inspired SpaceWire. It is sometimes used for digital data connections between scientific instruments, controllers and recording systems. IEEE 1355 is used in scientific instrumentation because it is easy to program and it manages most events by itself without complex real-time software. IEEE 1355 includes a definition for cheap, fast, short-distance network media, intended as the internal protocols for electronics, including network switching and routing equipment. It also includes medium- and long-distance network protocols, intended for local area networks and wide area networks. IEEE 1355 is designed for point-to-point use.
It could therefore take the place of the most common use of Ethernet, if it used equivalent signaling technologies (such as Low voltage differential signaling). IEEE 1355 could work well for consumer digital appliances. The protocol is simpler than Universal Serial Bus (USB), FireWire, Peripheral Component Interconnect (PCI) and other consumer protocols. This simplicity can reduce equipment expense and enhance reliability. IEEE 1355 does not define any message-level transactions, so these would have to be defined in auxiliary standards. A 1024 node testbed called Macramé was constructed in Europe in 1997. Researchers measuring the performance and reliability of the Macramé testbed provided useful input to the working group which established the standard. What it is The work of the Institute of Electrical and Electronics Engineers was sponsored by the Bus Architecture Standards Committee as part of the Open Microprocessor Systems Initiative. The chair of the group was Colin Whitby-Strevens, co-chair was Roland Marbot, and editor was Andrew Cofler. The standard was approved 21 September 1995 as IEEE Standard for Heterogeneous InterConnect (HIC) (Low-Cost, Low-Latency Scalable Serial Interconnect for Parallel System Construction) and published as IEEE Std 1355-1995. A trade association was formed in October 1999 and maintained a web site until 2004. The family of standards use similar logic and behavior, but operate at a wide range of speeds over several types of media. The authors of the standard say that no single standard addresses all price and performance points for a network. Therefore, the standard includes slices (their words) for single-ended (cheap), differential (reliable) and high speed (fast) electrical interfaces, as well as fiber optic interfaces. Long-distance or fast interfaces are designed so that there is no net power transfer through the cable. Transmission speeds range from 10 megabits per second to 1 gigabit per second. The network's normal data consists of 8-bit bytes sent with flow control. This makes it compatible with other common transmission media, including standard telecommunications links. The maximum length of the different data transmission media range from one meter to 3 kilometers. The 3 km standard is the fastest. The others are cheaper. The connectors are defined so that if a plug fits a jack, the connection is supposed to work. Cables have the same type of plug at both ends, so that each standard has only one type of cable. "Extenders" are defined as two-ended jacks that connect two standard cables. Interface electronics perform most of the packet-handling, routing, housekeeping and protocol management. Software is not needed for these tasks. When there is an error, the two ends of a link exchange an interval of silence or a reset, and then restart the protocol as if from power-up. A switching node reads the first few bytes of a packet as an address, and then forwards the rest of the packet to the next link without reading or changing it. This is called "wormhole switching" in an annex to the standard. Wormhole switching requires no software to implement a switching fabric. Simple hardware logic can arrange fail-overs to redundant links. Each link defines a full-duplex (continuous bidirectional transmission and reception) point-to-point connection between two communicating pieces of electronics. Every transmission path has a flow control protocol, so that when a receiver begins to get too much data, it can turn down the flow. 
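A minimal Python sketch of how such credit-based flow control can behave. The credit of eight characters per FCC matches the DS slices described later; the transmit() function, the class name, and the queueing policy are illustrative stand-ins, not part of the standard:

    def transmit(byte):
        # Stand-in for the link's character-level transmitter (hypothetical).
        print(f"TX {byte:#04x}")

    class LinkSender:
        CREDIT_PER_FCC = 8  # DS slices grant 8 normal data characters per FCC

        def __init__(self):
            self.credit = 0   # characters we may still transmit
            self.queue = []   # data characters waiting for credit

        def on_fcc_received(self):
            # Each flow control character grants a fresh block of credit.
            self.credit += self.CREDIT_PER_FCC
            self._drain()

        def send(self, byte):
            self.queue.append(byte)
            self._drain()

        def _drain(self):
            # Transmit only while the far end has granted us credit.
            while self.queue and self.credit > 0:
                self.credit -= 1
                transmit(self.queue.pop(0))

Because credit is only granted as the far end frees receive buffers, the receiver never overflows, which is one reason the standard can avoid most packet retransmission.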
Every transmission path's electronics can send link control data separately from normal data. When a link is idle, it transmits NULL characters. This maintains synchronization, finishes any remaining transmission quickly, and tests the link. Some SpaceWire users are experimenting with half-duplex versions. The general scheme is that half-duplex uses one transmission channel rather than two. In space, this is useful because the weight of the wires is halved. Controllers would reverse the link after sending an end-of-packet character. The scheme is most effective in the self-clocking electrical systems, such as SpaceWire. In the high-speed optical slices, half-duplex throughput would be limited by the synchronization time of the phase-locked loops used to recover the bit clock. Definition This description is a brief outline. The standard defines more details, such as the connector dimensions, noise margins, and attenuation budgets. IEEE 1355 is defined in layers and slices. The layers are network features that are similar in different media and signal codings. Slices identify a vertical slice of compatible layers. The lowest layer defines signals. The highest defines packets. Combinations of packets, the application or transaction layer, are outside the standard. A slice, an interoperable implementation, is defined by a convenient descriptive code, SC-TM-dd, where: SC is the signal coding system. Valid values are DS (data strobe encoding), TS (three of six), and HS (high speed). TM is the transmission medium. Valid values are SE (single-ended electrical), DE (differential electrical), and FO (fiber optic). dd is the speed in hundreds of megabaud (MBd). A baud rate relates to a change of the signal. Transmission codings may send several bits per second per baud, or several baud per bit per second. Defined slices include: DS-SE-02, cheap, useful inside electronic equipment (200 Mbit/s, <1 meter maximum length). DS-DE-02, noise-resistant electrical connections between equipment (200 Mbit/s, <10 meters). TS-FO-02, good, useful for long-distance connections (200 Mbit/s, <300 meters). HS-SE-10, short very fast connections between equipment (1 Gbit/s, <8 meters). HS-FO-10, long very fast connections (1 Gbit/s, <3000 meters). SpaceWire is very similar to DS-DE-02, except it uses a microminiature 9-pin "D" connector (lower weight) and low-voltage differential signaling. It also defines some higher-level standard message formats, routing methods, and connector and wire materials that work reliably in vacuum and severe vibration.
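The SC-TM-dd codes above are regular enough to unpack mechanically. A small illustrative Python helper follows; the dictionary keys and descriptive strings are mine, not wording from the standard:

    CODINGS = {"DS": "data strobe encoding", "TS": "three of six", "HS": "high speed"}
    MEDIA = {"SE": "single-ended electrical",
             "DE": "differential electrical",
             "FO": "fiber optic"}

    def parse_slice(code):
        sc, tm, dd = code.split("-")
        return {"signal_coding": CODINGS[sc],
                "medium": MEDIA[tm],
                "speed_mbd": int(dd) * 100}  # dd gives hundreds of megabaud

    print(parse_slice("DS-DE-02"))
    # {'signal_coding': 'data strobe encoding',
    #  'medium': 'differential electrical', 'speed_mbd': 200}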
Layer 0: The signal layer In all slices, each link can continuously transmit in both directions ("full duplex"). Each link has two transmission channels, one for each direction. In a link's cable, the channels have a "half twist" so that input and output always go to the same pins of the connector on both ends of the cable. This makes the cables "promiscuous", that is, each end of any cable will plug into any jack on a piece of equipment. Each end of a link's cable must be clearly marked with the type of link: for example "IEEE 1355 DS-DE Link Cable". Layer 1: The Character Layer Every slice defines 256 data characters. This is enough to represent 8 bits per character. These are called "normal data" or "N-chars." Every slice defines a number of special link control characters, sometimes called "L-chars." The slice cannot confuse them with N-chars. Each slice includes a flow control link-control character, or FCC, as well as L-chars for NULL (no data), ESCAPE, end of packet, and exceptional end of packet. Some slices add a few more to start up the link, diagnose problems, etc. Every slice has error detection defined at the character layer, usually using parity. The parity is usually distributed over several characters. A flow-control character gives a node permission to transmit a few normal data characters. The number depends on the slice, with faster slices sending more characters per FCC. Building flow control in at a low level makes the link far more reliable, and removes much of the need to retransmit packets. Layer 2: The Exchange layer Once a link starts, it continuously exchanges characters. These are NULLs if there is no data to exchange. This tests the link, and ensures that the parity bits are sent quickly to finish messages. Each slice has its own start-up sequence. For example, DS-SE and DS-DE are silent, then start sending as soon as they are commanded to start. A received character is a command to start. In error detection, normally the two ends of the link exchange a very brief silence (e.g. a few microseconds for DS-SE), or a reset command, and then try to reset and restore the link as if from power-up. Layer 3: The common packet layer A packet is a sequence of normal data with a specific order and format, ended by an "end of packet" character. Links do not interleave data from several packets. The first few characters of a packet describe its destination. Hardware can read those bytes to route the packet. Hardware does not need to store the packet, or perform any other calculations on it, in order to copy it and route it. One standard way to route packets is wormhole source routing, in which the first data byte always tells the router which of its outputs should carry the packet. The router then strips off the first byte, exposing the next byte for use by the next router. Layer 4: The Transaction Layer IEEE 1355 acknowledges that there must be sequences of packets to perform useful work. It does not define any of these sequences. Slice: DS-SE-02 DS-SE stands for "Data and Strobe, Single-ended Electrical." This is the least expensive electrical standard. It sends data at up to 200 megabits per second, for up to 1 meter, which is useful inside an instrument for reliable low-pin-count communications. A connection has two channels, one per direction. Each channel consists of two wires carrying strobe and data. The strobe line changes state whenever the data line starts a new bit with the same value as the previous bit. This scheme makes the links self-clocking, able to adapt automatically to different speeds. Data characters start with an odd parity bit, followed by a zero bit, which marks the character as a normal data character, followed by eight data bits. Link control characters start with an odd parity bit, followed by a one bit (marking the character as a link control character), followed by two bits. 00 is the flow control character FCC, 01 is a normal end of packet EOP, 10 is an exceptional end of packet EEOP, and 11 is an escape character ESC. A NULL is the sequence "ESC FCC". An FCC gives permission to send eight (8) normal data characters.
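The data-strobe rule just described can be made concrete. In the sketch below, the strobe toggles exactly when a new bit repeats the data line's previous value, so exactly one of the two lines changes on every bit and the receiver can recover a clock from the transitions. The assumed idle states d0 and s0 are illustrative; real link start-up is handled as described under Layer 2:

    def ds_encode(bits, d0=0, s0=0):
        # Returns the (data, strobe) line states for each transmitted bit.
        d, s, prev = d0, s0, d0
        pairs = []
        for b in bits:
            if b == prev:   # data line would not change, so strobe must toggle
                s ^= 1
            d = b           # data line simply carries the bit value
            pairs.append((d, s))
            prev = b
        return pairs

    print(ds_encode([1, 1, 0, 0, 1]))
    # [(1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]  -- one line flips per bit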
Each line can have two states: above 2.0 V, and below 0.8 V (single-ended CMOS or TTL logic-level signals). The nominal impedance is either 50 or 100 ohms, for 3.3 V and 5 V systems respectively. Rise and fall times should be <100 ns. Capacitance should be <300 pF for 100 MBd, and <4 pF for 200 MBd. No connectors are defined because DS-SE is designed for use within electronic equipment. Slice: DS-DE-02 DS-DE stands for "Data and Strobe, Differential Electrical." This is the electrical standard that resists electrical noise the best. It sends data at up to 200 megabits per second, for up to 10 meters, which is useful for connecting instruments. The cable is thick, and the standard connectors are both heavy and expensive. Each cable has eight wires carrying data. These eight wires are divided into two channels, one for each direction. Each channel consists of four wires, two twisted pairs. One twisted pair carries differential strobe, and the other carries differential data. The encoding for the character layer and above is otherwise like the DS-SE definition. Since the cable has ten wires, and eight are used for data, a twisted pair is left over. The black/white pair optionally carries 5 V power and return. The driver rise time should be between 0.5 and 2 ns. The differential voltage may range from 0.8 V to 1.4 V, with 1.0 V typical (differential PECL logic-level signals). The differential impedance is 95 ± 10 ohms. The common-mode output voltage is 2.5–4 V. The receiver's input impedance should be 100 ohms, within 10%. The receiver input's common-mode voltage must be between -1 and 7 V. The receiver's sensitivity should be at least 200 mV. The standard cable has ten wires. The connectors are IEC-61076-4-107. Plug A (pin 1 is first, pin 2 second): a:brown/blue, b:red/green, c:white/black, d:orange/yellow, e:violet/gray. Plug B (pin 2 is first, pin 1 second): e:brown/blue, d:red/green, c:black/white, b:orange/yellow, a:violet/gray. Note the implementation of the "half twist", routing inputs and outputs to the same pins on each plug. Pin 1C/black may carry 5 volts, while 2C/white may carry the return. If the power supply is present it must have a self-healing fuse, and may have ground fault protection. If it is absent, the pins should include a 1 MΩ resistor to ground to leak away static voltages. Slice: TS-FO-02 TS-FO stands for "Three of Six, Fiber Optical." This is a fiber optic standard designed for affordable plastic fibers operating in the near infrared. It sends 200 megabits/second about 300 meters. The wavelength should be between 760 and 900 nanometers, which is in the near infrared. The operating speed should be at most 250 MBd with at most 100 parts per million variation. The dynamic range should be about 12 decibels. The cable for this link uses two 62.5 micrometer-diameter multimode optic fibers. The fiber's maximum attenuation should be 4 decibels per kilometer at an infrared wavelength of 850 nanometers. The standard connector on each end is a duplex MU connector. Ferrule 2 is always "in", while ferrule 1 is "out". The centerlines should be on 14 mm centers, and the connector should be 13.9 mm maximum. The cable has a "half twist" to make it promiscuous. The line code "3/6" sends a stream of six bits, of which three bits are always set. There are twenty possible characters. Sixteen are used to send four bits, two (111000 and 000111) are unused, and two are used to construct link control characters. These are shown with the first bit sent starting on the left. Such a constant-weight code detects all single-bit errors. Combined with a longitudinal redundancy check, it avoids the need for a CRC, which can double the size of small packets.
Normal data bytes are sent as two data characters, sent least significant nibble first. Special symbols are sent as pairs including at least one control character. The two control characters are called "Control" and "Control*", depending on the previous character. If the previous character ends with a 0, Control is 010101 and Control* is 101010. If the previous character ends with a 1, Control is 101010, and Control* is 010101. NULL is Control Control*: (0) 010101 010101 or (1) 101010 101010 FCC (flow control character) is Control Control: (0) 010101 101010 or (1) 101010 010101 INIT is Control Control* Control* Control* (NULL Control* Control*). EOP_1 (end of packet) is Control Checksum (see below for the value). EOP_2 (exceptional end of packet) is Checksum Control. Data errors are detected by a longitudinal parity: all the data nibbles are exclusive-ORed together, and the result is sent as the 4-bit checksum nibble in the end-of-packet symbol. This link transmits NULLs when idle. Each flow control character (FCC) authorizes the other end to send eight bytes, i.e. sixteen normal data characters. The link starts by sending INIT characters. After receiving them for a specified interval, it switches to sending NULLs. After it sends NULLs for a further specified interval, it sends a single INIT. When a link has both sent and received a single INIT, it may send an FCC and start receiving data. Receiving two consecutive INITs, or many zeros or ones, indicates disconnection. Like the two-out-of-five code, it may be decoded by assigning weights to bit positions, in this case 1-2-0-4-8-0. The two 0-weight bits are assigned to ensure there are a total of three bits set. When the nibble has one or three 1 bits, this is unambiguous. When the nibble is 0 or F (zero or four 1 bits), an exception must be made. And when the nibble has two 1 bits, there is ambiguity: Nibbles 3 and C encode to 11_00_ and 00_11_. The codes 110001 and 001110 are used, leaving the codes 111000 and 000111 unused. This ensures there are never more than four identical bits in a row. Nibbles 5 and A encode to 10_10_ and 01_01_. The codes 101100 and 010011 are used, with the codes 100101 and 011010 assigned to nibbles F and 0 instead. Nibbles 6 and 9 encode to 01_10_ and 10_01_. The codes 011100 and 100011 are used, with the codes 010101 and 101010 used for control characters.
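A short sketch of the longitudinal parity just described: every 4-bit data nibble of the packet is exclusive-ORed together, and the result rides in the end-of-packet symbol. Splitting bytes low-nibble-first follows the "least significant nibble first" rule above; the function names are mine:

    def nibbles(data):
        for byte in data:
            yield byte & 0x0F         # low nibble is sent first
            yield (byte >> 4) & 0x0F  # then the high nibble

    def eop_checksum(data):
        check = 0
        for n in nibbles(data):
            check ^= n                # longitudinal parity over all nibbles
        return check

    packet = bytes([0x12, 0x34, 0x56])
    print(f"checksum nibble: {eop_checksum(packet):#x}")  # checksum nibble: 0x7

Any single corrupted nibble changes the checksum, complementing the single-bit error detection of the constant-weight code.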
Slice: HS-SE-10 HS-SE stands for "High speed, Single-ended Electrical." This is the fastest electrical slice. It sends a gigabit per second, but the 8 meter range limits its usage to instrument clusters. However, the modulation and link control features of this standard are also used by the wide-area fiber optic protocols. A link cable consists of two 2.85 mm diameter 50 Ω coaxial cables. The impedance of the whole transmission line shall be 50 ohms ±10%. The connectors shall follow IEC 1076-4-107. The coaxial cables do a "half twist" so that pin B is always "in" and pin A is always "out". The electrical link is single-ended. For 3.3 V operation, low is 1.25 V and high is 2 V. For 5 V operation, low is 2.1 V and high is 2.9 V. The signaling speed is 100 MBd to 1 GBd. The maximum rise time is 300 picoseconds, and the minimum is 100 picoseconds. The HS link's 8B/12B code is a balanced paired-disparity code, so there is no net power transfer. It arranges this by keeping a running disparity, a count of the average number of ones and zeros. It uses the running disparity to selectively invert characters. An inverted character is marked with a set invert bit. 8B/12B also guarantees a clock transition on each character. 8B/12B first sends an odd parity bit, followed by 8 bits (least-significant bit first), followed by an inversion bit, followed by a 1 (which is the start bit), and a 0, which is the stop bit. When the disparity of a character is zero (that is, it has the same number of ones and zeroes, and therefore will not transfer power), it can be transmitted either inverted or noninverted with no effect on the running disparity. Link control characters have a disparity of zero, and are inverted. This defines 126 possible link characters. Every other character is a normal data character. The link characters are: 0:IDLE 5:START_REQ (start request) 1:START_ACK (start acknowledge) 2:STOP_REQ (stop request) 3:STOP_ACK (stop acknowledge) 4:STOP_NACK (stop negative acknowledge) 125:FCC (flow control character) 6:RESET When a link starts, each side has a bit "CAL" that is zero before the receiver is calibrated to the link. When CAL is zero, the receiver throws away any data it receives. During a unidirectional start-up, side A sends IDLE. When side B is calibrated, it begins to send IDLE to A. When A is calibrated, it sends START_REQ. B responds with START_ACK back to A. A then sends START_REQ to B, B responds with START_ACK, and at that point, either A or B can send a flow control character and start to get data. In a bidirectional start-up, both sides start sending IDLE. When side A is calibrated, it sends START_REQ to side B. Side B sends START_ACK, and then A can send an FCC to start getting data. Side B does exactly the same. If the other side is not ready, it does not respond with a START_ACK. After 5 ms, side A tries again. After 50 ms, side A gives up, turns off the power, stops, and reports an error. This behavior is to prevent eye injuries from the end of a high-powered disconnected optical fiber. A flow control character (FCC) authorizes the receiver to send thirty-two (32) data characters. A reset character is echoed, and then causes a unidirectional start-up. If a receiver loses calibration, it can either send a reset command, or simply hold its transmitter low, causing a calibration failure in the other link. The link is only shut down if both nodes request a shutdown. Side A sends STOP_REQ, side B responds with STOP_ACK if it is ready to shut down, or STOP_NACK if it is not ready. Side B must perform the same sequence.
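A heavily simplified Python reading of the 12-bit framing and inversion rule described in this section. The frame order (parity, eight data bits LSB-first, invert flag, start bit 1, stop bit 0) follows the text literally; the exact inversion criterion in the real standard is more involved, so treat this as an illustration, not a reference implementation:

    class Encoder8b12b:
        def __init__(self):
            self.running_disparity = 0  # ones minus zeros sent so far

        def encode(self, byte):
            bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
            parity = (sum(bits) + 1) % 2                # odd parity over the data
            disparity = sum(1 if b else -1 for b in bits)
            invert = 0
            # Invert when that pulls the running disparity back toward zero
            # (an assumed rule standing in for the standard's exact criterion).
            if disparity != 0 and (disparity > 0) == (self.running_disparity > 0):
                bits = [b ^ 1 for b in bits]
                disparity, invert = -disparity, 1
            self.running_disparity += disparity
            return [parity] + bits + [invert, 1, 0]     # start=1, stop=0

    enc = Encoder8b12b()
    print(enc.encode(0xFF))  # all-ones byte passes through; disparity goes to +8
    print(enc.encode(0xFF))  # the same byte is now inverted to rebalance the line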
Slice: HS-FO-10 "HS-FO" stands for "High Speed Fiber Optical." This is the fastest slice, and has the longest range as well. It sends a gigabit/second up to 3000 meters. The line code and higher levels are just like HS-SE-10. The cable is very similar to the other optical cable, TS-FO-02, except for the mandatory label and the connector, which should be IEC-1754-6. However, in older cables, it is often exactly the same as TS-FO-02, except for the label. HS-FO-10 and TS-FO-02 will not interoperate. This cable can have 62.5 micrometer multimode cable, 50 micrometer multimode cable, or 9 micrometer single-mode cable. These vary in expense and the distances they permit: 100 meters, 1000 meters, and 3000 meters respectively. For multimode fiber, the transmitter launch power is generally −12 dBm. The wavelength is 760–900 nanometers (near infrared). On the receiver, the dynamic range is 10 dB, and the sensitivity is −21 dBm with a bit error rate of one bit in 10^12 bits. For single-mode fiber, the transmitter launch power is generally −12 dBm. The wavelength is 1250–1340 nanometers (farther infrared). On the receiver, the dynamic range is 12 dB, and the sensitivity is −20 dBm with a bit error rate of one bit in 10^12 bits. References Further reading External links official specification; requires payment CERN's public copy of official IEEE standard 1355-1995 The European Space Agency's site for SpaceWire, a derived standard. Computer buses Networking standards Serial buses
IEEE 1355
Technology,Engineering
5,771
53,559,951
https://en.wikipedia.org/wiki/Hazard%20substitution
Hazard substitution is a hazard control strategy in which a material or process is replaced with another that is less hazardous. Substitution is the second most effective of the five members of the hierarchy of hazard controls in protecting workers, after elimination. Substitution and elimination are most effective early in the design process, when they may be inexpensive and simple to implement, while for an existing process they may require major changes in equipment and procedures. The concept of prevention through design emphasizes integrating the more effective control methods such as elimination and substitution early in the design phase. Hazard substitutions can involve not only changing one chemical for another, but also using the same chemical in a less hazardous form. Substitutions can also be made to processes and equipment. In making a substitution, the hazards of the new material should be considered and monitored, so that a new hazard is not unwittingly introduced, causing "regrettable substitutions". Substitution can also fail as a strategy if the hazardous process or material is reintroduced at a later stage in the design or production phases, or if cost or quality concerns cause a substitution to not be adopted. Examples Chemicals A common substitution is to replace a toxic chemical with a less toxic one. Some examples include replacing the solvent benzene, a carcinogen, with toluene; switching from organic solvents to water-based detergents; and replacing paints containing lead with those containing non-leaded pigments. Dry cleaning can avoid the use of toxic perchloroethylene by using petroleum-based solvents, supercritical carbon dioxide, or wet cleaning techniques. Chemical substitutions are an example of green chemistry. Chemicals can also be substituted with a different form of the same chemical. In general, inhalation exposure to dusty powders can be reduced by using a slurry or suspension of particles in a liquid solvent instead of a dry powder, or substituting larger particles such as pellets or ingots. Some chemicals, such as nanomaterials, often cannot be eliminated or substituted with conventional materials because their unique properties are necessary to the desired product or process. However, it may be possible to choose properties of the nanoparticle such as size, shape, functionalization, surface charge, solubility, agglomeration, and aggregation state to improve their toxicological properties while retaining the desired functionality. In 2014, the U.S. National Academies released a recommended decision-making framework for chemical substitutions. The framework maintained health-related metrics used by previous frameworks, including carcinogenicity, mutagenicity, reproductive and developmental toxicity, endocrine disruption, acute and chronic toxicity, dermal and eye irritation, and dermal and respiratory sensitization, and ecotoxicity. It added an emphasis on assessing actual exposure rather than only the inherent hazards of the chemical itself, decision rules for resolving trade-offs among hazards, and consideration of novel data sources on hazards such as simulations. The assessment framework has 13 steps, many of which are unique, such as dedicated steps for scoping and problem formulation, assessing physicochemical properties, broader life-cycle assessment, and research and innovation. The framework also provides guidance on tools and sources for scientific information. 
Processes and equipment Hazards to workers can be reduced by limiting or replacing procedures that may aerosolize toxic materials contained in the item. Examples include limiting agitation procedures such as sonication, or by using a lower-temperature process in chemical reactors to minimize release of materials in exhaust. Substituting a water-jet cutting process instead of mechanical sawing of a solid item also creates less dust. Equipment can also be substituted, for example using a self-retracting lifeline instead of a fixed rope for fall protection, or packaging materials in smaller containers to prevent lifting injuries. Health effects from noise can be controlled by purchasing or renting less noisy equipment. This topic has been the subject of several Buy Quiet campaigns, and the NIOSH Power Tools Database contains data on sound power, pressure, and vibration levels of many power tools. Regrettable substitutions A regrettable substitution occurs when a material or process believed to be less hazardous turns out to have an unexpected hazard. One well-known example occurred when dichloromethane was phased out as a brake cleaner due to its environmental effects, but its replacement n-hexane was subsequently found to be neurotoxic. Often the substances being replaced have well-studied hazards, but the alternatives may have little or no toxicity data, making alternatives assessments difficult. Often, chemicals with no toxicity data are considered preferable since they do not prompt such concerns as a California Proposition 65 warning. Another type of regrettable substitution involves shifting the burden of a hazard to another party. One example is that the potent neurotoxin acrylamide can be replaced with the safer N-vinyl formamide, but the synthesis of the latter requires use of the highly toxic hydrogen cyanide, increasing the hazards to workers in the manufacturing firm. In performing an alternatives assessment, including the effects over the entire product lifecycle as part of a life-cycle assessment can mitigate this. References Industrial hygiene Safety engineering Risk analysis
Hazard substitution
Engineering
1,047
54,061
https://en.wikipedia.org/wiki/Amphitheatre
An amphitheatre (U.S. English: amphitheater) is an open-air venue used for entertainment, performances, and sports. The term derives from the ancient Greek amphitheatron, from amphi, meaning "on both sides" or "around", and theatron, meaning "place for viewing". Ancient Greek theatres were typically built on hillsides and semi-circular in design. The first amphitheatre may have been built at Pompeii around 70 BC. Ancient Roman amphitheatres were oval or circular in plan, with seating tiers that surrounded the central performance area, like a modern open-air stadium. In contrast, both ancient Greek and ancient Roman theatres were built in a semicircle, with tiered seating rising on one side of the performance area. Modern English parlance uses "amphitheatre" for any structure with sloping seating, including theatre-style stages with spectator seating on only one side, theatres in the round, and stadia. They can be indoor or outdoor. Roman amphitheatres About 230 Roman amphitheatres have been found across the area of the Roman Empire. Their typical shape, functions and name distinguish them from Roman theatres, which are more or less semicircular in shape; from the circuses (similar to hippodromes) whose much longer circuits were designed mainly for horse or chariot racing events; and from the smaller stadia, which were primarily designed for athletics and footraces. Roman amphitheatres were circular or oval in plan, with a central arena surrounded by perimeter seating tiers. The seating tiers were pierced by entrance-ways controlling access to the arena floor, and isolating it from the audience. Temporary wooden structures functioning as amphitheatres would have been erected for the funeral games held in honour of deceased Roman magnates by their heirs, featuring fights to the death by gladiators, usually armed prisoners of war, at the funeral pyre or tomb of the deceased. These games are described in Roman histories as munera: gifts, entertainments or duties to honour deceased individuals, Rome's gods and the Roman community. Some Roman writers interpret the earliest attempts to provide permanent amphitheatres and seating for the lower classes as populist political graft, rightly blocked by the Senate as morally objectionable; too-frequent, excessively "luxurious" munera would corrode traditional Roman morals. The provision of permanent seating was thought a particularly objectionable luxury. The earliest permanent, stone and timber Roman amphitheatre with perimeter seating was built in the Campus Martius in 29 BCE. Most were built under Imperial rule, from the Augustan period (27 BCE–14 CE) onwards. Imperial amphitheatres were built throughout the Roman Empire, especially in provincial capitals and major colonies, as an essential aspect of Romanitas. There was no standard size; the largest could accommodate 40,000–60,000 spectators. The most elaborate featured multi-storeyed, arcaded façades and were decorated with marble, stucco and statuary. The best-known and largest Roman amphitheatre is the Colosseum in Rome, also known as the Flavian Amphitheatre (Amphitheatrum Flavium) after the Flavian dynasty who had it built. After the ending of gladiatorial games in the 5th century and of staged animal hunts in the 6th, most amphitheatres fell into disrepair. Their materials were mined or recycled. Some were razed, and others were converted into fortifications. A few continued as convenient open meeting places; in some of these, churches were sited. 
Modern amphitheatres In modern English usage, an amphitheatre is not necessarily circular: it can also be a semicircular or curved performance space, particularly one located outdoors. Contemporary amphitheatres often include standing structures called bandshells, sometimes curved or bowl-shaped, both behind the stage and behind the audience, creating an area that echoes or amplifies sound and makes the amphitheatre well suited to musical or theatrical performances. Small-scale amphitheatres can serve to host outdoor local community performances. Notable modern amphitheatres include the Shoreline Amphitheatre, the Hollywood Bowl and the Aula Magna at Stockholm University. The term "amphitheatre" is also used for some indoor venues, such as the (now demolished) Gibson Amphitheatre and the Chicago International Amphitheatre. In other languages, such as German, an amphitheatre can only be a circular performance space; a venue where the audience does not surround the stage is, by definition of the word, not an amphitheatre. Natural amphitheatres A natural amphitheatre is a performance space located in a spot where a steep mountain or a particular rock formation naturally amplifies or echoes sound, making it ideal for musical and theatrical performances. The term can also denote naturally occurring formations that would be ideal for this purpose, even if no theatre has been constructed there. Notable natural amphitheatres include the Drakensberg Amphitheatre in South Africa, Slane Castle in Ireland, the Supernatural Amphitheatre in Australia, and the Red Rocks and the Gorge Amphitheatres in the western United States. There is evidence that the Anasazi people used natural amphitheatres for the public performance of music in pre-Columbian times, including a large constructed performance space in Chaco Canyon, New Mexico. See also Odeon (building) Colosseum Ancient theatres Theatre of ancient Greece List of ancient Greek theatres Arena Thingplatz List of Roman amphitheatres List of contemporary amphitheatres List of indoor arenas Notes References Buildings and structures by type
Amphitheatre
Engineering
1,171
38,208,695
https://en.wikipedia.org/wiki/FNSS%20Kunduz
Kunduz, also designated by the Turkish acronym AZMİM, is a Turkish tracked amphibious combat engineering armoured bulldozer. History The Kunduz was developed and produced for the Turkish Armed Forces (TSK) in less than four years of work by the Turkish company FNSS Defence Systems. The prototype was delivered to the TSK on January 11, 2013 in Ankara. AZMİM is intended to move earth, clear terrain obstacles, cut steep slopes and stabilize stream banks for easy river crossing of combat vehicles during the Turkish Army's amphibious warfare. The contract to develop the armoured amphibious bulldozer was signed on March 10, 2009 between the Ministry of National Defence and FNSS. The project, whose development began on June 15, 2009, foresaw the production of twelve units by the end of 2013; the first was delivered on January 11, 2013. Specifications AZMİM has a crew of two, one operator and one attendant. It is capable of shoveling, smoothing, hauling and excavating. Tests showed that it is resistant to land mines and armour-piercing shot and shell. AZMİM is equipped with a daylight camera system, a night vision device, a multi-purpose LED display and an air conditioner. The tracked vehicle can accompany other military combat vehicles without the need of a carrier thanks to its high maximum road speed. Two installed pump-jets enable the amphibious bulldozer to conduct a 360-degree turning movement in rip-current waters. Operators References Bulldozers Amphibious military vehicles Military engineering vehicles Tracked armoured fighting vehicles Armoured fighting vehicles of Turkey Post–Cold War military equipment of Turkey Military vehicles of Turkey Kunduz Military vehicles introduced in the 2010s
FNSS Kunduz
Engineering
335
22,086,557
https://en.wikipedia.org/wiki/Sleeping%20berth
A sleeping berth is a bed or sleeping accommodation on vehicles. Space constraints aboard vehicles have contributed to certain common design elements of berths. Beds in boats or ships While beds on large ships are little different from those on shore, the lack of space on smaller yachts means that bunks must be fit in wherever possible. Some of these berths have specific names: V-berth Frequently, yachts have a bed in the extreme forward end of the hull (usually in a separate cabin called the forepeak). Because of the shape of the hull, this bed is basically triangular, though most also have a triangular notch cut out of the middle of the aft end, splitting it partially into two separate beds and making it more of a V shape, hence the name. This notch can usually be filled in with a detachable board and cushion, creating something more like a double bed (though with drastically reduced space for the feet). The term "V-berth" is not widely used in the UK; instead, the cabin as a whole (the forepeak) is usually referred to. Settee berth The archetypal layout for a small yacht has seats running down both sides of the cabin, with a table in the middle. At night, these seats can usually be used as beds. Because the ideal ergonomic distance between a seat-back and its front edge (back of the knee) makes for a rather narrow bed, good settee berths will have a system for moving the back of the settee out of the way; this can reveal a surprisingly wide bunk, often running right out to the hull side underneath the lockers. If they are to be used at sea, settee berths must have lee-cloths to prevent the user falling out of bed. Sometimes the settee forms part of a double bed for use in harbour, often using detachable pieces of the table and extra cushions. Such beds are not usually referred to as settee berths. Pilot berth A narrow berth high up in the side of the cabin, the pilot berth is usually above and behind the back of the settee and right up under the deck. Sometimes the side of this bunk is "walled in" up to the sleeper's chest; there may even be small shelves or lockers on the partition so that the bed is "behind the furniture". The pilot berth is so called because originally they were so small and uncomfortable that nobody slept in them most of the time; only the pilot would be offered it if it were necessary to spend a night aboard the yacht. Quarter berth This is a single bunk tucked under the cockpit, usually found in smaller boats where there is not room for a cabin in this location. Pipe berth A pipe berth is a canvas cloth laced to a perimeter frame made of pipe. Easily stored due to its flat shape, the pipe berth is often suspended on ropes or fits into brackets when in use. The canvas dries more easily than a mattress. Root berth A root berth is like a pipe berth but with the pipes on only the long sides. Root berths roll up easily for storage. Some use heavy wooden dowels instead of pipes, again fitting into brackets when in use. Some boats provide multiple bracket options, so the canvas can be pulled tight as in a pipe berth, or left looser for a more "hammock-like" berth, helpful in heeling boats or heavy seas. Lee cloths Lee cloths are sheets of canvas or other fabric attached to the open side of the bunk (very few are open all round) and usually tucked under the mattress during the day or when sleeping in harbour. The lee cloth keeps the sleeping person in the bunk from falling out when the boat heels during sailing or rough weather. 
Berths in trains Long-distance trains running at night usually have sleeping compartments with sleeping berths. In compartments with two berths, one is mounted above the other in a double-bunk arrangement. These beds (the lower bed in a double-bunk arrangement) are usually designed in conjunction with seats which occupy the same space, and each can be folded away when the other is in use. Sleeper trains are common, especially in Europe, India and China. They usually consist of single or double-berth compartments, as well as couchettes, which have four or six berths (a bottom, middle and top bunk on each side of the compartment). Open section berths The berths described above are clustered in compartments, in contrast with the berths in the open sections of Pullman cars in the United States, common until the 1950s. In these cars, passengers faced each other in facing seats during the day. Porters pulled down the upper berth and brought the lower seats together to create the lower berth. All of these berths faced the aisle running down the center of the sleeping car, and each berth had a curtain for privacy from the aisle. Berths in long-distance trucks Long-haul truckers sleep in berths known as sleeper cabs contained within their trucks. The sleeper berth's size and location are typically regulated. See also Couchette car Pullman car References Beds Nautical terminology Passenger rail transport Ship compartments
Sleeping berth
Biology
1,061
34,280,377
https://en.wikipedia.org/wiki/Types%20of%20press%20tools
Press tools are commonly used in hydraulic, pneumatic, and mechanical presses to produce sheet metal components in large volumes. Press tools are generally categorized by the type of operation performed with the tool, such as blanking, piercing, bending, forming, forging, trimming etc., and the tool is specified accordingly as a blanking tool, piercing tool, bending tool and so on. Classification of press tools Blanking tool Blanking is a punching operation in which the entire outer periphery of the part is cut from the sheet; the cut-out portion is known as the stamp or blank. When a component is produced with a single punch and die, with the entire outer profile cut in one stroke, the tool is called a blanking tool. Blanking is a metal cutting operation that produces flat shapes from sheet metal; the blank is not scrap but is often usable as the final product, while the skeleton of metal remaining around it is generally discarded as waste. Because the blank must match the die opening, the size of the die opening is equal to the blank size, and the cutting clearance is applied to the punch. Piercing tool Piercing involves cutting clean holes with a resulting scrap slug. The operation is called die cutting and can also produce flat components where the die, the shaped tool, is pressed into a sheet material employing a shearing action to cut holes. This method can be used to cut parts of different sizes and shapes in sheet metal, leather and many other materials. Cut off tool A cut-off is a shearing operation in which blanks are separated from a sheet metal strip by cutting the opposite sides of the part in sequence. Parting off tool This is similar to a cut-off tool, in that a discrete part is cut from a sheet or strip of metal along a desired geometric path. The difference between a cut-off and a parting is that a cut-off can be nestled perfectly on the sheet metal, due to its geometry; the cutting can be done along one path at a time with practically no waste of material. With partings, the shape cannot be nestled precisely: parting involves cutting the sheet metal along two paths simultaneously and wastes a certain amount of material, which can be significant. Trimming tool When cups and shells are drawn from flat sheet metal, the edge is left wavy and irregular due to uneven flow of metal. A trimming tool removes this small amount of excess material from around the edge of the component, leaving the finished flanged shell and a trimmed ring of scrap. Shaving tool Shaving removes a small amount of material around the edges of previously blanked or pierced stampings. A straight, smooth edge is provided, and therefore shaving is frequently performed on instrument parts, watch and clock parts and the like. Shaving is accomplished in shaving tools especially designed for the purpose and requires proper die clearance. Forming tool Forming is the operation of deforming a part to a curved profile. Forming tools apply more complex forms to workpieces: the line of bend is curved instead of straight, and the metal is subjected to plastic flow or deformation. 
Drawing tool Drawing tools transform flat sheets of metal into cups, shells or other drawn shapes by subjecting the material to severe plastic deformation: an axial elongation through the application of axial force. A typical product is a deep shell drawn from a flat sheet. Because this type of press tool performs only one particular operation, it is classified as a stage tool. Progressive tool A progressive tool differs from a stage tool in the following respect: in a progressive tool, the final component is obtained by progressing the sheet metal or strip through more than one station. At each station, the tool progressively shapes the component towards its final form, with the final station normally being cut-off. Compound tool The compound tool differs from progressive and stage tools in the arrangement of the punch and die. It is an inverted tool in which blanking and piercing take place in a single stage, with the blanking punch also acting as the piercing die: the blanking punch sits in the bottom half of the tool and the piercing punches in the top half. The burr forms on only one side. Combination tool In a combination tool, two or more operations such as bending and trimming are performed simultaneously. Non-cutting operations such as forming, drawing, extruding and embossing may be combined on the component with cutting operations such as blanking, piercing, broaching and cut-off, so that a single tool performs both cutting and non-cutting operations. General press tool construction The general press tool construction has the following elements: Shank: used to mount the upper half of the die in the slide of the press machine with proper alignment. Top plate: holds the top half of the press tool to the press slide; it is also called the bolster plate. Punch back plate: prevents the hardened punches from penetrating into the top plate; it is also called the pressure plate or backup plate. Punch holder: accommodates the punches of the press tool. Punches: plain or profiled punches used to perform the cutting and non-cutting operations. Die plate: carries a profile matching the component; cutting dies usually have holes with land and angular clearance, while non-cutting dies carry forming profiles. Die back plate: prevents the hardened die inserts from penetrating into the bottom plate. Guide pillar and guide bush: used for alignment between the top and bottom halves of the press tool. Bottom plate: holds the bottom half of the press tool on the press bed. Stripper plate: strips the component strip off the punches. Strip guides: guide the strip into the press tool to perform the operation. 
Cutting force in press tool In general, the cutting force can be calculated using the formula CF = L × T × ζmax, where CF is the cutting force in newtons (N), L is the cut length in mm (the perimeter of the profile to be cut; for example, a 40 mm square has a cut length of 160 mm), T is the sheet metal thickness in mm, and ζmax is the maximum shear strength of the sheet metal in MPa. Stripping force Stripping force is the force required to eject the strip from the punches, which lets the strip advance to the next operation. The stripping force is usually taken as 10 to 20% of the cutting force (CF). Press force The press force is the cutting force added to the stripping force: Press force = Cutting force + Stripping force. A worked example follows the reference list below. Fits in press tools Punch holder and punches = H7/k6 Punch and stripper = H7/k6 Guide pillar and guide bush = H7/g6 Guide bush and top plate = H7/p6 Guide pillar and bottom plate = H7/p6 Dowel and plate = H7/m6 Dowel holes = H7/m6 References Machine tools Presswork
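As an illustration of the force formulas above, here is a minimal sketch in Python. The 40 mm square blank comes from the example in the text; the 2 mm thickness, the 300 MPa shear strength, the 15% stripping allowance and the function name press_force are illustrative assumptions, not values from the article:

def press_force(cut_length_mm, thickness_mm, shear_strength_mpa, stripping_fraction=0.15):
    """Return (cutting force, stripping force, press force) in newtons."""
    cutting = cut_length_mm * thickness_mm * shear_strength_mpa  # CF = L x T x max shear strength
    stripping = stripping_fraction * cutting                     # usually 10-20% of CF
    return cutting, stripping, cutting + stripping

# 40 mm square blank -> cut length 160 mm; assumed 2 mm sheet at 300 MPa shear strength
cf, sf, pf = press_force(160, 2, 300)
print(f"cutting {cf/1000:.1f} kN, stripping {sf/1000:.1f} kN, press {pf/1000:.1f} kN")
# prints: cutting 96.0 kN, stripping 14.4 kN, press 110.4 kN

The press force computed this way is the figure used to select a press of adequate tonnage for the tool.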
Types of press tools
Engineering
1,481
1,964,551
https://en.wikipedia.org/wiki/Fanlight
A fanlight is a form of lunette window, often semicircular or semi-elliptical in shape, with glazing bars or tracery sets radiating out like an open fan. It is placed over another window or a doorway, and is sometimes hinged to a transom. The bars in the fixed glazed window spread out in the manner of a sunburst. It is also called a sunburst light. References External links Doorways around the World Glass architecture Windows
Fanlight
Materials_science,Engineering
101
39,368,335
https://en.wikipedia.org/wiki/Mission%20to%20Mars%3A%20My%20Vision%20for%20Space%20Exploration
Mission to Mars: My Vision for Space Exploration is a 2013 book written by retired NASA astronaut Buzz Aldrin and Leonard David. The book was released on May 7, 2013 by National Geographic Books. In the book, Aldrin outlines his plan for humans to be able to colonize Mars by the year 2035. The book goes over a number of past and then-current space concepts, policies, and future mission concepts. He encouraged future missions to focus not strictly on Mars exploration, but also on Mars settlement. The book goes beyond Mars missions to review the overall space exploration landscape, such as considering the viability of lunar missions and international cooperation in space. National Geographic released a video trailer for the book. See also Mars to Stay References Exploration of Mars Human missions to Mars Buzz Aldrin Books by astronauts Works about Mars
Mission to Mars: My Vision for Space Exploration
Astronomy
173
5,832,289
https://en.wikipedia.org/wiki/Bradykinin%20receptor
The bradykinin receptor family is a group of G protein-coupled receptors whose principal ligand is the peptide bradykinin. There are two bradykinin receptors: the B1 receptor and the B2 receptor. B1 receptor Bradykinin receptor B1 (B1) is a G protein-coupled receptor encoded by the BDKRB1 gene in humans. Its principal ligand is bradykinin, a 9-amino-acid peptide generated in pathophysiologic conditions such as inflammation, trauma, burns, shock, and allergy. The B1 receptor is one of two G protein-coupled receptors that have been found to bind bradykinin and mediate responses to these pathophysiologic conditions. The B1 protein is synthesized de novo following tissue injury, and receptor binding leads to an increase in the cytosolic calcium ion concentration, ultimately resulting in chronic and acute inflammatory responses. B2 receptor The B2 receptor is a G protein-coupled receptor, coupled to Gq and Gi. Gq stimulates phospholipase C to increase intracellular free calcium, and Gi inhibits adenylate cyclase. Furthermore, the receptor stimulates the mitogen-activated protein kinase pathways. It is ubiquitously and constitutively expressed in healthy tissues. The B2 receptor forms a complex with angiotensin converting enzyme (ACE), and this is thought to play a role in cross-talk between the renin-angiotensin system (RAS) and the kinin–kallikrein system (KKS). The heptapeptide angiotensin (1-7) also potentiates bradykinin action on B2 receptors. Icatibant is a second-generation B2 receptor antagonist which has undergone limited clinical trials in pain and inflammation. FR 173657 is another orally active non-peptide B2 antagonist that has undergone limited trials as an analgesic and anti-inflammatory drug. Kallidin also signals through the B2 receptor. References External links G protein-coupled receptors
Bradykinin receptor
Chemistry
415
342,058
https://en.wikipedia.org/wiki/Gimbal%20lock
Gimbal lock is the loss of one degree of freedom in a multi-dimensional mechanism at certain alignments of the axes. In a three-dimensional three-gimbal mechanism, gimbal lock occurs when the axes of two of the gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space. The term gimbal-lock can be misleading in the sense that none of the individual gimbals are actually restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes, there is no gimbal available to accommodate rotation about one axis, leaving the suspended object effectively locked (i.e. unable to rotate) around that axis. The problem can be generalized to other contexts, where a coordinate system loses definition of one of its variables at certain values of the other variables. Gimbals A gimbal is a ring that is suspended so it can rotate about an axis. Gimbals are typically nested one within another to accommodate rotation about multiple axes. They appear in gyroscopes and in inertial measurement units to allow the inner gimbal's orientation to remain fixed while the outer gimbal suspension assumes any orientation. In compasses and flywheel energy storage mechanisms they allow objects to remain upright. They are used to orient thrusters on rockets. Some coordinate systems in mathematics behave as if they were real gimbals used to measure the angles, notably Euler angles. For cases of three or fewer nested gimbals, gimbal lock inevitably occurs at some point in the system due to properties of covering spaces. In engineering While only two specific orientations produce exact gimbal lock, practical mechanical gimbals encounter difficulties near those orientations. When a set of gimbals is close to the locked configuration, small rotations of the gimbal platform require large motions of the surrounding gimbals. Although the ratio is infinite only at the point of gimbal lock, the practical speed and acceleration limits of the gimbals—due to inertia (resulting from the mass of each gimbal ring), bearing friction, the flow resistance of air or other fluid surrounding the gimbals (if they are not in a vacuum), and other physical and engineering factors—limit the motion of the platform close to that point. In two dimensions Gimbal lock can occur in gimbal systems with two degrees of freedom such as a theodolite with rotations about an azimuth (horizontal angle) and elevation (vertical angle). These two-dimensional systems can gimbal lock at zenith and nadir, because at those points azimuth is not well-defined, and rotation in the azimuth direction does not change the direction the theodolite is pointing. Consider tracking a helicopter flying towards the theodolite from the horizon. The theodolite is a telescope mounted on a tripod so that it can move in azimuth and elevation to track the helicopter. The helicopter flies towards the theodolite and is tracked by the telescope in elevation and azimuth. The helicopter flies immediately above the tripod (i.e. it is at zenith) when it changes direction and flies at 90 degrees to its previous course. The telescope cannot track this maneuver without a discontinuous jump in one or both of the gimbal orientations. There is no continuous motion that allows it to follow the target. It is in gimbal lock. 
So there is an infinity of directions around zenith for which the telescope cannot continuously track all movements of a target. Note that even if the helicopter does not pass through zenith, but only near zenith, so that gimbal lock does not occur, the system must still move exceptionally rapidly to track it, as it rapidly passes from one bearing to the other. The closer to zenith the nearest point is, the faster this must be done, and if it actually goes through zenith, the limit of these "increasingly rapid" movements becomes infinitely fast, namely discontinuous. To recover from gimbal lock the user has to go around the zenith – explicitly: reduce the elevation, change the azimuth to match the azimuth of the target, then change the elevation to match the target. Mathematically, this corresponds to the fact that spherical coordinates do not define a coordinate chart on the sphere at zenith and nadir. Alternatively, the corresponding map T2→S2 from the torus T2 to the sphere S2 (given by the point with given azimuth and elevation) is not a covering map at these points. In three dimensions Consider a case of a level-sensing platform on an aircraft flying due north with its three gimbal axes mutually perpendicular (i.e., roll, pitch and yaw angles each zero). If the aircraft pitches up 90 degrees, the aircraft and platform's yaw axis gimbal becomes parallel to the roll axis gimbal, and changes about yaw can no longer be compensated for. Solutions This problem may be overcome by use of a fourth gimbal, actively driven by a motor so as to maintain a large angle between roll and yaw gimbal axes. Another solution is to rotate one or more of the gimbals to an arbitrary position when gimbal lock is detected and thus reset the device. Modern practice is to avoid the use of gimbals entirely. In the context of inertial navigation systems, that can be done by mounting the inertial sensors directly to the body of the vehicle (this is called a strapdown system) and integrating sensed rotation and acceleration digitally using quaternion methods to derive vehicle orientation and velocity. Another way to replace gimbals is to use fluid bearings or a flotation chamber. On Apollo 11 A well-known gimbal lock incident happened in the Apollo 11 Moon mission. On this spacecraft, a set of gimbals was used on an inertial measurement unit (IMU). The engineers were aware of the gimbal lock problem but had declined to use a fourth gimbal. Some of the reasoning behind this decision is apparent from the following quote: They preferred an alternate solution using an indicator that would be triggered when near to 85 degrees pitch. Rather than try to drive the gimbals faster than they could go, the system simply gave up and froze the platform. From this point, the spacecraft would have to be manually moved away from the gimbal lock position, and the platform would have to be manually realigned using the stars as a reference. After the Lunar Module had landed, Mike Collins aboard the Command Module joked "How about sending me a fourth gimbal for Christmas?" Robotics In robotics, gimbal lock is commonly referred to as "wrist flip", due to the use of a "triple-roll wrist" in robotic arms, where three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point. An example of a wrist flip, also called a wrist singularity, is when the path through which the robot is traveling causes the first and third axes of the robot's wrist to line up. 
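The blow-up in required slew rate near zenith can be made quantitative with a short numerical sketch in Python; the straight-and-level flight path, the helicopter speed and the chosen miss distances below are illustrative assumptions, not values from the text:

import math

def peak_azimuth_rate(speed, miss_distance):
    # For a target flying a straight, level line whose ground track passes at
    # horizontal distance d from the theodolite, azimuth(t) = atan2(d, v*t),
    # whose rate of change peaks at v/d (rad/s) at closest approach.
    return speed / miss_distance

v = 50.0  # m/s, assumed helicopter speed
for d in (100.0, 10.0, 1.0, 0.1):  # metres by which the pass misses the zenith
    rate = math.degrees(peak_azimuth_rate(v, d))
    print(f"miss distance {d:6.1f} m -> peak azimuth rate {rate:10.1f} deg/s")
# The required rate grows without bound as d -> 0; for d = 0 the azimuth
# jumps discontinuously by 180 degrees, which is the gimbal-lock case.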
The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process. The importance of avoiding singularities in robotics has led the American National Standard for Industrial Robots and Robot Systems – Safety Requirements to define it as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities". In applied mathematics The problem of gimbal lock appears when one uses Euler angles in applied mathematics; developers of 3D computer programs, such as 3D modeling, embedded navigation systems, and video games must take care to avoid it. In formal language, gimbal lock occurs because the map from Euler angles to rotations (topologically, from the 3-torus T3 to the real projective space RP3, which is the same as the space of rotations for three-dimensional rigid bodies, formally named SO(3)) is not a local homeomorphism at every point, and thus at some points the rank (degrees of freedom) must drop below 3, at which point gimbal lock occurs. Euler angles provide a means for giving a numerical description of any rotation in three-dimensional space using three numbers, but not only is this description not unique, but there are some points where not every change in the target space (rotations) can be realized by a change in the source space (Euler angles). This is a topological constraint – there is no covering map from the 3-torus to the 3-dimensional real projective space; the only (non-trivial) covering map is from the 3-sphere, as in the use of quaternions. To make a comparison, all the translations can be described using three numbers $x$, $y$, and $z$, as the succession of three consecutive linear movements along three perpendicular axes. The same holds true for rotations: all the rotations can be described using three numbers $\alpha$, $\beta$, and $\gamma$, as the succession of three rotational movements around three axes that are perpendicular one to the next. This similarity between linear coordinates and angular coordinates makes Euler angles very intuitive, but unfortunately they suffer from the gimbal lock problem. Loss of a degree of freedom with Euler angles A rotation in 3D space can be represented numerically with matrices in several ways. One of these representations is: $$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ An example worth examining happens when $\beta = \pi/2$. Knowing that $\cos(\pi/2) = 0$ and $\sin(\pi/2) = 1$, the above expression becomes equal to: $$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ Carrying out the matrix multiplication: $$R = \begin{pmatrix} 0 & 0 & 1 \\ \sin\alpha\cos\gamma + \cos\alpha\sin\gamma & -\sin\alpha\sin\gamma + \cos\alpha\cos\gamma & 0 \\ -\cos\alpha\cos\gamma + \sin\alpha\sin\gamma & \cos\alpha\sin\gamma + \sin\alpha\cos\gamma & 0 \end{pmatrix}$$ And finally, using the trigonometric sum formulas: $$R = \begin{pmatrix} 0 & 0 & 1 \\ \sin(\alpha+\gamma) & \cos(\alpha+\gamma) & 0 \\ -\cos(\alpha+\gamma) & \sin(\alpha+\gamma) & 0 \end{pmatrix}$$ Changing the values of $\alpha$ and $\gamma$ in the above matrix has the same effect: the rotation angle changes, but the last column and the first row in the matrix won't change. Only the sum $\alpha + \gamma$ matters, so one degree of freedom has been lost; the only way for $\alpha$ and $\gamma$ to recover different roles is to change $\beta$. It is possible to imagine an airplane rotated by the above-mentioned Euler angles using the X-Y-Z convention. In this case, the first angle $\alpha$ is the pitch. Yaw is then set to $\pi/2$, and the final rotation, by $\gamma$, is again the airplane's pitch. Because of gimbal lock, it has lost one of the degrees of freedom – in this case, the ability to roll. It is also possible to choose another convention for representing a rotation with a matrix using Euler angles than the X-Y-Z convention above, and also to choose other variation intervals for the angles, but in the end there is always at least one value for which a degree of freedom is lost. 
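The collapse of one degree of freedom at $\beta = \pi/2$ can be verified numerically. The sketch below, a minimal Python/numpy example (the helper name euler_xyz and the particular angle values are assumptions for illustration), builds the X-Y-Z rotation matrix used above and shows that at that point only the sum of the first and third angles matters:

import numpy as np

def euler_xyz(alpha, beta, gamma):
    """Rotation matrix R = Rx(alpha) @ Ry(beta) @ Rz(gamma)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rx @ ry @ rz

beta = np.pi / 2
# Two very different (alpha, gamma) pairs with the same sum give the same rotation:
r1 = euler_xyz(0.3, beta, 0.4)
r2 = euler_xyz(0.6, beta, 0.1)
print(np.allclose(r1, r2))  # True: only alpha + gamma matters, so one DOF is gone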
The gimbal lock problem does not make Euler angles "invalid" (they always serve as a well-defined coordinate system), but it makes them unsuited for some practical applications. Alternate orientation representation The cause of gimbal lock is the representation of orientation in calculations as three axial rotations based on Euler angles. A potential solution therefore is to represent the orientation in some other way. This could be as a rotation matrix, a quaternion (see quaternions and spatial rotation), or a similar orientation representation that treats the orientation as a value rather than three separate and related values. Given such a representation, the user stores the orientation as a value. To quantify angular changes produced by a transformation, the orientation change is expressed as a delta angle/axis rotation. The resulting orientation must be re-normalized to prevent the accumulation of floating-point error in successive transformations. For matrices, re-normalizing the result requires converting the matrix into its nearest orthonormal representation. For quaternions, re-normalization requires performing quaternion normalization. See also Aircraft principal axes (equivalent navigational problem on polar expeditions) Keyhole problem – Problems tracking near a gimbal axis under rate and acceleration limits References External links Gimbal Lock - Explained at YouTube Rotation in three dimensions Angle Gyroscopes Spaceflight concepts 3D computer graphics
Gimbal lock
Physics
2,519
1,637,666
https://en.wikipedia.org/wiki/Cell%20disruption
Cell disruption, sometimes referred to as digestion, is a method or process for releasing biological molecules from inside a cell. Methods The production of biologically interesting molecules using cloning and culturing methods allows the study and manufacture of relevant molecules. Except for excreted molecules, cells producing molecules of interest must be disrupted. This page discusses various methods. Another method of disruption is called cell unroofing. Bead method A common laboratory-scale mechanical method for cell disruption uses glass, ceramic, or steel beads mixed with a sample suspended in an aqueous solution. First developed by Tim Hopkins in the late 1970s, the sample and bead mix is subjected to high level agitation by stirring or shaking. Beads collide with the cellular sample, cracking open the cell to release the intracellular components. Unlike some other methods, mechanical shear is moderate during homogenization, resulting in excellent membrane or subcellular preparations. The method, often called "bead beating", works well for all types of cellular material - from spores to animal and plant tissues. It is the most widely used method of yeast lysis, and can yield breakage of well over 50% (up to 95%). It has the advantage over other mechanical cell disruption methods of being able to disrupt very small sample sizes, process many samples at a time with no cross-contamination concerns, and does not release potentially harmful aerosols in the process. In the simplest example of the method, an equal volume of beads is added to a cell or tissue suspension in a test tube and the sample is vigorously mixed on a common laboratory vortex mixer. While processing times are slow, taking 3–10 times longer than in specialty shaking machines, it works well for easily disrupted cells and is inexpensive. Successful bead beating is dependent not only on design features of the shaking machine (which take into consideration shaking oscillation frequency, shaking throw or distance, shaking orientation and vial orientation), but also on the selection of the correct bead size, bead composition (glass, ceramic, steel) and bead load in the vial. In most laboratories, bead beating is done in batch sizes of one to twenty-four sealed, plastic vials or centrifuge tubes. The sample and tiny beads are agitated at about 2000 oscillations per minute in specially designed reciprocating shakers driven by high power electric motors. Cell disruption is complete in 1–3 minutes of shaking. Significantly faster rates of cell disruption are achieved with a bead beater variation called SoniBeast. Differing from conventional machines, it agitates the beads using a vortex motion at 20,000 oscillations per minute. Larger bead beater machines that hold deep-well microtiter plates also shorten process times, as do bead dispensers designed to quickly load beads into multiple vials or microplates. Pre-loaded vials and microplates are also available. All high energy bead beating machines warm the sample about 10 degrees per minute. This is due to frictional collisions of the beads during homogenization. Cooling of the sample during or after bead beating may be necessary to prevent damage to heat-sensitive proteins such as enzymes. Sample warming can be controlled by bead beating for short time intervals with cooling on ice between each interval, by processing vials in pre-chilled aluminum vial holders, or by circulating gaseous coolant through the machine during bead beating. 
A different bead beater configuration, suitable for larger sample volumes, uses a rotating fluorocarbon rotor inside a 15, 50 or 200 ml chamber to agitate the beads. In this configuration, the chamber can be surrounded by a static cooling jacket. Using this same rotor/chamber configuration, large commercial machines are available to process many liters of cell suspension. Currently, these machines are limited to processing unicellular organisms such as yeast, algae and bacteria. Cryopulverization Samples with a tough extracellular matrix, such as animal connective tissue, some tumor biopsy samples, venous tissue, cartilage, seeds, etc., are reduced to a fine powder by impact pulverization at liquid nitrogen temperatures. This technique, known as cryopulverization, is based on the fact that biological samples containing a significant fraction of water become brittle at extremely cold temperatures. This technique was first described by Smucker and Pfister in 1975, who referred to the technique as cryo-impacting. The authors demonstrated cells are effectively broken by this method, confirming by phase and electron microscopy that breakage planes cross cell walls and cytoplasmic membranes. The technique can be done by using a mortar and pestle cooled to liquid nitrogen temperatures, but use of this classic apparatus is laborious and sample loss is often a concern. Specialised stainless steel pulverizers generically known as Tissue Pulverizers are also available for this purpose. They require less manual effort, give good sample recovery and are easy to clean between samples. Advantages of this technique are higher yields of proteins and nucleic acids from small, hard tissue samples - especially when used as a preliminary step to mechanical or chemical/solvent cell disruption methods mentioned above. High Pressure Cell Disruption Since the 1940s high pressure has been used as a method of cell disruption, most notably by the French Pressure Cell Press, or French Press for short. This method was developed by Charles Stacy French and utilises high pressure to force cells through a narrow orifice, causing the cells to lyse due to the shear forces experienced across the pressure differential. While French Presses have become a staple item in many microbiology laboratories, their production has been largely discontinued, leading to a resurgence in alternate applications of similar technology. Modern physical cell disruptors typically operate via either pneumatic or hydraulic pressure. Although pneumatic machines are typically lower cost, their performance can be unreliable due to variations in the processing pressure throughout the stroke of the air pump. It is generally considered that hydraulic machines offer superior lysing ability, especially when processing harder to break samples such as yeast or Gram-positive bacteria, due to their ability to maintain constant pressure throughout the piston stroke. As the French Press, which is operated by hydraulic pressure, is capable of over 90% lysis of most commonly used cell types it is often taken as the gold standard in lysis performance and modern machines are often compared against it not only in terms of lysis efficiency but also in terms of safety and ease of use. Some manufacturers are also trying to improve on the traditional design by altering properties within these machines other than the pressure driving the sample through the orifice. 
One such example is Constant Systems, who have recently shown that their Cell Disruptors not only match the performance of a traditional French Press, but also that they are striving towards attaining the same results at a much lower power. Pressure Cycling Technology ("PCT"). PCT is a patented, enabling technology platform that uses alternating cycles of hydrostatic pressure between ambient and ultra-high levels (up to 90,000 psi) to safely, conveniently and reproducibly control the actions of molecules in biological samples, e.g., the rupture (lysis) of cells and tissues from human, animal, plant, and microbial sources, and the inactivation of pathogens. PCT-enhanced systems (instruments and consumables) address some challenging problems inherent in biological sample preparation. PCT advantages include: (a) extraction and recovery of more membrane proteins, (b) enhanced protein digestion, (c) differential lysis in a mixed sample base, (d) pathogen inactivation, (e) increased DNA detection, and (f) exquisite sample preparation process control. The Microfluidizer method used for cell disruption strongly influences the physicochemical properties of the lysed cell suspension, such as particle size, viscosity, protein yield and enzyme activity. In recent years the Microfluidizer method has gained popularity in cell disruption due to its ease of use and efficiency at disrupting many different kinds of cells. The Microfluidizer technology was licensed from a company called Arthur D. Little and was first developed and utilized in the 1980s, initially starting as a tool for liposome creation. It has since been used in other applications such as cell disruption nanoemulsions, and solid particle size reduction, among others. By using microchannels with fixed geometry, and an intensifier pump, high shear rates are generated that rupture the cells. This method of cell lysis can yield breakage of over 90% of E. coli cells. Many proteins are extremely temperature-sensitive, and in many cases can start to denature at temperatures of only 4 degrees Celsius. Within the microchannels, temperatures exceed 4 degrees Celsius, but the machine is designed to cool quickly so that the time the cells are exposed to elevated temperatures is extremely short (residence time 25 ms-40 ms). Because of this effective temperature control, the Microfluidizer yields higher levels of active proteins and enzymes than other mechanical methods when the proteins are temperature-sensitive. Viscosity changes are also often observed when disrupting cells. If the cell suspension viscosity is high, it can make downstream handling—such as filtration and accurate pipetting—quite difficult. The viscosity changes observed with a Microfluidizer are relatively low, and decreases with further additional passes through the machine. In contrast to other mechanical disruption methods the Microfluidizer breaks the cell membranes efficiently but gently, resulting in relatively large cell wall fragments (450 nm), and thus making it easier to separate the cell contents. This can lead to shorter filtration times and better centrifugation separation. Microfluidizer technology scales from one milliliter to thousands of liters. Nitrogen decompression For nitrogen decompression, large quantities of nitrogen are first dissolved in the cell under high pressure within a suitable pressure vessel. 
Then, when the gas pressure is suddenly released, the nitrogen comes out of the solution as expanding bubbles that stretch the membranes of each cell until they rupture and release the contents of the cell. Nitrogen decompression is more protective of enzymes and organelles than ultrasonic and mechanical homogenizing methods and compares favorably to the controlled disruptive action obtained in a PTFE and glass mortar and pestle homogenizer. While other disruptive methods depend upon friction or a mechanical shearing action that generate heat, the nitrogen decompression procedure is accompanied by an adiabatic expansion that cools the sample instead of heating it. The blanket of inert nitrogen gas that saturates the cell suspension and the homogenate offers protection against oxidation of cell components. Although other gases: carbon dioxide, nitrous oxide, carbon monoxide and compressed air have been used in this technique, nitrogen is preferred because of its non-reactive nature and because it does not alter the pH of the suspending medium. In addition, nitrogen is preferred because it is generally available at low cost and at pressures suitable for this procedure. Once released, subcellular substances are not exposed to continued attrition that might denature the sample or produce unwanted damage. There is no need to watch for a peak between enzyme activity and percent disruption. Since nitrogen bubbles are generated within each cell, the same disruptive force is applied uniformly throughout the sample, thus ensuring unusual uniformity in the product. Cell-free homogenates can be produced. The technique is used to homogenize cells and tissues, release intact organelles, prepare cell membranes, release labile biochemicals, and produce uniform and repeatable homogenates without subjecting the sample to extreme chemical or physical stress. The method is particularly well suited for treating mammalian and other membrane-bound cells. It has also been used successfully for treating plant cells, for releasing virus from fertilized eggs and for treating fragile bacteria. It is not recommended for untreated bacterial cells. Yeast, fungus, spores and other materials with tough cell walls do not respond well to this method. See also Lysis Ultrasonic homogenizer Sonication Homogenization (chemistry) Homogenizer References Cell biology
Cell disruption
Biology
2,499
46,871,755
https://en.wikipedia.org/wiki/Hole%20drilling%20method
The hole drilling method is a method for measuring residual stresses in a material. Residual stress occurs in a material in the absence of external loads. Residual stress interacts with the applied loading on the material to affect the overall strength, fatigue, and corrosion performance of the material. Residual stresses are measured through experiments. The hole drilling method is one of the most used methods for residual stress measurement. The hole drilling method can measure macroscopic residual stresses near the material surface. The principle is based on drilling a small hole into the material. When the material containing residual stress is removed, the remaining material reaches a new equilibrium state. The new equilibrium state has associated deformations around the drilled hole. The deformations are related to the residual stress in the volume of material that was removed through drilling. The deformations around the hole are measured during the experiment using strain gauges or optical methods. The original residual stress in the material is calculated from the measured deformations. The hole drilling method is popular for its simplicity and is suitable for a wide range of applications and materials. Key advantages of the hole drilling method include rapid preparation, versatility of the technique for different materials, and reliability. Conversely, the hole drilling method is limited in depth of analysis and specimen geometry, and is at least semi-destructive. History and development The idea of measuring the residual stress by drilling a hole and registering the change of the hole diameter was first proposed by Mathar in 1934. In 1966 Rendler and Vigness introduced a systematic and repeatable procedure of hole drilling to measure the residual stress. In the following period the method was further developed in terms of drilling techniques, measurement of the relieved deformations, and the residual stress evaluation itself. A very important milestone was the use of the finite element method to compute the calibration coefficients and to evaluate the residual stresses from the measured relieved deformations (Schajer, 1981). That especially enabled the evaluation of residual stresses which are not constant along the depth. It also brought further possibilities for using the method, e.g., for inhomogeneous materials, coatings, etc. The measurement and evaluation procedure is standardised in ASTM E837 of the American Society for Testing and Materials, which has also contributed to the popularity of the method. Hole drilling is currently one of the most widespread methods of measuring residual stress. Modern computational methods are used for the evaluation, and the method is being developed especially in terms of drilling techniques and the possibilities of measuring the deformations. Some laboratories, such as the company MELIAD, offer residual stress measurement services and sell measurement equipment conforming to ASTM E837. Today this method is used within several large companies in the energy and aeronautics sectors. Fundamental principles The hole drilling method of measuring residual stresses is based on drilling a small hole in the material surface. This relieves the residual stresses and the associated deformations around the hole. The relieved deformations are measured in at least three independent directions around the hole. 
The original residual stress in the material is then evaluated from the measured deformations using so-called calibration coefficients. The hole is made by a cylindrical end mill or by alternative techniques. Deformations are most often measured using strain gauges (strain gauge rosettes). The biaxial stress in the surface plane can be measured. The method is often referred to as semi-destructive owing to the small amount of material damage. The method is relatively simple and fast, and the measuring device is usually portable. Disadvantages include the destructive character of the technique, limited resolution, and lower accuracy of the evaluation in the case of nonuniform stresses or inhomogeneous material properties. The calibration coefficients play an important role in the residual stress evaluation. They are used to convert the relieved deformations to the original residual stress in the material. The coefficients can be derived theoretically for a through hole and a homogeneous stress; they then depend only on the material properties, the hole radius, and the distance from the hole. In the vast majority of practical applications, however, the preconditions for using the theoretically derived coefficients are not met: for example, they do not account for the strain being averaged over the area of the strain gauge, the hole is blind instead of through, and so on. Therefore, coefficients taking the practical aspects of the measurement into account are used. They are mostly determined by numerical computation using the finite element method. They express the relation between the relieved deformations and the residual stresses, taking into account the hole size, hole depth, shape of the strain gauge rosette, material, and other parameters. The evaluation of the residual stresses depends on the method used to calculate them from the measured relieved deformations. All the evaluation methods are built on the same basic principles; they differ in the preconditions for use, the accuracy requirements on the calibration coefficients, and the possibility of taking additional influences into account. In general, the hole is made in successive steps and the relieved deformations are measured after each step. Evaluation methods for the residual stress Several methods have been developed for the evaluation of residual stresses from the relieved deformations. The fundamental method is the equivalent uniform stress method. The coefficients for a particular hole diameter, rosette type, and hole depth are published in the standard ASTM E837. The method is suitable for a constant or slowly varying stress along the depth; for non-constant stresses it can serve only as a guideline and may give highly distorted results. The most general method is the integral method. It accounts for the contribution of the stress relieved at a given depth, which changes with the total depth of the hole. The calibration coefficients are expressed as matrices, and the evaluation leads to a system of equations whose solution is a vector of residual stresses at particular depths. A numerical simulation is required to obtain the calibration coefficients. The integral method and its coefficients are defined in the standard ASTM E837. 
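As a schematic illustration of the integral method, the Python sketch below solves the lower-triangular system relating relieved strains to incremental stresses for a single strain component; the full method combines the three rosette strains, and the matrix values here are made-up placeholders, not ASTM E837 coefficients, which come from finite element computations:

import numpy as np

# a_bar[j, k]: relieved strain measured after drilling step j due to a unit
# stress acting in depth increment k (k <= j), hence lower triangular.
# Values are illustrative placeholders only (microstrain per MPa).
a_bar = np.array([
    [-0.10,  0.00,  0.00],
    [-0.14, -0.08,  0.00],
    [-0.16, -0.12, -0.06],
])

eps = np.array([-12.0, -25.0, -34.0])  # relieved strains measured after each step

sigma = np.linalg.solve(a_bar, eps)    # residual stress in each depth increment (MPa)
print(sigma)

The triangular structure reflects the physics described above: a stress acting in a deep increment cannot influence the strain measured before the hole has reached that depth.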
There are other evaluation methods that place lower demands on the calibration coefficients and on the evaluation process itself; these include the average stress method and the incremental strain method. Both methods are based on the assumption that the change in deformation is caused solely by the stress relieved on the drilled increment. They are suitable only if there are small changes in the stress profile, and both give numerically correct results for uniform stresses. The power series method and the spline method are further modifications of the integral method. They both take into account the distance of the stress effect from the surface as well as the total hole depth. Contrary to the integral method, the resulting stress values are approximated by a polynomial or a spline. The power series method is very stable but cannot capture rapidly changing stress values. The spline method is more stable and less susceptible to errors than the integral method, and it can capture the actual stress values better than the power series method. Its main disadvantage is the complicated mathematical calculation needed to solve a system of nonlinear equations. Using the hole drilling method The hole drilling method finds use in many industrial areas dealing with material production and processing. The most important technologies include heat treatment, mechanical and thermal surface finishing, machining, welding, coating, and the manufacture of composites. Despite its relative universality, the method requires these fundamental preconditions to be met: the possibility of drilling the material, the possibility of applying strain gauge rosettes (or other means of measuring the deformations), and knowledge of the material properties. Additional conditions can affect the accuracy and repeatability of the measurement. These include especially the size and shape of the sample, the distance of the measured area from the edges, the homogeneity of the material, the presence of residual stress gradients, etc. Hole drilling can be performed in the laboratory or as a field measurement, making it suitable for measuring actual stresses in large components that cannot be moved. See also Residual stress Deep hole drilling Friction drilling External links Measuring residual stresses by the hole drilling method, University of West Bohemia, New Technologies - Research Centre, department Thermomechanics of Technological Processes Laboratory and Field Measurements of Residual Stress by Hole Drilling References Mechanical engineering
Hole drilling method
Physics,Engineering
1,655
19,858,814
https://en.wikipedia.org/wiki/Dynamic%20mode%20decomposition
In data science, dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter J. Schmid and Joern Sesterhenn in 2008. Given a time series of data, DMD computes a set of modes, each of which is associated with a fixed oscillation frequency and decay/growth rate. For linear systems in particular, these modes and frequencies are analogous to the normal modes of the system, but more generally, they are approximations of the modes and eigenvalues of the composition operator (also called the Koopman operator). Due to the intrinsic temporal behaviors associated with each mode, DMD differs from dimensionality reduction methods such as principal component analysis (PCA), which computes orthogonal modes that lack predetermined temporal behaviors. Because its modes are not orthogonal, DMD-based representations can be less parsimonious than those generated by PCA. However, they can also be more physically meaningful because each mode is associated with a damped (or driven) sinusoidal behavior in time.

Overview
Dynamic mode decomposition was first introduced by Schmid as a numerical procedure for extracting dynamical features from flow data. The data takes the form of a snapshot sequence

$V_1^N = \{v_1, v_2, \dots, v_N\},$

where $v_i$ is the $i$-th snapshot of the flow field, and $V_1^N$ is a data matrix whose columns are the individual snapshots. These snapshots are assumed to be related via a linear mapping $A$ that defines a linear dynamical system

$v_{i+1} = A v_i$

that remains approximately the same over the duration of the sampling period. Written in matrix form, this implies that

$V_2^N = A V_1^{N-1} + r e_{N-1}^T,$

where $r$ is the vector of residuals that accounts for behaviors that cannot be described completely by $A$, $e_{N-1} = \{0, 0, \dots, 1\} \in \mathbb{R}^{N-1}$, $V_1^{N-1} = \{v_1, v_2, \dots, v_{N-1}\}$, and $V_2^N = \{v_2, v_3, \dots, v_N\}$. Regardless of the approach, the output of DMD is the eigenvalues and eigenvectors of $A$, which are referred to as the DMD eigenvalues and DMD modes respectively.

Algorithm
There are two methods for obtaining these eigenvalues and modes. The first is Arnoldi-like, which is useful for theoretical analysis due to its connection with Krylov methods. The second is a singular value decomposition (SVD) based approach that is more robust to noise in the data and to numerical errors.

The Arnoldi approach
In fluids applications, the size of a snapshot, $M$, is assumed to be much larger than the number of snapshots $N$, so there are many equally valid choices of $A$. The original DMD algorithm picks $A$ so that each of the snapshots in $V_2^N$ can be expressed as linear combinations of the snapshots in $V_1^{N-1}$. Because most of the snapshots appear in both data sets, this representation is error free for all snapshots except $v_N$, which is written as

$v_N = a_1 v_1 + a_2 v_2 + \dots + a_{N-1} v_{N-1} + r = V_1^{N-1} a + r,$

where $a$ is a set of coefficients DMD must identify and $r$ is the residual. In total,

$V_2^N = A V_1^{N-1} + r e_{N-1}^T = V_1^{N-1} S + r e_{N-1}^T,$

where $S$ is the companion matrix

$S = \begin{pmatrix} 0 & 0 & \cdots & 0 & a_1 \\ 1 & 0 & \cdots & 0 & a_2 \\ 0 & 1 & \cdots & 0 & a_3 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & a_{N-1} \end{pmatrix}.$

The vector $a$ can be computed by solving a least squares problem, which minimizes the overall residual. In particular, if we take the QR decomposition of $V_1^{N-1} = QR$, then $a = R^{-1} Q^T v_N$. In this form, DMD is a type of Arnoldi method, and therefore the eigenvalues of $S$ are approximations of the eigenvalues of $A$. Furthermore, if $y$ is an eigenvector of $S$, then $V_1^{N-1} y$ is an approximate eigenvector of $A$. The reason an eigendecomposition is performed on $S$ rather than $A$ is that $S$ is much smaller than $A$, so the computational cost of DMD is determined by the number of snapshots rather than the size of a snapshot.

The SVD-based approach
Instead of computing the companion matrix $S$, the SVD-based approach yields the matrix $\tilde S$ that is related to $A$ via a similarity transform. To do this, assume we have the SVD of $V_1^{N-1} = U \Sigma W^T$. Then

$V_2^N = A V_1^{N-1} + r e_{N-1}^T = A U \Sigma W^T + r e_{N-1}^T.$

Equivalent to the assumption made by the Arnoldi-based approach, we choose $A$ such that the snapshots in $V_2^N$ can be written as the linear superposition of the columns in $U$, which is equivalent to requiring that they can be written as the superposition of POD modes. With this restriction, minimizing the residual requires that it is orthogonal to the POD basis (i.e., $U^T r = 0$). Then multiplying both sides of the equation above by $U^T$ yields $U^T V_2^N = U^T A U \Sigma W^T$, which can be manipulated to obtain

$\tilde S = U^T A U = U^T V_2^N W \Sigma^{-1}.$

Because $A$ and $\tilde S$ are related via a similarity transform, the eigenvalues of $\tilde S$ are the eigenvalues of $A$, and if $y$ is an eigenvector of $\tilde S$, then $U y$ is an eigenvector of $A$. In summary, the SVD-based approach is as follows: Split the time series of data in $V_1^N$ into the two matrices $V_1^{N-1}$ and $V_2^N$. Compute the SVD of $V_1^{N-1} = U \Sigma W^T$. Form the matrix $\tilde S = U^T V_2^N W \Sigma^{-1}$, and compute its eigenvalues $\lambda_i$ and eigenvectors $y_i$. The $i$-th DMD eigenvalue is $\lambda_i$ and the $i$-th DMD mode is $U y_i$. The advantage of the SVD-based approach over the Arnoldi-like approach is that noise in the data and numerical truncation issues can be compensated for by truncating the SVD of $V_1^{N-1}$. As noted in the literature, accurately computing more than the first couple of modes and eigenvalues can be difficult on experimental data sets without this truncation step.
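The summary above maps directly onto a few lines of linear algebra. The following is a minimal NumPy sketch of the SVD-based algorithm, written for illustration rather than production use; the function name, the synthetic data, and the truncation rank are choices made here, not part of the original formulation.

```python
import numpy as np

def dmd(V, rank=None):
    """SVD-based DMD of a snapshot matrix V whose columns are v_1..v_N.

    Returns the DMD eigenvalues and DMD modes, optionally truncating
    the SVD of V_1^{N-1} to the given rank to suppress noise.
    """
    V1 = V[:, :-1]                   # V_1^{N-1}: snapshots 1 .. N-1
    V2 = V[:, 1:]                    # V_2^{N}  : snapshots 2 .. N
    U, s, Wh = np.linalg.svd(V1, full_matrices=False)
    if rank is not None:             # optional truncation step
        U, s, Wh = U[:, :rank], s[:rank], Wh[:rank, :]
    # S_tilde = U* V2 W Sigma^{-1}, similar to A on the POD subspace
    S_tilde = U.conj().T @ V2 @ Wh.conj().T @ np.diag(1.0 / s)
    eigvals, y = np.linalg.eig(S_tilde)
    modes = U @ y                    # i-th DMD mode is U y_i
    return eigvals, modes

# Usage on synthetic data: snapshots of a damped traveling wave.
x = np.linspace(0.0, 10.0, 400)
t = np.linspace(0.0, 4.0, 100)
V = np.array([np.exp(-0.1 * ti) * np.sin(x - 2.0 * ti) for ti in t]).T
eigvals, modes = dmd(V, rank=10)
# Continuous-time growth rates / frequencies from the discrete eigenvalues:
omega = np.log(eigvals) / (t[1] - t[0])
```

Note that the eigendecomposition is taken of the small matrix `S_tilde`, so the cost is governed by the number of snapshots rather than the snapshot size, exactly as in the Arnoldi variant.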
Theoretical and algorithmic advancements
Since its inception in 2010, a considerable amount of work has focused on understanding and improving DMD. One of the first analyses of DMD by Rowley et al. established the connection between DMD and the Koopman operator, and helped to explain the output of DMD when applied to nonlinear systems. Since then, a number of modifications have been developed that either strengthen this connection further or enhance the robustness and applicability of the approach. Optimized DMD: Optimized DMD is a modification of the original DMD algorithm designed to compensate for two limitations of that approach: (i) the difficulty of DMD mode selection, and (ii) the sensitivity of DMD to noise or other errors in the last snapshot of the time series. Optimized DMD recasts the DMD procedure as an optimization problem where the identified linear operator has a fixed rank. Furthermore, unlike DMD, which perfectly reproduces all of the snapshots except for the last, Optimized DMD allows the reconstruction errors to be distributed throughout the data set, which appears to make the approach more robust in practice. Optimal Mode Decomposition: Optimal Mode Decomposition (OMD) recasts the DMD procedure as an optimization problem and allows the user to directly impose the rank of the identified system. Provided this rank is chosen properly, OMD can produce linear models with smaller residual errors and more accurate eigenvalues on both synthetic and experimental data sets. Exact DMD: The Exact DMD algorithm generalizes the original DMD algorithm in two ways. First, in the original DMD algorithm the data must be a time series of snapshots, but Exact DMD accepts a data set of snapshot pairs. The snapshots in each pair must be separated by a fixed $\Delta t$, but do not need to be drawn from a single time series. In particular, Exact DMD allows data from multiple experiments to be aggregated into a single data set. Second, the original DMD algorithm effectively pre-processes the data by projecting onto a set of POD modes. The Exact DMD algorithm removes this pre-processing step and can produce DMD modes that cannot be written as the superposition of POD modes. Sparsity Promoting DMD: Sparsity promoting DMD is a post-processing procedure for DMD mode and eigenvalue selection. It uses an $\ell_1$ penalty to identify a smaller set of important DMD modes, and is an alternative approach to the DMD mode selection problem that can be solved efficiently using convex optimization techniques. Multi-Resolution DMD: Multi-Resolution DMD (mrDMD) is a combination of the techniques used in multiresolution analysis with Exact DMD, designed to robustly extract DMD modes and eigenvalues from data sets containing multiple timescales. The mrDMD approach was applied to global surface temperature data, and it identifies a DMD mode that appears during El Niño years. Extended DMD: Extended DMD is a modification of Exact DMD that strengthens the connection between DMD and the Koopman operator. As the name implies, Extended DMD is an extension of DMD that uses a richer set of observable functions to produce more accurate approximations of the Koopman operator. This extended set could be chosen a priori or learned from data. The work also demonstrated that DMD and related methods produce approximations of the Koopman eigenfunctions in addition to the more commonly used eigenvalues and modes. Residual DMD: Residual DMD provides a means to control the projection errors of DMD and Extended DMD that arise from finite-dimensional approximations of the Koopman operator. The method utilizes the same snapshot data but introduces an additional finite matrix that captures infinite-dimensional residuals exactly in the large data limit. This enables users to sidestep spectral pollution (spurious modes), verify Koopman mode decompositions and learned dictionaries, and compute continuous spectra. Moreover, the method further bolsters the link between DMD and the Koopman operator by demonstrating how the spectral content of the latter can be computed with verification and error control. Physics-informed DMD: Physics-informed DMD forms a Procrustes problem that restricts the family of admissible models to a matrix manifold that respects the physical structure of the system. This allows physical structures to be incorporated into DMD. This approach is less prone to overfitting, requires less training data, and is often less computationally expensive to build than standard DMD models. Measure-preserving EDMD: Measure-preserving extended DMD (mpEDMD) offers a Galerkin method whose eigendecomposition converges to the spectral quantities of the Koopman operator for general measure-preserving dynamical systems. This method applies an orthogonal Procrustes problem (essentially a polar decomposition) to DMD and extended DMD. Beyond convergence, mpEDMD upholds physical conservation laws and exhibits enhanced robustness to noise as well as improved long-term behavior. DMD with Control: Dynamic mode decomposition with control (DMDc) is a modification of the DMD procedure designed for data obtained from input-output systems. One unique feature of DMDc is the ability to disambiguate the effects of system actuation from the open-loop dynamics, which is useful when data are obtained in the presence of actuation. Total Least Squares DMD: Total Least Squares DMD is a recent modification of Exact DMD meant to address issues of robustness to measurement noise in the data. In that work, the authors interpret Exact DMD as a regression problem that is solved using ordinary least squares (OLS), which assumes that the regressors are noise free. This assumption creates a bias in the DMD eigenvalues when it is applied to experimental data sets where all of the observations are noisy.
Total least squares DMD replaces the OLS problem with a total least squares problem, which eliminates this bias. Dynamic Distribution Decomposition: DDD focuses on the forward problem in continuous time, i.e., the transfer operator. However, the method developed can also be used for fitting DMD problems in continuous time. In addition to the algorithms listed here, similar application-specific techniques have been developed. For example, like DMD, Prony's method represents a signal as the superposition of damped sinusoids. In climate science, linear inverse modeling is also strongly connected with DMD. For a more comprehensive list, see Tu et al.

Examples
Trailing edge of a profile
The wake of an obstacle in the flow may develop a Kármán vortex street. Fig. 1 shows the shedding of a vortex behind the trailing edge of a profile. The DMD analysis was applied to 90 sequential entropy fields and yielded an approximated eigenvalue spectrum as depicted below. The analysis was applied to the numerical results, without referring to the governing equations. The profile is seen in white. The white arcs are the processor boundaries, since the computation was performed on a parallel computer using different computational blocks. Roughly a third of the spectrum was highly damped (large, negative real part) and is not shown. The dominant shedding mode is shown in the following pictures. The image to the left is the real part; the image to the right, the imaginary part of the eigenvector. Again, the entropy eigenvector is shown in this picture. The acoustic content of the same mode is seen in the bottom half of the next plot. The top half corresponds to the entropy mode as above.

Synthetic example of a traveling pattern
The DMD analysis assumes a pattern of the form

$q(x_1, x_2, x_3, \dots, t) = e^{\omega t} \, \hat q(x_1, x_2, x_3, \dots),$

where the coordinate carrying the exponential factor can be any of the independent variables of the problem, but it has to be selected in advance; in this example, time is the preselected exponential coordinate. A sample traveling pattern, with fixed values of the wavenumber, temporal frequency, and growth rate, is shown in the following figures. The left picture shows the pattern without noise, the right one with noise added. The amplitude of the random noise is the same as that of the pattern. A DMD analysis is performed with 21 synthetically generated fields separated by a fixed time interval, which bounds the frequencies the analysis can resolve. The spectrum is symmetric and shows three almost undamped modes (small negative real part), whereas the other modes are heavily damped. The purely real eigenvalue corresponds to the mean of the field, whereas the complex-conjugate pair corresponds to the imposed traveling pattern, recovering its frequency with a relative error of −1/1000. Increasing the noise to 10 times the signal value yields about the same error. The real and imaginary parts of one of the latter two eigenmodes are depicted in the following figure.

See also
Several other decompositions of experimental data exist. If the governing equations are available, an eigenvalue decomposition might be feasible. Eigenvalue decomposition Empirical mode decomposition Global mode Normal mode Proper orthogonal decomposition Singular-value decomposition References Schmid, P. J. & Sesterhenn, J. L. 2008 Dynamic mode decomposition of numerical and experimental data. In Bull. Amer. Phys. Soc., 61st APS meeting, p. 208. San Antonio. Hasselmann, K., 1988. POPs and PIPs. The reduction of complex dynamical systems using principal oscillation and interaction patterns. J. Geophys. Res., 93(D9): 10975–10988. Experimental physics Time series Matrix decompositions
Dynamic mode decomposition
Physics
2,959
9,461,013
https://en.wikipedia.org/wiki/Finger%20Touching%20Cell%20Phone
The Finger Touching Cell Phone was a concept cell phone developed by Samsung and Sunman Kwon at Hong-ik University, South Korea. Concept The phone was designed to be worn as a wristband. It would project a 3 × 4 mobile-style keypad onto the wearer's fingers, with each joint acting as a button. The product won an iF Concept Product Award in 2007. References External links http://digital.no.msn.com/article.aspx?cp-documentid=2890839 (Norwegian) http://techdigest.tv/2007/02/turn_your_finge.html Mobile phones Pointing-device text input
Finger Touching Cell Phone
Technology
144
5,690,312
https://en.wikipedia.org/wiki/Maria%20Pia%20Bridge
Maria Pia Bridge (in Portuguese Ponte de D. Maria Pia, commonly known as Ponte de Dona Maria Pia) is a railway bridge built in 1877 and attributed to Gustave Eiffel. It is situated between the northern Portuguese municipalities of Porto and Vila Nova de Gaia. The double-hinged, crescent-arch bridge is made of wrought iron and spans the Douro River. It is part of the Linha Norte system of the national railway. At the time of its construction, it was the longest single-arch span in the world. It is no longer used for rail transport, having been replaced by the Ponte de São João (St. John's Bridge) in 1991. It is often confused with the similar D. Luís Bridge, which was built nine years later and is located to the west, although the D. Luís Bridge has two decks instead of one.

History
In 1875, the Royal Portuguese Railway Company announced a competition for a bridge to carry the Lisbon to Porto railway across the river Douro. This was technically very demanding: the river was fast-flowing, its depth increased greatly during times of flooding, and the riverbed was made up of a deep layer of gravel. These factors ruled out the construction of piers in the river, meaning that the bridge would have to have a central span of 160 m (525 ft). At the time, the longest span of an arch bridge was the 158.5 m (520 ft) span of the bridge built by James B. Eads over the Mississippi at St Louis. When the project was approved, João Crisóstomo de Abreu e Sousa, a member of the Junta Consultiva das Obras Públicas (Consultative Junta for Public Works), thought that the deck should have two tracks. Gustave Eiffel's design proposal, priced at 965,000 French francs, was the least expensive of the four designs considered, at around two-thirds the cost of the nearest competitor. Since Eiffel's company was relatively inexperienced, a commission was appointed to report on its suitability to undertake the work. The commission's report was favorable, although it did emphasise the difficulty of the project. Responsibility for the actual design is difficult to attribute, but it is likely that Théophile Seyrig, Eiffel's business partner, who presented a paper on the bridge to the Société des Ingénieurs Civils in 1878, was largely responsible. In his account of the bridge that accompanied the 1:50 scale model exhibited at the 1878 World's Fair, Eiffel credited Seyrig and Henry de Dion with work on the calculations and drawings. Construction started on 5 January 1876. Work on the abutments, piers, and approach decking was complete by September. Work then paused due to winter flooding, and the erection of the central arch span was not restarted until March 1877. By 28 October 1877, the platform was mounted and concluded, with the work on the bridge finishing on 30 October 1877. Tests were performed between 1 and 2 November, leading to the 4 November inauguration by King D. Luís I and Queen Maria Pia of Savoy (the eponym of the bridge). Between 1897 and 1898 there was some concern among technicians about the integrity of the bridge: its width, the interruption of the principal beams, its lightweight structure, and its elastic nature. The Oficina de Obras Metálicas (Metal Works Office), created in Ovar in 1890, supported the work to reinforce and repair the structure. As a consequence, restrictions were placed on transit over the structure between 1900 and 1906: axle load was limited to 14 tons and speed was restricted. Alterations to the deck of the bridge were performed under the oversight of Xavier Cordeiro in 1900.
These were followed between 1901 and 1906 by improvements to the triangular beams, which were performed by the Oficina of Ovar. After consulting Manet Rabut, a French engineer specializing in metallic structures, the Oficina concluded in 1907 that the arch and the works performed on the bridge were sufficient to allow circulation. This did not prevent further work on the fore and aft structural members to make the bridge more accessible and to reinforce the main pillars. In 1916, a commission was created to study the possibility of a secondary crossing between Vila Nova de Gaia and Porto. In 1928, the bridge was described as "a real obstacle to traffic." In order to prepare the structure for the beginning of CP service across the bridge with improved Series 070 locomotives on 1 November 1950, the engineer João de Lemos carried out several studies in 1948 to evaluate the bridge's condition: a study of the deck (including structural members) and analyses of the continuous beams and the arch's structural supports. The analysis of the stability of the bridge, handled by the Laboratório Nacional de Engenharia Civil (LNEC), resulted in the injection of cement and the repair of the masonry joints and pillars that connected with the metallic structures. At the same time, the repair team removed flaking paint from the structure and treated corrosion, repainting with new metallic paint. Another analytical study, begun in 1966, examined upgrading the line for electric locomotives (Bo-Bo), culminating in the electrification of the Linha Norte. In 1969, on-site stress tests verified the analytical results. In 1990, the bridge was classified by the American Society of Civil Engineers as an International Historic Civil Engineering Landmark. In 1991, rail service over the bridge ended because the single track and speed restrictions severely limited traffic. Rail functions transitioned to the São João Bridge (designed by the engineer Edgar Cardoso). In 1998, there was a plan to rehabilitate and illuminate the bridge, resulting in the establishment of a tourist train attraction between the Museu dos Transportes and the area that included the wine cellars of Porto, a route using a formerly closed tunnel under the historic centre of Porto. In 2013, there was an effort to relocate the bridge to the city centre, where it would serve as a monument.

Architecture
The bridge stands in an urban cityscape over the Douro River, connecting the mount of Seminário in the municipality of Porto to the Serra do Pilar in a lightly populated section of the municipality of Vila Nova de Gaia. The structure consists of a deck supported by two piers on one side of the river and three on the other, with a central arch spanning 160 m. It is supported on three pillars in Vila Nova de Gaia and by two pillars in Porto, with two shorter pillars supporting the arch. The five interlaced support pillars are constructed in a pyramidal format over granite masonry blocks, across six spans, three on the Gaia side and three on the Porto side. The deck carries painted ironwork guardrails on granite masonry. Another innovation was the method of construction used for the central arch. Since it was impossible to use any falsework, the arch was built out from the abutments on either side, its weight being supported by steel cables attached to the top of the piers supporting the deck. The same method was also used to build the decking, with temporary tower structures built above deck level to support the cables.
This technique had been previously used by Eads, but its use by Eiffel shows his adoption of the latest engineering techniques. The design uses a parabolic arch. References Notes Sources External links Bridges completed in 1877 Bridges in Porto Bridges in Vila Nova de Gaia Bridges over the Douro River Gustave Eiffel's designs Historic Civil Engineering Landmarks Listed bridges in Portugal National monuments in Porto District Railway bridges in Portugal Truss arch bridges Wrought iron bridges 1877 establishments in Portugal
Maria Pia Bridge
Engineering
1,567
11,717,153
https://en.wikipedia.org/wiki/TabletKiosk
TabletKiosk is a manufacturer of enterprise-grade Tablet PCs and UMPCs located in Torrance, California, United States. All mobile computers produced by TabletKiosk fall into the slate category, featuring touchscreen or pen (active digitizer) input in lieu of integrated or convertible keyboards. Current products include the Sahara Slate PC i500 series, designed in-house at TabletKiosk's Taiwan R&D facility. Early generations of the eo brand of UMPC (Ultra-Mobile PC) were designed in collaboration between outside designers and the TabletKiosk team, while the fourth generation of this brand, the eo a7400, was designed exclusively in-house. TabletKiosk is a wholly owned subsidiary of Sand Dune Ventures, based in Torrance, California. In 2006, TabletKiosk delayed shipment of its "eo" brand tablet after discovering problems with the device's fan. SoftBrands announced in 2007 that it would use TabletKiosk's Sahara Slate PC line to distribute SoftBrands software to hotel companies. Parkland Memorial Hospital in Dallas, Texas, United States, has patients visiting its emergency department fill in their details using a TabletKiosk machine. In 2013, Healthcare Global named the Sahara Slate PC i500 one of the top 10 mobile tablets for healthcare professionals. References See also External links Company website Computer hardware companies Computer systems companies Computer companies established in 2003 Microsoft Tablet PC Tablet computers Computer companies of the United States
TabletKiosk
Technology
306
16,865,499
https://en.wikipedia.org/wiki/Earthquake%20shaking%20table
There are different experimental techniques that can be used to test the response of structures and soil or rock slopes and to verify their seismic performance. One of these is the use of an earthquake shaking table (a shaking table or shake table). This device is used for shaking scaled slopes, structural models, or building components with a wide range of simulated ground motions, including reproductions of recorded earthquake time-histories. While modern tables typically consist of a rectangular platform that is driven in up to six degrees of freedom (DOF) by servo-hydraulic or other types of actuators, the earliest shake table, invented at the University of Tokyo in 1893 to categorize types of building construction, ran on a simple wheel mechanism. Test specimens are fixed to the platform and shaken, often to the point of failure. Using video records and data from transducers, it is possible to interpret the dynamic behaviour of the specimen. Earthquake shaking tables are used extensively in seismic research, as they provide the means to excite structures such that they are subjected to conditions representative of true earthquake ground motions. They are also used in other fields of engineering to test and qualify vehicles, and components of vehicles, that must meet heavy vibration requirements and standards. Some applications include testing according to aerospace, electrical, and military standards. Earthquake shaking tables are also used in model testing contests, where participants evaluate designs developed within specific guidelines against simulated seismic activity. Simple shake tables are used in architecture and structural engineering primarily for educational purposes, helping students learn how structures respond to earthquakes through hands-on model testing. See also Earthquake engineering References Further reading IEEE 693-2018: "IEEE Recommended Practice for Seismic Design of Substations", Institute of Electrical and Electronics Engineers, 2018. External links Hydra shaker – European Space Agency Earthquake engineering Mechanical tests
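Reproducing a recorded earthquake on a table driven by displacement-controlled actuators typically involves converting the recorded acceleration history into a displacement command. The sketch below is a toy illustration of that step, using synthetic data and a crude linear detrend as baseline correction; it is not a description of any particular table's controller.

```python
import numpy as np

def accel_to_displacement(accel, dt):
    """Twice-integrate a ground acceleration record (m/s^2, sampled at dt)
    into a displacement command (m). A linear detrend is applied after each
    integration as a crude baseline correction, so the platform returns
    approximately to rest at the end of the record."""
    def detrend(y):
        t = np.arange(len(y)) * dt
        slope, intercept = np.polyfit(t, y, 1)
        return y - (slope * t + intercept)

    vel = detrend(np.cumsum(accel) * dt)    # acceleration -> velocity
    disp = detrend(np.cumsum(vel) * dt)     # velocity -> displacement
    return disp

# Synthetic 4 s "record": a decaying 2 Hz sinusoid standing in for
# a real accelerogram.
dt = 0.005
t = np.arange(0.0, 4.0, dt)
accel = 3.0 * np.exp(-0.8 * t) * np.sin(2.0 * np.pi * 2.0 * t)
command = accel_to_displacement(accel, dt)
```

In practice, scaled specimens also require the time axis and amplitudes of the record to be scaled according to similitude laws, and real controllers compensate for actuator and oil-column dynamics; both are beyond this sketch.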
Earthquake shaking table
Engineering
360
1,677,464
https://en.wikipedia.org/wiki/Space%20environment
Space environment is a branch of astronautics, aerospace engineering and space physics that seeks to understand and address conditions existing in space that affect the design and operation of spacecraft. A related subject, space weather, deals with dynamic processes in the solar-terrestrial system that can give rise to effects on spacecraft, but that can also affect the atmosphere, ionosphere and geomagnetic field, giving rise to several other kinds of effects on human technologies. Effects on spacecraft can arise from radiation, space debris and meteoroid impact, upper atmospheric drag and spacecraft electrostatic charging. Various mitigation strategies have been adopted. Radiation Radiation in space usually comes from three main sources: The Van Allen radiation belts Solar proton events and solar energetic particles; and Galactic cosmic rays. For long-duration missions, the high doses of radiation can damage electronic components and solar cells. Radiation-induced "single-event effects", such as single-event upsets, are also a major concern. Crewed missions usually avoid the radiation belts, and the International Space Station is at an altitude well below the most severe regions of the radiation belts. During solar energetic events (solar flares and coronal mass ejections) particles can be accelerated to very high energies and can reach the Earth in times as short as 30 minutes (but usually take some hours). These particles are mainly protons and heavier ions that can cause radiation damage, disruption to logic circuits, and even hazards to astronauts. Crewed missions to return to the Moon or to travel to Mars will have to deal with the major problems presented by solar particle events to radiation safety, in addition to the important contribution to doses from the low-level background cosmic rays. In near-Earth orbits, the Earth's geomagnetic field screens spacecraft from a large part of these hazards, a process called geomagnetic shielding. Debris Space debris and meteoroids can impact spacecraft at high speeds, causing mechanical or electrical damage. The average impact speed of space debris is about 10 km/s, while the average speed of meteoroids is much greater. For example, the meteoroids associated with the Perseid meteor shower travel at an average speed of about 58 km/s. Mechanical damage from debris impacts has been studied through space missions including LDEF, which had over 20,000 documented impacts through its 5.7-year mission. Electrical anomalies associated with impact events include the loss of attitude control by ESA's Olympus spacecraft during the 1993 Perseid meteor shower. A similar event occurred with the Landsat 5 spacecraft during the 2009 Perseid meteor shower. Electrostatic charging Spacecraft electrostatic charging is caused by the hot plasma environment around the Earth. The plasma encountered in the region of the geostationary orbit becomes heated during geomagnetic substorms caused by disturbances in the solar wind. "Hot" electrons (with energies in the kilo-electron-volt range) collect on surfaces of spacecraft and can establish electrostatic potentials of the order of kilovolts. As a result, discharges can occur and are known to be the source of many spacecraft anomalies. Mitigation strategies Solutions devised by scientists and engineers include, but are not limited to, spacecraft shielding, special "hardening" of electronic systems, and various collision detection systems.
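To give a sense of scale for the impact hazards discussed above, here is a short back-of-the-envelope calculation. The speeds are the approximate figures quoted in the text (and are labeled as assumptions in the code); the impactor masses are arbitrary examples.

```python
# Kinetic energy E = (1/2) m v^2 for representative impactor masses.
DEBRIS_V = 10e3      # m/s, approximate average debris impact speed (assumed)
METEOROID_V = 58e3   # m/s, approximate Perseid meteoroid speed (assumed)

def impact_energy_joules(mass_kg, speed_m_s):
    """Kinetic energy of an impactor; ignores obliquity and fragmentation."""
    return 0.5 * mass_kg * speed_m_s ** 2

for m in (1e-6, 1e-3, 1.0):  # 1 mg, 1 g, 1 kg
    print(f"{m:8.0e} kg: debris {impact_energy_joules(m, DEBRIS_V):10.3g} J, "
          f"meteoroid {impact_energy_joules(m, METEOROID_V):10.3g} J")
```

Because energy grows with the square of speed, a Perseid meteoroid of a given mass carries roughly 34 times the energy of a typical debris particle of the same mass, which is why even very small meteoroids are a concern.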
Evaluation of effects during spacecraft design includes the application of various models of the environment, including radiation belt models, spacecraft-plasma interaction models, and atmospheric models to predict drag effects encountered in lower orbits and during reentry. The field often overlaps with the disciplines of astrophysics, atmospheric science, space physics, and geophysics, albeit usually with an emphasis on application. The United States government maintains a Space Weather Prediction Center at Boulder, Colorado. The Space Weather Prediction Center (SWPC) is part of the National Oceanic and Atmospheric Administration (NOAA). SWPC is one of the National Weather Service's (NWS) National Centers for Environmental Prediction (NCEP). Law Environmental law within space law remains largely unestablished, but it has become an issue in light of increasing space debris. Space environmentalism Space environmentalism is an advocacy position holding that space, too, requires regulation and protection; it has gained attention from an increasing number of academics, such as Moriba Jah. See also ECSS standard E-ST-10-04C on Space environment Graveyard orbit Kármán line Outer space Outline of space science Space climate Spacecraft cemetery Space Environment Data System (SEDAT) Space Environment Information System (SPENVIS) Space weathering References External links Space Environment Technologies (SET) Space Weather Center (SWC) ESA Space Environment and Effects Analysis Section International Space Environment Service (ISES) Aerospace engineering Astronautics Space physics
Space environment
Astronomy,Engineering
938
32,274,647
https://en.wikipedia.org/wiki/C13H16O7
The molecular formula C13H16O7 (molar mass: 284.26 g/mol, exact mass: 284.0896 u) may refer to: Benzoyl-β-D-glucoside Helicin
C13H16O7
Chemistry
67
1,914,405
https://en.wikipedia.org/wiki/Heats%20of%20fusion%20of%20the%20elements%20%28data%20page%29
Heat of fusion Notes Values refer to the enthalpy change between the liquid phase and the most stable solid phase at the melting point (normal, 101.325 kPa). References CRC As quoted from various sources in an online version of: David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 6, Fluid Properties; Enthalpy of Fusion LNG As quoted from various sources in: J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 6, Thermodynamic Properties; Table 6.4, Heats of Fusion, Vaporization, and Sublimation and Specific Heat at Various Temperatures of the Elements and Inorganic Compounds WEL As quoted at http://www.webelements.com/ from these sources: G.W.C. Kaye and T.H. Laby in Tables of physical and chemical constants, Longman, London, UK, 15th edition, 1993. D.R. Lide, (ed.) in Chemical Rubber Company handbook of chemistry and physics, CRC Press, Boca Raton, Florida, USA, 79th edition, 1998. A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992. H. Ellis (Ed.) in Nuffield Advanced Science Book of Data, Longman, London, UK, 1972. See also Thermodynamic properties Chemical element data pages
Heats of fusion of the elements (data page)
Physics,Chemistry,Mathematics
330
1,353,830
https://en.wikipedia.org/wiki/Matte%20painting
A matte painting is a painted representation of a landscape, set, or distant location that allows filmmakers to create the illusion of an environment that is not present at the filming location. Historically, matte painters and film technicians have used various techniques to combine a matte-painted image with live-action footage (compositing). At its best, depending on the skill levels of the artists and technicians, the effect is seamless and creates environments that would otherwise be impossible or expensive to film. Within a shot, the painted portion remains static, while live-action movement is composited into it. Background Traditionally, matte paintings were made by artists using paints or pastels on large sheets of glass for integrating with the live-action footage. The first known matte painting shot was made in 1907 by Norman Dawn (ASC), who improvised the crumbling California Missions by painting them on glass for the movie Missions of California. Notable traditional matte-painting shots include Dorothy's approach to the Emerald City in The Wizard of Oz, Charles Foster Kane's Xanadu in Citizen Kane, and the seemingly bottomless tractor-beam set of Star Wars Episode IV: A New Hope. The documentary The Making of Star Wars noted that the tractor beam scene was achieved with a glass painting. By the mid-1980s, advancements in computer graphics programs allowed matte painters to work in the digital realm. The first digital matte shot was created by painter Chris Evans in 1985 for Young Sherlock Holmes, for a scene featuring a computer-graphics (CG) animation of a knight leaping from a stained-glass window. Evans first painted the window in acrylics, then scanned the painting into LucasFilm's Pixar system for further digital manipulation. The computer animation (another first) blended perfectly with the digital matte, which could not have been accomplished using a traditional matte painting. From traditional to digital Traditional matte painting is older than the movie camera itself and was already being practiced in the early years of photography to create painted elements in photographs. With the advantages of the digital age, matte painters have slowly transitioned to a digital work environment, using pressure-sensitive pens and graphics tablets in conjunction with painting software such as Adobe Photoshop. A digital matte painter is part of a visual effects team involved in post-production, whereas a traditional matte painter was part of the special effects crew, often creating matte paintings on set to be used as backdrops. Throughout the 1990s, traditional matte paintings were still in use, but more often in conjunction with digital compositing. Die Hard 2 (1990) was the first film to digitally composite live-action footage with a traditional glass matte painting that had been photographed and scanned into a computer; the shot was used for the final scene, which took place on an airport runway. By the end of the decade, the time of hand-painted matte paintings was drawing to a close, although as late as 1997 some traditional paintings were still being made, notably Chris Evans' painting of the rescue ship in James Cameron's Titanic. One particular drawback of the digital matte artist's work is an occasional tendency of the output to look too clean, where traditional artists avoided this by using impressionistic elements or by merely suggesting details. As a result, digital matte art is often characterized by an artificially perfect look.
One of the modern approaches adopted to address this is the integration of photographic details, for example of real places, to depict realistic scenes. This is why some digital matte artists describe their work as a combination of digital painting, photo manipulation, and 3D, for the purpose of creating virtual sets that are hard or impossible to find in the real world. Paint was superseded in the 21st century by digital images created using photo references, 3-D models, and drawing tablets. Matte painters combine their digitally painted matte textures with computer-generated 3-D environments, allowing for 3-D camera movement. Lighting algorithms used to simulate lighting sources expanded in scope in 1995, when radiosity rendering was applied to film for the first time in Martin Scorsese's Casino. Matte World Digital collaborated with LightScape to simulate the indirect bounce-light effect of the millions of neon lights of the 1970s-era Las Vegas strip. Lower computer processing times continue to alter and expand matte painting technologies and techniques. Matte painting techniques are also used in concept art, in games, and in high-end animation production. Digital matte artists A digital matte artist, or digital matte painter (DMP), is today's modern form of a traditional matte painter in the entertainment industry. They digitally paint photo-realistic interior and exterior environments that could not otherwise have been created or visited. The term "digital" is used to distinguish a DMP from a traditional matte painter. Craig Barron, the co-founder of Matte World Digital, offered an insight regarding the transition of the art from traditional to digital in the following words: "It is difficult to categorize what a matte painting shot is today... Most filmmakers still call what we do matte shots, and we like that because we see our work as an extension of the original craft. But it's more accurate to say we are involved in environment creation." Workflow and skillset The time period and extent of involvement of a digital matte artist in film production vary by the type of film and by the intentions of the artist's supervisors (film producer, film director, art director). However, there are artists, such as Mathieu Raynault, who state that they are often brought into the production at a very early stage, providing sketches and concepts to start a dialogue with the director or art director. Raynault has worked on films such as 300, Star Wars: Episode II Attack of the Clones, and two Lord of the Rings films, among others. Because of the growing need for "moving" mattes, camera projection mapping has been incorporated into the matte painting pipeline. Although ILM CG Supervisor Stefen Fangmeier came up with the idea of projecting Yusei Uesugi's aerial painting of Neverland onto a 3D mesh modeled by Geoff Campbell while working on the motion picture Hook in 1991, projection-mapping-based 3D environment matte art remained, like traditional matte painting before it, one of the industry's best-kept secrets until recently. The involvement of 3D in this previously 2D art form was revealed by Craig Barron in 1998, after Matte World Digital completed its work on the feature film Great Expectations and introduced the technique to the public as a "2.5D" matte. In production today, this combination of 2D and 3D is part of every matte artist's bread and butter. Because of their high artistic skill, digital matte artists are often also involved in the creation of concept artwork.
Notable uses The army barracks in All Quiet on the Western Front (1930). Count Dracula's castle exteriors in Dracula (1931) and other scenes. The view of Skull Island in King Kong (1933). Charlie Chaplin's blindfold roller-skating beside the illusory drop in Modern Times (1936). The view of Nottingham Castle in The Adventures of Robin Hood (1938). The 1942 spy thriller Saboteur, directed by Alfred Hitchcock, is enhanced by numerous matte shots, ranging from a California aircraft factory to the climactic scene atop New York's Statue of Liberty. Black Narcissus (1947) by Powell and Pressburger, scenes of the Himalayan convent. Several external views and the 20 miles-a-side cube left by the Ancients in Forbidden Planet (1956). In Alfred Hitchcock's North by Northwest (1959) shots of The United Nations building, Mount Rushmore and the Mount Rushmore house. Birds flying over Bodega Bay, looking down at the town below, in Alfred Hitchcock's The Birds (1963). Mary Poppins gliding over London with her umbrella, the St Paul's Cathedral and London's rooftops and aerial views in Mary Poppins (1964). The iconic image of the Statue of Liberty at the end of Planet of the Apes (1968). Diabolik's underground lair and various locations in Danger: Diabolik (1968). Virtually all of the exterior shots of San Francisco in The Love Bug (1968). The rooftops of Portobello Road, the English landscape, Miss Price's house and other scenes in Bedknobs and Broomsticks (1971) (special effects won an Academy Award). The city railway line in The Sting (1973). Views of a destroyed Los Angeles in Earthquake (1974) for which Albert Whitlock won an Academy Award. The stone column demolished by the locomotive in the Chicago station in the film Silver Streak. The Death Star's laser tunnel in Star Wars (1977). The Starfleet headquarters in Star Trek The Motion Picture (1979). The background for all scenes featuring Imperial walkers in The Empire Strikes Back (1980). The final scene of the secret government warehouse in Steven Spielberg's Raiders of the Lost Ark (1981). The Roy and Deckard chase scene in Blade Runner (1982). The view of the crashed space ship in The Thing (1982). The view of the OCP tower in RoboCop (1987) and other scenes. Gotham City street scene in Batman (1989). The Bat Cave in Batman Returns (1992). The Karl G. Jansky Very Large Array in Contact (1997). The Magic Railroad in Thomas and the Magic Railroad (2000). The bunkhouse of the Furious Five and its surrounding background in Kung Fu Panda (2008). The cityscape behind the Barnums' first apartment in The Greatest Showman (2017). Notable matte painters and technicians Dylan Cole Max Dennison Walter Percy Day Norman Dawn Linwood G. Dunn Harrison Ellenshaw Peter Ellenshaw Emilio Ruiz del Rio Michael Pangrazio Milan Schere Albert Whitlock Matthew Yuricich See also Camera projection mapping Bipack Chroma key Compositing Optical printing Video matting References Books Peter Ellenshaw; Ellenshaw Under Glass – Going to the Matte for Disney Richard Rickitt: Special Effects: The History and Technique. Billboard Books; 2nd edition, 2007; (Chapter 5 covers the history and techniques of movie matte painting.) Barron, C., 1998. Matte Painting in the Digital Age. In: Invisible Effects. Siggraph 98: Proceedings of the 25th Annual Conference on Computer Graphics, July 23, 1998. Orlando, Florida, USA. Cotta Vaz, M., 2002. The invisible Art: The Legends of Movie Matte Painting. San Francisco, CA, USA: Chronicle Books. Uesugi, Y. et al., 2008. d'artiste Matte Painting 2. 
Adelaide, SA, AUS: Ballistic Publishing. Film and video technology Film post-production technology Background artists Painting techniques Optical illusions Pastel
Matte painting
Physics
2,251
34,637,995
https://en.wikipedia.org/wiki/Oka%27s%20lemma
In mathematics, Oka's lemma, proved by Kiyoshi Oka, states that if $\Omega$ is a domain of holomorphy in $\mathbb{C}^n$, then the function $z \mapsto -\log \delta_\Omega(z)$ is plurisubharmonic on $\Omega$, where $\delta_\Omega(z)$ denotes the distance from $z$ to the boundary of $\Omega$. This property shows that the domain is pseudoconvex. Historically, the lemma was first shown for Hartogs domains in the case of two variables. The Levi problem (for unramified Riemann domains over $\mathbb{C}^n$) is the converse of Oka's lemma, which may be why Oka referred to the Levi problem as the "problème inverse de Hartogs"; the Levi problem is occasionally called the Hartogs inverse problem. References Further reading Several complex variables Theorems in complex analysis Lemmas in analysis
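A compact symbolic statement, given here as a sketch (the notation $\delta_\Omega$ for the boundary-distance function is an assumption of this formulation, not fixed by the article):

```latex
% Oka's lemma, schematically. Here \delta_\Omega(z) := \operatorname{dist}(z, \partial\Omega).
\Omega \subsetneq \mathbb{C}^n \ \text{a domain of holomorphy}
\;\Longrightarrow\;
-\log \delta_\Omega \ \text{is plurisubharmonic on } \Omega .
% Since a domain carrying such a plurisubharmonic function is pseudoconvex,
% the lemma gives: domain of holomorphy \Rightarrow pseudoconvex.
% The Levi problem asks for the converse implication.
```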
Oka's lemma
Mathematics
166
54,568,728
https://en.wikipedia.org/wiki/RigExpert
Rig Expert Ukraine Ltd is a manufacturer of RF antenna analysis and antenna tuning equipment for amateur (ham) radio and PMR two-way radio. The company was founded in 2003 and is headquartered in Kyiv, Ukraine. Current products The AA-30, AA-54 and AA-170 are essentially the same product, differing only in frequency range. Similarly, the AA-600, AA-1000 and AA-1400 are the same product with different frequency ranges. See also Antenna analyzer Antenna tuner Impedance matching References Electrical circuits Radio electronics Impedance measurements Electronic test equipment manufacturers
RigExpert
Physics,Engineering
133
2,366,889
https://en.wikipedia.org/wiki/M4%20corridor
The M4 corridor is an area in the United Kingdom adjacent to the M4 motorway, which runs from London to South Wales. It is a major hi-tech hub. Important cities and towns linked by the M4 include (from east to west) London, Slough, Bracknell, Maidenhead, Reading, Newbury, Swindon, Bath, Bristol, Newport, Cardiff, Port Talbot and Swansea. The area is also served by the Great Western Main Line, the South Wales Main Line, and London Heathrow Airport. Technology companies with major operations in the area include Adobe, Amazon, Citrix Systems, Dell, Huawei, Lexmark, LG, Microsoft, Novell, Nvidia, O2, Oracle, Panasonic, SAP, and Symantec. England The east end of the English M4 corridor is home to a large number of technology companies, particularly in Berkshire, Swindon and the Thames Valley. For this reason this part of the M4 corridor is sometimes described as England's "Silicon Valley". Slough, Windsor, Maidenhead, Reading, Bracknell and Newbury are the main towns in the Berkshire stretch of the M4. Reading is home to many information technology and financial services businesses, including Cisco, Microsoft, ING Direct, Oracle, Prudential, Yell Group and Ericsson. Vodafone has a major corporate campus in Newbury, O2 plc is in Slough. Maidenhead is the home of Hutchison 3G UK's headquarters and Tesla Motors' UK head office. Investment has gradually spread westwards since the 1980s. In the west, the interchange of the M4 and M5 motorways north of Bristol had seen considerable growth of industries by the mid 1990s. Wales The major Welsh towns and cities along the M4 corridor are Bridgend, Cardiff, Llanelli, Neath, Newport, Port Talbot and Swansea. South Wales is an industrial heartland of the UK. The 1980s and 1990s saw the development of the Swansea Enterprise Park. The Celtic Manor Resort, adjacent to the M4 in Newport, has received significant investment and hosted the 2010 Ryder Cup. Newport has seen significant growth in the electronics industry since the late 1980s. The 1990s saw significant investment in Cardiff, such as in Cardiff Gate and the Cardiff Bay area. One site of note on the M4 corridor is Port Talbot Steelworks – the largest steel producer in the UK and one of the biggest in Europe. The opening of the Second Severn Crossing in 1996 resulted in the previous M4 and bridge, serving Chepstow, being renumbered the M48, although the area is still generally considered as falling within the M4 corridor. Since the start of the 21st century there has been evidence of more investment west of Cardiff, such as: Port Talbot Aberavon Beach Baglan Industrial Park Baglan Energy Park Amazon.co.uk fulfilment centre at Crymlyn Burrows Swansea Maritime Quarter SA1 development Swansea Vale Felindre Llanelli Dafen/Llanelli Gate Parc Hendre Parc Trostre and Parc Pemberton Llanelli Waterside, including North Dock and Delta Lakes Ffos Las racecourse Cross Hands Cross Hands Food Park Cross Hands Business Park See also Great Western Cities Silicon Fen Silicon Glen Silicon Gorge References External links The Guardian: Advantages of the M4 corridor as a business location Economy of England Economy of Wales High-technology business districts in the United Kingdom Information technology places M4 motorway Regions of England Regions of Wales Science and technology in Berkshire South East England
M4 corridor
Technology
711
20,754,349
https://en.wikipedia.org/wiki/MRS-1706
MRS-1706 is a selective inverse agonist of the adenosine A2B receptor. It inhibits the release of interleukins and has an anti-inflammatory effect. References Xanthines Phenol ethers Acetanilides Aromatic ketones Adenosine receptor antagonists
MRS-1706
Chemistry
64
347,027
https://en.wikipedia.org/wiki/Amorphous%20metal
An amorphous metal (also known as metallic glass or glassy metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. Most metals are crystalline in their solid state, which means they have a highly ordered arrangement of atoms. Amorphous metals are non-crystalline and have a glass-like structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity and can show metallic luster. Amorphous metals can be produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. Small batches of amorphous metals have been produced through a variety of quick-cooling methods, such as amorphous metal ribbons produced by sputtering molten metal onto a spinning metal disk (melt spinning). The rapid cooling (millions of degrees Celsius per second) is too fast for crystals to form, and the material is "locked" in a glassy state. Alloys with cooling rates low enough to allow formation of an amorphous structure in thick layers (over 1 millimeter) have been produced; these are known as bulk metallic glasses. Batches of amorphous steel with three times the strength of conventional steel alloys have been produced. New techniques such as 3D printing, also characterised by high cooling rates, are an active research topic.

History
The first reported metallic glass was Au75Si25, produced at Caltech by Klement, Willens, and Duwez in 1960. This and other early glass-forming alloys had to be cooled extremely rapidly (on the order of one megakelvin per second, 10^6 K/s) to avoid crystallization. An important consequence of this was that metallic glasses could be produced in only a few forms (typically ribbons, foils, or wires) in which one dimension was small, so that heat could be extracted quickly enough to achieve the required cooling rate. As a result, metallic glass specimens (with a few exceptions) were limited to thicknesses of less than one hundred microns. In 1969, an alloy of 77.5% palladium, 6% copper, and 16.5% silicon was found to have a critical cooling rate between 100 and 1000 K/s. In 1976, Liebermann and Graham developed a method of manufacturing thin ribbons of amorphous metal on a supercooled fast-spinning wheel. This was an alloy of iron, nickel, and boron. The material, known as Metglas, was commercialized in the early 1980s and became used for low-loss power distribution transformers (amorphous metal transformers). Metglas-2605 is composed of 80% iron and 20% boron; it has a Curie temperature of 373 °C and a room-temperature saturation magnetization of 1.56 tesla. In the early 1980s, glassy ingots a few millimetres in diameter were produced from an alloy of 55% palladium, 22.5% lead, and 22.5% antimony, by surface etching followed by heating-cooling cycles. Using boron oxide flux, the achievable thickness was increased to one centimeter. In 1982, a study on structural relaxation in amorphous metals indicated a relationship between the specific heat and temperature of (Fe0.5Ni0.5)83P17. As the material was heated, the two properties displayed a negative relationship starting at 375 K, due to the change in relaxed amorphous states. When the material was annealed for periods from 1 to 48 hours, the properties instead displayed a positive relationship starting at 475 K for all annealing periods, since the annealing-induced structure disappears at that temperature.
In this study, the amorphous alloys demonstrated a glass transition and a supercooled liquid region. Between 1988 and 1992, further studies identified more glass-forming alloys with a glass transition and a supercooled liquid region. From those studies, bulk glass alloys based on La, Mg, and Zr were made, and these alloys demonstrated plasticity even at ribbon thicknesses from 20 μm to 50 μm. The plasticity was a stark contrast to past amorphous metals, which became brittle at those thicknesses. In 1988, alloys of lanthanum, aluminium, and copper were revealed to be glass-forming. Al-based metallic glasses containing scandium exhibited a record tensile mechanical strength for aluminium-based alloys. Bulk amorphous alloys several millimeters thick were rare, although Pd-based amorphous alloys had been formed into rods by quenching, and spheres had been formed by repeated flux melting with B2O3 followed by quenching. New techniques found in 1990 produced alloys that form glasses at cooling rates as low as one kelvin per second. These cooling rates can be achieved by simple casting into metallic molds. These alloys can be cast into parts several centimeters thick while retaining an amorphous structure. The best glass-forming alloys were based on zirconium and palladium, but alloys based on iron, titanium, copper, magnesium, and other metals are also known. The process exploits a phenomenon called "confusion": such alloys contain many elements (often four or more), so that upon sufficiently quick cooling the constituent atoms cannot reach an equilibrium crystalline state before their mobility is lost. In this way, the random disordered state of the atoms is "locked in". In 1992, the commercial amorphous alloy Vitreloy 1 (41.2% Zr, 13.8% Ti, 12.5% Cu, 10% Ni, and 22.5% Be) was developed at Caltech as part of Department of Energy and NASA research into new aerospace materials. By 2000, research at Tohoku University and Caltech had yielded multicomponent alloys based on lanthanum, magnesium, zirconium, palladium, iron, copper, and titanium, with critical cooling rates between 1 K/s and 100 K/s, comparable to those of oxide glasses. In 2004, bulk amorphous steel was successfully produced by two groups: one at Oak Ridge National Laboratory, which refers to its product as "glassy steel", and another at the University of Virginia, which named its product "DARVA-Glass 101". The product is non-magnetic at room temperature and significantly stronger than conventional steel. In 2018, a team at the SLAC National Accelerator Laboratory, the National Institute of Standards and Technology (NIST), and Northwestern University reported the use of artificial intelligence to predict and evaluate samples of 20,000 different likely metallic glass alloys in a year.

Properties
Amorphous metal is usually an alloy rather than a pure metal. The alloys contain atoms of significantly different sizes, leading to low free volume (and therefore a viscosity up to orders of magnitude higher than in other metals and alloys) in the molten state. The viscosity prevents the atoms from moving enough to form an ordered lattice. The material displays low shrinkage during cooling and resistance to plastic deformation. The absence of grain boundaries, the weak spots of crystalline materials, leads to better wear resistance and less corrosion. Amorphous metals, while technically glasses, are much tougher and less brittle than oxide glasses and ceramics.
Amorphous metals are either non-ferromagnetic, if they are composed of Ln, Mg, Zr, Ti, Pd, Ca, Cu, Pt and Au, or ferromagnetic, if they are composed of Fe, Co, and Ni. Thermal conductivity is lower than in crystalline metals. As the formation of an amorphous structure relies on fast cooling, this limits the achievable thickness of amorphous structures. To form an amorphous structure despite slower cooling, the alloy has to be made of three or more components, leading to complex crystal units with higher potential energy and a lower likelihood of formation. The atomic radii of the components have to be significantly different (by over 12%), to achieve high packing density and low free volume. The combination of components should have a negative heat of mixing, inhibiting crystal nucleation and prolonging the time the molten metal stays in the supercooled state. As temperature changes, the electrical resistivity of amorphous metals behaves very differently from that of regular metals. While resistivity in crystalline metals generally increases with temperature, following Matthiessen's rule, resistivity in many amorphous metals decreases with increasing temperature. This effect can be observed in amorphous metals of high resistivity, between 150 and 300 microohm-centimeters. In these metals, the scattering events causing the resistivity are not statistically independent, which explains the breakdown of Matthiessen's rule. The fact that the thermal change of resistivity in amorphous metals can be negative over a large range of temperatures, and correlated with their absolute resistivity values, was identified by Mooij in 1973 and is now known as Mooij's rule. Alloys of boron, silicon, phosphorus, and other glass formers with magnetic metals (iron, cobalt, nickel) have high magnetic susceptibility, with low coercivity and high electrical resistance. Usually the electrical conductivity of a metallic glass is of the same low order of magnitude as that of a molten metal just above the melting point. The high resistance leads to low losses by eddy currents when subjected to alternating magnetic fields, a property useful, for example, in transformer magnetic cores. Their low coercivity also contributes to low loss. Buckel and Hilsch discovered the superconductivity of amorphous metal thin films experimentally in the early 1950s. For certain metallic elements the superconducting critical temperature Tc can be higher in the amorphous state (e.g. upon alloying) than in the crystalline state, and in several cases Tc increases upon increasing structural disorder. This behavior can be explained by the effect of structural disorder on electron-phonon coupling. Amorphous metals have higher tensile yield strengths and higher elastic strain limits than polycrystalline metal alloys, but their ductilities and fatigue strengths are lower. Amorphous alloys have a variety of potentially useful properties. In particular, they tend to be stronger than crystalline alloys of similar chemical composition, and they can sustain larger reversible ("elastic") deformations than crystalline alloys. Amorphous metals derive their strength directly from their non-crystalline structure, which does not have the defects (such as dislocations) that limit the strength of crystalline alloys. Vitreloy is an amorphous metal with a tensile strength almost double that of high-grade titanium. However, metallic glasses at room temperature are not ductile and tend to fail suddenly when loaded in tension, which limits their applicability in reliability-critical applications.
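The empirical glass-forming conditions described earlier in this section (three or more components, an atomic-size mismatch above 12%, and a negative heat of mixing) can be phrased as a toy screening function. The sketch below is only an illustration of those heuristics: the radii are rounded metallic radii, the pairwise mixing enthalpies are rough literature-style values chosen for the example, and real glass-forming-ability assessments rely on measured thermodynamic data.

```python
# Illustrative metallic radii (angstroms) and a toy pairwise mixing-enthalpy
# table (kJ/mol, negative = favorable). These numbers are placeholders for
# demonstration, not authoritative data.
RADIUS = {"Zr": 1.60, "Cu": 1.28, "Ni": 1.24, "Al": 1.43, "Ti": 1.47}
H_MIX = {("Zr", "Cu"): -23, ("Zr", "Ni"): -49, ("Zr", "Al"): -44,
         ("Cu", "Ni"): 4, ("Cu", "Al"): -1, ("Ni", "Al"): -22,
         ("Ti", "Cu"): -9, ("Ti", "Ni"): -35, ("Ti", "Al"): -30}

def glass_forming_check(elements):
    """Apply the three empirical conditions from the text:
    >= 3 components, > 12% atomic-size mismatch, and a negative
    (non-positive) heat of mixing for every element pair."""
    enough_components = len(elements) >= 3
    radii = [RADIUS[e] for e in elements]
    size_mismatch = (max(radii) - min(radii)) / max(radii) > 0.12
    pairs = [(a, b) for i, a in enumerate(elements) for b in elements[i + 1:]]
    negative_mixing = all(
        H_MIX.get((a, b), H_MIX.get((b, a), 0)) <= 0 for a, b in pairs)
    return enough_components and size_mismatch and negative_mixing

print(glass_forming_check(["Zr", "Cu", "Al"]))  # True for this Zr-based set
```

A screen like this only flags candidates; whether a given composition actually vitrifies at a practical cooling rate must be established experimentally.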
Perhaps the most useful property of bulk amorphous alloys is that they are true glasses, which means that they soften and flow upon heating. This allows for easy processing, such as by injection molding, in much the same way as polymers. As a result, amorphous alloys have been commercialized for use in sports equipment, medical devices, and as cases for electronic equipment. Thin films of amorphous metals can be deposited as protective coatings via high-velocity oxygen fuel. Applications Commercial The most important application exploits the magnetic properties of some ferromagnetic metallic glasses. The low magnetization loss is used in high-efficiency transformers at line frequency and in some higher-frequency transformers. Amorphous steel is very brittle, which makes it difficult to punch into motor laminations. Electronic article surveillance (such as passive ID tags) often uses metallic glasses because of these magnetic properties. Ti-based metallic glass, when made into thin pipes, has a high tensile strength of , an elastic elongation of 2%, and high corrosion resistance. A Ti–Zr–Cu–Ni–Sn metallic glass was used to improve the sensitivity of a Coriolis flow meter. This flow meter is about 28–53 times more sensitive than conventional meters and can be applied in the fossil-fuel, chemical, environmental, semiconductor and medical science industries. Zr–Al–Ni–Cu based metallic glass can be shaped into pressure sensors for the automobile and other industries. Such sensors are smaller, more sensitive, and possess greater pressure endurance than conventional stainless steel. Additionally, this alloy was used to make the world's smallest geared motor with diameter at the time. Potential Amorphous metals exhibit unique softening behavior above their glass transition, and this softening has been increasingly explored for thermoplastic forming of metallic glasses. The low softening temperature supports simple methods for making composites of nanoparticles (e.g. carbon nanotubes) and bulk metallic glasses. It has been shown that metallic glasses can be patterned on extremely small length scales, down to 10 nm. This may solve the problems of nanoimprint lithography, where expensive nano-molds made of silicon break easily. Nano-molds made from metallic glasses are easy to fabricate and more durable than silicon molds. The superior electronic, thermal and mechanical properties of bulk metallic glasses compared to polymers make them a good option for developing nanocomposites for electronic applications such as field electron emission devices. Ti40Cu36Pd14Zr10 is believed to be noncarcinogenic, is about three times stronger than titanium, and its elastic modulus nearly matches that of bone. It has a high wear resistance and does not produce abrasion powder. The alloy does not undergo shrinkage on solidification. A surface structure that is biologically attachable can be generated by surface modification using laser pulses, allowing better joining with bone. Laser powder bed fusion (LPBF) has been used to process Zr-based bulk metallic glass (BMG) for biomedical applications. Zr-based BMGs show good biocompatibility, supporting osteoblastic cell growth similar to the Ti-6Al-4V alloy. The favorable response, coupled with the ability to tailor surface properties through selective laser melting (SLM), highlights the promise of SLM-processed Zr-based BMGs like AMLOY-ZR01 for orthopaedic implant applications.
However, their degradation under inflammatory conditions requires further investigation. Mg60Zn35Ca5 is under investigation as a biomaterial for implantation into bones as screws, pins, or plates, to fix fractures. Unlike traditional steel or titanium, this material dissolves in organisms at a rate of roughly 1 millimeter per month and is replaced with bone tissue. This rate can be adjusted by varying the zinc content. Bulk metallic glasses seem to exhibit superior properties. SAM2X5-630 is claimed to have the highest recorded elastic limit for any steel alloy, essentially the highest threshold at which a material can withstand an impact without deforming permanently. The alloy can withstand pressure and stress of up to without permanent deformation. This is the highest impact resistance of any bulk metallic glass ever recorded. This makes it an attractive option as an armour material and for other applications that require high stress tolerance. Additive manufacturing One challenge when synthesising a metallic glass is that the techniques often only produce very small samples, due to the need for high cooling rates. 3D-printing methods have been suggested as a way to create larger bulk samples. Selective laser melting (SLM) is one example of an additive manufacturing method that has been used to make iron-based metallic glasses. Laser foil printing (LFP) is another method, in which foils of the amorphous metals are stacked and welded together, layer by layer. Modeling and theory Bulk metallic glasses have been modeled using atomic-scale simulations (within the density functional theory framework) in a similar manner to high-entropy alloys. This has allowed predictions to be made about their behavior, stability, and many other properties. As such, new bulk metallic glass systems can be tested and tailored for a specific purpose (e.g. bone replacement or aero-engine components) without as much empirical searching of the phase space or experimental trial and error. Ab initio molecular dynamics (MD) simulation confirmed that scanning tunneling microscopy of the atomic surface structure of a Ni–Nb metallic glass acts as a kind of spectroscopy: at negative applied bias it visualizes only one sort of atoms (Ni), owing to the structure of the electronic density of states calculated using ab initio MD simulation. One common way to try to understand the electronic properties of amorphous metals is by comparing them to liquid metals, which are similarly disordered, and for which established theoretical frameworks exist. For simple amorphous metals, good estimations can be reached by semi-classical modelling of the movement of individual electrons using the Boltzmann equation, approximating the scattering potential as the superposition of the electronic potentials of each nucleus in the surrounding metal. To simplify the calculations, the electronic potentials of the atomic nuclei can be truncated to give a muffin-tin pseudopotential. In this theory, there are two main effects that govern the change of resistivity with increasing temperature. Both are based on the thermally induced vibrations of the atomic nuclei of the metal. One is that the atomic structure becomes increasingly smeared out as the exact positions of the atomic nuclei become less and less well defined. The other is the introduction of phonons. While the smearing out generally decreases the resistivity of the metal, the introduction of phonons generally adds scattering sites and therefore increases resistivity.
Together, these two effects can explain the anomalous decrease of resistivity in amorphous metals, as the first part outweighs the second. In contrast to regular crystalline metals, the phonon contribution in an amorphous metal does not get frozen out at low temperatures. Due to the lack of a defined crystal structure, there are always some phonon wavelengths that can be excited. While this semi-classical approach holds well for many amorphous metals, it generally breaks down under more extreme conditions. At very low temperatures, the quantum nature of the electrons leads to long-range interference effects of the electrons with each other, in what are called "weak localization" effects. In very strongly disordered metals, impurities in the atomic structure can induce bound electronic states, in what is called "Anderson localization", effectively binding the electrons and inhibiting their movement. See also Bioabsorbable metallic glass Glass-ceramic-to-metal seals Liquidmetal Materials science Structure of liquids and glasses Amorphous brazing foil References Further reading External links Liquidmetal Design Guide "Metallic glass: a drop of the hard stuff" at New Scientist Glass-Like Metal Performs Better Under Stress Physical Review Focus, June 9, 2005 "Overview of metallic glasses" New Computational Method Developed By Carnegie Mellon University Physicist Could Speed Design and Testing of Metallic Glass (2004) (the alloy database developed by Marek Mihalkovic, Michael Widom, and others) New tungsten-tantalum-copper amorphous alloy developed at the Korea Advanced Institute of Science and Technology Digital Chosunilbo (English Edition) : Daily News in English About Korea Amorphous Metals in Electric-Power Distribution Applications Amorphous and Nanocrystalline Soft Magnets Metallic glasses and their composites, Materials Research Forum LLC, Millersville, PA, USA, (2018), p. 336 Alloys Metallurgy Glass
Amorphous metal
Physics,Chemistry,Materials_science,Engineering
3,975
16,112,279
https://en.wikipedia.org/wiki/TVLM%20513-46546
TVLM 513-46546 is an M9 ultracool dwarf at the red dwarf/brown dwarf mass boundary in the constellation Boötes. It exhibits flare star activity, which is most pronounced at radio wavelengths. The star has a mass approximately 80 times the mass of Jupiter (or 8 percent of the Sun's mass). The radio emission is broadband and highly circularly polarized, similar to planetary auroral radio emissions. The radio emission is periodic, with bursts emitted every 7054 s, to nearly one-hundredth-of-a-second precision. Subtle variations in the radio pulses could suggest that the ultracool dwarf rotates faster at the equator than at the poles (differential rotation), in a manner similar to the Sun. Planetary system On 4 August 2020, astronomers announced the discovery of a Saturn-like planet, TVLM 513b, around this star, with a period of days, a mass of between 0.35 and 0.42 , a circular orbit (e≃0), a semi-major axis of between 0.28 and 0.31 AU, and an inclination angle of 71–88°. The companion was detected by the radio astrometry method. References Boötes M-type main-sequence stars Planetary systems with one confirmed planet J15010818+2250020
TVLM 513-46546
Astronomy
262
2,129,782
https://en.wikipedia.org/wiki/Degree%20distribution
In the study of graphs and networks, the degree of a node in a network is the number of connections it has to other nodes, and the degree distribution is the probability distribution of these degrees over the whole network. Definition The degree of a node in a network (sometimes referred to incorrectly as the connectivity) is the number of connections or edges the node has to other nodes. If a network is directed, meaning that edges point in one direction from one node to another node, then nodes have two different degrees, the in-degree, which is the number of incoming edges, and the out-degree, which is the number of outgoing edges. The degree distribution P(k) of a network is then defined to be the fraction of nodes in the network with degree k. Thus if there are n nodes in total in a network and $n_k$ of them have degree k, we have $P(k) = n_k/n$. The same information is also sometimes presented in the form of a cumulative degree distribution, the fraction of nodes with degree smaller than k, or even the complementary cumulative degree distribution, the fraction of nodes with degree greater than or equal to k, which equals $1 - C$ if one takes C to be the cumulative degree distribution. Observed degree distributions The degree distribution is very important in studying both real networks, such as the Internet and social networks, and theoretical networks. The simplest network model, for example, the (Erdős–Rényi model) random graph, in which each of n nodes is independently connected (or not) with probability p (or 1 − p), has a binomial distribution of degrees k: $P(k) = \binom{n-1}{k} p^k (1-p)^{n-1-k}$ (or Poisson in the limit of large n, if the average degree $\langle k \rangle$ is held fixed). Most networks in the real world, however, have degree distributions very different from this. Most are highly right-skewed, meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree. Some networks, notably the Internet, the World Wide Web, and some social networks were argued to have degree distributions that approximately follow a power law: $P(k) \sim k^{-\gamma}$, where γ is a constant. Such networks are called scale-free networks and have attracted particular attention for their structural and dynamical properties. Excess degree distribution The excess degree distribution is the probability distribution, for a node reached by following an edge, of the number of other edges attached to that node. In other words, it is the distribution of outgoing links from a node reached by following a link. Suppose a network has a degree distribution $P(k)$. By selecting one node (randomly or not) and going to one of its neighbors (assuming the node has at least one neighbor), the probability that this neighbor has $k$ neighbors is not given by $P(k)$. The reason is that, whenever some node is selected in a heterogeneous network, it is more probable to reach the hubs by following one of the existing neighbors of that node. The true probability that such a node has degree $k$ is $\frac{k P(k)}{\langle k \rangle}$; the number of its edges other than the one followed is the excess degree of that node. In the configuration model, in which correlations between the nodes are ignored and every node is assumed to be connected to any other node in the network with the same probability, the excess degree distribution can be found as: $q(k) = \frac{(k+1) P(k+1)}{\langle k \rangle},$ where $\langle k \rangle$ is the mean degree (average degree) of the model. It follows that the average degree of a neighbor of any node is greater than the average degree of that node. In social networks, this means that your friends, on average, have more friends than you. This is known as the friendship paradox.
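As a concrete illustration of the definitions above, the following Python sketch computes the empirical degree distribution P(k), the configuration-model excess degree distribution q(k) = (k+1)P(k+1)/⟨k⟩, and the mean degree of a random neighbour for a small undirected graph; the adjacency list itself is made up for the example.

```python
from collections import Counter

# A small made-up undirected graph as an adjacency list.
adj = {
    "a": ["b", "c", "d"],
    "b": ["a"],
    "c": ["a"],
    "d": ["a", "e"],
    "e": ["d"],
}

n = len(adj)                                   # 5 nodes in total
degrees = {v: len(nbrs) for v, nbrs in adj.items()}
counts = Counter(degrees.values())             # n_k for each degree k

P = {k: nk / n for k, nk in counts.items()}    # P(k) = n_k / n
mean_k = sum(k * p for k, p in P.items())      # <k> = 1.6

# Excess degree distribution q(k) = (k + 1) P(k + 1) / <k>.
q = {k: (k + 1) * P.get(k + 1, 0) / mean_k for k in range(max(counts))}

# Friendship paradox: the mean degree of a neighbour, <k^2>/<k>,
# is at least the mean degree <k>.
mean_neighbour_k = sum(k * k * p for k, p in P.items()) / mean_k
print(P, q)
print(mean_k, mean_neighbour_k)                # 1.6 < 2.0
```

The last line exhibits the friendship paradox numerically: the mean neighbour degree ⟨k²⟩/⟨k⟩ = 2.0 exceeds the mean degree ⟨k⟩ = 1.6.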
It can be shown that a network can have a giant component if its average excess degree is larger than one: $\sum_k k\, q(k) = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} > 1 \iff \langle k^2 \rangle - 2\langle k \rangle > 0.$ Bear in mind that the last two equations are just for the configuration model; to derive the excess degree distribution of a real-world network, we should also take degree correlations into account. Generating functions method Generating functions can be used to calculate different properties of random networks. Given the degree distribution and the excess degree distribution of some network, $P(k)$ and $q(k)$ respectively, it is possible to write two power series in the following forms: $G_0(x) = \textstyle\sum_k P(k)\, x^k$ and $G_1(x) = \textstyle\sum_k q(k)\, x^k.$ $G_1(x)$ can also be obtained from derivatives of $G_0(x)$: $G_1(x) = \frac{G_0'(x)}{G_0'(1)}.$ If we know the generating function for a probability distribution, then we can recover the values of the distribution by differentiating: $P(k) = \frac{1}{k!} \frac{\mathrm{d}^k G_0}{\mathrm{d}x^k}\bigg|_{x=0}.$ Some properties, e.g. the moments, can be easily calculated from $G_0(x)$ and its derivatives: $\langle k \rangle = G_0'(1)$ and $\langle k^2 \rangle = G_0''(1) + G_0'(1).$ And in general: $\langle k^n \rangle = \left[\left(x \frac{\mathrm{d}}{\mathrm{d}x}\right)^{n} G_0(x)\right]_{x=1}.$ For Poisson-distributed random networks, such as the ER graph, $G_0(x) = G_1(x) = e^{\langle k \rangle (x-1)}$, which is the reason why the theory of random networks of this type is especially simple. The probability distributions for the 1st and 2nd-nearest neighbors are generated by the functions $G_0(x)$ and $G_0(G_1(x))$. By extension, the distribution of $m$-th neighbors is generated by $G_0\bigl(G_1(\dots G_1(x)\dots)\bigr)$, with $m-1$ iterations of the function $G_1$ acting on itself. The average number of 1st neighbors, $c_1$, is $\langle k \rangle = G_0'(1)$, and the average number of 2nd neighbors is $c_2 = G_0''(1)$. Degree distribution for directed networks In a directed network, each node has some in-degree $j$ and some out-degree $k$, which are the number of links that run into and out of that node, respectively. If $P(j, k)$ is the probability that a randomly chosen node has in-degree $j$ and out-degree $k$, then the generating function assigned to this joint probability distribution can be written with two variables $x$ and $y$ as: $\mathcal{G}(x, y) = \sum_{j,k} P(j, k)\, x^j y^k.$ Since every link in a directed network must leave some node and enter another, the net average number of links entering a node is zero. Therefore $\langle j - k \rangle = 0$, which implies that the generating function must satisfy: $\frac{\partial \mathcal{G}}{\partial x}\bigg|_{x,y=1} = \frac{\partial \mathcal{G}}{\partial y}\bigg|_{x,y=1} = c,$ where $c$ is the mean degree (both in and out) of the nodes in the network. Using the function $\mathcal{G}(x, y)$, we can again find the generating functions for the in/out-degree distributions and in/out-excess degree distributions, as before. Generating functions can be defined for the number of arriving links at a randomly chosen node and for the number of arriving links at a node reached by following a randomly chosen link, and we can likewise define generating functions for the number of links leaving such a node. Here, the average number of 1st neighbors, $c$, or as previously introduced, $c_1$, is $\frac{\partial \mathcal{G}}{\partial x}\big|_{x,y=1}$, and the average number of 2nd neighbors reachable from a randomly chosen node is given by $c_2 = \frac{\partial^2 \mathcal{G}}{\partial x\, \partial y}\Big|_{x,y=1}$. These are also the numbers of 1st and 2nd neighbors from which a random node can be reached, since these equations are manifestly symmetric in $x$ and $y$. Degree distribution for signed networks In a signed network, each node has a positive degree $k_+$ and a negative degree $k_-$, which are the number of positive links and the number of negative links connected to that node, respectively. So $P(k_-)$ and $P(k_+)$ denote the negative degree distribution and the positive degree distribution of the signed network. See also Graph theory Complex network Scale-free network Random graph Structural cut-off References Graph theory Graph invariants Network theory
Degree distribution
Mathematics
1,338
4,236,111
https://en.wikipedia.org/wiki/Radia%20Perlman
Radia Joy Perlman (born December 18, 1951) is an American computer programmer and network engineer. She is a major figure in assembling the networks and technology that enable what we now know as the internet. She is most famous for her invention of the Spanning Tree Protocol (STP), which is fundamental to the operation of network bridges, while working for Digital Equipment Corporation, thus earning her the nickname "Mother of the Internet". Her innovations have made a huge impact on how networks self-organize and move data. She also made large contributions to many other areas of network design and standardization: for example, enabling today's link-state routing protocols to be more robust, scalable, and easy to manage. Perlman was elected a member of the National Academy of Engineering in 2019 for contributions to Internet routing and bridging protocols. She holds over 100 issued patents. She was elected to the Internet Hall of Fame in 2014, and to the National Inventors Hall of Fame in 2016. She received lifetime achievement awards from USENIX in 2006 and from the Association for Computing Machinery's SIGCOMM in 2010. More recently she invented the TRILL protocol to correct some of the shortcomings of spanning trees, allowing Ethernet to make optimal use of bandwidth. As of 2022, she was a Fellow at Dell Technologies. Early life Perlman was born in 1951 in Portsmouth, Virginia. She grew up in Loch Arbour, New Jersey. She is Jewish. Both of her parents worked as engineers for the US government. Her father worked on radar and her mother was a mathematician by training who worked as a computer programmer. During her school years Perlman found math and science to be "effortless and fascinating", but had no problem achieving top grades in other subjects as well. She enjoyed playing the piano and French horn. While her mother helped her with her math homework, they mainly talked about literature and music. But Perlman did not feel that she fit the stereotype of an "engineer", as she did not take computers apart. Despite being the best science and math student in her school, it was only when Perlman took a programming class in high school that she started to consider a career that involved computers. She was the only woman in the class and later reflected "I was not a hands-on type person. It never occurred to me to take anything apart. I assumed I'd either get electrocuted, or I'd break something". She graduated from Ocean Township High School in 1969. Education As an undergraduate at MIT, Perlman learned programming for a physics class. She was given her first paid job in 1971 as a part-time programmer for the LOGO Lab at the (then) MIT Artificial Intelligence Laboratory, programming system software such as debuggers. Working under the supervision of Seymour Papert, she developed a child-friendly version of the educational robotics language LOGO, called TORTIS ("Toddler's Own Recursive Turtle Interpreter System"). During research performed in 1974–76, young children, the youngest aged 3½ years, programmed a LOGO educational robot called a Turtle. Perlman has been described as a pioneer of teaching young children computer programming. Afterwards, she was inspired to make a new programming language, similar to Logo, that would teach much younger children, using special "keyboards" and input devices. This project was abandoned because "being the only woman around, I wanted to be taken seriously as a 'scientist' and was a little embarrassed that my project involved cute little kids".
The MIT Media Lab later tracked her down and told her that she had started a new field, tangible user interfaces, from the leftovers of her abandoned project. As a math graduate student at MIT, she needed to find an adviser for her thesis, and joined the MIT group at BBN Technologies. There she first got involved with designing network protocols. Perlman obtained a B.S. and M.S. in Mathematics and a Ph.D. in Computer Science from MIT in 1988. Her doctoral thesis on routing in environments where malicious network failures are present serves as the basis for much of the work that now exists in this area. When studying at MIT in the late 1960s, she was one among the 50 or so women students in a class of about 1,000 students. To begin with, MIT had only one women's dorm, limiting the number of women students who could study there. When the men's dorms at MIT became coed, Perlman moved out of the women's dorm into a mixed dorm, where she became the "resident female". She later said that she was so used to the gender imbalance that it became normal. Only when she saw other women students among a crowd of men did she notice that "it kind of looked weird". Career After graduation, she accepted a position with Bolt, Beranek, and Newman (BBN), a government contractor that developed software for network equipment. While working for BBN, Perlman made an impression on a manager for Digital Equipment Corp and was offered a job, joining the firm in 1980. During her time working at Digital, she quickly produced a solution that did exactly what the team wanted it to: the Spanning Tree Protocol. It allows a network to deliver data reliably by making it possible to design the network with redundant links. This setup provides automatic backup paths if an active link fails, and disables the links that are not part of the tree. This leaves a single, active path between any pair of network nodes. She is most famous for STP, which is fundamental to the operation of network bridges in many smaller networks. Perlman is the author of a textbook on networking called "Interconnections: Bridges, Routers, Switches, and Internetworking Protocols" and coauthor of another on network security called "Network Security: Private Communication in a Public World", now a popular college textbook. Her contributions to network security include trust models for Public Key Infrastructure, data expiration, and distributed algorithms that remain resilient despite malicious participants. She left Digital in 1993 and joined Novell. Then, in 1997, she left Novell and joined Sun Microsystems. Over the course of her career she has earned over 200 patents, 40 of them while working for Sun Microsystems, where in 2007 she held the title of Distinguished Engineer. She has taught courses at the University of Washington, Harvard University, MIT, and Texas A&M, and has been the keynote speaker at events all over the world. Perlman is the recipient of awards such as Lifetime Achievement awards from USENIX and the Association for Computing Machinery's Special Interest Group on Data Communication (SIGCOMM). Spanning Tree Protocol Perlman invented the spanning tree algorithm and protocol. While working as a consulting engineer at Digital Equipment Corporation (DEC) in 1984, she was tasked with developing a straightforward protocol that enabled network bridges to locate loops in a local area network (LAN). It was required that the protocol use a constant amount of memory when implemented on the network devices, regardless of how large the network was.
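The outcome of the protocol, described in more detail below, can be conveyed by a small centralized sketch: the bridge with the lowest ID is elected root, every other bridge keeps only its shortest path toward the root, and the remaining links are disabled. The following is a toy illustration in Python of the result STP converges to, not the distributed 802.1d protocol itself; the integer bridge IDs and the breadth-first search are simplifications standing in for the real BPDU exchange.

```python
from collections import deque

def spanning_tree(bridges, links):
    """Toy model of STP's result: root election plus shortest paths.

    The real protocol computes this in a distributed way by exchanging
    BPDU messages; here a breadth-first search from the elected root
    stands in for that exchange.
    """
    root = min(bridges)                # lowest bridge ID wins the election
    parent, seen = {}, {root}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for a, b in links:
            if u in (a, b):
                v = b if u == a else a
                if v not in seen:
                    seen.add(v)
                    parent[v] = u      # keep this link toward the root
                    queue.append(v)
    return {(min(v, p), max(v, p)) for v, p in parent.items()}

# A looped topology: three bridges connected in a triangle.
print(spanning_tree([1, 2, 3], [(1, 2), (1, 3), (2, 3)]))
```

In the example, the redundant link (2, 3) is pruned, leaving a loop-free tree rooted at bridge 1 with the links (1, 2) and (1, 3) active.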
Building and expanding bridged networks was difficult because loops, where more than one path leads to the same destination, could result in the collapse of the network. Redundant paths in the network meant that a bridge could forward a frame in multiple directions. Therefore, loops could cause Ethernet frames to fail to reach their destination, flooding the network. Perlman utilized the fact that bridges had unique 48-bit MAC addresses and devised a network protocol so that bridges within the LAN communicated with one another. The algorithm, implemented on all bridges in the network, allowed the bridges to designate one root bridge in the network. Each bridge then mapped the network and determined the shortest path to the root bridge, deactivating other redundant paths. Despite Perlman's concerns that it took the spanning tree protocol about a minute to react when changes in the network topology occurred, during which time a loop could bring down the network, it was standardized as 802.1d by the Institute of Electrical and Electronics Engineers (IEEE). Perlman said that the benefits of the protocol amount to the fact that "you don't have to worry about topology" when changing the way a LAN is connected. Perlman has, however, criticized changes which were made in the course of the standardization of the protocol. Perlman published a poem on STP, called 'Algorhyme': Other network protocols Perlman was the principal designer of the DECnet IV and V protocols, and of IS-IS, the OSI equivalent of OSPF. She also made major contributions to the Connectionless Network Protocol (CLNP). Perlman has collaborated with Yakov Rekhter on developing network routing standards, such as the OSI Inter-Domain Routing Protocol (IDRP), the OSI equivalent of BGP. At DEC she also oversaw the transition from distance-vector to link-state routing protocols. Link-state routing protocols had the advantage that they adapted to changes in the network topology faster, and DEC's link-state routing protocol was second only to the link-state routing protocol of the Advanced Research Projects Agency Network (ARPANET). While working on the DECnet project Perlman also helped to improve the intermediate-system to intermediate-system routing protocol, known as IS-IS, so that it could route the Internet Protocol (IP), AppleTalk and the Internetwork Packet Exchange (IPX) protocol. The Open Shortest Path First (OSPF) protocol relied in part on Perlman's research on fault-tolerant broadcasting of routing information. Perlman subsequently worked as a network engineer for Sun Microsystems, now Oracle. She specialized in network and security protocols and, while working for Oracle, obtained more than 50 patents. When standardizing her work on TRILL, a combined bridging and routing protocol that proposes to supersede STP, she included version 2 of the earlier "Algorhyme": Awards Fellow of the Association for Computing Machinery (class of 2016) National Inventors Hall of Fame induction (2016) Internet Hall of Fame induction (2014) SIGCOMM Award (2010) USENIX Lifetime Achievement Award (2006) Recipient of the first Anita Borg Institute Women of Vision Award for Innovation in 2005 Silicon Valley Intellectual Property Law Association Inventor of the Year (2003) Honorary Doctorate, Royal Institute of Technology (June 28, 2000) Twice named as one of the 20 most influential people in the industry by Data Communications magazine: in the 20th anniversary issue (January 15, 1992) and the 25th anniversary issue (January 15, 1997).
Perlman is the only person to be named in both issues. IEEE Fellow in 2008 for contributions to network routing and security protocols Fellow of the Association for Computing Machinery, class of 2016 Bibliography References External links Inventor of the Week archive at MIT: Spanning Tree Protocol 1951 births Living people American computer scientists Internet pioneers American women inventors Women Internet pioneers Computer systems researchers Computer security academics Digital Equipment Corporation people Massachusetts Institute of Technology School of Science alumni American women computer scientists 2016 fellows of the Association for Computing Machinery People in information technology People from Loch Arbour, New Jersey People from Portsmouth, Virginia Scientists from Virginia Sun Microsystems people Network topology Jewish American scientists Jewish women scientists Ocean Township High School alumni 21st-century American Jews 21st-century American women
Radia Perlman
Mathematics,Technology
2,300
73,634
https://en.wikipedia.org/wiki/Glossary%20of%20mathematical%20symbols
A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics. The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other types of mathematical object. As the number of these types has increased, the Greek alphabet and some Hebrew letters have also come to be used. For more symbols, other typefaces are also used, mainly boldface, script typeface (the lower-case script face is rarely used because of the possible confusion with the standard face), German fraktur, and blackboard bold (the other letters are rarely used in this face, or their use is unconventional). It is commonplace to use alphabets, fonts and typefaces to group symbols by type. The use of specific Latin and Greek letters as symbols for denoting mathematical objects is not described in this article. For such uses, see Variable § Conventional variable names and List of mathematical constants. However, some symbols that are described here have the same shape as the letter from which they are derived, such as ∏ and ∑. These letters alone are not sufficient for the needs of mathematicians, and many other symbols are used. Some take their origin in punctuation marks and diacritics traditionally used in typography; others by deforming letter forms, as in the cases of ∈ and ∀. Others, such as + and =, were specially designed for mathematics. Layout of this article Normally, entries of a glossary are structured by topics and sorted alphabetically. This is not possible here, as there is no natural order on symbols, and many symbols are used in different parts of mathematics with different meanings, often completely unrelated. Therefore, some arbitrary choices had to be made, which are summarized below. The article is split into sections that are sorted by an increasing level of technicality. That is, the first sections contain the symbols that are encountered in most mathematical texts, and that are supposed to be known even by beginners. On the other hand, the last sections contain symbols that are specific to some area of mathematics and are ignored outside these areas. However, the long section on brackets has been placed near to the end, although most of its entries are elementary: this makes it easier to search for a symbol entry by scrolling. Most symbols have multiple meanings that are generally distinguished either by the area of mathematics where they are used or by their syntax, that is, by their position inside a formula and the nature of the other parts of the formula that are close to them. As readers may not be aware of the area of mathematics to which the symbol that they are looking for is related, the different meanings of a symbol are grouped in the section corresponding to their most common meaning. When the meaning depends on the syntax, a symbol may have different entries depending on the syntax.
For summarizing the syntax in the entry name, the symbol ◻ is used for representing the neighboring parts of a formula that contains the symbol. See § Brackets for examples of use. Most symbols have two printed versions. They can be displayed as Unicode characters, or in LaTeX format. With the Unicode version, using search engines and copy-pasting are easier. On the other hand, the LaTeX rendering is often much better (more aesthetic), and is generally considered a standard in mathematics. Therefore, in this article, the Unicode version of the symbols is used (when possible) for labelling their entry, and the LaTeX version is used in their description. So, for finding how to type a symbol in LaTeX, it suffices to look at the source of the article. For most symbols, the entry name is the corresponding Unicode symbol. So, for searching the entry of a symbol, it suffices to type or copy the Unicode symbol into the search textbox. Similarly, when possible, the entry name of a symbol is also an anchor, which allows linking easily from another Wikipedia article. When an entry name contains special characters such as [, ], and |, there is also an anchor, but one has to look at the article source to know it. Finally, when there is an article on the symbol itself (not its mathematical meaning), it is linked to in the entry name. Arithmetic operators Equality, equivalence and similarity Comparison Set theory Basic logic Several logical symbols are widely used in all mathematics, and are listed here. For symbols that are used only in mathematical logic, or are rarely used, see List of logic symbols. Blackboard bold The blackboard bold typeface is widely used for denoting the basic number systems. These systems are often also denoted by the corresponding uppercase bold letter. A clear advantage of blackboard bold is that these symbols cannot be confused with anything else. This allows using them in any area of mathematics, without having to recall their definition. For example, if one encounters $\mathbb{R}$ in combinatorics, one should immediately know that this denotes the real numbers, although combinatorics does not study the real numbers (but it uses them for many proofs). Calculus Linear and multilinear algebra Advanced group theory Infinite numbers Brackets Many types of bracket are used in mathematics. Their meanings depend not only on their shapes, but also on the nature and the arrangement of what is delimited by them, and sometimes what appears between or before them. For this reason, in the entry titles, the symbol ◻ is used as a placeholder for schematizing the syntax that underlies the meaning. Parentheses Square brackets Braces Other brackets Symbols that do not belong to formulas In this section, the symbols that are listed are used as some sorts of punctuation marks in mathematical reasoning, or as abbreviations of natural language phrases. They are generally not used inside a formula. Some were used in classical logic for indicating the logical dependence between sentences written in plain language.
Except for the first two, they are normally not used in printed mathematical texts since, for readability, it is generally recommended to have at least one word between two formulas. However, they are still used on a blackboard for indicating relationships between formulas. Miscellaneous See also Related articles Language of mathematics Mathematical notation Notation in probability and statistics Physical constants Related lists List of logic symbols List of mathematical constants Table of mathematical symbols by introduction date Blackboard bold Greek letters used in mathematics, science, and engineering Latin letters used in mathematics, science, and engineering List of common physics notations List of letters used in mathematics, science, and engineering List of mathematical abbreviations List of typographical symbols and punctuation marks ISO 31-11 (Mathematical signs and symbols for use in physical sciences and technology) List of APL functions Unicode symbols Unicode block Mathematical Alphanumeric Symbols (Unicode block) List of Unicode characters Letterlike Symbols Mathematical operators and symbols in Unicode Miscellaneous Mathematical Symbols: A, B, Technical Arrow (symbol) and Miscellaneous Symbols and Arrows Number Forms Geometric Shapes References External links Jeff Miller: Earliest Uses of Various Mathematical Symbols Numericana: Scientific Symbols and Icons GIF and PNG Images for Math Symbols Mathematical Symbols in Unicode Detexify: LaTeX Handwriting Recognition Tool Some Unicode charts of mathematical operators and symbols: Index of Unicode symbols Range 2100–214F: Unicode Letterlike Symbols Range 2190–21FF: Unicode Arrows Range 2200–22FF: Unicode Mathematical Operators Range 27C0–27EF: Unicode Miscellaneous Mathematical Symbols–A Range 2980–29FF: Unicode Miscellaneous Mathematical Symbols–B Range 2A00–2AFF: Unicode Supplementary Mathematical Operators Some Unicode cross-references: Short list of commonly used LaTeX symbols and Comprehensive LaTeX Symbol List MathML Characters - sorts out Unicode, HTML and MathML/TeX names on one page Unicode values and MathML names Unicode values and Postscript names from the source code for Ghostscript Mathematics Symbols Symbols Symbols Wikipedia glossaries using description lists
Glossary of mathematical symbols
Mathematics
1,777
38,922,926
https://en.wikipedia.org/wiki/PowerCLI
PowerCLI is a PowerShell-based command-line interface for managing VMware vSphere. VMware describes PowerCLI as "a powerful command-line tool that lets you automate all aspects of vSphere management, including network, storage, VM, guest OS and more. PowerCLI is distributed as PowerShell modules, and includes over 500 PowerShell cmdlets for managing and automating vSphere and vCloud, along with documentation and samples." PowerCLI runs in PowerShell on Windows, macOS, and Ubuntu operating systems. References External links VMware PowerCLI page VMware PowerCLI Cmdlets by Product Command-line software
PowerCLI
Technology
151
16,262,012
https://en.wikipedia.org/wiki/HD%2037974
HD 37974 (or R 126) is a variable B[e] hypergiant in the Large Magellanic Cloud. It is surrounded by an unexpected dust disk. Properties R126, formally RMC (Radcliffe Observatory Magellanic Cloud) 126, is a massive luminous star with several unusual properties. It exhibits the B[e] phenomenon, where forbidden emission lines appear in the spectrum due to extended circumstellar material. Its spectrum also shows normal (permitted) emission lines formed in denser material closer to the star, indicative of a powerful stellar wind. The spectra include silicate and polycyclic aromatic hydrocarbon (PAH) features that suggest a dusty disc. The star itself is a hot supergiant thought to be seventy times more massive than the Sun and over a million times more luminous. It has evolved away from the main sequence (it was an O-class star while on the main sequence) and is so luminous and large that it is losing material through its stellar wind over a billion times faster than the Sun. At that rate it would shed more material than the Sun contains in about 25,000 years. It is expected to evolve into a Wolf–Rayet star in several hundred thousand years. Dusty disc The dust cloud around R126 is surprising because stars this massive were thought to be inhospitable to planet formation, their powerful stellar winds making it difficult for dust particles to condense. The nearby hypergiant HD 268835 shows similar features and is also likely to have a dusty disc, so R126 is not unique. The disc extends outwards for 60 times the size of Pluto's orbit around the Sun, and probably contains as much material as the entire Kuiper belt. It is unclear whether such a disc represents the first or last stages of the planet-forming process. Variability The brightness of R126 varies in an unpredictable way by around 0.6 magnitudes over timescales of tens to hundreds of days. The faster variations are characteristic of α Cygni variables, irregular pulsating supergiants. The slower variations are accompanied by changes in the colour of the star, which is redder when it is visually brighter, typical of the S Doradus phases of luminous blue variables. See also List of most massive stars References Stars in the Large Magellanic Cloud Dorado B-type hypergiants Large Magellanic Cloud R126 037974 Alpha Cygni variables B(e) stars J05362586-6922558 CPD-69 420
HD 37974
Astronomy
523
362,400
https://en.wikipedia.org/wiki/Separable%20polynomial
In mathematics, a polynomial P(X) over a given field K is separable if its roots are distinct in an algebraic closure of K, that is, the number of distinct roots is equal to the degree of the polynomial. This concept is closely related to that of a square-free polynomial. If K is a perfect field then the two concepts coincide. In general, P(X) is separable if and only if it is square-free over any field that contains K, which holds if and only if P(X) is coprime to its formal derivative D P(X). Older definition In an older definition, P(X) was considered separable if each of its irreducible factors in K[X] is separable in the modern definition. In this definition, separability depended on the field K; for example, any polynomial over a perfect field would have been considered separable. This definition, although it can be convenient for Galois theory, is no longer in use. Separable field extensions Separable polynomials are used to define separable extensions: A field extension $L/K$ is a separable extension if and only if for every $\alpha$ in $L$ which is algebraic over $K$, the minimal polynomial of $\alpha$ over $K$ is a separable polynomial. Inseparable extensions (that is, extensions which are not separable) may occur only in positive characteristic. The criterion above leads to the quick conclusion that if P is irreducible and not separable, then D P(X) = 0. Thus we must have P(X) = Q(X^p) for some polynomial Q over K, where the prime number p is the characteristic. With this clue we can construct an example: P(X) = X^p − T, with K the field of rational functions in the indeterminate T over the finite field with p elements. Here one can prove directly that P(X) is irreducible and not separable. This is actually a typical example of why inseparability matters; in geometric terms P represents the mapping on the projective line over the finite field, taking co-ordinates to their p-th power. Such mappings are fundamental to the algebraic geometry of finite fields. Put another way, there are coverings in that setting that cannot be 'seen' by Galois theory. (See Radical morphism for a higher-level discussion.) If L is the field extension K(T^(1/p)), in other words the splitting field of P, then L/K is an example of a purely inseparable field extension. It is of degree p, but has no automorphism fixing K other than the identity, because T^(1/p) is the unique root of P. This shows directly that Galois theory must here break down. A field such that there are no such extensions is called perfect. That finite fields are perfect follows a posteriori from their known structure. One can show that the tensor product of fields of L with itself over K for this example has nilpotent elements that are non-zero. This is another manifestation of inseparability: that is, the tensor product operation on fields need not produce a ring that is a product of fields (so, not a commutative semisimple ring). If P(x) is separable, and its roots form a group (a subgroup of the field K), then P(x) is an additive polynomial. Applications in Galois theory Separable polynomials occur frequently in Galois theory. For example, let P be an irreducible polynomial with integer coefficients and p be a prime number which does not divide the leading coefficient of P. Let Q be the polynomial over the finite field with p elements which is obtained by reducing the coefficients of P modulo p. Then, if Q is separable (which is the case for all but a finite number of primes p), the degrees of the irreducible factors of Q are the lengths of the cycles of some permutation of the Galois group of P.
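Both of these facts are easy to check computationally. The following sketch uses SymPy; the polynomial x⁴ + 1 is chosen purely for illustration (its Galois group is the Klein four-group, so whenever the reduction modulo p is separable, the factor degrees are at most 2).

```python
from sympy import symbols, gcd, diff, degree, factor_list

x = symbols('x')
P = x**4 + 1  # irreducible over Q; Galois group Z/2 x Z/2

# Separable iff coprime to the formal derivative D P(X).
print(gcd(P, diff(P, x)))  # 1, so P is separable over Q

# Degrees of the irreducible factors of P mod p give the cycle lengths
# of some permutation in the Galois group of P (for separable reductions).
for p in (3, 5, 7, 17):
    _, factors = factor_list(P, x, modulus=p)
    print(p, sorted(degree(f, x) for f, _ in factors))
```

Running it prints 1 for the gcd, confirming separability, and then factor degrees such as [2, 2] for p = 3, 5, 7 and [1, 1, 1, 1] for p = 17, consistent with a Galois group whose elements have cycle lengths of at most 2.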
Another example: P being as above, a resolvent R for a group G is a polynomial whose coefficients are polynomials in the coefficients of P, which provides some information on the Galois group of P. More precisely, if R is separable and has a rational root then the Galois group of P is contained in G. For example, if D is the discriminant of P then $X^2 - D$ is a resolvent for the alternating group. This resolvent is always separable (assuming the characteristic is not 2) if P is irreducible, but most resolvents are not always separable. See also Frobenius endomorphism References Field (mathematics) Polynomials
Separable polynomial
Mathematics
971
8,101,374
https://en.wikipedia.org/wiki/Ultraconnected%20space
In mathematics, a topological space is said to be ultraconnected if no two nonempty closed sets are disjoint. Equivalently, a space is ultraconnected if and only if the closures of two distinct points always have nontrivial intersection. Hence, no T1 space with more than one point is ultraconnected. Properties Every ultraconnected space is path-connected (but not necessarily arc connected). If $x_1$ and $x_2$ are two points of $X$ and $z$ is a point in the intersection $\operatorname{cl}\{x_1\} \cap \operatorname{cl}\{x_2\}$, the function $f : [0,1] \to X$ defined by $f(t) = x_1$ if $0 \le t < 1/2$, $f(1/2) = z$, and $f(t) = x_2$ if $1/2 < t \le 1$, is a continuous path between $x_1$ and $x_2$. Every ultraconnected space is normal, limit point compact, and pseudocompact. Examples The following are examples of ultraconnected topological spaces. A set with the indiscrete topology. The Sierpiński space. A set with the excluded point topology. The right order topology on the real line. See also Hyperconnected space Notes References Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. (Dover edition). Properties of topological spaces
Ultraconnected space
Mathematics
233
1,485,282
https://en.wikipedia.org/wiki/James%20Edward%20Allen%20Gibbs
James Edward Allen Gibbs (1829–1902) was a farmer, inventor, and businessman from Rockbridge County in the Shenandoah Valley in Virginia. On June 2, 1857, he was awarded a patent for the first twisted chain-stitch single-thread sewing machine using a rotating hook. In partnership with James Willcox, Gibbs became a principal in the Willcox & Gibbs Sewing Machine Company. The Willcox & Gibbs Sewing Machine Company, started in 1857 by James E. A. Gibbs and James Willcox, opened its London office in 1859 at 135 Regent Street. By around 1871 the European offices were at 150 Cheapside, London and later 20 Fore Street, London. The company hired John Emory Powers for marketing its product. Powers pioneered the use of many new marketing techniques, including full-page ads in the form of a story or play, free trial use of products, and installment purchasing plans. The marketing campaign created a demand for sewing machines in Great Britain that Willcox and Gibbs could not meet. The machine's circular design was so popular that it was produced well into the early 20th century, long after most machines were of the more conventional design. These machines employed the Gibbs rotary twisted chain-stitch mechanism, which was less prone to coming undone. Following his successful invention, he named his family's farm "Raphine." The name originated from an old Greek word "raphis" which means "to sew." The community of Raphine, Virginia, was named in his honor. References External links A typical Willcox and Gibbs machine from about 1930 1829 births 1902 deaths 19th-century American businesspeople 19th-century American engineers 19th-century American inventors Businesspeople from Virginia Engineers from Virginia People from Rockbridge County, Virginia Sewing machines Farmers from Virginia Inventors from Virginia
James Edward Allen Gibbs
Physics,Technology
370
44,854,826
https://en.wikipedia.org/wiki/Viroinformatics
Viroinformatics is an amalgamation of virology with bioinformatics, involving the application of information and communication technology in various aspects of viral research. Currently, there are more than 100 web servers and databases harboring knowledge regarding different viruses, as well as distinct applications concerning diversity analysis, viral recombination, RNAi studies, drug design, protein–protein interactions, structural analysis, etc. References External links Viral bioinformatics VBRC ViPR ViralZone Viral bioinformatics: introduction Viral genomics and bioinformatics Bioinformatics Computational biology Virology Computational fields of study
Viroinformatics
Technology,Engineering,Biology
125
4,686,654
https://en.wikipedia.org/wiki/Soil%20biology
Soil biology is the study of microbial and faunal activity and ecology in soil. Soil life, soil biota, soil fauna, or edaphon is a collective term that encompasses all organisms that spend a significant portion of their life cycle within a soil profile, or at the soil-litter interface. These organisms include earthworms, nematodes, protozoa, fungi, bacteria, different arthropods, as well as some reptiles (such as snakes), and species of burrowing mammals like gophers, moles and prairie dogs. Soil biology plays a vital role in determining many soil characteristics. The decomposition of organic matter by soil organisms has an immense influence on soil fertility, plant growth, soil structure, and carbon storage. As a relatively new science, much remains unknown about soil biology and its effect on soil ecosystems. Overview The soil is home to a large proportion of the world's biodiversity. The links between soil organisms and soil functions are complex. The interconnectedness and complexity of this soil 'food web' means any appraisal of soil function must necessarily take into account interactions with the living communities that exist within the soil. We know that soil organisms break down organic matter, making nutrients available for uptake by plants and other organisms. The nutrients stored in the bodies of soil organisms prevent nutrient loss by leaching. Microbial exudates act to maintain soil structure, and earthworms are important in bioturbation. However, we find that we do not understand critical aspects about how these populations function and interact. The discovery of glomalin in 1995 indicates that we lack the knowledge to correctly answer some of the most basic questions about the biogeochemical cycle in soils. There is much work ahead to gain a better understanding of the ecological role of soil biological components in the biosphere. In balanced soil, plants grow in an active and steady environment. The mineral content of the soil and its structure are important for their well-being, but it is the life in the earth that powers its cycles and provides its fertility. Without the activities of soil organisms, organic materials would accumulate and litter the soil surface, and there would be no food for plants. The soil biota includes: Megafauna: size range – 20 mm upward, e.g. moles, rabbits, and rodents. Macrofauna: size range – 2 to 20 mm, e.g. woodlice, earthworms, beetles, centipedes, slugs, snails, ants, and harvestmen. Mesofauna: size range – 100 micrometres to 2 mm, e.g. tardigrades, mites and springtails. Microfauna and Microflora: size range – 1 to 100 micrometres, e.g. yeasts, bacteria (commonly actinobacteria), fungi, protozoa, roundworms, and rotifers. Of these, bacteria and fungi play key roles in maintaining a healthy soil. They act as decomposers that break down organic materials to produce detritus and other breakdown products. Soil detritivores, like earthworms, ingest detritus and decompose it. Saprotrophs, well represented by fungi and bacteria, extract soluble nutrients from the detritus. Ants (macrofauna) help break down material in the same way, and they also move material around as they travel in large numbers. Burrowing rodents and wood-eating organisms likewise help the soil to become more absorbent.
Scope Soil biology involves work in the following areas: Modelling of biological processes and population dynamics Soil biology, physics and chemistry: influence of physicochemical parameters and surface properties on biological processes and population behavior Population biology and molecular ecology: methodological development and contribution to the study of microbial and faunal populations; diversity and population dynamics; genetic transfers, influence of environmental factors Community ecology and functioning processes: interactions between organisms and mineral or organic compounds; involvement of such interactions in soil pathogenicity; transformation of mineral and organic compounds, cycling of elements; soil structuration Complementary disciplinary approaches are necessarily utilized, involving molecular biology, genetics, ecophysiology, biogeography, ecology, soil processes, organic matter, nutrient dynamics and landscape ecology. Bacteria Bacteria are single-cell organisms and the most numerous denizens of the soil, with populations ranging from 100 million to 3 billion in a gram. They are capable of very rapid reproduction by binary fission (dividing into two) in favourable conditions. One bacterium is capable of producing 16 million more in just 24 hours. Most soil bacteria live close to plant roots and are often referred to as rhizobacteria. Bacteria live in soil water, including the film of moisture surrounding soil particles, and some are able to swim by means of flagella. The majority of the beneficial soil-dwelling bacteria need oxygen (and are thus termed aerobic bacteria), whilst those that do not require air are referred to as anaerobic, and tend to cause putrefaction of dead organic matter. Aerobic bacteria are most active in a soil that is moist (but not saturated, as this will deprive aerobic bacteria of the air that they require), has a neutral soil pH, and has plenty of food (carbohydrates and micronutrients from organic matter) available. Hostile conditions will not completely kill bacteria; rather, the bacteria will stop growing and enter a dormant stage, and those individuals with pro-adaptive mutations may compete better in the new conditions. Some Gram-positive bacteria produce spores in order to wait for more favourable circumstances, and Gram-negative bacteria enter a "nonculturable" state. Bacteria are colonized by persistent viral agents (bacteriophages) that determine gene word order in their bacterial hosts. From the organic gardener's point of view, the important roles that bacteria play are: Nitrification Nitrification is a vital part of the nitrogen cycle, wherein certain bacteria (which manufacture their own carbohydrate supply without using the process of photosynthesis) are able to transform nitrogen in the form of ammonium, which is produced by the decomposition of proteins, into nitrates, which are available to growing plants and are once again converted to proteins. Nitrogen fixation In another part of the cycle, the process of nitrogen fixation constantly puts additional nitrogen into biological circulation. This is carried out by free-living nitrogen-fixing bacteria in the soil or water such as Azotobacter, or by those that live in close symbiosis with leguminous plants, such as rhizobia. These bacteria form colonies in nodules they create on the roots of peas, beans, and related species. These are able to convert nitrogen from the atmosphere into nitrogen-containing organic substances.
Denitrification While nitrogen fixation converts nitrogen from the atmosphere into organic compounds, a series of processes called denitrification returns an approximately equal amount of nitrogen to the atmosphere. Denitrifying bacteria tend to be anaerobes, or facultative anaerobes (able to switch between oxygen-dependent and oxygen-independent types of metabolism), including Achromobacter and Pseudomonas. The process, which is driven by oxygen-free conditions, converts nitrates and nitrites in soil into nitrogen gas or into gaseous compounds such as nitrous oxide or nitric oxide. In excess, denitrification can lead to overall losses of available soil nitrogen and subsequent loss of soil fertility. However, fixed nitrogen may circulate many times between organisms and the soil before denitrification returns it to the atmosphere. Actinomycetota Actinomycetota are critical in the decomposition of organic matter and in humus formation. They specialize in breaking down cellulose and lignin, along with the tough chitin found on the exoskeletons of insects. Their presence is responsible for the sweet "earthy" aroma associated with a good healthy soil. They require plenty of air and a pH between 6.0 and 7.5, but are more tolerant of dry conditions than most other bacteria and fungi. Fungi A gram of garden soil can contain around one million fungi, such as yeasts and moulds. Fungi have no chlorophyll, and are not able to photosynthesise. They cannot use atmospheric carbon dioxide as a source of carbon, therefore they are chemo-heterotrophic, meaning that, like animals, they require a chemical source of energy rather than being able to use light as an energy source, as well as organic substrates to get carbon for growth and development. Many fungi are parasitic, often causing disease to their living host plant, although some have beneficial relationships with living plants, as illustrated below. In terms of soil and humus creation, the most important fungi tend to be saprotrophic; that is, they live on dead or decaying organic matter, thus breaking it down and converting it to forms that are available to the higher plants. A succession of fungi species will colonise the dead matter, beginning with those that use sugars and starches, which are succeeded by those that are able to break down cellulose and lignins. Fungi spread underground by sending long thin threads known as mycelium throughout the soil; these threads can be observed throughout many soils and compost heaps. From the mycelium the fungus is able to throw up its fruiting bodies, the visible part above the soil (e.g., mushrooms, toadstools, and puffballs), which may contain millions of spores. When the fruiting body bursts, these spores are dispersed through the air to settle in fresh environments, and are able to lie dormant for years until the right conditions for their activation arise or the right food is made available. Mycorrhizae Those fungi that are able to live symbiotically with living plants, creating a relationship that is beneficial to both, are known as mycorrhizae (from myco meaning fungal and rhiza meaning root). Plant root hairs are invaded by the mycelia of the mycorrhiza, which lives partly in the soil and partly in the root, and may either cover the length of the root hair as a sheath or be concentrated around its tip.
The mycorrhiza obtains the carbohydrates that it requires from the root, in return providing the plant with nutrients, including nitrogen and moisture. Later the plant roots will also absorb the mycelium into their own tissues. Beneficial mycorrhizal associations are to be found in many of our edible and flowering crops. Shewell Cooper suggests that these include at least 80% of the Brassica and Solanum families (including tomatoes and potatoes), as well as the majority of tree species, especially in forests and woodlands. Here the mycorrhizae create a fine underground mesh that extends well beyond the limits of the tree's roots, greatly increasing their feeding range and actually causing neighbouring trees to become physically interconnected. The benefits of mycorrhizal relations to their plant partners are not limited to nutrients, but can be essential for plant reproduction. In situations where little light is able to reach the forest floor, such as the North American pine forests, a young seedling cannot obtain sufficient light to photosynthesise for itself and will not grow properly in a sterile soil. But, if the ground is underlain by a mycorrhizal mat, then the developing seedling will throw down roots that can link with the fungal threads and through them obtain the nutrients it needs, often indirectly obtained from its parents or neighbouring trees. David Attenborough points out the three-way relationship between plants, fungi, and animals, a "harmonious trio" found in forest ecosystems, wherein the plant/fungi symbiosis is enhanced by animals such as the wild boar, deer, mice, or flying squirrel, which feed upon the fungi's fruiting bodies, including truffles, and cause their further spread (Private Life Of Plants, 1995). A greater understanding of the complex relationships that pervade natural systems is one of the major justifications organic gardeners give for refraining from the use of artificial chemicals and the damage these might cause. Recent research has shown that arbuscular mycorrhizal fungi produce glomalin, a protein that binds soil particles and stores both carbon and nitrogen. These glomalin-related soil proteins are an important part of soil organic matter. Invertebrates Soil fauna affect soil formation and soil organic matter dynamics on many spatiotemporal scales. Earthworms, ants and termites mix the soil as they burrow, significantly affecting soil formation. Earthworms ingest soil particles and organic residues, enhancing the availability of plant nutrients in the material that passes through and out of their bodies. By aerating and stirring the soil, and by increasing the stability of soil aggregates, these organisms help to assure the ready infiltration of water. These organisms also help regulate soil pH. Ants and termites are often referred to as "soil engineers" because, when they create their nests, they bring about several chemical and physical changes in the soil. Among these changes is an increased presence of essential elements such as carbon, nitrogen, and phosphorus, all needed for plant growth. They can also gather soil particles from differing depths and deposit them elsewhere, mixing the soil so that it is richer in nutrients and other elements. Vertebrates The soil is also important to many mammals. Gophers, moles, prairie dogs, and other burrowing animals rely on this soil for protection and food. 
These animals also give back to the soil: their burrowing allows rain, snow and meltwater to enter the soil rather than running off and causing erosion. Table of soil life This table includes some familiar types of soil life, consistent with the prevalent taxonomy used in the linked Wikipedia articles. See also Agricultural soil science Agroecology Biogeochemical cycle Compost Nitrification Nitrogen cycle Potting soil Soil food web Soil microbiology Soil science Notes References Bibliography Alexander, 1977, Introduction to Soil Microbiology, 2nd edition, John Wiley Alexander, 1994, Biodegradation and Bioremediation, Academic Press Bardgett, R.D., 2005, The Biology of Soil: A Community and Ecosystem Approach, Oxford University Press Burges, A., and Raw, F., 1967, Soil Biology, Academic Press Coleman, D.C. et al., 2004, Fundamentals of Soil Ecology, 2nd edition, Academic Press Coyne, 1999, Soil Microbiology: An Exploratory Approach, Delmar Doran, J.W., D.C. Coleman, D.F. Bezdicek and B.A. Stewart, 1994, Defining Soil Quality for a Sustainable Environment, Soil Science Society of America Special Publication Number 35, ASA, Madison, Wis. Paul, E.A. and F.E. Clark, 1996, Soil Microbiology and Biochemistry, 2nd edition, Academic Press Richards, 1987, The Microbiology of Terrestrial Ecosystems, Longman Scientific & Technical Sylvia et al., 1998, Principles and Applications of Soil Microbiology, Prentice Hall Soil and Water Conservation Society, 2000, Soil Biology Primer Tate, 2000, Soil Microbiology, 2nd edition, John Wiley van Elsas et al., 1997, Modern Soil Microbiology, Marcel Dekker Wood, 1995, Environmental Soil Biology, 2nd edition, Blackie A & P Vats, Rajeev and Aggarwal, Sanjeev, 2019, Impact of Termite Activity and Its Effect on Soil Composition External links Michigan State University – Soil Ecology and Management: Soil Biology New South Wales – Soil Biology University of Minnesota – Soil Biology and Soil Management Soil-Net.com A free schools-age educational site, featuring much on soil biology and teaching about soil and its importance. Why organic fertilizers are a good choice for healthy soil Effects of transgenic zeaxanthin potatoes on soil quality Biosafety research project funded by the BMBF Phospholipid fatty-acid analysis protocol A method for analyzing the soil microbial community (pdf file) USDA-NRCS – Soil Biology Primer Soil science
Soil biology
Biology
3,360
30,666,647
https://en.wikipedia.org/wiki/Advances%20in%20Geometry
Advances in Geometry is a peer-reviewed mathematics journal published quarterly by Walter de Gruyter. Founded in 2001, the journal publishes articles on geometry. The journal is indexed by Mathematical Reviews and Zentralblatt MATH. Its 2016 MCQ was 0.45, and its 2021 impact factor was 0.763. References External links Geometry journals Academic journals established in 2001 English-language journals De Gruyter academic journals Quarterly journals
Advances in Geometry
Mathematics
88
1,834,821
https://en.wikipedia.org/wiki/Vesuvianite
Vesuvianite, also known as idocrase, is a green, brown, yellow, or blue silicate mineral. Vesuvianite occurs as tetragonal crystals in skarn deposits and limestones that have been subjected to contact metamorphism. It was first discovered within included blocks or adjacent to lavas on Mount Vesuvius, hence its name. Attractive-looking crystals are sometimes cut as gemstones. Localities which have yielded fine crystallized specimens include Mount Vesuvius and the Ala Valley near Turin, Piedmont. The specific gravity is 3.4 and the Mohs hardness is about 6.5. The name "vesuvianite" was given by Abraham Gottlob Werner in 1795, because fine crystals of the mineral are found at Vesuvius; these are brown in color and occur in the ejected limestone blocks of Monte Somma. Several other names have been applied to this species, one of which, "idocrase" by René Just Haüy in 1796, is now in common use. A sky-blue variety known as cyprine has been reported from Franklin, New Jersey and other locations; the blue is due to impurities of copper in a complex calcium aluminum sorosilicate. Californite is a name sometimes used for jade-like vesuvianite, also known as California jade, American jade or vesuvianite jade. Xanthite is a manganese-rich variety. Wiluite is an optically positive variety from Wilui, Siberia. Idocrase is an older synonym sometimes used for gemstone-quality vesuvianite. Vessonite and Vassolite are variant spellings commonly encountered in the gem trade. References Additional sources Webmineral data Vesuvianite at Franklin-Sterling Mindat - Cyprine variants with location data Calcium minerals Magnesium minerals Aluminium minerals Mount Vesuvius Gemstones Geology of Italy Sorosilicates Tetragonal minerals Minerals in space group 126
Vesuvianite
Physics
398
73,522,835
https://en.wikipedia.org/wiki/Chronodisruption
Chronodisruption is a concept in the field of circadian biology that refers to the disturbance or alteration of the body's natural biological rhythms, for example the sleep-wake cycle, due to various environmental factors. The human body is synchronized to a 24-hour light-dark cycle, which is essential for maintaining optimal health and well-being. However, modern lifestyles, which involve exposure to artificial light (especially during nighttime), irregular sleep schedules, and shift work, can disrupt this natural rhythm, leading to a range of adverse physiological outcomes. Chronodisruption has been linked to a variety of health disorders and diseases, including neurodegenerative diseases, diabetes, mood disorders, cardiovascular disease, and cancer. Such disruptors can lead to dysregulation of hormones and neurotransmitters, though researchers continue to investigate the physiological implications of chronodisruption. Indeed, research in chronobiology is rapidly advancing, with an increasing focus on understanding the underlying mechanisms of chronodisruption and developing strategies to prevent or mitigate its adverse effects. This includes the development of pharmacological interventions, as well as lifestyle modifications such as optimizing one's sleeping environment and the timing of meals and physical activity. Chronodisruption and Cancer People with chronodisruption have an increased risk for certain types of cancer. Chronodisruption has been demonstrated to have a causal role in cancer cell growth and tumor progression in rodents. In 2020, the International Agency for Research on Cancer (IARC) found that chronodisruption due to chronic night-shift work is a probable carcinogen (cancer-causing agent) in humans. In Humans Chronodisruption, in the form of shift work, increases the risk of breast cancer in women by about 50%. The risk of developing other forms of cancer, such as prostate cancer in men and colorectal cancer in women, may also increase with chronodisruption; studies in this area have shown modest, but statistically significant, associations. Chronodisruption is associated with impeded homeostasis of the cell cycle; this is correlated with accelerated malignant growth and cancer, potentially due to obstruction of normal DNA damage repair. In Model Organisms In studies by Filipski et al. investigating the relationship between experimental chronic jet lag and tumor progression, mice were kept either under 12:12 light-dark (LD) cycles or under 12:12 LD cycles that phase-advanced by eight hours every two days. Upon injection with Glasgow osteosarcoma cells, a rapid acceleration in cancer cell proliferation rate was observed in the mice experiencing an 8-hour phase advance every two days compared to the mice not experiencing phase advance. Moreover, clock gene expression (e.g. of mPer2) was suppressed in mice subjected to repeated phase advance, while the daily rhythm in clock gene expression was maintained in mice in a typical 12:12 LD cycle. The down-regulation of the p53 gene and over-expression of the c-Myc gene associated with the clock disturbance may also have contributed to tumor progression. Melatonin is known to be an endogenously produced oncostatic agent that inhibits tumor cell growth via various potential mechanisms. Studies showed that perfusing human breast cancer xenografts grown in animals with melatonin-rich blood collected from premenopausal women significantly inhibited all signs of rapid cancer cell proliferation. 
On the other hand, melatonin-deficient blood collected from the same set of women failed to restrict tumor growth. In the original studies by Filipski et al., a mouse strain named B6D2F1, which has a low level of circulating melatonin, was used. Although no definite conclusion can be made on the possible effects of melatonin on cancer development in B6D2F1 mice based on the original studies, a general statement can be made: besides the direct effects of internal desynchronization with the external environment, the accelerated rate of cancer cell proliferation may also be a consequence of relative melatonin deficiency caused by chronodisruption. Extreme cases of chronic jet lag (6-hour advances every week, lasting 4 or more weeks under experimental settings) were observed to cause premature death in aged male mice compared to their counterparts kept in stable external LD cycles. This consequence was not observed in mice experiencing chronic phase delays. This showed that persistent internal desynchronization as a result of repeated phase advances may be associated with reduced longevity. The findings may have great implications for shift workers and people who frequently undertake transmeridian travel that advances their internal clock. Recent studies since 2016 in mice have shown that chronic jet-lag models accelerate tumorigenesis in genetic models of lung cancer, liver cancer, colorectal cancer, and skin cancer. It has been suggested that chronodisruption is a "Hallmark of Systemic Disease". Chronodisruption and Cardiovascular Disease Chronodisruption is correlated with an increased risk of cardiovascular disease in humans. Experiments involving light-dark cycle manipulations, internal period mutations, and clock gene disruptions in rodents provide insights into the relationship between chronodisruption and the risk of cardiovascular diseases. In Humans Chronodisruption is associated with a significantly increased risk of cardiovascular disease in humans. Shift work has been implicated as a major risk factor for coronary heart disease, hypertension, ischemic stroke, and sudden cardiac death. Social jet lag, the discrepancy between sleep schedules on working days and free days (a misalignment between biological time and social time), may also be associated with increases in cardiovascular disease risk, as evidenced by increased triglyceride levels, decreased high-density lipoprotein-cholesterol levels, and decreased insulin sensitivity. In Model Organisms Mice exposed to a shortened 10:10 LD cycle (20-hour cycle) were observed to exhibit symptoms of abnormal cardiac pathophysiology, including decreased levels of cardiomyocytes and vascular smooth muscle cell hypertrophy, compared to mice in a typical 12:12 LD cycle (24-hour). These symptoms were rescued when the mice were subsequently exposed to the typical 24-hour LD cycle. Mutant mice with a 22-hour intrinsic period developed cardiomyopathy and died early when kept under a 24-hour LD cycle; however, their cardiac functions were normalized under a shortened LD cycle (22-hour cycle) that matched their intrinsic period. Experiments simulating "shift work" in mice (keeping mice awake for 6 hours during their inactive period over several days) showed that mice misaligned with the external LD cycle had decreased metabolic efficiency and disrupted cardiac function. Deletion or mutation of core clock genes (e.g. 
Bmal1, Clock, Npas2) was shown to have an adverse impact on cardiac function, including attenuating glucose utilization, accelerating cardiomyopathy, and reducing longevity. Chronodisruption and Metabolic Disorders Food is a strong Zeitgeber for peripheral clocks, and the timing of food intake can disrupt or amplify the coordination between the central pacemaker and peripheral systems. This misalignment can lead to detrimental effects on metabolic health, including symptoms like insulin resistance and increased body mass. In Humans There is an increased risk of type 2 diabetes associated with shift work, with even higher risks among rotating-shift or night-shift workers and health care workers. Chronodisruption has been shown to disturb the regulation of glucose and insulin in the body, providing a potential pathway for this increased risk. Additionally, shift workers exhibit a higher risk for obesity than day workers, a risk which increases with the number of years exposed and the frequency of shifts. It is hypothesized that circadian regulation of hormonal secretion related to appetite, as well as the presence of circadian clocks in adipose tissue cells, may influence the increased obesity risk related to shift work, although further study will be necessary to confirm this pathway. Timing of food intake matching the proper circadian phase is also essential. Cross-sectional studies by Wang et al. demonstrated that people who consumed ≥ 33% of their daily energy intake in the evening were twice as likely to become obese as those who consumed that share of their energy intake in the morning. Hence, the timing of food intake is also correlated with obesity. In Model Organisms Swiss Webster mice (an all-purpose mouse strain used as a research model) that had altered timings of food intake due to exposure to artificial light at subjective night gained substantially more weight than control mice placed under a regular light-dark cycle. The experimental design that included light exposure at night would have led to a reduction of nighttime melatonin level and a disturbed melatonin rhythm. Melatonin has been suggested to have anti-obesity effects due to its ability to stimulate the growth and metabolic activity of brown adipose tissue, inducing weight loss. The relative melatonin deficiency due to light exposure at night may therefore lead to obesity. However, melatonin level was not measured in the original experiments. More recent articles also suggest that the majority of laboratory mouse strains, including Swiss Webster mice, do not produce melatonin on their own. Thus, the role of melatonin in the metabolic consequences of circadian misalignment caused by altered timings of food intake remains unclear. Mice fed a high-fat, obesogenic diet showed dampened rhythms in feeding and dampened hepatic circadian rhythms, promoting hyperphagia and obesity. Studies investigating the effect of isocaloric time-restricted feeding (TRF) discovered that mice fed a high-fat diet (HFD) in an 8-to-12-hour window during the normal feeding time (subjective night) had significantly less weight gain than mice fed the HFD during the time when feeding is normally reduced (subjective day). This observation in mice suggests that the timing of food intake is associated with obesity. Chronodisruption is often associated with shortened sleep. 
Studies using rodents demonstrate that sleep deprivation, which leads to a reduced level of leptin (the "satiety hormone") and an increased level of ghrelin (the "hunger hormone"), encourages increased food intake. Experiments investigating clock gene mutants and knockouts show the strong linkage between obesity, metabolic disorders, and the circadian clock. ClockΔ19 mice with disrupted circadian rhythms (Clock gene mutant mice) have a dampened diurnal feeding rhythm and are obese. ClockΔ19 mice with leptin knockout are significantly more obese than mice with leptin knockout only, implying a significant contribution of chronodisruption to obesity in mice. Similarly, mPer2-knockout mice fed a high-fat diet were significantly more obese than their wild-type counterparts. Chronodisruption and Reproduction In Humans Chronodisruption, in the form of shift work, has been associated with disturbances of the menstrual cycle (increased irregularity and cycle length) and of mood. This deterioration of the menstrual cycle has also been shown to increase with increasing duration of chronodisruption. Chronodisruption during pregnancy is also associated with various negative outcomes, including low relative birth weight, preterm birth, and miscarriage. In Model Organisms Chronodisruption has a detrimental effect on reproduction and the development of offspring in rodents. Both clock gene mutations and experiencing phase advances or delays after copulation were observed to interfere with the ability to complete pregnancies. Deletion of the key clock gene, Bmal1, in mouse ovaries significantly reduces oocyte fertilization, early embryo development, and implantation. Gestational chronodisruption (clock misalignment during pregnancy) induced by chronic phase shifts is linked with detrimental effects on the health of mouse progeny, including persistent metabolic, cardiovascular, and cognitive dysfunctions. However, these conditions were reversed when the chronodisrupted mother received melatonin in the subjective night, suggesting that the maternal plasma melatonin rhythm may drive the fetal rhythm. Chronodisruption and Neurodegenerative Diseases In Humans Chronodisruption has also been implicated as a risk factor for neurodegenerative diseases such as Parkinson's Disease (PD) and Alzheimer's Disease (AD) in humans. Circadian regulation of metabolism and dopamine levels is hypothesized to contribute to the link between chronodisruption and PD. Increased risk for AD may be influenced by increased levels of t-tau protein in the blood due to sleep loss, as well as by certain AD-risk genes which are suggested to be controlled by the circadian clock, though these factors are still under investigation. Sleep loss in pre-pathological stages of AD might be correlated with future pathological progression, including the increase of Amyloid-beta 42 in cerebrospinal fluid. In Model Organisms Misalignment between the sleep/wake cycle and feeding rhythms in mice causes circadian desynchrony between the suprachiasmatic nucleus (SCN) and the hippocampus. Mice exposed to "jet lag" experimental conditions experience circadian misalignment, exhibiting increased levels of inflammatory markers in the blood, diminished hippocampal neurogenesis, and impaired learning and memory. Exposure to altered LD cycles (e.g. a 10:10 LD cycle) also disrupts SCN-mediated rhythms and causes peripheral metabolic alterations in mice, leading to decreased dendritic branching of cortical neurons, decreased cognitive flexibility, and behavioral impairments. 
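Several of the rodent protocols described above (Filipski et al.'s chronic jet lag, the shortened 10:10 cycle) are defined purely by their lighting schedules. As a minimal illustration, not from the original article, the following Python sketch generates the lights-on times for a 12:12 LD cycle that is phase-advanced by 8 hours every 2 days; the 08:00 baseline onset is an arbitrary assumption:

```python
# Lights-on hour for each day of a chronic jet-lag protocol:
# a 12:12 LD cycle whose onset is advanced by `advance_h` hours
# every `every_days` days (advance_h=0 gives the control schedule).
def lights_on_times(days, advance_h=8, every_days=2, baseline_h=8):
    return [(baseline_h - (day // every_days) * advance_h) % 24
            for day in range(days)]

print(lights_on_times(8))               # [8, 8, 0, 0, 16, 16, 8, 8]
print(lights_on_times(8, advance_h=0))  # control: [8, 8, 8, 8, 8, 8, 8, 8]
```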
Notable Researchers Chronodisruption first became a notable concept in 2003, when three researchers from the University of Cologne in Germany, Thomas C. Erren, Russel J. Reiter, and Claus Piekarski, published the paper Light, timing of biological rhythms, and chronodisruption in man. At the time, Erren, Reiter, and Piekarski were studying how biological clocks can be used to understand cycles and causes of cancer, suggesting that cancer follows a rhythmic light cycle. These three researchers are considered to have conceived the term "chronodisruption", making large conceptual strides from "chronodisturbance" and, even further, from "circadian disruption". Circadian disruption is a brief or long period of interference within a circadian rhythm. Chronodisturbance is the disruption of a circadian rhythm which leads to adaptive changes, and hence to a less substantial negative impact than chronodisruption, which leads to disease. Another notable researcher in the field is Mary E. Harrington. Thomas C. Erren is still employed by the University of Cologne, where his research focuses on intersections between chronobiology and disease in terms of prevention. Russel Reiter is employed by UT Health San Antonio and is involved in research on the processes of aging and disease, specifically how oxidative processes interact with neurodegenerative diseases. His research group is also studying the properties of melatonin, its relation to circadian disruptions, and the resulting physiology. Mary E. Harrington is employed by Smith College, where she is the head of their neuroscience program. Her research is focused on the impact of disruptions to the central and peripheral clocks, as well as their impact on Alzheimer's disease and aging. References Sleep
Chronodisruption
Biology
3,217
1,144,596
https://en.wikipedia.org/wiki/Lead%28II%29%20chloride
Lead(II) chloride (PbCl2) is an inorganic compound which is a white solid under ambient conditions. It is poorly soluble in water. Lead(II) chloride is one of the most important lead-based reagents. It also occurs naturally in the form of the mineral cotunnite. Structure and properties In solid PbCl2, each lead ion is coordinated by nine chloride ions in a tricapped triangular prism formation: six lie at the vertices of a triangular prism and three lie beyond the centers of each rectangular prism face. The nine chloride ions are not equidistant from the central lead atom: seven lie at 280–309 pm and two at 370 pm. PbCl2 forms white orthorhombic needles. In the gas phase, PbCl2 molecules have a bent structure, with a Cl–Pb–Cl angle of 98° and a Pb–Cl bond distance of 2.44 Å. Such PbCl2 is emitted from internal combustion engines that use ethylene chloride-tetraethyllead additives for antiknock purposes. PbCl2 is sparingly soluble in water, with a solubility product Ksp ≈ 1.7 × 10−5 at 20 °C. It is one of only five commonly water-insoluble chlorides, the other four being thallium(I) chloride, silver chloride (AgCl) with Ksp ≈ 1.8 × 10−10, copper(I) chloride (CuCl) with Ksp ≈ 1.7 × 10−7, and mercury(I) chloride (Hg2Cl2) with Ksp ≈ 1.4 × 10−18. 
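As a worked illustration of the solubility product just quoted (not part of the original article), the molar solubility s of PbCl2 follows from Ksp = [Pb2+][Cl−]² = s·(2s)² = 4s³. A minimal Python sketch, using the approximate Ksp above and ignoring activity effects and complex-ion formation, which make the real solubility somewhat higher:

```python
# Idealized molar solubility of PbCl2 from its solubility product:
# PbCl2(s) <-> Pb2+ + 2 Cl-, so Ksp = s * (2s)**2 = 4*s**3.
KSP = 1.7e-5       # approximate solubility product at 20 degC (assumed above)
M_PBCL2 = 278.1    # g/mol, molar mass of PbCl2

s = (KSP / 4) ** (1.0 / 3.0)         # mol/L
print(f"s = {s:.4f} mol/L")          # ~0.016 mol/L
print(f"  = {s * M_PBCL2:.1f} g/L")  # ~4.5 g/L: 'sparingly soluble'
```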
Synthesis Solid lead(II) chloride precipitates upon addition of aqueous chloride sources (HCl, NaCl, KCl) to aqueous solutions of lead(II) compounds, such as lead(II) nitrate and lead(II) acetate: Pb2+(aq) + 2 Cl−(aq) → PbCl2(s) It also forms by treatment of basic lead(II) compounds, such as lead(II) oxide and lead(II) carbonate, with hydrochloric acid. Lead dioxide is reduced by chloride as follows: PbO2 + 4 HCl → PbCl2 + Cl2 + 2 H2O It is also formed by the oxidation of lead metal by copper(II) chloride: Pb + CuCl2 → PbCl2 + Cu Or, most straightforwardly, by the action of chlorine gas on lead metal: Pb + Cl2 → PbCl2 Reactions Addition of chloride ions to a suspension of PbCl2 gives rise to soluble complex ions. In these reactions the additional chloride (or other ligands) breaks up the chloride bridges that comprise the polymeric framework of solid PbCl2(s). PbCl2(s) + Cl− → [PbCl3]−(aq) PbCl2(s) + 2 Cl− → [PbCl4]2−(aq) PbCl2 reacts with molten NaNO2 to give PbO: PbCl2(l) + 3 NaNO2 → PbO + NaNO3 + 2 NO + 2 NaCl PbCl2 is used in the synthesis of lead(IV) chloride (PbCl4): Cl2 is bubbled through a saturated solution of PbCl2 in aqueous NH4Cl, forming [NH4]2[PbCl6]. The latter is reacted with cold concentrated sulfuric acid (H2SO4), forming PbCl4 as an oil. Lead(II) chloride is the main precursor for organometallic derivatives of lead, such as plumbocenes. The usual alkylating agents are employed, including Grignard reagents and organolithium compounds: 2 PbCl2 + 4 RLi → R4Pb + 4 LiCl + Pb 2 PbCl2 + 4 RMgBr → R4Pb + Pb + 4 MgBrCl 3 PbCl2 + 6 RMgBr → R3Pb-PbR3 + Pb + 6 MgBrCl These reactions produce derivatives more similar to those of organosilicon compounds; that is, Pb(II) tends to disproportionate upon alkylation. PbCl2 can be used to produce PbO2 by treating it with sodium hypochlorite (NaClO), forming a reddish-brown precipitate of PbO2. Uses Molten PbCl2 is used in the synthesis of lead titanate and barium lead titanate ceramics by cation replacement reactions: x PbCl2(l) + BaTiO3(s) → Ba1−xPbxTiO3 + x BaCl2 PbCl2 is used in the production of infrared-transmitting glass and of an ornamental glass called aurene glass. Aurene glass has an iridescent surface formed by spraying with PbCl2 and reheating under controlled conditions. Stannous chloride (SnCl2) is used for the same purpose. Pb is used in HCl service even though the PbCl2 formed is slightly soluble in HCl. Addition of 6–25% of antimony (Sb) increases corrosion resistance. A basic chloride of lead, PbCl2·Pb(OH)2, is known as Pattinson's white lead and is used as a pigment in white paint. Lead paint is now banned as a health hazard in many countries by the White Lead (Painting) Convention, 1921. PbCl2 is an intermediate in refining bismuth (Bi) ore. Ore containing Bi, Pb, and Zn is first treated with molten caustic soda to remove traces of arsenic and tellurium. This is followed by the Parkes process to remove any silver and gold present. The ore, which still contains Bi, Pb, and Zn, is then treated with Cl2 gas at 500 °C. ZnCl2 forms first and is removed. PbCl2 then forms and is removed, leaving pure Bi behind; lastly, any remaining bismuth would form BiCl3. Toxicity Like other soluble lead compounds, exposure to PbCl2 may cause lead poisoning. References External links IARC Monograph: "Lead and Lead Compounds" IARC Monograph: "Inorganic and Organic Lead Compounds" National Pollutant Inventory – Lead and Lead Compounds Fact Sheet Case Studies in Environmental Medicine – Lead Toxicity ToxFAQs: Lead Lead(II) compounds Chlorides Metal halides IARC Group 2A carcinogens
Lead(II) chloride
Chemistry
1,294
760,736
https://en.wikipedia.org/wiki/Alan%20C.%20Gilmore
Alan Charles Gilmore (born 1944 in Greymouth, New Zealand) is a New Zealand astronomer and a discoverer of minor planets and other astronomical objects. He is credited by the Minor Planet Center with the discovery of 41 minor planets, all but one in collaboration with his wife Pamela M. Kilmartin. Both astronomers are also active nova- and comet-hunters. Until their retirement in 2014, Gilmore and Kilmartin worked at Mount John University Observatory (Department of Physics and Astronomy, University of Canterbury, Christchurch, New Zealand), where they continue to receive observing time. He is also a member of the Organizing Committee of IAU Commission 6, which oversees the dissemination of information and the assignment of credit for astronomical discoveries. The Commission still bears the name "Astronomical Telegrams", even though telegrams are no longer used. On 30 August 2007, Gilmore discovered his first periodic comet, P/2007 Q2. The Eunomia asteroid 2537 Gilmore was named in his honor, while his wife is honored with the outer main-belt asteroid 3907 Kilmartin. Gilmore talks on astronomy on the Radio New Zealand program Nights' Science. In May 2019 he and his wife were honored by New Zealand Post with a stamp in its New Zealand Space Pioneers series. List of discovered minor planets See also Gary Hug Miguel Itzigsohn References External links Alan Gilmore, UC SPARK - University of Canterbury 20th-century New Zealand astronomers 21st-century New Zealand astronomers Discoverers of asteroids Discoverers of comets Living people 1944 births
Alan C. Gilmore
Astronomy
310
907,108
https://en.wikipedia.org/wiki/Alfv%C3%A9n%20wave
In plasma physics, an Alfvén wave, named after Hannes Alfvén, is a type of plasma wave in which ions oscillate in response to a restoring force provided by an effective tension on the magnetic field lines. Definition An Alfvén wave is a low-frequency (compared to the ion gyrofrequency) travelling oscillation of the ions and magnetic field in a plasma. The ion mass density provides the inertia and the magnetic field line tension provides the restoring force. Alfvén waves propagate in the direction of the magnetic field, and the motion of the ions and the perturbation of the magnetic field are transverse to the direction of propagation. However, Alfvén waves existing at oblique incidences will smoothly change into magnetosonic waves when the propagation is perpendicular to the magnetic field. Alfvén waves are dispersionless. Alfvén velocity The low-frequency relative permittivity of a magnetized plasma is given by $\varepsilon = 1 + \frac{c^2 \mu_0 \rho}{B^2}$, where $B$ is the magnetic flux density, $c$ is the speed of light, $\mu_0$ is the permeability of the vacuum, and the mass density $\rho = \sum_s n_s m_s$ is the sum over all species of charged plasma particles (electrons as well as all types of ions). Here species $s$ has number density $n_s$ and mass per particle $m_s$. The phase velocity of an electromagnetic wave in such a medium is $v = \frac{c}{\sqrt{\varepsilon}} = \frac{c}{\sqrt{1 + c^2 \mu_0 \rho / B^2}}$. For the case of an Alfvén wave $v = \frac{v_A}{\sqrt{1 + v_A^2 / c^2}}$, where $v_A = \frac{B}{\sqrt{\mu_0 \rho}}$ is the Alfvén wave group velocity. (The formula for the phase velocity assumes that the plasma particles are moving at non-relativistic speeds, the mass-weighted particle velocity is zero in the frame of reference, and the wave is propagating parallel to the magnetic field vector.) If $v_A \ll c$, then $v \approx v_A$. On the other hand, when $v_A \gg c$, $v \approx c$. That is, at high field or low density, the group velocity of the Alfvén wave approaches the speed of light, and the Alfvén wave becomes an ordinary electromagnetic wave. Neglecting the contribution of the electrons to the mass density, $\rho \approx n_i m_i$, where $n_i$ is the ion number density and $m_i$ is the mean ion mass per particle, so that $v_A \approx \frac{B}{\sqrt{\mu_0 n_i m_i}}$. Alfvén time In plasma physics, the Alfvén time is an important timescale for wave phenomena. It is related to the Alfvén velocity by $\tau_A = \frac{a}{v_A}$, where $a$ denotes the characteristic scale of the system. For example, $a$ could be the minor radius of the torus in a tokamak. Relativistic case The Alfvén wave velocity in relativistic magnetohydrodynamics is $v = c \sqrt{\frac{2 P_m}{e + P + 2 P_m}}$, where $e$ is the total energy density of plasma particles, $P$ is the total plasma pressure, and $P_m = \frac{B^2}{2 \mu_0}$ is the magnetic pressure. In the non-relativistic limit, where $P \ll e \approx \rho c^2$, this formula reduces to the one given previously. 
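The non-relativistic Alfvén speed above is easy to evaluate numerically. A minimal Python sketch (not part of the original article); the field strength and ion density used here are round, merely illustrative coronal values, not measurements:

```python
import math

MU0 = 4e-7 * math.pi       # vacuum permeability, T*m/A
M_PROTON = 1.6726e-27      # proton mass, kg

def alfven_speed(B, n_i, m_i=M_PROTON):
    """v_A = B / sqrt(mu0 * n_i * m_i), electron mass density neglected."""
    return B / math.sqrt(MU0 * n_i * m_i)

# Illustrative (assumed) coronal numbers: B ~ 1e-3 T, n_i ~ 1e15 m^-3.
v_a = alfven_speed(B=1e-3, n_i=1e15)
print(f"v_A ~ {v_a / 1e3:.0f} km/s")   # ~690 km/s, far below c
```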
History The coronal heating problem The study of Alfvén waves began from the coronal heating problem, a longstanding question in heliophysics. It was unclear why the temperature of the solar corona is so high (about one million kelvins) compared to its surface (the photosphere), which is only a few thousand kelvins. Intuitively, it would make sense to see a decrease in temperature when moving away from a heat source, but this does not seem to be the case even though the photosphere is denser and would generate more heat than the corona. In 1942, Hannes Alfvén proposed in Nature the existence of an electromagnetic-hydrodynamic wave which would carry energy from the photosphere to heat up the corona and the solar wind. He claimed that the sun had all the necessary criteria to support these waves and that they might in turn be responsible for sunspots. He stated: If a conducting liquid is placed in a constant magnetic field, every motion of the liquid gives rise to an E.M.F. which produces electric currents. Owing to the magnetic field, these currents give mechanical forces which change the state of motion of the liquid. Thus a kind of combined electromagnetic–hydrodynamic wave is produced. This would eventually turn out to be Alfvén waves. He received the 1970 Nobel Prize in Physics for this discovery. Experimental studies and observations The convection zone of the Sun, the region beneath the photosphere in which energy is transported primarily by convection, is sensitive to the motion of the core due to the rotation of the Sun. Together with varying pressure gradients beneath the surface, electromagnetic fluctuations produced in the convection zone induce random motion on the photospheric surface and produce Alfvén waves. The waves then leave the surface, travel through the chromosphere and transition zone, and interact with the ionized plasma. The wave itself carries energy and some of the electrically charged plasma. In the early 1990s, de Pontieu and Haerendel suggested that Alfvén waves may also be associated with the plasma jets known as spicules. It was theorized that these brief spurts of superheated gas were carried by the combined energy and momentum of their own upward velocity, as well as the oscillating transverse motion of the Alfvén waves. In 2007, Alfvén waves were reportedly observed travelling towards the corona for the first time, by Tomczyk et al., but their analysis could not establish that the energy carried by the Alfvén waves was sufficient to heat the corona to its enormous temperatures, because the observed amplitudes of the waves were not high enough. However, in 2011, McIntosh et al. reported the observation of highly energetic Alfvén waves combined with energetic spicules which could sustain heating the corona to its million-kelvin temperature. These observed amplitudes (20.0 km/s, against 2007's observed 0.5 km/s) carried over one hundred times more energy than the waves observed in 2007. The short period of the waves also allowed more energy transfer into the coronal atmosphere. The 50,000 km-long spicules may also play a part in accelerating the solar wind past the corona. Alfvén waves are routinely observed in the solar wind, in particular in fast solar wind streams. The role of Alfvénic oscillations in the interaction between the fast solar wind and the Earth's magnetosphere is currently under debate. However, the above-mentioned discoveries of Alfvén waves in the Sun's complex atmosphere, from the start of the Hinode era in 2007 and over the next 10 years, mostly fall in the realm of Alfvénic waves essentially generated as a mixed mode due to transverse structuring of the magnetic and plasma properties in the localized flux tubes. In 2009, Jess et al. reported the periodic variation of H-alpha line-width as observed by the Swedish Solar Telescope (SST) above chromospheric bright points. They claimed the first direct detection of long-period (126–700 s), incompressible, torsional Alfvén waves in the lower solar atmosphere. After the seminal work of Jess et al. (2009), in 2017 Srivastava et al. detected the existence of high-frequency torsional Alfvén waves in the Sun's chromospheric fine-structured flux tubes. They discovered that these high-frequency waves carry substantial energy, capable of heating the Sun's corona and also of originating the supersonic solar wind. In 2018, using spectral imaging observations, non-LTE (local thermodynamic equilibrium) inversions and magnetic field extrapolations of sunspot atmospheres, Grant et al. 
found evidence for elliptically polarized Alfvén waves forming fast-mode shocks in the outer regions of the chromospheric umbral atmosphere. They provided a quantification of the heating supplied by the dissipation of such Alfvén wave modes above active region spots. In 2024, a paper was published in the journal Science detailing a set of observations, made by Parker Solar Probe and Solar Orbiter in February 2022, of what turned out to be the same jet of solar wind, and implying that Alfvén waves were what kept the jet's energy high enough to match the observations. Historical timeline 1942: Alfvén suggests the existence of electromagnetic-hydromagnetic waves in a paper published in Nature 150, 405–406 (1942). 1949: Laboratory experiments by S. Lundquist produce such waves in magnetized mercury, with a velocity that approximated Alfvén's formula. 1949: Enrico Fermi uses Alfvén waves in his theory of cosmic rays. 1950: Alfvén publishes the first edition of his book, Cosmical Electrodynamics, detailing hydromagnetic waves and discussing their application to both laboratory and space plasmas. 1952: Additional confirmation appears in experiments by Winston Bostick and Morton Levine with ionized helium. 1954: Bo Lehnert produces Alfvén waves in liquid sodium. 1958: Eugene Parker suggests hydromagnetic waves in the interstellar medium. 1958: Berthold, Harris, and Hope detect Alfvén waves in the ionosphere after the Argus nuclear test, generated by the explosion and traveling at speeds predicted by the Alfvén formula. 1958: Eugene Parker suggests hydromagnetic waves in the solar corona extending into the solar wind. 1959: D. F. Jephcott produces Alfvén waves in a gas discharge. 1959: C. H. Kelley and J. Yenser produce Alfvén waves in the ambient atmosphere. 1960: Coleman et al. report the measurement of Alfvén waves by the magnetometers aboard the Pioneer and Explorer satellites. 1961: Sugiura suggests evidence of hydromagnetic waves in the Earth's magnetic field. 1961: Normal Alfvén modes and resonances in liquid sodium are studied by Jameson. 1966: R. O. Motz generates and observes Alfvén waves in mercury. 1970: Hannes Alfvén wins the 1970 Nobel Prize in Physics for "fundamental work and discoveries in magneto-hydrodynamics with fruitful applications in different parts of plasma physics". 1973: Eugene Parker suggests hydromagnetic waves in the intergalactic medium. 1974: J. V. Hollweg suggests the existence of hydromagnetic waves in interplanetary space. 1977: Mendis and Ip suggest the existence of hydromagnetic waves in the coma of Comet Kohoutek. 1984: Roberts et al. predict the presence of standing MHD waves in the solar corona and open the field of coronal seismology. 1999: Aschwanden et al. and Nakariakov et al. report the detection of damped transverse oscillations of solar coronal loops observed with the extreme ultraviolet (EUV) imager on board the Transition Region And Coronal Explorer (TRACE), interpreted as standing kink (or "Alfvénic") oscillations of the loops. This confirms the theoretical prediction of Roberts et al. (1984). 2007: Tomczyk et al. report the detection of Alfvénic waves in images of the solar corona with the Coronal Multi-Channel Polarimeter (CoMP) instrument at the National Solar Observatory, New Mexico. However, these observations turned out to be kink waves of coronal plasma structures (doi:10.1051/0004-6361/200911840). 2007: A special issue on the Hinode space observatory was released in the journal Science. 
Alfvén wave signatures in the coronal atmosphere were observed by Cirtain et al., Okamoto et al., and De Pontieu et al. By estimating the observed waves' energy density, De Pontieu et al. showed that the energy associated with the waves is sufficient to heat the corona and accelerate the solar wind. 2008: Kaghashvili et al. use driven wave fluctuations as a diagnostic tool to detect Alfvén waves in the solar corona. 2009: Jess et al. detect torsional Alfvén waves in the structured chromosphere of the Sun using the Swedish Solar Telescope. 2011: Alfvén waves are shown to propagate in a liquid metal alloy made of gallium. 2017: 3D numerical modelling performed by Srivastava et al. shows that the high-frequency (12–42 mHz) Alfvén waves detected by the Swedish Solar Telescope can carry substantial energy to heat the Sun's inner corona. 2018: Using spectral imaging observations, non-LTE inversions and magnetic field extrapolations of sunspot atmospheres, Grant et al. find evidence for elliptically polarized Alfvén waves forming fast-mode shocks in the outer regions of the chromospheric umbral atmosphere. For the first time, these authors provide a quantification of the heating supplied by the dissipation of such Alfvén wave modes. 2024: Alfvén waves are implied to be behind a smaller-than-expected energy loss in solar wind jets out as far as Venus's orbit, based on Parker Solar Probe and Solar Orbiter observations taken only two days apart. See also Alfvén surface Computational magnetohydrodynamics Electrohydrodynamics Electromagnetic pump Ferrofluid Magnetic flow meter Magnetohydrodynamic turbulence MHD generator MHD sensor Molten salt Plasma stability Shocks and discontinuities (magnetohydrodynamics) References Further reading External links Mysterious Solar Ripples Detected Dave Mosher 2 September 2007 Space.com EurekAlert! notification of 7 December 2007 Science special issue EurekAlert! notification: "Scientists find solution to solar puzzle" Waves in plasmas
Alfvén wave
Physics
2,706
72,008,050
https://en.wikipedia.org/wiki/Ana%20Celia%20Mota
Ana Celia Mota (born 1935) is a retired Argentine-American condensed matter physicist specializing in phenomena at ultracold temperatures, including superfluids and superconductors. She is a professor emerita at ETH Zurich in Switzerland. Education and career Mota was born in 1935 in Argentina, and is a US citizen. She studied physics at the Balseiro Institute in Argentina, where she earned a licenciate in 1960, and became a doctoral student of John C. Wheatley. Her research with him concerned the heat capacity of liquid Helium-3. After earning her doctorate in 1967, she worked for eight years in the Department of Physics and Institute for Pure and Applied Physical Sciences at the University of California, San Diego, and then for five more years at the University of Cologne, before joining ETH Zurich in 1980. At ETH Zurich, she was Senior Researcher in the Laboratory of Solid State Physics, professor, and director of a research group on low-temperature physics. Recognition Mota was named a Fellow of the American Physical Society (APS) in 1994, after a nomination from the APS Division of Condensed Matter Physics, "for work on superfluidity and superconductivity at ultra-low temperatures". References 1935 births Living people Argentine physicists Argentine women physicists American physicists American women physicists Swiss physicists Swiss women physicists Condensed matter physicists Fellows of the American Physical Society Academic staff of ETH Zurich
Ana Celia Mota
Physics,Materials_science
300
355,054
https://en.wikipedia.org/wiki/CAcert.org
CAcert.org is a community-driven certificate authority that issues free X.509 public key certificates. CAcert.org relies heavily on automation and therefore issues only domain-validated certificates (and not extended validation or organization validation certificates). These certificates can be used to digitally sign and encrypt email; encrypt code and documents; and authenticate and authorize user connections to websites via TLS/SSL. CAcert Inc. Association On 24 July 2003, Duane Groth incorporated CAcert Inc. as a non-profit association registered in New South Wales, Australia; in September 2024 the association moved to Geneva, Switzerland. CAcert Inc. runs CAcert.org, a community-driven certificate authority. In 2004, the Dutch Internet pioneer Teus Hagen became involved. He served as a board member and, in 2008, as president. Certificate Trust status CAcert.org's root certificates are not included in the most widely deployed certificate stores and have to be added by users themselves. As of 2021, most browsers, email clients, and operating systems do not automatically trust certificates issued by CAcert. Thus, users receive an "untrusted certificate" warning upon trying to view a website presenting an X.509 certificate issued by CAcert, or upon viewing emails authenticated with CAcert certificates in Microsoft Outlook, Mozilla Thunderbird, etc. (a minimal client-side example of explicitly trusting the CAcert root is sketched below). CAcert uses its own certificate on its website. Web browsers Discussion about inclusion of the CAcert root certificate in the Mozilla Application Suite and Mozilla Firefox started in 2004. Mozilla had no CA certificate policy at the time. Eventually, Mozilla developed a policy which required CAcert to improve their management system and conduct audits. In April 2007, CAcert formally withdrew its application for inclusion in the Mozilla root program. At the same time, the CA/Browser Forum was established to facilitate communication among browser vendors and certificate authorities. Mozilla's advice was incorporated into "baseline requirements" used by most major browser vendors. Progress towards meeting these requirements can hardly be expected in the near future. Operating systems FreeBSD included CAcert's root certificate but removed it in 2008, following Mozilla's policy. In 2014, CAcert was removed from the Ubuntu, Debian, and OpenBSD root stores. In 2018, CAcert was removed from Arch Linux. As of February 2022, the following operating systems or distributions include the CAcert root certificate by default: Arch Linux FreeWRT Gentoo (app-misc/ca-certificates only when USE flag cacert is set, defaults OFF from version 20161102.3.27.2-r2) GRML Knoppix Mandriva Linux MirOS BSD Openfire Privatix Replicant (Android) As of 2021, the following operating systems or distributions have an optional package with the CAcert root certificate: Debian openSUSE Web of trust To create higher-trust certificates, users can participate in a web of trust system whereby users physically meet and verify each other's identities. CAcert maintains the number of assurance points for each account. Assurance points can be gained through various means, primarily by having one's identity physically verified by users classified as "Assurers". Having more assurance points allows users more privileges, such as writing a name in the certificate and longer expiration times on certificates. A user with at least 100 assurance points is a Prospective Assurer, and may, after passing an Assurer Challenge, verify other users; more assurance points allow the Assurer to assign more assurance points to others. 
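Because the CAcert root is absent from default trust stores, a client that wants to validate a CAcert-issued certificate must be pointed at the root explicitly. A minimal Python sketch using only the standard library (not part of the original article); the PEM file path is an assumption, and the root certificate itself must first be obtained and verified out-of-band:

```python
import socket
import ssl

# Locally saved copy of the CAcert root certificate (assumed path;
# obtain and verify it out-of-band, e.g. from cacert.org).
CA_FILE = "cacert-root.pem"

# A default context trusts only the system store, so CAcert-signed
# servers fail verification; pointing cafile at the CAcert root
# makes certificates chaining to it validate.
ctx = ssl.create_default_context(cafile=CA_FILE)

with socket.create_connection(("www.cacert.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.cacert.org") as tls:
        print(tls.getpeercert()["issuer"])
```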
CAcert sponsors key signing parties, especially at big events such as CeBIT and FOSDEM. As of 2021, CAcert's web of trust has over 380,000 verified users. Root certificate descriptions Since October 2005, CAcert has offered Class 1 and Class 3 root certificates. Class 3 is a high-security subset of Class 1. See also Let's Encrypt CAcert wiki Further reading References Cryptography organizations Certificate authorities Transport Layer Security Information privacy Safety engineering
CAcert.org
Engineering
873
33,535,660
https://en.wikipedia.org/wiki/Dibenzoylmethane
Dibenzoylmethane (DBM) is an organic compound with the formula (C6H5C(O))2CH2. The name describes a 1,3-diketone, but the compound exists primarily as one of two equivalent enol tautomers. DBM is a white solid. Due to their UV-absorbing properties, derivatives of DBM, such as avobenzone, have found application in sunscreen products. Synthesis and reactions DBM is prepared by condensation of ethyl benzoate with acetophenone. Like other 1,3-diketones (or their enols), DBM condenses with a variety of bifunctional reagents to give heterocycles. Hydrazine gives diphenylpyrazole. Urea and thiourea also condense with it to give six-membered rings. With metal salts, the conjugate base of DBM forms complexes akin to the metal acetylacetonates. Occurrence and medicinal properties Dibenzoylmethane (DBM) is a minor constituent in the root extract of licorice (Glycyrrhiza glabra in the family Leguminosae). The DBM substructure also occurs in curcumin. These occurrences have led to investigations into the medicinal properties of this class of compounds. DBM (and trazodone) slow disease progression by preventing the cessation of protein synthesis in neurons. Related compounds Benzoylacetone References Aromatic ketones Chelating agents Ligands Phenyl compounds
Dibenzoylmethane
Chemistry
324
6,162,658
https://en.wikipedia.org/wiki/The%20Food%20Project
The Food Project is a non-profit organization that employs teenagers on farms in Lincoln, Roxbury and the North Shore of Massachusetts. It focuses on community improvement and outreach, and on education about health, leadership, charity, and sustainable agriculture. The youth are recruited from urban areas of Boston, Lynn, and surrounding suburbs to plant and harvest crops for sale at farmers' markets and CSAs, and for donation to local hunger-relief organizations and homeless shelters. The program fosters community building and a good work ethic. History Founded in 1991 by Ward Cheney, a local farmer and educator, The Food Project (abbreviated as TFP) is a non-profit organization committed to bringing youth together from the urban neighborhoods of Boston and the surrounding suburbs in order to build sustainable food systems. Overview The core of The Food Project's program is the employment of youth on farms in Lincoln and Roxbury, called the Summer Youth Program. Participants are hired in the spring, with equal representation from the city and nearby suburbs, and are divided into crews of about 10 crew workers, an assistant crew leader and a crew leader. The program is 8 weeks long (now 6.5 due to budget constraints), beginning in late June and ending in August to coincide with the Massachusetts public school calendar. There is also an academic year program and an internship program, both of which run throughout the year, though with fewer participants than the Summer Program. Crews are rotated throughout the growing season so each has experience working on the farms in Lincoln and Roxbury. In addition to doing farmwork and harvesting, all crews also work in local hunger-relief institutions like the Pine Street Inn, ReVision House Urban Farm, and Rosie's Place, where they help serve food cooked from the vegetables they grow. Using this paradigm, summer crew workers experience all aspects of their labor, from planting and harvesting to food donation. Sites The Food Project has 10 growing sites in total, including land in Lincoln, Massachusetts, and several sites in Roxbury, Massachusetts. The North Shore branch has a farm in Lynn and a farm in Beverly. The main offices for The Food Project are located in Lincoln Center. Community-supported agriculture One of the main tenets of The Food Project is that the land that the program uses is a part of the community and therefore must be integrated with it. One of the innovative ways in which this is accomplished is through community-supported agriculture (CSA), an agricultural model founded in Japan (Teikei) and implemented in the U.S. by Indian Line Farm, in Massachusetts. In this model, local consumers buy a share of the farm's harvest. Produce from the harvest is then distributed at the farm to shareholders, up to a defined limit. Selection and volume depend on the time of year, but freshness is frequently unmatched by standard retail vendors because the food can be consumed immediately after harvesting. The Food Project's 400-member CSA Farm Share program offers fresh vegetables, herbs and flowers and has pickup locations in Lincoln, Cambridge, Somerville, Arlington and Jamaica Plain. They publish a CSA newsletter which is distributed to all shareholders weekly. Additionally, shareholders can harvest their produce themselves from specially designated plots at certain Food Project growing sites. Program milestones 1991: Founded by Ward Cheney in conjunction with the Massachusetts Audubon Society. 
1992: First growing season, funded with $100,000 and farmed on a plot at Drumlin Farm in Lincoln. 1993: Groundbreaking of the half-acre Langdon Street Lot in Roxbury. 1994: First growing season for the Langdon Street Lot. 1995: West Cottage Street Lot is cleared by summer crew workers and the land is prepared for growing. 1998: A video, two books and many manuals document The Food Project's program and philosophy. The Rooted in Community Network is also co-founded. 2001: A neighborhood gardener lends some land, an undeveloped lot blocks away from the other two sites, on Albion Street in Roxbury. Remediation completes in 2001 and growing begins in 2002. 2003: The Food Project launches BLAST, an international initiative focusing on the next generation of leaders, farmers and practitioners in food systems work. 2008: In conjunction with the City of Boston, The Food Project pioneers the use of EBT/SNAP/food stamps at its farmers markets with a program called Boston Bounty Bucks. 2010: The Food Project receives a $600,000 stimulus grant from the Obama administration. US Secretary of Health and Human Services Kathleen Sebelius visits the project's Boston land. 2014: The Wenham Conservation Commission starts leasing the 34-acre Reynolds Farm to The Food Project. 2015: Dudley Neighbors Inc. (DNI), which has ownership over the parcel of land at 40 West Cottage where The Food Project has farmed on a year-to-year basis since 1998, grants The Food Project a 99-year lease. 2017: North Shore Community College receives a grant from the Massachusetts Skills Capital Grant Program to build an environmental horticulture greenhouse in Lynn, offering seedling and nursery space to The Food Project starting in summer 2018. Other activities In 2004, The Food Project was supported by Cirque du Soleil through a donation of tickets to their Varekai benefit performance at Suffolk Downs in Boston. Because of the show's history and origins in youth street performing, the donation was a sign of continued support for youth programs. In 2005, The Food Project won the Mayor's Award for Excellence in Children's Health, given by the Boston Mayor's Office, the Harvard School of Public Health (HSPH), and Children's Hospital Boston. Support The Food Project receives support from a variety of local and national private foundations, corporations, nonprofits, government agencies, and other institutions. Notes and references External links Official website Non-profit organizations based in Massachusetts Community building Urban planning Agricultural organizations based in the United States
The Food Project
Engineering
1,176
51,405,191
https://en.wikipedia.org/wiki/Markar%20Clock%20Tower
The Markar Clock Tower, also known as Borj-e Sa'at-e Markar, is a historic clock tower in Yazd, Iran. It is located at the geographic centre of Iran. History The cost of its construction was paid by a Zoroastrian from India, Pashutanji Marker. The clock tower was completed on 26 October 1942. The Markar clock tower, or Borj-e Sa'at Markar, stands in the middle of the Markar Clock Plaza. Architecture The tower has a height of about 4 meters, a square plan and a pyramid on top, and looks like an obelisk. The tower stands at the coordinate centre point of Iran. The clock movement was made in London by J. Smith & Sons Co.; its spring must be wound weekly. Mirza Soroush obtained permission for, and supervised the construction of, the Markar Plaza with gardens around it. The plaza is situated on the road to Kerman, just north of the Markarabad school entrance. There are poems on the four sides of the tower by a local poet, Naser, set in two lines that should be read clockwise. The upper line of the poem is about Ferdowsi, while the lower line is about the benefactor. The last hemistich (شادم از کردار نیک مارکار, "I am glad of the good deeds of Markar") implies the completion year of the tower in the Abjad numeral system, 13:20 or 1320 (Solar Hijri), as well as Markar's religion (Zoroastrian), by using one of that religion's maxims: "Good Deeds" (Persian: کردار نیک). This hemistich has an Abjad numeric value of 300+1+4+40+1+7+20+200+4+1+200+50+10+20+40+1+200+20+1+200 = 1320. Etymology Sa'at means "clock", which refers to the four-faced clock at the top of the tower. Markar is the name of the benefactor who paid the cost of the building. References External links Buildings and structures in Yazd Geographical centres
Markar Clock Tower
Physics,Mathematics
457