**Weighted space**
Weighted space:
In functional analysis, a weighted space is a space of functions under a weighted norm, which is a finite norm (or semi-norm) that involves multiplication by a particular function referred to as the weight.
Weighted space:
Weights can be used to expand or reduce a space of considered functions. For example, in the space of functions from a set U ⊂ ℝ to ℝ under the norm ‖⋅‖U defined by ‖f‖U = sup_{x∈U} |f(x)|, functions that have infinity as a limit point are excluded. However, the weighted norm sup_{x∈U} |f(x) · 1/(1+x²)| is finite for many more functions, so the associated space contains more functions. Alternatively, the weighted norm sup_{x∈U} |f(x) · (1+x⁴)| is finite for many fewer functions.
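As a small worked illustration (an example added here, not taken from the text above): on U = ℝ with the polynomial weight 1/(1+x²), the identity function f(x) = x has an infinite sup-norm but a finite weighted norm,

```latex
\sup_{x\in\mathbb{R}} |x| = \infty,
\qquad
\sup_{x\in\mathbb{R}} \left|\, x \cdot \frac{1}{1+x^{2}} \right| = \frac{1}{2},
```

since |x|/(1+x²) attains its maximum at x = ±1. More generally, any f with |f(x)| ≤ C(1+x²) belongs to this weighted space.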
Weighted space:
When the weight is of the form 1/(1+xᵐ), the weighted space is called polynomial-weighted.
**Digital transcriptome subtraction**
Digital transcriptome subtraction:
Digital transcriptome subtraction (DTS) is a bioinformatics method to detect the presence of novel pathogen transcripts through computational removal of the host sequences. DTS is the direct in silico analogue of the wet-lab approach representational difference analysis (RDA), and is made possible by unbiased high-throughput sequencing and the availability of a high-quality, annotated reference genome of the host. The method specifically examines the etiological agent of infectious diseases and is best known for discovering Merkel cell polyomavirus, the suspect causative agent in Merkel-cell carcinoma.
History:
Using computational subtraction to discover novel pathogens was first proposed in 2002 by Meyerson et al. using human expressed sequence tag (EST) datasets. In a proof-of-principle experiment, Meyerson et al. demonstrated that it was a feasible approach using Epstein–Barr virus-infected lymphocytes in post-transplant lymphoproliferative disorder (PTLD). In 2007, the term "Digital Transcriptome Subtraction" was coined by the Chang-Moore group, and was used to discover Merkel cell polyomavirus in Merkel-cell carcinoma. Simultaneously with the MCV discovery, this approach was used to implicate a novel arenavirus as the cause of fatality in a case where three patients died of similar illnesses shortly after organ transplantations from a single donor.
Method:
Construction of cDNA library: After treatment with DNase I to eliminate human genomic DNA, total RNA is extracted from primary infected tissue. Messenger RNA is then purified using an oligo-dT column that binds to the poly-A tail, a signal specifically found on transcribed genes. Using random hexamer priming, reverse transcriptase (RT) converts the mRNA into cDNA, which is cloned into bacterial vectors. Bacteria, usually E. coli, are then transformed with the cDNA vectors and selected using a marker; the collection of transformed clones is the cDNA library. This generates a snapshot of tissue mRNA that is stable and can be sequenced at a later stage.
Method:
Sequencing and quality control: The cDNA library must be sequenced to great depth (i.e. number of clones sequenced) in order to detect a theoretical rare pathogen sequence (Table 1), especially if the foreign sequence is novel. Chang-Moore recommend a sequencing depth of 200,000 transcripts or greater using multiple sequencing platforms.
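A rough back-of-the-envelope check (using the roughly 10-in-a-million viral transcript abundance cited later in this article; the exact figure varies by tissue and virus) illustrates why such depth is needed:

```latex
E[\text{pathogen reads}] \approx 200{,}000 \times \frac{10}{1{,}000{,}000} = 2,
```

so even at the recommended depth only a handful of pathogen-derived reads are expected to survive into the analysis.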
Stringent quality controls are then applied to the raw sequences to minimize false-positive results. The initial quality screen uses several general parameters to exclude ambiguous sequences, leaving behind a dataset of high-fidelity (Hi-Fi) reads; a minimal filtering sketch follows the list below.
A low Phred score cutoff is used to remove low-quality end sequences. Typically, a Phred score cutoff of 20 or 30 is used to ensure 99%–99.9% accuracy in each base call.
Vector and adaptor removal.
Low complexity - the complexity score of a sequence reflects the number of identical bases in a series (homopolymers), such as poly-dT or poly-dA.
Human repetitive DNA.
Length - this parameter depends on the optimized read length specific to the sequencing technology that was used.
BLAST and exclude E. coli genome sequences.
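A minimal sketch of such a quality filter (in Python; the thresholds and the simple single-base complexity measure are illustrative assumptions, not the published DTS parameters):

```python
# Minimal illustrative read filter (not the published DTS pipeline).
# Thresholds and the complexity measure are assumptions made for this sketch.

def mean_phred(qualities):
    """Mean Phred score of a read; Q20 ~ 99% and Q30 ~ 99.9% per-base accuracy."""
    return sum(qualities) / len(qualities)

def low_complexity(seq, max_fraction=0.8):
    """Flag reads dominated by a single base (e.g. poly-dA / poly-dT runs)."""
    seq = seq.upper()
    return max(seq.count(base) for base in "ACGT") / len(seq) > max_fraction

def passes_qc(seq, qualities, min_q=20, min_len=50):
    """Keep only high-fidelity (Hi-Fi) reads for the subtraction steps."""
    if len(seq) < min_len:              # length cutoff depends on the platform
        return False
    if mean_phred(qualities) < min_q:   # low-quality reads are discarded
        return False
    if low_complexity(seq):             # homopolymer-like reads are discarded
        return False
    return True

# Example: a 60-base mixed read with mean quality 32 passes; a poly-A read does not.
print(passes_qc("ACGT" * 15, [32] * 60))   # True
print(passes_qc("A" * 60,    [32] * 60))   # False
```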
Method:
BLAST to host genome: Using MEGABLAST, Hi-Fi reads are matched to sequences in annotated databases and any positive matches are subtracted from the dataset. The minimum hit length for a positive match to human sequence is typically 30 consecutive identical bases, which equates to a BLAST score of 60; the remaining sequences are generally BLASTed again with less stringent parameters to allow for slight mismatches (1 in 20 nucleotides). The vast majority of sequences (>99%) should be removed from the dataset at this stage.
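A minimal sketch of the subtraction step, assuming the MEGABLAST hits are available in the standard tabular output (one hit per line, with alignment length in the fourth column and bit score in the twelfth); the thresholds simply mirror the figures quoted above:

```python
# Illustrative host subtraction from tabular BLAST output, columns:
# qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore

def host_matched_reads(blast_tab_path, min_hit_len=30, min_bitscore=60.0):
    """Collect read IDs with a sufficiently long/strong match to the host."""
    matched = set()
    with open(blast_tab_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            read_id, aln_len, bitscore = fields[0], int(fields[3]), float(fields[11])
            if aln_len >= min_hit_len and bitscore >= min_bitscore:
                matched.add(read_id)
    return matched

def subtract(all_read_ids, blast_tab_path):
    """Reads left over after removing everything that matched the host."""
    return set(all_read_ids) - host_matched_reads(blast_tab_path)
```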
Method:
Subtracted sequences typically include:
Reference human transcriptome - eliminates any known human transcripts from expression library sets.
Reference human genome - eliminates genes that have been missed by the annotation process and any contaminating genomic sequences during cDNA library construction.
Mitochondrial DNA - mitochondrial DNA is highly abundant and polymorphic due to its rapid mutation rate.
Immunoglobulin region - the immunoglobulin loci are highly polymorphic and would otherwise yield false positives due to poor alignment to the reference genome.
Method:
Other vertebrate sequences
Unannotated sequences
Analysis of "non-host" candidates:
Alignment to pathogen databases: After stringent rounds of subtraction, the remaining sequences are clustered into non-redundant contigs and aligned to known pathogen sequences using low-stringency parameters. As pathogen genomes mutate quickly, nucleotide-nucleotide alignment (blastn) is usually uninformative, since bases can mutate without changing the amino acid residue owing to codon degeneracy. Matching the in silico translated protein sequences of all six open reading frames to the amino acid sequences of annotated proteins (blastx) is the preferred alignment method, as it increases the likelihood of identifying a novel pathogen by matching to a related strain or species. Experimental extension of candidate sequences might also be used at this stage to maximize the chances of a positive match.
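The blastx comparison rests on translating each candidate sequence in all six reading frames; the sketch below illustrates that idea with Biopython (assuming it is installed; the original analysis may have used other tooling):

```python
# Six-frame translation of a candidate read, as used conceptually by blastx.
from Bio.Seq import Seq

def six_frame_translations(nucleotide_sequence):
    """Translate the read and its reverse complement in frames 0, 1 and 2."""
    seq = Seq(nucleotide_sequence)
    frames = []
    for strand in (seq, seq.reverse_complement()):
        for offset in range(3):
            sub = strand[offset:]
            sub = sub[: len(sub) - len(sub) % 3]   # trim to a whole number of codons
            frames.append(str(sub.translate()))
    return frames

# Each of the six protein strings would then be aligned against annotated pathogen
# proteins, tolerating the nucleotide changes permitted by codon degeneracy.
for frame in six_frame_translations("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"):
    print(frame)
```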
Method:
De novo assembly: In cases where alignment to known pathogens is uninformative or ambiguous, contigs of candidate sequence can be used as templates for primer walking in primary infected tissue to generate the complete pathogen genome sequence. As viral transcripts are exceedingly rare relative to total tissue mRNA (roughly 10 transcripts in 1 million), it is unlikely that a transcriptome could be generated from the original candidate sequences alone, due to low coverage.
Method:
Validation of pathogen: Once a putative pathogen has been identified in the high-throughput sequencing data, it is imperative to validate the presence of the pathogen in infected patients using more sensitive techniques, such as:
RT-PCR and derivative methods, including 3'- and 5'-RACE, to confirm the existence of pathogen mRNA.
Immunohistochemistry using antibodies against related pathogens to determine the presence of the pathogen in tissues.
Serological tests to measure pathogen-specific antibody titer.
Bacterial culture/viral culture, which is considered the gold standard in laboratory diagnosis.
Applications:
The primary application for DTS lies in the identification of pathogenic viruses in cancer. It can also be used to identify viral pathogens in non-cancer-related disease. Future clinical applications could include the use of DTS on a routine basis in individuals. DTS could also apply to agriculture, identifying pathogens that affect output. Computational subtraction has already been used in a metagenomics study that associated viral infection by IAPV with colony collapse disorder in honey bees.
Applications:
Advantages
Requires no prior knowledge about the pathogen sequence.
Can identify previously unassociated, potentially treatable pathogens.
Uses already available molecular methods and resources.
Disadvantages
Identifies the presence of a pathogen but does not establish a causal link to disease. See Koch's postulates and the Bradford Hill criteria.
Must have a highly reliable, complete reference transcriptome for the organism being studied.
Lack of foreign sequence identification cannot entirely exclude a pathogenic foreign body.
**Tate pairing**
Tate pairing:
In mathematics, Tate pairing is any of several closely related bilinear pairings involving elliptic curves or abelian varieties, usually over local or finite fields, based on the Tate duality pairings introduced by Tate (1958, 1963) and extended by Lichtenbaum (1969). Rück & Frey (1994) applied the Tate pairing over finite fields to cryptography.
**SPP1 holin family**
SPP1 holin family:
The SPP1 Holin (SPP1 Holin) Family (TC# 1.E.31) consists of proteins of between 90 and 160 amino acyl residues (aas) in length that exhibit two transmembrane segments (TMSs). SPP1 is a double-stranded DNA phage that infects Gram-positive bacteria. Although annotated as holins, members of the SPP1 family are not yet functionally characterized. A representative list of proteins belonging to the SPP1 Holin family can be found in the Transporter Classification Database.
**VIT signals**
VIT signals:
In television broadcasting, VIT signals (vertical interval test signals) are a group of test signals inserted into the composite video signal. These signals are used to evaluate the transmission characteristics of the system between the test generator and the output of the demodulator, where the system includes the microwave links or TVROs as well as the TV transmitters and the transposers. There are both ATSC and EBU standards for VIT. (Because analogue television is being phased out globally, VIT standards are considered superseded.)
Blanking in CVS:
In a composite video signal (CVS) there are two types of blanking: horizontal and vertical. Horizontal blanking is between lines and vertical blanking is between fields (half frames). In a poorly tuned TV receiver the horizontal blanking can be seen at the right or left of the image and the vertical blanking can be seen at the top or bottom of the image. VIT signals are inserted in the vertical blanking.
Blanking in CVS:
Vertical blanking: In each field, vertical blanking lasts about 1612 μs in System B (also G and H; the analogue system in most of Europe) and 1333 μs in System M (the analogue TV system in the USA). This duration is equal to 25 lines in System B and 21 lines in System M. Although 7.5 lines are used for synchronization of the image, the remaining lines can be used for other purposes. Two of these lines in each field are reserved for test signals. Since there are two fields in each frame (image), the number of lines reserved for test signals is four per frame.
Test signals:
In both systems, line numbers 17 and 18 are assigned for VIT signals in each field. (These line numbers are used just for the first field. For the second field, they correspond to lines 280 and 281 in System M, and lines 330 and 331 in System B.) Usually the following test signals are used:
Luminance bar (low-frequency tilt)
2T signal (overshoot)
20T signal (differential gain and phase)
Staircase (luminance linearity)
Group of sine waves with different frequencies (video characteristics)
Color carrier superimposed on staircase (differential gain and phase)
Group of color carriers with different amplitudes (intermodulation of luminance and color)
**EXCLAIM**
EXCLAIM:
The EXtensible Cross-Linguistic Automatic Information Machine (EXCLAIM) was an integrated tool for cross-language information retrieval (CLIR), created at the University of California, Santa Cruz in early 2006, with some support for more than a dozen languages. The lead developers were Justin Nuger and Jesse Saba Kirchner.
EXCLAIM:
Early work on CLIR depended on manually constructed parallel corpora for each pair of languages. This method is labor-intensive compared to parallel corpora created automatically. A more efficient way of finding data to train a CLIR system is to use matching pages on the web which are written in different languages. EXCLAIM capitalizes on the idea of latent parallel corpora on the web by automating the alignment of such corpora in various domains. The most significant of these is Wikipedia itself, which includes articles in 250 languages. The role of EXCLAIM is to use semantics and linguistic analytic tools to align the information in these Wikipedias so that they can be treated as parallel corpora. EXCLAIM is also extensible to incorporate information from many other sources, such as the Chinese Community Health Resource Center (CCHRC).
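As a toy illustration of the latent-parallel-corpus idea (the link table and fetch function below are placeholders invented for this sketch, not EXCLAIM's actual code or data), articles linked across language editions can be treated as aligned document pairs:

```python
# Toy illustration of building a latent parallel corpus from cross-language
# article links; data and fetch function are placeholders, not a real API.

interlanguage_links = {
    "Information retrieval": {"de": "Information Retrieval", "zh": "信息检索"},
    "Semantics": {"de": "Semantik", "zh": "语义学"},
}

def fetch_article_text(language, title):
    """Placeholder for whatever article source a real system would use."""
    return f"[{language}] text of '{title}'"

def aligned_pairs(target_language):
    """Yield (English text, target-language text) document pairs."""
    for english_title, links in interlanguage_links.items():
        if target_language in links:
            yield (fetch_article_text("en", english_title),
                   fetch_article_text(target_language, links[target_language]))

for en_doc, de_doc in aligned_pairs("de"):
    print(en_doc, "<->", de_doc)
```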
EXCLAIM:
One of the main goals of the EXCLAIM project is to provide the kind of computational tools and CLIR tools for minority languages and endangered languages which are often available only for powerful or prosperous majority languages.
Current status:
In 2009, EXCLAIM was in a beta state, with varying degrees of functionality for different languages. Support for CLIR using the Wikipedia dataset and the most current version of EXCLAIM (v.0.5), including full UTF-8 support and Porter stemming for the English component, was available for the following twenty-three languages: Support using the Wikipedia dataset and an earlier version of EXCLAIM (v.0.3) was available for the following languages: Significant developments in the most recent version of EXCLAIM include support for Mandarin Chinese. By developing support for this language, EXCLAIM has added solutions to segmentation and encoding problems which will allow the system to be extended to many other languages written with non-European orthographic conventions. This support is supplied through the Trimming And Reformatting Modular System (TARMS) toolkit.
Current status:
Future versions of EXCLAIM will extend the system to additional languages. Other goals include incorporation of available latent datasets in addition to the Wikipedia dataset.
The EXCLAIM development plan calls for an integrated CLIR instrument, usable for searching from English for information in any of the supported languages, or searching from any of the supported languages for information in English, when EXCLAIM 1.0 is released. Future versions will allow searching from any supported language into any other, and searching from and into multiple languages.
Further applications:
EXCLAIM has been incorporated into several projects which rely on cross-language query expansion as part of their backends. One such project is a cross-linguistic readability software generation framework, detailed in work presented at ACL 2009.
**Phosphorus-31 nuclear magnetic resonance**
Phosphorus-31 nuclear magnetic resonance:
Phosphorus-31 NMR spectroscopy is an analytical chemistry technique that uses nuclear magnetic resonance (NMR) to study chemical compounds that contain phosphorus. Phosphorus is commonly found in organic compounds and coordination complexes (as phosphines), making it useful to measure 31P NMR spectra routinely. Solution 31P-NMR is one of the more routine NMR techniques because 31P has an isotopic abundance of 100% and a relatively high gyromagnetic ratio. The 31P nucleus also has a spin of 1⁄2, making spectra relatively easy to interpret. The only other highly sensitive spin-1⁄2 NMR-active nuclei that are monoisotopic (or nearly so) are 1H and 19F.
Operational aspects:
With a gyromagnetic ratio 40.5% of that for 1H, 31P NMR signals are observed near 202 MHz on an 11.7-Tesla magnet (used for 500 MHz 1H NMR measurements). Chemical shifts are referenced to 85% phosphoric acid, which is assigned the chemical shift of 0, with positive shifts to low field/high frequency. Due to the inconsistent nuclear Overhauser effect, integrations are not useful. Most often, spectra are recorded with protons decoupled.
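As a quick check of the figures above, the observation frequency simply scales with the gyromagnetic ratio:

```latex
\nu(^{31}\mathrm{P}) \approx 0.405 \times \nu(^{1}\mathrm{H})
                     = 0.405 \times 500\ \mathrm{MHz} \approx 202\ \mathrm{MHz}.
```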
Applications in chemistry:
31P-NMR spectroscopy is useful to assay purity and to assign structures of phosphorus-containing compounds because these signals are well resolved and often occur at characteristic frequencies. Chemical shifts and coupling constants span a large range but sometimes are not readily predictable. The Gutmann-Beckett method uses Et3PO in conjunction with 31P NMR-spectroscopy to assess the Lewis acidity of molecular species.
Applications in chemistry:
Chemical shifts: The ordinary range of chemical shifts spans from about δ 250 to δ −250, which is much wider than typical for 1H NMR. Unlike 1H NMR spectroscopy, 31P NMR shifts are primarily not determined by the magnitude of the diamagnetic shielding, but are dominated by the so-called paramagnetic shielding tensor (unrelated to paramagnetism). The paramagnetic shielding tensor, σp, includes terms that describe the radial expansion (related to charge), energies of excited states, and bond overlap. Illustrative of the effects that lead to big changes in chemical shifts are the shifts of the two phosphate esters (MeO)3PO (δ 2.1) and (t-BuO)3PO (δ −13.3). More dramatic are the shifts for the phosphine derivatives H3P (δ −240), (CH3)3P (δ −62), (i-Pr)3P (δ 20), and (t-Bu)3P (δ 61.9).
Applications in chemistry:
Coupling constants: One-bond coupling is illustrated by PH3, where J(P,H) is 189 Hz. Two-bond couplings, e.g. PCH, are an order of magnitude smaller. The situation for phosphorus-carbon couplings is more complicated, since two-bond couplings are often larger than one-bond couplings. The J(13C,31P) values for triphenylphosphine are −12.5, 19.6, 6.8, and 0.3 Hz for one-, two-, three-, and four-bond couplings, respectively.
Applications in chemistry:
Historical note: The sign convention for 31P NMR (and other nuclei) changed in 1975: "The dimensionless scale should be defined as positive in the high frequency (low field) direction." Therefore, note that manuscripts published before 1976 will generally use the opposite sign.
Biomolecular applications:
31P-NMR spectroscopy is widely used for studies of phospholipid bilayers and biological membranes in native conditions. The analysis of 31P-NMR spectra of lipids could provide a wide range of information about lipid bilayer packing, phase transitions (gel phase, physiological liquid crystal phase, ripple phases, non bilayer phases), lipid head group orientation/dynamics, and elastic properties of pure lipid bilayer and as a result of binding of proteins and other biomolecules.
Biomolecular applications:
In addition, a specific N-H...(O)-P experiment (INEPT transfer using the three-bond scalar coupling 3JN-P ≈ 5 Hz) can provide direct information about the formation of hydrogen bonds between amine protons of a protein and the phosphate of lipid headgroups, which is useful in studies of protein/membrane interactions.
**Klavika**
Klavika:
Klavika is a family of sans-serif fonts designed by Eric Olson and released by Process Type Foundry in 2004. It contains four weights: light, regular, medium, and bold (with corresponding italics) and variations of numerals. The family of typefaces is described as straight-sided technical sans-serifs flexible for editorial and identity design. The capital G has no bar, the capital Q has a tail at the bottom, the lowercase g is double-story, and the lowercase k has diagonal strokes that meet at the vertical, with a gap.
Features:
Old-style and small cap numerals (with tabular)
Small caps
True italics
Multiple language support
Full set of arrows
Available in OpenType, TrueType, PostScript, WOFF and EOT formats.
In use:
The "GALAXY" in the original Samsung Galaxy logo used Klavika Basic Light as the font with the only modification being that the "L" has a rounded corner. Later it was phased out and replaced by custom-made typography, especially Samsung Sharp Sans for product logos and Samsung One in advertising materials.
The Facebook logo uses a modified version of Klavika Bold.
The old DeviantArt logo used slightly modified regular and bold versions of Klavika.
The American TV network NBC used Klavika for on-screen branding in 2006 but has since changed its primary typeface several times. Its cable channels MSNBC and CNBC also use the font.
The American cable channel ESPN used Klavika for on-screen presentation from 2009; by 2014, it had switched to Helvetica.
Belgian Dutch language television channel VTM uses Klavika for on-screen branding and, since 2011, also on the news graphics.
Moses Mabhida Stadium in Durban, South Africa used Klavika in the signage during 2010 FIFA World Cup.
The South Korean bid for the 2022 FIFA World Cup used Klavika for the presentation in the Latin alphabet.
In use:
Chevrolet uses a customized version of Klavika as a corporate typeface. One noticeable difference is the shape of the capital M, which has straight rather than splayed sides. For the Greek and Cyrillic alphabets, however, local Chevrolet dealers—in Greece and some of the countries using Cyrillic—use typefaces similar to Klavika. The condensed fonts were designed by Process Type Foundry LLC with Aaron Carámbula for General Motors marketer FutureBrand as part of the re-design of Chevrolet in 2006. After the expiry of the exclusivity period, the commercial version of the font (Klavika Condensed) was released to the public in the fall of 2008. Chevrolet continued using Klavika until replacing it with custom fonts (Durant and Louis) around 2013.
In use:
Atlassian has been using this since their re-branding in October 2011. Starting with Super Bowl XLVII in 2013, CBS used Klavika in their sports broadcasts. It was also used in sister network CBS Sports Network. They switched to FF DIN starting with Super Bowl 50 in 2016.
Online gaming site Ijji uses Klavika.
Klavika is used in the logo and visual identity of Young Life.
Used for the World Youth Day 2011 visual identification of the city of Lublin, Poland.
Visual identification of the city of Katowice, Poland (Klavika CH), including the Katowice Visual Identification Document.
Košice Transport Company (Dopravný podnik mesta Košice) has been using the font since 2010.
The Glasgow Subway system now uses the font in all its recently re-branded visual identity.
Motorola Mobility uses this font in some advertisements on Motorola's homepage and in user manual titles.
Portuguese college Instituto Superior Técnico has used this typeface since its 2012 re-branding.
The University of Applied Sciences, Worms adopted it in its new logo, starting 1 September 2014.
Irish public TV channel RTÉ One started using the font in its presentation graphics from 1 January 2014.
Klavika is the main corporate typeface of Advanced Micro Devices since 2013 and is used along with Calibri.
IAM RoadSmart has used Klavika as its main typeface since rebranding in 2016.
YG Entertainment has used Klavika since its brand identity renewal in 2013.
The Polish Armed Forces (SZRP) and its service branches (Land Forces, Navy, Air Force, Special Forces, Territorial Defence Force) used the Klavika typeface for their logos.
**Sprite Remix**
Sprite Remix:
Sprite Remix was a line of "remixed" colorless caffeine-free sodas and drink-flavoring packets made by The Coca-Cola Company. Sprite Remix is one of the lesser-known Sprite flavors. Although based on Sprite, the Remixes were each flavored differently from the original. It was discontinued in 2005 in the United States. In the spring of 2015, the Tropical Sprite Remix flavor was reintroduced under the name Sprite Tropical and renamed Sprite Tropical Mix a year later.
Flavors:
Sprite Tropical Remix: Sprite with tropical fruit flavors, introduced 2002; reintroduced as Sprite Tropical in Spring 2015. Renamed Sprite Tropical Mix in Spring 2016.
Sprite BerryClear Remix: Sprite with berry flavors, introduced in April 2004.
Sprite Aruba Jam Remix: Sprite with fruit flavors, introduced in April 2005, short-lived.
'Remix Flavor Hits' packets:
Coca-Cola also had a do-it-yourself promotion, in which it offered free 1.25-ounce (36.9 ml) flavor packets that consumers ripped open and poured into their Sprite. Flavors included grape, vanilla, and cherry.
Sprite Tropical Mix:
Sprite Tropical Remix was re-released in the spring of 2015 under new branding. Sources on Twitter, Facebook, and Instagram have shown it popping up in the eastern United States, with eBay listings also appearing.
After a limited-roll out in spring 2015 and no public mention from Coca-Cola, the official Sprite website was updated showing the limited time re-release for Sprite Tropical Mix on February 29, 2016. The new bottle label shows that the tropical flavors are lemon/lime, strawberry, and pineapple.
In 2018, Sprite introduced a similar drink, MIX by Sprite Tropic Berry, exclusive to McDonald's restaurants. Along with the base lemon-lime flavor, Tropic Berry includes a blend of strawberry, orange, and pineapple flavor.
In 2019, The Coca-Cola Company redesigned Sprite packaging, replacing the drink's former retro branding with uniform minimalist branding, matching that of other flavors in the Sprite lineup.
**Time-lapse embryo imaging**
Time-lapse embryo imaging:
Time-lapse embryo imaging is an emerging non-invasive embryo selection technique used in reproductive biology. It is used to help select embryos with a lower risk of defects and/or greater potential for implantation. The procedure involves taking thousands of pictures of the growing embryo in vitro during incubation to study morphology and morphokinetic parameters. In terms of pregnancy rates, live births, or the risk of stillbirth or miscarriage, there is a lack of evidence of sufficient quality to know whether there is any difference between time-lapse embryo imaging and conventional embryo assessment in in-vitro fertilisation (IVF). Further trials are needed in order to determine whether time-lapse embryo imaging can have an impact on outcomes such as live birth for couples undergoing IVF or ICSI.
**International Society of Psychiatric Genetics**
International Society of Psychiatric Genetics:
The International Society of Psychiatric Genetics (ISPG) is a learned society that aims to "promote and facilitate research in the genetics of psychiatric disorders, substance use disorders and allied traits". To this end, among other things, it organizes an annual "World Congress of Psychiatric Genetics". It also awards each year the "Ming Tsuang Lifetime Achievement Award" for scientists who have made major contributions to the field of psychiatric genetics and the "Theodore Reich Young Investigator Award" for work of exceptional merit by researchers under 40 years of age.
Presidents:
The following people have been president of the society:
1992–1996: Theodore Reich
1996–2000: Peter McGuffin
2000–2005: Mike Owen
2005–2010: Ming Tsuang
2010–2012: Nick Craddock
2012–2016: Francis J. McMahon
2016–present: Thomas G. Schulze
Ming Tsuang Lifetime Achievement Award:
The annual Ming Tsuang Lifetime Achievement Award is given to a distinguished senior scientist who has made significant and sustained contributions to the advancement of the field of psychiatric genetics. It is named for Ming Tsuang, who was the recipient of the award in 1995. The following persons have received this award:
**Tin-silver-copper**
Tin-silver-copper:
Tin-silver-copper (Sn-Ag-Cu, also known as SAC) is a lead-free (Pb-free) alloy commonly used for electronic solder. It is the main choice for lead-free surface-mount technology (SMT) assembly in the industry, as it is near-eutectic, with adequate thermal fatigue properties, strength, and wettability. Lead-free solder is gaining much attention as the environmental effects of lead in industrial products are recognized, and as a result of Europe's RoHS legislation to remove lead and other hazardous materials from electronics. Japanese electronics companies have also looked at Pb-free solder for its industrial advantages.
Tin-silver-copper:
Typical alloys are 3–4% silver, 0.5–0.7% copper, and the balance (95%+) tin. For example, the common "SAC305" solder is 3.0% silver and 0.5% copper. Cheaper alternatives with less silver are used in some applications, such as SAC105 and SAC0307 (0.3% silver, 0.7% copper), at the expense of a somewhat higher melting point.
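A small sketch of the naming shorthand described above (a reading of the SACxyz pattern for illustration, not an official parser): the leading digits give the silver content and the trailing digit(s) the copper content, each in tenths of a percent, with tin as the balance.

```python
# Hypothetical helper illustrating the SACxyz shorthand described above.
# "SAC305" -> 3.0% Ag, 0.5% Cu, balance Sn.  Not an official standard parser.

def sac_composition(designation):
    digits = designation.upper().replace("SAC", "")
    silver = int(digits[:2]) / 10.0   # leading two digits: silver, tenths of a percent
    copper = int(digits[2:]) / 10.0   # remaining digit(s): copper, tenths of a percent
    return {"Sn": round(100.0 - silver - copper, 1), "Ag": silver, "Cu": copper}

print(sac_composition("SAC305"))    # {'Sn': 96.5, 'Ag': 3.0, 'Cu': 0.5}
print(sac_composition("SAC0307"))   # {'Sn': 99.0, 'Ag': 0.3, 'Cu': 0.7}
```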
History:
In 2000, there were several lead-free assembly and chip product initiatives being driven by the Japan Electronic Industries Development Association (JEIDA) and the Waste Electrical and Electronic Equipment Directive (WEEE). These initiatives resulted in tin-silver-copper alloys being considered and tested as lead-free solder ball alternatives for array product assemblies. In 2003, tin-silver-copper was being used as a lead-free solder. However, its performance was criticized because it left a dull, irregular finish and it was difficult to keep the copper content under control. In 2005, tin-silver-copper alloys constituted approximately 65% of lead-free alloys used in the industry, and this percentage has been increasing. Large companies such as Sony and Intel switched from using lead-containing solder to a tin-silver-copper alloy.
Constraints and tradeoffs:
The process requirements for (Pb-free) SAC solders and Sn-Pb solders are different both materially and logistically for electronic assembly. In addition, the reliability of Sn-Pb solders is well established, while SAC solders are still undergoing study (though much work has been done to justify the use of SAC solders, such as the iNEMI Lead Free Solder Project).
Constraints and tradeoffs:
One important difference is that Pb-free soldering requires higher temperatures and increased process control to achieve the same results as the tin-lead method. The melting point of SAC alloys is 217–220 °C, or about 34 °C higher than the melting point of the eutectic tin-lead (63/37) alloy. This requires peak temperatures in the range of 235–245 °C to achieve wetting and wicking. Some of the components susceptible to SAC assembly temperatures are electrolytic capacitors, connectors, opto-electronics, and older-style plastic components. However, a number of companies have started offering 260 °C compatible components to meet the requirements of Pb-free solders. iNEMI has proposed that a good target for development purposes would be around 260 °C. Also, SAC solders are alloyed with a larger number of metals, so there is the potential for a far wider variety of intermetallics to be present in a solder joint. These more complex compositions can result in solder joint microstructures that are not as thoroughly studied as current tin-lead solder microstructures. These concerns are magnified by the unintentional use of lead-free solders in either processes designed solely for tin-lead solders or environments where material interactions are poorly understood, for example the reworking of a tin-lead solder joint with Pb-free solder. These mixed-finish possibilities could negatively impact the solder's reliability.
Advantages:
SAC solders have outperformed high-Pb solder C4 joints in ceramic ball grid array (CBGA) systems, which are ball-grid arrays with a ceramic substrate. The CBGA showed consistently better results in thermal cycling for Pb-free alloys. The findings also show that SAC alloys are proportionately better in thermal fatigue as the thermal cycling range decreases. SAC performs better than Sn-Pb at the less extreme cycling conditions. Another advantage of SAC is that it appears to be more resistant to gold embrittlement than Sn-Pb. In test results, the strength of the joints is substantially higher for the SAC alloys than the Sn-Pb alloy. Also, the failure mode changes from a partially brittle joint separation to a ductile tearing with the SAC.
**Empathetic sound**
Empathetic sound:
Empathetic sound in a film refers to music or sound effects that match the present action or scene in rhythm, tone, and/or mood and aim to evoke that mood in the audience. The concept, coined by Michel Chion and also associated with Robert Stam, is derived from empathy, i.e., feeling the feelings of others. The opposite of empathetic sound is anempathetic sound. Empathetic sound may be either non-diegetic, e.g., a sad song playing over a depressing or upsetting scene, or diegetic, e.g., a song playing on the radio that matches a character's feelings.
**Sterculinine**
Sterculinine:
Sterculinine is an alkaloid isolated from the Chinese drug Pangdahai (an extract of Sterculia lychnophora).
**Edward O. Thorp's Real Blackjack**
Edward O. Thorp's Real Blackjack:
Edward O. Thorp's Real Blackjack is a 1990 video game published by Villa Crespo Software.
Gameplay:
Edward O. Thorp's Real Blackjack is a game in which Edward O. Thorp serves as the player's expert guide in learning how to play blackjack better.
Reception:
Michael S. Lasky reviewed the game for Computer Gaming World, and stated that "It is singularly one of the best casino game/tutorials available today." Harry Bee for Compute! said the game "doesn't look as slick or play as simply as card games that focus on entertainment. It's substantial enough to take as lightly or seriously as you like."
**Church pennant**
Church pennant:
A church pennant is a pennant flown to indicate that a religious service is in progress. It is flown on ships and establishments (bases).
Marine Nationale:
The French Navy maintained a church pennant but it fell into disuse in 1905.
Royal Navy and Royal Netherlands Navy:
The Church Pennant is used by the Royal Navy, other navies of the Commonwealth, and the Royal Netherlands Navy.
History: The broad pennant combination of the English Flag at the hoist and the Dutch National Flag in the fly originates from the Anglo-Dutch wars of the late 17th century, when it was used on Sundays to indicate that a service was in progress and a ceasefire existed between the warring nations.
United States Navy:
The United States Navy maintains several church pennants, of which the appropriate one is flown immediately above the ensign wherever the ensign is displayed, at the gaff when under way, or at the flagstaff when not under way, when religious services are held aboard ship by a Navy chaplain. Originally, the only authorized church pennant was for Christian chaplains, regardless of specific denomination. Later, in 1975, the Secretary of the Navy approved a similar Jewish worship pennant.
**Sex industry**
Sex industry:
The sex industry (also called the sex trade) consists of businesses that either directly or indirectly provide sex-related products and services or adult entertainment. The industry includes activities involving direct provision of sex-related services, such as prostitution, strip clubs, host and hostess clubs and sex-related pastimes, such as pornography, sex-oriented men's magazines, sex movies, sex toys and fetish or BDSM paraphernalia. Sex channels for television and pre-paid sex movies for video on demand, are part of the sex industry, as are adult movie theaters, sex shops, peep shows, and strip clubs. The sex industry employs millions of people worldwide, mainly women. These range from the sex worker, also called adult service provider (ASP) or adult sex provider, who provides sexual services, to a multitude of support personnel.
Etymology:
The origins of the term sex industry are uncertain, but it appears to have arisen in the 1970s. A 1977 report by the Ontario Royal Commission on Violence in the Communications Industry (LaMarsh Commission) quoted author Peter McCabe as writing in Argosy: "Ten years ago the sex industry did not exist. When people talked of commercial sex they meant Playboy." A 1976 article in The New York Times by columnist Russell Baker claimed that "[M]ost of the problems created by New York City's booming sex industry result from the city's reluctance to treat it as an industry", arguing why sex shops constituted an "industry", and should be treated as such by concentrating them in a single neighborhood, suggesting the "sex industry" was not yet commonly recognized as such.
Types:
Prostitution: Prostitution is a main component of the sex industry and may take place in a brothel, at a facility provided by the prostitute, at a client's hotel room, in a parked car, or on the street. Often this is arranged through a pimp or an escort agency. Prostitution involves a prostitute or sex worker providing commercial sexual services to a client. In some cases, the prostitute is at liberty to determine whether she or he will engage in a particular type of sexual activity, but forced prostitution and sexual slavery do exist in some places around the world. Reasons why an individual may enter into prostitution are varied. Socialist and radical feminists have cited poverty, oppressive capitalistic processes, and patriarchal societies that marginalize people based on race and class as reasons for the continued presence of prostitution, as these aspects all work together to maintain oppression. Other reasons include displacement due to conflict and war. Institutionalized racism in the United States has been cited as a reason for the prevalence of sex workers who are Black or other people of color, as this leads to inequality and a lack of access to resources. The legality of prostitution and associated activities (soliciting, brothels, procuring) varies by jurisdiction. Yet even where it is illegal, a thriving underground business usually exists because of high demand and the high income that can be made by pimps, brothel owners, escort agencies, and traffickers. A brothel is a commercial establishment where people may engage in sexual activity with a prostitute, though for legal or cultural reasons brothels may describe themselves as massage parlors, bars, strip clubs or by some other description. Sex work in a brothel is considered safer than street prostitution. Prostitution and the operation of brothels are legal in some countries, but illegal in others. For instance, there are legal brothels in Nevada, US, due to the legalization of prostitution in some areas of the state. In countries where prostitution and brothels are legal, brothels may be subject to many and varied restrictions. Forced prostitution is usually illegal, as is prostitution by or with minors, though the age may vary. Some countries prohibit particular sex acts. In some countries, brothels are subject to strict planning restrictions and in some cases are confined to designated red-light districts. Some countries prohibit or regulate how brothels advertise their services, or they may prohibit the sale or consumption of alcohol on the premises. In some countries where operating a brothel is legal, some brothel operators may choose to operate illegally.
Types:
Some men and women may travel away from their home to engage with local prostitutes, in a practice called sex tourism, and can have a variety of different socio-economic effects on the destinations. Male sex tourism can create or augment demand for sex services in the host countries, while female sex tourism tends not to use facilities that are specifically devoted to that purpose. Like tourism in general, sex tourism can make a significant contribution to local economies, especially in popular urban centers and places particularly known as sex tourism destinations. Sex tourism may arise as a result of stringent anti-prostitution laws in a tourist's home country, and although it may contribute to the destination economy, it can create social problems in the host country.
Types:
Prostitution is extremely prevalent in Asia, particularly in Southeast Asian nations such as Indonesia, Malaysia, Philippines, and Thailand. Due to the longstanding economic instability of many of these nations, increasing numbers of women have been forced to turn towards the sex industry there for work. According to Lin Lim, an International Labour Organization official who directed a study on prostitution in Southeast Asia, "it is very likely that women who lose their jobs in manufacturing and other service sectors and whose families rely on their remittances may be driven to enter the sex sector." The sex industry of some destinations has consequently grown to become their dominant commercial sector. Conversely, the sex industry in China has been revived by the nation's recent economic success. The nation's liberal economic policies in the early 1980s have been credited with revitalizing the sex industry as rural communities rapidly expand into highly developed urban centers. A typical example of this can be found in the city of Dalian. The city was declared a special economic zone in 1984; by the twenty-first century what had been a small fishing community developed an advanced commercial sector and a correspondingly large sex industry. A large portion of China's sex workers are immigrants from other Asian nations, such as Korea and Japan. In spite of these circumstances, most Asian countries do not have strong policies regarding prostitution. Their governments are challenged in this regard because of the differing contexts that surround prostitution, from voluntary and financially beneficial labor to virtual slavery. The increasing economic prominence of China and Japan have made these issues a global concern. As a result of Southeast Asia's lax policies regarding prostitution, the region has also become a hotbed for sex tourism, with a significant portion of this industry's clients being North American or European.
Types:
Pornography: Pornography is the explicit portrayal of sexual subject matter for the purposes of sexual arousal and erotic satisfaction. A pornographic model poses for pornographic photographs. A pornographic film actor or porn star performs in pornographic films. In cases where only limited dramatic skills are involved, a performer in pornographic films may be called a pornographic model. Pornography can be provided to the consumer in a variety of media, ranging from books, magazines, postcards, photos, sculpture, drawing, painting, animation, sound recording, film, video, or video games. However, when sexual acts are performed for a live audience, by definition it is not pornography, as the term applies to the depiction of the act, rather than the act itself. Thus, portrayals such as sex shows and striptease are not classified as pornography.
Types:
The first home PCs capable of network communication prompted the arrival of online services for adults in the late 1980s and early 1990s. The wide-open early days of the World Wide Web quickly snowballed into the dot-com boom, in part fueled by an incredible global increase in the demand for and consumption of pornography and erotica. Around 2009, the U.S. porn industry's revenue of $10–15 billion a year was more than the combined revenue of professional sports and live music, and roughly on par with or above Hollywood's box office revenue. There is mixed evidence on the social impact of pornography. Some insights come from meta-analyses synthesizing data from prior research. A 2015 meta-analysis indicated that pornography consumption is correlated with sexual aggression. However, it is unknown whether pornography promotes, reduces or has no effect on sexual aggression at an individual level, because this correlation may not be causal. In fact, counterintuitively, pornography has been found to reduce sexual aggression at a societal level. A 2009 review stated that all scientific investigations of increases in the availability of pornography show no change or a decrease in the level of sexual offending. The question of whether pornography consumption affects consumers' happiness was addressed by a 2017 meta-analysis. It concluded that men who consume pornography are less satisfied with some areas of their lives, but pornography consumption does not make a significant difference in other areas, or to the lives of women. Additionally, a sample of Americans revealed in 2017 that those who had viewed pornography were more likely to experience a romantic relationship breakup than their non-pornography-watching counterparts, and that the effect was more pronounced for men.
Types:
Other types: Adult entertainment is entertainment intended to be viewed by adults only, and distinguished from family entertainment. The style of adult entertainment may be ribaldry or bawdry. Any entertainment that normally includes sexual content qualifies as adult entertainment, including sex channels for television and pre-paid sex films for "on demand", as well as adult movie theaters, sex shops, and strip clubs. It also includes sex-oriented men's magazines, sex movies, sex toys and fetish and BDSM paraphernalia. Sex workers can be prostitutes, call girls, pornographic film actors, pornographic models, sex show performers, erotic dancers, striptease dancers, bikini baristas, telephone sex operators, cybersex operators, erotic massagers, or amateur porn stars for online sex sessions and videos. Other specialists in the wider industry include courtesans and dominatrixes, some of whom may hope to earn more by specialising in these niche markets.
Types:
Other members of the sex industry include the hostesses that work in many bars in China. These hostesses are women who are hired by men to sit with them and provide them with company, which entails drinking and making conversation, while the men flirt and make sexual comments. A number of these hostesses also offer sexual services at offsite locations to the men who hire them. Although this is not done by every woman who works as a hostess in the bars of China, the hostesses are all generally labeled as "grey women". This means that while they are not seen as prostitutes, they are not considered suitable marriage partners for many men. Other women who are included in the "grey women" category are the permanent mistresses or "second wives" that many Chinese businessmen have. The Chinese government makes efforts to keep secret the fact that many of these hostesses are also prostitutes and make up a significant part of the sex industry; it does not want China's image in the rest of the world to become sullied. Hostesses are given a significant degree of freedom to choose whether or not they would like to service a client sexually, although a refusal does sometimes spark conflict. In addition, like any other industry, there are people who work in or service the sex industry as managers, film crews, photographers, website developers and webmasters, sales personnel, book and magazine writers and editors, etc. Some create business models, negotiate trade, make press releases, draw up contracts with other owners, buy and sell content, offer technical support, run servers, billing services, or payroll, organize trade shows and various events, do marketing and sales forecasts, provide human resources, or provide tax services and legal support. Usually, those in management or staff do not have direct dealings with sex workers, instead hiring photographers who have direct contact with the sex workers.
Perspectives:
The sex industry is controversial, and there are people, organizations and governments that have objections to it, and, as a result, pornography, prostitution, striptease and other similar occupations are illegal in many countries. This is typically the case in countries with strong religious traditions.
The term anti-pornography movement is used to describe those who argue that pornography has a variety of harmful effects on society, such as encouragement of human trafficking, desensitization, pedophilia, dehumanization, exploitation, sexual dysfunction, and inability to maintain healthy sexual relationships.
Perspectives:
Feminist views: Feminism is divided on the issue of the sex industry. In her essay "What is wrong with prostitution", Carole Pateman makes the point that it is literally the objectification of women: they are making their bodies an object that men can buy for a price. She also makes the point that prostitution and many other sex industries reinforce the idea of male ownership of a woman. On the other hand, some other feminists see the sex industry as empowering women; such occupations could be seen as simply jobs, and the women working in them are breaking free from social norms that previously kept their sexuality under wraps as immoral.
Perspectives:
Based on these arguments, Sweden, Norway and Iceland have criminalized the buying of sexual services, while decriminalizing the selling of sexual services. (In other words, clients and pimps can be prosecuted for moneyed sexual transactions, but not prostitutes.) Supporters of this model of legislation claim reduced illegal prostitution and human trafficking in these countries. Opponents dispute these claims. Women's rights organisations and sex workers have opposed the Nordic model and attempts to criminalise those paying for sex, saying that it pushes the industry underground, makes work more dangerous for sex workers and increases violence against women, instead supporting the full decriminalisation or legalisation of sex work. Some feminists, such as Gail Dines, are opposed to pornography, arguing that it is an industry which exploits women and which is complicit in violence against women, both in its production (where they charge that abuse and exploitation of women performing in pornography are rampant) and in its consumption (where they charge that pornography eroticizes the domination, humiliation, and coercion of women, and reinforces sexual and cultural attitudes that are complicit in rape and sexual harassment). They charge that pornography contributes to the male-centered objectification of women and thus to sexism. However, other feminists are opposed to censorship, and have argued against the introduction of anti-porn legislation in the United States—among them Betty Friedan, Kate Millett, Karen DeCrow, Wendy Kaminer and Jamaica Kincaid.
Socio-economic issues:
Use of children: While the legality of adult sexual entertainment varies by country, the use of children in the sex industry is illegal nearly everywhere in the world.
Socio-economic issues:
Commercial sexual exploitation of children (CSEC) is the "sexual abuse by the adult and remuneration in cash or kind to the child or a third person or persons. The child is treated as a sexual object and as a commercial object". CSEC includes the prostitution of children, child pornography, child sex tourism and other forms of transactional sex where a child engages in sexual activities to have key needs fulfilled, such as food, shelter or access to education. It includes forms of transactional sex where the sexual abuse of children is not stopped or reported by household members, due to benefits derived by the household from the perpetrator. CSEC is prevalent in Asia and parts of Latin America.
Socio-economic issues:
Thailand, Cambodia, India, Brazil and Mexico have been identified as the primary countries where the commercial sexual exploitation of children takes place. Certain places around the world are recognized for child sex tourism.
Socio-economic issues:
Low socio-economic status: Caste-based prostitution: Castes are largely hereditary social classes often emerging around certain professions. Lower castes are associated with professions considered "unclean", which has often included prostitution. In pre-modern Korea, the Kisaeng were women from the lower caste Cheonmin who were trained to provide entertainment, conversation, and sexual services to men of the upper class. In South Asia, castes whose females are involved in prostitution by tradition, sometimes called intergenerational prostitution, include the Bedias, the Perna caste, the Banchhada, the Nat caste and, in Nepal, the Badi people.
Socio-economic issues:
Migrants: Some researchers have claimed that sex workers can benefit from their profession in terms of immigration status. In her essay "Selling Sex for Visas: Sex Tourism as a Stepping-Stone to International Migration", anthropologist Denise Brennan cited an example of prostitutes in the Dominican Republic resort town of Sosúa, where some female prostitutes marry their customers in order to immigrate to other countries and seek a better life. The customers are, however, the ones that hold the power in this situation, as they can withhold or revoke the sex worker's visa, either denying them the ability to immigrate or forcing them to return to their country of origin. Sex workers are also at risk of judgement from family members and relatives for having been associated with the sex tourism industry. Migrant sex work is also tied to globalization, which has produced growth both in sex tourism and in the migration of women to places where the sex industry thrives.
Socio-economic issues:
Effect on relationships: Dolf Zillmann asserts that extensive viewing of pornographic material produces many sociological effects which he characterizes as unfavorable, including a decreased respect for long-term, monogamous relationships, and an attenuated desire for procreation. He claims that pornography can "potentially undermine the traditional values that favor marriage, family, and children" and that it depicts sexuality in a way which is not connected to "emotional attachment, of kindness, of caring, and especially not of continuance of the relationship, as such continuance would translate into responsibilities".
Socio-economic issues:
Effect on crime: Additionally, some researchers claim that pornography causes unequivocal harm to society by increasing rates of sexual assault, a line of research which has been critiqued in "The effects of Pornography: An International Perspective" on external validity grounds, while others claim there is a correlation between pornography and a decrease of sex crimes.
Socio-economic issues:
Discrimination and exoticization: Some customers see sex workers from other countries as exotic commodities that can be fetishized or exploited. Many producers and proponents of pornography featuring gay actors claim that this work is liberating and offers them a voice in popular media, while critics view it as a degrading eroticization of inequality and argue that advocates for this new line of cinema are only creating a new barrier for homosexuals to contend with.
Socio-economic issues:
Spread of diseases: The sex industry also raises concerns about the spread of STDs.
**1-Hexene**
1-Hexene:
1-Hexene (hex-1-ene) is an organic compound with the formula C6H12. It is an alkene that is classified in industry as higher olefin and an alpha-olefin, the latter term meaning that the double bond is located at the alpha (primary) position, endowing the compound with higher reactivity and thus useful chemical properties. 1-Hexene is an industrially significant linear alpha olefin. 1-Hexene is a colourless liquid.
Production:
1-Hexene is commonly manufactured by two general routes: (i) full-range processes via the oligomerization of ethylene and (ii) on-purpose technology. A minor route to 1-hexene, used commercially on smaller scales, is the dehydration of hexanol. Prior to the 1970s, 1-hexene was also manufactured by the thermal cracking of waxes. Linear internal hexenes were manufactured by chlorination/dehydrochlorination of linear paraffins. "Ethylene oligomerization" combines ethylene molecules to produce linear alpha-olefins of various chain lengths with an even number of carbon atoms. This approach results in a distribution, or "full range", of alpha-olefins. The Shell higher olefin process (SHOP) employs this approach. Linde and SABIC have developed the α-SABLIN technology using the oligomerization of ethylene to produce 21 percent 1-hexene. CP Chemicals and Innovene also have full-range processes. Typically, the 1-hexene content of the distribution is about twenty percent in the Ethyl (Innovene) process, but only twelve percent in the CP Chemicals and Idemitsu processes.
Production:
An on-purpose route to 1-hexene using ethylene trimerization was first brought on stream in Qatar in 2003 by Chevron-Phillips. A second plant was scheduled to start in 2011 in Saudi Arabia and a third was planned for 2014 in the US. The Sasol process is also considered an on-purpose route to 1-hexene. Sasol commercially employs Fischer–Tropsch synthesis to make fuels from synthesis gas derived from coal. The synthesis recovers 1-hexene from the aforementioned fuel streams, where the initial 1-hexene concentration cut may be 60% in a narrow distillation, with the remainder being vinylidenes, linear and branched internal olefins, linear and branched paraffins, alcohols, aldehydes, carboxylic acids, and aromatic compounds. The trimerization of ethylene by homogeneous catalysts has been demonstrated. An alternative on-purpose route has been reported by Lummus Technology.
Applications:
The primary use of 1-hexene is as a comonomer in production of polyethylene. High-density polyethylene (HDPE) and linear low-density polyethylene (LLDPE) use approximately 2–4% and 8–10% of comonomers, respectively.
Another significant use of 1-hexene is the production of the linear aldehyde heptanal via hydroformylation (oxo synthesis). Heptanal can be converted to the short-chain fatty acid heptanoic acid or the alcohol heptanol.
The chemical is used in the synthesis of flavors, perfumes, dyes and resins.
Hazards:
1-Hexene is considered dangerous because in liquid and vapor form it is highly flammable, and it may be fatal if swallowed and it enters the airways.
The widespread use of 1-hexene may result in its release to the environment through various waste streams. The substance is toxic to aquatic organisms.
**Axe tie**
Axe tie:
Axe ties are railway ties (or sleepers) that are hewn by hand, usually with a broadaxe. There are 2,900 ties per mile of track on a first-class railroad. The early railways would not accept ties cut with a saw, as it was claimed that the kerf of the saw splintered the fibres of the wood, leaving them more likely to soak up moisture and causing premature rot.
The process:
Geoff Marples wrote an account of being a tiehack in the East Kootenays in 1938 and described the process of making axe ties. First a suitable tree was chosen, then felled and limbed. Next came scoring, which is chopping, by eye without a chalk line, notches to remove extra wood about every 10 inches (250 mm); hewing the trunk on only two sides unless the log was over 11 inches (280 mm) in diameter; bucking (cutting to length, in this case 8 ft or 2.44 m); peeling off any remaining bark; and stacking the ties so a chain could be wrapped around them. Next came skidding each group of ties to a landing with a team of horses, then loading and hauling the ties to a railway siding by truck and unloading them by hand. Scaling was the key event, where a railroad inspector accepted or culled (rejected) each tie and graded it as a number one (7 by 9 in or 178 by 229 mm, used for the main railroad lines) or a number two (6 by 6 in or 152 by 152 mm, used for sidings). Loading the 200-pound (91 kg) ties by hand onto a car was the last task. Marples wrote that he netted 48¢ for each grade one and 36¢ for each grade two, and made $150 for a winter's work.
Wood species:
Cedar was the most sought-after wood for ties, since it is known for being extremely resistant to rot. However, as electric power came into more common use in the early 1900s, cedar was increasingly substituted with other species such as tamarack. In northern regions where jack pine was plentiful, that species became a more common source for railway ties. Jack pine ties did not last as long as cedar or tamarack when lying on the ground, but were cheaper to produce. As creosote treatment came into use the axe ties were phased out, but jack pine remained the softwood best suited for ties.
Production in Canada:
Axe tie production was an early industry of importance for many communities in Ontario along the railway in the early 1900s. Examples include Foleyet and Nemegos.
**ISO/IEC 15504**
ISO/IEC 15504:
ISO/IEC 15504 Information technology – Process assessment, also termed Software Process Improvement and Capability dEtermination (SPICE), is a set of technical standards documents for the computer software development process and related business management functions. It is one of the joint International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standards, which was developed by the ISO and IEC joint subcommittee, ISO/IEC JTC 1/SC 7. ISO/IEC 15504 was initially derived from the process lifecycle standard ISO/IEC 12207 and from maturity models like Bootstrap, Trillium and the Capability Maturity Model (CMM).
ISO/IEC 15504:
ISO/IEC 15504 has been superseded by ISO/IEC 33000:2015 Information technology – Process assessment – Concepts and terminology as of March, 2015.
Overview:
ISO/IEC 15504 is the reference model for maturity models (consisting of capability levels, which in turn consist of process attributes and, further, of generic practices) against which assessors can place the evidence they collect during an assessment, so that they can give an overall determination of the organization's capabilities for delivering products (software, systems, and IT services).
History:
A working group was formed in 1993 to draft the international standard and used the acronym SPICE. SPICE initially stood for Software Process Improvement and Capability Evaluation, but in consideration of French concerns over the meaning of evaluation, SPICE has since been renamed Software Process Improvement and Capability Determination. SPICE is still used as the name of the standard's user group and as the title of the annual conference. The first SPICE conference was held in Limerick, Ireland in 2000, SPICE 2003 was hosted by ESA in the Netherlands, SPICE 2004 was hosted in Portugal, SPICE 2005 in Austria, SPICE 2006 in Luxembourg, SPICE 2007 in South Korea, SPICE 2008 in Nuremberg, Germany and SPICE 2009 in Helsinki, Finland.
History:
The first versions of the standard focused exclusively on software development processes. This was expanded to cover all related processes in a software business, for example project management, configuration management, quality assurance, and so on. The list of processes covered grew to cover six areas: organizational, management, engineering, acquisition supply, support, and operations.
In a major revision to the draft standard in 2004, the process reference model was removed and is now related to the ISO/IEC 12207 (Software Lifecycle Processes). The issued standard now specifies the measurement framework and can use different process reference models. There are five general and industry models in use.
Part 5 specifies software process assessment and part 6 specifies system process assessment.
The latest work in the ISO standards working group includes creation of a maturity model, which is planned to become ISO/IEC 15504 part 7.
The standard:
The Technical Report (TR) document for ISO/IEC TR 15504 was divided into 9 parts. The initial International Standard was recreated in 5 parts. This was proposed by Japan when the TRs were published in 1997.
The International Standard (IS) version of ISO/IEC 15504 now comprises 6 parts. The 7th part is currently in an advanced Final Draft Standard form and work has started on part 8.
Part 1 of ISO/IEC TR 15504 explains the concepts and gives an overview of the framework.
Reference model ISO/IEC 15504 contains a reference model. The reference model defines a process dimension and a capability dimension.
The process dimension in the reference model is not the subject of part 2 of ISO/IEC 15504, but part 2 refers to external process lifecycle standards including ISO/IEC 12207 and ISO/IEC 15288. The standard defines means to verify conformity of reference models.
Processes The process dimension defines processes divided into the five process categories of: customer-supplier, engineering, supporting, management, and organization. With new parts being published, the process categories will expand, particularly for IT service process categories and enterprise process categories.
The standard:
Capability levels and process attributes For each process, ISO/IEC 15504 defines a capability level on the following scale: 0 Incomplete process, 1 Performed process, 2 Managed process, 3 Established process, 4 Predictable process, and 5 Optimizing process. The capability of processes is measured using process attributes. The international standard defines nine process attributes: 1.1 Process performance, 2.1 Performance management, 2.2 Work product management, 3.1 Process definition, 3.2 Process deployment, 4.1 Process measurement, 4.2 Process control, 5.1 Process innovation, and 5.2 Process optimization. Each process attribute consists of one or more generic practices, which are further elaborated into practice indicators to aid assessment performance.
The standard:
Rating scale of process attributes Each process attribute is assessed on a four-point (N-P-L-F) rating scale: Not achieved (0–15%), Partially achieved (>15–50%), Largely achieved (>50–85%), and Fully achieved (>85–100%). The rating is based upon evidence collected against the practice indicators, which demonstrate fulfillment of the process attribute.
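As an illustration of how an assessor might apply the N-P-L-F scale, here is a minimal Python sketch. The function name and the idea of encoding the percentage bands in code are illustrative only and are not part of the standard.

```python
def rate_process_attribute(achievement_percent: float) -> str:
    """Map an achievement percentage (0-100) to an N-P-L-F rating."""
    if not 0 <= achievement_percent <= 100:
        raise ValueError("achievement must be between 0 and 100 percent")
    if achievement_percent <= 15:
        return "N"  # Not achieved (0-15%)
    if achievement_percent <= 50:
        return "P"  # Partially achieved (>15-50%)
    if achievement_percent <= 85:
        return "L"  # Largely achieved (>50-85%)
    return "F"      # Fully achieved (>85-100%)

# Example: evidence suggests roughly 70% fulfilment of attribute 2.1 Performance management
print(rate_process_attribute(70))  # -> "L"
```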
Assessments ISO/IEC 15504 provides a guide for performing an assessment. This includes: the assessment process, the model for the assessment, and any tools used in the assessment. Assessment process Performing assessments is the subject of parts 2 and 3 of ISO/IEC 15504. Part 2 is the normative part and part 3 gives guidance on fulfilling the requirements in part 2.
The standard:
One of the requirements is to use a conformant assessment method for the assessment process. The actual method is not specified in the standard although the standard places requirements on the method, method developers and assessors using the method. The standard provides general guidance to assessors and this must be supplemented by undergoing formal training and detailed guidance during initial assessments.
The standard:
The assessment process can be generalized as the following steps: initiate an assessment (assessment sponsor); select the assessor and assessment team; plan the assessment, including the processes and organizational unit to be assessed (lead assessor and assessment team); pre-assessment briefing; data collection; data validation; process rating; and reporting the assessment result. An assessor can collect data on a process by various means, including interviews with persons performing the process, collecting documents and quality records, and collecting statistical process data. The assessor validates this data to ensure it is accurate and completely covers the assessment scope. The assessor assesses this data (using expert judgment) against a process's base practices and the capability dimension's generic practices in the process rating step. Process rating requires some exercise of expert judgment on the part of the assessor, and this is the reason that there are requirements on assessor qualifications and competency. The process rating is then presented as a preliminary finding to the sponsor (and preferably also to the persons assessed) to ensure that they agree that the assessment is accurate. In a few cases, there may be feedback requiring further assessment before a final process rating is made.
The standard:
Assessment model The process assessment model (PAM) is the detailed model used for an actual assessment. It is an elaboration of the process reference model (PRM) provided by the process lifecycle standards. The process assessment model (PAM) in part 5 is based on the process reference model (PRM) for software: ISO/IEC 12207. The process assessment model in part 6 is based on the process reference model for systems: ISO/IEC 15288. The standard allows other models to be used instead, if they meet ISO/IEC 15504's criteria, which include a defined community of interest and meeting the requirements for content (i.e. process purpose, process outcomes and assessment indicators).
The standard:
Tools used in the assessment There exist several assessment tools. The simplest comprise paper-based tools. In general, they are laid out to incorporate the assessment model indicators, including the base practice indicators and generic practice indicators. Assessors write down the assessment results and notes supporting the assessment judgment.
There are a limited number of computer-based tools that present the indicators and allow users to enter the assessment judgment and notes in formatted screens, as well as automating the collation of assessment results (i.e. the process attribute ratings) and the creation of reports.
Assessor qualifications and competency For a successful assessment, the assessor must have a suitable level of the relevant skills and experience.
These skills include: personal qualities such as communication skills.
relevant education and training and experience.
specific skills for particular categories, e.g. management skills for the management category.
ISO/IEC 15504 related training and experience in process capability assessments. The competency of assessors is the subject of part 3 of ISO/IEC 15504.
The standard:
In summary, the ISO/IEC 15504 specific training and experience for assessors comprise: completion of a 5-day lead assessor training course; performing at least one assessment successfully under the supervision of a competent lead assessor; and performing at least one assessment successfully as a lead assessor under the supervision of a competent lead assessor. The competent lead assessor defines when the assessment is successfully performed. There exist schemes for certifying assessors and guiding lead assessors in making this judgement.
The standard:
Uses ISO/IEC 15504 can be used in two contexts: Process improvement, and Capability determination (= evaluation of supplier's process capability).
The standard:
Process improvement ISO/IEC 15504 can be used to perform process improvement within a technology organization. Process improvement is always difficult, and initiatives often fail, so it is important to understand the initial baseline level (process capability level), and to assess the situation after an improvement project. ISO 15504 provides a standard for assessing the organization's capacity to deliver at each of these stages.
The standard:
In particular, the reference framework of ISO/IEC 15504 provides a structure for defining objectives, which facilitates specific programs to achieve these objectives.
Process improvement is the subject of part 4 of ISO/IEC 15504. It specifies requirements for improvement programmes and provides guidance on planning and executing improvements, including a description of an eight step improvement programme. Following this improvement programme is not mandatory and several alternative improvement programmes exist.
Capability determination An organization considering outsourcing software development needs to have a good understanding of the capability of potential suppliers to deliver.
The standard:
ISO/IEC 15504 (Part 4) can also be used to inform supplier selection decisions. The ISO/IEC 15504 framework provides a structure for assessing proposed suppliers, as assessed either by the organization itself or by an independent assessor. The organization can determine a target capability for suppliers, based on the organization's needs, and then assess suppliers against a set of target process profiles that specify this target capability. Part 4 of ISO/IEC 15504 specifies the high-level requirements, and an initiative has been started to create an extended part of the standard covering target process profiles. Target process profiles are particularly important in contexts where the organization (for example, a government department) is required to accept the cheapest qualifying vendor. This also enables suppliers to identify gaps between their current capability and the level required by a potential customer, and to undertake improvement to achieve the contract requirements (i.e. become qualified). Work on extending the value of capability determination includes a method called Practical Process Profiles, which uses risk as the determining factor in setting target process profiles. Combining risk and processes promotes improvement with active risk reduction, hence reducing the likelihood of problems occurring.
Acceptance of ISO/IEC 15504:
ISO/IEC 15504 has been successful because: ISO/IEC 15504 is available through National Standards Bodies.
It has the support of the international community.
Over 4,000 assessments have been performed to date.
Major sectors are leading the pace such as automotive, space and medical systems with industry relevant variants.
Domain-specific models like Automotive SPICE and SPICE 4 SPACE can be derived from it.
Acceptance of ISO/IEC 15504:
There have been many international initiatives to support take-up, such as SPICE for small and very small entities. On the other hand, ISO/IEC 15504 may not be as popular as CMMI for the following reasons: ISO/IEC 15504 is not available as a free download, but must be purchased from the ISO. (Automotive SPICE, on the other hand, can be freely downloaded from the link supplied below.) CMM, and later CMMI, were originally available as free downloads from the SEI website. However, beginning with CMMI v2.0 a license must now be purchased from SEI.
Acceptance of ISO/IEC 15504:
The CMM, and later CMMI, were originally sponsored by the US Department of Defense (DoD). Now, however, DoD no longer funds CMMI or mandates its use.
The CMM was created first, and reached critical 'market' share before ISO 15504 became available.
Acceptance of ISO/IEC 15504:
The CMM has subsequently been replaced by the CMMI, which incorporates many of the ideas of ISO/IEC 15504 but also retains the benefits of the CMM. Like the CMM, ISO/IEC 15504 was created in a development context, making it difficult to apply in a service management context. But work has started to develop an ISO/IEC 20000-based process reference model (ISO/IEC 20000-4) that can serve as a basis for a process assessment model. This is planned to become part 8 to the standard (ISO/IEC 15504-8). In addition there are methods available that adapt its use to various contexts.
**NASDAQ Biotechnology Index**
NASDAQ Biotechnology Index:
The NASDAQ Biotechnology Index is a stock market index made up of securities of NASDAQ-listed companies classified according to the Industry Classification Benchmark as either the Biotechnology or the Pharmaceutical industry. A list of the 213 components of the index is published online.
Criteria:
In order to remain included within the index, the components must meet the following criteria: The security's U.S. listing must be exclusively on the NASDAQ National Market (unless the security was dually listed on another U.S. market prior to January 1, 2004 and has continuously maintained such listing).
The issuer of the security must be classified according to the Industry Classification Benchmark as either Biotechnology or Pharmaceuticals.
The security may not be issued by an issuer currently in bankruptcy proceedings.
The security must have a market capitalization of at least $200 million.
The security must have an average daily trading volume of at least 100,000 shares.
The issuer of the security may not have entered into a definitive agreement or other arrangement which would likely result in the security no longer being included in the Index.
The issuer of the security may not have annual financial statements with an audit opinion that is currently withdrawn.
The issuer of the security must have "seasoned" on NASDAQ or another recognized market for at least 6 months; in the case of spin-offs, the operating history of the spin-off will be considered.
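As a rough illustration of how the quantitative criteria above might be screened programmatically, the following Python sketch uses hypothetical field names of its own; it is not an official NASDAQ procedure and omits the listing-venue and audit-opinion checks.

```python
from dataclasses import dataclass

@dataclass
class Security:
    # Hypothetical fields, used only for this illustration
    industry: str                    # Industry Classification Benchmark sector
    market_cap_usd: float
    avg_daily_volume_shares: float
    in_bankruptcy: bool
    months_seasoned: int

def meets_quantitative_criteria(s: Security) -> bool:
    """Check the quantitative eligibility criteria listed above."""
    return (
        s.industry in ("Biotechnology", "Pharmaceuticals")
        and not s.in_bankruptcy
        and s.market_cap_usd >= 200_000_000        # at least $200 million
        and s.avg_daily_volume_shares >= 100_000   # at least 100,000 shares per day
        and s.months_seasoned >= 6                 # "seasoned" for at least 6 months
    )
```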
Investing in the index:
This index is tracked by several exchange-traded funds, the most liquid of which is the iShares Nasdaq Biotechnology (Nasdaq: IBB).
**Plasma speaker**
Plasma speaker:
Plasma speakers or ionophones are a form of loudspeaker which varies air pressure via an electrical plasma instead of a solid diaphragm. The plasma arc heats the surrounding air, causing it to expand. When the electrical signal that drives the plasma, connected to the output of an audio amplifier, is varied, the size of the plasma varies, which in turn varies the expansion of the surrounding air, creating sound waves. The plasma is typically in the form of a glow discharge and acts as a massless radiating element. The technique is a much later development of physics principles demonstrated by William Duddell's "singing arc" of 1900, and can be related to modern ion thruster spacecraft propulsion.
Plasma speaker:
The term ionophone was used by Dr. Siegfried Klein, who developed a plasma tweeter that was licensed for commercial production by DuKane as the Ionovac and by Fane Acoustics as the Ionofane in the late 1940s and 1950s. The effect takes advantage of several physical principles. Firstly, ionization of gases causes their electrical resistance to drop significantly, making them conductive. The resulting plasma can be made to vibrate sympathetically with alternating electric fields and magnetic fields. Secondly, the involved plasma, itself a field of ions, has relatively negligible mass. Thus the air remains mechanically coupled with the massless plasma, allowing it to radiate a potentially ideal reproduction of the sound source when the electric or magnetic field is modulated with an audio frequency signal.
Comparison to conventional loudspeakers:
Conventional loudspeaker transducer designs use the input electrical audio frequency signal to vibrate a significant mass: in a dynamic loudspeaker this driver is coupled to a stiff speaker cone, a diaphragm which pushes air at audio frequencies. But the inertia inherent in its mass resists acceleration and all changes in cone position. Additionally, speaker cones will eventually suffer tensile fatigue from the repeated shaking of sonic vibration. Thus conventional speaker output, or the fidelity of the device, is distorted by physical limitations inherent in its design. These distortions have long been the limiting factor in commercial reproduction of strong high frequencies. To a lesser extent, square wave characteristics are also problematic; the reproduction of square waves stresses a speaker cone the most.
Comparison to conventional loudspeakers:
In a plasma speaker, as a member of the family of massless speakers, these limitations do not exist. The low-inertia driver has exceptional transient response compared to other designs. The result is an even output, accurate even at higher frequencies beyond the human audible range. Such speakers are notable for accuracy and clarity, but not for lower frequencies, because the plasma has so little mass that it cannot move large volumes of air unless the plasma volume is very large. These designs are therefore more effective as tweeters.
Practical considerations:
Plasma speaker designs ionize ambient air which contains the gases nitrogen and oxygen. In an intense electrical field these gases can produce reactive by-products, and in closed rooms these can reach a hazardous level. The two predominant gases produced are ozone and nitrogen dioxide.
Practical considerations:
Plasmatronics produced a commercial plasma speaker that used a helium tank to provide the ionization gas. In 1978 Alan E. Hill of the Air Force Weapons Laboratory in Albuquerque, NM, designed the Plasmatronics Hill Type I, a commercial helium-plasma tweeter. This avoided the ozone and nitrogen oxides produced by radio frequency decomposition of air in earlier generations of plasma tweeters. But the operation of such speakers requires a continuous supply of helium.
Practical considerations:
In the 1950s, the pioneering DuKane Corporation produced the air-ionizing Ionovac, marketed in the UK as the Ionophone. Currently there remain manufacturers in Germany who use this design, as well as many do-it-yourself designs available on the Internet.
Practical considerations:
To make the plasma speaker a more widely available product, ExcelPhysics, a Seattle-based company, and Images Scientific Instruments, a New York-based company, both offered their own variant of the plasma speaker as a DIY kit. The ExcelPhysics variant used a flyback transformer to step up voltage, a 555 timing chip to provide modulation and a 44 kHz carrier signal, and an audio amplifier. The kit is no longer marketed. A flame speaker uses a modulated flame for the driver and could be considered related to the plasma loudspeaker. This was explored using the combustion of natural gas or candles to produce a plasma through which current is then passed. These combustion designs do not require high voltages to generate a plasma field, but there have been no commercial products using them.
Practical considerations:
A similar effect is occasionally observed in the vicinity of high-power amplitude-modulated radio transmitters when a corona discharge (inadvertently) occurs from the transmitting antenna, where voltages in the tens of thousands of volts are involved. The ionized air is heated in direct relationship to the modulating signal with surprisingly high fidelity over a wide area. Due to the destructive effects of the (self-sustaining) discharge this cannot be permitted to persist, and automatic systems momentarily shut down transmission within a few seconds to quench the "flame".
**Air navigation**
Air navigation:
The basic principles of air navigation are identical to general navigation, which includes the process of planning, recording, and controlling the movement of a craft from one place to another. Successful air navigation involves piloting an aircraft from place to place without getting lost, not breaking the laws applying to aircraft, or endangering the safety of those on board or on the ground. Air navigation differs from the navigation of surface craft in several ways: aircraft travel at relatively high speeds, leaving less time to calculate their position en route. Aircraft normally cannot stop in mid-air to ascertain their position at leisure. Aircraft are safety-limited by the amount of fuel they can carry; a surface vehicle can usually get lost, run out of fuel, then simply await rescue. There is no in-flight rescue for most aircraft. Additionally, collisions with obstructions are usually fatal. Therefore, constant awareness of position is critical for aircraft pilots.
Air navigation:
The techniques used for navigation in the air will depend on whether the aircraft is flying under visual flight rules (VFR) or instrument flight rules (IFR). In the latter case, the pilot will navigate exclusively using instruments and radio navigation aids such as beacons, or as directed under radar control by air traffic control. In the former case, a pilot will largely navigate using "dead reckoning" combined with visual observations (known as pilotage), with reference to appropriate maps. This may be supplemented using radio navigation aids or satellite based positioning systems.
Route planning:
The first step in navigation is deciding where one wishes to go. A private pilot planning a flight under VFR will usually use an aeronautical chart of the area which is published specifically for the use of pilots. This map will depict controlled airspace, radio navigation aids and airfields prominently, as well as hazards to flying such as mountains, tall radio masts, etc. It also includes sufficient ground detail – towns, roads, wooded areas – to aid visual navigation. In the UK, the CAA publishes a series of maps covering the whole of the UK at various scales, updated annually. The information is also updated in the notices to airmen, or NOTAMs.
Route planning:
The pilot will choose a route, taking care to avoid controlled airspace that is not permitted for the flight, restricted areas, danger areas and so on. The chosen route is plotted on the map, and the lines drawn are called the track. The aim of all subsequent navigation is to follow the chosen track as accurately as possible. Occasionally, the pilot may elect on one leg to follow a clearly visible feature on the ground such as a railway track, river, highway, or coast.
Route planning:
When an aircraft is in flight, it is moving relative to the body of air through which it is flying; therefore maintaining an accurate ground track is not as easy as it might appear, unless there is no wind at all—a very rare occurrence. The pilot must adjust heading to compensate for the wind, in order to follow the ground track. Initially the pilot will calculate headings to fly for each leg of the trip prior to departure, using the forecast wind directions and speeds supplied by the meteorological authorities for the purpose. These figures are generally accurate and updated several times per day, but the unpredictable nature of the weather means that the pilot must be prepared to make further adjustments in flight. A general aviation (GA) pilot will often make use of either a flight computer – a type of slide rule – or a purpose-designed electronic navigational computer to calculate initial headings.
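The wind-triangle arithmetic that a flight computer performs can be sketched in a few lines of Python. This is a simplified illustration (constant wind, no magnetic variation); the function and variable names are this article's own, not those of any particular flight computer.

```python
import math

def wind_correction(track_deg, tas_kt, wind_from_deg, wind_speed_kt):
    """Solve the wind triangle: return (heading_deg, ground_speed_kt)."""
    wind_angle = math.radians(wind_from_deg - track_deg)
    # Wind correction angle: sin(WCA) = (wind speed / TAS) * sin(wind angle)
    wca = math.asin(wind_speed_kt * math.sin(wind_angle) / tas_kt)
    ground_speed = tas_kt * math.cos(wca) - wind_speed_kt * math.cos(wind_angle)
    heading = (track_deg + math.degrees(wca)) % 360
    return heading, ground_speed

# Example: desired track 090 degrees, TAS 100 kt, wind from 045 degrees at 20 kt
hdg, gs = wind_correction(90, 100, 45, 20)
print(round(hdg), round(gs))  # heading about 082, ground speed about 85 kt
```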
Route planning:
The primary instrument of navigation is the magnetic compass. The needle or card aligns itself to magnetic north, which does not coincide with true north, so the pilot must also allow for this, called the magnetic variation (or declination). The variation that applies locally is also shown on the flight map. Once the pilot has calculated the actual headings required, the next step is to calculate the flight times for each leg. This is necessary to perform accurate dead reckoning. The pilot also needs to take into account the slower initial airspeed during climb to calculate the time to top of climb. It is also helpful to calculate the top of descent, or the point at which the pilot would plan to commence the descent for landing.
Route planning:
The flight time will depend on both the desired cruising speed of the aircraft, and the wind – a tailwind will shorten flight times, a headwind will increase them. The flight computer has scales to help pilots compute these easily.
Route planning:
The point of no return, sometimes referred to as the PNR, is the point on a flight at which a plane has just enough fuel, plus any mandatory reserve, to return to the airfield from which it departed. Beyond this point that option is closed, and the plane must proceed to some other destination. Alternatively, with respect to a large region without airfields, e.g. an ocean, it can mean the point before which it is closer to turn around and after which it is closer to continue. Similarly, the equal time point (ETP), also referred to as the critical point, is the point in the flight where it would take the same time to continue flying straight ahead or to track back to the departure aerodrome. The ETP does not depend on fuel but on wind, which changes the ground speed outbound from, and back to, the departure aerodrome. In nil-wind conditions the ETP is located halfway between the two aerodromes, but in reality it is shifted depending on the wind speed and direction.
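The PNR and ETP follow from simple ratios of ground speeds, as the following minimal Python sketch shows (names and figures are illustrative; real planning also accounts for reserves, climbs, descents and changing winds).

```python
def equal_time_point(distance_nm, gs_continue_kt, gs_return_kt):
    """Distance from departure to the ETP (critical point), in nautical miles."""
    return distance_nm * gs_return_kt / (gs_continue_kt + gs_return_kt)

def point_of_no_return(endurance_hr, gs_out_kt, gs_home_kt):
    """Distance from departure to the PNR, given safe endurance excluding reserves."""
    time_to_pnr_hr = endurance_hr * gs_home_kt / (gs_out_kt + gs_home_kt)
    return gs_out_kt * time_to_pnr_hr

# Example: 1,000 nm leg with a 20 kt tailwind outbound (220 kt out, 180 kt back)
print(round(equal_time_point(1000, 220, 180)))  # ETP at about 450 nm, shifted into wind
print(round(point_of_no_return(5, 220, 180)))   # PNR at about 495 nm with 5 h endurance
```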
Route planning:
An aircraft flying across the ocean, for example, would be required to calculate ETPs for one engine inoperative, depressurization, and a normal ETP; all of these could actually be different points along the route. For example, in one-engine-inoperative and depressurization situations the aircraft would be forced to lower operational altitudes, which would affect its fuel consumption, cruise speed and ground speed. Each situation therefore would have a different ETP.
Route planning:
Commercial aircraft are not allowed to operate along a route that is out of range of a suitable place to land if an emergency such as an engine failure occurs. The ETP calculations serve as a planning strategy, so flight crews always have an 'out' in an emergency event, allowing a safe diversion to their chosen alternate.
Route planning:
The final stage is to note which areas the route will pass through or over, and to make a note of all of the things to be done – which ATC units to contact, the appropriate frequencies, visual reporting points, and so on. It is also important to note which pressure setting regions will be entered, so that the pilot can ask for the QNH (air pressure) of those regions. Finally, the pilot should have in mind some alternative plans in case the route cannot be flown for some reason – unexpected weather conditions being the most common. At times the pilot may be required to file a flight plan for an alternate destination and to carry adequate fuel for this. The more work a pilot can do on the ground prior to departure, the easier it will be in the air.
Route planning:
IFR planning Instrument flight rules (IFR) navigation is similar to visual flight rules (VFR) flight planning except that the task is generally made simpler by the use of special charts that show IFR routes from beacon to beacon with the lowest safe altitude (LSALT), bearings (in both directions), and distance marked for each route. IFR pilots may fly on other routes but they then must perform all such calculations themselves; the LSALT calculation is the most difficult. The pilot then needs to look at the weather and minimum specifications for landing at the destination airport and the alternate requirements. Pilots must also comply with all the rules including their legal ability to use a particular instrument approach depending on how recently they last performed one.
Route planning:
In recent years, strict beacon-to-beacon flight paths have started to be replaced by routes derived through performance-based navigation (PBN) techniques. When operators develop flight plans for their aircraft, the PBN approach encourages them to assess the overall accuracy, integrity, availability, continuity, and functionality of the aggregate navigation aids present within the applicable airspace. Once these determinations have been made, the operator develops a route that is the most time and fuel efficient while respecting all applicable safety concerns—thereby maximizing both the aircraft's and the airspace's overall performance capabilities.
Route planning:
Under the PBN approach, technologies evolve over time (e.g., ground beacons become satellite beacons) without requiring the underlying aircraft operation to be recalculated. Also, navigation specifications used to assess the sensors and equipment that are available in an airspace can be cataloged and shared to inform equipment upgrade decisions and the ongoing harmonization of the world's various air navigation systems.
In flight:
Once in flight, the pilot must take pains to stick to plan, otherwise getting lost is all too easy. This is especially true if flying in the dark or over featureless terrain. This means that the pilot must stick to the calculated headings, heights and speeds as accurately as possible, unless flying under visual flight rules. The visual pilot must regularly compare the ground with the map, (pilotage) to ensure that the track is being followed although adjustments are generally calculated and planned. Usually, the pilot will fly for some time as planned to a point where features on the ground are easily recognised. If the wind is different from that expected, the pilot must adjust heading accordingly, but this is not done by guesswork, but by mental calculation – often using the 1 in 60 rule. For example, a two degree error at the halfway stage can be corrected by adjusting heading by four degrees the other way to arrive in position at the end of the leg. This is also a point to reassess the estimated time for the leg. A good pilot will become adept at applying a variety of techniques to stay on track.
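The 1 in 60 rule mentioned above can be written out explicitly. A minimal sketch (illustrative names; the rule itself is only an approximation valid for small angles):

```python
def one_in_sixty_correction(off_track_nm, distance_flown_nm, distance_to_go_nm):
    """Heading change (degrees) needed to rejoin the planned track at the end of the leg."""
    track_error = 60 * off_track_nm / distance_flown_nm    # degrees drifted so far
    closing_angle = 60 * off_track_nm / distance_to_go_nm  # degrees needed to converge
    return track_error + closing_angle

# The example from the text: 2 nm off track after 60 nm flown (a two degree error)
# at the halfway point of a 120 nm leg calls for roughly a four degree correction.
print(one_in_sixty_correction(2, 60, 60))  # -> 4.0
```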
In flight:
While the compass is the primary instrument used to determine one's heading, pilots will usually refer instead to the direction indicator (DI), a gyroscopically driven device which is much more stable than a compass. The compass reading will be used to correct for any drift (precession) of the DI periodically. The compass itself will only show a steady reading when the aircraft has been in straight and level flight long enough to allow it to settle.
In flight:
Should the pilot be unable to complete a leg – for example bad weather arises, or the visibility falls below the minima permitted by the pilot's license, the pilot must divert to another route. Since this is an unplanned leg, the pilot must be able to mentally calculate suitable headings to give the desired new track. Using the flight computer in flight is usually impractical, so mental techniques to give rough and ready results are used. The wind is usually allowed for by assuming that sine A = A, for angles less than 60° (when expressed in terms of a fraction of 60° – e.g. 30° is 1/2 of 60°, and sine 30° = 0.5), which is adequately accurate. A method for computing this mentally is the clock code. However the pilot must be extra vigilant when flying diversions to maintain awareness of position.
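The sine A = A approximation translates directly into a mental crosswind estimate: for angles below 60 degrees, take the wind angle as a fraction of 60 and multiply by the wind speed. A minimal sketch of that rule of thumb (not a formal procedure):

```python
def crosswind_estimate(wind_speed_kt, wind_angle_deg):
    """Mental-math estimate: sin(angle) is taken as angle/60 for angles below 60 degrees."""
    factor = min(wind_angle_deg, 60) / 60
    return wind_speed_kt * factor

print(crosswind_estimate(20, 30))  # about 10 kt; the exact value, 20*sin(30 deg), is also 10 kt
```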
In flight:
Some diversions can be temporary – for example to skirt around a local storm cloud. In such cases, the pilot can turn 60 degrees away from his desired heading for a given period of time. Once clear of the storm, he can then turn back in the opposite direction 120 degrees, and fly this heading for the same length of time. This is a 'wind-star' maneuver and, with no winds aloft, will place him back on his original track with his trip time increased by the length of one diversion leg.
In flight:
Another reason for not relying on the magnetic compass during flight, apart from calibrating the Heading indicator from time to time, is because magnetic compasses are subject to errors caused by flight conditions and other internal and external interferences on the magnet system.
Navigation aids:
Many GA aircraft are fitted with a variety of navigation aids, such as Automatic direction finder (ADF), inertial navigation, compasses, radar navigation, VHF omnidirectional range (VOR) and Global navigation satellite system (GNSS).
Navigation aids:
ADF uses non-directional beacons (NDBs) on the ground to drive a display which shows the direction of the beacon from the aircraft. The pilot may use this bearing to draw a line on the map to show the bearing from the beacon. By using a second beacon, two lines may be drawn to locate the aircraft at the intersection of the lines. This is called a cross-cut. Alternatively, if the track takes the flight directly overhead a beacon, the pilot can use the ADF instrument to maintain heading relative to the beacon, though "following the needle" is bad practice, especially in the presence of a strong cross wind – the pilot's actual track will spiral in towards the beacon, not what was intended. NDBs also can give erroneous readings because they use very long wavelengths, which are easily bent and reflected by ground features and the atmosphere. NDBs continue to be used as a common form of navigation in some countries with relatively few navigational aids.
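The cross-cut described above amounts to intersecting two lines of position. The following Python sketch (illustrative names, flat-earth geometry, no account of chart projection or magnetic variation) finds the fix from bearings measured to two beacons:

```python
import numpy as np

def cross_cut_fix(beacon1, bearing1_deg, beacon2, bearing2_deg):
    """Position fix from bearings measured from the aircraft to two beacons.
    Positions are (east, north) pairs in any common unit; bearings are degrees true."""
    def line_of_position(beacon, bearing_to_beacon_deg):
        # The aircraft lies along the reciprocal bearing drawn outward from the beacon
        recip = np.radians((bearing_to_beacon_deg + 180.0) % 360.0)
        return np.asarray(beacon, float), np.array([np.sin(recip), np.cos(recip)])
    p1, d1 = line_of_position(beacon1, bearing1_deg)
    p2, d2 = line_of_position(beacon2, bearing2_deg)
    # Intersection of p1 + t1*d1 and p2 + t2*d2
    t = np.linalg.solve(np.column_stack((d1, -d2)), p2 - p1)
    return p1 + t[0] * d1

# Example: beacon A at (0, 0) bears 315 degrees and beacon B at (40, 0) bears 045 degrees
# from the aircraft, so the fix is 20 units east of A and 20 units south of the beacon line.
print(cross_cut_fix((0, 0), 315, (40, 0), 45))  # -> [ 20. -20.]
```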
Navigation aids:
VOR is a more sophisticated system, and is still the primary air navigation system established for aircraft flying under IFR in those countries with many navigational aids. In this system, a beacon emits a specially modulated signal which consists of two sine waves which are out of phase. The phase difference corresponds to the actual bearing relative to magnetic north (in some cases true north) that the receiver is from the station. The upshot is that the receiver can determine with certainty the exact bearing from the station. Again, a cross-cut is used to pinpoint the location. Many VOR stations also have additional equipment called DME (distance measuring equipment) which will allow a suitable receiver to determine the exact distance from the station. Together with the bearing, this allows an exact position to be determined from a single beacon alone. For convenience, some VOR stations also transmit local weather information which the pilot can listen in to, perhaps generated by an Automated Surface Observing System. A VOR which is co-located with a DME is usually a component of a TACAN.
Navigation aids:
Prior to the advent of GNSS, Celestial Navigation was also used by trained navigators on military bombers and transport aircraft in the event of all electronic navigational aids being turned off in time of war. Originally navigators used an astrodome and regular sextant but the more streamlined periscopic sextant was used from the 1940s to the 1990s. From the 1970s airliners used inertial navigation systems, especially on inter-continental routes, until the shooting down of Korean Air Lines Flight 007 in 1983 prompted the US government to make GPS available for civilian use.
Navigation aids:
Finally, an aircraft may be supervised from the ground using surveillance information from e.g. radar or multilateration. ATC can then feed back information to the pilot to help establish position, or can actually tell the pilot the position of the aircraft, depending on the level of ATC service the pilot is receiving.
Navigation aids:
The use of GNSS in aircraft is becoming increasingly common. GNSS provides very precise aircraft position, altitude, heading and ground speed information. GNSS makes navigation precision once reserved to large RNAV-equipped aircraft available to the GA pilot. Recently, many airports include GNSS instrument approaches. GNSS approaches consist of either overlays to existing precision and non-precision approaches or stand-alone GNSS approaches. Approaches having the lowest decision heights generally require that GNSS be augmented by a second system—e.g., the FAA's Wide Area Augmentation System (WAAS).
Flight navigator:
Civilian flight navigators (a mostly redundant aircrew position, also called 'air navigator' or 'flight navigator') were employed on older aircraft, typically between the late 1910s and the 1970s. The crew member, occasionally two navigation crew members on some flights, was responsible for the trip navigation, including its dead reckoning and celestial navigation. This was especially essential when trips were flown over oceans or other large bodies of water, where radio navigation aids were not originally available (satellite coverage is now provided worldwide). As sophisticated electronic and GNSS systems came online, the navigator's position was discontinued and its function was assumed by dual-licensed pilot-navigators, and still later by the flight's primary pilots (Captain and First Officer), resulting in a downsizing in the number of aircrew positions for commercial flights. As the installation of electronic navigation systems into the Captain's and FO's instrument panels was relatively straightforward, the navigator's position in commercial aviation (but not necessarily military aviation) became redundant. (Some countries task their air forces to fly without navigation aids during wartime, thus still requiring a navigator's position.) Most civilian air navigators were retired or made redundant by the early 1980s.
**N-slit interferometric equation**
N-slit interferometric equation:
Quantum mechanics was first applied to optics, and interference in particular, by Paul Dirac. Richard Feynman, in his Lectures on Physics, uses Dirac's notation to describe thought experiments on double-slit interference of electrons. Feynman's approach was extended to N-slit interferometers for either single-photon illumination or narrow-linewidth laser illumination, that is, illumination by indistinguishable photons, by Frank Duarte. The N-slit interferometer was first applied in the generation and measurement of complex interference patterns. In this article the generalized N-slit interferometric equation, derived via Dirac's notation, is described. Although originally derived to reproduce and predict N-slit interferograms, this equation also has applications to other areas of optics.
Probability amplitudes and the N-slit interferometric equation:
In this approach the probability amplitude for the propagation of a photon from a source s to an interference plane x, via an array of slits j, is given using Dirac's bra–ket notation as ⟨x|s⟩ = ∑j=1N ⟨x|j⟩⟨j|s⟩. This equation represents the probability amplitude of a photon propagating from s to x via an array of j slits. Using a wavefunction representation for probability amplitudes, the individual amplitudes are defined as ⟨j|s⟩ = Ψ(rj,s)e−iθj and ⟨x|j⟩ = Ψ(rx,j)e−iϕj, where θj and ϕj are the incidence and diffraction phase angles, respectively. Thus, the overall probability amplitude can be rewritten as ⟨x|s⟩ = ∑j=1N Ψ(rj)e−iΩj, where Ψ(rj) = Ψ(rx,j)Ψ(rj,s) and Ωj = θj + ϕj. After some algebra, the corresponding probability becomes |⟨x|s⟩|2 = ∑j=1N Ψ(rj)2 + 2∑j=1N Ψ(rj)(∑m=j+1N Ψ(rm) cos(Ωm − Ωj)), where N is the total number of slits in the array, or transmission grating, and the argument of the cosine term represents the phase difference, which is directly related to the exact path differences derived from the geometry of the N-slit array (j), the intra-interferometric distance, and the interferometric plane x. In its simplest version, the phase term can be related to the geometry using cos(Ωm − Ωj) = cos(k|Lm − Lm−1|), where k is the wavenumber, and Lm and Lm−1 represent the exact path differences. Here the Dirac–Duarte (DD) interferometric equation is a probability distribution that is related to the intensity distribution measured experimentally. The calculations are performed numerically. The DD interferometric equation applies to the propagation of a single photon, or the propagation of an ensemble of indistinguishable photons, and enables the accurate prediction of measured N-slit interferometric patterns continuously from the near to the far field. Interferograms generated with this equation have been shown to compare well with measured interferograms for both even (N = 2, 4, 6...) and odd (N = 3, 5, 7...) values of N from 2 to 1600.
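As a numerical illustration of the interferometric equation (a minimal sketch with arbitrary illustrative geometry, not Duarte's own code; uniform slit amplitudes Ψ(rj) = 1 are assumed), the probability can be evaluated directly by summing the phase contributions over the slit array:

```python
import numpy as np

def n_slit_interferogram(x, slit_positions, source_distance, screen_distance, wavelength):
    """Evaluate |<x|s>|^2 at screen positions x for a point source behind an array of slits."""
    k = 2 * np.pi / wavelength
    intensity = np.zeros_like(x)
    for i, xp in enumerate(x):
        # Exact path length: source to each slit, then slit to the screen point
        L = (np.hypot(slit_positions, source_distance)
             + np.hypot(xp - slit_positions, screen_distance))
        amplitude = np.sum(np.exp(-1j * k * L))  # sum over j of Psi(r_j) exp(-i Omega_j)
        intensity[i] = np.abs(amplitude) ** 2    # probability, proportional to intensity
    return intensity

# Illustrative values: 6 slits on a 100 micrometre pitch, 632.8 nm light, screen 0.5 m away
slits = np.arange(6) * 100e-6
screen_x = np.linspace(-0.02, 0.02, 2001)
pattern = n_slit_interferogram(screen_x, slits, source_distance=0.5,
                               screen_distance=0.5, wavelength=632.8e-9)
```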
Applications:
At a practical level, the N-slit interferometric equation was introduced for imaging applications and is routinely applied to predict N-slit laser interferograms, both in the near and far field. Thus, it has become a valuable tool in the alignment of large, and very large, N-slit laser interferometers used in the study of clear air turbulence and the propagation of interferometric characters for secure laser communications in space. Other analytical applications are described below.
Applications:
Generalized diffraction and refraction The N-slit interferometric equation has been applied to describe classical phenomena such as interference, diffraction, refraction (Snell's law), and reflection, in a rational and unified approach, using quantum mechanics principles. In particular, this interferometric approach has been used to derive generalized refraction equations for both positive and negative refraction, thus providing a clear link between diffraction theory and generalized refraction. From the phase term of the interferometric equation, the expression dm(n1 sin θm ± n2 sin ϕm)(2π/λ) = Mπ can be obtained, where M = 0, 2, 4....
Applications:
For n1 = n2, this equation can be written as dm(sin θm ± sin ϕm) = mλ, which is the generalized diffraction grating equation. Here, θm is the angle of incidence, ϕm is the angle of diffraction, λ is the wavelength, and m = 0, 1, 2... is the order of diffraction.
Under certain conditions, dm ≪ λ, which can be readily obtained experimentally, the phase term becomes (n1 sin θm ± n2 sin ϕm) = 0, which is the generalized refraction equation, where θm is the angle of incidence, and ϕm now becomes the angle of refraction.
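A short numerical check of the grating equation (illustrative function name and values; the ± convention follows the generalized form above):

```python
import math

def diffraction_angle(d_m, wavelength_m, incidence_deg, order, sign=1):
    """Solve d*(sin(theta) + sign*sin(phi)) = m*lambda for the diffraction angle phi (degrees)."""
    s = sign * (order * wavelength_m / d_m - math.sin(math.radians(incidence_deg)))
    if abs(s) > 1:
        raise ValueError("no propagating order for these parameters")
    return math.degrees(math.asin(s))

# Example: a grating with a 10 micrometre period, 632.8 nm light at normal incidence, m = 1
print(round(diffraction_angle(10e-6, 632.8e-9, 0.0, 1), 2))  # about 3.63 degrees
```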
Cavity linewidth equation Furthermore, the N-slit interferometric equation has been applied to derive the cavity linewidth equation applicable to dispersive oscillators, such as the multiple-prism grating laser oscillators: Δλ ≈ Δθ (∂Θ/∂λ)^−1. In this equation, Δθ is the beam divergence and the overall intracavity angular dispersion is the quantity in parentheses.
Applications:
Fourier transform imaging Researchers working on Fourier-transform ghost imaging consider the N-slit interferometric equation as an avenue to investigate the quantum nature of ghost imaging. Also, the N-slit interferometric approach is one of several approaches applied to describe basic optical phenomena in a cohesive and unified manner. Note: given the various terminologies in use for N-slit interferometry, it should be made explicit that the N-slit interferometric equation applies to two-slit interference, three-slit interference, four-slit interference, etc.
Applications:
Quantum entanglement The Dirac principles and probabilistic methodology used to derive the N-slit interferometric equation have also been used to derive the polarization quantum entanglement probability amplitude |ψ⟩ = (1/√2)(|x⟩1|y⟩2 − |y⟩1|x⟩2) and corresponding probability amplitudes depicting the propagation of multiple pairs of quanta.
Comparison with classical methods:
A comparison of the Dirac approach with classical methods, in the performance of interferometric calculations, has been done by Travis S. Taylor et al. These authors concluded that the interferometric equation, derived via the Dirac formalism, was advantageous in the very near field.
Some differences between the DD interferometric equation and classical formalisms can be summarized as follows: The classical Fresnel approach is used for near-field applications and the classical Fraunhofer approach is used for far-field applications. That division is not necessary when using the DD interferometric approach as this formalism applies to both the near and the far-field cases.
The Fraunhofer approach works for plane-wave illumination. The DD approach works for both plane-wave illumination and highly diffractive illumination patterns.
The DD interferometric equation is statistical in character. This is not the case for the classical formulations. So far there has been no published comparison with more general classical approaches based on the Huygens–Fresnel principle or Kirchhoff's diffraction formula.
**Palmer-Bowlus Flume**
Palmer-Bowlus Flume:
The Palmer-Bowlus flume is a class of flumes commonly used to measure the flow of wastewater in sewer pipes and conduits. The Palmer-Bowlus flume has a u-shaped cross-section and was designed to be inserted into, or in line with, pipes and u-channels found in sanitary sewer applications. As a long-throated flume, the point of measurement of the Palmer-Bowlus flume is anywhere upstream of the throat ramp greater than D/2 (D = flume size); the Montana flume, by contrast, has a single, specified point of measurement in the contracting section at which the level is measured. Unlike most other flumes used for open channel flow measurement, the Palmer-Bowlus flume can be calibrated by theoretical analysis.
Palmer-Bowlus Flume:
The general design of the flume is detailed in ASTM D5390: Standard Test Method for Open-Channel Flow Measurement of Water with Palmer-Bowlus Flumes. It is important to note that, unlike the Parshall flume, the standard for the flume does not set out specific sizes and flow rates, but only general characteristics for the class of flume.
Palmer-Bowlus Flume:
18 sizes of Palmer-Bowlus flumes have been developed, in line with the common pipe sizes to which they would be adapted, from 4 inches to 72 inches. In practice, though, it is uncommon to see Palmer-Bowlus flumes greater than 24 inches in size. Under average flow conditions, the Palmer-Bowlus flume is accurate to within 3-5%. For lower flow rates, where the depth is low relative to the length of the flume, the accuracy decreases to 5-6%. This error, combined with typical installation / flow meter errors, means that overall site accuracy is somewhat less than that of other more common flumes.
Free-Flow Characteristics:
Flow in the Palmer-Bowlus Flume transitions from a circular bottom section to a raised trapezoidal throat and then back - accelerating sub-critical flow (Fr~0.5) to a supercritical state (Fr>1) to develop the level-to-flow relationship.
Free-Flow Characteristics:
The simplified free-flow discharge can be summarized as Q = C·Haⁿ, where Q is the flow rate, C is the free-flow coefficient for the flume, Ha is the head at the primary point of measurement, and n varies with flume size (see Table 1 below). Note that Palmer-Bowlus flumes are proprietary to each manufacturer / throat configuration. The table presented below is for the most common throat configuration, a trapezoidal ramp, and is simplified for the entire flume flow range. For other throat configurations refer to the manufacturer's flow tables.
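A minimal sketch of the free-flow calculation follows; C and n must come from the manufacturer's table for the specific flume size and throat configuration, so the numbers used here are placeholders only.

```python
def palmer_bowlus_flow(head, C, n):
    """Free-flow discharge Q = C * Ha**n; units follow whatever the coefficient table uses."""
    return C * head ** n

# Hypothetical coefficients for illustration only (not taken from Table 1)
q = palmer_bowlus_flow(head=0.5, C=3.1, n=1.9)
print(round(q, 2))
```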
Free-Flow vs. Submerged Flow:
Free-Flow – when there is no “back water” to restrict flow through a flume. Only the single depth (primary point of measurement - Ha) needs to be measured to calculate the flow rate. A free flow also induces a hydraulic jump downstream of the flume.
Free-Flow vs. Submerged Flow:
Submerged Flow – when the water surface downstream of the flume is high enough to restrict flow through a flume, the flume is deemed to be submerged. Submergence transitions for Palmer-Bowlus flumes are quite high, 85-90%. Corrections for submerged flow in Palmer-Bowlus flumes have not been published, so it is important to set the flume so that it does not experience submerged flow conditions. Although commonly thought of as occurring at higher flow rates, submerged flow can exist at any flow level as it is a function of downstream conditions. In natural stream applications, submerged flow is frequently the result of vegetative growth on the downstream channel banks, sedimentation, or subsidence of the flume.
Construction:
Unlike other flumes, such as the Parshall, the Palmer-Bowlus flume is typically fabricated in only two materials: fiberglass (for wastewater applications, due to its corrosion resistance) and stainless steel (for applications involving high temperatures or corrosive flow streams).
Drawbacks:
For standard Palmer-Bowlus flumes with the standard trapezoidal throat ramp: The flume may experience sedimentation / solids drop out upstream of the throat ramp. This is particularly true if the flow rates are low and the solids content is high or the solids heavy.
Unlike other flumes where the design and discharge equations have been standardized, Palmer-Bowlus flume may not be readily programmed into the secondary flow meters commonly used with the flume.
As a long-throated flume, the Palmer-Bowlus flume requires long straight runs upstream - 25 pipe diameters.
**Rectal dilator**
Rectal dilator:
A rectal or anal dilator is a medical device similar to a speculum designed to open and relax the internal/external anal sphincter and rectum in order to facilitate medical inspection or relieve constipation. One early version of a rectal dilator was Dr. Young's Ideal Rectal Dilators, invented in 1892. Rectal dilators are also used as sex toys.
**Playoff beard**
Playoff beard:
A playoff beard is the superstitious practice of male athletes not shaving their beards during the playoffs. Playoff beards were introduced by ice hockey players participating in the Stanley Cup playoffs, and are now a tradition in many sports leagues. Many fans of professional sports teams also grow playoff beards. The player stops shaving when his team enters the playoffs and does not shave until his team is eliminated or wins the Stanley Cup (or equivalent championship).
Playoff beard:
The tradition was started in the 1980s. The 1984-85 Detroit Red Wings were the first team documented to wear them. Wings forwards Ivan Boldirev and Danny Gare began the practice in Jan. 1985, trying to inspire the team to win four straight games. Defenseman Brad Park called it his "playoff beard", thus coining the phrase. Sometime in the 1980s the New York Islanders also decided to do so; according to Islander Mike Bossy, the practice was likely started by teammate Butch Goring. The tradition is also practiced in nearly all North American hockey leagues, including high school leagues and NCAA hockey teams, as well as minor league affiliates. According to some observers, one may trim the beard after a loss in an effort to change the team's luck; Jim Dowd and Roberto Luongo are examples of players who did this.
History:
The 1984–85 Detroit Red Wings were the first team documented to wear them. Wings forwards Ivan Boldirev and Danny Gare began the practice in Jan. 1985, trying to inspire the team to win four straight games. Defenseman Brad Park called it his "playoff beard" - thus coining the phrase. (from the Detroit Free Press, Feb. 3, 1985 - article by Bernie Czarniecki). Hall of Famer Denis Potvin says that the Islanders of the 1980s would "play four games in five nights in the first round and it was just something that kind of happened." The 1980 Islanders included two Swedish players (Stefan Persson and Anders Kallur), so it is possible that tennis champion Björn Borg's custom of not shaving his beard during Wimbledon, which he had been doing for several years by that time, was an influence on the start of the practice in hockey. Some players have said the beard is both a reminder of team unity and a way to get a player thinking about the playoffs from the moment he looks in the mirror in the morning. The 2009 Red Wings used the slogan "The beard is back" for the final series of their 2009 Stanley Cup playoffs run. They played the Pittsburgh Penguins in the Stanley Cup Finals that year (won by Pittsburgh) in which most of the players of both teams (and the owner of the Penguins, Mario Lemieux) grew beards.
History:
In 2009, the Beard-a-thon campaign was launched to encourage fans to grow their own playoff beards for charity. In its first four years, more than 22,000 NHL fans participated in the "Beard-a-thon" and raised over two million dollars for charities. In June 2015, Mark Lazarus, chairman of NBC Sports (the league's U.S. rightsholder), told the Chicago Tribune that he had been lobbying the NHL to discourage the practice, arguing that it hinders the ability of viewers to recognize players.
Other sports:
The playoff beard has expanded into Major League Baseball (MLB), the Canadian Football League (CFL), the National Football League (NFL) and, to a lesser extent, the National Basketball Association (NBA). The practice generally resembles that of ice hockey, in that players do not shave until they either win a championship or are eliminated.
Other sports:
American football National Football League (NFL) players who have grown playoff beards include Chicago Bears quarterback (QB) Mitchell Trubisky, Seattle Seahawks quarterback (QB) Russell Wilson, Pittsburgh Steelers QB Ben Roethlisberger and defensive end (DE) Brett Keisel, New England Patriots wide receiver (WR) Julian Edelman and former Denver Broncos QB Jake Plummer. In fact, after Roethlisberger and his beard led the Steelers to their Super Bowl XL victory, he was shaved by David Letterman during an appearance on the Late Show with David Letterman.
Other sports:
Association football In 1993, Sheffield United's veteran striker Alan Cork did not shave during the club's four-month FA Cup run where they ultimately reached the semi-finals.
In Major League Soccer (MLS), players on the Houston Dynamo roster kept a "lucky beard" for the duration of the 2006 and 2007 MLS Cup Playoffs. They renewed the tradition during their run to the 2011 MLS Cup.
The LA Galaxy grew playoff beards during their run to the 2005 MLS Cup and again during the 2010 MLS Cup Playoffs.
Other sports:
Baseball The Boston Red Sox featured many players who grew beards during the team's 2013 season. "The beard-growing movement began in spring training with Mike Napoli and Jonny Gomes, and as the Red Sox kept winning — despite all predictions to the contrary — most of the team got on board with the beards." By the beginning of the World Series against the St. Louis Cardinals, only pitcher Koji Uehara was without facial hair, although he had worn a beard in the past. Fans everywhere joined the team in solidarity as good luck to win the 2013 World Series. On October 23, 2013, Business Insider posted pictures of the Red Sox players with and without their good luck charms. An additional superstition arose during the season and post-season: when a player scored an especially important run, his beard would be given a tug at the end of the game.
Other sports:
Basketball Cleveland Cavaliers center Zydrunas Ilgauskas wore a playoff beard in 2006, but did not bring it back for the 2007 playoffs, citing spousal disapproval.
In a variant of the playoff beard, the Dallas Mavericks stopped shaving during the 2012–13 regular season until the team reached a .500 winning percentage (achieved in mid-April).
Tennis Starting in the late 1970s, five-time Wimbledon champion Björn Borg used to let his beard grow prior to that particular tournament. Referring to that custom, Sports Illustrated published an article about Borg shortly before the 1981 Wimbledon tournament titled, "The beard has begun." Motorsport An October 2014 skit shows Team Penske personnel growing "Chase beards", including female staff.
Fan beards:
Fans often grow beards as a sign of support while their favorite team is in the playoffs.
In 2006, the NPR show Weekend America featured a segment about St. Louis Cardinals fans who grew beards during the playoffs. Several Cardinals players grew beards as well.
Outside of sports:
Male students at some universities in the United States, Canada, Sweden and New Zealand have also begun to sport an academic variation on the playoff beard, not shaving during the period between the end of regular classes and their final exam.
In 1960, partly to distance themselves from non-contributing teammate Aad van Wijngaarden (known for taking credit for the work of others), scientists Edsger W. Dijkstra and Jaap Zonneveld agreed not to shave until they completed the Electrologica ALGOL 60 compiler.
Other playoff hair:
During the 2010 playoffs, Patrick Kane of the Chicago Blackhawks chose to style his hair into a "playoff mullet" in addition to growing a playoff beard. He did it because of his struggles to grow a beard the year before. He has continued this throughout his career, including the Chicago Blackhawks' 2013 and 2015 championships. In the 2008 NHL playoffs, some of the Calgary Flames, including Craig Conroy, David Moss, Dustin Boyd and Jarome Iginla, got "faux-hawk" haircuts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vidalia (software)**
Vidalia (software):
Vidalia is a discontinued cross-platform GUI for controlling Tor, built using Qt. The name comes from the Vidalia onion since Tor uses onion routing. It allows the user to start, stop or view the status of Tor, view, filter or search log messages, monitor bandwidth usage, and configure some aspects of Tor. Vidalia also makes it easier to contribute to the Tor network by optionally helping the user set up a Tor relay.
Vidalia (software):
Another prominent feature of Vidalia is its Tor network map, which lets the user see the geographic location of relays on the Tor network, as well as where the user's application traffic is going.
Release:
Vidalia is released under the GNU General Public License. It runs on any platform supported by Qt 4.2, including Windows, Mac OS X, and Linux or other Unix-like variants using the X11 window system.
Vidalia is no longer maintained or supported, and Tor developers do not recommend its use anymore. In 2013 it was replaced with a Firefox-based Tor controller called Tor Launcher. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Value-added theory**
Value-added theory:
Value-added theory (also known as social strain theory) is a sociological theory, first proposed by Neil Smelser in 1962, which posits that certain conditions are needed for the development of a social movement.
Overview:
Smelser considered social movements to be the side-effects of rapid social change. He argued that six things were necessary and sufficient for collective behavior to emerge, and that social movements must evolve through the following relevant stages: Structural conduciveness: the structure of society must be organized in such a way that certain protest actions become more likely.
Structural strain: there must be a strain on society that is caused by factors related to the structure of the current social system, such as inequality or injustice, and existing power holders are unwilling or unable to address the problem.
Generalized belief: the strain should be clearly defined, agreed upon, and understood by participants in group action.
Precipitating factors: event(s) must occur that act as the proverbial spark that ignites the flame of action.
Mobilization for action: participants must have a network and organization that allows them to take collective action.
Operation (failure) of social control: authorities either will or will not react. High levels of social control by those in power, like politicians or police, often makes it more difficult for social movements to achieve their goals.
In academia:
The concept of value added is also utilized in the field of economics; in this case it refers to the total value of the revenue created by a product minus intermediate consumption.
Criticism:
Critics of value-added theory note that it is overly focused on the structural-functional approach because it views all strain on society as disruptive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Minor loop feedback**
Minor loop feedback:
Minor loop feedback is a classical method used to design stable robust linear feedback control systems using feedback loops around sub-systems within the overall feedback loop. The method is sometimes called minor loop synthesis in college textbooks and some government documents. The method is suitable for design by graphical methods and was used before digital computers became available. In World War II this method was used to design gun laying control systems. It is still used now, but not always referred to by name. It is often discussed within the context of Bode plot methods. Minor loop feedback can be used to stabilize opamps.
Example:
Telescope position servo This example is slightly simplified (no gears between the motor and the load) from the control system for the Harlan J. Smith Telescope at the McDonald Observatory. In the figure there are three feedback loops: the current control loop, the velocity control loop and the position control loop. The last is the main loop; the other two are minor loops. The forward path, considered without the minor loop feedback, has three unavoidable phase-shifting stages. The motor inductance and winding resistance form a low-pass filter with a bandwidth around 200 Hz. Acceleration to velocity is an integrator and velocity to position is an integrator. This gives a total phase shift of 180 to 270 degrees. Simply connecting position feedback would almost always result in unstable behaviour.
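As a rough illustration of the phase problem just described, the following minimal sketch computes the phase lag of such a forward path, assuming a single motor pole at 200 Hz and two ideal integrators. The 200 Hz figure comes from the paragraph above; everything else is an illustrative assumption rather than the actual telescope servo.

```python
import numpy as np

# Minimal sketch: phase lag of the forward path, assuming one motor pole
# at 200 Hz (inductance/resistance low-pass) plus two ideal integrators
# (acceleration -> velocity -> position). Illustrative only.
F_POLE_HZ = 200.0

def forward_path_phase_deg(f_hz):
    """Total phase of the assumed forward path at frequency f_hz."""
    phase_motor = -np.arctan2(f_hz, F_POLE_HZ)  # low-pass filter lag
    phase_integrators = -np.pi                  # two integrators, -90 deg each
    return np.degrees(phase_motor + phase_integrators)

for f in (1, 10, 100, 1000):
    print(f"{f:5d} Hz: {forward_path_phase_deg(f):7.1f} deg")
```

The lag starts near 180 degrees at low frequency and approaches 270 degrees well above the motor pole, which is why closing a simple position loop directly around this path is unstable.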
Example:
Current control loop The innermost loop regulates the current in the torque motor. This type of motor creates torque that is nearly proportional to the rotor current, even if it is forced to turn backward. Because of the action of the commutator, there are instances when two rotor windings are simultaneously energized. If the motor were driven by a voltage controlled voltage source, the current would roughly double, as would the torque. By sensing the current with a small sensing resistor (RS) and feeding that voltage back to the inverting input of the drive amplifier, the amplifier becomes a voltage controlled current source. With constant current, when two windings are energized, they share the current and the variation of torque is on the order of 10%.
Example:
Velocity control loop The next innermost loop regulates motor speed. The voltage signal from the Tachometer (a small permanent magnet DC generator) is proportional to the angular velocity of the motor. This signal is fed back to the inverting input of the velocity control amplifier (KV). The velocity control system makes the system 'stiffer' when presented with torque variations such as wind, movement about the second axis and torque ripple from the motor.
Example:
Position control loop The outermost loop, the main loop, regulates load position. In this example, position feedback of the actual load position is presented by a Rotary encoder that produces a binary output code. The actual position is compared to the desired position by a digital subtracter that drives a DAC (Digital-to-analog converter) that drives the position control amplifier (KP). Position control allows the servo to compensate for sag and for slight position ripple caused by gears (not shown) between the motor and the telescope. Synthesis The usual design procedure is to design the innermost subsystem (the current control loop in the telescope example) using local feedback to linearize and flatten the gain. Stability is generally assured by Bode plot methods. Usually, the bandwidth is made as wide as possible. Then the next loop (the velocity loop in the telescope example) is designed. The bandwidth of this sub-system is set to be a factor of 3 to 5 less than the bandwidth of the enclosed system. This process continues with each loop having less bandwidth than the bandwidth of the enclosed system. As long as the bandwidth of each loop is less than the bandwidth of the enclosed sub-system by a factor of 3 to 5, the phase shift of the enclosed system can be neglected, i.e. the sub-system can be treated as simple flat gain. Since the bandwidth of each sub-system is less than the bandwidth of the system it encloses, it is desirable to make the bandwidth of each sub-system as large as possible so that there is enough bandwidth in the outermost loop. The system is often expressed as a Signal-flow graph and its overall transfer function can be computed from Mason's Gain Formula. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
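A minimal sketch of the bandwidth cascade described in the synthesis procedure, assuming an inner current-loop bandwidth of 200 Hz and a separation factor of 4 (both purely illustrative; the recommended factor is anywhere from 3 to 5):

```python
# Each enclosing loop is designed with 3-5 times less bandwidth than the
# loop it encloses, so its inner loop can be treated as flat gain.
INNER_BANDWIDTH_HZ = 200.0   # assumed current-loop bandwidth
SEPARATION_FACTOR = 4.0      # assumed value between the recommended 3 and 5

bandwidth = INNER_BANDWIDTH_HZ
for loop in ("current", "velocity", "position"):
    print(f"{loop:>8} loop bandwidth: about {bandwidth:6.1f} Hz")
    bandwidth /= SEPARATION_FACTOR
```

With these assumed numbers the position loop ends up with roughly 12 Hz of bandwidth, which is why the procedure tries to make every inner loop as fast as possible.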
**Searchmedica**
Searchmedica:
SearchMedica was a series of free medical search engines built by doctors for doctors and other medical professionals, with localized versions for the United Kingdom, the United States, France and Spain.
Description:
SearchMedica was a specialist medical search engine for medical professionals. There were four localized versions: Searchmedica.co.uk, which targeted UK GPs and medical professionals, and included clinical, drug and research data to the latest NICE, PCT or Department of Health guidelines.
SearchMedica.com with sections for nine therapeutic areas (cardiovascular, diabetes/endocrine, hematology/oncology, infectious, mental/nervous system, musculoskeletal, pediatric, radiology and respiratory) as well as all of medicine (content from all of the foregoing specialties and more) and practice management. It included data from PubMed (Medline citations) as well as other health-related government websites and authoritative clinical sites, analyzed and approved by medical specialists.
Description:
Searchmedica.fr, a version for the French market with a focus on medical websites in French; and Searchmedica.es, a Spanish version. SearchMedica connected medical professionals with well-known, credible journals, peer-reviewed research, and evidence-based articles written for practicing healthcare professionals. In addition to ranking search results according to both publication date and relevance, SearchMedica also allowed users to focus searches by categories such as journal content, evidence-based medicine, guidelines, and patient information. SearchMedica was run by CMPMedica, a part of United Business Media, and was set up using search engine technology from Convera.
History:
The English version was launched in June 2006, followed by the US version in August 2006 and the French version in February 2007. The Spanish version went live in the summer of 2007 and a relaunch occurred in January 2008.
None of the primary websites appear to be operational as of 2020.
Business model:
SearchMedica made money through advertising. However, advertising did not bias the ranking of results in its search engine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cable Internet access**
Cable Internet access:
In telecommunications, cable Internet access, shortened to cable Internet, is a form of broadband Internet access which uses the same infrastructure as cable television. Like digital subscriber line and fiber to the premises services, cable Internet access provides network edge connectivity (last mile access) from the Internet service provider to an end user. It is integrated into the cable television infrastructure analogously to DSL which uses the existing telephone network. Cable TV networks and telecommunications networks are the two predominant forms of residential Internet access. Recently, both have seen increased competition from fiber deployments, wireless, and mobile networks.
Hardware and bit rates:
Broadband cable Internet access requires a cable modem at the customer's premises and a cable modem termination system (CMTS) at a cable operator facility, typically a cable television headend. The two are connected via coaxial cable to a hybrid fibre-coaxial (HFC) network. While access networks are referred to as last-mile technologies, cable Internet systems can typically operate where the distance between the modem and the termination system is up to 160 kilometres (99 mi). If the HFC network is large, the cable modem termination system can be grouped into hubs for efficient management. Several standards have been used for cable internet, but the most common is DOCSIS.A cable modem at the customer is connected via coaxial cable to an optical node, and thus into an HFC network. An optical node serves many modems as the modems are connected with coaxial cable to a coaxial cable "trunk" via distribution "taps" on the trunk, which then connects to the node, possibly using amplifiers along the trunk. The optical node converts the Radiofrequency (RF) signal in the coaxial cable trunk into light pulses to be sent through optical fibers in the HFC network. At the other end of the network, an optics platform or headend platform converts the light pulses into RF signals in coaxial cables again using transmitter and receiver modules, and the cable modem termination system (CMTS) connects to these coaxial cables. An example of an optics platform is the Arris CH3000. There are two coaxial cables at the CMTS for each node: one for the downstream (download speed signal), and the other for the upstream (upload speed signal). The CMTS then connects to the ISP's IP (Internet Protocol) network.Downstream, the direction toward the user, bit rates can be as high as 1 Gbit/s. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 50 Mbit/s, although maximum effective range seems to be unknown. One downstream channel can handle hundreds of cable modems. As the system grows, the CMTS can be upgraded with more downstream and upstream ports, and grouped into hub CMTSs for efficient management.
Hardware and bit rates:
Most Data Over Cable Service Interface Specification (DOCSIS) cable modems restrict upload and download rates, with customizable limits. These limits are set in configuration files which are downloaded to the modem using the Trivial File Transfer Protocol, when the modem first establishes a connection to the provider's equipment. Some users have attempted to override the bandwidth cap and gain access to the full bandwidth of the system by uploading their own configuration file to the cable modem - a process called uncapping.
Shared bandwidth:
In most residential broadband technologies, such as cable Internet, DSL, satellite internet, or wireless broadband, a population of users share the available bandwidth. Some technologies share only their core network, while some including cable internet and passive optical network (PON) also share the access network. This arrangement allows the network operator to take advantage of statistical multiplexing, a bandwidth sharing technique which is employed to distribute bandwidth fairly, in order to provide an adequate level of service at an acceptable price. However, the operator has to monitor usage patterns and scale the network appropriately, to ensure that customers receive adequate service even during peak-usage times. If the network operator does not provide enough bandwidth for a particular neighborhood, the connection would become saturated and speeds would drop if many people are using the service at the same time, or drop out completely. Operators have been known to use a bandwidth cap, or other bandwidth throttling technique; users' download speed is limited during peak times, if they have downloaded a large amount of data that day. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
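The trade-off behind statistical multiplexing can be made concrete with a toy model. The sketch below, with entirely made-up numbers (it is not a DOCSIS calculation), treats each subscriber as independently active with some probability at peak time and estimates how often total demand would exceed the shared downstream capacity:

```python
from math import comb

# Toy model of a shared downstream channel. All figures are assumptions.
SUBSCRIBERS = 200            # modems sharing the channel
P_ACTIVE = 0.10              # chance a subscriber is downloading at peak
CHANNEL_MBPS = 1000.0        # shared downstream capacity
PER_USER_MBPS = 25.0         # demand of an active subscriber

max_concurrent = int(CHANNEL_MBPS // PER_USER_MBPS)  # users the channel can carry

# Probability that more than max_concurrent subscribers are active at once
# (binomial tail), i.e. the chance the segment is saturated at peak time.
p_saturated = sum(
    comb(SUBSCRIBERS, k) * P_ACTIVE**k * (1 - P_ACTIVE)**(SUBSCRIBERS - k)
    for k in range(max_concurrent + 1, SUBSCRIBERS + 1)
)
print(f"Estimated probability of saturation at peak: {p_saturated:.4f}")
```

Operators run this kind of estimate in reverse: given an acceptable saturation probability, they decide how many subscribers a segment can carry before the node must be split or upgraded.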
**Alligator leather**
Alligator leather:
Leather is created when an animal skin or hide is chemically treated in a process called tanning to preserve it for long-term use as material for clothing, handbags, footwear, furniture, sports equipment and tools. Alligator leather is commonly used to create similar items to those mentioned above.
Alligator leather is not only used due to its durable skin, but also its natural enamel sheen, which is aesthetically pleasing for consumers buying expensive products.
History:
The earliest use of alligator skin was said to be in 1800 in North America. It was used to make boots, shoes, saddles and other products. Although the first use was recorded around 1800, alligator skin production increased markedly during the mid-1800s. During the American Civil War, beginning in 1861, saddles and boots were made for the Confederate troops, which helped alligator leather become a leading choice of leather. The durability and softness of alligator leather today date from the start of commercial tanning in the early 1900s in New York, New Jersey and Europe, which brought a major increase in demand for it as a fashion material. The sudden spike in demand for the leather led to a decline in the alligator population by the mid-1900s in Louisiana, the biggest harvesting state in America. In 1962, alligator hunting was closed statewide because of low numbers, the effect of unregulated harvests. By 1967, alligators were on the endangered species list in America. This did not last long, however: by 1987, alligators were no longer threatened with endangerment, thanks to cultivation and conservation efforts that allowed their numbers to gradually increase again.
Applications:
Handbags Luxury brands are known to use rare and expensive materials to justify their prices. One of these sought-after materials is alligator leather. One brand that is synonymous with high-end luxury is Hermès; one of its most iconic and expensive bags ever made is the Alligator Birkin, which was priced at a staggering US$379,261 at auction. Luxury brands prefer the highest-grade underbelly section of the alligator, as they usually need a large piece of the hide. Lower-grade alligator leather usually has scar tissue that decreases the value of the product.
Applications:
Boots or shoes Alligator leather shoes and boots are a common sight in high-end retail stores. It is also common for the American Midwest/cowboy market to invest in alligator leather boots or shoes because of their durability. While high-grade leather can create exceptional quality, low-grade leather can also be pieced together to create a good product.
Applications:
Clothing Leather use for clothing dates back to 1200 BC, when the ancient Greeks used it as a material because it was durable and helped tackle different climates. While leather is still used as a material for winter clothing, alligator leather, because of its rarity and exclusivity, has become a luxury. High-end, expensive brands use alligator hide for clothing items, the most popular being jackets and winter wear.
Process of tanning leather:
Tanning is the process of converting the raw skin of an animal into leather. If not done correctly, the skin is prone to bacteria and ultimately decomposition. The tanning process for alligator is lengthy. The process starts with obtaining the skin; depending on the final product the leather will be used for, the tanner chooses which part of the animal to use.
Process of tanning leather:
Extraction of skin Depending on what the final product will be, tanneries dissect and process that portion of the alligator. For a softer and more malleable product, tanners choose to use the underbelly and perform a "belly cut". Usually younger farm-raised alligators are chosen for this process, as the skin is not as tough as that of an adult alligator. Underbelly hide is usually used for luxury products, and farm-raised alligators will not have the scars or marks that wild alligators may have, because they are not raised in the wild and therefore do not encounter situations that would cause scars or damage to the skin.
Process of tanning leather:
The "hornback cut" however is used more for the raised scaly appearance that products such as belt or boot makers desire. This is the top portion of the alligator. Older wild alligators are used for this process of leather.
Process of tanning leather:
Scraping The raw hide is then scraped, usually with a dull tool so that it does not penetrate or cut the skin. This removes the remaining flesh and fat on the hide. Once this is done the inner portion of the skin will look white, although this does not fully remove all flesh and fat. The skin is then washed to remove any blood or flesh residue attached to it and left to dry.
Process of tanning leather:
Dry salting Once dry, a layer of salt is applied to all parts of the skin to completely dry it out. Drying out the skin slows the process of decomposition. This step is usually repeated two or three times.
Process of tanning leather:
Brining or pickling Once the salting process is done, the hide is ready for brining. Brining is the chemical process that further enhances the curing process of the alligator. This is to remove any bacteria or elements that could attract bacteria, as that is what causes decomposition. To brine the hide, usually a mixture of bleach, borax and salt is added to water in a plastic drum. The alligator is then soaked in the solution for about 48 hours to completely remove any non-tannable proteins.
Process of tanning leather:
Neutralizing For optimum tanning acidity levels (pH 4 or 5), sodium bicarbonate is added to the pickle solution. Throughout the pickling stage, the water must be at room temperature. The hide is then returned to the plastic drum for another 20–30 minutes.
Degreasing Degreasing is the process of removing any left over fat from the hide so that there is no chance of oil or fat stains. This can be done with a degreasing product or heavy duty washing liquid. The hide is placed in another plastic drum with the product or washing liquid and warm water.
Tanning:
The two most common tanning methods, and the one that alligator hide requires, are as follows: Vegetable tanning Vegetable tanning uses the natural tannins that are found in plants, tree bark and other natural sources. This process produces brown leather that is soft and malleable. Although it is well suited to products such as bookbinding or early plate armour because of its softness, the leather tends to shrink when in contact with water, as it is unstable. This is one of the most environmentally friendly ways to tan leather, as no additional acids or chemicals are used.
Tanning:
Chromium tanned leather Chromium tanning is the most popular tanning method as 90% of all leather in the world is processed this way. Alligator hide is also tanned using the Chromium process. A reason many tanneries choose to use chromium is due to the final leather product being more durable and stretchy, ideal for leather accessories and garments. The process includes submerging the hide in a toxic slush of chromium salts and chemicals to create a light blue product that is supple and durable. This process of tanning is done usually twice to soften the leather to its desired texture. Chromium tanning also creates a stronger product that is more resistant to water and creates less shrinkage when in contact.
Tanning:
Environmental impact The environmental impact of leather, especially from the tanning process, is strongly negative. The waste products of tanning can be broken down into two categories.
Tanning:
Water waste An enormous amount of polluted water is discharged from the tanning process. Almost 90% of the pollution of the leather industry comes from the tanning and pre-tanning stages of production. As a base calculation, tanning one ton of hide creates 20 to 80 cubic meters of polluted water. The chemicals raise the pH of the water and produce high levels of chemical oxygen demand (COD) and total dissolved solids (TDS). The water also shows an increase in chloride and sulphate levels. The process also uses a large amount of water, which is not environmentally friendly.
Tanning:
One example of poor disposal of wastewater can be seen in Hazaribagh, Bangladesh. Ironically, Hazaribagh translates from Urdu as "a thousand gardens", quite the contrary to the actual situation. Although the leather industry in Bangladesh is a billion-dollar industry providing thousands of jobs, its environmental impact is grave. The use of chromium salts, acids and toxins has caused the Buriganga River, which runs along Hazaribagh, to turn black. An estimated 21,600 cubic meters of polluted water was disposed of per day in 2005.
Tanning:
Solid waste Tanneries produce a huge amount of solid waste. Generally 35–60% of the total amount of solid waste is organic matter, consisting of the flesh and organic body parts of the animal. Because this material decomposes, a lack of composting or efficient disposal can cause a great deal of disease and unsanitary waste. This affects groundwater systems and agricultural activities, as the waste is usually dumped in landfills.
Tanning:
Health complications The tanning industry not only causes detrimental environmental impact, it also creates many health complications for workers in countries without effective safety regulations and protection standards. Some of these countries are China, India and Bangladesh. The toxic exposure to the chemicals in tanneries causes skin and respiratory disease amongst workers due to the lack of safety equipment and training.
Tanning:
Tanneries that use chromium increase the chances of workers getting respiratory illnesses, which can sometimes lead to lung, nasal or sinus cancer. The raw hides are also a breeding ground for anthrax; left untreated, this can be a potentially deadly disease. Chromium tanning can also severely damage workers' skin: with unprotected handling, absorbed chromium can leave the skin dry and cracked, and the acidity in the tanning and liming stages can cause erosions in the skin, which are irreversible. According to a 2001 estimate, around 90% of workers die before the age of 50 due to health complications in Hazaribagh, Bangladesh, a city well known for its tanning industry and its low health and safety standards; and this is only one city among many around the world. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Apple Pie ABC**
Apple Pie ABC:
Apple Pie ABC is an old and enduring English alphabet rhyme for children which has gone through several variations since the 17th century.
History:
The Apple Pie ABC is a simple rhyme meant to teach children the order of the alphabet and relates the various ways children react to an apple pie. After the first line, A was an apple pie, the rest of the letters refer to verbs. The earliest printed versions, dating from the 18th century, have the following form: "A was an Apple pie; B bit it; C cut it; D dealt it; E eat it; F fought for it; G got it; H had it; J joined it; K kept it; L longed for it; M mourned for it; N nodded at it; O opened it; P peeped in it; Q quartered it; R ran for it; S stole it; T took it; V viewed it; W wanted it; X, Y, Z, and &, All wished for a piece in hand". At that time the writing of the capital letters I and J, and of U and V, was not differentiated, which explains the absence of the two vowels. Later versions added I and U with, "I inspected it" and "U upset it".
History:
The earliest mention of the rhyme was in a religious work dated 1671, but covered only the letters A-G. It first appeared in printed form in Child’s New Plaything: being a spelling-book intended to make the learning to read a diversion instead of a task (London 1742, Boston 1750), followed soon after by Tom Thumb's Playbook to teach children their letters as soon as they can speak, being a new and pleasant method to allure little ones in the first principles of learning (London 1747; Boston 1764). The latter was reprinted eight times in the U.S. by the end of the century. But by then much the same rhyme was appearing under the macabre title The Tragical Death of A, Apple Pye Who was Cut in Pieces and Eat by Twenty-Five Gentlemen with whom All Little People Ought to be Very well acquainted (London 1770; Worcester, Mass. 1787) - also many times reprinted in both countries.
History:
It has been speculated that the phrase ‘in apple pie order’ refers to the regular progression of this alphabet rhyme.
Variations:
Variations in wording began to appear with the start of the 19th century. The History of the APPLE PIE, an Alphabet for little Masters and Misses, ‘written by Z’ (London 1808), has "B bit it, C cried for it, D danced for it, E eyed it, F fiddled for it, G gobbled it, H hid it, I inspected it, J jumped over it, K kicked it, L laughed at it, M mourned for it, N nodded for it, O opened it, P peeped into it, Q quaked for it, R rode for it, S skipped for it, T took it, U upset it, V viewed it, W warbled for it, X Xerxes drew his sword for it, Y yawned for it, Z zealous that all good boys and girls should be acquainted with his family, sat down and wrote the history of it". There are two American versions in the Beinecke Rare Book & Manuscript Library; a slightly different English version from 1835 is archived in the Open Library The most popular illustrated later edition of the rhyme was Kate Greenaway’s A Apple Pie: An Old-Fashioned Alphabet Book (London, 1886), which has been continuously reprinted up to the present. In place of the plaintive yearning for a piece of the pie with which the original version ends, she substitutes the more fulfilled "UVWXYZ all had a large slice and went off to bed", so allowing herself to get away with only twenty illustrations.
Variations:
The rhyme also began to be changed in other ways, as in The Real History of the Apple Pie, which has an extended coda: Says A, give me a good large slice, Says B, a little bit, but nice, Says C, cut me a piece of crust, Take it, says D, it’s dry as dust, Says E, I’ll eat it fast, I will, Says F, I vow I’ll have my fill, Says G, give it me good and great, Says H, a little bit I hate, Says I, it’s ice I must request, Says J, the juice I love the best, Says K, let’s keep it up above, Says L, the border’s what I love, Says M, it makes your teeth to chatter, N said, it’s nice, there’s nought the matter, O others’ plates with grief surveyed, P for a large piece begged and prayed, Q quarrelled for the topmost slice, R rubbed his hands and said “it’s nice,” S silent sat, and simply looked, T thought, and said, it’s nicely cooked, U understood the fruit was cherry, V vanished when they all got merry, W wished there’d been a quince in, X here explained he’d need convincing, Y said, I’ll eat, and yield to none, Z, like a zany, said he’d done, While & purloined the dish, And for another pie did wish.Eventually completely original works were created that took their beginning from the rhyme. In 1871 Edward Lear made fun of it in his nonsense parody "A was once an apple pie", which soon diverged into nursery language and then treated other subjects for the rest of the alphabet. The illustrations in McLoughlin Brothers' linen-mounted Apple Pie ABC (New York, 1888) appear to be largely dependent on the original work but the verses are different: E stands for Ellen who sat at the table And tried to eat more than she really was able.
Variations:
F had a fight with his sisters and brothers, Declaring he would not divide with the others.In 1899, however, the firm printed the original rhyme under the title ABC of the Apple Pie. Meanwhile, Raphael Tuck and Sons were publishing their own linen-mounted Father Tuck’s Apple Pie ABC (London, 1899) which, once more, features a completely different rhyme: E stands for eat; wait till it’s cooled from the heat.
Variations:
F stands for fruit – best of all, apples sweet.Despite the popularity of revised and new versions during the 19th century, the original rhyme did not drop out of circulation. Kate Greenaway’s late Victorian A, Apple Pie was largely based on the old rhyme, as were some 20th-century examples. The accompanying illustrations, however, have now moved their focus from using children as protagonists to a more fanciful approach this century, ranging from the whimsical beasts of Étienne Delessert (Aa was an Apple Pie, Mankato, Minn. 2005) to the animated alphabet of England's Luke Farookhi.
Literary allusions:
The nursery rhyme seems to have appealed particularly to the English writer Charles Dickens, who mentions it in three works. The first mention is within an essay on "A Christmas Tree" (1850), on which the illustrated nursery books so popular at the time are hung. "Thin books, in themselves, at first, but many of them, and with deliciously smooth covers of bright red or green. What fat black letters to begin with! 'A was an archer, and shot at a frog.' Of course he was. He was an apple-pie also, and there he is! He was a good many things in his time, was A, and so were most of his friends, except X, who had so little versatility, that I never knew him to get beyond Xerxes or Xantippe."His novel Bleak House (1852) introduces an allusion into a description of legal process in the Court of Chancery: "Equity sends questions to law, law sends questions back to equity; law finds it can't do this, equity finds it can't do that; neither can so much as say it can't do anything, without this solicitor instructing and this counsel appearing for A, and that solicitor instructing and that counsel appearing for B; and so on through the whole alphabet, like the history of the apple pie."Finally there is his story of "The Italian Prisoner" (1860), which details the difficulties of transporting an enormous bottle of wine through Italy. "The suspicions that attached to this innocent Bottle greatly aggravated my difficulties. It was like the apple-pie in the child's book. Parma pouted at it, Modena mocked it, Tuscany tackled it, Naples nibbled it, Rome refused it, Austria accused it, Soldiers suspected it, Jesuits jobbed it."The reference to Xerxes in the first of these quotations, and to "the history of the apple pie" in the second, suggests that it is the "Z" version with which Dickens is acquainted. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Epley maneuver**
Epley maneuver:
The Epley maneuver or repositioning maneuver is a maneuver used by medical professionals to treat one common cause of vertigo, benign paroxysmal positional vertigo (BPPV) of the posterior or anterior canals of the ear. The maneuver works by allowing free-floating particles, displaced otoconia, from the affected semicircular canal to be relocated by using gravity, back into the utricle, where they can no longer stimulate the cupula, therefore relieving the patient of bothersome vertigo. The maneuver was developed by the physician John M. Epley and was first described in 1980. A version of the maneuver called the "modified" Epley does not include vibrations of the mastoid process originally indicated by Epley, as the vibration procedures have been proven ineffective. The modified procedure has become that now described generally as the Epley maneuver.
Effectiveness:
An Epley maneuver is a safe and effective treatment for BPPV, although the condition recurs in approximately one third of cases.
Sequence of positions:
The following sequence of positions describes the Epley maneuver: The patient begins in an upright sitting posture, with the legs fully extended and the head rotated 45 degrees toward the side in the same direction that gives a positive Dix–Hallpike test.
Then the patient is quickly lowered into a supine position (on the back), with the head held approximately in a 30-degree neck extension (Dix-Hallpike position), with the head remaining rotated to the side.
The clinician observes the patient's eyes for “primary stage” nystagmus.
The patient remains in this position for approximately 1–2 minutes.
Then the patient's head is rotated 90 degrees in the opposite direction, so that the opposite ear faces the floor, while maintaining 30 degrees of neck extension.
The patient remains in this position for approximately 1–2 minutes.
Keeping the head and neck in a fixed position relative to the body, the patient rolls onto the shoulder, rotating the head another 90 degrees in the direction being faced. Now the patient is looking downward at a 45-degree angle.
The eyes should be observed immediately by the clinician for “secondary stage” nystagmus (this secondary stage nystagmus should be in the same direction as the primary stage nystagmus).
The patient remains in this position for approximately 1–2 minutes.
Finally, the patient is slowly brought up to an upright sitting posture, while maintaining the 45-degree rotation of the head.
The patient holds a sitting position for up to 30 seconds. These steps may be repeated twice, for a total of three times during a procedure. During every step of this procedure, the patient may experience some dizziness.
Post-treatment phase:
Following the treatment, the clinician may provide the patient with a soft collar, often worn for the remainder of the day, as a cue to avoid any head positions that may once again displace the otoconia. The patient may be instructed to be cautious of bending over, lying backward, moving the head up and down, or tilting the head to either side. For the next two nights, patients should sleep in a semi-recumbent position. This means sleeping with the head halfway between being flat and being upright (at a 45-degree angle). This is most easily done by using a recliner chair or by using pillows arranged on a couch. The soft collar is removed occasionally. When doing so, the patient should be encouraged to perform horizontal movements of the head to maintain normal neck range of motion. It is important to instruct the patient that horizontal movement of the head should be performed to prevent stiff neck muscles.
Post-treatment phase:
It remains uncertain whether activity restrictions following the treatment improve the effectiveness of the canalith repositioning maneuver. However, study patients who were not provided with any activity restrictions needed one or two additional treatment sessions to attain a successful outcome. The Epley maneuver appears to be a long-term, effective, and conservative treatment for BPPV that has a limited number of complications (nausea, vomiting, and residual vertigo) and is well tolerated by patients.
Background information:
The goal of an Epley maneuver is to restore the equilibrium of the vestibular system, more specifically of the semicircular canals, in order to treat the symptoms associated with BPPV. There is compelling evidence that free-floating otoconia, probably displaced from the otolithic membrane in the utricle, are the main cause of this disequilibrium. Recent pathological findings also suggest that the displaced otoconia typically settle in the posterior semicircular canal in the cupula of the ampulla and render it sensitive to gravity. The cupula moves in relation to acceleration of the head during rotary movements and signals to the brain via action potentials which way the head is moving in relation to its surroundings. However, once a crystal becomes lodged in the cupula, only slight head movements in combination with gravity are needed to create an action potential, which signals to the brain that the head is moving through space when in reality it is not, thus creating the experience of vertigo associated with BPPV. When a therapist is performing an Epley maneuver, the patient's head is rotated to 45 degrees in the direction of the affected side, in order to target the posterior semicircular canal of the affected side. When the patient is passively positioned from an upright seated posture down to a supine (lying on the back) position, this momentum helps to dislodge the otoconia (crystal) embedded in the cupula. Steps 3–10 in the above-mentioned procedure are intended to cause the newly dislodged crystal to be brought back to the utricle through the posterior semicircular canal so that it can be re-absorbed by the utricle. In 1957, John M. Epley received his M.D. degree from the University of Oregon Medical School (now Oregon Health Sciences University). While a resident at Stanford Medical School he conducted original research on the first multichannel cochlear implant. He developed his BPPV technique in 1979. He died July 31, 2019. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Biodistribution**
Biodistribution:
Biodistribution is a method of tracking where compounds of interest travel in an experimental animal or human subject. For example, in the development of new compounds for PET (positron emission tomography) scanning, a radioactive isotope is chemically joined with a peptide (subunit of a protein). This particular class of isotopes emits positrons (which are antimatter particles, equal in mass to the electron, but with a positive charge). When ejected from the nucleus, a positron encounters an electron and the pair undergoes annihilation, which produces two gamma rays travelling in opposite directions. These gamma rays can be measured, and when compared to a standard, quantified.
Biodistribution analysis:
Purpose and results A useful novel radiolabelled compound is one that is suitable either for medical imaging of certain body parts such as brain or tumors (injecting low doses of radioactivity) or for treating tumors (requiring injection of high doses of radioactivity). In both cases, the compound needs to accumulate in the target organ and any surplus compound present needs to clear the body rapidly. In medical diagnostic imaging, this then produces a clear diagnostic image (high image contrast), and in radiotherapy leads to an attack of the target (e.g. tumor) while minimizing side effects to non-target organs. Additional factors need to be evaluated in the development of a new diagnostic or therapeutic compound, including safety for humans. From an efficacy point of view, the biodistribution is an important aspect which can be measured by dissection or by imaging.
Biodistribution analysis:
By dissection For example, a new radiolabelled compound is injected intravenously into a group of 16-20 rodents (typically mice or rats). At intervals of 1, 2, 4, and 24 hours, smaller groups (4-5) of the animals are euthanized, then dissected. The organs of interest (usually: blood, liver, spleen, kidney, muscle, fat, adrenals, pancreas, brain, bone, stomach, small intestine, and upper and lower large intestine, a tumor if present) are placed in pre-weighed containers and weighed, then placed into a device that measures radioactivity (e.g. gamma radiation). Normalizing the tissue radioactivity concentrations to the injected dose gives values in units of percent of the injected dose per gram of organ or biological tissue. The results give a dynamic view of how the compound moves through the animal and where it is retained.
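The normalization step can be expressed in a few lines. The sketch below, with invented counts and organ weights, shows the percent-injected-dose-per-gram (%ID/g) calculation described above; decay correction and background subtraction are omitted for brevity:

```python
# %ID/g sketch with made-up numbers; real studies would also apply decay
# correction, background subtraction, and standards-based calibration.
INJECTED_DOSE_COUNTS = 5.0e6   # counts measured for the injected dose standard

organs = {
    # organ: (measured counts, weight in grams) -- illustrative values
    "blood":  (1.2e5, 1.00),
    "liver":  (4.5e5, 1.30),
    "kidney": (2.0e5, 0.35),
    "tumor":  (3.0e5, 0.20),
}

for organ, (counts, grams) in organs.items():
    pct_id_per_g = 100.0 * counts / INJECTED_DOSE_COUNTS / grams
    print(f"{organ:>6}: {pct_id_per_g:5.2f} %ID/g")
```

The imaging-based workflow described next yields the analogous quantity per milliliter of tissue (%ID/mL), with organ volumes taken from the CT or MR image instead of weighing dissected organs.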
Biodistribution analysis:
By imaging Similar to the dissection procedure, animals are injected with a low dose of a radiolabelled compound. At the chosen time points after injection, PET or SPECT images are acquired, typically also a CT or MR image for anatomical reference. The radioactivity concentration is measured from the PET or SPECT images for the various organs of interest. This may include measuring the volume of these organs e.g. from the CT image (rather than weighing the organs as in the dissection procedure) or assessing the radioactivity concentration in a representative part of the organ. Normalizing the tissue radioactivity concentrations to the injected dose gives values in units of percent of the injected dose per milliliter of organ or biological tissue.
Biodistribution analysis:
A benefit of imaging is that the animals can be anaesthetized for imaging for several or all the required time points, that is few animals are required for this procedure and all of them are kept alive. This is considered a non-invasive procedure. In addition, the procedure is in essence the same as for medical diagnostic imaging in the clinic with two main differences: (1) novel compounds under development may be injected into animals subject to scrutiny and approval of the detailed experimental plan while clinicians can only inject radiolabelled compounds that had been tested rigorously and approved for use in humans; (2) animals usually need to be anaesthetized for the duration of the scan (on the order of minutes) while humans are awake and simply need to stay still during the scan.
Non-invasive biodistribution imaging in gene therapy:
In gene therapy, gene delivery vectors, such as viruses, can be imaged according either to their particle biodistribution or to their transduction pattern. The former means labeling the viruses with a contrast agent that is visible in some imaging modality, such as MRI or SPECT/PET, and the latter means visualising the marker gene of the gene delivery vector by means of immunohistochemical methods, optical imaging or even PCR. Non-invasive imaging has gained popularity as the imaging equipment has become available for research use from clinics.
Non-invasive biodistribution imaging in gene therapy:
For example, avidin-displaying baculoviruses could be imaged in rat brain by coating them with biotinylated iron particles, rendering them visible in MR imaging. The biodistribution of the iron-virus particles was seen to concentrate on the choroid plexus cells of lateral ventricles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dog leukocyte antigen**
Dog leukocyte antigen:
The dog leukocyte antigen (DLA) is a part of the major histocompatibility complex (MHC) in dogs, encoding genes in the MHC. The DLA and MHC system are interchangeable terms in canines. The MHC plays a critical role in the immune response system and consists of three regions: class I, class II and class III. DLA genes belong to the first two classes, which are involved in the regulation of antigens in the immune system. The class II genes are highly polymorphic, with many different alleles/haplotypes that have been linked to diseases, allergies, and autoimmune conditions such as diabetes, polyarthritus, and hypothyroidism in canines.
Dog leukocyte antigen:
There are likely hundreds of immunologically relevant genes making up the DLA region in the canine genome; to date, the complete characteristics of the region are unknown. MHC genes represent candidates for disease susceptibility in canines; some alleles promote protection against immune-mediated diseases and some increase susceptibility. For example, certain combinations of the DLA-DRB1 and DQ alleles are most favorable for good immune regulation. These alleles help balance immune surveillance and immune response without increasing the risk of developing an autoimmune condition. Different canine breeds have different MHC/DLA allele associations; these genes exhibit more inter-breed differentiation than intra-breed differentiation. Dogs have been selectively bred for different phenotypes, so the underlying genotypes and linked regions also differ among breeds. Selection on the DLA can lead to an increase in the prevalence of immune-mediated diseases. Due to selective breeding some breeds have become restricted in their DLA genes, with a limited subset of DLA alleles occurring within the breed. This explains some of the variation in immune responses among breeds, and it occurs because there is a strong linkage disequilibrium between DLA class II loci. The pattern displayed by the genetic differences among human ethnic groups is analogous to the pattern displayed by the distribution of DLA types in different canine breeds. MHC genes in humans are also known to be major contributors to autoimmune condition development.
Canine diabetes and DLA:
In 1974 J. Nerup and others discovered that there is a link between diabetes and MHC genes. Dog leukocyte antigen has been found to be the genetic component associated with canine diabetes. The common alleles/haplotypes found in diabetes-prone breeds (Samoyed, Cairn Terrier, and Tibetan Terrier) are DLA DBR1*009, DQA1*001, and DQB1*008. The DLA DQA1 alleles code for an arginine amino acid at position 55 in region two; this increases the risk of developing diabetes in dogs, as arginine is a positively charged amino acid which can impair antigen binding. This allele is also associated with hypothyroidism, which implies that it increases susceptibility to endocrinopathic immune-mediated diseases. It is possible that the link discovered between DLA associations and diabetes is due to "markers" of susceptibility and that the true source of susceptibility lies elsewhere in the genome. It could be associated with particular DLA alleles/haplotypes or caused by the strong linkage disequilibrium. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Robust optimization**
Robust optimization:
Robust optimization is a field of mathematical optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the parameters of the problem itself and/or its solution. It is related to, but often distinguished from, probabilistic optimization methods such as chance-constrained optimization.
History:
The origins of robust optimization date back to the establishment of modern decision theory in the 1950s and the use of worst case analysis and Wald's maximin model as a tool for the treatment of severe uncertainty. It became a discipline of its own in the 1970s with parallel developments in several scientific and technological fields. Over the years, it has been applied in statistics, but also in operations research, electrical engineering, control theory, finance, portfolio management, logistics, manufacturing engineering, chemical engineering, medicine, and computer science. In engineering problems, these formulations often take the name of "Robust Design Optimization" (RDO) or "Reliability Based Design Optimization" (RBDO).
Example 1:
Consider a linear programming problem in two decision variables x and y, in which a given linear objective in x and y is maximized subject to the constraint cx + dy ≤ 10, ∀(c,d)∈P, where P is a given subset of R2. What makes this a 'robust optimization' problem is the ∀(c,d)∈P clause in the constraint. Its implication is that for a pair (x,y) to be admissible, the constraint cx + dy ≤ 10 must be satisfied by the worst (c,d)∈P pertaining to (x,y), namely the pair (c,d)∈P that maximizes the value of cx + dy for the given value of (x,y). If the parameter space P is finite (consisting of finitely many elements), then this robust optimization problem itself is a linear programming problem: for each (c,d)∈P there is a linear constraint cx + dy ≤ 10. If P is not a finite set, then this problem is a linear semi-infinite programming problem, namely a linear programming problem with finitely many (2) decision variables and infinitely many constraints.
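When P is finite, the robust problem is just an ordinary LP with one copy of the constraint per element of P. The sketch below illustrates this with SciPy; the objective (3x + 2y) and the three scenarios in P are illustrative assumptions, not part of the original example:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed finite uncertainty set P and an assumed linear objective 3x + 2y.
P = [(1.0, 1.0), (1.5, 0.5), (0.5, 2.0)]

c_obj = [-3.0, -2.0]                 # linprog minimizes, so negate the objective
A_ub = np.array(P)                   # one constraint c*x + d*y <= 10 per scenario
b_ub = np.full(len(P), 10.0)

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("robust optimum (x, y):", res.x, "objective value:", -res.fun)
```

With an infinite P the same formulation becomes the semi-infinite program mentioned above.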
Classification:
There are a number of classification criteria for robust optimization problems/models. In particular, one can distinguish between problems dealing with local and global models of robustness; and between probabilistic and non-probabilistic models of robustness. Modern robust optimization deals primarily with non-probabilistic models of robustness that are worst case oriented and as such usually deploy Wald's maximin models.
Classification:
Local robustness There are cases where robustness is sought against small perturbations in a nominal value of a parameter. A very popular model of local robustness is the radius of stability model: ρ(x) := max ρ≥0 {ρ : u∈S(x), ∀u∈B(ρ,û)}, where û denotes the nominal value of the parameter, B(ρ,û) denotes a ball of radius ρ centered at û and S(x) denotes the set of values of u that satisfy given stability/performance conditions associated with decision x. In words, the robustness (radius of stability) of decision x is the radius of the largest ball centered at û all of whose elements satisfy the stability requirements imposed on x. (In the accompanying figure, a rectangle U(x) represents the set of all the values u associated with decision x.) Global robustness Consider the simple abstract robust optimization problem max x∈X {f(x) : g(x,u)≤b, ∀u∈U}, where U denotes the set of all possible values of u under consideration.
Classification:
This is a global robust optimization problem in the sense that the robustness constraint g(x,u)≤b,∀u∈U represents all the possible values of u The difficulty is that such a "global" constraint can be too demanding in that there is no x∈X that satisfies this constraint. But even if such an x∈X exists, the constraint can be too "conservative" in that it yields a solution x∈X that generates a very small payoff f(x) that is not representative of the performance of other decisions in X . For instance, there could be an x′∈X that only slightly violates the robustness constraint but yields a very large payoff f(x′) . In such cases it might be necessary to relax a bit the robustness constraint and/or modify the statement of the problem.
Classification:
Example 2 Consider the case where the objective is to satisfy a constraint g(x,u) ≤ b, where x∈X denotes the decision variable and u is a parameter whose set of possible values is U. If there is no x∈X such that g(x,u) ≤ b, ∀u∈U, then the following intuitive measure of robustness suggests itself: ρ(x) := max_{Y⊆U} {size(Y) : g(x,u) ≤ b, ∀u∈Y}, x∈X, where size(Y) denotes an appropriate measure of the "size" of set Y. For example, if U is a finite set, then size(Y) could be defined as the cardinality of set Y. In words, the robustness of decision x is the size of the largest subset of U for which the constraint g(x,u) ≤ b is satisfied for each u in this set. An optimal decision is then a decision whose robustness is the largest.
Classification:
This yields the following robust optimization problem: max_{x∈X, Y⊆U} {size(Y) : g(x,u) ≤ b, ∀u∈Y}. This intuitive notion of global robustness is not used often in practice because the robust optimization problems that it induces are usually (not always) very difficult to solve.
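As a rough illustration of the robustness measure in Example 2, assuming a finite uncertainty set U and size(Y) taken as cardinality (the constraint function and numbers below are hypothetical): the largest feasible subset of U is simply the set of scenarios for which g(x,u) ≤ b holds, so its size can be computed by counting.

```python
# A minimal sketch (assuming a finite uncertainty set U and size(Y) = |Y|):
# the robustness of a decision x is the number of scenarios u in U for which
# g(x, u) <= b holds, and the robust choice maximizes that count.
def robustness(x, U, g, b):
    """Size of the largest subset of U on which the constraint is satisfied.

    With size(Y) = cardinality, the largest feasible subset is exactly
    {u in U : g(x, u) <= b}, so counting feasible scenarios suffices.
    """
    return sum(1 for u in U if g(x, u) <= b)

# Hypothetical example: g(x, u) = u * x, b = 10.
U = [1.0, 2.0, 3.0, 5.0]
candidates = [1.0, 2.0, 4.0]
best = max(candidates, key=lambda x: robustness(x, U, lambda x, u: u * x, 10.0))
print(best, robustness(best, U, lambda x, u: u * x, 10.0))
```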
Example 3 Consider the robust optimization problem max_{x∈X} {f(x) : g(x,u) ≤ b, ∀u∈U}, where g is a real-valued function on X×U, and assume that there is no feasible solution to this problem because the robustness constraint g(x,u) ≤ b, ∀u∈U is too demanding.
Classification:
To overcome this difficulty, let N be a relatively small subset of U representing "normal" values of u, and consider the following robust optimization problem: max_{x∈X} {f(x) : g(x,u) ≤ b, ∀u∈N}. Since N is much smaller than U, its optimal solution may not perform well on a large portion of U and therefore may not be robust against the variability of u over U. One way to fix this difficulty is to relax the constraint g(x,u) ≤ b for values of u outside the set N in a controlled manner, so that larger violations are allowed as the distance of u from N increases. For instance, consider the relaxed robustness constraint g(x,u) ≤ b + β⋅dist(u,N), ∀u∈U, where β ≥ 0 is a control parameter and dist(u,N) denotes the distance of u from N. Thus, for β = 0 the relaxed robustness constraint reduces back to the original robustness constraint.
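A small sketch of evaluating the relaxed robustness constraint above, assuming a finite set U of scalar parameter values and dist(u,N) taken as the distance to the nearest point of N; all numbers below are hypothetical.

```python
# A minimal sketch (assuming U and N are finite sets of real numbers and
# dist(u, N) = min |u - v| over v in N): the allowed violation of g(x, u) <= b
# grows linearly with the distance of u from the "normal" set N.
def dist(u, N):
    return min(abs(u - v) for v in N)

def satisfies_relaxed(x, U, N, g, b, beta):
    """Check the relaxed constraint g(x,u) <= b + beta*dist(u,N) for all u in U."""
    return all(g(x, u) <= b + beta * dist(u, N) for u in U)

# Hypothetical example: g(x, u) = u * x.
N = [1.0, 2.0]
U = [1.0, 2.0, 4.0, 6.0]
print(satisfies_relaxed(2.0, U, N, lambda x, u: u * x, b=5.0, beta=2.0))
```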
Classification:
This yields the following (relaxed) robust optimization problem: max_{x∈X} {f(x) : g(x,u) ≤ b + β⋅dist(u,N), ∀u∈U}. The function dist is defined in such a manner that dist(u,N) ≥ 0, ∀u∈U and dist(u,N) = 0, ∀u∈N, and therefore the optimal solution to the relaxed problem satisfies the original constraint g(x,u) ≤ b for all values of u in N. It also satisfies the relaxed constraint g(x,u) ≤ b + β⋅dist(u,N) outside N.

Non-probabilistic robust optimization models The dominating paradigm in this area of robust optimization is Wald's maximin model, namely max_{x∈X} min_{u∈U(x)} f(x,u), where the max represents the decision maker, the min represents Nature, namely uncertainty, X represents the decision space and U(x) denotes the set of possible values of u associated with decision x. This is the classic format of the generic model, and is often referred to as a minimax or maximin optimization problem. The non-probabilistic (deterministic) model has been and is being extensively used for robust optimization, especially in the field of signal processing. The equivalent mathematical programming (MP) formulation of the classic format above is max_{x∈X, v∈R} {v : v ≤ f(x,u), ∀u∈U(x)}. Constraints can be incorporated explicitly in these models. The generic constrained classic format is max_{x∈X} min_{u∈U(x)} {f(x,u) : g(x,u) ≤ b, ∀u∈U(x)}, and the equivalent constrained MP format is max_{x∈X, v∈R} {v : v ≤ f(x,u), g(x,u) ≤ b, ∀u∈U(x)}.

Probabilistically robust optimization models These models quantify the uncertainty in the "true" value of the parameter of interest by probability distribution functions. They have been traditionally classified as stochastic programming and stochastic optimization models. Recently, probabilistically robust optimization has gained popularity through the introduction of rigorous theories such as scenario optimization, which can quantify the robustness level of solutions obtained by randomization. These methods are also relevant to data-driven optimization methods.
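A minimal sketch of the maximin-to-MP (epigraph) reformulation above for a finite uncertainty set U(x) = U and a payoff f(x,u) that is linear in x for each scenario; the scenarios, the budget constraint and the use of scipy.optimize.linprog are illustrative assumptions.

```python
# Variables are (x1, x2, v); maximize v subject to v <= f(x, u) for every u in U,
# with f(x, u) = u1*x1 + u2*x2 and a hypothetical budget constraint x1 + x2 = 1, x >= 0.
from scipy.optimize import linprog

U = [(1.0, 0.2), (0.4, 1.1), (0.7, 0.7)]   # hypothetical scenarios u = (u1, u2)

# linprog minimizes, so minimize -v.
c = [0.0, 0.0, -1.0]
# v - u1*x1 - u2*x2 <= 0  for each scenario  =>  row [-u1, -u2, 1]
A_ub = [[-u1, -u2, 1.0] for (u1, u2) in U]
b_ub = [0.0] * len(U)
A_eq = [[1.0, 1.0, 0.0]]
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None), (None, None)])
x1, x2, v = res.x
print("worst-case optimal x:", (x1, x2), "guaranteed payoff v:", v)
```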
Classification:
Robust counterpart The solution method for many robust programs involves creating a deterministic equivalent, called the robust counterpart. The practical difficulty of a robust program depends on whether its robust counterpart is computationally tractable. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Siemens Milltronics Process Instruments**
Siemens Milltronics Process Instruments:
Siemens AG (German pronunciation: [ˈziːməns] or [-mɛns]) is a German multinational technology conglomerate. Its operations encompass automation and digitalization in the process and manufacturing industries, intelligent infrastructure for buildings and distributed energy systems, rail transport solutions, as well as health technology and digital healthcare services. Siemens is the largest industrial manufacturing company in Europe, and holds the position of global market leader in industrial automation and industrial software. The origins of the conglomerate can be traced back to 1847 to the Telegraphen Bau-Anstalt von Siemens & Halske established in Berlin by Werner von Siemens and Johann Georg Halske. In 1966, the present-day corporation emerged from the merger of three companies: Siemens & Halske, Siemens-Schuckert, and Siemens-Reiniger-Werke. Today headquartered in Munich and Berlin, Siemens and its subsidiaries employ approximately 311,000 people worldwide and reported a global revenue of around €72 billion in 2022. The company is a component of the DAX and Euro Stoxx 50 stock market indices. As of 2023, the principal divisions of Siemens are Digital Industries, Smart Infrastructure, Mobility, Healthineers, and Financial Services, with Siemens Healthineers and Siemens Mobility operating as independent entities. Major business divisions that were once part of Siemens before being spun off include semiconductor manufacturer Infineon Technologies (1999), Siemens Mobile (2005), Gigaset Communications (2008), the photonics business Osram (2013), and Siemens Energy (2020).
History:
1847 to 1901 Siemens & Halske was founded by Werner von Siemens and Johann Georg Halske on 1 October 1847. Based on the telegraph, their invention used a needle to point to the sequence of letters, instead of using Morse code. The company, then called Telegraphen-Bauanstalt von Siemens & Halske, opened its first workshop on 12 October.In 1848, the company built the first long-distance telegraph line in Europe: 500 km from Berlin to Frankfurt am Main. In 1850, the founder's younger brother, Carl Wilhelm Siemens, later Sir William Siemens, started to represent the company in London. The London agency became a branch office in 1858. In the 1850s, the company was involved in building long-distance telegraph networks in Russia. In 1855, a company branch headed by another brother, Carl Heinrich von Siemens, opened in St Petersburg, Russia. In 1867, Siemens completed the monumental Indo-European telegraph line stretching over 11,000 km from London to Calcutta.
History:
In 1867, Werner von Siemens described a dynamo without permanent magnets. A similar system was also independently invented by Ányos Jedlik and Charles Wheatstone, but Siemens became the first company to build such devices. In 1881, a Siemens AC Alternator driven by a watermill was used to power the world's first electric street lighting in the town of Godalming, United Kingdom. The company continued to grow and diversified into electric trains and light bulbs. In 1885, Siemens sold one of its generators to George Westinghouse, thereby enabling Westinghouse to begin experimenting with AC networks in Pittsburgh, Pennsylvania.
History:
In 1887, Siemens opened its first office in Japan. In 1890, the founder retired and left the running of the company to his brother Carl and sons Arnold and Wilhelm. In 1892, Siemens was contracted to construct the Hobart electric tramway in Tasmania, Australia, as it increased its markets. The system opened in 1893 and became the first complete electric tram network in the Southern Hemisphere.
History:
1901 to 1933 Siemens & Halske (S & H) was incorporated in 1897 and then merged parts of its activities with Schuckert & Co., Nuremberg, in 1903 to become Siemens-Schuckert. In 1907, Siemens (Siemens & Halske and Siemens-Schuckert) had 34,324 employees and was the seventh-largest company in the German empire by number of employees. (see List of German companies by employees in 1907) In 1919, S & H and two other companies jointly formed the Osram lightbulb company.
History:
During the 1920s and 1930s, S & H started to manufacture radios, television sets, and electron microscopes.In 1932, Reiniger, Gebbert & Schall (Erlangen), Phönix AG (Rudolstadt) and Siemens-Reiniger-Veifa mbH (Berlin) merged to form the Siemens-Reiniger-Werke AG (SRW), the third of the so-called parent companies that merged in 1966 to form the present-day Siemens AG.In the 1920s, Siemens constructed the Ardnacrusha Hydro Power station on the River Shannon in the then Irish Free State, and it was a world first for its design. The company is remembered for its desire to raise the wages of its underpaid workers, only to be overruled by the Cumann na nGaedheal government.
History:
1933 to 1945 Siemens (at the time: Siemens-Schuckert) exploited the forced labour of deported people in extermination camps. The company owned a plant in Auschwitz concentration camp.
History:
Siemens exploited the forced labour of women in the concentration camp of Ravensbrück. The factory was located in front of the camp.During the final years of World War II, numerous plants and factories in Berlin and other major cities were destroyed by Allied air raids. To prevent further losses, manufacturing was therefore moved to alternative places and regions not affected by the air war. The goal was to secure continued production of important war-related and everyday goods. According to records, Siemens was operating almost 400 alternative or relocated manufacturing plants at the end of 1944 and in early 1945.
History:
In 1972, Siemens sued German satirist F.C. Delius for his satirical history of the company, Unsere Siemens-Welt, and it was determined much of the book contained false claims although the trial itself publicized Siemens's history in Nazi Germany. The company supplied electrical parts to Nazi concentration camps and death camps. The factories had poor working conditions, where malnutrition and death were common. Also, the scholarship has shown that the camp factories were created, run, and supplied by the SS, in conjunction with company officials, sometimes high-level officials.
History:
1945 to 2001 In the 1950s, and from their new base in Bavaria, S&H started to manufacture computers, semiconductor devices, washing machines, and pacemakers. In 1966, Siemens & Halske (S&H, founded in 1847), Siemens-Schuckertwerke (SSW, founded in 1903) and Siemens-Reiniger-Werke (SRW, founded in 1932) merged to form Siemens AG. In 1969, Siemens formed Kraftwerk Union with AEG by pooling their nuclear power businesses.
History:
The company's first digital telephone exchange was produced in 1980, and in 1988, Siemens and GEC acquired the UK defence and technology company Plessey. Plessey's holdings were split, and Siemens took over the avionics, radar and traffic control businesses—as Siemens Plessey.
History:
In 1977, Advanced Micro Devices (AMD) entered into a joint venture with Siemens, which wanted to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens's stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors.In 1985, Siemens bought Allis-Chalmers' interest in the partnership company Siemens-Allis (formed 1978) which supplied electrical control equipment. It was incorporated into Siemens's Energy and Automation division.In 1987, Siemens reintegrated Kraftwerk Union, the unit overseeing nuclear power business.In 1989, Siemens bought the solar photovoltaic business, including 3 solar module manufacturing plants, from industry pioneer ARCO Solar, owned by oil firm ARCO.In 1991, Siemens acquired Nixdorf Computer AG and renamed it Siemens Nixdorf Informationssysteme AG, in order to produce personal computers.In October 1991, Siemens acquired the Industrial Systems Division of Texas Instruments, Inc, based in Johnson City, Tennessee. This division was organized as Siemens Industrial Automation, Inc., and was later absorbed by Siemens Energy and Automation, Inc.
History:
In 1992, Siemens bought out IBM's half of ROLM (Siemens had bought into ROLM five years earlier), thus creating SiemensROLM Communications; eventually dropping ROLM from the name later in the 1990s.In 1993–1994, Siemens C651 electric trains for Singapore's Mass Rapid Transit (MRT) system were built in Austria.In 1997, Siemens agreed to sell the defence arm of Siemens Plessey to British Aerospace (BAe) and a German aerospace company, DaimlerChrysler Aerospace. BAe and DASA acquired the British and German divisions of the operation respectively.In October 1997, Siemens Financial Services (SFS) was founded to act as a competence center for financing issues and as a manager of financial risks within Siemens.
History:
In 1998, Siemens acquired Westinghouse Power Generation from the CBS Corporation for more than $1.5 billion, moving Siemens from third to second in the world power generation market. In 1999, Siemens's semiconductor operations were spun off into a new company called Infineon Technologies. Its Electromechanical Components operations were converted into a legally independent company, Siemens Electromechanical Components GmbH & Co. KG, which, later that year, was sold to Tyco International Ltd for approximately $1.1 billion. In the same year, Siemens Nixdorf Informationssysteme AG became part of Fujitsu Siemens Computers AG, with its retail banking technology group becoming Wincor Nixdorf. In 2000, Shared Medical Systems Corporation was acquired by Siemens's Medical Engineering Group, eventually becoming part of Siemens Medical Solutions.
History:
Also in 2000, Atecs Mannesmann was acquired by Siemens. The sale was finalised in April 2001 with 50% of the shares acquired. Following the acquisition, Mannesmann VDO AG was merged into Siemens Automotive, forming Siemens VDO Automotive AG; Atecs Mannesmann Dematic Systems was merged into Siemens Production and Logistics, forming Siemens Dematic AG; and Mannesmann Demag Delaval was merged into the Power Generation division of Siemens AG. Other parts of the company were acquired by Robert Bosch GmbH at the same time. Also, Moore Products Co. of Spring House, PA, USA was acquired by Siemens Energy & Automation, Inc.
History:
2001 to 2005 In 2001, the Chemtech Group of Brazil was incorporated into the Siemens Group; it provides industrial process optimisation, consultancy and other engineering services. Also in 2001, Siemens formed the joint venture Framatome with Areva SA of France by merging much of the companies' nuclear businesses. In 2002, Siemens sold some of its business activities to Kohlberg Kravis Roberts & Co. L.P. (KKR), with its metering business included in the sale package. In 2002, Siemens abandoned the solar photovoltaic industry by selling its participation in a joint-venture company, established in 2001 with Shell and E.ON, to Shell. In 2003, Siemens acquired the flow division of Danfoss and incorporated it into the Automation and Drives division. Also in 2003, Siemens acquired IndX software (realtime data organisation and presentation). The same year, in an unrelated development, Siemens reopened its office in Kabul. Also in 2003, Siemens agreed to buy Alstom Industrial Turbines, a manufacturer of small, medium and industrial gas turbines, for €1.1 billion.
History:
On 11 February 2003, Siemens planned to shorten phones' shelf life by bringing out annual Xelibri lines, with new devices launched as spring -summer and autumn-winter collections. On 6 March 2003, the company opened an office in San Jose. On 7 March 2003, the company announced that it planned to gain 10 per cent of the mainland China market for handsets. On 18 March 2003, the company unveiled the latest in its series of Xelibri fashion phones.In 2004, the wind energy company Bonus Energy in Brande, Denmark was acquired, forming Siemens Wind Power division. Also in 2004, Siemens invested in Dasan Networks (South Korea, broadband network equipment) acquiring ~40% of the shares, Nokia Siemens disinvested itself of the shares in 2008. The same year Siemens acquired Photo-Scan (UK, CCTV systems), US Filter Corporation (water and Waste Water Treatment Technologies/ Solutions, acquired from Veolia), Huntsville Electronics Corporation (automobile electronics, acquired from Chrysler), and Chantry Networks (WLAN equipment).In 2005, Siemens sold the Siemens mobile manufacturing business to BenQ, forming the BenQ-Siemens division. Also in 2005 Siemens acquired Flender Holding GmbH (Bocholt, Germany, gears/industrial drives), Bewator AB (building security systems), Wheelabrator Air Pollution Control, Inc. (Industrial and power station dust control systems), AN Windenergie GmbH. (Wind energy), Power Technologies Inc. (Schenectady, USA, energy industry software and training), CTI Molecular Imaging (Positron emission tomography and molecular imaging systems), Myrio (IPTV systems), Shaw Power Technologies International Ltd (UK/USA, electrical engineering consulting, acquired from Shaw Group), and Transmitton (Ashby de la Zouch UK, rail and other industry control and asset management).
History:
2005 and continuing: worldwide bribery scandal Beginning in 2005, Siemens became embroiled in a multi-national bribery scandal. Among the various incidents was the Siemens Greek bribery scandal, where the company was accused of deals with Greek government officials during the 2004 Summer Olympics. This case, along with others, triggered legal investigations in Germany, initiated by prosecutors in Italy, Liechtenstein, and Switzerland, and later followed by an American investigation in 2006 due to the company's activities while listed on US stock exchanges.Investigations found that Siemens had a pattern of bribing officials to secure contracts, with the company spending approximately $1.3 billion on bribes across several countries, and maintaining separate accounting records to conceal this. Following the investigations, Siemens settled in December 2008, paying a combined total of approximately $1.6 billion to the US and Germany in what was, at the time, the largest bribery fine in history. In addition, the company was required to invest $1 billion in developing and maintaining new internal compliance procedures. Siemens admitted to violating the accounting provisions of the Foreign Corrupt Practices Act, while its Bangladesh and Venezuela subsidiaries pleaded guilty to paying bribes.Despite initial expectations of a fine as high as $5 billion, the final amount was significantly less, in part due to Siemens's cooperation with the investigators, the upcoming change in the US administration, and Siemens's role as a US military contractor. The payments included $450 million in fines and penalties and a forfeiture of $350 million in profits in the US. Siemens also revamped its compliance systems, appointing Peter Y. Solmssen, a US lawyer, as an independent director in charge of compliance and accepting oversight from Theo Waigel, a former German finance minister. Siemens implemented new anti-corruption policies, including a comprehensive anti-corruption handbook, online tools for due diligence and compliance, a confidential communications channel for employees, and a corporate disciplinary committee. This process involved hiring approximately 500 full-time compliance personnel worldwide.Siemens's bribery culture was not new; it was highlighted as far back as 1914 when both Siemens and Vickers were involved in a scandal over bribes paid to Japanese naval authorities. The company resorted to bribery as it sought to expand its business in the developing world after World War II. Up until 1999, bribes were a tax-deductible business expense in Germany, with no penalties for bribing foreign officials. However, with the implementation of the 1999 OECD Anti-Bribery Convention, Siemens started using off-shore accounts to hide its bribery.
History:
During the investigation, key player Reinhard Siekaczek, a mid-level executive in the telecommunications unit, provided critical evidence. He disclosed that he had managed an annual global bribery budget of $40 to $50 million and provided information about the company's 2,700 worldwide contractors, who were typically used to channel money to government officials. Notable instances of bribery included substantial payments in Argentina, Israel, Venezuela, China, Nigeria, and Russia to secure large contracts.The investigation resulted in multiple prosecutions and settlements with various governments, as well as legal action against Siemens employees and those who received bribes. Noteworthy cases include the conviction of two former executives in 2007 for bribing Italian energy company Enel, a settlement with the Greek government in 2012 for 330 million euros over the Greek bribery scandal, and a guilty plea in 2014 from former Siemens executive Andres Truppel for channeling nearly $100 million in bribes to Argentine government officials. Siemens also faced repercussions from the World Bank due to fraudulent practices by its Russian affiliate. In 2009, Siemens agreed not to bid on World Bank projects for two years and to establish a $100 million fund at the World Bank to support anti-corruption activities over 15 years, known as the "Siemens Integrity Initiative." Other substantial fines include a payment of ₦7 billion (US$46.57 million) to the Nigerian government in 2010, and a US$42.7 million penalty in Israel in 2014 to avoid charges of securities fraud.
History:
2006 to 2011 In 2006, Siemens purchased Bayer Diagnostics which was incorporated into the Medical Solutions Diagnostics division on 1 January 2007, also in 2006 Siemens acquired Controlotron (New York) (ultrasonic flow meters), and also in 2006 Siemens acquired Diagnostic Products Corp., Kadon Electro Mechanical Services Ltd. (now TurboCare Canada Ltd.), Kühnle, Kopp, & Kausch AG, Opto Control, and VistaScape Security Systems.In January 2007, Siemens was fined €396 million by the European Commission for price fixing in EU electricity markets through a cartel involving 11 companies, including ABB, Alstom, Fuji Electric, Hitachi Japan, AE Power Systems, Mitsubishi Electric Corp, Schneider, Areva, Toshiba and VA Tech. According to the commission, "between 1988 and 2004, the companies rigged bids for procurement contracts, fixed prices, allocated projects to each other, shared markets and exchanged commercially important and confidential information." Siemens was given the highest fine of €396 million, more than half of the total, for its alleged leadership role in the activity.
History:
In March 2007, a Siemens board member was temporarily arrested and accused of illegally financing AUB, a business-friendly labour association which competes against the trade union IG Metall. He was released on bail. Offices of AUB and Siemens were searched. Siemens denied any wrongdoing. In April 2007, the Fixed Networks, Mobile Networks and Carrier Services divisions of Siemens merged with Nokia's Network Business Group in a 50/50 joint venture, creating a fixed and mobile network company called Nokia Siemens Networks. Nokia had delayed the merger due to bribery investigations against Siemens. In October 2007, a court in Munich found that the company had bribed public officials in Libya, Russia, and Nigeria in return for the awarding of contracts; four former Nigerian Ministers of Communications were among those named as recipients of the payments. The company admitted to having paid the bribes and agreed to pay a fine of 201 million euros. In December 2007, the Nigerian government cancelled a contract with Siemens due to the bribery findings. Also in 2007, Siemens acquired Vai Ingdesi Automation (Argentina, Industrial Automation), UGS Corp., Dade Behring, Sidelco (Quebec, Canada), S/D Engineers Inc., and Gesellschaft für Systemforschung und Dienstleistungen im Gesundheitswesen mbH (GSD) (Germany). In July 2008, Siemens AG formed a joint venture of its Enterprise Communications business with the Gores Group, renamed Unify in 2013; the Gores Group held a majority interest of 51%, with Siemens AG holding a minority interest of 49%. In August 2008, Siemens Project Ventures invested $15 million in the Arava Power Company. In a press release published that month, Peter Löscher, President and CEO of Siemens AG, said: "This investment is another consequential step in further strengthening our green and sustainable technologies". Siemens now holds a 40% stake in the company. In January 2009, Siemens sold its 34% stake in Framatome, citing limited managerial influence. In March, it formed an alliance with Rosatom of Russia to engage in nuclear-power activities. In April 2009, Fujitsu Siemens Computers became Fujitsu Technology Solutions as a result of Fujitsu buying out Siemens's share of the company.
History:
In June 2009 news broke that Nokia Siemens had supplied telecommunications equipment to the Iranian telecom company that included the ability to intercept and monitor telecommunications, a facility known as "lawful intercept". The equipment was believed to have been used in the suppression of the 2009 Iranian election protests, leading to criticism of the company, including by the European Parliament. Nokia Siemens later divested its call monitoring business, and reduced its activities in Iran.In October 2009, Siemens signed a $418 million contract to buy Solel Solar Systems, an Israeli company in the solar thermal power business.In December 2010, Siemens agreed to sell its IT Solutions and Services subsidiary for €850 million to Atos. As part of the deal, Siemens agreed to take a 15% stake in the enlarged Atos, to be held for a minimum of five years. In addition, Siemens concluded a seven-year outsourcing contract worth around €5.5 billion, under which Atos will provide managed services and systems integration to Siemens. At the same time, Germany’s Wegmann Group acquired Siemens's 49-percent stake in armored vehicle manufacturer Krauss-Maffei Wegmann GmbH, establishing Wegmann as the sole shareholder of KMW, pending approval by government authorities.
History:
2011 to present In March 2011, it was decided to list Osram on the stock market in the autumn, but CEO Peter Löscher said Siemens intended to retain a long-term interest in the company, which was already independent from the technological and managerial viewpoints.
History:
In September 2011, Siemens, which had been responsible for constructing all 17 of Germany's existing nuclear power plants, announced that it would exit the nuclear sector following the Fukushima disaster and the subsequent changes to German energy policy. Chief executive Peter Löscher has supported the German government's planned Energiewende, its transition to renewable energy technologies, calling it a "project of the century" and saying Berlin's target of reaching 35% renewable energy sources by 2020 was feasible.In November 2012, Siemens acquired the Rail division of Invensys for £1.7 billion. In the same month, Siemens acquired a privately held company, LMS International NV.In August 2013, Nokia acquired 100% of the company Nokia Siemens Networks, with a buy-out of Siemens AG, ending Siemens role in telecommunication.In August 2013, Siemens won a $966.8 million order for power plant components from oil firm Saudi Aramco, the largest bid it has ever received from the Saudi company.In 2014, Siemens announced plans to build a $264 million facility for making offshore wind turbines in Paull, England, as Britain's wind power rapidly expands. Siemens chose the Hull area on the east coast of England because it is close to other large offshore projects planned in coming years. The new plant is expected to begin producing turbine rotor blades in 2016. The plant and the associated service center, in Green Port Hull nearby, will employ about 1,000 workers. The facilities will serve the UK market, where the electricity that major power producers generate from wind grew by about 38 percent in 2013, representing about 6 percent of total electricity, according to government figures. There are also plans to increase Britain's wind-generating capacity at least threefold by 2020, to 14 gigawatts.In May 2014, Rolls-Royce agreed to sell its gas turbine and compressor energy business to Siemens for £1 billion.In June 2014, Siemens and Mitsubishi Heavy Industries announced their formation of joint ventures to bid for Alstom's troubled energy and transportation businesses (in locomotives, steam turbines, and aircraft engines). A rival bid by General Electric (GE) has been criticized by French government sources, who consider Alstom's operations as a "vital national interest" at a moment when the French unemployment level stands above 10% and some voters are turning towards the far-right.In 2015, Siemens acquired U.S. oilfield equipment maker Dresser-Rand Group Inc for $7.6 billion.In November 2016, Siemens acquired EDA company Mentor Graphics for $4.5 billion.In November 2017, the U.S. Department of Justice charged three Chinese employees of Guangzhou Bo Yu Information Technology Company Limited with hacking into corporate entities, including Siemens AG.In December 2017, Siemens acquired the medical technology company Fast Track Diagnostics for an undisclosed amount.In August 2018, Siemens acquired rapid application development company Mendix for €0.6 billion in cash.In May 2018, Siemens acquired J2 Innovations for an undisclosed amount.In May 2018, Siemens acquired Enlighted, Inc. 
for an undisclosed amount.In September 2019, Siemens and Orascom Construction signed an agreement with the Iraqi government to rebuild two power plants, which is believed to setup the company for future deals in the country.In 2019–2020, Siemens was identified as a key engineering company supporting the controversial Adani Carmichael coal mine in Queensland (Australia).In January 2020, Siemens signed an agreement to acquire 99% equity share capital of Indian switchgear manufacturer C&S Electric at €267 million (₹2,100 crore). The takeover was approved by the Competition Commission of India in August 2020.In April 2020, Siemens acquired a 77% majority stake in Indian building solution provider iMetrex Technologies for an undisclosed sum.In April 2020, Siemens Energy was created as an independent company out of the energy division of Siemens. The trading of shares of the new Siemens Energy AG on the stock exchange is expected to be possible from 28 September onwards.In August 2020, Siemens Healthineers AG announced that it plans to acquire U.S. cancer device and software company Varian Medical Systems in an all-stock deal valued at $16.4 billion.In February 2021, Roland Busch replaced Joe Kaeser as CEO.In October 2021, Siemens acquired the building IoT software and hardware company Wattsense for an undisclosed sum.In May 2022, Siemens made the decision to cease its operations in Russia after 170 years and disassociate itself from any involvement with the Russian government due to the ongoing war of aggression against Ukraine. This decision affected the approximately 3,000 employees working for the company in the country. The announcement came with a financial statement in which Siemens disclosed a second-quarter loss of approximately US$625 million as a direct consequence of the imposed sanctions on Russia.In July 2022, Siemens acquired ZONA Technology, an aerospace simulation firm.In October 2022, Siemens announced a strategic partnership with Swedish electric commercial vehicle manufacturer Volta Trucks to deliver and scale eMobility charging infrastructure to simplify the transition to fleet electrification.In June 2023, Siemens announced a global investment plan of €2 billion to expand its manufacturing capacity, including specific commitments of €200 million for a new high-tech plant in Singapore and €140 million to enlarge a facility in Chengdu, China. The strategy aims to foster diversification across Asia, enhance growth in the Chinese market, and decrease dependency on a single country by utilizing Singapore as a primary export hub to Southeast Asia. Simultaneously, Siemens will allocate €1 billion for the development of new facilities and factories in Germany, including €500 million for the expansion and modernization of a factory in Erlangen, expected to enhance production capacity by 60% by 2029. This coincides with the German government's concerns about the economic and security risks associated with investing in China. Additional German investments will finance a new semiconductor factory in Forchheim and a training center for Siemens Healthineers in Erlangen.
Operations:
As of 2023, the principal divisions of Siemens are Digital Industries, Smart Infrastructure, Siemens Mobility, Siemens Healthineers and Siemens Financial Services, with Siemens Healthineers and Siemens Mobility operating as independent entities. Siemens also operates a number of "Portfolio Companies" with market-specific offerings. In 2020, the energy business was spun off into the separate Siemens Energy AG, with Siemens retaining a stake of 25% as of June 2023. Other business units of the company include Siemens Technology (T) for research and development, Siemens Real Estate (SRE) for corporate real estate management, Siemens Advanta for consulting services (including the management consulting division Siemens Advanta Consulting), next47 as a venture capital fund, and Siemens Global Business Services (GBS) as a shared services unit.
Operations:
Digital Industries The Digital Industries division focuses on the automation needs of discrete and process industries. This includes factory automation infrastructure, numerical control systems, engines, drives, inverters, integrated automation systems for machine tools and production machines, and machine to machine communication products. The division also develops industrial control systems, various types of sensors, and radio-frequency identification systems. In industrial automation and industrial software, Siemens is the global market leader.In addition to hardware, Digital Industries supplies software for product lifecycle management (PLM), simulation and testing of mechatronic systems, and the MindSphere cloud-based IoT operating system that connects physical infrastructure to the digital world. The software portfolio is supplemented by the Mendix platform for low-code application development and digital marketplaces like Supplyframe and Pixeom. Key customer markets span automotive, machine building, pharmaceuticals, chemicals, food and beverage, electronics, and semiconductors.In 2023, CEO Roland Busch announced the aim to raise software businesses sales share to 20% in the long term. In June 2023, Siemens launched a new open digital platform called "Siemens Xcelerator", which houses a curated portfolio of IoT-enabled hardware, software, and digital services from both Siemens and third parties. Siemens also announced a partnership with Nvidia, aiming to leverage its Omniverse platform with its 3D design capabilities. Xcelerator is part of a broader industry trend towards digital environments ("metaverses"), and is delivered through a software as a service (SaaS) subscription model, targeting accessibility for a range of businesses including small and medium-sized enterprises.
Operations:
Smart Infrastructure Siemens Smart Infrastructure offerings are categorized into buildings, electrification, and electrical products. Its buildings portfolio includes building automation systems, heating, ventilation, and air conditioning (HVAC) controls, and fire safety and security systems, and energy performance services. The electrification portfolio is dedicated to grid resilience and efficiency, encompassing grid simulation, operation control software, power-system automation and protection, and medium to low voltage switchgear. Moreover, it includes charging infrastructure for electric vehicles. In the realm of electrical products, the division offers low-voltage switching, measuring and control equipment, distribution systems, and medium voltage switchgear.In the renewable energy industry, the company provides a portfolio of products and services to help build and operate microgrids of any size. It provides generation and distribution of electrical energy as well as monitoring and controlling of microgrids. By using primarily renewable energy, microgrids reduce carbon-dioxide emissions, which is often required by government regulations. It supplied a sustainable storage product and microgrids to Enel Produzione SPA for the island of Ventotene in Italy.
Operations:
Siemens Mobility Siemens Mobility is a division involved in passenger and freight transportation. This includes providing rolling stock, which covers a range of vehicles for urban, regional, and long-distance travel. The division also offers rail infrastructure products and services such as rail automation, digital station solutions, railway communication systems, and yard and depot solutions.In 2019, the European Commission blocked a merger between Alstom and Siemens Mobility, citing anti-trust regulations. The plan would have seen the creation of a "European champion" to compete with China's CRRC.
Operations:
Siemens Healthineers Siemens Healthineers AG is a publicly listed company that was spun off from Siemens in 2017. As of 2022, Siemens retains a 75% majority stake in Siemens Healthineers.As a global provider of healthcare solutions and services, its range of offerings includes the manufacture and sale of diagnostic and therapeutic products, clinical consulting, and a variety of training services. Its operations are divided into four main sectors: imaging, diagnostics, Varian Medical Systems, and advanced therapies. Imaging includes magnetic resonance, computed tomography, X-ray, molecular imaging, and ultrasound devices. The diagnostics segment offers in-vitro diagnostic products for laboratory and point-of-care settings. Varian, an American company acquired by Siemens Healthineers in 2021, covers technologies related to cancer care, and advanced therapies focus on image-guided minimally invasive procedures.
Operations:
Siemens Financial Services Siemens Financial Services (SFS) is a division that delivers a range of financing solutions. These services target both Siemens's customers and external companies, including debt and equity investments. It provides leasing, lending, working capital, structured financing, and equipment and project financing solutions. SFS is also involved in providing financial advisory services and risk management expertise to Siemens's industrial businesses, helping assess risk profiles of projects and business models.
Operations:
Former operations Siemens is known for actively refining its core business through strategic divestitures, pursuing a strategy referred to as "Corporate Clarity" that focuses on selling non-core aspects of the business. Major business divisions that were once part of Siemens before being spun off include: Infineon Technologies (1999) Siemens Mobile (2005) Gigaset Communications (2008) Osram (2013) Siemens Energy (2020) Joint ventures Siemens's current joint ventures include: Siemens Traction Equipment Ltd. (STEZ), Zhuzhou China, is a joint venture between Siemens, Zhuzhou CSR Times Electric Co., Ltd. (TEC) and CSR Zhuzhou Electric Locomotive Co., Ltd. (ZELC), which produces AC drive electric locomotives and AC locomotive traction components.
Operations:
OMNETRIC Group, a Siemens & Accenture company formed in 2014. Former joint ventures in which Siemens no longer holds any equity include: Fujitsu Siemens Computers (sold to Fujitsu in 2009), Nokia Siemens Networks (sold to Nokia in 2013), BSH Hausgeräte (sold to Bosch in 2014) and Primetals Technologies (sold to Mitsubishi Heavy Industries in 2019).
Operations:
Silcar was a joint venture between Siemens Ltd and Thiess Services Pty Ltd until 2013. Silcar was a 3,000-person Australian organisation providing productivity and reliability services for large-scale and technically complex plant assets. Its services included asset management, design, construction, operations and maintenance. Silcar operated across a range of industries and essential services including power generation, electrical distribution, manufacturing, mining and telecommunications. In July 2013, Thiess took full control.
Corporate affairs:
Siemens is incorporated in Germany and has its corporate headquarters at the Wittelsbacherplatz in central Munich.
Locations As of 2011, Siemens has operations in around 190 countries and approximately 285 production and manufacturing facilities.
Research and development In 2022, Siemens invested a total of €5.6 billion in research and development, equivalent to 7.8% of revenues. As of 30 September 2022, Siemens had approximately 46,900 employees engaged in research and development and held approximately 43,600 patents worldwide.
Corporate affairs:
Leadership Chairmen of the Siemens-Schuckertwerke Managing Board (1903 to 1966) Alfred Berliner (1903 to 1912) Carl Friedrich von Siemens (1912 to 1919) Otto Heinrich (1919 to 1920) Carl Köttgen (1920 to 1939) Rudolf Bingel (1939 to 1945) Wolf-Dietrich von Witzleben (1945 to 1949) Günther Scharowsky (1949 to 1951) Friedrich Bauer (1951 to 1962) Bernhard Plettner (1962 to 1966)Chairmen of the Siemens & Halske / Siemens-Schuckertwerke Supervisory Board (1918 to 1966) Wilhelm von Siemens (1918 to 1919) Carl Friedrich von Siemens (1919 to 1941) Hermann von Siemens (1941 to 1946) Friedrich Carl Siemens (1946 to 1948) Hermann von Siemens (1948 to 1956) Ernst von Siemens (1956 to 1966)Chairmen of the Siemens AG Managing Board (1966 to present) Hans Kerschbaum, Adolf Lohse, Bernhard Plettner (Presidency of the Managing Board) (1966 to 1967) Erwin Hachmann, Bernhard Plettner, Gerd Tacke (Presidency of the Managing Board) (1967 to 1968) Gerd Tacke (1968 to 1971) Bernhard Plettner (1971 to 1981) Karlheinz Kaske (1981 to 1992) Heinrich von Pierer (1992 to 2005) Klaus Kleinfeld (2005 to 2007) Peter Löscher (2007 to 2013) Joe Kaeser (2013 to 2021) Roland Busch (2021 to present)Chairmen of the Siemens AG Supervisory Board (1966 to present) Ernst von Siemens (1966 to 1971) Peter von Siemens (1971 to 1981) Bernhard Plettner (1981 to 1988) Heribald Närger (1988 to 1993) Hermann Franz (1993 to 1998) Karl-Hermann Baumann (1998 to 2005) Heinrich von Pierer (2005 to 2007) Gerhard Cromme (2007 to 2018) Jim Hagemann Snabe (2018 to present)Managing Board (present day) Roland Busch (CEO Siemens AG) Klaus Helmrich Cedrik Neike (CEO Digital Industries) Matthias Rebellius (CEO Smart Infrastructure) Ralf P. Thomas (CFO) Judith Wiese
Financials:
For the fiscal year 2022, Siemens reported a revenue of EUR 71.977 billion, an increase of 15.6% over the previous fiscal cycle. Siemens's shares traded at over US$78 per share, and its market capitalization was valued at US$123 billion in July 2023.
* In 2020, Siemens Energy became an independent company.
Financials:
Shareholders The company has issued 881,000,000 shares of common stock. The largest single shareholder continues to be the founding shareholder, the Siemens family, with a stake of 6.9%. 62% of the shares are held by institutional asset managers, the largest being two divisions of the world's largest asset manager, BlackRock. 83.97% of the shares are considered public float, although this figure includes such strategic investors as the State of Qatar (DIC Company Ltd.) with 3.04%, the Government Pension Fund of Norway with 2.5% and Siemens AG itself with 3.04%. 19% are held by private investors and 13% by investors that are considered unidentifiable. 26% are owned by German investors and 21% by US investors, followed by the UK (11%), France (8%), Switzerland (8%) and a number of others (26%).
**DES Action USA**
DES Action USA:
DES Action USA is a national consumer advocacy group, whose mission is to educate and raise awareness about diethylstilbestrol (DES) in the public and medical and legal professions, support the DES-exposed population, and advocate for consumer vigilance and rights.
History of DES:
DES is a synthetic estrogen, which was, from about 1940, prescribed to women at risk of miscarriage and, later, to promote healthy babies. Medical research has now shown that DES has adverse effects and that the approximately 10 million DES-exposed people are at risk for cancer, infertility, pregnancy complications, and other health problems.The drug came onto the United States market in 1940 from several manufacturers, since the drug was both unpatented and cheap to produce. Despite research demonstrating possible carcinogenic effects (documented in the 1930s) and its inefficacy for preventing miscarriage in 1953, pharmaceutical companies publicized earlier and more favorable studies in their marketing and promotion. In 1971, medical researchers identified several clusters of rare vaginal cancers in some of the children exposed to DES in utero, now teenagers and young adults. The news was alarming to many of the women who had taken the drug during their pregnancies.
Early DES advocacy:
As the women's health movement was developing during the same years, concerned women around the country used the model of local organizing and outreach to find other DES-exposed women and their children (known as DES mothers, sons, and daughters).
Early DES advocacy:
One of the earliest gatherings was of the DES Information Group, founded by Pat Cody (1923-2010) in 1974, working with the Berkeley Free Clinic in California. Cody's group soon joined with the DES Action Committee of the San Francisco-based Committee for the Medical Rights of Women, and the two groups merged into a Bay-area organization. Similar grassroots organizations were being started in Connecticut, Illinois, Massachusetts, New Jersey, New York, and Pennsylvania.In February 1978, U.S. Secretary of the Department of Health, Education, and Welfare (HEW), Joseph Califano, convened the National DES Task Force. It was charged with making recommendations for research and continuing treatment for persons exposed to DES. The task force issued a physician advisory in November 1978 recommending doctors review their records and notify patients who were prescribed DES.
Founding:
DES Action National (later renamed DES Action USA) formed in 1977 with a meeting of these grassroots groups. The organization incorporated as a non-profit in 1979.Much of DES Action USA's early work was at local and state levels. Public health grants and legislative efforts supported educational outreach and medical care in California and New York. DES Action USA also successfully lobbied that a Federal Task Force be established. During the 1980s, work at state and national levels resulted in several official National DES Awareness Weeks (1983-1985), continued educational work and outreach efforts, and support for liability litigation cases against manufacturers.
Advocacy and legislation:
Aware of the need for national funding for DES research and education, director Nora Cody (daughter of Pat Cody) worked closely with Representative Louise M. Slaughter (D-NY) to draft and support a research bill during the early 1990s. The bill was signed into law in 1992, with a second bill passed in 1998 to support research on the third generation of DES-affected people (the grandchildren of DES mothers). Through the early 2000s, their most high-profile work involved working with the Centers for Disease Control to develop and distribute consumer and health professional medical education materials to communicate the latest research about continuing effects of DES exposure.In 2014, DES Action USA, facing the aging of its leadership, chose to merge with another medical nonprofit, the MedShadow Foundation, which does similar educational and outreach work but focuses on the risks and long-term adverse effects of many prescription drugs. The decision allowed the DES Action USA Board to step down from administration, while allowing DES Action USA to continue its work and mission. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hamiltonian optics**
Hamiltonian optics:
Hamiltonian optics and Lagrangian optics are two formulations of geometrical optics which share much of the mathematical formalism with Hamiltonian mechanics and Lagrangian mechanics.
Hamilton's principle:
In physics, Hamilton's principle states that the evolution of a system (q1(σ),…,qN(σ)) described by N generalized coordinates between two specified states at two specified parameters σA and σB is a stationary point (a point where the variation is zero) of the action functional S = ∫_{σA}^{σB} L(q1,…,qN, q̇1,…,q̇N, σ) dσ, or δS = 0, where q̇k = dqk/dσ and L is the Lagrangian. Condition δS = 0 is valid if and only if the Euler-Lagrange equations are satisfied, i.e., d/dσ (∂L/∂q̇k) − ∂L/∂qk = 0 with k = 1,…,N. The momentum is defined as pk = ∂L/∂q̇k, and the Euler–Lagrange equations can then be rewritten as ṗk = ∂L/∂qk, where ṗk = dpk/dσ. A different approach to solving this problem consists in defining a Hamiltonian (taking a Legendre transform of the Lagrangian) as H = Σk q̇k pk − L, for which a new set of differential equations can be derived by looking at how the total differential of the Lagrangian depends on parameter σ, positions qk and their derivatives q̇k relative to σ. This derivation is the same as in Hamiltonian mechanics, only with time t now replaced by a general parameter σ. Those differential equations are Hamilton's equations: q̇k = ∂H/∂pk and ṗk = −∂H/∂qk, with k = 1,…,N. Hamilton's equations are first-order differential equations, while the Euler-Lagrange equations are second-order.
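As an illustration of the first-order character of Hamilton's equations (not taken from the text; the harmonic-oscillator Lagrangian below is a standard toy example with σ playing the role of the parameter):

```python
# A minimal sketch: Hamilton's equations dq/dσ = ∂H/∂p, dp/dσ = -∂H/∂q integrated
# for a harmonic oscillator with L = q̇²/2 - q²/2, hence p = q̇ and H = p²/2 + q²/2.
import numpy as np
from scipy.integrate import solve_ivp

def hamilton_rhs(sigma, y):
    q, p = y
    dH_dp = p      # ∂H/∂p
    dH_dq = q      # ∂H/∂q
    return [dH_dp, -dH_dq]

sol = solve_ivp(hamilton_rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
# The trajectory q(σ) ≈ cos(σ) satisfies the second-order Euler-Lagrange
# equation q̈ + q = 0, recovered here from two first-order equations.
print(sol.y[0][-1], np.cos(10.0))
```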
Lagrangian optics:
The general results presented above for Hamilton's principle can be applied to optics. In 3D Euclidean space the generalized coordinates are now the coordinates of Euclidean space.
Lagrangian optics:
Fermat's principle Fermat's principle states that the optical length of the path followed by light between two fixed points, A and B, is a stationary point. It may be a maximum, a minimum, constant or an inflection point. In general, as light travels, it moves in a medium of variable refractive index which is a scalar field of position in space, that is, n=n(x1,x2,x3) in 3D euclidean space. Assuming now that light travels along the x3 axis, the path of a light ray may be parametrized as s=(x1(x3),x2(x3),x3) starting at a point A=(x1(x3A),x2(x3A),x3A) and ending at a point B=(x1(x3B),x2(x3B),x3B) . In this case, when compared to Hamilton's principle above, coordinates x1 and x2 take the role of the generalized coordinates qk while x3 takes the role of parameter σ , that is, parameter σ =x3 and N=2.
Lagrangian optics:
In the context of the calculus of variations this can be written as δS = δ∫_A^B n ds = 0, where ds is an infinitesimal displacement along the ray given by ds = √(dx1² + dx2² + dx3²), L = n(x1,x2,x3)√(1 + ẋ1² + ẋ2²) is the optical Lagrangian, and ẋk = dxk/dx3. The optical path length (OPL) is defined as S = ∫_A^B n ds, where n is the local refractive index as a function of position along the path between points A and B.
Lagrangian optics:
The Euler-Lagrange equations The general results presented above for Hamilton's principle can be applied to optics using the Lagrangian defined in Fermat's principle. The Euler-Lagrange equations with parameter σ = x3 and N = 2 applied to Fermat's principle result in d/dx3 (∂L/∂ẋk) − ∂L/∂xk = 0 with k = 1, 2, where L is the optical Lagrangian and ẋk = dxk/dx3.

Optical momentum The optical momentum is defined as pk = ∂L/∂ẋk, and from the definition of the optical Lagrangian L = n√(1 + ẋ1² + ẋ2²) this expression can be rewritten as pk = n ẋk / √(1 + ẋ1² + ẋ2²), or in vector form p = n ê = n (cos α1, cos α2, cos α3), where ê is a unit vector and angles α1, α2 and α3 are the angles p makes with axes x1, x2 and x3 respectively, as shown in figure "optical momentum". Therefore, the optical momentum is a vector of norm ‖p‖ = n, where n is the refractive index at which p is calculated. Vector p points in the direction of propagation of light. If light is propagating in a gradient-index optic, the path of the light ray is curved and vector p is tangent to the light ray.
Lagrangian optics:
The expression for the optical path length can also be written as a function of the optical momentum. Taking into consideration that ẋ3 = dx3/dx3 = 1, the expression for the optical Lagrangian can be rewritten as L = ẋ1 p1 + ẋ2 p2 + p3, and the expression for the optical path length is S = ∫ (ẋ1 p1 + ẋ2 p2 + p3) dx3.

Hamilton's equations Similarly to what happens in Hamiltonian mechanics, in optics the Hamiltonian is defined by the expression given above for N = 2, H = ẋ1 p1 + ẋ2 p2 − L, corresponding to functions x1(x3) and x2(x3) to be determined. Comparing this expression with L = ẋ1 p1 + ẋ2 p2 + p3 for the Lagrangian results in H = −p3 = −√(n² − p1² − p2²). The corresponding Hamilton's equations with parameter σ = x3 and k = 1, 2 applied to optics are ẋk = ∂H/∂pk and ṗk = −∂H/∂xk, with ẋk = dxk/dx3 and ṗk = dpk/dx3.
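A minimal ray-tracing sketch based on the optical Hamilton's equations above, with H = −√(n² − p1² − p2²); the gradient-index profile n(x1), the restriction to the x1x3 plane and the numerical integrator are illustrative assumptions.

```python
# Integrate dx1/dx3 = ∂H/∂p1 and dp1/dx3 = -∂H/∂x1 for a ray in the x1-x3 plane
# (p2 = 0), with a hypothetical gradient-index profile n(x1).
import numpy as np
from scipy.integrate import solve_ivp

def n(x1):
    return 1.5 - 0.1 * x1**2        # hypothetical refractive-index profile

def dn_dx1(x1):
    return -0.2 * x1

def rhs(x3, y):
    x1, p1 = y
    p3 = np.sqrt(n(x1)**2 - p1**2)  # H = -p3
    dx1_dx3 = p1 / p3               # ∂H/∂p1
    dp1_dx3 = n(x1) * dn_dx1(x1) / p3   # -∂H/∂x1
    return [dx1_dx3, dp1_dx3]

# Ray starting at x1 = 0.5 travelling parallel to the x3 axis (p1 = 0).
sol = solve_ivp(rhs, (0.0, 5.0), [0.5, 0.0], max_step=0.01)
print(sol.y[0][-1])                 # transverse position of the ray at x3 = 5
```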
Applications:
It is assumed that light travels along the x3 axis; as in Hamilton's principle above, coordinates x1 and x2 take the role of the generalized coordinates qk while x3 takes the role of the parameter σ, that is, σ = x3 and N = 2.
Refraction and reflection If plane x1x2 separates two media of refractive index nA below and nB above it, the refractive index is given by a step function: n = nA for x3 < 0 and n = nB for x3 > 0. Since n does not depend on x1 or x2, Hamilton's equations give ṗk = −∂H/∂xk = 0, and therefore pk = Constant for k = 1, 2.
Applications:
An incoming light ray has momentum pA before refraction (below plane x1x2) and momentum pB after refraction (above plane x1x2). The light ray makes an angle θA with axis x3 (the normal to the refractive surface) before refraction and an angle θB with axis x3 after refraction. Since the p1 and p2 components of the momentum are constant, only p3 changes from p3A to p3B.
Applications:
Figure "refraction" shows the geometry of this refraction from which sin sin θB . Since ‖pA‖=nA and ‖pB‖=nB , this last expression can be written as which is Snell's law of refraction.
Applications:
In figure "refraction", the normal to the refractive surface points in the direction of axis x3, and also of vector v=pA−pB . A unit normal n=v/‖v‖ to the refractive surface can then be obtained from the momenta of the incoming and outgoing rays by where i and r are unit vectors in the directions of the incident and refracted rays. Also, the outgoing ray (in the direction of pB ) is contained in the plane defined by the incoming ray (in the direction of pA ) and the normal n to the surface.
Applications:
A similar argument can be used for reflection in deriving the law of specular reflection, only now with nA = nB, resulting in θA = θB. Also, if i and r are unit vectors in the directions of the incident and reflected ray respectively, the corresponding normal to the surface is given by the same expression as for refraction, only with nA = nB.

In vector form, if i is a unit vector pointing in the direction of the incident ray and n is the unit normal to the surface, the direction r of the refracted ray is given by r = (nA/nB) i + (√Δ − (nA/nB)(i⋅n)) n, with Δ = 1 − (nA/nB)²(1 − (i⋅n)²). If i⋅n < 0 then −n should be used in the calculations. When Δ < 0, light suffers total internal reflection and the expression for the reflected ray is that of reflection, r = i − 2(i⋅n) n.

Rays and wavefronts From the definition of optical path length S = ∫ L dx3, it follows that ∂S/∂xk = pk with k = 1, 2, where the Euler-Lagrange equations ∂L/∂xk = dpk/dx3 with k = 1, 2 were used. Also, from the last of Hamilton's equations ∂H/∂x3 = −∂L/∂x3 and from H = −p3 above, it follows that ∂S/∂x3 = p3. Combining the equations for the components of momentum p results in p = ∇S. Since p is a vector tangent to the light rays, surfaces S = Constant must be perpendicular to those light rays. These surfaces are called wavefronts. Figure "rays and wavefronts" illustrates this relationship. Also shown is the optical momentum p, tangent to a light ray and perpendicular to the wavefront.
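A small sketch implementing the vector refraction and reflection rules as reconstructed above (the standard forms; the specific ray, surface and indices below are hypothetical):

```python
# r = (nA/nB) i + (sqrt(Δ) - (nA/nB)(i·n)) n, with Δ = 1 - (nA/nB)²(1 - (i·n)²),
# falling back to the reflection formula r = i - 2(i·n) n when Δ < 0.
import numpy as np

def refract_or_reflect(i, n, nA, nB):
    i = i / np.linalg.norm(i)
    n = n / np.linalg.norm(n)
    if np.dot(i, n) < 0:            # use -n so the normal points along propagation
        n = -n
    mu = nA / nB
    cos_i = np.dot(i, n)
    delta = 1.0 - mu**2 * (1.0 - cos_i**2)
    if delta < 0:                   # total internal reflection
        return i - 2.0 * cos_i * n
    return mu * i + (np.sqrt(delta) - mu * cos_i) * n

# Hypothetical incident ray at 30° to the x3 axis hitting the plane x3 = 0.
i = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
print(refract_or_reflect(i, np.array([0.0, 0.0, 1.0]), nA=1.0, nB=1.5))
```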
Applications:
Vector field p = ∇S is a conservative vector field. The gradient theorem can then be applied to the optical path length (as given above), resulting in S(B) − S(A) = ∫_C ∇S⋅ds = ∫_C p⋅ds, and the optical path length S calculated along a curve C between points A and B is a function of only its end points A and B and not of the shape of the curve between them. In particular, if the curve is closed, it starts and ends at the same point, A = B, so that ∮ p⋅ds = 0. This result may be applied to a closed path ABCDA as in figure "optical path length": for curve segment AB the optical momentum p is perpendicular to a displacement ds along curve AB, or p⋅ds = 0. The same is true for segment CD. For segment BC the optical momentum p has the same direction as displacement ds and p⋅ds = n ds. For segment DA the optical momentum p has the opposite direction to displacement ds and p⋅ds = −n ds. However, inverting the direction of the integration so that the integral is taken from A to D, ds inverts direction and p⋅ds = n ds. From these considerations ∫_BC n ds = ∫_AD n ds, or SBC = SAD, and the optical path length SBC between points B and C along the ray connecting them is the same as the optical path length SAD between points A and D along the ray connecting them. The optical path length is constant between wavefronts.
Applications:
Phase space Figure "2D phase space" shows at the top some light rays in a two-dimensional space. Here x2=0 and p2=0 so light travels on the plane x1x3 in directions of increasing x3 values. In this case p1²+p3²=n² and the direction of a light ray is completely specified by the p1 component of momentum p=(p1,p3) since p2=0. If p1 is given, p3 may be calculated (given the value of the refractive index n) and therefore p1 suffices to determine the direction of the light ray. The refractive index of the medium the ray is traveling in is determined by ‖p‖=n . For example, ray rC crosses axis x1 at coordinate xB with an optical momentum pC, which has its tip on a circle of radius n centered at position xB. Coordinate xB and the horizontal coordinate p1C of momentum pC completely define ray rC as it crosses axis x1. This ray may then be defined by a point rC=(xB,p1C) in space x1p1 as shown at the bottom of the figure. Space x1p1 is called phase space and different light rays may be represented by different points in this space.
Applications:
As such, ray rD shown at the top is represented by a point rD in phase space at the bottom. All rays crossing axis x1 at coordinate xB contained between rays rC and rD are represented by a vertical line connecting points rC and rD in phase space. Accordingly, all rays crossing axis x1 at coordinate xA contained between rays rA and rB are represented by a vertical line connecting points rA and rB in phase space. In general, all rays crossing axis x1 between xL and xR are represented by a volume R in phase space. The rays at the boundary ∂R of volume R are called edge rays. For example, at position xA of axis x1, rays rA and rB are the edge rays since all other rays are contained between these two. (A ray parallel to x1 would not be between the two rays, since the momentum is not in-between the two rays.) In three-dimensional geometry the optical momentum is given by p=(p1,p2,p3) with p1²+p2²+p3²=n² . If p1 and p2 are given, p3 may be calculated (given the value of the refractive index n) and therefore p1 and p2 suffice to determine the direction of the light ray. A ray traveling along axis x3 is then defined by a point (x1,x2) in plane x1x2 and a direction (p1,p2). It may then be defined by a point in four-dimensional phase space x1x2p1p2.
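As an illustration of this phase-space description, a ray crossing axis x1 can be encoded by the pair (x1, p1), with p3 recovered from p1²+p3²=n². The sketch below is an assumed example: the function names, angles and index value are invented purely for illustration.

```python
import numpy as np

def ray_to_phase_space(x1, theta, n):
    """Map a 2D ray crossing axis x1 at position x1 with angle theta
    (measured from axis x3) to its phase-space point (x1, p1) with p1 = n*sin(theta)."""
    return x1, n * np.sin(theta)

def p3_from_p1(p1, n):
    """Recover p3 from p1 using p1**2 + p3**2 = n**2
    (rays assumed to travel toward increasing x3, so p3 >= 0)."""
    return np.sqrt(n**2 - p1**2)

# Edge rays rA and rB crossing x1 = xA at +/- 10 degrees in a medium with n = 1.5
xA, n = 0.0, 1.5
rA = ray_to_phase_space(xA, np.radians(+10.0), n)
rB = ray_to_phase_space(xA, np.radians(-10.0), n)
print(rA, rB, p3_from_p1(rA[1], n))
```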
Applications:
Conservation of etendue Figure "volume variation" shows a volume V bounded by an area A. Over time, if the boundary A moves, the volume of V may vary. In particular, an infinitesimal area dA with outward pointing unit normal n moves with a velocity v.
Applications:
This leads to a volume variation dV=dA(v⋅n)dt . Making use of Gauss's theorem, the variation in time of the total volume V moving in space is dV/dt=∮A (v⋅n) dA=∫V ∇⋅v dV. The rightmost term is a volume integral over the volume V and the middle term is the surface integral over the boundary A of the volume V. Also, v is the velocity with which the points in V are moving.
Applications:
In optics coordinate x3 takes the role of time. In phase space a light ray is identified by a point (x1,x2,p1,p2) which moves with a "velocity" v=(x˙1,x˙2,p˙1,p˙2) where the dot represents a derivative relative to x3 . A set of light rays spreading over dx1 in coordinate x1 , dx2 in coordinate x2 , dp1 in coordinate p1 and dp2 in coordinate p2 occupies a volume dV=dx1dx2dp1dp2 in phase space. In general, a large set of rays occupies a large volume V in phase space to which Gauss's theorem may be applied, dV/dx3=∫V ∇⋅v dV , and using Hamilton's equations ∇⋅v=∂x˙1/∂x1+∂x˙2/∂x2+∂p˙1/∂p1+∂p˙2/∂p2=0 , or dV/dx3=0 and V=Constant, which means that the phase space volume is conserved as light travels along an optical system.
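A quick numerical way to see this conservation is to check that the phase-space map taking (x1, p1) to its propagated values has unit Jacobian determinant. The sketch below assumes propagation through a homogeneous medium of index n over a distance d along x3; the function names and numbers are illustrative assumptions.

```python
import numpy as np

def propagate(x1, p1, d, n):
    """Trace a 2D ray (x1, p1) through a homogeneous medium of index n
    over a distance d along x3: p1 is unchanged and x1 advances by
    d*p1/p3 with p3 = sqrt(n**2 - p1**2)."""
    p3 = np.sqrt(n**2 - p1**2)
    return x1 + d * p1 / p3, p1

def jacobian_det(x1, p1, d, n, eps=1e-6):
    """Numerical Jacobian determinant of the phase-space map (x1,p1) -> (x1',p1')."""
    f = lambda x, p: np.array(propagate(x, p, d, n))
    dfdx = (f(x1 + eps, p1) - f(x1 - eps, p1)) / (2 * eps)
    dfdp = (f(x1, p1 + eps) - f(x1, p1 - eps)) / (2 * eps)
    return dfdx[0] * dfdp[1] - dfdx[1] * dfdp[0]

print(jacobian_det(x1=0.2, p1=0.3, d=5.0, n=1.5))  # ~1.0: phase-space area (etendue) is conserved
```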
Applications:
The volume occupied by a set of rays in phase space is called etendue, which is conserved as light rays progress in the optical system along direction x3. This corresponds to Liouville's theorem, which also applies to Hamiltonian mechanics.
Applications:
However, the meaning of Liouville’s theorem in mechanics is rather different from the theorem of conservation of étendue. Liouville’s theorem is essentially statistical in nature, and it refers to the evolution in time of an ensemble of mechanical systems of identical properties but with different initial conditions. Each system is represented by a single point in phase space, and the theorem states that the average density of points in phase space is constant in time. An example would be the molecules of a perfect classical gas in equilibrium in a container. Each point in phase space, which in this example has 2N dimensions, where N is the number of molecules, represents one of an ensemble of identical containers, an ensemble large enough to permit taking a statistical average of the density of representative points. Liouville’s theorem states that if all the containers remain in equilibrium, the average density of points remains constant.
Applications:
Imaging and nonimaging optics Figure "conservation of etendue" shows on the left a diagrammatic two-dimensional optical system in which x2=0 and p2=0 so light travels on the plane x1x3 in directions of increasing x3 values.
Light rays crossing the input aperture of the optic at point x1=xI are contained between edge rays rA and rB represented by a vertical line between points rA and rB at the phase space of the input aperture (right, bottom corner of the figure). All rays crossing the input aperture are represented in phase space by a region RI.
Applications:
Also, light rays crossing the output aperture of the optic at point x1=xO are contained between edge rays rA and rB represented by a vertical line between points rA and rB at the phase space of the output aperture (right, top corner of the figure). All rays crossing the output aperture are represented in phase space by a region RO.
Applications:
Conservation of etendue in the optical system means that the volume (or area in this two-dimensional case) in phase space occupied by RI at the input aperture must be the same as the volume in phase space occupied by RO at the output aperture.
Applications:
In imaging optics, all light rays crossing the input aperture at x1=xI are redirected by it towards the output aperture at x1=xO where xO=m xI. This ensures that an image of the input is formed at the output with a magnification m. In phase space, this means that vertical lines in the phase space at the input are transformed into vertical lines at the output. That would be the case of vertical line rA rB in RI transformed to vertical line rA rB in RO.
Applications:
In nonimaging optics, the goal is not to form an image but simply to transfer all light from the input aperture to the output aperture. This is accomplished by transforming the edge rays ∂RI of RI to edge rays ∂RO of RO. This is known as the edge ray principle.
Generalizations:
Above it was assumed that light travels along the x3 axis; in Hamilton's principle above, coordinates x1 and x2 take the role of the generalized coordinates qk while x3 takes the role of parameter σ , that is, parameter σ=x3 and N=2. However, different parametrizations of the light rays are possible, as well as the use of generalized coordinates.
Generalizations:
General ray parametrization A more general situation can be considered in which the path of a light ray is parametrized as s=(x1(σ),x2(σ),x3(σ)) in which σ is a general parameter. In this case, when compared to Hamilton's principle above, coordinates x1 , x2 and x3 take the role of the generalized coordinates qk with N=3. Applying Hamilton's principle to optics in this case leads to δ∫L dσ=0 where now L=n ds/dσ and x˙k=dxk/dσ , and for which the Euler-Lagrange equations applied to this form of Fermat's principle result in d/dσ(∂L/∂x˙k)=∂L/∂xk with k=1,2,3 and where L is the optical Lagrangian. Also in this case the optical momentum is defined as pk=∂L/∂x˙k and the Hamiltonian P is defined by the expression given above for N=3, P=x˙1p1+x˙2p2+x˙3p3−L , corresponding to functions x1(σ) , x2(σ) and x3(σ) to be determined. And the corresponding Hamilton's equations with k=1,2,3 applied to optics are x˙k=∂P/∂pk and p˙k=−∂P/∂xk with x˙k=dxk/dσ and p˙k=dpk/dσ . The optical Lagrangian is given by L=n ds/dσ=n(x1,x2,x3)√(x˙1²+x˙2²+x˙3²) and does not explicitly depend on parameter σ. For that reason not all solutions of the Euler-Lagrange equations will be possible light rays, since their derivation assumed an explicit dependence of L on σ which does not happen in optics.
Generalizations:
The optical momentum components can be obtained from pk=∂L/∂x˙k=n x˙k/√(x˙1²+x˙2²+x˙3²) where x˙k=dxk/dσ . The expression for the Lagrangian can be rewritten as L=x˙1p1+x˙2p2+x˙3p3. Comparing this expression for L with that for the Hamiltonian P it can be concluded that P=0. From the expressions for the components pk of the optical momentum it results that p1²+p2²+p3²=n². The optical Hamiltonian is chosen as P=√(p1²+p2²+p3²)−n(x1,x2,x3) , although other choices could be made. The Hamilton's equations with k = 1, 2, 3 defined above together with P=0 define the possible light rays.
Generalizations:
Generalized coordinates As in Hamiltonian mechanics, it is also possible to write the equations of Hamiltonian optics in terms of generalized coordinates (q1(σ),q2(σ),q3(σ)) , generalized momenta (u1(σ),u2(σ),u3(σ)) and Hamiltonian P as q˙k=∂P/∂uk and u˙k=−∂P/∂qk with k=1,2,3, where the optical momentum is given by p=u1a1e^1+u2a2e^2+u3a3e^3 and e^1 , e^2 and e^3 are unit vectors. A particular case is obtained when these vectors form an orthonormal basis, that is, they are all perpendicular to each other. In that case, ukak/n is the cosine of the angle the optical momentum p makes to unit vector e^k . | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sims' vaginal speculum**
Sims' vaginal speculum:
In gynaecology, Sims' vaginal speculum is a double-bladed surgical instrument used for examining the vagina and cervix. It was developed by J. Marion Sims out of a pewter spoon, but nowadays it is manufactured out of stainless steel or plastic. The plastic speculum is disposable, but the stainless steel one is not. Therefore, the stainless steel speculum should be sterilized before each use. Sims' speculum is inserted into the vagina to retract the posterior vaginal wall. It gives more exposure of the vaginal walls than Cusco's speculum and therefore is preferred for gynaecological surgeries. It is possible to slide the instrument around the vaginal wall to enable better visualization. The groove in the middle of Sims' speculum allows free flow of secretions and blood to the outside, thereby keeping the area dry. Sims' speculum is available in various sizes, and the size appropriate to the vaginal dimensions of the woman is chosen for use. The disadvantage of Sims' speculum is that it is not self-retaining. The examiner might want to use an anterior wall retractor in addition to Sims' speculum for better visualization of the cervix. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Automatic basis function construction**
Automatic basis function construction:
In machine learning, automatic basis function construction (or basis discovery) is the mathematical method of looking for a set of task-independent basis functions that map the state space to a lower-dimensional embedding, while still representing the value function accurately. Automatic basis construction is independent of prior knowledge of the domain, which allows it to perform well where expert-constructed basis functions are difficult or impossible to create.
Motivation:
In reinforcement learning (RL), most real-world Markov Decision Process (MDP) problems have large or continuous state spaces, which typically require some sort of approximation to be represented efficiently.
Motivation:
Linear function approximators (LFAs) are widely adopted for their low theoretical complexity. Two sub-problems need to be solved for better approximation: weight optimization and basis construction. To solve the second problem, one way is to design special basis functions. Those basis functions work well in specific tasks but are significantly restricted to their domains. Thus constructing basis functions automatically is preferred for broader applications.
Problem definition:
A Markov decision process with finite state space and fixed policy is defined by a 5-tuple (S,A,P,γ,r) , which includes the finite state space S={1,2,…,s} , the finite action space A , the reward function r , the discount factor γ∈[0,1) , and the transition model P . The Bellman equation is defined as: v=r+γPv.
Problem definition:
When the number of elements in S is small, v is usually maintained in tabular form, but when S grows too large for this kind of representation, v is commonly approximated via a linear combination of basis functions Φ=(ϕ1,ϕ2,…,ϕn) , so that we have: v≈v^=∑i=1n θiϕi . Here Φ is a |S|×n matrix in which every row contains the feature vector of the corresponding state, θ is a weight vector with n parameters, and usually n≪|S| . Basis construction looks for ways to automatically construct a better basis Φ which can represent the value function well. A good construction method should have the following characteristics: small error bounds between the estimate and the real value function; formation of an orthogonal basis in the value function space; fast convergence to the stationary value function.
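As a concrete illustration of the linear form v ≈ Φθ, the following sketch builds a small fixed-policy MDP and fits the weights by least squares against the exact value function. The matrices P, r and Φ are invented for illustration, and in practice the weights would come from a method such as LSTD rather than from the exact v, which is of course unknown.

```python
import numpy as np

# Made-up 3-state fixed-policy MDP (transition matrix P and reward r are examples only)
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.9]])
r = np.array([0.0, 0.0, 1.0])
gamma = 0.95

# Exact solution of the Bellman equation v = r + gamma*P*v
v_true = np.linalg.solve(np.eye(3) - gamma * P, r)

# |S| x n basis matrix Phi (here: a constant feature and a linear feature)
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])
theta, *_ = np.linalg.lstsq(Phi, v_true, rcond=None)  # weight vector theta
v_hat = Phi @ theta
print(np.max(np.abs(v_hat - v_true)))  # approximation error of this particular basis
```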
Popular methods:
Proto-value basis In this approach, Mahadevan analyzes the connectivity graph between states to determine a set of basis functions. The normalized graph Laplacian is defined as: L=I−D−1/2WD−1/2 . Here W is an adjacency matrix which represents the states of the fixed-policy MDP, which form an undirected graph (N,E). D is a diagonal matrix related to the nodes' degrees.
In a discrete state space, the adjacency matrix W could be constructed by simply checking whether two states are connected, and D could be calculated by summing up every row of W. In a continuous state space, we could take the random-walk Laplacian of W.
This spectral framework can be used for value function approximation (VFA). Given the fixed policy, the edge weights are determined by the corresponding states' transition probabilities. To get a smooth value approximation, diffusion wavelets are used.
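A minimal sketch of the proto-value idea, assuming a symmetric adjacency matrix W is available: form the normalized Laplacian L = I − D^(−1/2) W D^(−1/2) and keep its smoothest eigenvectors as basis functions. The chain-graph example and function name are illustrative assumptions, not from the original text.

```python
import numpy as np

def proto_value_basis(W, k):
    """Return the k smoothest eigenvectors of the normalized graph Laplacian
    L = I - D^{-1/2} W D^{-1/2} as proto-value basis functions.
    W is a symmetric adjacency matrix over the states."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, :k]                  # columns = basis functions Phi

# Adjacency matrix of a 5-state chain (each state connected to its neighbours)
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Phi = proto_value_basis(W, k=3)
print(Phi.shape)   # (5, 3): three proto-value basis functions over five states
```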
Krylov basis Krylov basis construction uses the actual transition matrix instead of the random-walk Laplacian. The assumption of this method is that the transition model P and reward r are available.
The vectors in the Neumann series are denoted as yi=Pir for all i∈[0,∞). It can be shown that the Krylov space spanned by y0,y1,…,ym−1 is enough to represent any value function, where m is the degree of the minimal polynomial of (I−γP). Suppose the minimal polynomial gives p(A)=(1/α0)∑i=0m−1αi+1Ai with B=p(A) , A=I−γP and BA=I ; then the value function can be written as: v=Br=(1/α0)∑i=0m−1αi+1(I−γP)ir=∑i=0m−1βiyi for some coefficients βi, a linear combination of the Krylov vectors.
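A possible sketch of plain (non-augmented) Krylov basis construction under the stated assumption that P and r are known: it builds y_i = P^i r and orthonormalizes them by Gram-Schmidt. The 3-state example values are invented for illustration.

```python
import numpy as np

def krylov_basis(P, r, m):
    """Build an orthonormal basis of the Krylov space
    span{r, P r, P^2 r, ..., P^{m-1} r} by Gram-Schmidt."""
    basis = []
    y = r.astype(float).copy()
    for _ in range(m):
        z = y.copy()
        for b in basis:                 # orthogonalize against earlier vectors
            z -= np.dot(b, z) * b
        norm = np.linalg.norm(z)
        if norm < 1e-10:                # Krylov space exhausted
            break
        basis.append(z / norm)
        y = P @ y                       # next Neumann-series vector y_i = P^i r
    return np.column_stack(basis)

# Illustrative 3-state example (P and r are made up)
P = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 0.9]])
r = np.array([0.0, 0.0, 1.0])
Phi = krylov_basis(P, r, m=3)
print(Phi.T @ Phi)   # ~identity: the basis vectors are orthonormal
```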
Popular methods:
Algorithm: Augmented Krylov Method
z1,z2,…,zk are the top k real eigenvectors of P
zk+1 := r
for i := 1:(l+k) do
  if i > k+1 then
    zi := Pzi−1
  end if
  for j := 1:(i−1) do
    zi := zi − <zj,zi>zj
  end for
  if ‖zi‖ ≈ 0 then
    break
  end if
  zi := zi/‖zi‖
end for
k: number of eigenvectors in the basis; l: total number of vectors.
Bellman error basis The Bellman error, from which Bellman error basis functions (BEBFs) are built, is defined as: ε=r+γPv^−v^=r+γPΦθ−Φθ . Loosely speaking, the Bellman error points towards the optimal value function. The sequence of BEBFs forms an orthogonal basis in the value function space; thus with a sufficient number of BEBFs, any value function can be represented exactly.
Popular methods:
Algorithm: BEBF
stage i=1: ϕ1=r
stage i∈[2,N]:
  compute the weight vector θi according to the current basis functions Φi
  compute the new Bellman error by ε=r+γPΦiθi−Φiθi
  add the Bellman error to form the new basis function: Φi+1=[Φi:ε]
N represents the number of iterations till convergence.
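The BEBF loop above can be sketched as follows, assuming P, r and γ are known. The weight vector is obtained here from the projected Bellman (LSTD-style) fixed point, which is only one of several possible choices for the "compute the weight vector" step; the example MDP is invented for illustration.

```python
import numpy as np

def bebf_basis(P, r, gamma, n_iter):
    """Bellman error basis functions: start with phi_1 = r and repeatedly
    append the Bellman error of the current approximation."""
    Phi = r.reshape(-1, 1).astype(float)
    for _ in range(n_iter - 1):
        # Weight vector for the current basis from the projected Bellman
        # fixed point (an LSTD-style solve; other choices are possible).
        A = Phi.T @ (Phi - gamma * P @ Phi)
        b = Phi.T @ r
        theta = np.linalg.solve(A, b)
        v_hat = Phi @ theta
        eps = r + gamma * P @ v_hat - v_hat          # Bellman error
        Phi = np.hstack([Phi, eps.reshape(-1, 1)])   # Phi_{i+1} = [Phi_i : eps]
    return Phi

P = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 0.9]])
r = np.array([0.0, 0.0, 1.0])
Phi = bebf_basis(P, r, gamma=0.95, n_iter=3)
print(Phi.shape)   # (3, 3): with enough iterations the basis can represent the value function
```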
":" means juxtaposing matrices or vectors.
Bellman average reward bases Bellman Average Reward Bases (or BARBs) are similar to Krylov bases, but the reward function is dilated by the average-adjusted transition matrix P−P∗ . Here P∗ can be calculated by a number of existing methods. BARBs converge faster than BEBFs and Krylov bases when γ is close to 1.
Algorithm: BARBs
stage i=1: ϕ1=P∗r
stage i∈[2,N]:
  compute the weight vector θi according to the current basis functions Φi
  compute the new basis: ϕi+1=r−P∗r+PΦiθi−Φiθi , and add it to form the new basis matrix Φi+1=[Φi:ϕi+1]
N represents the number of iterations till convergence.
":" means juxtaposing matrices or vectors.
Discussion and analysis:
There are two principal types of basis construction methods.
Discussion and analysis:
The first type of methods are reward-sensitive, like Krylov and BEBFs; they dilate the reward function geometrically through the transition matrix. However, when the discount factor γ approaches 1, Krylov and BEBFs converge slowly. This is because the error of Krylov-based methods is restricted by a Chebyshev polynomial bound. To solve this problem, methods such as BARBs have been proposed. BARBs are an incremental variant of Drazin bases, and converge faster than Krylov and BEBFs when γ becomes large.
Discussion and analysis:
The other type is the reward-insensitive proto-value basis functions derived from the graph Laplacian. This method uses graph information, but the construction of the adjacency matrix makes it hard to analyze. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Occupational hearing loss**
Occupational hearing loss:
Occupational hearing loss (OHL) is hearing loss that occurs as a result of occupational hazards, such as excessive noise and ototoxic chemicals. Noise is a common workplace hazard, and recognized as the risk factor for noise-induced hearing loss and tinnitus but it is not the only risk factor that can result in a work-related hearing loss. Also, noise-induced hearing loss can result from exposures that are not restricted to the occupational setting.
Occupational hearing loss:
OHL is a prevalent occupational concern in various work environments worldwide. In the United States, organizations such as the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health (NIOSH) and the Mine Safety and Health Administration (MSHA) work with employers and workers to reduce or eliminate occupational hearing hazards through a hierarchy of hazard controls. OHL is one of the most common work-related illnesses in the United States. Occupational hearing hazards include industrial noise and exposure to various ototoxic chemicals. Combined exposure to both industrial noise and ototoxic chemicals may cause more damage than either one would in isolation. Many chemicals have not been tested for ototoxicity, so unknown threats may exist.
Occupational hearing loss:
A 2016 study by NIOSH found that the mining sector had the highest prevalence of hearing impairment at 17%, followed by the construction sector (16%) and the manufacturing sector (14%). The public safety sector had the lowest rate of hearing impairment, at 7%. Overall, audiometric records show that about 33% of working-age adults with a history of occupational noise exposure have evidence of noise-induced hearing damage, and 16% of noise-exposed workers have material hearing impairment. In the service sector the prevalence of hearing loss was 17% compared to 16% for all industries combined. Several sub-sectors however exceeded the overall prevalence (10-33% higher) and/or had adjusted risks significantly higher than the reference industry. Workers in Administration of Urban Planning and Community and Rural Development had the highest prevalence (50%), and workers in Solid Waste Combustors and Incinerators had more than double the risk, the highest of any sub-sector. Some sub-sectors traditionally viewed as "low-risk", such as Real Estate and Rental and Leasing, and financial sub-sectors (Credit Unions, Call centers), also had high prevalences and risks. Personal protective equipment, administrative controls, and engineering controls can all work to reduce exposure to noise and chemicals, either by providing the worker with protection such as earplugs, or by reducing the noise or chemicals at the source or limiting the time or level of exposure.
Background:
OHL is defined as any type of hearing loss, i.e. sensorineural, conductive, or mixed hearing loss, that occurs due to hazardous characteristics of a work environment. The hearing loss can range in severity from mild to profound and can be accompanied by tinnitus. Hazards of a work environment that can result in OHL include excessive noise, ototoxic chemicals, or physical trauma. OHL caused by excessive exposure to noise is also known as noise-induced hearing loss (NIHL). Noise exposure combined with ototoxic chemical exposure can result in more damage to hearing. OHL caused by physical trauma may include foreign bodies in the ear, vibration, barotrauma, or head injury. OHL, as well as hearing loss in general, can cause negative secondary social and emotional effects that can impact quality of life. Within the United States of America, approximately 10 million people have NIHL. Over twice that number (~22 million) are occupationally exposed to dangerous noise levels. Hearing loss accounted for a sizable percentage of occupational illness in 2007, at 14% of cases. United States government agencies such as OSHA, NIOSH and MSHA are working to understand the causes of OHL and how it can be prevented while providing regulations and guidelines to help protect the hearing of workers in all occupations.
Causes:
Noise exposure Exposure to noise can cause vibrations able to cause permanent damage to the ear. Both the volume of the noise and the duration of exposure can influence the likelihood of damage. Sound is measured in units called decibels, which is a logarithmic scale of sound levels that corresponds to the level of loudness that an individual's ear would perceive. Because it is a logarithmic scale, even small incremental increases in decibels correlate to large increases in loudness, and an increase in the risk of hearing loss. Sounds above 80 dB have the potential to cause permanent hearing loss. The intensity of sound is considered too great and hazardous if someone must yell in order to be heard. Ringing in the ears upon leaving work is also indicative of noise that is at a dangerous level. Farming, machinery work, and construction are some of the many occupations that put workers at risk of hearing loss. NIOSH establishes recommended exposure limits (RELs) to protect workers against the health effects of exposure to hazardous substances and agents encountered in the workplace. These NIOSH limits are based on the best available science and practices. NIOSH established the REL for occupational noise exposures to be 85 decibels, A-weighted (dB[A]) as an 8-hour time-weighted average. Occupational noise exposures at or above this level are considered hazardous. The REL is based on exposures at work 5 days per week and assumes that the individual spends the other 16 hours in the day, as well as weekends, in quieter conditions. NIOSH also specifies a maximum allowable daily noise dose, expressed in percentages. For example, a person continuously exposed to 85 dB(A) over an 8-hour work shift will reach 100% of their daily noise dose. This dose limit uses a 3-dB time-intensity tradeoff commonly referred to as the exchange rate or equal-energy rule: for every 3-dB increase in noise level, the allowable exposure time is reduced by half. For example, if the exposure level increases to 88 dB(A), workers should only be exposed for four hours. Alternatively, for every 3-dB decrease in noise level, the allowable exposure time is doubled.
Causes:
OSHA's current permissible exposure limit (PEL) for workers is an average of 90 dB over an 8-hour work day. Unlike NIOSH, OSHA uses a 5-dB exchange rate, where an increase in 5-dB for a sound corresponds to the amount of time workers may be exposed to that particular source of sound being halved. For example, workers cannot be exposed to a sound level of 95 dB for more than 4 hours per day, or to sounds at 100 dB for more than 2 hours per day. Employers who expose workers to 85 dB or more for 8 hour shifts are required to provide hearing exams and protection, monitor noise levels, and provide training.
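The NIOSH and OSHA rules above boil down to a halving of allowable time for each exchange-rate increase above the criterion level. The sketch below shows that arithmetic; the function names are illustrative, and the 85 dB(A)/3-dB and 90 dB(A)/5-dB parameters are the ones given above.

```python
def allowable_hours(level_dba, criterion_db, exchange_rate_db, base_hours=8.0):
    """Allowable daily exposure time for a given noise level.

    NIOSH REL: 85 dB(A) criterion, 3-dB exchange rate.
    OSHA PEL:  90 dB(A) criterion, 5-dB exchange rate.
    Time is halved for every exchange-rate increase above the criterion.
    """
    return base_hours / 2 ** ((level_dba - criterion_db) / exchange_rate_db)

def niosh_daily_dose(exposures):
    """Percent daily noise dose for (level_dBA, hours) pairs under the
    NIOSH criterion (100% = 85 dB(A) for 8 hours)."""
    return 100.0 * sum(hours / allowable_hours(level, 85.0, 3.0)
                       for level, hours in exposures)

print(allowable_hours(88, 85, 3))            # 4.0 hours (NIOSH)
print(allowable_hours(95, 90, 5))            # 4.0 hours (OSHA)
print(niosh_daily_dose([(85, 4), (88, 2)]))  # 100.0 (% of the NIOSH daily dose)
```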
Causes:
Sound level meters and dosimeters are two types of devices that are used to measure sound levels in the workplace. Dosimeters are typically worn by the employee to measure their own personal sound exposure. Other sound level meters can be used to double check dosimeter measurements, or used when dosimeters cannot be worn by the employees. They can also be used to evaluate engineering controls aimed at reducing noise levels. Some recent studies suggest that some smartphone applications may be able to measure noise as precisely as a Type 2 SLM. Although most smartphone sound measurement apps are not accurate enough to be used for legally required measurements, the NIOSH Sound Level Meter app met the requirements of IEC 61672/ANSI S1.4 Sound Level Meter Standards (Electroacoustics - Sound Level Meters - Part 3: Periodic Tests).
Causes:
Ototoxic chemical exposure Chemically-induced hearing loss (CIHL) is a potential result of occupational exposures. Certain chemical compounds may have ototoxic effects.[1] Exposure to organic solvents, heavy metals, and asphyxiants such as carbon monoxide can all cause hearing loss. These chemicals can be inhaled, ingested, or absorbed through the skin. Damage can occur to either the inner ear or the auditory nerve. Certain medications may also have the potential to cause hearing loss. Both noise and chemical exposures are common in many industries, and can both contribute to hearing loss simultaneously. Damage may be more likely or more severe if both are present, in particular if noise is impulsive. Industries in which combinations of exposures may exist include construction, fiberglass, metal manufacturing, and many more. It is estimated that over 22 million workers are exposed to dangerous noise levels, and 10 million are exposed to solvents that could potentially cause hearing loss every year, with an unknown number exposed to other ototoxic chemicals. A 2018 informational bulletin by the US Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) introduces the issue, provides examples of ototoxic chemicals, lists the industries and occupations at risk and provides prevention information.[2]
Prevention:
OHL is preventable, but currently the interventions to prevent it involve many components. Stricter legislation might reduce noise levels in the workplace. Hearing protection devices, such as earmuffs and earplugs, can reduce noise exposure to safe levels, but instructions are needed on how to put plugs into the ears correctly to achieve their potential attenuation. Giving workers information on their noise exposure levels by itself was not shown to decrease noise. Engineering solutions might lead to similar noise reduction as that provided by hearing protection, but better evaluation of the noise exposures resulting from engineering interventions is needed, as most of the available information is limited to observations in laboratory conditions. Overall, the effects of hearing loss prevention programs are unclear. Better use of hearing protection may be achieved as part of a program, but by itself does not necessarily protect against hearing loss. The 2017 Cochrane review concluded that, in order to prevent NIHL in the workplace, the quality of the implementation of prevention programs, particularly of the hearing protection component, affects results, and that better-quality studies, especially in the field of engineering controls, and better implementation of legislation are needed. While the review concluded there is a lack of conclusive evidence, it highlighted that this should not be interpreted as evidence of lack of effectiveness. The implication is that future research could affect conclusions reached.
Prevention:
Hierarchy of controls The hierarchy of controls provides a visual guide to the effectiveness of the various workplace controls set in place to eliminate or reduce exposure to occupational hazards, including noise or ototoxic chemicals. The hierarchy includes the following, from most effective to least effective:
Elimination: complete removal of the hazard
Substitution: replacement that offers a smaller risk
Engineering controls: physical changes to reduce exposure
Administrative controls: changes in work procedures or training
Personal protective equipment (PPE): individual equipment to reduce exposure, e.g. earplugs
Engineering controls Engineering controls are the next highest in the hierarchy of risk reduction methods when elimination and substitution of the hazard are not possible. These types of controls typically involve making changes in equipment or other changes to minimize the level of noise that reaches a worker's ear. They may also involve measures such as barriers between the worker and the source of the noise, mufflers, regular maintenance of the machinery, or substituting quieter equipment. The OSHA Technical Manual (OTM) on noise provides technical information about workplace hazards and controls to OSHA's Compliance Safety and Health Officers (CSHOs). The content of the OTM is based on currently available research publications, OSHA standards, and consensus standards. The OTM is available to the public for use by other health and safety professionals, employers, and anyone involved in developing or implementing an effective workplace safety and health program. Examples of noise control strategies adopted in the workplace can be seen among the winners of the Safe-in-Sound Excellence in Hearing Loss Prevention Awards.
Prevention:
Administrative controls Administrative controls, behind engineering controls, are the next best form of prevention of noise exposure. They can either reduce the exposure to noise, or reduce the decibel level of the noise itself. Limiting the amount of time a worker is allowed to be around an unsafe level of noise exposure, and creating procedures for operation of equipment that could produce harmful levels of noise, are both examples of administrative controls.
Prevention:
Personal protection Elimination or reduction of the source of noise or chemical exposure is ideal, but when that is not possible or adequate, wearing personal protective equipment (PPE) such as earplugs or earmuffs can help reduce the risk of hearing loss due to noise exposure. PPE should be a last resort and not be used in substitution for engineering or administrative controls. It is important that workers are properly trained on the use of PPE to ensure proper protection. A personal attenuation rating can be objectively measured through a hearing protection fit-testing system.
Prevention:
Other initiatives In addition to the hierarchy of controls, other programs have been created to promote the prevention of hearing loss in the workplace. For example, the Buy Quiet program was created to encourage the purchase of quieter tools and machinery in the workplace. Additionally, the Safe-in-Sound Award was created to recognize organizations that excel in preventing occupational hearing loss.
History:
Occupational hearing loss is a persistent industrial issue that has been recognized since the Industrial Revolution. As industrial society continues to grow, the problem has become increasingly significant owing to workers' exposure to hazardous chemicals and physical agents. Millions of employees have been affected by occupational hearing loss, especially in industry. Industrialized countries see most of these damages, which result in both economic and quality-of-life problems.
History:
Within the United States of America alone, 10 of the 28 million people who have experienced hearing loss did so in relation to noise exposure. Rarely do workers express concerns or complaints regarding occupational hearing loss. In order to gather relevant information, workers who have experienced loud work environments are questioned regarding their hearing abilities during everyday activities. When analyzing OHL, it is necessary to consider family history, hobbies, recreational activities, and how they could play a role in a person's hearing loss. In order to test hearing loss, audiometers are used and are adjusted to American National Standards Institute (ANSI) regulations. The Occupational Safety and Health Administration (OSHA) of the United States of America requires a hearing conservation program when noise levels are greater than 85 dB. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alioth (Debian)**
Alioth (Debian):
Alioth was a FusionForge system run by the Debian project for development of free software and free documentation, especially software or documentation to do with Debian. Most of the projects hosted by Alioth were packaging existing software in the Debian format. However, there were some notable non-Debian projects hosted, like the SANE project.
History:
Alioth had been announced in March 2003. Originally Alioth was to be hosted on the SourceForge code base; the Free Software version of GForge was chosen later as it avoided the need to duplicate effort spent on rebranding SourceForge. From 2009, Alioth ran a GForge descendant called FusionForge.
In 2018, Alioth was replaced by a GitLab-based solution hosted on salsa.debian.org, and was finally switched off in June 2018. Alioth administrators have included Raphaël Hertzog and Roland Mas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nuclear power plant**
Nuclear power plant:
A nuclear power plant (NPP) is a thermal power station in which the heat source is a nuclear reactor. As is typical of thermal power stations, heat is used to generate steam that drives a steam turbine connected to a generator that produces electricity. As of August 2023, the International Atomic Energy Agency reported there were 412 nuclear power reactors in operation in 31 countries around the world, and 57 nuclear power reactors under construction. Nuclear plants are very often used for base load since their operations, maintenance, and fuel costs are at the lower end of the spectrum of costs. However, building a nuclear power plant often spans five to ten years, which can accrue to significant financial costs, depending on how the initial investments are financed. Nuclear power plants have a carbon footprint comparable to that of renewable energy such as solar farms and wind farms, and much lower than fossil fuels such as natural gas and coal. Despite some spectacular catastrophes, nuclear power plants are among the safest modes of electricity generation, comparable to solar and wind power plants.
History:
The first time that heat from a nuclear reactor was used to generate electricity was on December 21, 1951, at the Experimental Breeder Reactor I, feeding four light bulbs. On June 27, 1954, the world's first nuclear power station to generate electricity for a power grid, the Obninsk Nuclear Power Plant, commenced operations in Obninsk, in the Soviet Union.
The world's first full scale power station, Calder Hall in the United Kingdom, opened on October 17, 1956. The world's first full scale power station solely devoted to electricity production—Calder Hall was also meant to produce plutonium—the Shippingport Atomic Power Station in Pennsylvania, United States—was connected to the grid on December 18, 1957.
Basic components:
Systems The conversion to electrical energy takes place indirectly, as in conventional thermal power stations. The fission in a nuclear reactor heats the reactor coolant. The coolant may be water or gas, or even liquid metal, depending on the type of reactor. The reactor coolant then goes to a steam generator and heats water to produce steam. The pressurized steam is then usually fed to a multi-stage steam turbine. After the steam turbine has expanded and partially condensed the steam, the remaining vapor is condensed in a condenser. The condenser is a heat exchanger which is connected to a secondary side such as a river or a cooling tower. The water is then pumped back into the steam generator and the cycle begins again. The water-steam cycle corresponds to the Rankine cycle.
Basic components:
The nuclear reactor is the heart of the station. In its central part, the reactor's core produces heat due to nuclear fission. With this heat, a coolant is heated as it is pumped through the reactor and thereby removes the energy from the reactor. The heat from nuclear fission is used to raise steam, which runs through turbines, which in turn power the electrical generators.
Basic components:
Nuclear reactors usually rely on uranium to fuel the chain reaction. Uranium is a very heavy metal that is abundant on Earth and is found in sea water as well as most rocks. Naturally occurring uranium is found in two different isotopes: uranium-238 (U-238), accounting for 99.3% and uranium-235 (U-235) accounting for about 0.7%. U-238 has 146 neutrons and U-235 has 143 neutrons.
Basic components:
Different isotopes have different behaviors. For instance, U-235 is fissile which means that it is easily split and gives off a lot of energy making it ideal for nuclear energy. On the other hand, U-238 does not have that property despite it being the same element. Different isotopes also have different half-lives. U-238 has a longer half-life than U-235, so it takes longer to decay over time. This also means that U-238 is less radioactive than U-235.
Basic components:
Since nuclear fission creates radioactivity, the reactor core is surrounded by a protective shield. This containment absorbs radiation and prevents radioactive material from being released into the environment. In addition, many reactors are equipped with a dome of concrete to protect the reactor against both internal casualties and external impacts.
Basic components:
The purpose of the steam turbine is to convert the heat contained in steam into mechanical energy. The engine house with the steam turbine is usually structurally separated from the main reactor building. It is aligned so as to prevent debris from the destruction of a turbine in operation from flying towards the reactor. In the case of a pressurized water reactor, the steam turbine is separated from the nuclear system. To detect a leak in the steam generator and thus the passage of radioactive water at an early stage, an activity meter is mounted to track the outlet steam of the steam generator. In contrast, boiling water reactors pass radioactive water through the steam turbine, so the turbine is kept as part of the radiologically controlled area of the nuclear power station.
Basic components:
The electric generator converts mechanical power supplied by the turbine into electrical power. Low-pole AC synchronous generators of high rated power are used. A cooling system removes heat from the reactor core and transports it to another area of the station, where the thermal energy can be harnessed to produce electricity or to do other useful work. Typically the hot coolant is used as a heat source for a boiler, and the pressurized steam from that drives one or more steam turbine driven electrical generators. In the event of an emergency, safety valves can be used to prevent pipes from bursting or the reactor from exploding. The valves are designed so that they can discharge all of the supplied flow rates with little increase in pressure. In the case of the BWR, the steam is directed into the suppression chamber and condenses there. The chambers on a heat exchanger are connected to the intermediate cooling circuit.
Basic components:
The main condenser is a large cross-flow shell and tube heat exchanger that takes wet vapor, a mixture of liquid water and steam at saturation conditions, from the turbine-generator exhaust and condenses it back into sub-cooled liquid water so it can be pumped back to the reactor by the condensate and feedwater pumps.
Basic components:
In the main condenser, the wet vapor turbine exhaust comes into contact with thousands of tubes that have much colder water flowing through them on the other side. The cooling water typically comes from a natural body of water such as a river or lake. Palo Verde Nuclear Generating Station, located in the desert about 97 kilometres (60 mi) west of Phoenix, Arizona, is the only nuclear facility that does not use a natural body of water for cooling; instead it uses treated sewage from the greater Phoenix metropolitan area. The water coming from the cooling body of water is either pumped back to the water source at a warmer temperature or returns to a cooling tower where it either cools for more uses or evaporates into water vapor that rises out the top of the tower. The water level in the steam generator and the nuclear reactor is controlled using the feedwater system. The feedwater pump has the task of taking the water from the condensate system, increasing the pressure and forcing it into either the steam generators—in the case of a pressurized water reactor — or directly into the reactor, for boiling water reactors.
Basic components:
Continuous power supply to the plant is critical to ensure safe operation. Most nuclear stations require at least two distinct sources of offsite power for redundancy. These are usually provided by multiple transformers that are sufficiently separated and can receive power from multiple transmission lines. In addition, in some nuclear stations, the turbine generator can power the station's loads while the station is online, without requiring external power. This is achieved via station service transformers which tap power from the generator output before they reach the step-up transformer.
Economics:
The economics of nuclear power plants is a controversial subject, and multibillion-dollar investments ride on the choice of an energy source. Nuclear power stations typically have high capital costs, but low direct fuel costs, with the costs of fuel extraction, processing, use and spent fuel storage internalized. Therefore, comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear stations. Cost estimates take into account station decommissioning and nuclear waste storage or recycling costs in the United States due to the Price Anderson Act.
Economics:
With the prospect that all spent nuclear fuel could potentially be recycled by using future reactors, generation IV reactors are being designed to completely close the nuclear fuel cycle. However, up to now, there has not been any actual bulk recycling of waste from a NPP, and on-site temporary storage is still being used at almost all plant sites due to construction problems for deep geological repositories. Only Finland has stable repository plans, therefore from a worldwide perspective, long-term waste storage costs are uncertain.
Economics:
Construction, or capital cost aside, measures to mitigate global warming such as a carbon tax or carbon emissions trading, increasingly favor the economics of nuclear power. Further efficiencies are hoped to be achieved through more advanced reactor designs, Generation III reactors promise to be at least 17% more fuel efficient, and have lower capital costs, while Generation IV reactors promise further gains in fuel efficiency and significant reductions in nuclear waste.
Economics:
In Eastern Europe, a number of long-established projects are struggling to find financing, notably Belene in Bulgaria and the additional reactors at Cernavodă in Romania, and some potential backers have pulled out. Where cheap gas is available and its future supply relatively secure, this also poses a major problem for nuclear projects. Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date all operating nuclear power stations were developed by state-owned or regulated utilities where many of the risks associated with construction costs, operating performance, fuel price, and other factors were borne by consumers rather than suppliers. Many countries have now liberalized the electricity market where these risks and the risk of cheaper competitors emerging before capital costs are recovered, are borne by station suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power stations. Following the 2011 Fukushima nuclear accident in Japan, costs are likely to go up for currently operating and new nuclear power stations, due to increased requirements for on-site spent fuel management and elevated design basis threats. However many designs, such as the currently under construction AP1000, use passive nuclear safety cooling systems, unlike those of Fukushima I which required active cooling systems, which largely eliminates the need to spend more on redundant back up safety equipment.
Economics:
According to the World Nuclear Association, as of March 2020: Nuclear power is cost competitive with other forms of electricity generation, except where there is direct access to low-cost fossil fuels.
Fuel costs for nuclear plants are a minor proportion of total generating costs, though capital costs are greater than those for coal-fired plants and much greater than those for gas-fired plants.
System costs for nuclear power (as well as coal and gas-fired generation) are very much lower than for intermittent renewables.
Providing incentives for long-term, high-capital investment in deregulated markets driven by short-term price signals presents a challenge in securing a diversified and reliable electricity supply system.
In assessing the economics of nuclear power, decommissioning and waste disposal costs are fully taken into account.
Nuclear power plant construction is typical of large infrastructure projects around the world, whose costs and delivery challenges tend to be under-estimated.
Safety and accidents:
Modern nuclear reactor designs have had numerous safety improvements since the first-generation nuclear reactors. A nuclear power plant cannot explode like a nuclear weapon because the fuel for uranium reactors is not enriched enough, and nuclear weapons require precision explosives to force fuel into a small enough volume to go supercritical. Most reactors require continuous temperature control to prevent a core meltdown, which has occurred on a few occasions through accident or natural disaster, releasing radiation and making the surrounding area uninhabitable. Plants must be defended against theft of nuclear material and attack by enemy military planes or missiles.The most serious accidents to date have been the 1979 Three Mile Island accident, the 1986 Chernobyl disaster, and the 2011 Fukushima Daiichi nuclear disaster, corresponding to the beginning of the operation of generation II reactors.
Safety and accidents:
Professor of sociology Charles Perrow states that multiple and unexpected failures are built into society's complex and tightly coupled nuclear reactor systems. Such accidents are unavoidable and cannot be designed around. An interdisciplinary team from MIT has estimated that given the expected growth of nuclear power from 2005 to 2055, at least four serious nuclear accidents would be expected in that period. The MIT study does not take into account improvements in safety since 1970.
Controversy:
The nuclear power debate about the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes peaked during the 1970s and 1980s, when it "reached an intensity unprecedented in the history of technology controversies," in some countries. Proponents argue that nuclear power is a sustainable energy source which reduces carbon emissions and can increase energy security if its use supplants a dependence on imported fuels. Proponents advance the notion that nuclear power produces virtually no air pollution, in contrast to the chief viable alternative of fossil fuel. Proponents also believe that nuclear power is the only viable course to achieve energy independence for most Western countries. They emphasize that the risks of storing waste are small and can be further reduced by using the latest technology in newer reactors, and the operational safety record in the Western world is excellent when compared to the other major kinds of power plants. Opponents say that nuclear power poses many threats to people and the environment, and that costs do not justify benefits. Threats include health risks and environmental damage from uranium mining, processing and transport, the risk of nuclear weapons proliferation or sabotage, and the problem of radioactive nuclear waste. Another environmental issue is discharge of hot water into the sea. The hot water modifies the environmental conditions for marine flora and fauna. They also contend that reactors themselves are enormously complex machines where many things can and do go wrong, and there have been many serious nuclear accidents. Critics do not believe that these risks can be reduced through new technology, despite rapid advancements in containment procedures and storage methods.
Controversy:
Opponents argue that when all the energy-intensive stages of the nuclear fuel chain are considered, from uranium mining to nuclear decommissioning, nuclear power is not a low-carbon electricity source despite the possibility of refinement and long-term storage being powered by a nuclear facility. Those countries that do not contain uranium mines cannot achieve energy independence through existing nuclear power technologies. Actual construction costs often exceed estimates, and spent fuel management costs are difficult to define. On 1 August 2020, the UAE launched the Arab region's first-ever nuclear energy plant. Unit 1 of the Barakah plant in the Al Dhafrah region of Abu Dhabi commenced generating heat on the first day of its launch, while the remaining 3 Units are being built. However, Nuclear Consulting Group head Paul Dorfman warned that the Gulf nation's investment in the plant risks "further destabilizing the volatile Gulf region, damaging the environment and raising the possibility of nuclear proliferation."
Reprocessing:
Nuclear reprocessing technology was developed to chemically separate and recover fissionable plutonium from irradiated nuclear fuel. Reprocessing serves multiple purposes, whose relative importance has changed over time. Originally reprocessing was used solely to extract plutonium for producing nuclear weapons. With the commercialization of nuclear power, the reprocessed plutonium was recycled back into MOX nuclear fuel for thermal reactors. The reprocessed uranium, which constitutes the bulk of the spent fuel material, can in principle also be re-used as fuel, but that is only economic when uranium prices are high or disposal is expensive. Finally, the breeder reactor can employ not only the recycled plutonium and uranium in spent fuel, but all the actinides, closing the nuclear fuel cycle and potentially multiplying the energy extracted from natural uranium by more than 60 times. Nuclear reprocessing reduces the volume of high-level waste, but by itself does not reduce radioactivity or heat generation and therefore does not eliminate the need for a geological waste repository. Reprocessing has been politically controversial because of the potential to contribute to nuclear proliferation, the potential vulnerability to nuclear terrorism, the political challenges of repository siting (a problem that applies equally to direct disposal of spent fuel), and because of its high cost compared to the once-through fuel cycle. In the United States, the Obama administration stepped back from President Bush's plans for commercial-scale reprocessing and reverted to a program focused on reprocessing-related scientific research.
Accident indemnification:
Nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with the Paris Convention on Third Party Liability in the Field of Nuclear Energy, the Brussels supplementary convention, and the Vienna Convention on Civil Liability for Nuclear Damage.
However, states with a majority of the world's nuclear power stations, including the U.S., Russia, China and Japan, are not party to international nuclear liability conventions.
Accident indemnification:
United States In the United States, insurance for nuclear or radiological incidents is covered (for facilities licensed through 2025) by the Price-Anderson Nuclear Industries Indemnity Act.
United Kingdom Under the energy policy of the United Kingdom through its 1965 Nuclear Installations Act, liability is governed for nuclear damage for which a UK nuclear licensee is responsible. The Act requires compensation to be paid for damage up to a limit of £150 million by the liable operator for ten years after the incident. Between ten and thirty years afterwards, the Government meets this obligation. The Government is also liable for additional limited cross-border liability (about £300 million) under international conventions (Paris Convention on Third Party Liability in the Field of Nuclear Energy and Brussels Convention supplementary to the Paris Convention).
Decommissioning:
Nuclear decommissioning is the dismantling of a nuclear power station and decontamination of the site to a state no longer requiring protection from radiation for the general public. The main difference from the dismantling of other power stations is the presence of radioactive material that requires special precautions to remove and safely relocate to a waste repository.
Decommissioning:
Decommissioning involves many administrative and technical actions. It includes all clean-up of radioactivity and progressive demolition of the station. Once a facility is decommissioned, there should no longer be any danger of a radioactive accident or to any persons visiting it. After a facility has been completely decommissioned it is released from regulatory control, and the licensee of the station no longer has responsibility for its nuclear safety.
Decommissioning:
Timing and deferral of decommissioning Generally speaking, nuclear stations were originally designed for a life of about 30 years. Newer stations are designed for a 40 to 60-year operating life. The Centurion Reactor is a future class of nuclear reactor that is being designed to last 100 years. One of the major limiting wear factors is the deterioration of the reactor's pressure vessel under the action of neutron bombardment; however, in 2018 Rosatom announced it had developed a thermal annealing technique for reactor pressure vessels which ameliorates radiation damage and extends service life by between 15 and 30 years.
Flexibility:
Nuclear stations are used primarily for base load because of economic considerations. The fuel cost of operations for a nuclear station is smaller than the fuel cost for operation of coal or gas plants. Since most of the cost of a nuclear power plant is capital cost, there is almost no cost saving by running it at less than full capacity. Nuclear power plants are routinely used in load following mode on a large scale in France, although "it is generally accepted that this is not an ideal economic situation for nuclear stations." Unit A at the decommissioned German Biblis Nuclear Power Plant was designed to modulate its output 15% per minute between 40% and 100% of its nominal power. Russia has led in the practical development of floating nuclear power stations, which can be transported to the desired location and occasionally relocated or moved for easier decommissioning. In 2022, the United States Department of Energy funded a three-year research study of offshore floating nuclear power generation. In October 2022, NuScale Power and Canadian company Prodigy announced a joint project to bring a North American small modular reactor based floating plant to market. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fibular collateral ligament**
Fibular collateral ligament:
The lateral collateral ligament (LCL, long external lateral ligament or fibular collateral ligament) is an extrinsic ligament of the knee located on the lateral side of the knee. Its superior attachment is at the lateral epicondyle of the femur (superoposterior to the popliteal groove); its inferior attachment is at the lateral aspect of the head of the fibula (anterior to the apex). The LCL is not fused with the joint capsule. Inferiorly, the LCL splits the tendon of insertion of the biceps femoris muscle.
Structure:
The LCL measures some 5 cm in length. It is rounded, and is more narrow and less broad compared to the medial collateral ligament. It extends obliquely inferoposteriorly from its superior attachment to its inferior attachment. In contrast to the medial collateral ligament, it is not fused with either the capsular ligament or the lateral meniscus. Because of this, the LCL is more flexible than its medial counterpart, and is therefore less susceptible to injury.
Structure:
Relations Immediately below its origin is the groove for the tendon of the popliteus. The greater part of its lateral surface is covered by the tendon of the biceps femoris; the tendon, however, divides at its insertion into two parts, which are separated by the ligament. Deep to the ligament are the tendon of the popliteus, and the inferior lateral genicular vessels and nerve.
Function:
Both collateral ligaments are taut when the knee joint is in extension. With the knee in flexion, the radius of curvature of the condyles is decreased, and the origins and insertions of the ligaments are brought closer together, making them lax. The pair of ligaments thus stabilize the knee joint in the coronal plane. Therefore, damage and rupture of these ligaments can be diagnosed by examining the knee's stability in the mediolateral axis.
Clinical significance:
Causes of injury The LCL is usually injured as a result of varus force across the knee, which is a force pushing the knee from the medial (inner) side of the joint, causing stress on the outside. An example of this would be a direct blow to the inside of the knee. The LCL can also be injured by a noncontact injury, such as a hyperextension stress, again causing varus force across the knee. An LCL injury usually occurs at the same time as injuries to the other ligaments of the knee. Multiple knee ligament tears and stresses can result from a significant trauma that includes direct blunt force to the knee, such as an automobile crash.
Clinical significance:
Symptoms Symptoms of a sprain or tear of the LCL include pain to the lateral aspect of the knee, instability of the knee when walking, swelling and ecchymosis (bruising) at the site of trauma. Direct trauma to the medial aspect of the knee may also affect the peroneal nerve, which could result in a foot drop or paresthesias below the knee, which could present as a tingling sensation.
Treatment:
An isolated LCL tear or sprain rarely requires surgery. If the injury is a Grade I or Grade II, microscopic or partial macroscopic tearing respectively, the injury is treated with rest and rehabilitation. Ice, electrical stimulation and elevation are all methods to reduce the pain and swelling felt in the initial stages after the injury takes place. Physical therapy focuses on regaining full range-of-motion, such as biking, stretching and careful applications of pressure on the joint. Full recovery of Grade I or Grade II tears should take between 6 weeks and 3 months. Continued pain, swelling and instability of the joint after this time period may require surgical repair or reconstruction of the ligament. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alkb homolog 3, alpha-ketoglutarate-dependent dioxygenase**
Alkb homolog 3, alpha-ketoglutarate-dependent dioxygenase:
AlkB homolog 3, alpha-ketoglutarate-dependent dioxygenase is a protein that in humans is encoded by the ALKBH3 gene.
Function:
The Escherichia coli AlkB protein protects against the cytotoxicity of methylating agents by repair of the specific DNA lesions generated in single-stranded DNA. ALKBH2 (MIM 610602) and ALKBH3 are E. coli AlkB homologs that catalyze the removal of 1-methyladenine and 3-methylcytosine (Duncan et al., 2002 [PubMed 12486230]). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lindsay M. De Biase**
Lindsay M. De Biase:
Lindsay M. De Biase is an American neuroscientist and glial biologist as well as an assistant professor at the David Geffen School of Medicine at the University of California, Los Angeles. De Biase explores the diversity of microglia that exist within the basal ganglia circuitry to one day target regional or circuit-specific microglia in disease. De Biase's graduate work highlighted the existence and roles of neuron-OPC synapses in development and her postdoctoral work was critical in showing that microglia are not homogenous within the brain parenchyma.
Early life and education:
De Biase pursued her undergraduate degree at Yale University in New Haven, Connecticut. De Biase majored in Cellular, Molecular, and Developmental Biology and received her Bachelor of Science in 2003. After completing her degree at Yale, De Biase worked as a research technician in the lab of Eric Hoffman at the Children's National Medical Center in Washington, D.C. De Biase explored gene expression changes in amyotrophic lateral sclerosis in addition to characterizing various immune cell states. Along with the Hoffman Lab, De Biase explored the gene expression of T helper cells. She found that TH2 cells express NKG2A and CD56 upon activation while TH1 cells do not. Following her time as a research technician, in 2005 De Biase pursued her graduate degree at Johns Hopkins School of Medicine. De Biase completed her graduate training in neuroscience under the mentorship of Dwight Bergles. She explored the synapses and signaling between neurons and oligodendrocyte precursor cells. De Biase first characterized the expression patterns and roles of NG2+ oligodendrocyte progenitors in the mouse brain. She found that these NG2+ cells, which go on to form oligodendrocytes later in development, have a unique early expression of voltage gated sodium channels and ionotropic glutamate receptors, and that they form synapses with glutamatergic neurons. She further found that these cells exhibited low amplitude spikes, but not action potentials, and that later in their development this spiking ability was lost, along with their synaptic input and glutamate receptors. Overall her early results showed that oligodendrocyte progenitors, through their glutamatergic synapses with neurons, are able to monitor neural activity early in development before transitioning into their oligodendrocyte identities. Next, De Biase explored the role of NMDARs on oligodendrocyte precursors (OPCs) in oligodendrocyte differentiation. When she knocked out NMDARs in OPCs, De Biase did not observe effects on differentiation or cell survival, but she did find significant changes in AMPAR expression, suggesting that NMDARs help to regulate AMPAR signalling with neighbouring axons in development. Overall, De Biase's graduate work highlighted the novel roles and functions of previously unknown OPC-neuron synapses in development.
Career and research:
De Biase completed her graduate training in 2011 and then pursued her postdoctoral work under the mentorship of Antonello Bonci at the National Institute on Drug Abuse. De Biase explored the diversity of microglial phenotypes across basal ganglia nuclei, which called into question prior hypotheses about the homogeneity of microglia in the central nervous system. Her discoveries in the Bonci Lab laid the foundation for her independent career and research program. In 2018, De Biase joined the faculty at the David Geffen School of Medicine at the University of California, Los Angeles. De Biase is an assistant professor in the Department of Physiology and is the principal investigator of the De Biase Lab. The research focus of the De Biase Lab is the development and functions of the diversity of microglia within the basal ganglia circuitry. Based on De Biase's postdoctoral work, microglia appear to be specialized to different brain regions and neural circuits, which allows for specific development of therapies for neurological and psychiatric diseases that target circuit-specific microglia based on their unique gene expression, functions, and roles in disease. The De Biase Lab uses high resolution imaging techniques, electrophysiological recordings of microglia, and gene expression analyses to probe microglia in their different circuits and states in the basal ganglia.
Career and research:
Diversity of microglia phenotypes Since microglia originate from the same yolk sac progenitors, it has been thought that microglia are homogenous within the brain parenchyma. In probing the roles of microglia in the basal ganglia, De Biase called this hypothesis into question. She found that the anatomical features, lysosomal content, membrane properties, and transcriptomes of microglia differ across the basal ganglia nuclei. These regional differences in microglia phenotype were established within the second postnatal week and are reinforced by local environmental cues. Her findings suggest a critical role for circuit-specific microglia contributing to the unique functions and roles of specific neural circuits in the brain. Following up this work in her own lab at UCLA, De Biase and her team explored the time course of microglia specialization and dynamics of microglia maturation across the mesolimbic neural circuits. They found that by the second postnatal week, microglia population numbers peak in the nucleus accumbens (NAc) and by the third postnatal week they peak in the ventral tegmental area (VTA). The surge of microglia population expansion occurs on postnatal day 8, only once the microglia have tiled the brain in an even fashion. Microglia then undergo cell death to reach adult brain levels. Lastly, De Biase and her group observed that the regional differences in microglia expression begin around postnatal day 8. Overall, their results showed that during this time of divergence in microglial identity, microglia may play a highly active role in shaping neural circuits.
Awards and honors:
2019 Glen Foundation and American Foundation for Aging Research Grant for Junior Faculty 2018 NARSAD Young Investigator Award 2017 NIDA Postdoctoral Fellow Mentoring Award 2017 NIDA Women's Science Advisory Committee, Excellence in Scientific Research Award 2014-2016 Fellows Award for Research Excellence, NIH 2009 Robert Goodman Scholars Award, Johns Hopkins School of Medicine | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hammar experiment**
Hammar experiment:
The Hammar experiment was an experiment designed and conducted by Gustaf Wilhelm Hammar (1935) to test the aether drag hypothesis. Its negative result refuted some specific aether drag models, and confirmed special relativity.
Overview:
Experiments such as the Michelson–Morley experiment of 1887 (and later other experiments such as the Trouton–Noble experiment in 1903 or the Trouton–Rankine experiment in 1908), presented evidence against the theory of a medium for light propagation known as the luminiferous aether; a theory that had been an established part of science for nearly one hundred years at the time. These results cast doubts on what was then a very central assumption of modern science, and later led to the development of special relativity.
Overview:
In an attempt to explain the results of the Michelson–Morley experiment in the context of the assumed medium, aether, many new hypotheses were examined. One of the proposals was that instead of passing through a static and unmoving aether, massive objects like the Earth may drag some of the aether along with them, making it impossible to detect a "wind". Oliver Lodge (1893–1897) was one of the first to perform a test of this theory by using rotating and massive lead blocks in an experiment that attempted to cause an asymmetrical aether wind. His tests yielded no appreciable results differing from previous tests for the aether wind. In the 1920s, Dayton Miller conducted repetitions of the Michelson–Morley experiments. He ultimately constructed an apparatus in such a way as to minimize the mass along the path of the experiment, conducting it at the peak of a tall hill in a building that was made of lightweight materials. He produced measurements showing a diurnal variance, suggesting detection of the "wind", which he ascribed to the lack of mass around his own apparatus, whereas previous experiments had been carried out with considerable mass around their apparatus.
The experiment:
To test Miller's assertion, Hammar conducted the following experiment using a common-path interferometer in 1935.
The experiment:
Using a half-silvered mirror A, he divided a ray of white light into two half-rays. One half-ray was sent in the transverse direction into a heavy walled steel pipe terminated with lead plugs. In this pipe, the ray was reflected by mirror D and sent into the longitudinal direction to another mirror C at the other end of the pipe. There it was reflected and sent in the transverse direction to a mirror B outside of the pipe. From B it traveled back to A in the longitudinal direction. The other half-ray traversed the same path in the opposite direction.
The experiment:
The topology of the light path was that of a Sagnac interferometer with an odd number of reflections. Sagnac interferometers offer excellent contrast and fringe stability, and the configuration with an odd number of reflections is only slightly less stable than the configuration with an even number of reflections. (With an odd number of reflections, the oppositely traveling beams are laterally inverted with respect to each other over most of the light path, so that the topology deviates slightly from strict common path.) The relative immunity of his apparatus to vibration, mechanical stress and temperature effects, allowed Hammar to detect fringe displacements as little as 1/10 of a fringe, despite using the interferometer outdoors in an open environment with no temperature control.
The experiment:
Similar to Lodge's experiment, Hammar's apparatus should have caused an asymmetry in any proposed aether wind. Hammar's expectation of the results was that: With the apparatus aligned perpendicular to the aether wind, both long arms would be equally affected by aether entrainment. With the apparatus aligned parallel to the aether wind, one arm would be more affected by aether entrainment than the other. The following expected propagation times for the counter-propagating rays were given by Robertson/Noonan:

$$t_1 = \frac{AB}{c+v} + \frac{BC+DA}{\sqrt{c^2-v^2}} + \frac{CD}{c-v+\Delta v}$$

$$t_2 = \frac{AB}{c-v} + \frac{BC+DA}{\sqrt{c^2-v^2}} + \frac{CD}{c+v-\Delta v}$$

where $\Delta v$ is the velocity of the entrained aether. This gives an expected time difference:

$$\Delta t \simeq \frac{2 l \Delta v}{c^2}$$

On September 1, 1934, Hammar set up the apparatus on top of a high hill two miles south of Moscow, Idaho, and made many observations with the apparatus turned in all directions of the azimuth during the daylight hours of September 1, 2, and 3. He saw no shift of the interference fringes, corresponding to an upper limit of 0.074 km/s. These results are considered a proof against the aether drag hypothesis as it was proposed by Miller.
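To give a sense of scale, the sketch below evaluates the Robertson/Noonan time difference and converts it to a fringe fraction. It is a minimal illustration in Python: the arm length, wavelength, and entrained-aether velocity used are assumed values for illustration, not Hammar's actual figures.

```python
# Illustrative evaluation of the expected fringe shift under aether
# entrainment, using the approximation delta_t ~= 2 * l * delta_v / c**2.
# All numerical values below are assumptions for illustration only.

C = 299_792_458.0  # speed of light in m/s

def fringe_shift(arm_length_m: float, delta_v_m_s: float, wavelength_m: float) -> float:
    """Return the expected fringe displacement as a fraction of one fringe."""
    delta_t = 2.0 * arm_length_m * delta_v_m_s / C**2   # expected time difference
    path_difference = C * delta_t                        # optical path difference
    return path_difference / wavelength_m                # fraction of a fringe

if __name__ == "__main__":
    # Assumed values: a ~1 m enclosed arm, green light, and a 0.074 km/s
    # entrained-aether velocity (the upper limit quoted in the text).
    shift = fringe_shift(arm_length_m=1.0, delta_v_m_s=74.0, wavelength_m=550e-9)
    print(f"Expected fringe shift: {shift:.3f} of a fringe")
```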
Consequences for Aether drag hypothesis:
Because differing ideas of "aether drag" existed, the interpretation of all aether drag experiments can be done in the context of each version of the hypothesis.
None or partial entrainment by any object with mass. This was discussed by scientists such as Augustin-Jean Fresnel and François Arago. It was refuted by the Michelson–Morley experiment.
Complete entrainment within or in the vicinity of all masses. It was refuted by the Aberration of light, Sagnac effect, Oliver Lodge's experiments, and Hammar's experiment.
Complete entrainment within or in the vicinity of only very large masses such as Earth. It was refuted by the Aberration of light and the Michelson–Gale–Pearson experiment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Timber hitch**
Timber hitch:
The timber hitch is a knot used to attach a single length of rope to a cylindrical object. Secure while tension is maintained, it is easily untied even after heavy loading. The timber hitch is a very old knot. It is first known to have been mentioned in a nautical source c. 1625 and illustrated in 1762.
Usage:
As the name suggests, this knot is often used by lumbermen and arborists for attaching ropes or chains to tree trunks, branches, and logs. For stability when towing or lowering long items, the addition of a half-hitch in front of the timber hitch creates a timber hitch and a half hitch, known as a killick hitch at sea. A killick is "a small anchor or weight for mooring a boat, sometimes consisting of a stone secured by pieces of wood". This can also prevent the timber hitch from rolling. The timber hitch is one of the few knots that can easily be tied in a chain, leading to its use in applications where ropes lack the necessary strength and would break under the same amount of tension.
Usage:
This knot is also known as the Bowyer's Knot, as it is used to attach the lower end of the bowstring to the bottom limb on an English longbow.The hitch is also one of the methods used to connect ukulele and classical guitar strings to the bridge of the instruments.
Tying:
To make the knot, pass the rope completely around the object. Pass the running end around the standing part, then through the loop just formed. Make three or more turns (or twists) around the working part. Pull on the standing part to tighten around the object.
A common error in tying can be avoided by assuring that the turns are made in the working part around itself. When making the hitch in laid rope, the turns should be made with the lay of the rope, that is, in the same direction as the twist of the rope.
Security:
Although The Ashley Book of Knots states that "three tucks or turns are ample", this work was written prior to the wide use of synthetic fiber cordage. Later sources suggest five or more turns may be required for full security in modern ropes. Nylon and polyester are much more slippery than natural fiber, and roughly twice as strong, leaving less surface for friction. The knot actually pictured is the better figure-8 timber hitch (#1669), which does not tuck immediately but rides over before tucking; Ashley states that one fewer tuck can then be used.
ABoK Context:
The timber hitches appear almost immediately in "CHAPTER 21: HITCHES TO SPAR AND RAIL (RIGHT-ANGLE PULL)", preceded there only by three half-hitch base forms. The chapter begins with the typical half hitch (#1662), marked with a skull-and-crossbones as a warning about its poor security and nip, but presented as a base structure to build on. It then shows the form with the most security, with the nip at the top opposing the linear load, as a safer half hitch (#1663), awarded an anchor icon if the pull is constant. The timber hitch concept (#1665) is then introduced as an extension of the tail of the poorly nipped half hitch (#1662). #1666 then shows the figure-8 concept as an upgrade to the half hitch (#1662), with the nip position pushed to halfway between the normal and top-nip half hitches; Ashley adds the geometric consideration of an even higher nip "particularly if the encompassed object is small." #1668 then shows the figure-8 timber hitch as an improvement, with the nip more to the side rather than at the bottom. The next refinement is #1669, the figure-8 hitch with a round turn, where the round turn is around the standing part and the figure-8 portion is actually pictured as a figure-8 timber hitch; Ashley adds that the "Round Turn on the Standing Part adds materially to the strength of the knot." The next chapter is "CHAPTER 22: HITCHES TO MASTS, RIGGING, AND CABLE (LENGTHWISE PULL)", which opens: "To withstand a lengthwise pull without slipping is about the most that can be asked of a hitch. Great care must be exercised in tying the following series of knots, and the impossible must not be expected". It starts off with a timber hitch preceded by a 'lengthwise' half-hitch form, converting the timber hitch from "RIGHT-ANGLE PULL" to "LENGTHWISE PULL" usage across the back-to-back chapters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Numbered-node cycle network**
Numbered-node cycle network:
The numbered-node cycle network (Dutch: fietsknooppuntennetwerk; German: Knotenpunktbezogene Wegweisung/Knotenpunktsystem für Radwanderern [formal] and Radeln nach Zahlen ["bike-by-numbers", informal]) is a wayfinding system. It spans the Netherlands, Belgium, parts of France and Germany, and parts of Croatia, and is expanding rapidly, as of 2017. Each intersection or node is given a number, and the numbers are signposted, so the cyclist always knows which way to go to get to the next node.
Numbered-node cycle network:
Numbers are not unique, but nodes with the same number are placed far apart, so that they can't be confused. To find a route, the cyclist uses a list of node numbers (the sequence of intersections they will pass through). The list is generated with a website, or a downloaded, roadside or paper map. Intersection numbers need little translation.
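As a rough illustration of how such a node list can be produced, the sketch below finds the shortest sequence of numbered nodes between a start and a destination with Dijkstra's algorithm. The node numbers, connections, and distances in it are invented for the example and are not taken from the real network.

```python
# Minimal sketch: planning a route through a numbered-node cycle network.
# The graph below is hypothetical; real planners use the actual network data.
import heapq

# Hypothetical network: node number -> {neighbour node: distance in km}
network = {
    20: {21: 2.5, 34: 4.0},
    21: {20: 2.5, 34: 1.8, 55: 3.2},
    34: {20: 4.0, 21: 1.8, 55: 2.1},
    55: {21: 3.2, 34: 2.1},
}

def node_list(start: int, goal: int) -> list[int]:
    """Return the sequence of node numbers forming the shortest route."""
    queue = [(0.0, start, [start])]   # (distance so far, current node, path)
    best = {}
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in best and best[node] <= dist:
            continue
        best[node] = dist
        for neighbour, d in network[node].items():
            heapq.heappush(queue, (dist + d, neighbour, path + [neighbour]))
    return []

print(node_list(20, 55))  # e.g. [20, 21, 55], the list a cyclist would follow
```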
Numbered-node cycle network:
Bike networks are, by nature, more distributed than car routes, with more junctions; they do not gather all cyclists onto arterial bike routes. The numbered-node network makes long-distance bike travel simpler (by making it harder to get lost), and faster (by making frequent stops to check a map needless). Areas on the numbered-node network cite substantial economic benefits, including revenues from increased bike tourism.
Numbered-node cycle network:
The numbered-node network is more flexible than previous signage systems, which only indicated long, pre-determined routes. The numbered-node network signage can be used to plan and follow any arbitrary route through the network. This makes for more flexible bicycle touring, and is more usable for utility cycling.
History:
The system was designed by the Belgian Hugo Bollen. Bollen worked as a mine engineer from 1971 to 1990, and then joined Regionaal Landschap Kempen en Maasland (RLKM). RLKM did not ask Bollen to design the scheme; he volunteered it. The idea of labelling each intersection was inspired by his annoyance at having to stop at each intersection to read the map, when out biking with his wife; he personally describes himself as more of a hiker than a biker. Rumours notwithstanding, the numbering was not inspired by a wayfinding system from the mines, nor by the London Underground. Bollen said in a 2017 interview that the choice was straight logic: he needed to label each intersection, and using town names would have caused chaos, and there weren't enough letters in the alphabet, so he used numbers. He wanted something short; he felt it was important that the signage not contain too much information. Initially Tourism Limburg did not have much faith in the scheme, saying that Limburgers didn't want to bike by numbers, but they came to support it. The first signage was installed in 1995, and the network has grown rapidly since. RLKM estimates that the network brings 16.5 million euros of revenue to Kempen (Campine) in Maasland annually. Bollen has said he was surprised by the system's success. The Flemish Prize for Merit in Sports was awarded to the system in 2009. The system won the Paul Mijksenaar Award for functional design in 2013.
History:
Areas The system was first introduced in the Netherlands in 1999, and by 2014, the entire Netherlands was part of the network. It was first introduced to Germany in 2001, and is being extended in multiple regions, including near the borders, in the Ruhrgebiet, Lower Rhine region, Sauerland and Siegerland (as of March 2021). The system is displacing more traditional national cycling route network signage: long, named routes, each individually signposted. In 2017–2021, the Netherlands reduced its LF-routes, amalgamating some of them. The ways themselves remained part of the numbered-node network. Belgium also reduced its named routes in 2012.
Use:
Paper numbered-node-network maps can be bought at tourist offices and some hotels and restaurants; paper-format maps are also available online. There are also many online route planners. OpenStreetMap, a Wikipedia-style map, has extensive information on the numbered-node network, available as downloadable maps and datasets under the Open Database License. Cyclists sometimes print the lists of node numbers and fasten them to their handlebars or front mudguard. They can thus refer to them and pick the right path without having to memorize numbers or stop.
Use:
Paper, downloaded or roadside maps simplify changing routes when plans, weather, etc. change. As with rest areas alongside car routes, bike numbered-node networks are designed and upgraded for access to roadside services, such as public toilets, accommodation, food and drink. This encourages tourism and is seen as a form of local economic development. These facilities are also mapped in some areas. The routes are not entirely on dedicated cycleways; some parts are on quiet roads. The routes are selected for being pleasant to cycle, and thus may not always be the shortest and fastest routes. While there is no formal international standard which routes must meet, routes are tested before they are added to the network, and there is an expectation that they will be more-or-less to Dutch standards for cycle routes. Points with a lot of cycle accidents may be removed from the network. Some points have official signpost stickers giving instructions for submitting comments or finding out more about the location. Some jurisdictions also advertise themed routes, or routes that parallel train routes, publishing a series of numbers that specify them.
Road signs:
Numbering In the Netherlands, one- and two-digit numbers are used. Nodes with the same number are placed far apart. Signs also name the network section and say which local authority maintains the network. Belgium uses two- and three-digit numbers, and they are assigned by the local maintaining organization instead of a central network organization. It is thus very rarely the case that two nearby nodes will have the same number.
Road signs:
In Germany, there is a one- to two-digit system; as in the Netherlands, it is organized at the national level (by the FGSV), but signs are usually implemented by local tourist boards.
Road signs:
Sign types Road signs vary depending on jurisdiction. At nodes, node signs give the direction of the adjacent nodes, and between nodes, the internode signs show cyclists heading for the next node which direction to go, reassuring them that they are on the right track (see images). Signs are bidirectional. Signage also includes maps at a variety of scales. Internode distances are given. The Netherlands and Belgium have signs set at or below cyclist eye-level, with minimal, large text (readable without slowing down). There is a principle in the Netherlands that cyclists should not be slowed or stopped; a constant speed is more comfortable and efficient, and makes for shorter travel times. In Germany, standard signage is set high on poles. German nodes are usually marked by a large red "Node-point-hat" ("Knotenpunkthut") sign on top of the pole, visible from a distance; just below, the directions of adjacent nodes are indicated in the wordier format of traditional pedestrian arrow signage (see images below). German node signage is usually white-on-red. Most nodes in Germany have wayside-map signs at eye level. The system has been described as ideal for those with no sense of direction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mangler Transformation**
Mangler Transformation:
Mangler transformation, also known as Mangler-Stepanov transformation (Stepanov 1947, Mangler 1948, Schlichting 1955), reduces the axisymmetric boundary layer equations to the plane boundary layer equations.
Mangler Transformation:
The transformation transforms the equations of the axisymmetric boundary layer with external velocity $U$ in terms of the original variables $x, y, u, v$ into the equations of the plane boundary layer with external velocity $\bar{U}$ in terms of the new variables $\bar{x}, \bar{y}, \bar{u}, \bar{v}$. The transformation is given by the formulas

$$\bar{x} = \frac{1}{L^2}\int_0^x r^2(x)\,\mathrm{d}x, \qquad \bar{y} = \frac{r(x)}{L}\,y, \qquad \bar{u} = u, \qquad \bar{v} = \frac{L}{r}\left(v + \frac{r'}{r}\,y\,u\right), \qquad \bar{U} = U,$$

where $L$ is a constant length and $r(x)$ is the distance from the point on the wall to the axis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
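A minimal numeric sketch of this change of variables follows (Python with NumPy); the body radius r(x) used there is an arbitrary example, and the grid resolution is chosen only for illustration.

```python
# Minimal numeric sketch of the Mangler transformation for an assumed body
# radius r(x); it maps axisymmetric boundary-layer variables (x, y, u, v)
# to the equivalent plane variables (x_bar, y_bar, u_bar, v_bar).
import numpy as np

L = 1.0  # constant reference length

def r(x):
    """Example body radius; an arbitrary choice for illustration."""
    return 0.1 + 0.05 * x

def r_prime(x, h=1e-6):
    """Numerical derivative r'(x)."""
    return (r(x + h) - r(x - h)) / (2.0 * h)

def mangler_transform(x, y, u, v):
    """Return (x_bar, y_bar, u_bar, v_bar) for scalar x and arrays y, u, v."""
    # x_bar = (1 / L^2) * integral_0^x r(s)^2 ds, evaluated numerically
    s = np.linspace(0.0, x, 1000)
    x_bar = np.trapz(r(s) ** 2, s) / L**2
    y_bar = r(x) / L * y
    u_bar = u
    v_bar = (L / r(x)) * (v + (r_prime(x) / r(x)) * y * u)
    return x_bar, y_bar, u_bar, v_bar

# Example: transform a few points across the boundary layer at x = 2.0
y = np.linspace(0.0, 0.01, 5)
u = np.linspace(0.0, 1.0, 5)      # illustrative streamwise velocity profile
v = np.zeros_like(y)              # illustrative wall-normal velocity
print(mangler_transform(2.0, y, u, v))
```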
**Pacinian corpuscle**
Pacinian corpuscle:
The Pacinian corpuscle, lamellar corpuscle or Vater-Pacini corpuscle is one of the four major types of mechanoreceptors (specialized nerve endings with adventitious tissue for mechanical sensation) found in mammalian skin. This type of mechanoreceptor is found in hairy and hairless skin, viscera, joints, and attached to the periosteum of bone, and is primarily responsible for sensitivity to vibration. A few of them are also sensitive to quasi-static or low frequency pressure stimuli. Most of them respond only to sudden disturbances and are especially sensitive to vibration of a few hundred Hz. The vibrational role may be used for detecting surface texture, e.g., rough vs. smooth. Most Pacinian corpuscles act as rapidly adapting mechanoreceptors. Groups of corpuscles respond to pressure changes, e.g. on grasping or releasing an object.
Structure:
Pacinian corpuscles are larger and fewer in number than Meissner's corpuscles, Merkel cells and Ruffini's corpuscles. The Pacinian corpuscle is approximately oval-cylindrical-shaped and 1 mm in length. The entire corpuscle is wrapped by a layer of connective tissue. Its capsule consists of 20 to 60 concentric lamellae (hence the alternative name lamellar corpuscle) including fibroblasts and fibrous connective tissue (mainly a Type IV and Type II collagen network), separated by gelatinous material, more than 92% of which is water. It presents a whorled pattern on micrographs.
Function:
Pacinian corpuscles are rapidly adapting (phasic) receptors that detect gross pressure changes and vibrations in the skin. Any deformation in the corpuscle leads to opening of pressure-sensitive, stretch-activated, or mechanosensitive ion channels present in the axon membrane or axolemma of the neurite inside the core of the corpuscle or end-organ. This initiates generation of the receptor potential inside the corpuscle, which is also secondarily supported by the voltage-activated ion channels present in the core of the corpuscle. Finally the receptor potential is converted to neural spikes or action potentials with the help of the opening of sodium ion channels present at the first node of Ranvier of the axon. These corpuscles are especially sensitive to vibrations, which they can sense even centimeters away. Their optimal sensitivity is 250 Hz, and this is the frequency range generated upon fingertips by textures made of features smaller than 1 µm. Pacinian corpuscles respond when the skin is rapidly indented but not when the pressure is steady, due to the layers of connective tissue that cover the nerve ending. It is thought that they respond to high-velocity changes in joint position. They have also been implicated in detecting the location of touch sensations on handheld tools. Pacinian corpuscles have a large receptive field on the skin's surface with an especially sensitive center.
Function:
Mechanism Pacinian corpuscles sense stimuli due to the deformation of their lamellae, which press on the membrane of the sensory neuron and cause it to bend or stretch. When the lamellae are deformed, due to either application or release of pressure, a generator or receptor potential is created as the deformation physically distorts the plasma membrane of the receptive area of the neuron, making it "leak" different cations through mechanosensitive channels, which initiates the receptor potential. This mechanotransduction process is also supported by distributed voltage-sensitive ion channels in the inner core and neurite of the corpuscle. Due to the generation of the receptor potential in the receptive area of the neurite (especially near the heminode or half-node of the axon), the potential at the first node of Ranvier can reach a certain threshold, triggering nerve impulses or action potentials at the first node of Ranvier. The first node of Ranvier of the myelinated section of the neurite is often found inside the capsule. This impulse is then transferred along the axon from node to node with the use of sodium channels and sodium/potassium pumps in the axon membrane.
Function:
Once the receptive area of the neurite is depolarized, it will depolarize the first node of Ranvier; however, as it is a rapidly adapting fibre, this does not carry on indefinitely, and the signal propagation ceases. This is a graded response, meaning that the greater the deformation, the greater the generator potential. This information is encoded in the frequency of impulses, since a bigger or faster deformation induces a higher impulse frequency. Action potentials are formed when the skin is rapidly distorted but not when pressure is continuous because of the mechanical filtering of the stimulus in the lamellar structure. The frequencies of the impulses decrease quickly and soon stop due to the relaxation of the inner layers of connective tissue that cover the nerve ending.
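The rate-sensitive, rapidly adapting behaviour described above can be caricatured with a very simple model in which the generator potential tracks the rate of change of indentation rather than its steady level. This is only an illustrative sketch, not a physiological model, and all constants in it are assumptions.

```python
# Toy illustration of rapid adaptation: the "generator potential" here follows
# the rate of change of skin indentation, so a sustained, constant indentation
# produces no ongoing response. All constants are arbitrary assumptions.
import numpy as np

dt = 1e-4                          # time step in seconds
t = np.arange(0.0, 0.2, dt)        # 200 ms of simulated time

# Stimulus: skin indentation ramps up, holds steady, then releases
indent = np.piecewise(
    t,
    [t < 0.05, (t >= 0.05) & (t < 0.15), t >= 0.15],
    [lambda t: t / 0.05, 1.0, 0.0],
)

gain = 1.0                         # assumed sensitivity (arbitrary units)
generator = gain * np.abs(np.gradient(indent, dt))   # responds to change only

threshold = 5.0
firing = generator > threshold     # crude spike criterion

print("response during ramp:   ", firing[(t > 0.01) & (t < 0.04)].any())   # True
print("response during hold:   ", firing[(t > 0.08) & (t < 0.14)].any())   # False
print("response at release:    ", firing[(t > 0.149) & (t < 0.152)].any()) # True
```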
Function:
Discovery Pacinian corpuscles were the first cellular sensory receptor ever observed. They were first reported by German anatomist and botanist Abraham Vater and his student Johannes Gottlieb Lehmann in 1741, but ultimately named after Italian anatomist Filippo Pacini, who rediscovered them in 1835. John Shekleton, a curator of the Royal College of Surgeons in Ireland, also discovered them before Pacini, but his results were published later. Similar to Pacinian corpuscles, Herbst corpuscles and Grandry corpuscles are found in bird species. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glycosuria**
Glycosuria:
Glycosuria is the excretion of glucose into the urine. Ordinarily, urine contains no glucose because the kidneys are able to reabsorb all of the filtered glucose from the tubular fluid back into the bloodstream. Glycosuria is nearly always caused by elevated blood glucose levels, most commonly due to untreated diabetes mellitus. Rarely, glycosuria is due to an intrinsic problem with glucose reabsorption within the kidneys (such as Fanconi syndrome), producing a condition termed renal glycosuria. Glycosuria leads to excessive water loss into the urine with resultant dehydration, a process called osmotic diuresis.
Glycosuria:
Alimentary glycosuria is a temporary condition: when a large amount of carbohydrate is ingested, it is rapidly absorbed in some cases, such as where part of the stomach has been surgically removed, and the excess glucose appears in the urine, producing glycosuria.
Additionally, SGLT2 inhibitor medications ("gliflozins" or "flozins") produce glycosuria as their primary mechanism of action, by inhibiting sodium/glucose cotransporter 2 in the kidneys and thereby interfering with renal glucose reabsorption.
Follow-up:
In a patient with glucosuria, diabetes is confirmed by measuring fasting or random plasma glucose and glycated hemoglobin (HbA1c).
Pathophysiology:
Blood is filtered by millions of nephrons, the functional units that comprise the kidneys. In each nephron, blood flows from the arteriole into the glomerulus, a tuft of leaky capillaries. The Bowman's capsule surrounds each glomerulus, and collects the filtrate that the glomerulus forms. The filtrate contains waste products (e.g. urea), electrolytes (e.g. sodium, potassium, chloride), amino acids, and glucose. The filtrate passes into the renal tubules of the kidney. In the first part of the renal tubule, the proximal tubule, glucose is reabsorbed from the filtrate, across the tubular epithelium and into the bloodstream. The proximal tubule can only reabsorb a limited amount of glucose (~375 mg/min), known as the transport maximum. When the blood glucose level exceeds about 160–180 mg/dL (8.9-10 mmol/L), the proximal tubule becomes overwhelmed and begins to excrete glucose in the urine.
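The overflow arithmetic above can be illustrated with a small sketch; the glomerular filtration rate used and the idealised all-or-nothing threshold are simplifying assumptions (real kidneys show "splay" around the transport maximum, which is why glucose appears in urine at plasma levels below Tm divided by GFR).

```python
# Simplified illustration of glucose overflow into urine once the filtered
# load exceeds the proximal tubule's transport maximum (Tm). Ignores "splay".
GFR_DL_PER_MIN = 1.25      # assumed glomerular filtration rate, ~125 mL/min
TM_MG_PER_MIN = 375.0      # transport maximum for glucose reabsorption

def urinary_glucose(plasma_glucose_mg_dl: float) -> float:
    """Return glucose excreted in urine (mg/min) for a given plasma level."""
    filtered_load = GFR_DL_PER_MIN * plasma_glucose_mg_dl   # mg/min filtered
    reabsorbed = min(filtered_load, TM_MG_PER_MIN)           # capped at Tm
    return filtered_load - reabsorbed

for glucose in (90, 180, 400, 600):   # plasma glucose in mg/dL
    print(f"plasma {glucose} mg/dL -> {urinary_glucose(glucose):.0f} mg/min in urine")
```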
Pathophysiology:
This point is called the renal threshold for glucose (RTG). Some people, especially children and pregnant women, may have a low RTG, so that glycosuria can occur at blood glucose levels below ~7 mmol/L.
If the RTG is so low that even normal blood glucose levels produce the condition, it is referred to as renal glycosuria.
Glucose in urine can be identified by Benedict's qualitative test.
If yeast is present in the bladder, the sugar in the urine may begin to ferment, producing a rare condition known as urinary auto-brewery syndrome. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Waelz process**
Waelz process:
The Waelz process is a method of recovering zinc and other relatively low boiling point metals from metallurgical waste (typically EAF flue dust) and other recycled materials using a rotary kiln (waelz kiln).
The zinc enriched product is referred to as waelz oxide, and the reduced zinc by product as waelz slag.
History and description:
The concept of using a rotary kiln for the recovery of zinc by volatilization dates to at least 1888. A process was patented by Edward Dedolph in 1910. Subsequently, the Dedolph patent was taken up and developed by Metallgesellschaft (Frankfurt) with Chemische Fabrik Griesheim-Elektron, but without leading to a production-scale process. In 1923 the Krupp Grusonwerk independently developed a process, named the Waelz process (from the German Waelzen, a reference to the motion of the materials in the kiln); the two German firms later collaborated and improved the process, marketing it under the name Waelz-Gemeinschaft (German for Waelz association). The process consists of treating zinc-containing material, in which zinc can be in the form of zinc oxide, zinc silicate, zinc ferrite or zinc sulphide, together with a carbon-containing reductant/fuel, within a rotary kiln at 1000 °C to 1500 °C. The kiln feed material, comprising zinc 'waste', fluxes and reductant (coke), is typically pelletized before addition to the kiln. The chemical process involves the reduction of zinc compounds to elemental zinc (boiling point 907 °C), which volatilises and oxidises in the vapour phase to zinc oxide. The zinc oxide is collected from the kiln outlet exhaust by filters, electrostatic precipitators, settling chambers, etc. Kiln size is typically 50 by 3.6 metres (164 by 12 ft) long / internal diameter, with a rotation speed of around 1 rpm. The recovered dust (Waelz oxide) is enriched in zinc oxide and is a feed product for zinc smelters; the zinc-reduced by-product is known as Waelz slag. Sub-optimal features of the process are high energy consumption and the lack of iron recovery (and an iron-rich slag). The process also captures other low-boiling metals in the Waelz oxide, including lead, cadmium and silver. Halogen compounds are also present in the product oxide. Increased use of galvanised steel has resulted in increased levels of zinc in steel scrap, which in turn leads to higher levels of zinc in electric arc furnace flue dusts. As of 2000, the Waelz process is considered to be a "best available technology" for flue dust zinc recovery, and the process is used at industrial scale worldwide. As of 2014, the Waelz process is the preferred or most widely used process for recovery of zinc from electric arc furnace dust (90%). Alternative production and experimental-scale zinc recovery processes include the rotary hearth treatment of pelletised zinc-containing dust (Kimitsu works, Nippon Steel); the SDHL (Saage, Dittrich, Hasche, Langbein) process, an efficiency modification of the Waelz process; the "DK process", a modified blast furnace process producing pig iron and zinc (oxide) dust from blast furnace dusts, sludges and other wastes; and the PRIMUS process (multi-stage zinc volatilisation furnace). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Magnitude (astronomy)**
Magnitude (astronomy):
In astronomy, magnitude is a measure of the brightness of an object, usually in a defined passband. An imprecise but systematic determination of the magnitude of objects was introduced in ancient times by Hipparchus.
Magnitude (astronomy):
Magnitude values do not have a unit. The scale is logarithmic and defined such that a magnitude 1 star is exactly 100 times brighter than a magnitude 6 star. Thus each step of one magnitude is $\sqrt[5]{100} \approx 2.512$ times brighter than the magnitude 1 higher. The brighter an object appears, the lower the value of its magnitude, with the brightest objects reaching negative values.
Magnitude (astronomy):
Astronomers use two different definitions of magnitude: apparent magnitude and absolute magnitude. The apparent magnitude (m) is the brightness of an object and depends on an object's intrinsic luminosity, its distance, and the extinction reducing its brightness. The absolute magnitude (M) describes the intrinsic luminosity emitted by an object and is defined to be equal to the apparent magnitude that the object would have if it were placed at a certain distance, 10 parsecs for stars. A more complex definition of absolute magnitude is used for planets and small Solar System bodies, based on their brightness at one astronomical unit from the observer and the Sun.
Magnitude (astronomy):
The Sun has an apparent magnitude of −27 and Sirius, the brightest visible star in the night sky, −1.46. Venus at its brightest is -5. The International Space Station (ISS) sometimes reaches a magnitude of −6.
Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. At a dark site it is usual for people to see stars of 6th magnitude or fainter.
Apparent magnitude is really a measure of illuminance, which can also be measured in photometric units such as lux.
History:
The Greek astronomer Hipparchus produced a catalogue which noted the apparent brightness of stars in the second century BCE. In the second century CE the Alexandrian astronomer Ptolemy classified stars on a six-point scale, and originated the term magnitude. To the unaided eye, a more prominent star such as Sirius or Arcturus appears larger than a less prominent star such as Mizar, which in turn appears larger than a truly faint star such as Alcor. In 1736, the mathematician John Keill described the ancient naked-eye magnitude system in this way: The fixed Stars appear to be of different Bignesses, not because they really are so, but because they are not all equally distant from us. Those that are nearest will excel in Lustre and Bigness; the more remote Stars will give a fainter Light, and appear smaller to the Eye. Hence arise the Distribution of Stars, according to their Order and Dignity, into Classes; the first Class containing those which are nearest to us, are called Stars of the first Magnitude; those that are next to them, are Stars of the second Magnitude ... and so forth, 'till we come to the Stars of the sixth Magnitude, which comprehend the smallest Stars that can be discerned with the bare Eye. For all the other Stars, which are only seen by the Help of a Telescope, and which are called Telescopical, are not reckoned among these six Orders. Altho' the Distinction of Stars into six Degrees of Magnitude is commonly received by Astronomers; yet we are not to judge, that every particular Star is exactly to be ranked according to a certain Bigness, which is one of the Six; but rather in reality there are almost as many Orders of Stars, as there are Stars, few of them being exactly of the same Bigness and Lustre. And even among those Stars which are reckoned of the brightest Class, there appears a Variety of Magnitude; for Sirius or Arcturus are each of them brighter than Aldebaran or the Bull's Eye, or even than the Star in Spica; and yet all these Stars are reckoned among the Stars of the first Order: And there are some Stars of such an intermedial Order, that the Astronomers have differed in classing of them; some putting the same Stars in one Class, others in another. For Example: The little Dog was by Tycho placed among the Stars of the second Magnitude, which Ptolemy reckoned among the Stars of the first Class: And therefore it is not truly either of the first or second Order, but ought to be ranked in a Place between both.
History:
Note that the brighter the star, the smaller the magnitude: Bright "first magnitude" stars are "1st-class" stars, while stars barely visible to the naked eye are "sixth magnitude" or "6th-class".
The system was a simple delineation of stellar brightness into six distinct groups but made no allowance for the variations in brightness within a group.
History:
Tycho Brahe attempted to directly measure the "bigness" of the stars in terms of angular size, which in theory meant that a star's magnitude could be determined by more than just the subjective judgment described in the above quote. He concluded that first magnitude stars measured 2 arc minutes (2′) in apparent diameter (1⁄30 of a degree, or 1⁄15 the diameter of the full moon), with second through sixth magnitude stars measuring 1+1⁄2′, 1+1⁄12′, 3⁄4′, 1⁄2′, and 1⁄3′, respectively. The development of the telescope showed that these large sizes were illusory—stars appeared much smaller through the telescope. However, early telescopes produced a spurious disk-like image of a star that was larger for brighter stars and smaller for fainter ones. Astronomers from Galileo to Jacques Cassini mistook these spurious disks for the physical bodies of stars, and thus into the eighteenth century continued to think of magnitude in terms of the physical size of a star. Johannes Hevelius produced a very precise table of star sizes measured telescopically, but now the measured diameters ranged from just over six seconds of arc for first magnitude down to just under 2 seconds for sixth magnitude. By the time of William Herschel astronomers recognized that the telescopic disks of stars were spurious and a function of the telescope as well as the brightness of the stars, but still spoke in terms of a star's size more than its brightness. Even well into the nineteenth century the magnitude system continued to be described in terms of six classes determined by apparent size, in which "There is no other rule for classing the stars but the estimation of the observer; and hence it is that some astronomers reckon those stars of the first magnitude which others esteem to be of the second."
History:
However, by the mid-nineteenth century astronomers had measured the distances to stars via stellar parallax, and so understood that stars are so far away as to essentially appear as point sources of light. Following advances in understanding the diffraction of light and astronomical seeing, astronomers fully understood both that the apparent sizes of stars were spurious and how those sizes depended on the intensity of light coming from a star (this is the star's apparent brightness, which can be measured in units such as watts per square metre) so that brighter stars appeared larger.
History:
Modern definition Early photometric measurements (made, for example, by using a light to project an artificial “star” into a telescope's field of view and adjusting it to match real stars in brightness) demonstrated that first magnitude stars are about 100 times brighter than sixth magnitude stars.
History:
Thus in 1856 Norman Pogson of Oxford proposed that a logarithmic scale of $\sqrt[5]{100} \approx 2.512$ be adopted between magnitudes, so five magnitude steps corresponded precisely to a factor of 100 in brightness. Every interval of one magnitude equates to a variation in brightness of $\sqrt[5]{100}$ or roughly 2.512 times. Consequently, a magnitude 1 star is about 2.5 times brighter than a magnitude 2 star, about $2.5^2$ times brighter than a magnitude 3 star, about $2.5^3$ times brighter than a magnitude 4 star, and so on.
History:
This is the modern magnitude system, which measures the brightness, not the apparent size, of stars. Using this logarithmic scale, it is possible for a star to be brighter than “first class”, so Arcturus or Vega are magnitude 0, and Sirius is magnitude −1.46.
Scale:
As mentioned above, the scale appears to work 'in reverse', with objects with a negative magnitude being brighter than those with a positive magnitude. The more negative the value, the brighter the object.
Objects appearing farther to the left on this line are brighter, while objects appearing farther to the right are dimmer. Thus zero appears in the middle, with the brightest objects on the far left, and the dimmest objects on the far right.
Apparent and absolute magnitude:
Two of the main types of magnitudes distinguished by astronomers are: Apparent magnitude, the brightness of an object as it appears in the night sky.
Apparent and absolute magnitude:
Absolute magnitude, which measures the luminosity of an object (or reflected light for non-luminous objects like asteroids); it is the object's apparent magnitude as seen from a specific distance, conventionally 10 parsecs (32.6 light years).The difference between these concepts can be seen by comparing two stars. Betelgeuse (apparent magnitude 0.5, absolute magnitude −5.8) appears slightly dimmer in the sky than Alpha Centauri A (apparent magnitude 0.0, absolute magnitude 4.4) even though it emits thousands of times more light, because Betelgeuse is much farther away.
Apparent and absolute magnitude:
Apparent magnitude Under the modern logarithmic magnitude scale, two objects, one of which is used as a reference or baseline, whose fluxes (i.e., brightness, a measure of power per unit area) in units such as watts per square metre (W m⁻²) are $F_1$ and $F_{\mathrm{ref}}$, will have magnitudes $m_1$ and $m_{\mathrm{ref}}$ related by

$$m_1 - m_{\mathrm{ref}} = -2.5 \log_{10}\!\left(\frac{F_1}{F_{\mathrm{ref}}}\right).$$
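A minimal sketch of this relation in code (the example numbers are arbitrary):

```python
# Convert a flux ratio to a magnitude difference and back, following
# m1 - m_ref = -2.5 * log10(F1 / F_ref). Example values are arbitrary.
import math

def magnitude_difference(flux, flux_ref):
    """Magnitude of an object relative to a reference, from their fluxes."""
    return -2.5 * math.log10(flux / flux_ref)

def flux_ratio(delta_magnitude):
    """Flux ratio corresponding to a magnitude difference."""
    return 10 ** (-0.4 * delta_magnitude)

print(magnitude_difference(100.0, 1.0))   # -5.0: 100x brighter is 5 mag lower
print(flux_ratio(1.0))                    # ~0.398: 1 mag fainter, ~2.512x dimmer
```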
Apparent and absolute magnitude:
Note that astronomers consistently use the term flux for what is often called intensity in physics, in order to avoid confusion with the specific intensity. Using this formula, the magnitude scale can be extended beyond the ancient magnitude 1–6 range, and it becomes a precise measure of brightness rather than simply a classification system. Astronomers now measure differences as small as one-hundredth of a magnitude. Stars that have magnitudes between 1.5 and 2.5 are called second-magnitude; there are some 20 stars brighter than 1.5, which are first-magnitude stars (see the list of brightest stars). For example, Sirius is magnitude −1.46, Arcturus is −0.04, Aldebaran is 0.85, Spica is 1.04, and Procyon is 0.34. Under the ancient magnitude system, all of these stars might have been classified as "stars of the first magnitude".
Apparent and absolute magnitude:
Magnitudes can also be calculated for objects far brighter than stars (such as the Sun and Moon), and for objects too faint for the human eye to see (such as Pluto).
Apparent and absolute magnitude:
Absolute magnitude Often, only apparent magnitude is mentioned since it can be measured directly. Absolute magnitude can be calculated from apparent magnitude and distance from:

$$M = m - 2.5 \log_{10}\!\left(\left(\frac{d}{10\,\mathrm{pc}}\right)^{2}\right) = m - 5\left(\log_{10} d_{\mathrm{pc}} - 1\right),$$

because intensity falls off proportionally to distance squared. This is known as the distance modulus, where d is the distance to the star measured in parsecs, m is the apparent magnitude, and M is the absolute magnitude.
Apparent and absolute magnitude:
If the line of sight between the object and observer is affected by extinction due to absorption of light by interstellar dust particles, then the object's apparent magnitude will be correspondingly fainter. For A magnitudes of extinction, the relationship between apparent and absolute magnitudes becomes

$$m - M = 5\left(\log_{10} d_{\mathrm{pc}} - 1\right) + A.$$
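A small sketch of the distance modulus with optional extinction (the example star is invented for illustration):

```python
# Absolute magnitude from apparent magnitude, distance and optional extinction,
# using m - M = 5*(log10(d_pc) - 1) + A. Example numbers are illustrative.
import math

def absolute_magnitude(apparent_mag, distance_pc, extinction=0.0):
    """Return M given apparent magnitude m, distance in parsecs, and A."""
    return apparent_mag - 5.0 * (math.log10(distance_pc) - 1.0) - extinction

# A star of apparent magnitude 8.0 at 250 pc with 0.3 mag of extinction:
print(round(absolute_magnitude(8.0, 250.0, 0.3), 2))
```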
Apparent and absolute magnitude:
Stellar absolute magnitudes are usually designated with a capital M with a subscript to indicate the passband. For example, MV is the magnitude at 10 parsecs in the V passband. A bolometric magnitude (Mbol) is an absolute magnitude adjusted to take account of radiation across all wavelengths; it is typically smaller (i.e. brighter) than an absolute magnitude in a particular passband, especially for very hot or very cool objects. Bolometric magnitudes are formally defined based on stellar luminosity in watts, and are normalised to be approximately equal to MV for yellow stars.
Apparent and absolute magnitude:
Absolute magnitudes for Solar System objects are frequently quoted based on a distance of 1 AU. These are referred to with a capital H symbol. Since these objects are lit primarily by reflected light from the Sun, an H magnitude is defined as the apparent magnitude of the object at 1 AU from the Sun and 1 AU from the observer.
Apparent and absolute magnitude:
Examples The following is a table giving apparent magnitudes for celestial objects and artificial satellites ranging from the Sun to the faintest object visible with the Hubble Space Telescope (HST): Other scales Under Pogson's system the star Vega was used as the fundamental reference star, with an apparent magnitude defined to be zero, regardless of measurement technique or wavelength filter. This is why objects brighter than Vega, such as Sirius (Vega magnitude of −1.46 or −1.5), have negative magnitudes. However, in the late twentieth century Vega was found to vary in brightness, making it unsuitable for an absolute reference, so the reference system was modernized to not depend on any particular star's stability. This is why the modern value for Vega's magnitude is close to, but no longer exactly, zero: it is 0.03 in the V (visual) band. Current absolute reference systems include the AB magnitude system, in which the reference is a source with a constant flux density per unit frequency, and the STMAG system, in which the reference source is instead defined to have constant flux density per unit wavelength.
Apparent and absolute magnitude:
Decibel Another logarithmic scale for intensity is the decibel. Although it is more commonly used for sound intensity, it is also used for light intensity. It is a parameter for photomultiplier tubes and similar camera optics for telescopes and microscopes. Each factor of 10 in intensity corresponds to 10 decibels. In particular, a multiplier of 100 in intensity corresponds to an increase of 20 decibels and also corresponds to a decrease in magnitude by 5. Generally, the change in decibels is related to a change in magnitude by $\Delta \mathrm{dB} = -4\,\Delta m$.
Apparent and absolute magnitude:
For example, an object that is 1 magnitude larger (fainter) than a reference would produce a signal that is 4 dB smaller (weaker) than the reference, which might need to be compensated by an increase in the capability of the camera by as many decibels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Texas Journal of Science**
Texas Journal of Science:
The Texas Journal of Science is a peer reviewed academic journal covering all areas of basic and applied sciences, as well as science education. It is published by the Texas Academy of Science. The journal is abstracted and indexed in BIOSIS Previews and The Zoological Record and was in previous years also covered by Scopus. It obtained its last impact factor of 0.113 in 2010, but its listing in the Journal Citation Reports has since been discontinued. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acute radiation syndrome**
Acute radiation syndrome:
Acute radiation syndrome (ARS), also known as radiation sickness or radiation poisoning, is a collection of health effects that are caused by being exposed to high amounts of ionizing radiation in a short period of time. Symptoms can start within an hour of exposure, and can last for several months. Early symptoms are usually nausea, vomiting and loss of appetite. In the following hours or weeks, initial symptoms may appear to improve, before the development of additional symptoms, after which either recovery or death follows. ARS involves a total dose of greater than 0.7 Gy (70 rad) that generally occurs from a source outside the body, delivered within a few minutes. Sources of such radiation can occur accidentally or intentionally. They may involve nuclear reactors, cyclotrons, certain devices used in cancer therapy, nuclear weapons, or radiological weapons. It is generally divided into three types: bone marrow, gastrointestinal, and neurovascular syndrome, with bone marrow syndrome occurring at 0.7 to 10 Gy, and neurovascular syndrome occurring at doses that exceed 50 Gy. The cells that are most affected are generally those that are rapidly dividing. At high doses, this causes DNA damage that may be irreparable. Diagnosis is based on a history of exposure and symptoms. Repeated complete blood counts (CBCs) can indicate the severity of exposure. Treatment of ARS is generally supportive care. This may include blood transfusions, antibiotics, colony-stimulating factors, or stem cell transplant. Radioactive material remaining on the skin or in the stomach should be removed. If radioiodine was inhaled or ingested, potassium iodide is recommended. Complications such as leukemia and other cancers among those who survive are managed as usual. Short-term outcomes depend on the dose of exposure. ARS is generally rare. A single event can affect a large number of people, as happened in the atomic bombing of Hiroshima and Nagasaki and the Chernobyl nuclear power plant disaster. ARS differs from chronic radiation syndrome, which occurs following prolonged exposures to relatively low doses of radiation.
Signs and symptoms:
Classically, ARS is divided into three main presentations: hematopoietic, gastrointestinal, and neurovascular. These syndromes may be preceded by a prodrome. The speed of symptom onset is related to radiation exposure, with greater doses resulting in a shorter delay in symptom onset. These presentations presume whole-body exposure, and many of them are markers that are invalid if the entire body has not been exposed. Each syndrome requires that the tissue showing the syndrome itself be exposed (e.g., gastrointestinal syndrome is not seen if the stomach and intestines are not exposed to radiation). Some areas affected are: Hematopoietic. This syndrome is marked by a drop in the number of blood cells, called aplastic anemia. This may result in infections, due to a low number of white blood cells, bleeding, due to a lack of platelets, and anemia, due to too few red blood cells in circulation. These changes can be detected by blood tests after receiving a whole-body acute dose as low as 0.25 grays (25 rad), though they might never be felt by the patient if the dose is below 1 gray (100 rad). Conventional trauma and burns resulting from a bomb blast are complicated by the poor wound healing caused by hematopoietic syndrome, increasing mortality.
Signs and symptoms:
Gastrointestinal. This syndrome often follows absorbed doses of 6–30 grays (600–3,000 rad). The signs and symptoms of this form of radiation injury include nausea, vomiting, loss of appetite, and abdominal pain. Vomiting in this time-frame is a marker for whole-body exposures that are in the fatal range above 4 grays (400 rad). Without exotic treatment such as bone marrow transplant, death at this dose is common, generally due more to infection than to gastrointestinal dysfunction.
Signs and symptoms:
Neurovascular. This syndrome typically occurs at absorbed doses greater than 30 grays (3,000 rad), though it may occur at doses as low as 10 grays (1,000 rad). It presents with neurological symptoms such as dizziness, headache, or decreased level of consciousness, occurring within minutes to a few hours, with an absence of vomiting, and is almost always fatal, even with aggressive intensive care. Early symptoms of ARS typically include nausea, vomiting, headaches, fatigue, fever, and a short period of skin reddening. These symptoms may occur at radiation doses as low as 0.35 grays (35 rad). These symptoms are common to many illnesses, and may not, by themselves, indicate acute radiation sickness.
Signs and symptoms:
Dose effects A similar table and description of symptoms (given in rems, where 100 rem = 1 Sv), derived from data from the effects on humans subjected to the atomic bombings of Hiroshima and Nagasaki, the indigenous peoples of the Marshall Islands subjected to the Castle Bravo thermonuclear bomb, animal studies and lab experiment accidents, have been compiled by the U.S. Department of Defense. A person who was less than 1 mile (1.6 km) from the atomic bomb Little Boy's hypocenter at Hiroshima, Japan, was found to absorb about 9.46 grays (Gy) of ionizing radiation. The doses at the hypocenters of the Hiroshima and Nagasaki atomic bombings were 240 and 290 Gy, respectively.
Signs and symptoms:
Skin changes Cutaneous radiation syndrome (CRS) refers to the skin symptoms of radiation exposure. Within a few hours after irradiation, a transient and inconsistent redness (associated with itching) can occur. Then, a latent phase may occur and last from a few days up to several weeks, when intense reddening, blistering, and ulceration of the irradiated site is visible. In most cases, healing occurs by regenerative means; however, very large skin doses can cause permanent hair loss, damaged sebaceous and sweat glands, atrophy, fibrosis (mostly keloids), decreased or increased skin pigmentation, and ulceration or necrosis of the exposed tissue. As seen at Chernobyl, when skin is irradiated with high-energy beta particles, moist desquamation (peeling of skin) and similar early effects can heal, only to be followed by the collapse of the dermal vascular system after two months, resulting in the loss of the full thickness of the exposed skin. Another example of skin loss caused by a high-level radiation exposure occurred during the 1999 Tokaimura nuclear accident, in which technician Hisashi Ouchi lost most of his skin because of the high dose of radiation he absorbed. This effect had been demonstrated previously with pig skin using high-energy beta sources at the Churchill Hospital Research Institute, in Oxford.
Cause:
ARS is caused by exposure to a large dose of ionizing radiation (> ~0.1 Gy) over a short period of time (> ~0.1 Gy/h). Alpha and beta radiation have low penetrating power and are unlikely to affect vital internal organs from outside the body. Any type of ionizing radiation can cause burns, but alpha and beta radiation can only do so if radioactive contamination or nuclear fallout is deposited on the individual's skin or clothing. Gamma and neutron radiation can travel much greater distances and penetrate the body easily, so whole-body irradiation generally causes ARS before skin effects are evident. Local gamma irradiation can cause skin effects without any sickness. In the early twentieth century, radiographers would commonly calibrate their machines by irradiating their own hands and measuring the time to onset of erythema.
Cause:
Accidental Accidental exposure may be the result of a criticality or radiotherapy accident. There have been numerous criticality accidents dating back to atomic testing during World War II, while computer-controlled radiation therapy machines such as the Therac-25 have played a major part in radiotherapy accidents. The latter are caused by failures in the equipment software used to monitor the radiation dose given. Human error has played a large part in accidental exposure incidents, including some of the criticality accidents, and larger-scale events such as the Chernobyl disaster. Other events have to do with orphan sources, in which radioactive material is unknowingly kept, sold, or stolen. The Goiânia accident is an example, where a forgotten radioactive source was taken from a hospital, resulting in the deaths of four people from ARS. Theft and attempted theft of radioactive material by unwitting thieves has also led to lethal exposure in at least one incident. Exposure may also come from routine spaceflight and from solar flares that result in radiation effects on Earth in the form of solar storms. During spaceflight, astronauts are exposed to both galactic cosmic radiation (GCR) and solar particle event (SPE) radiation. The exposure particularly occurs during flights beyond low Earth orbit (LEO). Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts. GCR levels that might lead to acute radiation poisoning are less well understood. The latter cause is rarer, with an event possibly occurring during the solar storm of 1859.
Cause:
Intentional Intentional exposure is controversial as it involves the use of nuclear weapons, human experiments, or is given to a victim in an act of murder. The intentional atomic bombings of Hiroshima and Nagasaki resulted in tens of thousands of casualties; the survivors of these bombings are known today as Hibakusha. Nuclear weapons emit large amounts of thermal radiation as visible, infrared, and ultraviolet light, to which the atmosphere is largely transparent. This event is also known as "Flash", where radiant heat and light are bombarded into any given victim's exposed skin, causing radiation burns. Death is highly likely, and radiation poisoning is almost certain if one is caught in the open with no terrain or building masking-effects within a radius of 0–3 km from a 1 megaton airburst. The 50% chance of death from the blast extends out to ~8 km from a 1 megaton atmospheric explosion. Scientific testing on humans within the United States occurred extensively throughout the atomic age. Experiments took place on a range of subjects including, but not limited to, the disabled, children, soldiers, and incarcerated persons, with the level of understanding and consent given by subjects varying from complete to none. Since 1997 there have been requirements for patients to give informed consent, and to be notified if experiments were classified. Across the world, the Soviet nuclear program involved human experiments on a large scale, which is still kept secret by the Russian government and the Rosatom agency. The human experiments that fall under intentional ARS exclude those that involved long-term exposure. Criminal activity has involved murder and attempted murder carried out through abrupt victim contact with a radioactive substance such as polonium or plutonium.
Pathophysiology:
The most commonly used predictor of ARS is the whole-body absorbed dose. Several related quantities, such as the equivalent dose, effective dose, and committed dose, are used to gauge long-term stochastic biological effects such as cancer incidence, but they are not designed to evaluate ARS. To help avoid confusion between these quantities, absorbed dose is measured in units of grays (in SI, unit symbol Gy) or rads (in CGS), while the others are measured in sieverts (in SI, unit symbol Sv) or rems (in CGS). 1 rad = 0.01 Gy and 1 rem = 0.01 Sv. In most of the acute exposure scenarios that lead to radiation sickness, the bulk of the radiation is external whole-body gamma, in which case the absorbed, equivalent, and effective doses are all equal. There are exceptions, such as the Therac-25 accidents and the 1958 Cecil Kelley criticality accident, where the absorbed doses in Gy or rad are the only useful quantities, because of the targeted nature of the exposure to the body.
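As a minimal illustration of the unit relationships stated above (1 rad = 0.01 Gy, 1 rem = 0.01 Sv), the following C sketch converts hypothetical CGS values to their SI equivalents; the numeric inputs are placeholders, not measured doses.

```c
#include <stdio.h>

/* Conversion factors stated above: 1 rad = 0.01 Gy, 1 rem = 0.01 Sv. */
static double rad_to_gray(double rad)    { return rad * 0.01; }
static double rem_to_sievert(double rem) { return rem * 0.01; }

int main(void) {
    double dose_rad = 400.0;   /* hypothetical absorbed dose in rad */
    double dose_rem = 400.0;   /* hypothetical equivalent dose in rem */
    printf("%.0f rad = %.1f Gy\n", dose_rad, rad_to_gray(dose_rad));
    printf("%.0f rem = %.1f Sv\n", dose_rem, rem_to_sievert(dose_rem));
    return 0;
}
```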
Pathophysiology:
Radiotherapy treatments are typically prescribed in terms of the local absorbed dose, which might be 60 Gy or higher. The dose is fractionated to about 2 Gy per day for "curative" treatment, which allows normal tissues to undergo repair, allowing them to tolerate a higher dose than would otherwise be expected. The dose to the targeted tissue mass must be averaged over the entire body mass, most of which receives negligible radiation, to arrive at a whole-body absorbed dose that can be compared to the table above.
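The whole-body averaging described above can be sketched as a simple mass-weighted calculation, assuming (as the text notes) that tissue outside the target receives negligible dose; the target and body masses below are hypothetical values chosen only for illustration.

```c
#include <stdio.h>

/* Mass-weighted whole-body average of a local radiotherapy dose,
 * assuming tissue outside the target receives negligible dose.
 * All numbers are illustrative placeholders. */
int main(void) {
    double local_dose_gy  = 60.0;  /* prescribed local dose, as in the text */
    double target_mass_kg = 1.0;   /* hypothetical irradiated tissue mass */
    double body_mass_kg   = 70.0;  /* hypothetical patient mass */
    double whole_body_gy  = local_dose_gy * target_mass_kg / body_mass_kg;
    printf("Approximate whole-body averaged dose: %.2f Gy\n", whole_body_gy);
    return 0;
}
```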
Pathophysiology:
DNA damage Exposure to high doses of radiation causes DNA damage, later creating serious and even lethal chromosomal aberrations if left unrepaired. Ionizing radiation can produce reactive oxygen species, and does directly damage cells by causing localized ionization events. The former is very damaging to DNA, while the latter events create clusters of DNA damage. This damage includes loss of nucleobases and breakage of the sugar-phosphate backbone that binds to the nucleobases. The DNA organization at the level of histones, nucleosomes, and chromatin also affects its susceptibility to radiation damage. Clustered damage, defined as at least two lesions within a helical turn, is especially harmful. While DNA damage happens frequently and naturally in the cell from endogenous sources, clustered damage is a unique effect of radiation exposure. Clustered damage takes longer to repair than isolated breakages, and is less likely to be repaired at all. Larger radiation doses are more prone to cause tighter clustering of damage, and closely localized damage is increasingly less likely to be repaired. Somatic mutations cannot be passed down from parent to offspring, but these mutations can propagate in cell lines within an organism. Radiation damage can also cause chromosome and chromatid aberrations, and their effects depend on which stage of the mitotic cycle the cell is in when the irradiation occurs. If the cell is in interphase, while it is still a single strand of chromatin, the damage will be replicated during the S phase of the cell cycle, and there will be a break on both chromosome arms; the damage then will be apparent in both daughter cells. If the irradiation occurs after replication, only one arm will bear the damage; this damage will be apparent in only one daughter cell. A damaged chromosome may cyclize, binding to another chromosome, or to itself.
Diagnosis:
Diagnosis is typically made based on a history of significant radiation exposure and suitable clinical findings. An absolute lymphocyte count can give a rough estimate of radiation exposure. Time from exposure to vomiting can also give estimates of exposure levels if they are less than 10 grays (1,000 rad).
Prevention:
A guiding principle of radiation safety is as low as reasonably achievable (ALARA). This means trying to avoid exposure as much as possible, and it comprises three components: time, distance, and shielding.
Prevention:
Time The longer that humans are subjected to radiation the larger the dose will be. The advice in the nuclear war manual entitled Nuclear War Survival Skills published by Cresson Kearny in the U.S. was that if one needed to leave the shelter then this should be done as rapidly as possible to minimize exposure. In chapter 12, he states that "[q]uickly putting or dumping wastes outside is not hazardous once fallout is no longer being deposited. For example, assume the shelter is in an area of heavy fallout and the dose rate outside is 400 roentgen (R) per hour, enough to give a potentially fatal dose in about an hour to a person exposed in the open. If a person needs to be exposed for only 10 seconds to dump a bucket, in this 1/360 of an hour he will receive a dose of only about 1 R. Under war conditions, an additional 1-R dose is of little concern." In peacetime, radiation workers are taught to work as quickly as possible when performing a task that exposes them to radiation. For instance, the recovery of a radioactive source should be done as quickly as possible.
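Kearny's figures follow from the simple relation dose = dose rate × time. A small C sketch reproducing the arithmetic quoted above:

```c
#include <stdio.h>

/* Reproduces the arithmetic in Kearny's example: dose = dose rate x time. */
int main(void) {
    double rate_r_per_h = 400.0;   /* outside dose rate from the example */
    double exposure_s   = 10.0;    /* time spent outside the shelter */
    double dose_r = rate_r_per_h * (exposure_s / 3600.0);
    printf("Dose received: about %.1f R\n", dose_r);  /* roughly 1.1 R */
    return 0;
}
```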
Prevention:
Shielding Matter attenuates radiation in most cases, so placing any mass (e.g., lead, dirt, sandbags, vehicles, water, even air) between humans and the source will reduce the radiation dose. This is not always the case, however; care should be taken when constructing shielding for a specific purpose. For example, although high atomic number materials are very effective in shielding photons, using them to shield beta particles may cause higher radiation exposure due to the production of bremsstrahlung x-rays, and hence low atomic number materials are recommended. Also, using material with a high neutron activation cross section to shield neutrons will result in the shielding material itself becoming radioactive and hence more dangerous than if it were not present. There are many types of shielding strategies that can be used to reduce the effects of radiation exposure. Internal contamination protective equipment such as respirators are used to prevent internal deposition as a result of inhalation and ingestion of radioactive material. Dermal protective equipment, which protects against external contamination, provides shielding to prevent radioactive material from being deposited on external structures. While these protective measures do provide a barrier from radioactive material deposition, they do not shield from externally penetrating gamma radiation. This leaves anyone exposed to penetrating gamma rays at high risk of ARS.
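Photon shielding is commonly described, to first order, by exponential attenuation, I = I0·exp(−μx). This model is not spelled out in the text above, and the attenuation coefficient in the sketch below is a hypothetical placeholder rather than a value for any particular material or photon energy.

```c
#include <stdio.h>
#include <math.h>

/* First-order exponential attenuation model for photon shielding:
 * I = I0 * exp(-mu * x). The coefficient is a hypothetical placeholder,
 * not a tabulated value for any specific shielding material. */
int main(void) {
    double mu_per_cm    = 0.6;   /* hypothetical linear attenuation coefficient */
    double thickness_cm = 5.0;   /* shield thickness */
    double fraction = exp(-mu_per_cm * thickness_cm);
    printf("Transmitted fraction through %.1f cm: %.3f\n", thickness_cm, fraction);
    return 0;
}
```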
Prevention:
Naturally, shielding the entire body from high energy gamma radiation is optimal, but the required mass to provide adequate attenuation makes functional movement nearly impossible. In the event of a radiation catastrophe, medical and security personnel need mobile protection equipment in order to safely assist in containment, evacuation, and many other necessary public safety objectives.
Prevention:
Research has been done exploring the feasibility of partial body shielding, a radiation protection strategy that provides adequate attenuation to only the most radio-sensitive organs and tissues inside the body. Irreversible stem cell damage in the bone marrow is the first life-threatening effect of intense radiation exposure, and the bone marrow is therefore one of the most important bodily elements to protect. Due to the regenerative property of hematopoietic stem cells, it is only necessary to protect enough bone marrow to repopulate the exposed areas of the body with the shielded supply. This concept allows for the development of lightweight mobile radiation protection equipment, which provides adequate protection, deferring the onset of ARS to much higher exposure doses. One example of such equipment is the 360 gamma, a radiation protection belt that applies selective shielding to protect the bone marrow stored in the pelvic area as well as other radio-sensitive organs in the abdominal region without hindering functional mobility.
Prevention:
More information on bone marrow shielding can be found in the Health Physics (Radiation Safety Journal) article: Waterman, Gideon; Kase, Kenneth; Orion, Itzhak; Broisman, Andrey; Milstein, Oren (September 2017). "Selective Shielding of Bone Marrow: An Approach to Protecting Humans from External Gamma Radiation". Health Physics. 113 (3): 195–208. doi:10.1097/HP.0000000000000688. PMID 28749810. S2CID 3300412. See also the Organisation for Economic Co-operation and Development (OECD) and Nuclear Energy Agency (NEA) 2015 report "Occupational Radiation Protection in Severe Accident Management" (PDF).
Prevention:
Reduction of incorporation Where radioactive contamination is present, an elastomeric respirator, dust mask, or good hygiene practices may offer protection, depending on the nature of the contaminant. Potassium iodide (KI) tablets can reduce the risk of cancer in some situations due to slower uptake of ambient radioiodine. Although they do not protect any organ other than the thyroid gland, their effectiveness is still highly dependent on the time of ingestion; a dose protects the gland for a twenty-four-hour period. They do not prevent ARS as they provide no shielding from other environmental radionuclides.
Prevention:
Fractionation of dose If an intentional dose is broken up into a number of smaller doses, with time allowed for recovery between irradiations, the same total dose causes less cell death. Even without interruptions, a reduction in dose rate below 0.1 Gy/h also tends to reduce cell death. This technique is routinely used in radiotherapy. The human body contains many types of cells and a human can be killed by the loss of a single type of cell in a vital organ. For many short-term radiation deaths (3–30 days), the loss of two important types of cells that are constantly being regenerated causes death. The loss of cells forming blood cells (bone marrow) and the cells in the digestive system (microvilli, which form part of the wall of the intestines) is fatal.
Management:
Treatment usually involves supportive care with possible symptomatic measures employed. The former involves the possible use of antibiotics, blood products, colony stimulating factors, and stem cell transplant.
Management:
Antimicrobials There is a direct relationship between the degree of the neutropenia that emerges after exposure to radiation and the increased risk of developing infection. Since there are no controlled studies of therapeutic intervention in humans, most of the current recommendations are based on animal research. The treatment of established or suspected infection following exposure to radiation (characterized by neutropenia and fever) is similar to the one used for other febrile neutropenic patients. However, important differences between the two conditions exist. Individuals that develop neutropenia after exposure to radiation are also susceptible to irradiation damage in other tissues, such as the gastrointestinal tract, lungs and central nervous system. These patients may require therapeutic interventions not needed in other types of neutropenic patients. The response of irradiated animals to antimicrobial therapy can be unpredictable, as was evident in experimental studies where metronidazole and pefloxacin therapies were detrimental.
Management:
Antimicrobials that reduce the number of the strict anaerobic component of the gut flora (i.e., metronidazole) generally should not be given because they may enhance systemic infection by aerobic or facultative bacteria, thus facilitating mortality after irradiation. An empirical regimen of antimicrobials should be chosen based on the pattern of bacterial susceptibility and nosocomial infections in the affected area and medical center and the degree of neutropenia. Broad-spectrum empirical therapy (see below for choices) with high doses of one or more antibiotics should be initiated at the onset of fever. These antimicrobials should be directed at the eradication of Gram-negative aerobic bacilli (i.e., Enterobacteriaceae, Pseudomonas) that account for more than three quarters of the isolates causing sepsis. Because aerobic and facultative Gram-positive bacteria (mostly alpha-hemolytic streptococci) cause sepsis in about a quarter of the victims, coverage for these organisms may also be needed. A standardized management plan for people with neutropenia and fever should be devised. Empirical regimens contain antibiotics broadly active against Gram-negative aerobic bacteria (quinolones: i.e., ciprofloxacin, levofloxacin, a third- or fourth-generation cephalosporin with pseudomonal coverage: e.g., cefepime, ceftazidime, or an aminoglycoside: i.e. gentamicin, amikacin).
Prognosis:
The prognosis for ARS is dependent on the exposure dose, with anything above 8 Gy being almost always lethal, even with medical care. Radiation burns from lower-level exposures usually manifest after 2 months, while reactions from the burns occur months to years after radiation treatment. Complications from ARS include an increased risk of developing radiation-induced cancer later in life. According to the controversial but commonly applied linear no-threshold model, any exposure to ionizing radiation, even at doses too low to produce any symptoms of radiation sickness, can induce cancer due to cellular and genetic damage. The probability of developing cancer is a linear function with respect to the effective radiation dose. Radiation cancer may occur after ionizing radiation exposure following a latent period averaging 20 to 40 years.
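The linear no-threshold relationship described above can be sketched as an excess risk proportional to effective dose; the risk coefficient used below is purely a hypothetical placeholder for illustration, not an endorsed or measured value.

```c
#include <stdio.h>

/* Sketch of the linear no-threshold assumption described above:
 * modelled excess cancer risk proportional to effective dose.
 * The risk coefficient is a hypothetical placeholder only. */
int main(void) {
    double risk_per_sv = 0.05;   /* hypothetical excess risk per sievert */
    double dose_sv     = 0.1;    /* hypothetical effective dose */
    printf("Modelled excess lifetime risk: %.3f\n", risk_per_sv * dose_sv);
    return 0;
}
```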
History:
Acute effects of ionizing radiation were first observed when Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed and eventually healed, and misattributed them to ozone. Röntgen believed that ozone, a free radical produced in air by X-rays, was the cause, but other free radicals produced within the body are now understood to be more important. David Walsh first established the symptoms of radiation sickness in 1897. Ingestion of radioactive materials caused many radiation-induced cancers in the 1930s, but no one was exposed to high enough doses at high enough rates to bring on ARS.
History:
The atomic bombings of Hiroshima and Nagasaki resulted in high acute doses of radiation to a large number of Japanese people, allowing for greater insight into its symptoms and dangers. Red Cross Hospital Surgeon Terufumi Sasaki led intensive research into the syndrome in the weeks and months following the Hiroshima and Nagasaki bombings. Sasaki and his team were able to monitor the effects of radiation in patients of varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, Sasaki noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for ARS. The case of actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the first incident of radiation poisoning to be extensively studied. Her death on 24 August 1945 was the first death ever to be officially certified as a result of ARS (or "Atomic bomb disease").
History:
There are two major databases that track radiation accidents: The American ORISE REAC/TS and the European IRSN ACCIRAD. REAC/TS shows 417 accidents occurring between 1944 and 2000, causing about 3000 cases of ARS, of which 127 were fatal. ACCIRAD lists 580 accidents with 180 ARS fatalities for an almost identical period. The two deliberate bombings are not included in either database, nor are any possible radiation-induced cancers from low doses. The detailed accounting is difficult because of confounding factors. ARS may be accompanied by conventional injuries such as steam burns, or may occur in someone with a pre-existing condition undergoing radiotherapy. There may be multiple causes for death, and the contribution from radiation may be unclear. Some documents may incorrectly refer to radiation-induced cancers as radiation poisoning, or may count all overexposed individuals as survivors without mentioning if they had any symptoms of ARS.
History:
Notable cases The following table includes only those known for their attempted survival with ARS. These cases exclude chronic radiation syndrome, such as the case of Albert Stevens, in which radiation is delivered to a given subject over a long duration. The "result" column represents the time from exposure to the time of death attributed to the short- and long-term effects of the initial exposure. As ARS is measured by a whole-body absorbed dose, the "exposure" column only includes units of gray (Gy).
Other animals:
Thousands of scientific experiments have been performed to study ARS in animals. There is a simple guide for predicting survival and death in mammals, including humans, following the acute effects of inhaling radioactive particles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Taltirelin**
Taltirelin:
Taltirelin (marketed under the tradename Ceredist) is a thyrotropin-releasing hormone (TRH) analog, which mimics the physiological actions of TRH, but with a much longer half-life and duration of effects, and little development of tolerance following prolonged dosing. It has nootropic, neuroprotective and analgesic effects.Taltirelin is primarily being researched for the treatment of spinocerebellar ataxia; limited research has also been carried out with regard to other neurodegenerative disorders, e.g., spinal muscular atrophy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Estrogen and neurodegenerative diseases**
Estrogen and neurodegenerative diseases:
Neurodegenerative diseases can disrupt normal human homeostasis and result in abnormal estrogen levels. For example, neurodegenerative diseases can cause different physiological effects in males and females. In particular, estrogen studies have revealed complex interactions with neurodegenerative diseases. Estrogen was initially proposed as a possible treatment for certain types of neurodegenerative diseases, but a plethora of harmful side effects, such as increased susceptibility to breast cancer and coronary heart disease, overshadowed any beneficial outcomes. On the other hand, estrogen replacement therapy has shown some positive effects in postmenopausal women. Estrogen and estrogen-like molecules form a large family of potentially beneficial alternatives that can have dramatic effects on human homeostasis and disease. Subsequently, large-scale efforts were initiated to screen for useful estrogen family molecules. Furthermore, scientists discovered new ways to synthesize estrogen-like compounds that can avoid many side effects.
Estrogen:
Estrogen is a lipid hormone that, in humans, regulates many physiological processes. It is largely related to the menstrual and estrous cycles, and its biological function is mediated by binding to two receptors: Estrogen Receptor alpha (ERα) and Estrogen Receptor beta (ERβ). These two receptors are tissue specific and have different influences on their downstream genes. A decrease in estrogen levels can lead to osteoporosis and cognitive disorders, and can affect many important genes related to normal physiological function. Estrogen can be divided into four classes: 1) animal estrogens, which include estrone (E1), estradiol (E2), and estriol (E3); 2) plant estrogens (phytoestrogens); 3) fungal estrogens (mycoestrogens); and 4) synthetic estrogens (xenoestrogens). Xenoestrogens comprise a large number of compounds that are either synthesized or naturally occurring. These estrogens imitate estrogen structure and can be designed to satisfy the need for new drugs. They may have a significant impact on neurodegenerative disease treatment due to their ease of synthesis and targeted specificity.
Estrogen:
Application The application of estrogen in medicine can be divided into a number of areas. The best known ones are breast cancer and coronary heart disease. Estrogen also plays a very important role in metabolic balance in animals. These unwanted disease risks have hindered the use of estrogen in neurodegenerative disease therapy. So, when applying estrogen-like drugs to relieve neurodegenerative diseases, the concentration should be strictly controlled to avoid these side effects.
Neurodegenerative Diseases:
Neurodegenerative diseases are diseases that arise from the process of neurodegeneration. Neurodegeneration includes structural and functional loss of neurons or even the death of the neurons. The causes of such diseases vary but can be grouped into four categories: genetic mutation, protein misfolding, intracellular mechanisms, and programmed cell death.
The main classes of neurodegenerative diseases are Alzheimer's disease, Parkinson's disease, Huntington's disease and amyotrophic lateral sclerosis.
Efforts Made on Therapy Different neurodegenerative diseases have different causes and are not yet well understood. There is no known cure for such diseases, but some efforts have been made to research them more deeply.
Neurodegenerative Diseases:
The 10th Global College of Neuroprotection and Neuroregeneration Annual Conference, together with the International Association of Neurorestoratology VI, was held to discuss neurorestoration, neuroprotection and neuroregeneration in various clinical neurodegenerative diseases such as Alzheimer's, Parkinson's and Huntington's disease, stroke, and brain or spinal cord injuries. The main aim was to enhance health care by the use of stem cells, nanodrug delivery of drugs and stem cells, use of multimodal drugs, as well as a combination of different approaches. They concluded that the future of neuroprotection could be achieved by the use of stem cells and nanodrug delivery in chronic neurological disorders.
Estrogen and neurodegenerative diseases:
Although estrogen is best known for its effects on the maturation and differentiation of the primary and secondary sex organs, increasing evidence suggests that its influence extends beyond this system, and its activity in the CNS may initiate, or influence our susceptibility to neurodegenerative decline. Estrogen has been proposed to act as a neuroprotectant at several levels, and it is probable that deprivation of estrogen as a result of menopause exposes the aging or diseased brain to several insults. In addition, estrogen deprivation is likely to initiate or enhance degenerative changes caused by oxidative stress, and to reduce the brain's ability to maintain synaptic connectivity and cholinergic integrity leading to the cognitive decline seen in aged and disease-affected individuals.
Estrogen and neurodegenerative diseases:
There is sufficient evidence that estradiol is a powerful neuroprotectant which might have use against AD, stroke and Parkinson's disease both in women and men.
Estrogen and neurodegenerative diseases:
Estrogen and Alzheimer's disease Amyloid plaques formed by amyloid-β (Aβ) deposition and neurofibrillary tangles formed by tau protein phosphorylation are dominant physiological features of Alzheimer's disease. Amyloid precursor protein (APP) proteolysis is fundamental for production of Aβ peptides implicated in AD pathology. By using a cell line that contains high levels of estrogen receptors, scientists found that treatment with physiological concentrations of 17 beta-estradiol is associated with accumulation in the conditioned medium of an amino-terminal cleavage product of APP (soluble APP or protease nexin-2), indicative of non-amyloidogenic processing.
Estrogen and neurodegenerative diseases:
Estrogen and Parkinson's disease Recommendations have been made on the use of postmenopausal hormone replacement therapy in women with Parkinson's disease or those genetically at risk. However, another group of scientists found a positive association between estrogen use and lower symptom severity in women with early PD not yet taking L-dopa.
Estrogen and Huntington's disease Huntington's disease (HD) is a polyglutamine disorder based on an expanded CAG triplet repeat leading to cerebral and striatal neurodegeneration. Potential sex differences concerning the age of onset and the course of the disease are poorly defined, as the difficulties of matching female and male HD patients regarding their CAG repeat lengths limit comparability.
Estrogen and Amyotrophic lateral sclerosis ALS occurs more commonly in men than in women, and women develop the disease later in life than men.
This suggests a possible protective role of estrogen in ALS. By treating ovariectomized mice with 17β-estradiol, scientists found a significant delay in disease progression.
Estrogen Replacement Therapy:
Estrogen replacement therapy is a kind of hormone replacement therapy (HRT).
Its goal is to mitigate discomfort caused by diminished circulating estrogen after menopause.
Estrogen Replacement Therapy:
The 2002 Women's Health Initiative of the National Institutes of Health found disparate results for all cause mortality with hormone replacement, finding it to be lower when HRT was begun earlier, between age 50–59, but higher when begun after age 60. In older patients, there was an increased incidence of breast cancer, heart attacks and stroke, although a reduced incidence of colorectal cancer and bone fracture. Some of the WHI findings were again found in a larger national study done in the UK, known as The Million Women Study. As a result of these findings, the number of women taking hormone treatment dropped precipitously. The Women's Health Initiative recommended that women with non-surgical menopause take the lowest feasible dose of HRT for the shortest possible time to minimize associated risks.
Estrogen Replacement Therapy:
Main Pathways The role of estrogens is mostly mediated by two nuclear receptors (ER alpha and ER beta) and a membrane-associated G-protein (GPR30 or GPER), and it is not limited to reproduction, but it extends to the skeletal, cardiovascular and central nervous systems. Various pathologies such as cancer, inflammatory, neurodegenerative and metabolic diseases are often associated with dysfunctions of the estrogen system. Therapeutic interventions by agents that affect the estrogen signaling pathway might be useful in the treatment of many dissimilar diseases. These pathways have also shown a great impact on neurodegenerative diseases.
Estrogen Replacement Therapy:
Application The receptors of estrogen are differentially distributed in different tissues and have different influences on their downstream genes. The activation of the two different estrogen receptors has different effects on humans. ERα and ERβ also mediate the function of selective estrogen-receptor modulators (SERMs), but selective ERα agonists can cause side effects such as breast cancer or endometrial hyperplasia, while selective ERβ agonists may have a beneficial effect on such diseases. So, selective ERβ agonists have more clinical value for neurodegenerative diseases. In post-menopausal women, high levels of testosterone and estrogen raise the risk two to three times compared with lower levels. Women who are not taking hormone replacement therapy (HRT) have a lower risk of breast cancer because of the increase in insulin levels.
Nonsteroidal estrogens and neurodegenerative diseases:
Nonsteroidal estrogens include xenoestrogens, phytoestrogens and mycoestrogens. They are potentially very useful in neurodegenerative disease therapy when considering the side effects caused by estradiol.
With the development of chemical synthesis, it has become possible to construct new molecules. Drug companies can exploit naturally existing compounds and synthetic compounds that have estrogen-like activity to produce patented proprietary drugs, especially contraceptives. Phytoestrogens are plant-derived estrogens with structures similar to 17β-estradiol, and thus may cause estrogenic or anti-estrogenic effects.
Application Nonsteroidal estrogens are prevalent in our environment and have both positive and negative effects on our daily life. As a possible route to neurodegenerative disease treatment, scientists have developed multiple ways to screen these estrogens and select the ones that have fewer side effects.
Bipartite recombinant yeast systems and dual-fluorescence reporter systems have been designed to screen these potential chemicals.
**Email forwarding**
Email forwarding:
Email forwarding generically refers to the operation of re-sending an email message, previously delivered to one email address, to one or more different email addresses.
The term forwarding, used for mail since long before electronic communications, has no specific technical meaning, but it implies that the email has been moved "forward" to a new destination.
Email forwarding can also redirect mail going to a certain address and send it to one or more other addresses. Vice versa, email items going to several different addresses can converge via forwarding to end up in a single address in-box. Email users and administrators of email systems use the same term when speaking of both server-based and client-based forwarding.
Server-based forwarding:
The domain name (the part appearing to the right of @ in an email address) defines the target server(s) for the corresponding class of addresses. A domain may also define backup servers; they have no mailboxes and forward messages without changing any part of their envelopes. By contrast, primary servers can deliver a message to a user's mailbox and/or forward it by changing some envelope addresses. ~/.forward files (see below) provide a typical example of server-based forwarding to different recipients.
Server-based forwarding:
Email administrators sometimes use the term redirection as a synonym for server-based email-forwarding to different recipients. Protocol engineers sometimes use the term Mediator to refer to a forwarding server. Because of spam, it is becoming increasingly difficult to reliably forward mail across different domains, and some recommend avoiding it if at all possible.
Uses of server-based forwarding to different recipients Role-addresses info, sales, postmaster, and similar names can appear to the left of @ in email addresses. An organization may forward messages intended for a given role to the address of the person(s) currently functioning in that role or office.
Pseudonym-addresses Most domain name hosting facilities provide facilities to forward mail to another email address such as a mailbox at the user's Internet Service Provider; there are also separate providers of mail forwarding services. This allows users to have an email address that does not change if they change mailbox provider.
Multiple, or discontinued addresses When users change their email address, or have several addresses, the user or an administrator may set up forwarding from these addresses, if still valid, to a single current one, in order to avoid losing messages.
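On a typical Unix mail server, the role-address, pseudonym, and discontinued-address uses described above are often configured through an aliases table. The following sketch uses the common sendmail-style /etc/aliases syntax; all names and domains are hypothetical.

```
# /etc/aliases (illustrative entries only; run newaliases after editing)
postmaster:    alice@example.org
sales:         bob@example.org, carol@example.org
old-address:   current-address@example.org
```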
Server-based forwarding:
Forwarding versus remailing Plain message-forwarding changes the envelope recipient(s) and leaves the envelope sender field untouched. The "envelope sender" field does not equate to the From header which Email client software usually displays: it represents a field used in the early stages of the SMTP protocol, and subsequently saved as the Return-Path header. This field holds the address to which mail-systems must send bounce messages — reporting delivery-failure (or success) — if any.
Server-based forwarding:
By contrast, the terms remailing or redistribution can sometimes mean re-sending the message and also rewriting the "envelope sender" field. Electronic mailing lists furnish a typical example. Authors submit messages to a reflector that performs remailing to each list address. That way, bounce messages (which report a failure delivering a message to any list subscriber) will not reach the author of a message. However, annoying misconfigured vacation autoreplies do reach authors.
Server-based forwarding:
Typically, plain message-forwarding does alias-expansion, while proper message-forwarding, also named forwarding tout-court, serves for mailing-lists. When additional modifications to the message are carried out, so as to resemble the action of a Mail User Agent submitting a new message, the term forwarding becomes deceptive and remailing seems more appropriate.
Server-based forwarding:
In the Sender Policy Framework (SPF), the domain name in the envelope sender remains subject to policy restrictions. Therefore, SPF generally disallows plain message-forwarding. When a message is forwarded, it is sent from the forwarding server, which is not authorized to send emails for the original sender's domain, so the SPF check fails. Intra-domain redirection complies with SPF as long as the relevant servers share a consistent configuration. Mail servers that practice inter-domain message-forwarding may break SPF even if they do not implement SPF themselves, i.e. they neither apply SPF checks nor publish SPF records. The Sender Rewriting Scheme provides for a generic forwarding mechanism compatible with SPF.
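To make the SPF interaction concrete, the sketch below shows a hypothetical SPF policy published in DNS and the kind of rewritten envelope sender the Sender Rewriting Scheme produces; all domains, addresses, and the hash and timestamp fields are placeholders.

```
; Hypothetical SPF policy for example.org, published as a DNS TXT record
example.org.   IN TXT   "v=spf1 mx ip4:192.0.2.10 -all"

; A message forwarded by forwarder.example.net with envelope sender
; user@example.org fails this check, because the forwarder is not listed.
; With the Sender Rewriting Scheme, the forwarder rewrites the envelope
; sender to an address in its own domain, roughly of the form:
;   SRS0=<hash>=<timestamp>=example.org=user@forwarder.example.net
```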
Client-based forwarding:
Automated client-based forwarding Client forwarding can take place automatically using a non-interactive client such as a mail retrieval agent. Although the retrieval agent uses a client protocol, this forwarding resembles server forwarding in that it keeps the same message-identity. Concerns about the envelope-sender apply.
Client-based forwarding:
Manual client-based forwarding An end-user can manually forward a message using an email client. Forwarding inline quotes the message below the main text of the new message, and usually preserves original attachments as well as a choice of selected headers (e.g. the original From and Reply-To.) The recipient of a message forwarded this way may still be able to reply to the original message; the ability to do so depends on the presence of original headers and may imply manually copying and pasting the relevant destination addresses.
Client-based forwarding:
Forwarding as attachment prepares a MIME attachment (of type message/rfc822) that contains the full original message, including all headers and any attachment. Note that including all the headers discloses much information about the message, such as the servers that transmitted it and any client-tag added on the mailbox. The recipient of a message forwarded this way may be able to open the attached message and reply to it seamlessly.
Client-based forwarding:
This kind of forwarding actually constitutes a remailing from the points of view of the envelope-sender and of the recipient(s). The message identity also changes.
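A skeleton of the MIME structure produced by forwarding as an attachment might look as follows; the addresses, subject, and boundary string are hypothetical, and most envelope and routing headers are omitted for brevity.

```
Content-Type: multipart/mixed; boundary="fwd-boundary"
Subject: Fwd: Original subject

--fwd-boundary
Content-Type: text/plain

Please see the attached message.

--fwd-boundary
Content-Type: message/rfc822

From: original.sender@example.org
To: first.recipient@example.net
Subject: Original subject

Body of the original message, with all of its headers preserved.

--fwd-boundary--
```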
Historical development of email forwarding:
RFC 821, Simple Mail Transfer Protocol, by Jonathan B. Postel in 1982, provided for a forward-path for each recipient, in the form of, for example, @USC-ISIE.ARPA, @USC-ISIF.ARPA: Q-Smith@ISI-VAXA.ARPA — an optional list of hosts and a required destination-mailbox. When the list of hosts existed, it served as a source-route, indicating that each host had to relay the mail to the next host on the list. Otherwise, in the case of insufficient destination-information but where the server knew the correct destination, it could take responsibility for delivering the message. The concept at that time envisaged the elements of the forward-path (source route) moving to the return-path (envelope sender) as a message got relayed from one SMTP server to another. Even though the use of source-routing was discouraged, dynamically building the return-path implied that the "envelope sender" information could not remain in its original form during forwarding. Thus RFC 821 did not originally allow plain message-forwarding.
Historical development of email forwarding:
The introduction of the MX record made source-routing unnecessary. In 1989, RFC 1123 recommended accepting source-routing only for backward-compatibility. At that point, plain message forwarding became the recommended action for alias-expansion. In 2008, RFC 5321 still mentions that "systems may remove the return path and rebuild [it] as needed", taking into consideration that not doing so might inadvertently disclose sensitive information.
Historical development of email forwarding:
Actually, plain message-forwarding can be conveniently used for alias expansion within the same server or a set of coordinated servers.
Historical development of email forwarding:
~/.forward files The reference SMTP implementation in the early 1980s was sendmail, which provided for ~/.forward files, which can store the target email-addresses for given users. This kind of server-based forwarding is sometimes called dot-forwarding. One can configure some email-program filters to automatically perform forwarding or replying actions immediately after receiving. Forward files can also contain shell scripts, which have become a source of many security problems. Formerly only trusted users could utilize the command-line switch for setting the envelope sender, -f arg; some systems disabled this feature for security reasons. Email predates the formalization of client–server architectures in the 1990s.
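As a sketch, a one-line ~/.forward file for a hypothetical user jdoe that keeps local delivery while also forwarding a copy to an external mailbox could read:

```
\jdoe, jdoe@example.org
```

In sendmail-style systems the leading backslash requests delivery to the local mailbox without re-reading the forward file, which prevents a forwarding loop; the address after the comma receives the forwarded copy.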
Historical development of email forwarding:
Therefore, the distinction between client and server is necessarily somewhat forced. The original distinction contrasted daemons and user-controlled programs which run on the same machine. The sendmail daemon used to run with root privileges so it could impersonate any user whose mail it had to manage. On the other hand, users can access their own individual mail-files and configuration files, including ~/.forward. Client programs may assist in editing the server configuration-files of a given user, thereby causing some confusion as to what role each program plays.
Historical development of email forwarding:
Virtual users The term "virtual users" refers to email users who never log on to a mail-server system and only access their mailboxes using remote clients. A mail-server program may work for both virtual and regular users, or it may require minor modifications to take advantage of the fact that virtual users frequently share the same system id. The latter circumstance allows the server program to implement some features more easily, as it does not have to obey system-access restrictions. The same principles of operation apply. However, virtual users have more difficulty in accessing their configuration files, for good or ill.
**FAM221A**
FAM221A:
Family with sequence similarity 221 member A is a protein in humans that is encoded by the FAM221A gene. FAM221A is a gene that is not yet well understood by the scientific community. However, it appears that this gene may have a role in Parkinson's disease and prostate cancer.
Gene:
Location and Aliases FAM221A is located on Chromosome 7. Its exact location is 7p15.3. It has one alias, which is C7orf46.
Expression FAM221A has higher levels of expression in the liver, brain, fetal brain, thyroid and colon, with the highest levels of expression in the spinal cord, pancreas and retina. The promoter region of FAM221A is 1222 base pairs long. This was found using ElDorado at Genomatix.
Protein:
Protein Analysis The molecular weight of FAM221A is 33.1 kDa, and the isoelectric point is 6.01. Relative to other proteins in humans, FAM221A has a lower level of asparagine.
Post-Translational Modifications Post-translational modifications of FAM221A include phosphorylation sites, glycosylation sites and sulfation sites. These have been conserved in mammals other than Homo sapiens, including the macaque, whale, finch and sometimes alligator. These sites were predicted using NetPhos 3.1, YinOYang 1.2 and The Sulfinator.
Secondary Structure Key structures predicted in FAM221A are random coils and alpha helices, with 71% of the protein being random coils and 21% being helices. Extended strands were also found, making up 7% of the protein. Secondary structure was predicted using RaptorX, and a diagram of the predicted secondary structure is included below.
Homology/evolution:
Paralogs There exists one paralog for FAM221A: FAM221B. This diverged from FAM221A approximately 1781 million years ago.
Homology/evolution:
Orthologs Orthologs have been found in mammals, birds, reptiles and fish. FAM221A has also been conserved in invertebrates, but the similarity levels decrease at a faster rate. Orthologs were discovered using BLAST and BLAT. While these are not the only orthologs that exist for FAM221A, a table of 20 orthologs is provided below. The ortholog with no accession number was created using BLAT.
Homology/evolution:
Divergence of FAM221A To understand the times when FAM221A diverged from different species, a graph was created. This compares the evolutionary history of FAM221A to Fibrinogen, which evolves quickly, and Cytochrome C, which evolves slowly. As seen in the graph, FAM221A diverges from other species at a moderate pace.
Clinical significance:
FAM221A has a relatively high amount of expression in the brain and has been seen to have an association with neurodegenerative disorders such as Parkinson's disease and Alzheimer's disease. FAM221A has also been seen to have a higher level of expression in those who have prostate cancer versus healthy individuals. Furthermore, FAM221A has also been expressed in those with colorectal tumors.
Interacting Proteins:
Three interacting proteins were found, which are SNX2, SNX5 and SNX6.
SNX2 and SNX6 share the same function, which is being involved in the stages of intracellular trafficking. SNX5 facilitates cargo retrieval from endosomes to the trans-golgi network. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Intel ADX**
Intel ADX:
Intel ADX (Multi-Precision Add-Carry Instruction Extensions) is Intel's arbitrary-precision arithmetic extension to the x86 instruction set architecture (ISA). Intel ADX was first supported in the Broadwell microarchitecture. The instruction set extension contains just two new instructions, ADCX and ADOX, though MULX from BMI2 is also considered part of the large-integer arithmetic support. Both instructions are more efficient variants of the existing ADC instruction, with the difference that each of the two new instructions affects only one flag (ADCX the carry flag, ADOX the overflow flag), whereas ADC may set both the overflow and carry flags and, as an old-style x86 instruction, also modifies the rest of the arithmetic CPU flags. Having two versions affecting different flags means that two chains of additions with carry can be calculated in parallel. AMD added support in their processors for these instructions starting with Ryzen. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
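One way these instructions are typically reached from C is through the add-with-carry intrinsics in <immintrin.h>; _addcarryx_u64 allows the compiler to emit ADCX/ADOX on ADX-capable processors. This is a minimal sketch, assuming an x86-64 compiler with ADX support enabled (for example -madx on GCC or Clang; the exact flag may vary), not a tuned multi-precision library routine.

```c
#include <stdio.h>
#include <immintrin.h>

/* Multi-precision addition of two 256-bit integers stored as four 64-bit
 * limbs (least significant limb first). _addcarryx_u64 lets the compiler
 * use the single-flag ADCX/ADOX instructions on ADX-capable CPUs. */
static unsigned char add_256(const unsigned long long a[4],
                             const unsigned long long b[4],
                             unsigned long long out[4]) {
    unsigned char carry = 0;
    for (int i = 0; i < 4; i++)
        carry = _addcarryx_u64(carry, a[i], b[i], &out[i]);
    return carry;  /* carry out of the most significant limb */
}

int main(void) {
    unsigned long long a[4] = { ~0ULL, ~0ULL, 0, 0 };  /* 2^128 - 1 */
    unsigned long long b[4] = { 1, 0, 0, 0 };
    unsigned long long r[4];
    unsigned char c = add_256(a, b, r);
    printf("carry=%u limbs(hi..lo)=%llu %llu %llu %llu\n",
           (unsigned)c, r[3], r[2], r[1], r[0]);
    return 0;
}
```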
**Small-conductance mechanosensitive channel**
Small-conductance mechanosensitive channel:
Small conductance mechanosensitive ion channels (MscS) provide protection against hypo-osmotic shock in bacteria, responding both to stretching of the cell membrane and to membrane depolarization. In eukaryotes, they fulfill a multitude of important functions in addition to osmoregulation. They are present in the membranes of organisms from all three domains of life: bacteria, archaea, and eukaryotes such as fungi and plants.
Structure:
There are two families of mechanosensitive (MS) channels: large-conductance MS channels (MscL) and small-conductance MS channels (MscS or YGGB). The MscS family is much larger and more variable in size and sequence than the MscL family. MscS family homologues vary in length between 248 and 1120 amino acyl residues and in topology, but the homologous region that is shared by most of them is only 200-250 residues long, exhibiting 4-5 transmembrane regions (TMSs). Much of the diversity in MscS proteins occurs in the number of TMSs, which ranges from three to eleven TMSs, although the three C-terminal helices are conserved.
Structure:
Crystal structures of the Escherichia coli MscS in the open and closed conformations are available. E. coli MscS folds as a homoheptamer with a cylindrical shape, and can be divided into transmembrane and extramembrane regions: an N-terminal periplasmic region, a transmembrane region, and a C-terminal cytoplasmic region (middle and C-terminal domains). The transmembrane region forms a channel through the membrane that opens into a chamber enclosed by the extramembrane portion, the latter connecting to the cytoplasm through distinct portals.
Function:
MS channels function as electromechanical switches with the capability to sense the physical state of lipid bilayers. Interactions with the membrane lipids are responsible for the sensing of mechanical force for most known MS channels. In bacterial and animal systems, MS ion channels are thought to mediate the perception of pressure, touch, and sound. With numerous members now electrophysiologically characterized, these channels display a breadth of ion selectivity, with both anion- and cation-selective members. The selectivities of these channels may be relatively weak in comparison to voltage-gated channels. In addition, some MscS channels may function in amino acid efflux, Ca2+ regulation and cell division.
Function:
Transport reaction The generalized transport reaction proposed for MscS channels is: Osmolytes (in) and ions (in) ⇌ osmolytes (out) and ions (out)
Mechanism:
Application of a ramp of negative pressure to a patch excised from an E. coli giant spheroplast gave a small conductance (MscS; ~1 nS in 400 mM salt) with a sustained open state, and a large conductance (MscL; ~3 nS) with faster kinetics, activated at higher pressure. MscS was reported to exhibit a weak anionic preference and a voltage dependency, tending to open upon depolarization. Activation by membrane-intercalating amphipathic compounds suggested that the MscS channel is sensitive to mechanical perturbations in the lipid bilayer. Sensitivity towards tension changes can be explained as a result of the hydrophobic coupling between the membrane and TMSs of the channel. Pockets in between TMSs were identified in MscS and YnaI that are filled with lipids. Fewer lipids are present in the open state of MscS than the closed. Thus, exclusion of lipid fatty acyl chains from these pockets, as a consequence of increased tension, may trigger gating. Similarly, in the eukaryotic MS channel TRAAK it was found that a lipid chain blocks the conducting path in the closed state. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flail chest**
Flail chest:
Flail chest is a life-threatening medical condition that occurs when a segment of the rib cage breaks due to trauma and becomes detached from the rest of the chest wall. Two of the symptoms of flail chest are chest pain and shortness of breath. It occurs when multiple adjacent ribs are broken in multiple places, separating a segment, so a part of the chest wall moves independently. The number of ribs that must be broken varies by differing definitions: some sources say at least two adjacent ribs are broken in at least two places, some require three or more ribs in two or more places. The flail segment moves in the opposite direction to the rest of the chest wall: because of the ambient pressure in comparison to the pressure inside the lungs, it goes in while the rest of the chest is moving out, and vice versa. This so-called "paradoxical breathing" is painful and increases the work involved in breathing.
Flail chest:
Flail chest is usually accompanied by a pulmonary contusion, a bruise of the lung tissue that can interfere with blood oxygenation. Often, it is the contusion, not the flail segment, that is the main cause of respiratory problems in people with both injuries. Surgery to fix the fractures appears to result in better outcomes.
Signs and symptoms:
Two of the symptoms of flail chest are chest pain and shortness of breath. The characteristic paradoxical motion of the flail segment occurs due to pressure changes associated with respiration that the rib cage normally resists: During normal inspiration, the diaphragm contracts and intercostal muscles pull the rib cage out. Pressure in the thorax decreases below atmospheric pressure, and air rushes in through the trachea. The flail segment will be pulled in with the decrease in pressure while the rest of the rib cage expands.
Signs and symptoms:
During normal expiration, the diaphragm and intercostal muscles relax, increasing internal pressure, allowing the abdominal organs to push air upwards and out of the thorax. However, a flail segment will also be pushed out while the rest of the rib cage contracts. Paradoxical motion is a late sign of flail segment; therefore, an absence of paradoxical motion does not mean the patient does not have a flail segment.
Signs and symptoms:
The constant motion of the ribs in the flail segment at the site of the fracture is extremely painful, and, untreated, the sharp broken edges of the ribs are likely to eventually puncture the pleural sac and lung, possibly causing a pneumothorax. The concern about "mediastinal flutter" (the shift of the mediastinum with paradoxical diaphragm movement) does not appear to be merited. Pulmonary contusions are commonly associated with flail chest and that can lead to respiratory failure. This is due to the paradoxical motions of the chest wall from the fragments interrupting normal breathing and chest movement. Typical paradoxical motion is associated with stiff lungs, which requires extra work for normal breathing, and increased lung resistance, which makes air flow difficult. The respiratory failure from the flail chest requires mechanical ventilation and a longer stay in an intensive care unit. It is the damage to the lungs from the flail segment that is life-threatening.
Causes:
The most common causes of flail chest injuries are vehicle collisions, which account for 76% of flail chest injuries. Another main cause of flail chest injuries is falling. This mainly occurs in the elderly, who are more affected by falls as a result of their weak and frail bones, unlike their younger counterparts who can fall without being impacted as severely. Falls account for 14% of flail chest injuries. Flail chest typically occurs when three or more adjacent ribs are fractured in two or more places, allowing that segment of the thoracic wall to displace and move independently of the rest of the chest wall. Flail chest can also occur when ribs are fractured proximally in conjunction with disarticulation of costal cartilages distally. For the condition to occur, generally there must be a significant force applied over a large surface of the thorax to create the multiple anterior and posterior rib fractures. Rollover and crushing injuries most commonly break ribs at only one point, whereas for flail chest to occur a significant impact is required, breaking the ribs in two or more places. This can be caused by forceful accidents such as the aforementioned vehicle collisions or significant falls. In the elderly, it can also be caused by deterioration of bone, although this is rare. In children, the majority of flail chest injuries result from common blunt force traumas or metabolic bone diseases, including a group of genetic disorders known as osteogenesis imperfecta.
Diagnosis:
Diagnosis is by physical examination performed by a physician. The diagnosis may be assisted or confirmed by use of medical imaging with either plain X ray or CT scan.
Treatment:
Treatment of the flail chest initially follows the principles of advanced trauma life support. Further treatment includes: Good pain management, including early regional anesthesia (e.g. intercostal blocks or erector spinae plane blocks) and avoiding opioid pain medication as much as possible. This allows much better ventilation, with improved tidal volume and increased blood oxygenation.
Positive pressure ventilation, meticulously adjusting the ventilator settings to avoid pulmonary barotrauma.
Chest tubes as required.
Adjustment of position to make the person most comfortable and provide relief of pain.
Aggressive pulmonary toilet. A person may be intubated with a double-lumen tracheal tube. In a double-lumen endotracheal tube, each lumen may be connected to a different ventilator. Usually one side of the chest is affected more than the other, so each lung may require drastically different pressures and flows to adequately ventilate.
Surgical fixation can help in significantly reducing the duration of ventilatory support and in conserving the pulmonary function.
Surgical intervention has also been shown to reduce the need for tracheostomy and the time spent in the intensive care unit following a traumatic flail chest injury, and it could reduce the risk of acquiring pneumonia after such an event.
Treatment:
Physiotherapy In order to begin a rehabilitation program for a flail chest, it is important to treat the person's pain so they are able to perform the proper exercises. Due to the underlying conditions that the flail segment has caused in the respiratory system, chest physiotherapy is important to reduce further complications. Proper positioning of the body is key, including postural alignment for proper drainage of mucous secretions. The therapy will consist of a variety of postural positions and changes in order to promote normal breathing. Along with postural repositioning, a variety of breathing exercises are also very important in order to allow the chest wall to reposition itself back to normal conditions. Breathing exercises will also include coughing procedures. Furthermore, range of motion exercises are given to reduce atrophy of the musculature. With progression, resistance exercises are added to the regimen for the shoulder and arm on the side of the injury. Moreover, trunk exercises will be introduced while sitting and will progress to standing. Hip flexion exercises can be done to expand the thorax. This is done by lying supine on a flat surface, flexing the knees and hips and bringing them in toward the chest. The knees should come in toward the chest while the person inhales, and the person should exhale as the knees are lowered. This exercise can be done in 3 sets of 6–8 repetitions with a pause in between sets. The person should always make sure to maintain controlled breaths. Eventually, the person will be progressed to walking and posture correction while walking. Before the person is discharged from the hospital, the person should be able to perform mobility exercises for the core and should have attained good posture.
Prognosis:
The death rate of people with flail chest depends on the severity of their condition, ranging from 10 to 25%.
Prognosis:
A systematic review comparing the safety and effectiveness of surgical fixation versus non-surgical methods for the treatment of flail chest reported that there was no statistically significant difference in the reported deaths between patients treated surgically and those treated non-surgically, i.e. with conservative management methods. The results of the systematic review suggested that surgical intervention reduces the need for tracheostomy, reduces the time spent in the intensive care unit following a traumatic flail chest injury and could reduce the risk of acquiring pneumonia after such an event.
Epidemiology:
Approximately 1 out of 13 people admitted to the hospital with fractured ribs is found to have flail chest. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Uvular consonant**
Uvular consonant:
Uvulars are consonants articulated with the back of the tongue against or near the uvula, that is, further back in the mouth than velar consonants. Uvulars may be stops, fricatives, nasals, trills, or approximants, though the IPA does not provide a separate symbol for the approximant, and the symbol for the voiced fricative is used instead. Uvular affricates can certainly be made but are rare: they occur in some southern High-German dialects, as well as in a few African and Native American languages. (Ejective uvular affricates occur as realizations of uvular stops in Lillooet, Kazakh, or as allophonic realizations of the ejective uvular fricative in Georgian.) Uvular consonants are typically incompatible with advanced tongue root, and they often cause retraction of neighboring vowels.
Uvular consonants in IPA:
The uvular consonants identified by the International Phonetic Alphabet are:
Descriptions in different languages:
English has no uvular consonants (at least in most major dialects), and they are unknown in the indigenous languages of Australia and the Pacific, though uvular consonants separate from velar consonants are believed to have existed in the Proto-Oceanic language and are attested in the modern Formosan languages of Taiwan. Uvular consonants are, however, found in many Middle-Eastern and African languages, most notably Arabic and Somali, and in Native American languages. In parts of the Caucasus mountains and northwestern North America, nearly every language has uvular stops and fricatives. Two uvular R phonemes are found in various languages in northwestern Europe, including French, some Occitan dialects, a majority of German dialects, some Dutch dialects, and Danish.
Descriptions in different languages:
The voiceless uvular stop is transcribed as [q] in both the IPA and X-SAMPA. It is pronounced somewhat like the voiceless velar stop [k], but with the middle of the tongue further back on the velum, against or near the uvula. The most familiar use will doubtless be in the transliteration of Arabic place names such as Qatar and Iraq into English, though, since English lacks this sound, this is generally pronounced as [k], the most similar sound that occurs in English.
Descriptions in different languages:
[qʼ], the uvular ejective, is found in Ubykh, Tlingit, Cusco Quechua, and some others. In Georgian, the existence of this phoneme is debatable, since the general realization of the letter "ყ" is /χʼ/. This is due to /qʰ/ merging with /χ/ and therefore /qʼ/ being influenced by this merger and becoming /χʼ/.
Descriptions in different languages:
[ɢ], the voiced equivalent of [q], is much rarer. It is like the voiced velar stop [ɡ], but articulated in the same uvular position as [q]. Few languages use this sound, but it is found in Persian and in some Northeast Caucasian languages, notably Tabasaran, and in some Pacific Northwest languages, such as Kwakʼwala. It may also occur as an allophone of another uvular consonant. In Kazakh, the voiced uvular stop is an allophone of the voiced uvular fricative after the velar nasal.
Descriptions in different languages:
The voiceless uvular fricative [χ] is similar to the voiceless velar fricative [x], except that it is articulated near the uvula. It is found in Georgian, and instead of [x] in some dialects of German, Spanish, and colloquial Arabic, as well as in some Dutch varieties and in standard Afrikaans.
Uvular flaps have been reported for Kube (Trans–New Guinea) and for the variety of Khmer spoken in Battambang province.
Descriptions in different languages:
The Enqi dialect of the Bai language has an unusually complete series of uvular consonants consisting of the stops /q/, /qʰ/ and /ɢ/, the fricatives /χ/ and /ʁ/, and the nasal /ɴ/. All of these contrast with a corresponding velar consonant of the same manner of articulation. The existence of the uvular nasal is especially unusual, even more so than the existence of the voiced stop.
Descriptions in different languages:
The Tlingit language of the Alaska Panhandle has ten uvular consonants, all of which are voiceless obstruents; the extinct Ubykh language of Turkey has twenty.
Phonological representation:
In featural phonology, uvular consonants are most often considered to contrast with velar consonants in terms of being [–high] and [+back]. Prototypical uvulars also appear to be [-ATR]. Two variants can then be established. Since palatalized consonants are [-back], the appearance of palatalized uvulars in a few languages such as Ubykh is difficult to account for. According to Vaux (1999), they possibly hold the features [+high], [-back], [-ATR], the last being the distinguishing feature from a palatalized velar consonant.
Uvular rhotics:
The uvular trill [ʀ] is used in certain dialects (especially those associated with European capitals) of French, German, Dutch, Portuguese, Danish, Swedish and Norwegian, as well as sometimes in Modern Hebrew, for the rhotic phoneme. In many of these it has a uvular fricative (either voiced [ʁ] or voiceless [χ]) as an allophone when it follows one of the voiceless stops /p/, /t/, or /k/ at the end of a word, as in the French example maître [mɛtχ], or even a uvular approximant.
Uvular rhotics:
As with most trills, uvular trills are often reduced to a single contact, especially between vowels.
Unlike other uvular consonants, the uvular trill is articulated without a retraction of the tongue, and therefore doesn't lower neighboring high vowels the way uvular stops commonly do.
Uvular rhotics:
Several other languages, including Inuktitut, Abkhaz, Uyghur and some varieties of Arabic, have a voiced uvular fricative but do not treat it as a rhotic consonant. However, Modern Hebrew and some modern varieties of Arabic both have at least one uvular fricative that is considered non-rhotic, and one that is considered rhotic. In Lakhota, the uvular trill is an allophone of the voiced uvular fricative before /i/. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Philosophy of space and time**
Philosophy of space and time:
Philosophy of space and time is the branch of philosophy concerned with the issues surrounding the ontology and epistemology of space and time. While such ideas have been central to philosophy from its inception, the philosophy of space and time was both an inspiration for and a central aspect of early analytic philosophy. The subject focuses on a number of basic issues, including whether time and space exist independently of the mind, whether they exist independently of one another, what accounts for time's apparently unidirectional flow, whether times other than the present moment exist, and questions about the nature of identity (particularly the nature of identity over time).
Ancient and medieval views:
The earliest recorded philosophy of time was expounded by the ancient Egyptian thinker Ptahhotep (c. 2650–2600 BC) who said: Follow your desire as long as you live, and do not perform more than is ordered, do not lessen the time of the following desire, for the wasting of time is an abomination to the spirit... The Vedas, the earliest texts on Indian philosophy and Hindu philosophy, dating back to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction, and rebirth, with each cycle lasting 4,320,000,000 years. Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time. The Incas regarded space and time as a single concept, named pacha (Quechua: pacha, Aymara: pacha). Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies, and space as that in which things come to be. Aristotle, in Book IV of his Physics, defined time as the number of changes with respect to before and after, and the place of an object as the innermost motionless boundary of that which surrounds it.
Ancient and medieval views:
In Book 11 of St. Augustine's Confessions, he reflects on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one who asks, I know not." He goes on to comment on the difficulty of thinking about time, pointing out the inaccuracy of common speech: "For but few things are there of which we speak properly; of most things we speak improperly, still, the things intended are understood." But Augustine presented the first philosophical argument for the reality of Creation (against Aristotle) in the context of his discussion of time, saying that knowledge of time depends on the knowledge of the movement of things, and therefore time cannot be where there are no creatures to measure its passing (Confessions Book XI ¶30; City of God Book XI ch.6).
Ancient and medieval views:
In contrast to ancient Greek philosophers who believed that the universe had an infinite past with no beginning, medieval philosophers and theologians developed the concept of the universe having a finite past with a beginning, now known as temporal finitism. The Christian philosopher John Philoponus presented early arguments, adopted by later Christian philosophers and theologians, of the form "argument from the impossibility of the existence of an actual infinite", which states: "An actual infinite cannot exist." "An infinite temporal regress of events is an actual infinite." "∴ An infinite temporal regress of events cannot exist." In the early 11th century, the Muslim physicist Ibn al-Haytham (Alhacen or Alhazen) discussed space perception and its epistemological implications in his Book of Optics (1021). He also rejected Aristotle's definition of topos (Physics IV) by way of geometric demonstrations and defined place as a mathematical spatial extension. His experimental disproof of the extramission hypothesis of vision led to changes in the understanding of the visual perception of space, contrary to the previous emission theory of vision supported by Euclid and Ptolemy. In "tying the visual perception of space to prior bodily experience, Alhacen unequivocally rejected the intuitiveness of spatial perception and, therefore, the autonomy of vision. Without tangible notions of distance and size for correlation, sight can tell us next to nothing about such things."
Realism and anti-realism:
A traditional realist position in ontology is that time and space have existence apart from the human mind. Idealists, by contrast, deny or doubt the existence of objects independent of the mind. Some anti-realists, whose ontological position is that objects outside the mind do exist, nevertheless doubt the independent existence of time and space.
Realism and anti-realism:
In 1781, Immanuel Kant published the Critique of Pure Reason, one of the most influential works in the history of the philosophy of space and time. He describes time as an a priori notion that, together with other a priori notions such as space, allows us to comprehend sense experience. Kant holds that neither space nor time are substances, entities in themselves, or learned by experience; he holds, rather, that both are elements of a systematic framework we use to structure our experience. Spatial measurements are used to quantify how far apart objects are, and temporal measurements are used to quantitatively compare the interval between (or duration of) events. Although space and time are held to be transcendentally ideal in this sense—that is, mind-dependent—they are also empirically real—that is, according to Kant's definitions, a priori features of experience, and therefore not simply "subjective," variable, or accidental perceptions in a given consciousness. Some idealist writers, such as J. M. E. McTaggart in The Unreality of Time, have argued that time is an illusion (see also The flow of time, below).
Realism and anti-realism:
The writers discussed here are for the most part realists in this regard; for instance, Gottfried Leibniz held that his monads existed, at least independently of the mind of the observer.
Absolutism and relationalism:
Leibniz and Newton The great debate over whether space and time are real objects in themselves (absolute) or mere orderings upon actual objects (relational) began between Isaac Newton (via his spokesman, Samuel Clarke) and Gottfried Leibniz in the papers of the Leibniz–Clarke correspondence.
Absolutism and relationalism:
Arguing against the absolutist position, Leibniz offers a number of thought experiments with the purpose of showing that there is contradiction in assuming the existence of facts such as absolute location and velocity. These arguments trade heavily on two principles central to his philosophy: the principle of sufficient reason and the identity of indiscernibles. The principle of sufficient reason holds that for every fact, there is a reason that is sufficient to explain what and why it is the way it is and not otherwise. The identity of indiscernibles states that if there is no way of telling two entities apart, then they are one and the same thing.
Absolutism and relationalism:
The example Leibniz uses involves two proposed universes situated in absolute space. The only discernible difference between them is that the latter is positioned five feet to the left of the first. The example is only possible if such a thing as absolute space exists. Such a situation, however, is not possible, according to Leibniz, for if it were, a universe's position in absolute space would have no sufficient reason, as it might very well have been anywhere else. Therefore, it contradicts the principle of sufficient reason, and there could exist two distinct universes that were in all ways indiscernible, thus contradicting the identity of indiscernibles.
Absolutism and relationalism:
Standing out in Clarke's (and Newton's) response to Leibniz's arguments is the bucket argument: Water in a bucket, hung from a rope and set to spin, will start with a flat surface. As the water begins to spin in the bucket, the surface of the water will become concave. If the bucket is stopped, the water will continue to spin, and while the spin continues, the surface will remain concave. The concave surface is apparently not the result of the interaction of the bucket and the water, since the surface is flat when the bucket first starts to spin, it becomes concave as the water starts to spin, and it remains concave as the bucket stops.
Absolutism and relationalism:
In this response, Clarke argues for the necessity of the existence of absolute space to account for phenomena like rotation and acceleration that cannot be accounted for on a purely relationalist account. Clarke argues that since the curvature of the water occurs in the rotating bucket as well as in the stationary bucket containing spinning water, it can only be explained by stating that the water is rotating in relation to the presence of some third thing—absolute space.
Absolutism and relationalism:
Leibniz describes a space that exists only as a relation between objects, and which has no existence apart from the existence of those objects. Motion exists only as a relation between those objects. Newtonian space provided the absolute frame of reference within which objects can have motion. In Newton's system, the frame of reference exists independently of the objects contained within it. These objects can be described as moving in relation to space itself. For almost two centuries, the evidence of a concave water surface held authority.
Absolutism and relationalism:
Mach Another important figure in this debate is 19th-century physicist Ernst Mach. While he did not deny the existence of phenomena like that seen in the bucket argument, he still denied the absolutist conclusion by offering a different answer as to what the bucket was rotating in relation to: the fixed stars.
Absolutism and relationalism:
Mach suggested that thought experiments like the bucket argument are problematic. If we were to imagine a universe that only contains a bucket, on Newton's account, this bucket could be set to spin relative to absolute space, and the water it contained would form the characteristic concave surface. But in the absence of anything else in the universe, it would be difficult to confirm that the bucket was indeed spinning. It seems equally possible that the surface of the water in the bucket would remain flat.
Absolutism and relationalism:
Mach argued that, in effect, the water in the bucket experiment would remain flat in an otherwise empty universe. But if another object were introduced into this universe, perhaps a distant star, there would now be something relative to which the bucket could be seen as rotating, and the water inside the bucket could possibly take on a slight curve. To account for the curve that we actually observe, the curvature of the water increases as the number of objects in the universe increases. Mach argued that the momentum of an object, whether angular or linear, exists as a result of the sum of the effects of other objects in the universe (Mach's Principle).
Absolutism and relationalism:
Einstein Albert Einstein proposed that the laws of physics should be based on the principle of relativity. This principle holds that the rules of physics must be the same for all observers, regardless of the frame of reference that is used, and that light propagates at the same speed in all reference frames. This theory was motivated by Maxwell's equations, which show that electromagnetic waves propagate in a vacuum at the speed of light. However, Maxwell's equations give no indication of what this speed is relative to. Prior to Einstein, it was thought that this speed was relative to a fixed medium, called the luminiferous ether. In contrast, the theory of special relativity postulates that light propagates at the same speed in all inertial frames, and examines the implications of this postulate.
Absolutism and relationalism:
All attempts to measure any speed relative to this ether failed, which can be seen as a confirmation of Einstein's postulate that light propagates at the same speed in all reference frames. Special relativity is a formalization of the principle of relativity that does not contain a privileged inertial frame of reference, such as the luminiferous ether or absolute space, from which Einstein inferred that no such frame exists.
Absolutism and relationalism:
Einstein generalized relativity to frames of reference that were non-inertial. He achieved this by positing the Equivalence Principle, which states that the force felt by an observer in a given gravitational field and that felt by an observer in an accelerating frame of reference are indistinguishable. This led to the conclusion that the mass of an object warps the geometry of the space-time surrounding it, as described in Einstein's field equations.
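For reference, the field equations mentioned here take their standard textbook form (with $\Lambda$ the cosmological constant, $G_{\mu\nu}$ the Einstein tensor built from the curvature of the metric $g_{\mu\nu}$, and $T_{\mu\nu}$ the stress-energy tensor of matter):
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},$$
which expresses directly the claim above: the matter content on the right-hand side dictates the curvature of space-time on the left.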
Absolutism and relationalism:
In classical physics, an inertial reference frame is one in which an object that experiences no forces does not accelerate. In general relativity, an inertial frame of reference is one that is following a geodesic of space-time. An object that deviates from a geodesic experiences a force. An object in free fall does not experience a force, because it is following a geodesic. An object standing on the earth, however, will experience a force, as it is being held against the geodesic by the surface of the planet.
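In coordinates, the geodesics referred to here are the curves $x^\mu(\tau)$ satisfying the standard geodesic equation (with $\tau$ an affine parameter and $\Gamma^\mu_{\alpha\beta}$ the Christoffel symbols of the space-time metric):
$$\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0.$$
A freely falling body satisfies this equation and so feels no force; a body held on the Earth's surface is continuously pushed off its geodesic and does.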
Absolutism and relationalism:
Einstein partially advocates Mach's principle in that distant stars explain inertia because they provide the gravitational field against which acceleration and inertia occur. But contrary to Leibniz's account, this warped space-time is as integral a part of an object as are its other defining characteristics, such as volume and mass. If one holds, contrary to idealist beliefs, that objects exist independently of the mind, it seems that relativistic physics commits one to also hold that space and time have exactly the same type of independent existence.
Conventionalism:
The position of conventionalism states that there is no fact of the matter as to the geometry of space and time, but that it is decided by convention. The first proponent of such a view, Henri Poincaré, reacting to the creation of the new non-Euclidean geometry, argued that which geometry applied to a space was decided by convention, since different geometries will describe a set of objects equally well, based on considerations from his sphere-world.
Conventionalism:
This view was developed and updated to include considerations from relativistic physics by Hans Reichenbach. Reichenbach's conventionalism, applying to space and time, centres on the idea of coordinative definition.
Conventionalism:
Coordinative definition has two major features. The first has to do with coordinating units of length with certain physical objects. This is motivated by the fact that we can never directly apprehend length. Instead we must choose some physical object, say the Standard Metre at the Bureau International des Poids et Mesures (International Bureau of Weights and Measures), or the wavelength of cadmium to stand in as our unit of length. The second feature deals with separated objects. Although we can, presumably, directly test the equality of length of two measuring rods when they are next to one another, we can not find out as much for two rods distant from one another. Even supposing that two rods, whenever brought near to one another are seen to be equal in length, we are not justified in stating that they are always equal in length. This impossibility undermines our ability to decide the equality of length of two distant objects. Sameness of length, to the contrary, must be set by definition.
Conventionalism:
Such a use of coordinative definition is in effect, on Reichenbach's conventionalism, in the General Theory of Relativity where light is assumed, i.e. not discovered, to mark out equal distances in equal times. After this setting of coordinative definition, however, the geometry of spacetime is set.
As in the absolutism/relationalism debate, contemporary philosophy is still in disagreement as to the correctness of the conventionalist doctrine.
Structure of space-time:
Building from a mix of insights from the historical debates of absolutism and conventionalism as well as reflecting on the import of the technical apparatus of the General Theory of Relativity, details as to the structure of space-time have made up a large proportion of discussion within the philosophy of space and time, as well as the philosophy of physics. The following is a short list of topics.
Structure of space-time:
Relativity of simultaneity According to special relativity each point in the universe can have a different set of events that compose its present instant. This has been used in the Rietdijk–Putnam argument to demonstrate that relativity predicts a block universe in which events are fixed in four dimensions.
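The frame-dependence of the present instant can be read off from the Lorentz transformation of time in its standard form (with $v$ the relative velocity of two inertial frames and $\gamma = 1/\sqrt{1 - v^2/c^2}$):
$$t' = \gamma\left(t - \frac{v x}{c^2}\right),$$
so two events that are simultaneous in one frame ($\Delta t = 0$) but spatially separated ($\Delta x \neq 0$) satisfy $\Delta t' = -\gamma v \Delta x / c^2 \neq 0$ in another: observers in relative motion disagree about which events count as present, which is the premise the Rietdijk–Putnam argument exploits.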
Invariance vs. covariance Bringing to bear the lessons of the absolutism/relationalism debate with the powerful mathematical tools invented in the 19th and 20th century, Michael Friedman draws a distinction between invariance upon mathematical transformation and covariance upon transformation.
Invariance, or symmetry, applies to objects, i.e. the symmetry group of a space-time theory designates what features of objects are invariant, or absolute, and which are dynamical, or variable.
Covariance applies to formulations of theories, i.e. the covariance group designates in which range of coordinate systems the laws of physics hold.
Structure of space-time:
This distinction can be illustrated by revisiting Leibniz's thought experiment, in which the universe is shifted over five feet. In this example the position of an object is seen not to be a property of that object, i.e. location is not invariant. Similarly, the covariance group for classical mechanics will be any coordinate systems that are obtained from one another by shifts in position as well as other translations allowed by a Galilean transformation.
Structure of space-time:
In the classical case, the invariance, or symmetry, group and the covariance group coincide, but they part ways in relativistic physics. The symmetry group of the general theory of relativity includes all differentiable transformations, i.e., all properties of an object are dynamical, in other words there are no absolute objects. The formulations of the general theory of relativity, unlike those of classical mechanics, do not share a standard, i.e., there is no single formulation paired with transformations. As such the covariance group of the general theory of relativity is just the covariance group of every theory.
Structure of space-time:
Historical frameworks A further application of the modern mathematical methods, in league with the idea of invariance and covariance groups, is to try to interpret historical views of space and time in modern, mathematical language.
Structure of space-time:
In these translations, a theory of space and time is seen as a manifold paired with vector spaces: the more vector spaces, the more facts there are about objects in that theory. The historical development of spacetime theories is generally seen to start from a position where many facts about objects are incorporated in that theory, and, as history progresses, more and more structure is removed.
Structure of space-time:
For example, Aristotelian space and time has both absolute position and special places, such as the center of the cosmos, and the circumference. Newtonian space and time has absolute position and is Galilean invariant, but does not have special positions.
Holes With the general theory of relativity, the traditional debate between absolutism and relationalism has been shifted to whether spacetime is a substance, since the general theory of relativity largely rules out the existence of, e.g., absolute positions. One powerful argument against spacetime substantivalism, offered by John Earman, is known as the "hole argument".
This is a technical mathematical argument but can be paraphrased as follows: Define a function d as the identity function over all elements over the manifold M, excepting a small neighbourhood H belonging to M. Over H, d comes to differ from the identity by a smooth function.
With use of this function d we can construct two mathematical models, where the second is generated by applying d to proper elements of the first, such that the two models are identical prior to the time t=0, where t is a time function created by a foliation of spacetime, but differ after t=0.
These considerations show that, since substantivalism allows the construction of holes, the universe must, on that view, be indeterministic. This, Earman argues, is a case against substantivalism, as the choice between determinism and indeterminism should be a question of physics, not of our commitment to substantivalism.
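A compact restatement of the construction just paraphrased (the symbols $g$ and $T$ for the metric and matter fields are added here for illustration and are not Earman's notation): let $d\colon M \to M$ be a diffeomorphism with
$$d\big|_{M\setminus H} = \mathrm{id}, \qquad d\big|_{H} \neq \mathrm{id},$$
where the hole $H$ lies entirely after $t = 0$. If $\langle M, g, T\rangle$ solves the field equations, so does the pushed-forward model $\langle M, d_{*}g, d_{*}T\rangle$; the two agree for $t < 0$ and differ inside $H$. A substantivalist who counts them as physically distinct must therefore concede that the state of the world before $t = 0$ fails to determine the state after it.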
Direction of time:
The problem of the direction of time arises directly from two contradictory facts. Firstly, the fundamental physical laws are time-reversal invariant; if a cinematographic film were taken of any process describable by means of the aforementioned laws and then played backwards, it would still portray a physically possible process. Secondly, our experience of time, at the macroscopic level, is not time-reversal invariant. Glasses can fall and break, but shards of glass cannot reassemble and fly up onto tables. We have memories of the past, and none of the future. We feel we can't change the past but can influence the future.
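A worked illustration of what time-reversal invariance means for a fundamental law, using Newtonian mechanics for a particle in a potential $V$: if $x(t)$ satisfies
$$m\,\ddot{x}(t) = -\nabla V\big(x(t)\big),$$
then the reversed motion $y(t) = x(-t)$ satisfies the same equation, because the second time derivative is unchanged under $t \mapsto -t$. Any motion the law allows is therefore also allowed when run backwards, which is exactly the "film played backwards" point above.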
Direction of time:
Causation solution One solution to this problem takes a metaphysical view, in which the direction of time follows from an asymmetry of causation. We know more about the past because the elements of the past are causes for the effect that is our perception. We feel we can't affect the past and can affect the future because, in fact, we can't affect the past and can affect the future.
Direction of time:
There are two main objections to this view. First is the problem of distinguishing the cause from the effect in a non-arbitrary way. The use of causation in constructing a temporal ordering could easily become circular. The second problem with this view is its explanatory power. While the causation account, if successful, may account for some time-asymmetric phenomena like perception and action, it does not account for many others.
Direction of time:
However, asymmetry of causation can be observed in a non-arbitrary way which is not metaphysical, in the case of a human hand dropping a cup of water which smashes into fragments on a hard floor, spilling the liquid. In this order, the causes of the resultant pattern of cup fragments and water spill are easily attributable in terms of the trajectory of the cup, irregularities in its structure, the angle of its impact on the floor, etc. However, considering the same event in reverse, it is difficult to explain why the various pieces of the cup should fly up into the human hand and reassemble precisely into the shape of a cup, or why the water should position itself entirely within the cup. The causes of the resultant structure and shape of the cup and the encapsulation of the water by the hand within the cup are not easily attributable, as neither hand nor floor can achieve such formations of the cup or water. This asymmetry is perceivable on account of two features: i) the relationship between the agent capacities of the human hand (i.e., what it is and is not capable of and what it is for) and non-animal agency (i.e., what floors are and are not capable of and what they are for) and ii) that the pieces of cup came to possess exactly the nature and number of those of a cup before assembling. In short, such asymmetry is attributable to the relationship between i) temporal direction and ii) the implications of form and functional capacity.
Direction of time:
The application of these ideas of form and functional capacity only dictates temporal direction in relation to complex scenarios involving specific, non-metaphysical agency which is not merely dependent on human perception of time. However, this last observation in itself is not sufficient to invalidate the implications of the example for the progressive nature of time in general.
Thermodynamics solution The second major family of solutions to this problem, and by far the one that has generated the most literature, finds the existence of the direction of time as relating to the nature of thermodynamics.
The answer from classical thermodynamics states that while our basic physical theory is, in fact, time-reversal symmetric, thermodynamics is not. In particular, the second law of thermodynamics states that the net entropy of a closed system never decreases, and this explains why we often see glass breaking, but not coming back together.
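In its classical form, the second law invoked here says that for a thermally isolated system the entropy never decreases:
$$\Delta S \geq 0 \quad \text{(isolated system)},$$
and since the shattered glass is a higher-entropy state than the intact one, the law permits the breaking but forbids the spontaneous reassembly.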
Direction of time:
But in statistical mechanics things become more complicated. On one hand, statistical mechanics is far superior to classical thermodynamics, in that thermodynamic behavior, such as glass breaking, can be explained by the fundamental laws of physics paired with a statistical postulate. But statistical mechanics, unlike classical thermodynamics, is time-reversal symmetric. The second law of thermodynamics, as it arises in statistical mechanics, merely states that it is overwhelmingly likely that net entropy will increase, but it is not an absolute law.
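The statistical-mechanical reading ties entropy to the number $\Omega$ of microstates compatible with a macrostate, via Boltzmann's formula
$$S = k_B \ln \Omega.$$
High-entropy macrostates correspond to vastly more microstates, so a system is overwhelmingly likely to evolve toward them; but because the underlying dynamics is time-reversal symmetric, nothing strictly forbids the rare fluctuation in the opposite direction, which is why the statistical second law is probabilistic rather than absolute.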
Direction of time:
Current thermodynamic solutions to the problem of the direction of time aim to find some further fact, or feature of the laws of nature to account for this discrepancy.
Direction of time:
Laws solution A third type of solution to the problem of the direction of time, although much less represented, argues that the laws are not time-reversal symmetric. For example, certain processes in quantum mechanics, relating to the weak nuclear force, are not time-reversible, keeping in mind that when dealing with quantum mechanics time-reversibility has a more complex definition. But this type of solution is insufficient because 1) the time-asymmetric phenomena in quantum mechanics are too few to account for the uniformity of macroscopic time-asymmetry and 2) it relies on the assumption that quantum mechanics is the final or correct description of physical processes. One recent proponent of the laws solution is Tim Maudlin, who argues that the fundamental laws of physics are laws of temporal evolution (see Maudlin [2007]). However, elsewhere Maudlin argues: "[the] passage of time is an intrinsic asymmetry in the temporal structure of the world... It is the asymmetry that grounds the distinction between sequences that run from past to future and sequences which run from future to past" [ibid, 2010 edition, p. 108]. Thus it is arguably difficult to assess whether Maudlin is suggesting that the direction of time is a consequence of the laws or is itself primitive.
Flow of time:
The problem of the flow of time, as it has been treated in analytic philosophy, owes its beginning to a paper written by J. M. E. McTaggart, in which he proposes two "temporal series". The first series, which means to account for our intuitions about temporal becoming, or the moving Now, is called the A-series. The A-series orders events according to their being in the past, present or future, simpliciter and in comparison to each other. The B-series eliminates all reference to the present, and the associated temporal modalities of past and future, and orders all events by the temporal relations earlier than and later than. In many ways, the debate between proponents of these two views can be seen as a continuation of the early modern debate between the view that there is absolute time (defended by Isaac Newton) and the view that there is only merely relative time (defended by Gottfried Leibniz).
Flow of time:
McTaggart, in his paper "The Unreality of Time", argues that time is unreal since a) the A-series is inconsistent and b) the B-series alone cannot account for the nature of time as the A-series describes an essential feature of it.
Flow of time:
Building from this framework, two camps of solution have been offered. The first, the A-theorist solution, takes becoming as the central feature of time, and tries to construct the B-series from the A-series by offering an account of how B-facts come to be out of A-facts. The second camp, the B-theorist solution, takes as decisive McTaggart's arguments against the A-series and tries to construct the A-series out of the B-series, for example, by temporal indexicals.
Dualities:
Quantum field theory models have shown that it is possible for theories in two different space-time backgrounds, like AdS/CFT or T-duality, to be equivalent.
Presentism and eternalism:
According to Presentism, time is an ordering of various realities. At a certain time, some things exist and others do not. This is the only reality we can deal with and we cannot, for example, say that Homer exists because at the present time he does not. An Eternalist, on the other hand, holds that time is a dimension of reality on a par with the three spatial dimensions, and hence that all things—past, present and future—can be said to be just as real as things in the present. According to this theory, then, Homer really does exist, though we must still use special language when talking about somebody who exists at a distant time—just as we would use special language when talking about something far away (the very words near, far, above, below, and such are directly comparable to phrases such as in the past, a minute ago, and so on).
Endurantism and perdurantism:
The positions on the persistence of objects are somewhat similar. An endurantist holds that for an object to persist through time is for it to exist completely at different times (each instance of existence we can regard as somehow separate from previous and future instances, though still numerically identical with them). A perdurantist on the other hand holds that for a thing to exist through time is for it to exist as a continuous reality, and that when we consider the thing as a whole we must consider an aggregate of all its "temporal parts" or instances of existing. Endurantism is seen as the conventional view and flows out of our pre-philosophical ideas (when I talk to somebody I think I am talking to that person as a complete object, and not just a part of a cross-temporal being), but perdurantists such as David Lewis have attacked this position. They argue that perdurantism is the superior view for its ability to take account of change in objects.
Endurantism and perdurantism:
On the whole, Presentists are also endurantists and Eternalists are also perdurantists (and vice versa), but this is not a necessary relation and it is possible to claim, for instance, that time's passage indicates a series of ordered realities, but that objects within these realities somehow exist outside of the reality as a whole, even though the realities as wholes are not related. However, such positions are rarely adopted. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chopart's fracture–dislocation**
Chopart's fracture–dislocation:
Chopart's fracture–dislocation is a dislocation of the mid-tarsal (talonavicular and calcaneocuboid) joints of the foot, often with associated fractures of the calcaneus, cuboid and navicular.
Presentation:
The foot is usually dislocated medially (80%) and superiorly, which occurs when the foot is plantar flexed and inverted.
Lateral displacement occurs during eversion injuries.
Associated fractures of calcaneus, cuboid and navicular are frequent.
Open fractures occur in a small percentage.
Mechanism:
Chopart's fracture–dislocation is usually caused by falls from height, traffic collisions and twisting injuries to the foot as seen in basketball players.
Diagnosis:
Diagnosis is made on plain radiograph of the foot, although the extent of injury is often underestimated.
Treatment:
Treatment comprises early reduction of the dislocation, and frequently involves open reduction internal fixation to restore and stabilise the talonavicular joint. Open reduction and fusion of the calcaneocuboid joint is occasionally required.
Prognosis:
With prompt treatment, particularly open reduction, and early mobilisation the outcome is generally good. High energy injuries and associated fractures worsen the outcome. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stress distribution in soil**
Stress distribution in soil:
Stress distribution in soil is a function of the type of soil, the relative rigidity of the soil and the footing, and the depth of foundation at the level of contact between footing and soil. The estimation of vertical stresses at any point in a soil mass due to external loading is essential to the prediction of settlements of buildings, bridges and pressure.
Some cases:
Some cases include: a finitely loaded area; a vertical line load at the surface; a vertical point load at the surface. The solution to the problem of calculating the stresses in an elastic half space subjected to a vertical point load at the surface will be of value in estimating the stresses induced in a deposit of soil whose depth is large compared to the dimensions of that part of the surface that is loaded.
Some cases:
With $\cos\theta = z/R$ and $R = \sqrt{r^2 + z^2}$, the vertical stress increment at depth $z$ and radial offset $r$ due to a vertical point load $P$ at the surface is
$$\Delta\sigma_z = \frac{3Pz^3}{2\pi R^5} = \frac{3P}{2\pi}\,\frac{z^3}{(r^2+z^2)^{5/2}} = \frac{3P}{2\pi z^2\left[1+(r/z)^2\right]^{5/2}}.$$
Below the centre of a uniformly loaded circular area of radius $R$ carrying pressure $q$,
$$\sigma_z = q\left\{1 - \frac{1}{\left[(R/z)^2+1\right]^{3/2}}\right\}.$$ | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
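As a rough illustration of the formulas above, here is a minimal Python sketch (the function and variable names are my own, chosen for illustration) that evaluates the vertical stress increase for a surface point load and below the centre of a uniformly loaded circular area; it returns the magnitude of the stress increase, with compression taken as positive.

```python
import math

def point_load_stress(P, r, z):
    """Boussinesq vertical stress increase at depth z and radial offset r
    due to a vertical point load P applied at the surface."""
    R = math.sqrt(r**2 + z**2)
    return 3.0 * P * z**3 / (2.0 * math.pi * R**5)

def circular_area_centre_stress(q, radius, z):
    """Vertical stress at depth z below the centre of a circular area of
    the given radius, uniformly loaded with pressure q."""
    return q * (1.0 - 1.0 / ((radius / z)**2 + 1.0)**1.5)

# Example: a 100 kN point load, evaluated 2 m below and 1 m to the side
# (with P in kN and lengths in m, the result is in kPa).
print(point_load_stress(P=100.0, r=1.0, z=2.0))

# Example: 50 kPa over a circular footing of 3 m radius, 2 m below its centre.
print(circular_area_centre_stress(q=50.0, radius=3.0, z=2.0))
```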
**L1TD1P1**
L1TD1P1:
LINE-1 type transposase domain containing 1 pseudogene 1 is a human pseudogene denoted L1TD1P1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chink in one's armor**
Chink in one's armor:
The idiom "chink in one's armor" refers to an area of vulnerability. It has traditionally been used to refer to a weak spot in a figurative suit of armor. The standard meaning is similar to that of Achilles' heel.Grammarist provides a sample usage by The Daily Telegraph that they find acceptable: "Such hype was anathema for the modest professional fighter, who has 22 victories under his belt, and not a perceptible chink in his armour."
Etymology:
The phrase "chink in one's armor" has been used idiomatically since the mid-17th century. It is based on a definition of chink meaning "a crack or gap," dating back to around 1400.
Notable controversies:
While the phrase itself is innocuous, its use in contemporary times has caused controversy in the United States due to it including the homonym "chink", which can be interpreted as an ethnic slur to refer to someone of Chinese or East Asian descent.
Notable controversies:
ESPN Considerable controversy was generated in the United States after two incidents regarding Taiwanese American basketball player Jeremy Lin and the network ESPN occurred in the same week. An editor used the phrase as a headline on the company's web site in February 2012; the headline was titled "Chink In The Armor", and referred specifically to Lin. The headline was a reference to Lin's unsuccessful game against the New Orleans Hornets, suggesting that Jeremy Lin's popularity and winning streak were weakening. While ESPN had used the phrase "chink in the armor" on its website over 3,000 times before, its usage in this instance was considered offensive because it directly referred to a person of Asian descent. Many viewed the usage of the phrase as a double entendre. ESPN quickly removed the headline, apologized, and said it was conducting an internal review. The editor, Anthony Federico, denied any idiomatic usage, saying "This had nothing to do with me being cute or punny ... I'm so sorry that I offended people. I'm so sorry if I offended Jeremy." Nevertheless, he was fired. On-air ESPN commentator Max Bretos also used the same phrase to refer to Lin, asking "If there is a chink in the armor, where can Lin improve his game?" Bretos apologized, saying "My wife is Asian, would never intentionally say anything to disrespect her and that community." He was suspended for 30 days. Forbes believes he did so without racist intent. Comedy television show Saturday Night Live satirized ESPN's use of the phrase, pointing out the difference in society's reaction to racial jokes about Asian people versus racial jokes about black people. In the skit, three sports commentators were featured happily making jokes about Lin's race, while a fourth drew contempt for making similar comments about black players.
Notable controversies:
Other A commentator on CNBC in 2013 used the phrase when referring to Rupert Murdoch's divorce of his Asian-American wife, sparking outrage. In 2015, The Wall Street Journal used the idiom in a tweet to promote an article about various difficulties China's paramount leader Xi Jinping was encountering. The organization subsequently deleted the post, stating that "a common idiom used might be seen as a slur. No offense was intended." In October 2018, TBS baseball announcer Ron Darling, who himself is of Chinese descent, used the phrase during a Yankees-Red Sox playoff game, referring to the performance of Japanese pitcher Masahiro Tanaka, and immediately received similar criticism. Darling later apologized for his unintentional choice of words. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**In vitro spermatogenesis**
In vitro spermatogenesis:
In vitro spermatogenesis is the process of creating male gametes (spermatozoa) outside of the body in a culture system. The process could be useful for fertility preservation, infertility treatment and may further develop the understanding of spermatogenesis at the cellular and molecular level. Spermatogenesis is a highly complex process and artificially rebuilding it in vitro is challenging. The challenges include creating a microenvironment similar to that of the testis, supporting endocrine and paracrine signalling, and ensuring survival of the somatic and germ cells from spermatogonial stem cells (SSCs) to mature spermatozoa. Different methods of culturing can be used in the process, such as isolated cell cultures, fragment cultures and 3D cultures.
Culture techniques:
Isolated cell cultures Cell cultures can include either monocultures, where one cell population is cultured, or co-culturing systems, where at least two cell lines are cultured together. Cells are initially isolated for culture by enzymatically digesting the testis tissue to separate out the different cell types for culture. The process of isolating cells can lead to cell damage. The main advantage of monoculture is that the effect of different influences on one specific cell population can be investigated. Co-culture allows for the interactions between cell populations to be observed and experimented on, which is seen as an advantage over the monoculture model. Isolated cell culture, specifically co-culture of testis tissue, has been a useful technique for examining the influences of specific factors, such as hormones or different feeder cells, on the progression of spermatogenesis in vitro. For example, factors such as temperature, feeder cell influence and the role of testosterone and follicle-stimulating hormone (FSH) have all been investigated using isolated cell culture techniques. Studies have concluded that different factors can influence the culture of germ cells, e.g. media, growth factors, hormones and temperature. For example, when culturing immortalized mouse germ cells at temperatures of 35, 37 and 29°C, these cells proliferate most rapidly at the highest temperature and least rapidly at the lowest, but there were varying levels of differentiation: at the highest temperature no differentiation was detected, some was seen at 37°C, and some early spermatids appeared at 32°C. The isolated cell culture technique has been successfully used for in vitro production of sperm using the mouse as an animal model. Investigations of appropriate feeder cells concluded that a variety of cells could encourage development of germ cells, such as Sertoli cells, Leydig cells and peritubular myoid cells; the most essential are Sertoli cells, but Leydig and peritubular myoid cells both contribute to the microenvironment that encourages stem cells to remain pluripotent and self-renew in the testis.
Culture techniques:
Testes fragment cultures In fragment cultures, the testis is removed and fragments of tissue are cultured in supplemental media containing different growth factors to induce spermatogenesis and form functional gametes. The development of this culture technique has taken place mainly with the use of animal models, e.g. mouse or rat testis tissue.
Culture techniques:
The advantage of using this method is that it maintains the natural spatial arrangement of the seminiferous tubules. However, hypoxia is a recurring problem in these cultures, where the low oxygen supply hinders the development and maturation of spermatids (significantly more in adult than in immature testis tissues). Other challenges with this type of culture include maintaining the structure of the seminiferous tubules, which becomes more difficult in longer-term cultures as the tissue can flatten out, making it hard to work with. To resolve some of these issues, 3D cultures can be used.
Culture techniques:
In 2012, mature spermatozoa capable of fertilization were isolated from in vitro culture of immature mouse testis tissue.
Culture techniques:
3D cultures 3D cultures use sponges, models or scaffolds that resemble the elements of the extracellular matrix to achieve a more natural spatial structure of the seminiferous tubules and to better represent the tissues and the interaction between different cell types in an ex vivo experiment. Different components of the extracellular matrix, such as collagen, agar and calcium alginate, are commonly used to form the gel or scaffold, which can provide oxygen and nutrients. To propagate 3D cultures, testicular cell cultures are embedded into the porous sponge/scaffold and allowed to colonise the structure, which can then survive for several weeks to allow spermatogonia to differentiate and mature into spermatozoa.
Culture techniques:
In addition, shaking 3D cultures during the seeding process allows for an increased oxygen supply, which helps overcome the issue of hypoxia and so improves the lifespan of the cells. In contrast to monocultures, fragment/3D cultures are able to establish in vitro conditions that can somewhat resemble the testicular microenvironment, allowing a more accurate study of testicular physiology and its associations with the in vitro development of sperm cells.
Future implications:
Scientific The ability to recapitulate spermatogenesis in vitro provides a unique opportunity to study this biological process through oftentimes cheaper and faster methods of research than in vivo work. Observation is often easier in vitro, as the targeted cells are mostly isolated and immobile. Another significant advantage of in vitro research is the ease with which environmental factors can be changed and monitored. There are also techniques which are not practical or feasible in vivo that can now be explored. In vitro work is not without its own challenges. For example, one loses the natural structure provided by the in vivo tissue, and thus cell connections which could be important to the function of the tissue.
Future implications:
Clinical While rodent spermatogenesis is not identical to its human counterpart, especially due to the high evolution rate of the male reproductive tract, these techniques are a solid starting point for future human applications. Various categories of infertile men may benefit from advances in these techniques, especially those with a lack of viable gamete production. These men cannot benefit, for example, from sperm extraction techniques, and currently have little to no options for producing genetic descendants. Notably, males who have undergone chemo/radiotherapy prepubertally may benefit from in vitro spermatogenesis. These people did not have the option to cryopreserve viable sperm before their procedure, and thus the ability to generate genetically descended sperm later in life is invaluable. Possible methods that could be applied (to this and other groups) are induction of spermatogenesis in testis samples taken prepubertally, or, if these samples are not available or viable, new methods that manipulate stem cell differentiation to produce SSCs 'from scratch' using adult stem cell samples. An alternative method is to graft preserved tissue back onto adult cancer survivors; however, this comes with operational risks, as well as a risk of reintroducing malignant cells. Even if using this method, however, advances in in vitro spermatogenesis would allow for sample expansion and observation to better ensure the quality and quantity of graft tissue. In those with healthy or preserved SSCs but without a cellular environment to support them, in vitro spermatogenesis could be used following transplant of the SSCs into healthy donor tissue. Another group that could be helped by in vitro spermatogenesis are those with any form of genetic impediment to sperm production. Those with no viable SSC development are an obvious target, but also those with varying levels of spermatogenic arrest; previously their underdeveloped germ cells have been injected into oocytes, however this has a success rate of only 3% in humans. Finally, in vitro spermatogenesis using animal or human cells can be used to assess the effects and toxicity of drugs before in vivo testing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**National Institute for Environmental eScience**
National Institute for Environmental eScience:
The National Institute for Environmental eScience (NIEeS) was a collaboration between the Natural Environment Research Council (NERC) and the University of Cambridge. It was established in July 2002 and, in addition to its main role of promoting and supporting the use of e-Science and grid technologies within the field of environmental research, its purpose was to train scientists in environmental eScience, demonstrate environmental eScience, help develop the environmental eScience community, and aid collaborations between scientists and industries. It was intended as a national resource to be "owned by the whole community". The website remains available; however, the contract for the project ended in August 2008. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Topographic map**
Topographic map:
In modern mapping, a topographic map or topographic sheet is a type of map characterized by large-scale detail and quantitative representation of relief features, usually using contour lines (connecting points of equal elevation), but historically using a variety of methods. Traditional definitions require a topographic map to show both natural and artificial features. A topographic survey is typically based upon systematic observation and published as a map series, made up of two or more map sheets that combine to form the whole map. A topographic map series uses a common specification that includes the range of cartographic symbols employed, as well as a standard geodetic framework that defines the map projection, coordinate system, ellipsoid and geodetic datum. Official topographic maps also adopt a national grid referencing system.
Topographic map:
Natural Resources Canada provides this description of topographic maps: These maps depict in detail ground relief (landforms and terrain), drainage (lakes and rivers), forest cover, administrative areas, populated areas, transportation routes and facilities (including roads and railways), and other man-made features.
Topographic map:
Other authors define topographic maps by contrasting them with another type of map; they are distinguished from smaller-scale "chorographic maps" that cover large regions, "planimetric maps" that do not show elevations, and "thematic maps" that focus on specific topics. However, in the vernacular and day-to-day world, the representation of relief (contours) is popularly held to define the genre, such that even small-scale maps showing relief are commonly (and erroneously, in the technical sense) called "topographic". The study or discipline of topography is a much broader field of study, which takes into account all natural and man-made features of terrain. Maps were among the first artifacts to record observations about topography.
History:
Topographic maps are based on topographical surveys. Performed at large scales, these surveys are called topographical in the old sense of topography, showing a variety of elevations and landforms. This is in contrast to older cadastral surveys, which primarily show property and governmental boundaries. The first multi-sheet topographic map series of an entire country, the Carte géométrique de la France, was completed in 1789. The Great Trigonometric Survey of India, started by the East India Company in 1802 and taken over by the British Raj after 1857, was notable as a successful effort on a larger scale and for accurately determining heights of Himalayan peaks from viewpoints over one hundred miles distant.
History:
Topographic surveys were prepared by the military to assist in planning for battle and for defensive emplacements (thus the name and history of the United Kingdom's Ordnance Survey). As such, elevation information was of vital importance. As they evolved, topographic map series became a national resource in modern nations in planning infrastructure and resource exploitation. In the United States, the national map-making function which had been shared by both the Army Corps of Engineers and the Department of the Interior migrated to the newly created United States Geological Survey in 1879, where it has remained since. 1913 saw the beginning of the International Map of the World initiative, which set out to map all of Earth's significant land areas at a scale of 1:1 million, on about one thousand sheets, each covering four degrees latitude by six or more degrees longitude. Excluding borders, each sheet was 44 cm high and (depending on latitude) up to 66 cm wide. Although the project eventually foundered, it left an indexing system that remains in use.
History:
By the 1980s, centralized printing of standardized topographic maps began to be superseded by databases of coordinates that could be used on computers by moderately skilled end users to view or print maps with arbitrary contents, coverage and scale. For example, the federal government of the United States' TIGER initiative compiled interlinked databases of federal, state and local political borders and census enumeration areas, and of roadways, railroads, and water features with support for locating street addresses within street segments. TIGER was developed in the 1980s and used in the 1990 and subsequent decennial censuses. Digital elevation models (DEM) were also compiled, initially from topographic maps and stereographic interpretation of aerial photographs and then from satellite photography and radar data. Since all these were government projects funded with taxes and not classified for national security reasons, the datasets were in the public domain and freely usable without fees or licensing.
History:
TIGER and DEM datasets greatly facilitated geographic information systems and made the Global Positioning System much more useful by providing context around locations given by the technology as coordinates. Initial applications were mostly professionalized forms such as innovative surveying instruments and agency-level GIS systems tended by experts. By the mid-1990s, increasingly user-friendly resources such as online mapping in two and three dimensions, integration of GPS with mobile phones, and automotive navigation systems appeared. As of 2011, the future of standardized, centrally printed topographical maps remained somewhat in doubt.
Uses:
Topographic maps have multiple uses in the present day: any type of geographic planning or large-scale architecture; earth sciences and many other geographic disciplines; mining and other earth-based endeavours; and civil engineering and recreational uses such as hiking and orienteering.
It takes practice and skill to read and interpret a topographic map. This includes not only how to identify map features, but also how to interpret contour lines to infer landforms like cliffs, ridges, draws, etc. Training in map reading is often given in orienteering, scouting, and the military.
Conventions:
The various features shown on the map are represented by conventional signs or symbols. For example, colors can be used to indicate a classification of roads. These signs are usually explained in the margin of the map, or on a separately published characteristic sheet. Topographic maps are also commonly called contour maps or topo maps. In the United States, where the primary national series is organized by a strict 7.5-minute grid, they are often called quads or quadrangles.
Conventions:
Topographic maps conventionally show topography, or land contours, by means of contour lines. Contour lines are curves that connect contiguous points of the same altitude (isohypse). In other words, every point on the marked line of 100 m elevation is 100 m above mean sea level.
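To make the idea of isohypses concrete, the following Python sketch draws contour lines at a 100 m interval from gridded elevation data, the same kind of data stored in a digital elevation model. Everything here is an illustrative assumption: the synthetic hill-shaped elevation surface, the grid resolution, and the contour interval are invented for the example and do not come from any real map series.

```python
# Sketch: drawing contour lines (isohypses) from a synthetic elevation grid.
# The elevation surface below is invented purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

# Build a grid of elevations (metres) over a hypothetical 10 km x 10 km area.
x = np.linspace(0, 10_000, 200)          # easting in metres
y = np.linspace(0, 10_000, 200)          # northing in metres
X, Y = np.meshgrid(x, y)
# A hypothetical hill: roughly 800 m peak near the centre, falling off with distance.
elevation = 800 * np.exp(-((X - 5_000)**2 + (Y - 4_000)**2) / (2 * 2_500**2))

# Contour lines connect points of equal elevation; here one line every 100 m.
levels = np.arange(0, 900, 100)
contours = plt.contour(X, Y, elevation, levels=levels, colors="saddlebrown")
plt.clabel(contours, fmt="%d m")          # label each isohypse with its elevation
plt.gca().set_aspect("equal")
plt.xlabel("Easting (m)")
plt.ylabel("Northing (m)")
plt.title("Synthetic topographic contours (100 m interval)")
plt.show()
```

Every point along one of the plotted curves sits at the same elevation, which is exactly the property the map convention relies on.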
These maps usually show not only the contours, but also any significant streams or other bodies of water, forest cover, built-up areas or individual buildings (depending on scale), and other features and points of interest, such as the direction in which streams flow.
Most topographic maps were prepared using photogrammetric interpretation of aerial photography using a stereoplotter. Modern mapping also employs lidar and other remote sensing techniques. Older topographic maps were prepared using traditional surveying instruments.
The cartographic style (content and appearance) of topographic maps is highly variable between national mapping organizations. Aesthetic traditions and conventions persist in topographic map symbology, particularly amongst European countries at medium map scales.
Publishers of national topographic map series:
Although virtually the entire terrestrial surface of Earth has been mapped at scale 1:1,000,000, medium and large-scale mapping has been accomplished intensively in some countries and much less in others. Several commercial vendors supply international topographic map series.
According to the European directive 2007/2/EC (INSPIRE), national mapping agencies of European Union countries must have publicly available services for searching, viewing and downloading their official map series. Topographic maps produced by some of them are available under a free license that allows re-use, such as a Creative Commons license. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lexical approach**
Lexical approach:
The lexical approach is a method of teaching foreign languages described by Michael Lewis in the early 1990s. The basic concept on which this approach rests is the idea that an important part of learning a language consists of being able to understand and produce lexical phrases as chunks. Students taught in this way learn to perceive patterns of language (grammar) as well as to have meaningful set uses of words at their disposal. In 2000, Norbert Schmitt, an American linguist and Professor of Applied Linguistics at the University of Nottingham in the United Kingdom, contributed to a learning theory supporting the lexical approach; he stated that "the mind stores and processes these [lexical] chunks as individual wholes." The short-term capacity of the brain is much more limited than its long-term capacity, so it is much more efficient for the brain to retrieve a lexical chunk as one piece of information rather than retrieving each word as a separate piece of information. The lexical method concentrates on teaching fixed expressions that are common in conversation since, according to Lewis, they make up a bigger portion of speech than original words and sentences. Vocabulary is prized over grammar per se in this approach. The teaching of chunks and set phrases has become common in English as a foreign or second language, though this is not necessarily primarily due to the Lexical Approach. This is because anywhere from 55–80% of native speakers' speech is derived from prefabricated phrases. Fluency could be considered unachievable if one did not learn prefabricated chunks or expressions.
Lexical approach:
Common lexical chunks include: "Have you ever ... been / seen / had / heard / tried?" Most language learners are accustomed to learning basic conversation starters, which are lexical chunks, including "Good morning," "How are you?" "Where is the restroom?" "Thank you," and "How much does this cost?" Language learners also use lexical chunks as templates or formulas to create new phrases: "What are you doing?" "What are you saying?" "What are you cooking?" "What are you looking for?"
Syllabus:
The lexical syllabus is a form of the propositional paradigm that takes 'word' as the unit of analysis and content for syllabus design. Various vocabulary selection studies can be traced back to the 1920s and 1930s (West 1926; Ogden 1930; Faucet et al. 1936), and recent advances in techniques for the computer analysis of large databases of authentic text have helped to resuscitate this line of work. The modern lexical syllabus is discussed in Sinclair & Renouf (1988), who state that the main benefit of a lexical syllabus is that it emphasizes utility - the student learns that which is most valuable because it is most frequent. Related work on collocation is reported by Sinclair (1987) and Kennedy (1989), and the Collins COBUILD English Course (Willis & Willis 1988) is cited as an exemplary pedagogic implementation of the work, though "in fact, however, the COBUILD textbooks utilize one of the more complex hybrid syllabi in current ESL texts" (Long & Crookes 1993:23).
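The corpus-driven side of this work is easy to illustrate: ranking words and word pairs by frequency in authentic text is the basic operation behind frequency-based vocabulary selection. The Python sketch below is only a simplified illustration of that idea; the tiny sample corpus, the bigram stand-in for "chunks", and the top-ten cut-off are invented for the example and do not reproduce the COBUILD methodology.

```python
# Sketch: frequency-based selection of single words and two-word chunks
# from a corpus, in the spirit of lexical-syllabus design.
# The sample corpus and thresholds are illustrative only.
from collections import Counter
import re

corpus = [
    "Have you ever been to London?",
    "Have you ever seen the sea?",
    "How are you doing today?",
    "What are you doing this weekend?",
    "What are you looking for?",
]

def tokenize(sentence):
    """Lower-case the sentence and keep alphabetic tokens only."""
    return re.findall(r"[a-z']+", sentence.lower())

word_counts = Counter()
bigram_counts = Counter()
for sentence in corpus:
    tokens = tokenize(sentence)
    word_counts.update(tokens)
    # Two-word sequences (bigrams) serve as a crude stand-in for lexical chunks.
    bigram_counts.update(zip(tokens, tokens[1:]))

# In a frequency-driven syllabus, the most frequent items would be taught first.
print("Top words:  ", word_counts.most_common(10))
print("Top chunks: ", bigram_counts.most_common(10))
```

Real lexical-syllabus work uses much larger corpora and collocation statistics rather than raw bigram counts, but the ranking-by-frequency principle is the same.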
Syllabus:
Sinclair & Renouf (1988:155) find that (as with other synthetic syllabi), claims made for the lexical syllabus are not supported by evidence, and the assertion that the lexical syllabus is "an independent syllabus, unrelated by any principles to any methodology" (Sinclair et al. 1988:155) is subject to the criticism levelled by Brumfit against notional functional syllabi, i.e. that it (in this case, deliberately) takes no cognisance of how a second language is learned. Since these observations were made, however, Willis (1990) and Lewis (1993) have gone some way to provide such a theoretical justification.
Sources:
Boers, Frank (2006) "Formulaic sequences and perceived oral proficiency: putting a Lexical Approach to the test," Language Teaching Research, Vol. 10, No. 3, 245-261 doi:10.1191/1362168806lr195oa.
Faucet, L., West, M., Palmer, H. & Thorndike, E.L. (1936). The Interim Report on Vocabulary Selection for the Teaching of English as a Foreign Language. London: P.S. King.
Lewis, Michael, ed. (1997). Implementing the Lexical Approach, Language Teaching Publications, Hove, England.
Lewis, Michael (1993) The Lexical Approach.
Ogden, C.K. (1930). Basic English: An Introduction with Rules and Grammar. London: Kegan Paul, Trench & Trubner.
Sinclair, B. (1996). Materials design for the promotion of learner autonomy: how explicit is explicit? In R. Pemberton, S.L. Edward, W.W.F. Or, and H.D. Pierson (Eds.). Taking Control: Autonomy in Language Learning. Hong Kong: Hong Kong University Press. 149-165.
West, M. (1926). Bilingualism (With Special Reference to Bengal). Calcutta: Bureau of Education, India.
Willis, J. & Willis, D. (Eds.) (1996). Challenge and Change in Language Teaching. Oxford: Heinemann.
Willis, D. (1990). The Lexical Syllabus. London: Collins. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hawk/goose effect**
Hawk/goose effect:
In ethology and cognitive ethology, the hawk/goose effect refers to a behavior observed in some young birds when another bird flies above them: if the flying bird is a goose, the young birds show no reaction, but if the flying bird is a hawk, the young birds either become more agitated or cower to reduce the danger. The observation that short-necked and long-tailed birds flying overhead caused alarm was noted by Oskar Heinroth. Friedrich Goethe conducted experiments with silhouettes to examine alarm reactions in 1937 and a more systematic study was conducted in the same year by Konrad Lorenz and Nikolaas Tinbergen which is considered one of the classic experiments of ethology.
Hawk/goose effect:
As part of their introduction of experimentalism into animal behavior research, they performed experiments in which they made two-dimensional silhouettes of various bird-like shapes and moved them across the young birds' line of vision. Goose-like shapes were ignored, while hawk-like shapes produced the response. Later Tinbergen reported that a single shape, an abstract composite of the hawk and goose silhouettes, could produce the effect if moved in one direction but not the other. A later study confirmed that perception of an object was influenced by the direction of motion, because the object in question was considered to be moving forwards in that direction. Initially thought to be an inborn instinct shaped by natural selection, the response was subsequently shown by others to be socially reinforced by other birds.
Hawk or goose distinguished by direction of movement:
As in Tinbergen's 1951 experiment, the same figure is used to represent both the hawk and the goose in most hawk/goose experiments. When moved in one direction, the figure resembles a hawk (short neck, long tail), while moved in the opposite direction it resembles a goose (long neck, short tail). The perceived identity influences how the figure is perceived to move: the figure has a short protrusion on one end of the wings and a long one on the other, and it is taken to be a hawk or a goose depending on which protrusion leads the movement and is therefore perceived as the head.
Innate or learned behaviour:
A Brief History Pointing to Innate Behavior Friedrich Goethe was the first to perform experiments using silhouettes (1937, 1940). He found that naive capercaillie exhibited a greater fear response to a silhouette of a hawk than to a circle, a triangle, or a generalized bird silhouette, but that this varied with both species and prior experience. Tinbergen, in 1951, pointed out that he was inspired by Oskar Heinroth's observations, in which Heinroth stated that domestic chickens are more alarmed by short-necked birds than by long-necked ones. This prompted Konrad Lorenz and Nikolaas Tinbergen to design experiments exploring the hawk/goose effect. Lorenz and Tinbergen worked together in 1937 on experiments that were each published separately in 1939. They reported differences in their experiments, with Lorenz arguing that a short neck only elicits a flight response in turkeys, while Tinbergen claimed: "The reactions of young gallinaceous birds, ducks, and geese to a flying bird of prey are released by the sign-stimulus 'short neck' among others". Tinbergen published two papers on the subject in 1948. In 1951 Tinbergen continued to report on what he described as innate behavior and stated that goslings display a fear response when an ambiguous goose-hawk figure is moved in the "hawk" direction, implying that goslings associate a particular shape with a particular direction of motion. There have been a number of other studies supporting Tinbergen's short neck hypothesis, some as recent as the 1980s. Helmut C. Mueller and Patricia G. Parker, in 1980, demonstrated that naive mallard ducklings show a greater variance in heart rate in response to the hawk models than to the goose models. They concluded that cardiac response is an excellent measurement of fear, and they controlled for learned behavior by maintaining the ducklings in a brooder until they were transported to the laboratory in opaque containers. In 1982, Elizabeth L. Moore and Helmut C. Mueller found that a chick's heart rate showed greater variance in response to the hawk model without prior, pertinent experience, suggesting a greater innate fear response to the hawk than to the goose. Obvious behavioral responses to fear were not identified. Most ethologists today believe that the behaviors elicited by the hawk/goose models are socially reinforced or are more likely to support Schleidt's "selective habituation hypothesis".
Innate or learned behaviour:
A Brief History Pointing to Learned Behavior After discrepancies between the results of Konrad Lorenz and Nikolaas Tinbergen, Hirsch et al., in 1955, concluded that Tinbergen's hypothesis could not be replicated in Leghorn chickens. In 1960, McNiven concluded that Tinbergen's hypothesis could not be replicated in ducklings, and in an attempt to replicate Tinbergen's experiments, Schleidt in 1961 came to believe that Tinbergen had falsified his data. Just as there are experiments that still support Tinbergen's short neck hypothesis, there are many experiments that do not.
Controversy:
Konrad Lorenz met Nikolaas Tinbergen in 1936 at the Leiden Instinct Symposium. In 1937, Tinbergen and Lorenz worked on two projects: an experimental analysis of egg-rolling behavior in the greylag goose, supporting the fixed action pattern hypothesis, and the responses of various young birds to cardboard models of raptors and other flying birds.
Controversy:
Based on Oskar Heinroth's hypothesis that domestic chickens showed the greatest amount of fear towards long-tailed, short-necked birds flying overhead, Tinbergen and Lorenz designed silhouettes that could represent a hawk-like figure if moved in one direction, or a goose-like figure if moved in the other direction. Tinbergen and Lorenz moved the models overhead of various species of birds and recorded their responses. Tinbergen believed that these experiments proved Heinroth's hypothesis, but Lorenz noted that the shape of the model did not seem to matter for all species except turkeys. Lorenz pointed out that all other species presented alarm responses (fixating, alarm calling and marching off to cover) regardless of the model used, and that the "slow relative speed" of a flying object can elicit an anti-predator response. Lorenz and Tinbergen published their original findings separately in 1939. Re-examination of the experiments was prompted by the differences in results.
Controversy:
In 1955, Hirsch et al. presented evidence that Tinbergen's hypothesis could not be replicated in Leghorn chickens. Schleidt, in 1961, did his best to replicate Tinbergen's experiments using free-ranging geese, ducks, and turkeys and found that, regardless of the shape, these birds presented a fear response that diminished over a number of trials, pointing to the likelihood of habituation. Schleidt did find evidence to support Lorenz's "slow relative speed" findings. A second experiment was performed by Schleidt in 1961 to determine whether turkeys that had never been exposed to a flying object would support Tinbergen's predisposition hypothesis and present a fear response to the hawk-shaped object in the hawk/goose model. Schleidt used 5 bronze turkeys that were raised indoors with no windows. Schleidt's results again supported the "selective habituation hypothesis," and not innate behavior. Thus, Tinbergen's short neck hypothesis appears to have been falsified. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dynamic topic model**
Dynamic topic model:
Within statistics, dynamic topic models are generative models that can be used to analyze the evolution of (unobserved) topics of a collection of documents over time. This family of models was proposed by David Blei and John Lafferty and is an extension to Latent Dirichlet Allocation (LDA) that can handle sequential documents. In LDA, both the order in which the words appear in a document and the order in which the documents appear in the corpus are ignored by the model. Whereas words are still assumed to be exchangeable, in a dynamic topic model the order of the documents plays a fundamental role. More precisely, the documents are grouped by time slice (e.g. years) and it is assumed that the documents of each group come from a set of topics that evolved from the set of the previous slice.
Topics:
Similarly to LDA and pLSA, in a dynamic topic model, each document is viewed as a mixture of unobserved topics. Furthermore, each topic defines a multinomial distribution over a set of terms. Thus, for each word of each document, a topic is drawn from the mixture and a term is subsequently drawn from the multinomial distribution corresponding to that topic.
Topics:
The topics, however, evolve over time. For instance, the two most likely terms of a topic at time t could be "network" and "Zipf" (in descending order) while the most likely ones at time t+1 could be "Zipf" and "percolation" (in descending order).
Model:
Define $\alpha_t$ as the per-document topic distribution at time $t$, $\beta_{t,k}$ as the word distribution of topic $k$ at time $t$, $\eta_{t,d}$ as the topic distribution for document $d$ at time $t$, $z_{t,d,n}$ as the topic for the $n$th word in document $d$ at time $t$, and $w_{t,d,n}$ as the specific word. In this model, the multinomial distributions $\alpha_{t+1}$ and $\beta_{t+1,k}$ are generated from $\alpha_t$ and $\beta_{t,k}$, respectively.
Even though multinomial distributions are usually written in terms of the mean parameters, representing them in terms of the natural parameters is better in the context of dynamic topic models.
Model:
The former representation has some disadvantages due to the fact that the parameters are constrained to be non-negative and sum to one. When defining the evolution of these distributions, one would need to assure that such constraints were satisfied. Since both distributions are in the exponential family, one solution to this problem is to represent them in terms of the natural parameters, that can assume any real value and can be individually changed.
Model:
Using the natural parameterization, the dynamics of the topic model are given by $\beta_{t,k} \mid \beta_{t-1,k} \sim \mathcal{N}(\beta_{t-1,k}, \sigma^2 I)$ and $\alpha_t \mid \alpha_{t-1} \sim \mathcal{N}(\alpha_{t-1}, \delta^2 I)$. The generative process at time slice $t$ is therefore: draw topics $\beta_{t,k} \mid \beta_{t-1,k} \sim \mathcal{N}(\beta_{t-1,k}, \sigma^2 I)$ for each topic $k$; draw the mixture model $\alpha_t \mid \alpha_{t-1} \sim \mathcal{N}(\alpha_{t-1}, \delta^2 I)$; for each document, draw $\eta_{t,d} \sim \mathcal{N}(\alpha_t, a^2 I)$; for each word, draw the topic $z_{t,d,n} \sim \operatorname{Mult}(\pi(\eta_{t,d}))$ and the word $w_{t,d,n} \sim \operatorname{Mult}(\pi(\beta_{t,z_{t,d,n}}))$, where $\pi$ is a mapping from the natural parameterization $x$ to the mean parameterization, namely $\pi(x)_i = \exp(x_i) / \sum_j \exp(x_j)$.
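To make the generative process concrete, here is a minimal NumPy sketch that samples a tiny corpus forward through the dynamics above. The number of topics, vocabulary size, number of time slices, corpus dimensions and variance values are all invented for illustration; real implementations (such as Blei and Lafferty's) fit the model to observed documents by variational inference rather than sampling from it.

```python
# Sketch: forward sampling from the dynamic topic model's generative process.
# All sizes and variances below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
K, V = 3, 20                            # number of topics and vocabulary size (assumed)
T = 4                                   # number of time slices
docs_per_slice, words_per_doc = 5, 10
sigma2, delta2, a2 = 0.05, 0.05, 1.0    # evolution and document-level variances

def softmax(x):
    """Map natural parameters to the mean (probability) parameterization."""
    e = np.exp(x - x.max())
    return e / e.sum()

alpha = rng.normal(0.0, 1.0, size=K)        # alpha_0
beta = rng.normal(0.0, 1.0, size=(K, V))    # beta_{0,k} for each topic k

corpus = []                                 # corpus[t][d] is a list of word ids
for t in range(T):
    # Topic parameters and topic proportions drift as Gaussian random walks.
    beta = beta + rng.normal(0.0, np.sqrt(sigma2), size=(K, V))
    alpha = alpha + rng.normal(0.0, np.sqrt(delta2), size=K)
    slice_docs = []
    for _ in range(docs_per_slice):
        eta = rng.normal(alpha, np.sqrt(a2))           # eta_{t,d}
        theta = softmax(eta)                           # document topic mixture
        words = []
        for _ in range(words_per_doc):
            z = rng.choice(K, p=theta)                 # topic assignment z_{t,d,n}
            w = rng.choice(V, p=softmax(beta[z]))      # word w_{t,d,n}
            words.append(w)
        slice_docs.append(words)
    corpus.append(slice_docs)

print("Words of the first document at the last time slice:", corpus[-1][0])
```

Because the topic parameters drift slowly from one slice to the next, the sampled word distributions at adjacent time slices stay similar, which is exactly the behavior the model is designed to capture.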
Inference:
In the dynamic topic model, only $w_{t,d,n}$ is observable. Learning the other parameters constitutes an inference problem. Blei and Lafferty argue that applying Gibbs sampling to do inference in this model is more difficult than in static models, due to the nonconjugacy of the Gaussian and multinomial distributions. They propose the use of variational methods, in particular variational Kalman filtering and variational wavelet regression.
Applications:
In the original paper, a dynamic topic model is applied to the corpus of Science articles published between 1881 and 1999 aiming to show that this method can be used to analyze the trends of word usage inside topics. The authors also show that the model trained with past documents is able to fit documents of an incoming year better than LDA.
Applications:
A continuous dynamic topic model was developed by Wang et al. and applied to predict the timestamp of documents. Going beyond text documents, dynamic topic models were used to study musical influence, by learning musical topics and how they evolve in recent history. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Beer chemistry**
Beer chemistry:
The chemical compounds in beer give it a distinctive taste, smell and appearance. The majority of compounds in beer come from the metabolic activities of plants and yeast and so are covered by the fields of biochemistry and organic chemistry. The main exception is that beer contains over 90% water and the mineral ions in the water (hardness) can have a significant effect upon the taste.
Four main ingredients:
Four main ingredients are used for making beer in the process of brewing: carbohydrates (from malt), hops, yeast, and water.
Four main ingredients:
Carbohydrates (from malt) The carbohydrate source is an essential part of the beer because unicellular yeast organisms convert carbohydrates into energy to live. Yeast metabolize the carbohydrate source to form a number of compounds including ethanol. The process of brewing beer starts with malting and mashing, which breaks down the long carbohydrates in the barley grain into simpler sugars. This is important because yeast can only metabolize very short chains of sugars. Long carbohydrates are polymers, large branching linkages of the same molecule over and over. In the case of barley, these are mostly the polymers amylopectin and amylose, which are made of repeating linkages of glucose. On very large time-scales (thermodynamically) these polymers would break down on their own, and there would be no need for the malting process. The process is normally sped up by heating the barley grain. This heating process activates enzymes called amylases. The shape of these enzymes, their active site, gives them the unique and powerful ability to speed these degradation reactions by a factor of over 100,000. The reaction that takes place at the active site is called a hydrolysis reaction, which is a cleavage of the linkages between the sugars. Repeated hydrolysis breaks the long amylopectin polymers into simpler sugars that can be digested by the yeast.
Four main ingredients:
Hops Hops are the flowers of the hop plant Humulus lupulus. These flowers contain over 440 essential oils, which contribute to the aroma and non-bitter flavors of beer. However, the distinct bitterness especially characteristic of pale ales comes from a family of compounds called alpha-acids (also called humulones) and beta-acids (also called lupulones). Generally, brewers believe that α-acids give the beer a pleasant bitterness whereas β-acids are considered less pleasant. α-acids isomerize during the boiling process: the six-membered ring in the humulone isomerizes to a five-membered ring, though how this affects perceived bitterness is not commonly discussed.
Four main ingredients:
Yeast In beer, the metabolic waste products of yeast are a significant factor. In aerobic conditions, the yeast uses the simple sugars obtained from the malting process in glycolysis and converts pyruvate, the major organic product of glycolysis, into carbon dioxide and water via cellular respiration. Many homebrewers use this aspect of yeast metabolism to carbonate their beers. However, under industrial anaerobic conditions yeasts cannot use pyruvate, the end product of glycolysis, to generate energy through cellular respiration. Instead, they rely on a process called fermentation. Fermentation converts pyruvate into ethanol through the intermediate acetaldehyde.
Four main ingredients:
Water Water can often play, directly or indirectly, a very important role in the way a beer tastes, as it is the main ingredient. The ion species present in water can affect the metabolic pathways of yeast, and thus the metabolites one can taste. For example, calcium and iron ions are essential in small amounts for yeast to survive, because these metal ions are usually required as cofactors for yeast enzymes.
Beer carbonation:
In aerobic conditions, yeast turns sugars into pyruvate then converts pyruvate into water and carbon dioxide. This process can carbonate beers. In commercial production, the yeast works in anaerobic conditions to convert pyruvate into ethanol, and does not carbonate beer. Beer is carbonated with pressurized CO2. When beer is poured, carbon dioxide dissolved in the beer escapes and forms tiny bubbles. These bubbles grow and accelerate as they rise by feeding off of nearby smaller bubbles, a phenomenon known as Ostwald ripening. These larger bubbles lead to “coarser” foam on top of poured beer.
Beer carbonation:
Nitro beer (CO2 replaced by N2 gas) Beers can be carbonated with CO2 or made sparkling with an inert gas such as nitrogen (N2), argon (Ar), or helium (He). Inert gases are not as soluble in water as carbon dioxide, so they form bubbles that do not grow through Ostwald ripening. This means that the beer has smaller bubbles and a more creamy and stable head. These less soluble inert gases give the beer a different and flatter texture. In beer terms, the mouthfeel is smooth, not bubbly like beers with normal carbonation. Nitro beer (short for nitrogen beer) can taste less acidic than normal beer.
Aromatic compounds:
Beers contain many aromatic substances. To date, chemists using advanced analytical instruments such as gas and high-performance liquid chromatographs coupled to mass spectrometers have discovered over 7,700 different chemical compounds in beers.
Foam stabilizers:
Beer foam stability depends, amongst other factors, on the presence of transition metal ions (Fe2+, Co2+, Ni2+, Cu2+...), macromolecules such as polysaccharides and proteins, and isohumulone compounds from hops in the beer. Foam stability is an important concern for the first perception of the beer by the consumer and is therefore the object of the greatest care by the brewers and the barmen in charge of serving draft beer or of properly pouring beer into a glass from the bottle (with good head retention and without overfoaming, or gushing, when opening the bottle).
Foam stabilizers:
Many patents for various types of beer foam stabilizers have been filed by breweries and the agro-chemical industry in the last decades. Cobalt salts added at low concentration (1–2 ppm) were popular in the sixties, but raised the question of cobalt toxicity in case of undetected accidental overdosage during beer production. As an alternative, organic foam stabilizers are produced by hydrolysis of recovered by-products of beer manufacture, such as spent grains or hops residues. Amongst the large spectrum of purified, or modified, natural food additives available on the market, soluble carboxymethyl hydroxyethyl cellulose, propylene glycol alginate (PGA, food additive with E number E405), pectins and gellan gum have also been investigated as foam stabilizers.
Foam stabilizers:
Cobalt salts In 1957, two brewing chemists, Thorne and Helm, discovered that the Co2+ cation was able to stabilize beer foam and to avoid beer overfoaming and gushing. The addition of a tiny amount of cobalt ions in the range 1 – 2 mg/L (ppm) was effective. Higher concentrations would be toxic and lower ones ineffective.
Foam stabilizers:
Cobalt is a transition metal whose atomic orbitals are able to interact with ligands, or functional groups (–OH, –COOH, –NH2), attached to organic molecules naturally present in the beer, forming macromolecular coordination complexes that stabilize the beer foam. Cobalt could behave as an inter- or intra-molecular bridge between different polysaccharide molecules (changing their shape or size), or cause conformational changes in different types of molecules present in solution, affecting their absolute configuration and thus the foam's molecular structure and behavior.
Foam stabilizers:
Thorne and Helm (1957) also formulated the hypothesis that cobalt, by being complexed with certain nitrogenous constituents of the beer (e.g., amino acids from malt proteins), might produce surface-active substances inactivating the gaseous nuclei responsible for overfoaming and gushing. Gushing is a specific problem also studied in more detail by Rudin and Hudson (1958). These authors discovered that gushing is also promoted by other transition metal ions, such as those of nickel and iron, but not by cobalt ions. Isohumulone (an iso-alpha acid responsible for the bitter taste of hops) and its combinations with Ni or Fe also favor gushing, while pure Co ions or their combination with isohumulone do not cause gushing and overfoaming. This explains why cobalt salts were specifically selected at a concentration of 1–2 mg/L as an anti-gushing agent for beer. Rudin and Hudson (1958) and other authors also found that Co, Ni and Fe ions preferentially concentrate in the foam itself. In the sixties, after approval by the US FDA, cobalt sulfate was commonly used at low concentration in the USA as an additive to stabilize beer foam and to prevent gushing after beer is exposed to vibrations during its transport or handling.
Foam stabilizers:
Although cobalt is an essential micronutrient needed for vitamin B12 synthesis, excess levels of cobalt in the body can lead to cobalt poisoning and must be avoided. This triggered the development of qualitative and quantitative analysis methods to accurately assay cobalt in beer in order to prevent accidental overdosage and cobalt poisoning. Excessively high levels of cobalt are known to be responsible for beer drinker's cardiomyopathy. The first issues mentioned in the literature were reported in Canada in the middle of the sixties, after an accidental overdosage at the Dow Breweries in Quebec City.
Foam stabilizers:
In August 1965, a person presented to a hospital in Quebec City with symptoms suggestive of alcoholic cardiomyopathy. Over the next eight months, fifty more cases with similar findings appeared in the same area with twenty of these being fatal. It was noted that all were heavy drinkers who mostly drank beer and preferred the Dow brand; thirty out of those drank more than 6 litres (12 pints) of beer per day. Epidemiological studies found that the Dow Breweries had been adding cobalt sulfate to the beer for foam stability since July 1965 and that the concentration added in the Quebec city brewery was ten times that of the same beer brewed in Montreal where there were no reported cases.
Storage and degradation:
A particular problem with beer is that, unlike wine, its quality tends to deteriorate as it ages. A cat urine smell and flavor called ribes, named for the genus of the black currant, tends to develop and peak. A cardboard smell then dominates, which is due to the release of 2-nonenal. In general, chemists believe that the "off-flavors" that come from old beers are due to reactive oxygen species. These may come in the form of oxygen free radicals, for example, which can change the chemical structures of compounds in beer that give them their taste. Oxygen radicals can cause increased concentrations of aldehydes from the Strecker degradation reactions of amino acids in beer. Beer is unique when compared to other alcoholic drinks because it is unstable in the final package. There are many variables and chemical compounds that affect the flavor of beer during the production steps, but also during the storage of beer. Beer will develop an off-flavor during storage because of many factors, including sunlight and the amount of oxygen in the headspace of the bottle. Other than changes in taste, beer can also develop visual changes. Beer can become hazy during storage. This is called colloidal stability (haze formation) and is typically caused by the raw materials used during the brewing process. The primary reaction that causes beer haze is the polymerization of polyphenols and binding with specific proteins. This type of haze can be seen when beer is cooled below 0 degrees Celsius. When the beer is raised to room temperature, the haze dissolves. But if a beer is stored at room temperature for too long (about 6 months), a permanent haze will form. A study done by Heuberger et al. (2012) concludes that storage temperature of beers affects the flavor stability. They found that the metabolite profile of room temperature and cold temperature stored beer differed significantly from fresh beer. They also have evidence to support significant beer oxidation after weeks of storage, which also has an effect on the flavor of beer. The off-flavour in beer, such as a cardboard or green apple taste, is often associated with the appearance of staling aldehydes. The Strecker aldehydes responsible for the flavor change are formed during storage of the beers. Philip Wietstock et al. performed experiments to test what causes the formation of Strecker aldehydes during storage. They found that only amino acid concentration (leucine (Leu), isoleucine (Ile), and phenylalanine (Phe), specifically) and dissolved oxygen concentration caused Strecker aldehyde formation. They also tested carbohydrate and Fe2+ additions. A linear relationship was found between Strecker aldehydes formed and total packaged oxygen. This is important for brewers to know so that they can control the taste of their beer. Wietstock concludes that capping beers with oxygen barrier crown corks will diminish Strecker aldehyde formation. In another study done by Vanderhaegen et al. (2003), different aging conditions were tested on a bottled beer after 6 months. They found that a decrease in volatile esters was responsible for a reduced fruity flavor. They also found an increase in many other compounds including carbonyl compounds, ethyl esters, Maillard compounds, dioxolanes, and furanic ethers. The carbonyl compounds, as stated previously in the Wietstock experiments, will create Strecker aldehydes, which tend to cause a green apple flavor. Esters are known to cause fruity flavors such as pears, roses, and bananas. 
Maillard compounds will cause a toasty, malty flavor.
Storage and degradation:
A study done by Charles Bamforth and Roy Parsons (1985) also confirms that beer staling flavors are caused by various carbonyl compounds. They used thiobarbituric acid (TBA) to estimate the staling substances after using an accelerated aging technique. They found that beer staling is reduced by scavengers of the hydroxyl radical (•OH), such as mannitol and ascorbic acid. They also tested the hypothesis that soybean extracts included in the fermenting wort enhance the shelf life of beer flavor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Nuttall Encyclopædia**
The Nuttall Encyclopædia:
The Nuttall Encyclopædia: Being a Concise and Comprehensive Dictionary of General Knowledge is a late 19th-century encyclopedia, edited by Rev. James Wood, first published in London in 1900 by Frederick Warne & Co Ltd.
Editions were recorded for 1920, 1930, 1938 and 1956, and the encyclopedia was still being sold in 1966. Editors included G. Elgie Christ and A. L. Hayden for 1930, Lawrence Hawkins Dawson for 1938 and C. M. Prior for 1956.
Description:
The Nuttall Encyclopædia is named for Dr. Peter Austin Nuttall (d. 1869), whose works, such as Standard Pronouncing Dictionary of the English Language (published in 1863), were eventually acquired by Frederick Warne, and would be published for decades to come.
The title page proclaims this encyclopedia to be "a concise and comprehensive dictionary of general knowledge consisting of over 16,000 terse and original articles on nearly all subjects discussed in larger encyclopædias, and specially dealing with such as come under the categories of history, biography, geography, literature, philosophy, religion, science, and art".
Description:
The entries or articles in this work are generally very short, and are mostly about individuals and places; while it has entries for fictional characters from Charles Dickens' books, the encyclopedia lacks entries for fruit. It often reflects the personal worldview of the author, viewing events from a definite perspective. This can be seen in entries like Dates of Epoch-Making Events. As another example, the entry for Venezuela presents a British view of an 1899 event: ...the boundary line between the British colony and Venezuela was for long matter of keen dispute, but by the intervention of the United States at the request of the latter a treaty between the contending parties was concluded, referring the matter to a court of arbitration, which met at Paris in 1895, and settled it in 1899, in vindication, happily, of the British claim, the Schomburgk line being now declared to be the true line, and the gold-fields ours.
Description:
In 2004, Project Gutenberg published a version of the 1907 edition, which is now in the public domain. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nakamachi, Setagaya**
Nakamachi, Setagaya:
Nakamachi (中町) is a district of Setagaya, Tokyo, Japan.
Education:
Setagaya Board of Education operates public elementary and junior high schools.
Education:
1 and 2-chome are zoned to Tamagawa Elementary School (玉川小学校). 3, 4, and parts of 5-chome are zoned to Nakamachi Elementary School (中町小学校). Other parts of 5-chome are zoned to Sakuramachi Elementary School (桜町小学校). 1-4 chome and parts of 5 chome are zoned to Tamagawa Junior High School (玉川中学校), while other parts of 5-chome are zoned to Fukasawa Junior High School (深沢中学校). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Humble pie**
Humble pie:
To eat humble pie, in common usage, is to face humiliation and subsequently apologize for a serious mistake. Humble pie, or umble pie, is also a term for a variety of pastries based on medieval meat pies.
Humble pie:
The expression derives from umble pie, a pie filled with chopped or minced offal, especially of deer but often other meats. Umble evolved from numble (after the Middle French nombles), meaning "deer's innards". Although "umbles" and the modern word "humble" are etymologically unrelated, each word has appeared with and without the initial "h" after the Middle Ages until the 19th century. Since the sound "h" is dropped in many dialects, the phrase was hypercorrected as "humble pie". While "umble" is now gone from the language, the phrase remains, carrying the fossilized word as an idiom. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Uranium**
Uranium:
Uranium is a chemical element with symbol U and atomic number 92. It is a silvery-grey metal in the actinide series of the periodic table. A uranium atom has 92 protons and 92 electrons, of which 6 are valence electrons. Uranium radioactively decays by emitting an alpha particle. The half-life of this decay varies between 159,200 years and 4.5 billion years for different isotopes, making them useful for dating the age of the Earth. The most common isotopes in natural uranium are uranium-238 (which has 146 neutrons and accounts for over 99% of uranium on Earth) and uranium-235 (which has 143 neutrons). Uranium has the highest atomic weight of the primordially occurring elements. Its density is about 70% higher than that of lead, and slightly lower than that of gold or tungsten. It occurs naturally in low concentrations of a few parts per million in soil, rock and water, and is commercially extracted from uranium-bearing minerals such as uraninite. Many contemporary uses of uranium exploit its unique nuclear properties. Uranium-235 is the only naturally occurring fissile isotope, which makes it widely used in nuclear power plants and nuclear weapons. However, because of the very low concentration of uranium-235 in naturally occurring uranium (which is overwhelmingly uranium-238), uranium needs to undergo enrichment so that enough uranium-235 is present. Uranium-238 is fissionable by fast neutrons, and is fertile, meaning it can be transmuted to fissile plutonium-239 in a nuclear reactor. Another fissile isotope, uranium-233, can be produced from natural thorium and is studied for future industrial use in nuclear technology. Uranium-238 has a small probability for spontaneous fission or even induced fission with fast neutrons; uranium-235, and to a lesser degree uranium-233, have a much higher fission cross-section for slow neutrons. In sufficient concentration, these isotopes maintain a sustained nuclear chain reaction. This generates the heat in nuclear power reactors, and produces the fissile material for nuclear weapons. Depleted uranium (238U) is used in kinetic energy penetrators and armor plating. The 1789 discovery of uranium in the mineral pitchblende is credited to Martin Heinrich Klaproth, who named the new element after the recently discovered planet Uranus. Eugène-Melchior Péligot was the first person to isolate the metal, and its radioactive properties were discovered in 1896 by Henri Becquerel. Research by Otto Hahn, Lise Meitner, Enrico Fermi and others, such as J. Robert Oppenheimer, starting in 1934 led to its use as a fuel in the nuclear power industry and in Little Boy, the first nuclear weapon used in war. An ensuing arms race during the Cold War between the United States and the Soviet Union produced tens of thousands of nuclear weapons that used uranium metal and uranium-derived plutonium-239. Dismantling of these weapons and related nuclear facilities is carried out within various nuclear disarmament programs and costs billions of dollars. Weapon-grade uranium obtained from nuclear weapons is diluted with uranium-238 and reused as fuel for nuclear reactors. The development and deployment of these nuclear reactors continues globally, as they are powerful sources of CO2-free energy. Spent nuclear fuel forms radioactive waste, which mostly consists of uranium-238 and poses a significant health threat and environmental impact.
Characteristics:
Uranium is a silvery white, weakly radioactive metal. It has a Mohs hardness of 6, sufficient to scratch glass and approximately equal to that of titanium, rhodium, manganese and niobium. It is malleable, ductile, slightly paramagnetic, strongly electropositive and a poor electrical conductor. Uranium metal has a very high density of 19.1 g/cm3, denser than lead (11.3 g/cm3), but slightly less dense than tungsten and gold (19.3 g/cm3). Uranium metal reacts with almost all non-metal elements (with the exception of the noble gases) and their compounds, with reactivity increasing with temperature. Hydrochloric and nitric acids dissolve uranium, but non-oxidizing acids other than hydrochloric acid attack the element very slowly. When finely divided, it can react with cold water; in air, uranium metal becomes coated with a dark layer of uranium oxide. Uranium in ores is extracted chemically and converted into uranium dioxide or other chemical forms usable in industry.
Characteristics:
Uranium-235 was the first isotope that was found to be fissile. Other naturally occurring isotopes are fissionable, but not fissile. On bombardment with slow neutrons, a uranium-235 nucleus will most of the time divide into two smaller nuclei, releasing nuclear binding energy and more neutrons. If too many of these neutrons are absorbed by other uranium-235 nuclei, a nuclear chain reaction occurs that results in a burst of heat or (in special circumstances) an explosion. In a nuclear reactor, such a chain reaction is slowed and controlled by a neutron poison, which absorbs some of the free neutrons. Such neutron-absorbent materials are often part of reactor control rods (see nuclear reactor physics for a description of this process of reactor control).
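A rough way to see why this balance of neutron production and absorption matters is to track the neutron population over successive fission generations with an effective multiplication factor k: below 1 the chain dies out, at exactly 1 it is self-sustaining, and above 1 it grows rapidly. The sketch below is a deliberately simplified generation-counting model with invented numbers, not a reactor physics calculation.

```python
# Sketch: neutron population over fission generations for different
# effective multiplication factors k (all values illustrative).
def neutron_population(k: float, generations: int, n0: float = 1000.0) -> list:
    """Return the neutron count after each generation, N_i = n0 * k**i."""
    return [n0 * k**i for i in range(generations + 1)]

for k in (0.95, 1.00, 1.05):   # subcritical, critical, supercritical
    pop = neutron_population(k, generations=50)
    print(f"k = {k:.2f}: after 50 generations, ~{pop[-1]:.0f} neutrons")
```

With the same starting population, 50 generations at k = 0.95 leave only a few dozen neutrons, while k = 1.05 multiplies the population more than tenfold, which is why control rods and other neutron poisons are used to hold k at or just below 1 in a reactor.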
Characteristics:
As little as 15 lb (6.8 kg) of uranium-235 can be used to make an atomic bomb. The nuclear weapon detonated over Hiroshima, called Little Boy, relied on uranium fission. However, the first nuclear bomb (the Gadget used at Trinity) and the bomb that was detonated over Nagasaki (Fat Man) were both plutonium bombs.
Uranium metal has three allotropic forms: α (orthorhombic) stable up to 668 °C (1,234 °F). Orthorhombic, space group No. 63, Cmcm, lattice parameters a = 285.4 pm, b = 587 pm, c = 495.5 pm.
β (tetragonal) stable from 668 to 775 °C (1,234 to 1,427 °F). Tetragonal, space group P42/mnm, P42nm, or P4n2, lattice parameters a = 565.6 pm, b = c = 1075.9 pm.
γ (body-centered cubic) from 775 °C (1,427 °F) to melting point—this is the most malleable and ductile state. Body-centered cubic, lattice parameter a = 352.4 pm.
Applications:
Military The major application of uranium in the military sector is in high-density penetrators. This ammunition consists of depleted uranium (DU) alloyed with 1–2% other elements, such as titanium or molybdenum. At high impact speed, the density, hardness, and pyrophoricity of the projectile enable the destruction of heavily armored targets. Tank armor and other removable vehicle armor can also be hardened with depleted uranium plates. The use of depleted uranium became politically and environmentally contentious after the use of such munitions by the US, UK and other countries during wars in the Persian Gulf and the Balkans raised questions concerning uranium compounds left in the soil (see Gulf War syndrome). Depleted uranium is also used as a shielding material in some containers used to store and transport radioactive materials. While the metal itself is radioactive, its high density makes it more effective than lead in halting radiation from strong sources such as radium. Other uses of depleted uranium include counterweights for aircraft control surfaces, as ballast for missile re-entry vehicles and as a shielding material. Due to its high density, this material is found in inertial guidance systems and in gyroscopic compasses. Depleted uranium is preferred over similarly dense metals due to its ability to be easily machined and cast as well as its relatively low cost. The main risk of exposure to depleted uranium is chemical poisoning by uranium oxide rather than radioactivity (uranium being only a weak alpha emitter).
Applications:
During the later stages of World War II, the entire Cold War, and to a lesser extent afterwards, uranium-235 has been used as the fissile explosive material to produce nuclear weapons. Initially, two major types of fission bombs were built: a relatively simple device that uses uranium-235 and a more complicated mechanism that uses plutonium-239 derived from uranium-238. Later, a much more complicated and far more powerful type of fission/fusion bomb (thermonuclear weapon) was built, that uses a plutonium-based device to cause a mixture of tritium and deuterium to undergo nuclear fusion. Such bombs are jacketed in a non-fissile (unenriched) uranium case, and they derive more than half their power from the fission of this material by fast neutrons from the nuclear fusion process.
Applications:
Civilian The main use of uranium in the civilian sector is to fuel nuclear power plants. One kilogram of uranium-235 can theoretically produce about 20 terajoules of energy (2×10¹³ joules), assuming complete fission; as much energy as 1.5 million kilograms (1,500 tonnes) of coal. Commercial nuclear power plants use fuel that is typically enriched to around 3% uranium-235. The CANDU and Magnox designs are the only commercial reactors capable of using unenriched uranium fuel. Fuel used for United States Navy reactors is typically highly enriched in uranium-235 (the exact values are classified). In a breeder reactor, uranium-238 can also be converted into plutonium through the following reaction: ²³⁸₉₂U + n → ²³⁹₉₂U + γ; uranium-239 then undergoes β⁻ decay to ²³⁹₉₃Np, which undergoes a further β⁻ decay to ²³⁹₉₄Pu. Before (and, occasionally, after) the discovery of radioactivity, uranium was primarily used in small amounts for yellow glass and pottery glazes, such as uranium glass and in Fiestaware. The discovery and isolation of radium in uranium ore (pitchblende) by Marie Curie sparked the development of uranium mining to extract the radium, which was used to make glow-in-the-dark paints for clock and aircraft dials. This left a prodigious quantity of uranium as a waste product, since it takes three tonnes of uranium to extract one gram of radium. This waste product was diverted to the glazing industry, making uranium glazes very inexpensive and abundant. Besides the pottery glazes, uranium tile glazes accounted for the bulk of the use, including common bathroom and kitchen tiles which can be produced in green, yellow, mauve, black, blue, red and other colors.
Applications:
Uranium was also used in photographic chemicals (especially uranium nitrate as a toner), in lamp filaments for stage lighting bulbs, to improve the appearance of dentures, and in the leather and wood industries for stains and dyes. Uranium salts are mordants of silk or wool. Uranyl acetate and uranyl formate are used as electron-dense "stains" in transmission electron microscopy, to increase the contrast of biological specimens in ultrathin sections and in negative staining of viruses, isolated cell organelles and macromolecules.
Applications:
The discovery of the radioactivity of uranium ushered in additional scientific and practical uses of the element. The long half-life of the isotope uranium-238 (4.47×10⁹ years) makes it well-suited for use in estimating the age of the earliest igneous rocks and for other types of radiometric dating, including uranium–thorium dating, uranium–lead dating and uranium–uranium dating. Uranium metal is used for X-ray targets in the making of high-energy X-rays.
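For a sense of how the long half-life is used in dating, the following sketch applies first-order decay, N(t) = N₀·e^(−λt) with λ = ln 2 / t½, to estimate an age from a measured lead-206 to uranium-238 ratio. It uses the textbook simplifying assumptions of a closed system with no initial lead, and the measured ratio in the example is invented purely for illustration.

```python
# Sketch: simple uranium-238 -> lead-206 age estimate (closed system,
# no initial lead assumed). The measured ratio below is made up.
import math

HALF_LIFE_U238_YEARS = 4.47e9                  # half-life of uranium-238
LAMBDA = math.log(2) / HALF_LIFE_U238_YEARS    # decay constant, per year

def u_pb_age(pb206_per_u238: float) -> float:
    """Age in years from the present-day Pb-206 / U-238 atom ratio.

    Each decayed U-238 atom eventually yields one Pb-206 atom, so
    N_Pb / N_U = e^(lambda * t) - 1, giving t = ln(1 + ratio) / lambda.
    """
    return math.log(1.0 + pb206_per_u238) / LAMBDA

measured_ratio = 0.8   # hypothetical Pb-206 / U-238 atom ratio in a mineral grain
print(f"Estimated age: {u_pb_age(measured_ratio) / 1e9:.2f} billion years")
```

Real uranium–lead dating works with minerals such as zircon and cross-checks the result against the companion U-235 → Pb-207 decay chain, but the exponential-decay arithmetic is the same.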
History:
Pre-discovery use The use of uranium in its natural oxide form dates back to at least the year 79 CE, when it was used in the Roman Empire to add a yellow color to ceramic glazes. Yellow glass with 1% uranium oxide was found in a Roman villa on Cape Posillipo in the Bay of Naples, Italy, by R. T. Gunther of the University of Oxford in 1912. Starting in the late Middle Ages, pitchblende was extracted from the Habsburg silver mines in Joachimsthal, Bohemia (now Jáchymov in the Czech Republic), and was used as a coloring agent in the local glassmaking industry. In the early 19th century, the world's only known sources of uranium ore were these mines. Mining for uranium in the Ore Mountains ceased on the German side after the Cold War ended and SDAG Wismut was wound down. On the Czech side there were attempts during the uranium price bubble of 2007 to restart mining, but those were quickly abandoned following a fall in uranium prices.
History:
Discovery The discovery of the element is credited to the German chemist Martin Heinrich Klaproth. While he was working in his experimental laboratory in Berlin in 1789, Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. Klaproth assumed the yellow substance was the oxide of a yet-undiscovered element and heated it with charcoal to obtain a black powder, which he thought was the newly discovered metal itself (in fact, that powder was an oxide of uranium). He named the newly discovered element after the planet Uranus (named after the primordial Greek god of the sky), which had been discovered eight years earlier by William Herschel. In 1841, Eugène-Melchior Péligot, Professor of Analytical Chemistry at the Conservatoire National des Arts et Métiers (Central School of Arts and Manufactures) in Paris, isolated the first sample of uranium metal by heating uranium tetrachloride with potassium.
History:
Henri Becquerel discovered radioactivity by using uranium in 1896. Becquerel made the discovery in Paris by leaving a sample of a uranium salt, K2UO2(SO4)2 (potassium uranyl sulfate), on top of an unexposed photographic plate in a drawer and noting that the plate had become "fogged". He determined that a form of invisible light or rays emitted by uranium had exposed the plate.
History:
During World War I, when the Central Powers suffered a shortage of molybdenum to make artillery gun barrels and high-speed tool steels, they routinely used a ferrouranium alloy as a substitute, as it presents many of the same physical characteristics as molybdenum. When this practice became known in 1916, the US government requested several prominent universities to research the use of uranium in manufacturing and metalwork. Tools made with these formulas remained in use for several decades, until the Manhattan Project and the Cold War placed a large demand on uranium for fission research and weapon development.
History:
Fission research A team led by Enrico Fermi in 1934 observed that bombarding uranium with neutrons produces the emission of beta rays (electrons or positrons from the elements produced; see beta particle). The fission products were at first mistaken for new elements with atomic numbers 93 and 94, which the Dean of the Faculty of Rome, Orso Mario Corbino, christened ausonium and hesperium, respectively. The experiments leading to the discovery of uranium's ability to fission (break apart) into lighter elements and release binding energy were conducted by Otto Hahn and Fritz Strassmann in Hahn's laboratory in Berlin. Lise Meitner and her nephew, the physicist Otto Robert Frisch, published the physical explanation in February 1939 and named the process "nuclear fission". Soon after, Fermi hypothesized that the fission of uranium might release enough neutrons to sustain a fission reaction. Confirmation of this hypothesis came in 1939, and later work found that on average about 2.5 neutrons are released by each fission of the rare uranium isotope uranium-235. Fermi urged Alfred O. C. Nier to separate uranium isotopes for determination of the fissile component, and on 29 February 1940, Nier used an instrument he built at the University of Minnesota to separate the world's first uranium-235 sample in the Tate Laboratory. Using Columbia University's cyclotron, John Dunning confirmed the sample to be the isolated fissile material on 1 March. Further work found that the far more common uranium-238 isotope can be transmuted into plutonium, which, like uranium-235, is also fissile by thermal neutrons. These discoveries led numerous countries to begin working on the development of nuclear weapons and nuclear power. Despite fission having been discovered in Germany, the Uranverein ("uranium club"), Germany's wartime project to research nuclear power and/or weapons, was hampered by limited resources, infighting, the exile or non-involvement of several prominent scientists in the field, and several crucial mistakes such as failing to account for impurities in available graphite samples, which made it appear less suitable as a neutron moderator than it is in reality. Germany's attempts to build a natural-uranium/heavy-water reactor had not come close to reaching criticality by the time the Americans reached Haigerloch, the site of the last German wartime reactor experiment. On 2 December 1942, as part of the Manhattan Project, another team led by Enrico Fermi was able to initiate the first artificial self-sustained nuclear chain reaction, Chicago Pile-1. An initial plan using enriched uranium-235 was abandoned as it was as yet unavailable in sufficient quantities. Working in a lab below the stands of Stagg Field at the University of Chicago, the team created the conditions needed for such a reaction by piling together 360 tonnes of graphite, 53 tonnes of uranium oxide, and 5.5 tonnes of uranium metal, a majority of which was supplied by Westinghouse Lamp Plant in a makeshift production process.
History:
Nuclear weaponry Two major types of atomic bombs were developed by the United States during World War II: a uranium-based device (codenamed "Little Boy") whose fissile material was highly enriched uranium, and a plutonium-based device (see Trinity test and "Fat Man") whose plutonium was derived from uranium-238. The uranium-based Little Boy device became the first nuclear weapon used in war when it was detonated over the Japanese city of Hiroshima on 6 August 1945. Exploding with a yield equivalent to 12,500 tonnes of trinitrotoluene, the blast and thermal wave of the bomb destroyed nearly 50,000 buildings and killed approximately 75,000 people (see Atomic bombings of Hiroshima and Nagasaki). Initially it was believed that uranium was relatively rare, and that nuclear proliferation could be avoided by simply buying up all known uranium stocks, but within a decade large deposits of it were discovered in many places around the world.
History:
Reactors The X-10 Graphite Reactor at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, formerly known as the Clinton Pile and X-10 Pile, was the world's second artificial nuclear reactor (after Enrico Fermi's Chicago Pile) and was the first reactor designed and built for continuous operation. Argonne National Laboratory's Experimental Breeder Reactor I, located at the Atomic Energy Commission's National Reactor Testing Station near Arco, Idaho, became the first nuclear reactor to create electricity on 20 December 1951. Initially, four 150-watt light bulbs were lit by the reactor, but improvements eventually enabled it to power the whole facility (later, the town of Arco became the first in the world to have all its electricity come from nuclear power generated by BORAX-III, another reactor designed and operated by Argonne National Laboratory). The world's first commercial scale nuclear power station, Obninsk in the Soviet Union, began generation with its reactor AM-1 on 27 June 1954. Other early nuclear power plants were Calder Hall in England, which began generation on 17 October 1956, and the Shippingport Atomic Power Station in Pennsylvania, which began on 26 May 1958. Nuclear power was used for the first time for propulsion by a submarine, the USS Nautilus, in 1954.
History:
Prehistoric naturally occurring fission In 1972, the French physicist Francis Perrin discovered fifteen ancient and no longer active natural nuclear fission reactors in three separate ore deposits at the Oklo mine in Gabon, West Africa, collectively known as the Oklo Fossil Reactors. The ore deposit is 1.7 billion years old; then, uranium-235 constituted about 3% of the total uranium on Earth. This is high enough to permit a sustained nuclear fission chain reaction to occur, provided other supporting conditions exist. The capacity of the surrounding sediment to contain the health-threatening nuclear waste products has been cited by the U.S. federal government as supporting evidence for the feasibility to store spent nuclear fuel at the Yucca Mountain nuclear waste repository.
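The ~3% figure quoted above can be checked with a short back-calculation from today's isotopic abundances and half-lives (values quoted elsewhere in this article); the sketch below simply undoes 1.7 billion years of decay for each isotope.

```python
import math

# Back-of-the-envelope check of the ~3% 235U abundance at Oklo 1.7 billion years ago.
t = 1.7e9                          # years before present
half_235, half_238 = 7.04e8, 4.47e9
n235_now, n238_now = 0.72, 99.28   # approximate present-day atom percent

# Undo the decay: N(t years ago) = N(now) * 2**(t / half_life)
n235_then = n235_now * 2 ** (t / half_235)
n238_then = n238_now * 2 ** (t / half_238)
fraction_235 = n235_then / (n235_then + n238_then)
print(f"235U fraction 1.7 Gyr ago: {fraction_235:.1%}")   # ~2.9%, consistent with the ~3% quoted
```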
History:
Contamination and the Cold War legacy Above-ground nuclear tests by the Soviet Union and the United States in the 1950s and early 1960s and by France into the 1970s and 1980s spread a significant amount of fallout from uranium daughter isotopes around the world. Additional fallout and pollution occurred from several nuclear accidents. Uranium miners have a higher incidence of cancer. An excess risk of lung cancer among Navajo uranium miners, for example, has been documented and linked to their occupation. The Radiation Exposure Compensation Act, a 1990 law in the US, required $100,000 in "compassion payments" to uranium miners diagnosed with cancer or other respiratory ailments. During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. After the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons-grade uranium (enough to make 40,000 nuclear warheads) had been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states. Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources. From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent approximately US $550 million to help safeguard uranium and plutonium stockpiles in Russia. This money was used for improvements and security enhancements at research and storage facilities. Safety of nuclear facilities in Russia has been significantly improved since the stabilization of the political and economic turmoil of the early 1990s. For example, in 1993 there were 29 incidents ranking above level 1 on the International Nuclear Event Scale, and this number dropped to under four per year in 1995–2003. The number of employees receiving annual radiation doses above 20 mSv, which is equivalent to a single full-body CT scan, saw a strong decline around 2000. In November 2015, the Russian government approved a federal program for nuclear and radiation safety for 2016 to 2030 with a budget of 562 billion rubles (ca. 8 billion dollars). Its key issue is "the deferred liabilities accumulated during the 70 years of the nuclear industry, particularly during the time of the Soviet Union". Approximately 73% of the budget will be spent on decommissioning aged and obsolete nuclear reactors and nuclear facilities, especially those involved in state defense programs; 20% will go to processing and disposal of nuclear fuel and radioactive waste, and 5% to monitoring and ensuring nuclear and radiation safety.
Occurrence:
Origin Along with all elements having atomic weights higher than that of iron, uranium is only naturally formed by the r-process (rapid neutron capture) in supernovae and neutron star mergers. Primordial thorium and uranium are only produced in the r-process, because the s-process (slow neutron capture) is too slow and cannot pass the gap of instability after bismuth. Besides the two extant primordial uranium isotopes, 235U and 238U, the r-process also produced significant quantities of 236U, which has a shorter half-life and so is an extinct radionuclide, having long since decayed completely to 232Th. Uranium-236 was itself enriched by the decay of 244Pu, accounting for the observed higher-than-expected abundance of thorium and lower-than-expected abundance of uranium. While the natural abundance of uranium has been supplemented by the decay of extinct 242Pu (half-life 0.375 million years) and 247Cm (half-life 16 million years), producing 238U and 235U respectively, this occurred to an almost negligible extent due to the shorter half-lives of these parents and their lower production than 236U and 244Pu, the parents of thorium: the 247Cm:235U ratio at the formation of the Solar System was (7.0±1.6)×10⁻⁵.
Occurrence:
Biotic and abiotic Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water. Uranium is the 51st element in order of abundance in the Earth's crust. Uranium is also the highest-numbered element to be found naturally in significant quantities on Earth and is almost always found combined with other elements. The decay of uranium, thorium, and potassium-40 in the Earth's mantle is thought to be the main source of heat that keeps the Earth's outer core in the liquid state and drives mantle convection, which in turn drives plate tectonics.
Occurrence:
Uranium's average concentration in the Earth's crust is (depending on the reference) 2 to 4 parts per million, or about 40 times as abundant as silver. The Earth's crust from the surface to 25 km (15 mi) down is calculated to contain 10¹⁷ kg (2×10¹⁷ lb) of uranium while the oceans may contain 10¹³ kg (2×10¹³ lb). The concentration of uranium in soil ranges from 0.7 to 11 parts per million (up to 15 parts per million in farmland soil due to use of phosphate fertilizers), and its concentration in sea water is 3 parts per billion. Uranium is more plentiful than antimony, tin, cadmium, mercury, or silver, and it is about as abundant as arsenic or molybdenum. Uranium is found in hundreds of minerals, including uraninite (the most common uranium ore), carnotite, autunite, uranophane, torbernite, and coffinite. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores (it is recovered commercially from sources with as little as 0.1% uranium).
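The crust and ocean inventories quoted above can be reproduced to order of magnitude from the concentrations given in the same paragraph. The crust density, crust depth handling, and ocean mass in the sketch below are assumed representative values, not figures taken from the source.

```python
# Order-of-magnitude check of the crust and ocean uranium inventories quoted above.
EARTH_SURFACE_M2 = 5.1e14
crust_depth_m = 25_000
crust_density_kg_m3 = 2800      # assumed average density of the upper crust
u_ppm_crust = 3e-6              # ~3 ppm by mass, midpoint of the 2-4 ppm range quoted

crust_mass_kg = EARTH_SURFACE_M2 * crust_depth_m * crust_density_kg_m3
print(f"Crust uranium: {crust_mass_kg * u_ppm_crust:.1e} kg")   # ~1e17 kg, matching the text

ocean_mass_kg = 1.4e21          # assumed total mass of the oceans
u_ppb_sea = 3e-9                # 3 parts per billion by mass, as quoted
print(f"Ocean uranium: {ocean_mass_kg * u_ppb_sea:.1e} kg")     # ~4e12 kg, same order as the 10^13 kg quoted
```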
Occurrence:
Some bacteria, such as Shewanella putrefaciens, Geobacter metallireducens and some strains of Burkholderia fungorum, use uranium for their growth and convert U(VI) to U(IV). Recent research suggests that this pathway includes reduction of the soluble U(VI) via an intermediate U(V) pentavalent state.
Occurrence:
Other organisms, such as the lichen Trapelia involuta or microorganisms such as the bacterium Citrobacter, can absorb concentrations of uranium that are up to 300 times the level of their environment. Citrobacter species absorb uranyl ions when given glycerol phosphate (or other similar organic phosphates). After one day, one gram of bacteria can encrust themselves with nine grams of uranyl phosphate crystals; this creates the possibility that these organisms could be used in bioremediation to decontaminate uranium-polluted water.
Occurrence:
The proteobacterium Geobacter has also been shown to bioremediate uranium in ground water. The mycorrhizal fungus Glomus intraradices increases uranium content in the roots of its symbiotic plant. In nature, uranium(VI) forms highly soluble carbonate complexes at alkaline pH. This leads to an increase in mobility and availability of uranium to groundwater and soil from nuclear wastes, which leads to health hazards. However, it is difficult to precipitate uranium as phosphate in the presence of excess carbonate at alkaline pH. A Sphingomonas sp. strain BSAR-1 has been found to express a high-activity alkaline phosphatase (PhoK) that has been applied for bioprecipitation of uranium as uranyl phosphate species from alkaline solutions. The precipitation ability was enhanced by overexpressing PhoK protein in E. coli. Plants absorb some uranium from soil. Dry weight concentrations of uranium in plants range from 5 to 60 parts per billion, and ash from burnt wood can have concentrations up to 4 parts per million. Dry weight concentrations of uranium in food plants are typically lower, with one to two micrograms per day ingested through the food people eat.
Occurrence:
Production and mining Worldwide production of uranium in 2021 amounted to 48,332 tonnes, of which 21,819 t (45%) was mined in Kazakhstan. Other important uranium mining countries are Namibia (5,753 t), Canada (4,693 t), Australia (4,192 t), Uzbekistan (3,500 t), and Russia (2,635 t). Uranium ore is mined in several ways: by open pit, underground, in-situ leaching, and borehole mining (see uranium mining). Low-grade uranium ore mined typically contains 0.01 to 0.25% uranium oxides. Extensive measures must be employed to extract the metal from its ore. High-grade ores found in Athabasca Basin deposits in Saskatchewan, Canada can contain up to 23% uranium oxides on average. Uranium ore is crushed and rendered into a fine powder and then leached with either an acid or alkali. The leachate is subjected to one of several sequences of precipitation, solvent extraction, and ion exchange. The resulting mixture, called yellowcake, contains at least 75% uranium oxides U3O8. Yellowcake is then calcined to remove impurities from the milling process before refining and conversion. Commercial-grade uranium can be produced through the reduction of uranium halides with alkali or alkaline earth metals. Uranium metal can also be prepared through electrolysis of KUF5 or UF4, dissolved in molten calcium chloride (CaCl2) and sodium chloride (NaCl) solution. Very pure uranium is produced through the thermal decomposition of uranium halides on a hot filament.
Occurrence:
Resources and reserves It is estimated that 6.1 million tonnes of uranium exists in ore reserves that are economically viable at US$130 per kg of uranium, while 35 million tonnes are classed as mineral resources (reasonable prospects for eventual economic extraction). Australia has 28% of the world's known uranium ore reserves, and the world's largest single uranium deposit is located at the Olympic Dam Mine in South Australia. There is a significant reserve of uranium in Bakouma, a sub-prefecture in the prefecture of Mbomou in the Central African Republic. Some uranium also originates from dismantled nuclear weapons. For example, in 1993–2013 Russia supplied the United States with 15,000 tonnes of low-enriched uranium within the Megatons to Megawatts Program. An additional 4.6 billion tonnes of uranium are estimated to be dissolved in sea water (Japanese scientists in the 1980s showed that extraction of uranium from sea water using ion exchangers was technically feasible). There have been experiments to extract uranium from sea water, but the yield has been low due to the carbonate present in the water. In 2012, ORNL researchers announced the successful development of a new absorbent material dubbed HiCap, which performs surface retention of solid or gas molecules, atoms or ions and also effectively removes toxic metals from water, according to results verified by researchers at Pacific Northwest National Laboratory.
Occurrence:
Supplies In 2005, ten countries accounted for the majority of the world's concentrated uranium oxides: Canada (27.9%), Australia (22.8%), Kazakhstan (10.5%), Russia (8.0%), Namibia (7.5%), Niger (7.4%), Uzbekistan (5.5%), the United States (2.5%), Argentina (2.1%) and Ukraine (1.9%). In 2008 Kazakhstan was forecast to increase production and become the world's largest supplier of uranium by 2009. The prediction came true, and Kazakhstan has dominated the world's uranium market since 2010. In 2021, its share was 45.1%, followed by Namibia (11.9%), Canada (9.7%), Australia (8.7%), Uzbekistan (7.2%), Niger (4.7%), Russia (5.5%), China (3.9%), India (1.3%), Ukraine (0.9%), and South Africa (0.8%), with a world total production of 48,332 tonnes. Most uranium was produced not by conventional underground mining of ores (29% of production), but by in situ leaching (66%). In the late 1960s, UN geologists also discovered major uranium deposits and other rare mineral reserves in Somalia. The find was the largest of its kind, with industry experts estimating the deposits at over 25% of the world's then known uranium reserves of 800,000 tons. The ultimate available supply is believed to be sufficient for at least the next 85 years, although some studies indicate underinvestment in the late twentieth century may produce supply problems in the 21st century.
Occurrence:
Uranium deposits seem to be log-normally distributed. There is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade.
In other words, there is little high grade ore and proportionately much more low grade ore available.
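One way to read the 300-fold rule just stated is as a power law relating recoverable tonnage to cut-off grade. The sketch below derives the implied exponent from the numbers in the text; treating the relationship as an exact power law is a simplifying assumption, not a claim from the source.

```python
import math

# What "300x more recoverable uranium per tenfold drop in ore grade" implies,
# if recoverable tonnage is assumed to follow a power law in grade.
ratio_per_decade = 300
exponent = math.log10(ratio_per_decade)                 # tonnage ~ grade ** (-exponent)
print(f"Implied power-law exponent: {exponent:.2f}")    # ~2.48

# Example: halving the cut-off grade would multiply the recoverable amount by about
print(f"Factor for halving the grade: {2 ** exponent:.1f}")   # ~5.6x
```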
Compounds:
Oxidation states and oxides Oxides Calcined uranium yellowcake, as produced in many large mills, contains a distribution of uranium oxidation species in various forms ranging from most oxidized to least oxidized. Particles with short residence times in a calciner will generally be less oxidized than those with long retention times or particles recovered in the stack scrubber. Uranium content is usually referenced to U3O8, which dates to the days of the Manhattan Project when U3O8 was used as an analytical chemistry reporting standard. Phase relationships in the uranium-oxygen system are complex. The most important oxidation states of uranium are uranium(IV) and uranium(VI), and their two corresponding oxides are, respectively, uranium dioxide (UO2) and uranium trioxide (UO3). Other uranium oxides such as uranium monoxide (UO), diuranium pentoxide (U2O5), and uranium peroxide (UO4·2H2O) also exist.
Compounds:
The most common forms of uranium oxide are triuranium octoxide (U3O8) and UO2. Both oxide forms are solids that have low solubility in water and are relatively stable over a wide range of environmental conditions. Triuranium octoxide is (depending on conditions) the most stable compound of uranium and is the form most commonly found in nature. Uranium dioxide is the form in which uranium is most commonly used as a nuclear reactor fuel. At ambient temperatures, UO2 will gradually convert to U3O8. Because of their stability, uranium oxides are generally considered the preferred chemical form for storage or disposal.
Compounds:
Aqueous chemistry Salts of many oxidation states of uranium are water-soluble and may be studied in aqueous solutions. The most common ionic forms are U³⁺ (brown-red), U⁴⁺ (green), UO₂⁺ (unstable), and UO₂²⁺ (yellow), for U(III), U(IV), U(V), and U(VI), respectively. A few solid and semi-metallic compounds such as UO and US exist for the formal oxidation state uranium(II), but no simple ions are known to exist in solution for that state. Ions of U³⁺ liberate hydrogen from water and are therefore considered to be highly unstable. The UO₂²⁺ ion represents the uranium(VI) state and is known to form compounds such as uranyl carbonate, uranyl chloride and uranyl sulfate. UO₂²⁺ also forms complexes with various organic chelating agents, the most commonly encountered of which is uranyl acetate. Unlike the uranyl salts of uranium and polyatomic ion uranium-oxide cationic forms, the uranates, salts containing a polyatomic uranium-oxide anion, are generally not water-soluble.
Compounds:
Carbonates The interactions of carbonate anions with uranium(VI) cause the Pourbaix diagram to change greatly when the medium is changed from water to a carbonate containing solution. While the vast majority of carbonates are insoluble in water (students are often taught that all carbonates other than those of alkali metals are insoluble in water), uranium carbonates are often soluble in water. This is because a U(VI) cation is able to bind two terminal oxides and three or more carbonates to form anionic complexes.
Compounds:
Effects of pH The uranium fraction diagrams in the presence of carbonate illustrate this further: when the pH of a uranium(VI) solution increases, the uranium is converted to a hydrated uranium oxide hydroxide and at high pHs it becomes an anionic hydroxide complex.
When carbonate is added, uranium is converted to a series of carbonate complexes if the pH is increased. One effect of these reactions is increased solubility of uranium in the pH range 6 to 8, a fact that has a direct bearing on the long term stability of spent uranium dioxide nuclear fuels.
Compounds:
Hydrides, carbides and nitrides Uranium metal heated to 250 to 300 °C (482 to 572 °F) reacts with hydrogen to form uranium hydride. Even higher temperatures will reversibly remove the hydrogen. This property makes uranium hydrides convenient starting materials to create reactive uranium powder along with various uranium carbide, nitride, and halide compounds. Two crystal modifications of uranium hydride exist: an α form that is obtained at low temperatures and a β form that is created when the formation temperature is above 250 °C. Uranium carbides and uranium nitrides are both relatively inert semimetallic compounds that are minimally soluble in acids, react with water, and can ignite in air to form U3O8. Carbides of uranium include uranium monocarbide (UC), uranium dicarbide (UC2), and diuranium tricarbide (U2C3). Both UC and UC2 are formed by adding carbon to molten uranium or by exposing the metal to carbon monoxide at high temperatures. Stable below 1800 °C, U2C3 is prepared by subjecting a heated mixture of UC and UC2 to mechanical stress. Uranium nitrides obtained by direct exposure of the metal to nitrogen include uranium mononitride (UN), uranium dinitride (UN2), and diuranium trinitride (U2N3).
Compounds:
Halides All uranium fluorides are created using uranium tetrafluoride (UF4); UF4 itself is prepared by hydrofluorination of uranium dioxide. Reduction of UF4 with hydrogen at 1000 °C produces uranium trifluoride (UF3). Under the right conditions of temperature and pressure, the reaction of solid UF4 with gaseous uranium hexafluoride (UF6) can form the intermediate fluorides of U2F9, U4F17, and UF5. At room temperature, UF6 has a high vapor pressure, making it useful in the gaseous diffusion process to separate the rare uranium-235 from the common uranium-238 isotope. This compound can be prepared from uranium dioxide and uranium hydride by the following process: UO2 + 4 HF → UF4 + 2 H2O (500 °C, endothermic), followed by UF4 + F2 → UF6 (350 °C, endothermic). The resulting UF6, a white solid, is highly reactive (by fluorination), easily sublimes (emitting a vapor that behaves as a nearly ideal gas), and is the most volatile compound of uranium known to exist. One method of preparing uranium tetrachloride (UCl4) is to directly combine chlorine with either uranium metal or uranium hydride. The reduction of UCl4 by hydrogen produces uranium trichloride (UCl3) while the higher chlorides of uranium are prepared by reaction with additional chlorine. All uranium chlorides react with water and air.
Compounds:
Bromides and iodides of uranium are formed by direct reaction of, respectively, bromine and iodine with uranium or by adding UH3 to those elements' acids. Known examples include: UBr3, UBr4, UI3, and UI4. UI5 has never been prepared. Uranium oxyhalides are water-soluble and include UO2F2, UOCl2, UO2Cl2, and UO2Br2. The stability of the oxyhalides decreases as the atomic weight of the component halide increases.
Isotopes:
Uranium, like all elements with an atomic number greater than 82, has no stable isotopes. All isotopes of uranium are radioactive because the strong nuclear force does not prevail over electromagnetic repulsion in nuclides containing more than 82 protons. Nevertheless, the two most stable isotopes, uranium-238 and uranium-235, have half-lives long enough to occur in nature as primordial radionuclides, with measurable quantities having survived since the formation of the Earth. These two nuclides, along with thorium-232, are the only confirmed primordial nuclides heavier than nearly-stable bismuth-209. Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). There are also four other trace isotopes: uranium-239, which is formed when 238U undergoes spontaneous fission, releasing neutrons that are captured by another 238U atom; uranium-237, which is formed when 238U captures a neutron but emits two more, which then decays to neptunium-237; uranium-236, which occurs in trace quantities due to neutron capture on 235U and as a decay product of plutonium-244; and finally, uranium-233, which is formed in the decay chain of neptunium-237.
Isotopes:
Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.463×10⁹ years, roughly the age of the Earth. Uranium-238 is predominantly an alpha emitter, decaying to thorium-234. It ultimately decays through the uranium series, which has 18 members, into lead-206. Uranium-238 is not fissile, but is a fertile isotope, because after neutron activation it can be converted to plutonium-239, another fissile isotope. Indeed, the 238U nucleus can absorb one neutron to produce the radioactive isotope uranium-239. 239U decays by beta emission to neptunium-239, also a beta-emitter, which in turn decays within a few days into plutonium-239. 239Pu was used as fissile material in the first atomic bomb detonated in the "Trinity test" on 15 July 1945 in New Mexico. Uranium-235 has a half-life of about 7.04×10⁸ years; it is the next most stable uranium isotope after 238U and is also predominantly an alpha emitter, decaying to thorium-231. Uranium-235 is important for both nuclear reactors and nuclear weapons, because it is the only uranium isotope existing in nature on Earth in any significant amount that is fissile. This means that it can be split into two or three fragments (fission products) by thermal neutrons. The decay chain of 235U, which is called the actinium series, has 15 members and eventually decays into lead-207. The constant rates of decay in these decay series make the comparison of the ratios of parent to daughter elements useful in radiometric dating.
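A half-life this long translates into a very low specific activity, which the following sketch makes concrete using only the half-life quoted above, Avogadro's number, and the 238U molar mass; it is an illustrative calculation, not a figure taken from the source.

```python
import math

# Specific activity of 238U implied by the half-life quoted above.
AVOGADRO = 6.022e23
half_life_s = 4.463e9 * 3.156e7            # years -> seconds
atoms_per_gram = AVOGADRO / 238.05         # 238U molar mass in g/mol

activity_bq_per_g = math.log(2) / half_life_s * atoms_per_gram
print(f"238U specific activity: {activity_bq_per_g / 1e3:.1f} kBq/g")   # ~12.4 kBq per gram
```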
Isotopes:
Uranium-236 has a half-life of 2.342×10⁷ years and is not found in significant quantities in nature. The half-life of uranium-236 is too short for it to be primordial, though it has been identified as an extinct progenitor of its alpha decay daughter, thorium-232. Uranium-236 occurs in spent nuclear fuel when neutron capture on 235U does not induce fission, or as a decay product of plutonium-240. Uranium-236 is not fertile, as three more neutron captures are required to produce fissile 239Pu, and is not itself fissile; as such, it is considered long-lived radioactive waste. Uranium-234 is a member of the uranium series and occurs in equilibrium with its progenitor, 238U; it undergoes alpha decay with a half-life of 245,500 years and decays to lead-206 through a series of relatively short-lived isotopes.
Isotopes:
Uranium-233 undergoes alpha decay with a half-life of 160,000 years and, like 235U, is fissile. It can be bred from thorium-232 via neutron bombardment, usually in a nuclear reactor; this process is known as the thorium fuel cycle. Owing to the fissility of 233U and the greater natural abundance of thorium (three times that of uranium), 233U has been investigated for use as nuclear fuel as a possible alternative to 235U and 239Pu, though it is not in widespread use as of 2022. The decay chain of uranium-233 forms part of the neptunium series and ends at nearly-stable bismuth-209 (half-life 2.01×10¹⁹ years) and stable thallium-205.
Isotopes:
Uranium-232 is an alpha emitter with a half-life of 68.9 years. This isotope is produced as a byproduct in production of 233U and is considered a nuisance, as it is not fissile and decays through short-lived alpha and gamma emitters such as 208Tl. It is also expected that thorium-232 should be able to undergo double beta decay, which would produce uranium-232, but this has not yet been observed experimentally. All isotopes from 232U to 236U inclusive have minor cluster decay branches (less than 10⁻¹⁰%), and all these bar 233U, in addition to 238U, have minor spontaneous fission branches; the greatest branching ratio for spontaneous fission is about 5×10⁻⁵% for 238U, or about one in every two million decays. The shorter-lived trace isotopes 237U and 239U exclusively undergo beta decay, with respective half-lives of 6.752 days and 23.45 minutes. In total, 28 isotopes of uranium have been identified, ranging in mass number from 214 to 242, with the exception of 220. Among the uranium isotopes not found in natural samples or nuclear fuel, the longest-lived is 230U, an alpha emitter with a half-life of 20.23 days. This isotope has been considered for use in targeted alpha-particle therapy (TAT). All other isotopes have half-lives shorter than one hour, except for 231U (half-life 4.2 days) and 240U (half-life 14.1 hours). The shortest-lived known isotope is 221U, with a half-life of 660 nanoseconds, and it is expected that the hitherto unknown 220U has an even shorter half-life. The proton-rich isotopes lighter than 232U primarily undergo alpha decay, except for 229U and 231U, which decay to protactinium isotopes via positron emission and electron capture, respectively; the neutron-rich 240U and 242U undergo beta decay to form neptunium isotopes.
Isotopes:
Enrichment In nature, uranium is found as uranium-238 (99.2742%) and uranium-235 (0.7204%). Isotope separation concentrates (enriches) the fissile uranium-235 for nuclear weapons and most nuclear power plants, except for gas cooled reactors and pressurised heavy water reactors. Most neutrons released by a fissioning atom of uranium-235 must impact other uranium-235 atoms to sustain the nuclear chain reaction. The concentration and amount of uranium-235 needed to achieve this is called a 'critical mass'.
Isotopes:
To be considered 'enriched', the uranium-235 fraction should be between 3% and 5%. This process produces huge quantities of uranium that is depleted of uranium-235, with a correspondingly increased fraction of uranium-238, called depleted uranium or 'DU'. To be considered 'depleted', the uranium-235 isotope concentration should be no more than 0.3%. The price of uranium has risen since 2001, so enrichment tailings containing more than 0.35% uranium-235 are being considered for re-enrichment, driving the price of depleted uranium hexafluoride above $130 per kilogram in July 2007 from $5 in 2001. The gas centrifuge process, where gaseous uranium hexafluoride (UF6) is separated by the difference in molecular weight between 235UF6 and 238UF6 using high-speed centrifuges, is the cheapest and leading enrichment process. The gaseous diffusion process had been the leading method for enrichment and was used in the Manhattan Project. In this process, uranium hexafluoride is repeatedly diffused through a silver-zinc membrane, and the different isotopes of uranium are separated by diffusion rate (since uranium-238 is heavier it diffuses slightly slower than uranium-235). The molecular laser isotope separation method employs a laser beam of precise energy to sever the bond between uranium-235 and fluorine. This leaves uranium-238 bonded to fluorine and allows uranium-235 metal to precipitate from the solution. An alternative laser method of enrichment is known as atomic vapor laser isotope separation (AVLIS) and employs visible tunable lasers such as dye lasers. Another method used is liquid thermal diffusion. The only significant deviation from the 235U to 238U ratio in any known natural samples occurs in Oklo, Gabon, where natural nuclear fission reactors consumed some of the 235U some two billion years ago, when the ratio of 235U to 238U was more akin to that of low-enriched uranium, allowing regular ("light") water to act as a neutron moderator, akin to the process in human-made light water reactors. The existence of such natural fission reactors, which had been theoretically predicted beforehand, was proven when slight deviations of the 235U concentration from the expected values were discovered during uranium enrichment in France. Subsequent investigations to rule out any nefarious human action (such as stealing of 235U) confirmed the theory by finding isotope ratios of common fission products (or rather their stable daughter nuclides) in line with the values expected for fission but deviating from the values expected for non-fission derived samples of those elements.
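The reason gaseous diffusion requires so many stages can be illustrated with a short, idealized calculation: the single-stage separation factor follows from the square root of the UF6 molecular-mass ratio (Graham's law). The sketch below ignores the stage cut and cascade losses, so the stage count is only a lower bound and is not a figure from the source.

```python
import math

# Idealized single-stage separation factor for gaseous diffusion of UF6 (Graham's law).
m_235_uf6 = 235.04 + 6 * 19.00
m_238_uf6 = 238.05 + 6 * 19.00
alpha = math.sqrt(m_238_uf6 / m_235_uf6)
print(f"Single-stage separation factor: {alpha:.4f}")       # ~1.0043

# Lower bound on the number of ideal stages to go from natural uranium to ~4% 235U.
feed_ratio = 0.0072 / 0.9928             # natural 235U/238U atom ratio
product_ratio = 0.04 / 0.96              # ~4% enrichment, typical reactor fuel
stages = math.log(product_ratio / feed_ratio) / math.log(alpha)
print(f"Ideal stages to reach ~4%: {stages:.0f}")           # several hundred stages
```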
Human exposure:
A person can be exposed to uranium (or its radioactive daughters, such as radon) by inhaling dust in air or by ingesting contaminated water and food. The amount of uranium in air is usually very small; however, people who work in factories that process phosphate fertilizers, live near government facilities that made or tested nuclear weapons, live or work near a modern battlefield where depleted uranium weapons have been used, or live or work near a coal-fired power plant or near facilities that mine or process uranium ore or enrich uranium for reactor fuel, may have increased exposure to uranium. Houses or structures that are over uranium deposits (either natural or man-made slag deposits) may have an increased incidence of exposure to radon gas. The Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for uranium exposure in the workplace as 0.25 mg/m³ over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.2 mg/m³ over an 8-hour workday and a short-term limit of 0.6 mg/m³. At levels of 10 mg/m³, uranium is immediately dangerous to life and health. Most ingested uranium is excreted during digestion. Only 0.5% is absorbed when insoluble forms of uranium, such as its oxide, are ingested, whereas absorption of the more soluble uranyl ion can be up to 5%. However, soluble uranium compounds tend to quickly pass through the body, whereas insoluble uranium compounds, especially when inhaled by way of dust into the lungs, pose a more serious exposure hazard. After entering the bloodstream, the absorbed uranium tends to bioaccumulate and stay for many years in bone tissue because of uranium's affinity for phosphates. Incorporated uranium becomes uranyl ions, which accumulate in bone, liver, kidney, and reproductive tissues. The radiological and chemical toxicity of uranium are compounded by the fact that elements of high atomic number Z like uranium exhibit phantom or secondary radiotoxicity through absorption of natural background gamma and X-rays and re-emission of photoelectrons, which, in combination with the high affinity of uranium for the phosphate moiety of DNA, causes an increased number of single- and double-strand DNA breaks. Uranium is not absorbed through the skin, and alpha particles released by uranium cannot penetrate the skin. Uranium can be decontaminated from steel surfaces and aquifers.
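The absorbed fractions quoted above translate into very different systemic doses for the same intake, as the small illustration below shows. The ingested amount is a hypothetical example; only the 0.5% and 5% absorbed fractions come from the text.

```python
# Illustration of the ingestion-absorption figures quoted above.
intake_mg = 1.0                            # hypothetical ingested amount of uranium

absorbed_insoluble_mg = intake_mg * 0.005  # ~0.5% absorbed for insoluble forms such as the oxide
absorbed_soluble_mg = intake_mg * 0.05     # up to ~5% absorbed for the soluble uranyl ion
print(f"Absorbed (insoluble forms): {absorbed_insoluble_mg * 1000:.1f} µg")
print(f"Absorbed (soluble uranyl, upper bound): {absorbed_soluble_mg * 1000:.1f} µg")
```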
Human exposure:
Effects and precautions Normal functioning of the kidney, brain, liver, heart, and other systems can be affected by uranium exposure, because, besides being weakly radioactive, uranium is a toxic metal. Uranium is also a reproductive toxicant. Radiological effects are generally local because alpha radiation, the primary form of 238U decay, has a very short range, and will not penetrate skin. Alpha radiation from inhaled uranium has been demonstrated to cause lung cancer in exposed nuclear workers. While the CDC has published one study finding that no human cancer has been seen as a result of exposure to natural or depleted uranium, exposure to uranium and its decay products, especially radon, is a significant health threat. Exposure to strontium-90, iodine-131, and other fission products is unrelated to uranium exposure, but may result from medical procedures or exposure to spent reactor fuel or fallout from nuclear weapons. Although accidental inhalation exposure to a high concentration of uranium hexafluoride has resulted in human fatalities, those deaths were associated with the generation of highly toxic hydrofluoric acid and uranyl fluoride rather than with uranium itself. Finely divided uranium metal presents a fire hazard because uranium is pyrophoric; small grains will ignite spontaneously in air at room temperature. Uranium metal is commonly handled with gloves as a sufficient precaution. Uranium concentrate is handled and contained so as to ensure that people do not inhale or ingest it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Regional Playback Control**
Regional Playback Control:
RPC-1 and RPC-2 are designations applied to firmware for DVD drives. Older DVD drives use RPC-1 firmware, which allows DVDs from any region to play. Newer drives use RPC-2 firmware, which enforces DVD region coding at the hardware level. See DVD region code#Computer DVD drives for further information.
Some RPC-2 drives can be converted to RPC-1 with the same features as before by using alternative firmware on the drive, or on some drives by setting a secret flag in the drive's EEPROM. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Automatic trip**
Automatic trip:
An automatic trip is an action performed by some system, usually a safety instrumented system, programmable logic controller, or distributed control system, to put an industrial process into a safe state. It is triggered by some parameter going into a pre-determined unsafe state. It is usually preceded by an alarm to give a process operator a chance to correct the condition to prevent the trip, since trips are usually costly because of lost production. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
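The alarm-then-trip behaviour described above can be illustrated with a small sketch. The parameter name, threshold values and actions below are hypothetical and chosen purely for illustration; they are not taken from any real safety instrumented system or standard.

```python
# Illustrative alarm-then-trip logic of the kind described above.
ALARM_LIMIT = 95.0    # hypothetical value at which the operator is alerted
TRIP_LIMIT = 100.0    # hypothetical pre-determined unsafe value at which the trip acts

def evaluate(pressure: float) -> str:
    """Return the action for a single reading of a monitored process parameter."""
    if pressure >= TRIP_LIMIT:
        return "TRIP: drive the process to a safe state (e.g. close feed, vent)"
    if pressure >= ALARM_LIMIT:
        return "ALARM: operator may still correct the condition and avoid the trip"
    return "NORMAL"

for reading in (90.0, 96.5, 101.2):
    print(reading, "->", evaluate(reading))
```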
**Wiskott–Aldrich syndrome protein**
Wiskott–Aldrich syndrome protein:
The Wiskott–Aldrich Syndrome protein (WASp) is a 502-amino acid protein expressed in cells of the hematopoietic system that in humans is encoded by the WAS gene. In the inactive state, WASp exists in an autoinhibited conformation with sequences near its C-terminus binding to a region near its N-terminus. Its activation is dependent upon CDC42 and PIP2 acting to disrupt this interaction, causing the WASp protein to 'open'. This exposes a domain near the WASp C-terminus that binds to and activates the Arp2/3 complex. Activated Arp2/3 nucleates new F-actin.
Wiskott–Aldrich syndrome protein:
WASp is the founding member of a gene family which also includes the broadly expressed N-WASP (neuronal Wiskott–Aldrich Syndrome protein), SCAR/WAVE1, WASH, WHAMM, and JMY. WAML (WASP and MIM like), WAWH (WASP without WH1 domain), and WHIMP (WAVE Homology in Membrane Protrusions) have more recently been discovered.
Structure and function:
The Wiskott–Aldrich syndrome (WAS) family of proteins share similar domain structure, and are involved in transduction of signals from receptors on the cell surface to the actin cytoskeleton. The presence of a number of different motifs suggests they are regulated by a number of different stimuli, and interact with multiple proteins. These proteins, directly or indirectly, associate with the small GTPase CDC42, known to regulate formation of actin filaments, and the cytoskeletal organising complex, Arp2/3.
Structure and function:
The WASp family of proteins includes WASp, N-WASp, SCAR/WAVE, WHAMM and WASH. The five of them share a C-terminal VCA (verprolin, central, acidic) domain, where they interact with the actin-nucleating complex (Arp2/3), and they differ in their other domains. WASp and N-WASp are analogs; they contain an N-terminal EVH1 domain, a C-terminal VCA domain and central B and GBD (GTP binding domain) domains. WASp is expressed exclusively in hematopoietic cells, and neuronal WASp (N-WASp) is ubiquitously expressed. N-WASp contains an output region and a control region that are essential for its regulation. The output region is called the VVCA domain. It is located towards the C-terminal end of the protein and contains four motifs: two verprolin homology motifs (VV) bind actin monomers and deliver them to Arp2/3; the central domain (C) was once thought to bind cofilin but is now believed to enhance the interactions between the V domains and actin monomers, as well as the interaction between the A domain and Arp2/3; and the acidic motif (A) binds Arp2/3. In isolation, the VCA region is constitutively active. However, in full-length N-WASp the control region suppresses VCA domain activity. The control region is located at the N-terminal end of N-WASp. The control region contains a CDC42-binding domain (GBD) and a PIP2-binding domain (B), both of which are critical for proper regulation of N-WASp. Cooperative binding of CDC42 and PIP2 relieves the autoinhibition of N-WASp, causing Arp2/3 to carry out actin polymerization. WASp interacting protein (WIP) interacts with the WASp N-terminal domain (WH1), preventing its degradation and stabilising its auto-inhibitory conformation.
Structure and function:
In the absence of CDC42 and PIP2, N-WASp is in an inactive, locked conformation. Cooperative binding of both CDC42 and PIP2 relieves the autoinhibition. The cooperative binding of CDC42 and PIP2 is thermodynamically favored; binding of one enhances binding of the other. CDC42 and PIP2 localize the N-WASp-Arp2/3 complex to the plasma membrane. This localization ensures the actin polymers will be able to push through the plasma membrane and form the filopodia required for cell motility. WASp is required for various functions in myeloid and lymphoid immune cells. Many of these, such as phagocytosis and podosome formation, relate to its role in regulating the polymerization of actin filaments. Other functions of WASp depend on its activity as a scaffold protein for assembly of effective signalling complexes downstream of antigen receptor or integrin engagement. In NK cells in particular, it participates in synapse formation and in the polarization of perforin to the immune synapse for NK cell cytotoxicity. When WASp is absent or mutated, immune synapse formation and downstream TCR/BCR signaling in T cells and B cells are also affected.
Clinical significance:
Wiskott–Aldrich syndrome is a rare, inherited, X-linked, recessive disease characterized by immune dysregulation and microthrombocytopenia, and is caused by mutations in the WAS gene. The WAS gene product is a cytoplasmic protein, expressed exclusively in hematopoietic cells, which show signalling and cytoskeletal abnormalities in WAS patients. A transcript variant arising as a result of alternative promoter usage, and containing a different 5' UTR sequence, has been described, but its full-length nature is not known. WASp is the product of the WAS gene, and mutations in this gene can lead to Wiskott–Aldrich syndrome (an X-linked disease that mainly affects males, with symptoms that include thrombocytopenia, eczema, recurrent infections, and small-sized platelets); in these patients the protein is usually significantly reduced or absent. Other, less inactivating mutations affecting WASp cause X-linked thrombocytopenia, or XLT, in which protein levels are usually detectable by flow cytometry. The majority of the mutations causing classic WAS are located in the WH1 domain of the protein, and these mutations affect binding with the WASp Interacting Protein. Mutations located in the GBD domain disrupt autoinhibition and lead to an unfolded protein that is constitutively active. Unlike in WAS and XLT, WASp in this case is present and active. Activated WASp leads to nuclear localization of actin filaments, and this can lead to premature apoptosis, aneuploidy and failure to undergo cytokinesis. This, in turn, causes myelodysplasia and X-linked neutropenia. A prospective gene therapy for Wiskott–Aldrich syndrome, OTL-103, uses autologous CD34+ lymphocytes that are transfected with a lentiviral vector to produce functional WASp. As of 28 June 2021, OTL-103 was undergoing Phase I/II clinical trials at the San Raffaele Hospital in Milan, Italy.
Interactions:
Wiskott–Aldrich syndrome protein has been shown to interact with: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glass-to-metal seal**
Glass-to-metal seal:
Glass-to-metal seals are a type of mechanical seal which joins glass and metal surfaces. They are very important elements in the construction of vacuum tubes, electric discharge tubes, incandescent light bulbs, glass-encapsulated semiconductor diodes, reed switches, glass windows in metal cases, and metal or ceramic packages of electronic components.
Glass-to-metal seal:
Properly done, such a seal is hermetic (capable of supporting a vacuum) and can provide good electrical insulation and special optical properties (e.g. for UV lamps). To achieve such a seal, two properties must hold: the molten glass must be capable of wetting the metal, in order to form a tight bond, and the thermal expansion of the glass and metal must be closely matched so that the seal remains solid as the assembly cools. Consider, for example, a metal wire sealed into a glass bulb: the metal-glass contact can break if the coefficients of thermal expansion (CTE) are not well matched. If the CTE of the metal is larger than that of the glass, the seal is likely to break upon cooling, because the metal wire shrinks more than the glass does, placing a strong tensile force on the glass that finally leads to breakage. On the other hand, if the CTE of the glass is larger than that of the metal wire, the seal will tighten upon cooling, since a compressive force is applied to the glass.
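A rough feel for how tight the CTE match must be can be had from the simple mismatch-strain estimate below. The expansion coefficients, cooling range and glass modulus are assumed representative values (e.g. a Kovar-to-borosilicate seal), not figures from the source, and the linear-elastic treatment is only an order-of-magnitude sketch.

```python
# Rough estimate of the thermal-mismatch strain and stress in a matched glass-to-metal seal.
cte_metal = 5.5e-6        # 1/K, assumed representative of Kovar
cte_glass = 3.3e-6        # 1/K, assumed representative of borosilicate glass
delta_T = 500.0           # K of cooling from the sealing temperature (assumed)

mismatch_strain = (cte_metal - cte_glass) * delta_T
print(f"Mismatch strain: {mismatch_strain:.2e}")                     # ~1.1e-3

E_glass = 64e9            # Pa, assumed Young's modulus of the glass
print(f"Order-of-magnitude stress: {mismatch_strain * E_glass / 1e6:.0f} MPa")
```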
Glass-to-metal seal:
Given all the requirements that must be fulfilled, and in particular the need to closely match the CTEs of the two materials, only a few companies offer specialty glass for glass-to-metal sealing, such as SCHOTT AG and Morgan Advanced Materials.
Glass-to-metal bonds:
Glass and metal can bond together by purely mechanical means, which usually gives weaker joints, or by chemical interaction, where the oxide layer on the metal surface forms a strong bond with the glass (the glass itself is composed of about 73% silicon dioxide, SiO2). Acid-base reactions are the main cause of glass-metal interaction when metal oxides are present on the metal surface. After complete dissolution of the surface oxides into the glass, further progress of the interaction depends on the oxygen activity at the interface. The oxygen activity can be increased by diffusion of molecular oxygen through defects such as cracks.
Glass-to-metal bonds:
Also, reduction of the thermodynamically less stable components in the glass (which releases oxygen ions) can increase the oxygen activity at the interface. In other words, redox reactions are the main cause of glass-metal interaction in the absence of metal oxides on the metal surface. To achieve a vacuum-tight seal, the seal must not contain bubbles. The bubbles are most commonly created by gases escaping the metal at high temperature; degassing the metal before sealing is therefore important, especially for nickel and iron and their alloys. This is achieved by heating the metal in vacuum, sometimes in a hydrogen atmosphere, or in some cases even in air, at temperatures above those used during the sealing process. Oxidizing the metal surface also reduces gas evolution. Most of the evolved gas is produced due to the presence of carbon impurities in the metals; these can be removed by heating in hydrogen. The glass-oxide bond is stronger than the glass-metal bond. The oxide forms a layer on the metal surface, with the proportion of oxygen changing from zero in the metal to the stoichiometry of the oxide and the glass itself. An overly thick oxide layer tends to be porous at the surface and mechanically weak, flaking off, compromising the bond strength and creating possible leakage paths along the metal-oxide interface. Proper thickness of the oxide layer is therefore critical.
Glass-to-metal bonds:
Copper Metallic copper does not bond well to glass. Copper(I) oxide, however, is wetted by molten glass and partially dissolves in it, forming a strong bond. The oxide also bonds well to the underlying metal. But copper(II) oxide causes weak joints that may leak and its formation must be prevented.
Glass-to-metal bonds:
For bonding copper to glass, the surface needs to be properly oxidized. The oxide layer must have the right thickness: too little oxide would not provide enough material for the glass to anchor to, while too much oxide would cause the oxide layer to fail, and in both cases the joint would be weak and possibly non-hermetic. To improve the bonding to glass, the oxide layer should be borated; this is achieved by, e.g., dipping the hot part into a concentrated solution of borax and then heating it again for a certain time. This treatment stabilizes the oxide layer by forming a thin protective layer of sodium borate on its surface, so the oxide does not grow too thick during subsequent handling and joining. The layer should have a uniform deep red to purple sheen. The boron oxide from the borated layer diffuses into the glass and lowers its melting point. The oxidation occurs by oxygen diffusing through the molten borate layer and forming copper(I) oxide, while formation of copper(II) oxide is inhibited. The copper-to-glass seal should look brilliant red, almost scarlet; pink, sherry and honey colors are also acceptable. Too thin an oxide layer appears light, up to the color of metallic copper, while an overly thick oxide looks too dark.
Glass-to-metal bonds:
Oxygen-free copper has to be used if the metal comes in contact with hydrogen (e.g. in a hydrogen-filled tube or during handling in the flame). Normally, copper contains small inclusions of copper(I) oxide. Hydrogen diffuses through the metal and reacts with the oxide, reducing it to copper and yielding water. The water molecules however can not diffuse through the metal, are trapped in the location of the inclusion, and cause embrittlement.
Glass-to-metal bonds:
As copper(I) oxide bonds well to the glass, it is often used for combined glass-metal devices. The ductility of copper can be used for compensation of the thermal expansion mismatch in e.g. the knife-edge seals. For wire feed throughs, dumet wire – nickel-iron alloy plated with copper – is frequently used. Its maximum diameter is however limited to about 0.5 mm due to its thermal expansion.
Glass-to-metal bonds:
Copper can be sealed to glass without the oxide layer, but the resulting joint is less strong.
Platinum Platinum has similar thermal expansion as glass and is well-wetted with molten glass. It however does not form oxides, so its bond strength is lower. The seal has metallic color and limited strength.
Gold Like platinum, gold does not form oxides that could assist in bonding. Glass-gold bonds are therefore metallic in color and weak. Gold tends to be used for glass-metal seals only rarely. Special compositions of soda-lime glasses that match the thermal expansion of gold, containing tungsten trioxide and oxides of lanthanum, aluminum and zirconium, exist.
Silver Silver forms a thin layer of silver oxide on its surface. This layer dissolves in molten glass and forms silver silicate, facilitating a strong bond.
Nickel Nickel can bond with glass either as a metal, or via the nickel(II) oxide layer. The metal joint has metallic color and inferior strength. The oxide-layer joint has characteristic green-grey color. Nickel plating can be used in similar way as copper plating, to facilitate better bonding with the underlying metal.
Glass-to-metal bonds:
Iron Iron is only rarely used for feedthroughs, but frequently gets coated with vitreous enamel, where the interface is also a glass-metal bond. The bond strength is also governed by the character of the oxide layer on its surface. The presence of cobalt in the glass leads to a chemical reaction between the metallic iron and cobalt oxide, yielding iron oxide dissolved in glass and cobalt alloying with the iron and forming dendrites, growing into the glass and improving the bond strength. Iron cannot be directly sealed to lead glass, as it reacts with the lead oxide and reduces it to metallic lead. For sealing to lead glasses, it has to be copper-plated or an intermediate lead-free glass has to be used. Iron is prone to creating gas bubbles in glass due to the residual carbon impurities; these can be removed by heating in wet hydrogen. Plating with copper, nickel or chromium is also advised.
Glass-to-metal bonds:
Chromium Chromium is a highly reactive metal present in many iron alloys. Chromium may react with glass, reducing the silicon and forming crystals of chromium silicide growing into the glass and anchoring together the metal and glass, improving the bond strength.
Glass-to-metal bonds:
Kovar Kovar, an iron-nickel-cobalt alloy, has low thermal expansion similar to high-borosilicate glass and is frequently used for glass-metal seals especially for the application in x-ray tubes or glass lasers. It can bond to glass via the intermediate oxide layer of nickel(II) oxide and cobalt(II) oxide; the proportion of iron oxide is low due to its reduction with cobalt. The bond strength is highly dependent on the oxide layer thickness and character. The presence of cobalt makes the oxide layer easier to melt and dissolve in the molten glass. A grey, grey-blue or grey-brown color indicates a good seal. A metallic color indicates lack of oxide, while black color indicates overly oxidized metal, in both cases leading to a weak joint.
Glass-to-metal bonds:
Molybdenum Molybdenum bonds to the glass via the intermediate layer of molybdenum(IV) oxide. Due to its low thermal expansion coefficient, matched to glass, molybdenum, like tungsten, is often used for glass-metal bonds especially in conjunction with aluminium-silicate glass. Its high electrical conductivity makes it superior over nickel-cobalt-iron alloys. It is favored by the lighting industry as feedthroughs for lightbulbs and other devices. Molybdenum oxidizes much faster than tungsten and quickly develops a thick oxide layer that does not adhere well, its oxidation should be therefore limited to just yellowish or at most blue-green color. The oxide is volatile and evaporates as a white smoke above 700 °C; excess oxide can be removed by heating in inert gas (argon) at 1000 °C. Molybdenum strips are used instead of wires where higher currents (and higher cross-sections of the conductors) are needed.
Glass-to-metal bonds:
Tungsten Tungsten bonds to the glass via an intermediate layer of tungsten(VI) oxide. A properly formed bond has a characteristic coppery/orange/brown-yellow color in lithium-free glasses; in lithium-containing glasses the bond is blue due to the formation of lithium tungstate. Due to its low thermal expansion coefficient, matched to glass, tungsten is frequently used for glass-metal bonds. Tungsten forms satisfactory bonds with glasses of similar thermal expansion coefficient, such as high-borosilicate glass. The surface of both the metal and the glass should be smooth, without scratches. Tungsten has the lowest thermal expansion coefficient of the metals and the highest melting point.
Glass-to-metal bonds:
Stainless steel 304 Stainless steel forms bonds with glass via an intermediate layer of chromium(III) oxide and iron(III) oxide. Further reactions of chromium, forming chromium silicide dendrites, are possible. The thermal expansion coefficient of steel is, however, fairly different from that of the glass; as with copper, this can be alleviated by using knife-edge (Houskeeper) seals.
Zirconium Zirconium wire can be sealed to glass with only a little treatment – rubbing with abrasive paper and brief heating in a flame. Zirconium is used in applications demanding chemical resistance or lack of magnetism.
Titanium Titanium, like zirconium, can be sealed to some glasses with only a little treatment.
Glass-to-metal bonds:
Indium Indium and some of its alloys can be used as a solder capable of wetting glass, ceramics, and metals and joining them together. Indium has a low melting point and is very soft; the softness allows it to deform plastically and absorb the stresses from thermal expansion mismatches. Due to its very low vapor pressure, indium finds use in glass-metal seals for vacuum technology and cryogenic applications.
Glass-to-metal bonds:
Gallium Gallium is a soft metal with a melting point of about 30 °C. It readily wets glasses and most metals and can be used for seals that can be assembled and disassembled with only slight heating. It can be used as a liquid seal up to high temperatures, or even at lower temperatures when alloyed with other metals (e.g. as galinstan).
Mercury Mercury is a metal that is liquid at room temperature. It was used as the earliest glass-to-metal seal and is still used for liquid seals, e.g. for rotary shafts.
Mercury seal:
The first technological use of a glass-to-metal seal was the encapsulation of the vacuum in the barometer by Torricelli. The liquid mercury wets the glass and thus provides for a vacuum tight seal. Liquid mercury was also used to seal the metal leads of early mercury arc lamps into the fused silica bulbs.
A less toxic and more expensive alternative to mercury is gallium.
Mercury and gallium seals can be used for vacuum-sealing rotary shafts.
Platinum wire seal:
The next step was to use thin platinum wire. Platinum is easily wetted by glass and has a coefficient of thermal expansion similar to that of typical soda-lime and lead glass. It is also easy to work with because it does not oxidize and has a high melting point. This type of seal was used in scientific equipment throughout the 19th century and also in early incandescent lamps and radio tubes.
Dumet wire seal:
In 1911 the Dumet-wire seal was invented; it is still the common way to seal copper leads through soda-lime or lead glass.
Dumet wire seal:
If copper is properly oxidised before it is wetted by molten glass, a vacuum-tight seal of good mechanical strength can be obtained. After the copper is oxidized, it is often dipped in a borax solution, as borating the copper helps prevent over-oxidation when it is reintroduced to a flame. Simple copper wire is not usable because its CTE is much higher than that of the glass. Thus, on cooling, a strong tensile force acts on the glass-to-metal interface and it breaks.
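As a rough illustration of why the mismatch matters, the first-order mismatch strain is (α_metal − α_glass)·ΔT and the nominal stress on the glass is that strain times the glass's elastic modulus. The sketch below uses approximate, illustrative material values that are not taken from this article; the helper function and the numbers are assumptions for demonstration only.

```python
# Rough, illustrative estimate of thermal mismatch stress at a glass-to-metal
# interface. All material values are approximate assumptions, not from the text.

def mismatch_stress(alpha_metal, alpha_glass, delta_t, e_glass):
    """First-order estimate: stress ~ E_glass * (alpha_metal - alpha_glass) * delta_T."""
    strain = (alpha_metal - alpha_glass) * delta_t
    return strain, strain * e_glass

# Assumed values: copper ~17e-6/K, soda-lime glass ~9e-6/K, E_glass ~70 GPa,
# cooling by roughly 480 K from the sealing temperature to room temperature.
strain, stress = mismatch_stress(17e-6, 9e-6, 480, 70e9)
print(f"mismatch strain ~ {strain:.2%}, nominal stress ~ {stress / 1e6:.0f} MPa")
```

With these assumed numbers the nominal stress comes out at a few hundred MPa, far above the tensile strength of ordinary glass, which is consistent with the statement that a plain copper-to-glass seal cracks on cooling.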
Dumet wire seal:
Glass and glass-to-metal interfaces are especially sensitive to tensile stress. Dumet-wire is a copper-clad wire (25% of copper by weight) with a core of nickel-iron alloy 42 (42% of nickel by weight). The core having a low CTE makes it possible to produce a wire with a radial CTE lower than the linear CTE of the glass, so that the glass-to-metal interface is under a low compression stress. It is not possible to adjust the axial thermal expansion of the wire as well. Because of the much higher mechanical strength of the nickel-iron core compared to the copper, the axial CTE of the wire is about the same as that of the core. Therefore, a shear stress builds up, which is limited to a safe value by the low tensile strength of the copper. This is also the reason why Dumet is only useful for wire diameters lower than about 0.5 mm. In a typical Dumet seal through the base of a vacuum tube, a short piece of Dumet-wire is butt-welded to a nickel wire at one end and a copper wire at the other end. When the base is pressed from lead glass, the Dumet-wire and short parts of the nickel and copper wires are enclosed in the glass. Then the nickel wire and the glass around the Dumet-wire are heated by a gas flame and the glass seals to the Dumet-wire.
Dumet wire seal:
The nickel and copper do not seal vacuum-tight to the glass but are mechanically supported. The butt welding also avoids problems with gas leakage at the interface between the core wire and the copper.
Copper tube seal:
Another possibility to avoid a strong tensile stress when sealing copper through glass is the use of a thin-walled copper tube instead of a solid wire. Here a shear stress builds up in the glass-to-metal interface, which is limited by the low tensile strength of the copper combined with a low tensile stress. The copper tube is insensitive to high electric currents compared to a Dumet seal because, on heating, the tensile stress converts into a compression stress which is again limited by the tensile strength of the copper. Also, it is possible to lead an additional solid copper wire through the copper tube. In a later variant, only a short section of the copper tube has a thin wall, and the copper tube is prevented from shrinking on cooling by a ceramic tube placed inside it.
Copper tube seal:
If large copper parts are to be fitted to glass, such as the water-cooled copper anode of a high-power radio transmitter tube or an x-ray tube, the Houskeeper knife-edge seal has historically been used. Here the end of a copper tube is machined to a sharp knife edge, an approach invented by O. Kruh in 1917. In the method described by W.G. Houskeeper, the outside or the inside of the copper tube right up to the knife edge is wetted with glass and connected to the glass tube. In later descriptions the knife edge is just wetted several millimeters deep with glass, usually deeper on the inside, and then connected to the glass tube.
Copper tube seal:
If copper is sealed to glass, it is an advantage to obtain a thin, bright-red Cu2O-containing layer between the copper and the glass. This is done by borating. Following W.J. Scott, a copper-plated tungsten wire is immersed for about 30 s in chromic acid and then washed thoroughly in running tap water. Then it is dipped into a saturated solution of borax and heated to bright red heat in the oxidizing part of a gas flame, possibly followed by quenching in water and drying. Another method is to oxidize the copper slightly in a gas flame, then dip it into borax solution and let it dry. The surface of the borated copper is black when hot and turns to dark wine red on cooling.
Copper tube seal:
It is also possible to make a bright seal between copper and glass, in which the blank copper surface can be seen through the glass, but this gives less adherence than the seal with the red Cu2O-containing layer. If glass is melted onto copper in a reducing hydrogen atmosphere, the seal is extremely weak. If copper is to be heated in a hydrogen-containing atmosphere, e.g. a gas flame, it needs to be oxygen-free to prevent hydrogen embrittlement. Copper which is meant to be used as an electrical conductor is not necessarily oxygen-free and contains particles of Cu2O, which react with hydrogen that diffuses into the copper, forming H2O that cannot diffuse out of the copper and thus causes embrittlement. The copper usually used in vacuum applications is of the very pure OFHC (oxygen-free high-conductivity) quality, which is free of both Cu2O and deoxidising additives that might evaporate at high temperature in vacuum.
Copper disc seal:
In the copper disc seal, as proposed by W.G. Houskeeper, the end of a glass tube is closed by a round copper disc. An additional ring of glass on the opposite side of the disc increases the possible thickness of the disc to more than 0.3 mm. The best mechanical strength is obtained if both sides of the disc are fused to the same type of glass tube and both tubes are under vacuum. The disc seal is of special practical interest because it is a simple method to make a seal to low-expansion borosilicate glass without the need for special tools or materials. The keys to success are proper borating, heating the joint to a temperature as close to the melting point of the copper as possible, and slowing down the cooling, at least by packing the assembly in glass wool while it is still red hot.
Matched seal:
In a matched seal the thermal expansion of the metal and the glass are matched. Copper-plated tungsten wire can be used to seal through borosilicate glass with a low coefficient of thermal expansion matched by tungsten. The tungsten is electrolytically copper-plated and heated in a hydrogen atmosphere to fill cracks in the tungsten and to get a surface that seals easily to glass. The borosilicate glass of usual laboratory glassware has a lower coefficient of thermal expansion than tungsten, so it is necessary to use an intermediate sealing glass to get a stress-free seal.
Matched seal:
There are combinations of glass and iron-nickel-cobalt alloys (Kovar) where even the non-linearity of the thermal expansion is matched. These alloys can be directly sealed to glass, but then the oxidation is critical. Also, their low electrical conductivity is a disadvantage. Thus, they are often gold plated. It is also possible to use silver plating, but then an additional gold layer is necessary as an oxygen diffusion barrier to prevent the formation of iron oxide.
Matched seal:
While there are Fe-Ni alloys which match the thermal expansion of tungsten at room temperature, they are not useful for sealing to glass because their thermal expansion increases too strongly at higher temperatures.
Reed switches use a matched seal between an iron-nickel alloy (NiFe 52) and a matched glass. The glass of reed switches is usually green due to its iron content because the sealing of reed switches is done by heating with infrared radiation and this glass shows a high absorption in the near infrared.
Matched seal:
The electrical connections of high-pressure sodium vapour lamps, the light yellow lamps for street lighting, are made of niobium alloyed with 1% of zirconium. Historically, some television cathode ray tubes were made using ferric steel for the funnel and glass matched in expansion to ferric steel. The steel plate used had a diffusion layer at the surface enriched with chromium, made by heating the steel together with chromium oxide in an HCl-containing atmosphere. In contrast to copper, pure iron does not bond strongly to silicate glass. Also, technical iron contains some carbon, which forms bubbles of CO when it is sealed to glass under oxidizing conditions. Both are a major source of problems for the technical enamel coating of steel and make direct seals between iron and glass unsuitable for high-vacuum applications. The oxide layer formed on chromium-containing steel can seal vacuum-tight to glass, and the chromium reacts strongly with carbon. Silver-plated iron was used in early microwave tubes.
Matched seal:
It is possible to make matched seals between copper or austenitic steel and glass, but silicate glasses with such a high thermal expansion are especially fragile and have low chemical durability.
Molybdenum foil seal:
Another widely used method to seal through glass with a low coefficient of thermal expansion is the use of strips of thin molybdenum foil. This can be done without matched coefficients of thermal expansion, but then the edges of the strip also have to be knife sharp. The disadvantage here is that the tip of the edge, which is a local point of high tensile stress, reaches through the wall of the glass container. This can lead to small gas leaks. In the tube-to-tube knife-edge seal the edge is either outside, inside, or buried in the glass wall.
Compression seal:
Another possibility of seal construction is the compression seal. This type of glass-to-metal seal can be used to feed through the wall of a metal container. Here the wire is usually matched to the glass, and the glass sits inside the bore of a strong metal part with a higher coefficient of thermal expansion. Compression seals can withstand extremely high pressures and physical stresses such as mechanical and thermal shock.
Silver chloride:
Silver chloride, which melts at 457 °C, bonds to glass, metals and other materials and has been used for vacuum seals. Although it can be a convenient way to seal metal into glass, it is not a true glass-to-metal seal but rather a combination of a glass-to-silver-chloride and a silver-chloride-to-metal bond; an inorganic alternative to wax or glue bonds.
Design aspects:
The mechanical design of a glass-to-metal seal also has an important influence on its reliability. In practical glass-to-metal seals, cracks usually start at the edge of the interface between glass and metal, either inside or outside the glass container. If the metal and the surrounding glass are symmetric, the crack propagates at an angle away from the axis. So, if the glass envelope of the metal wire extends far enough from the wall of the container, the crack will not go through the wall of the container but will reach the surface on the same side where it started, and the seal will not leak despite the crack.
Design aspects:
Another important aspect is the wetting of the metal by the glass. If the thermal expansion of the metal is higher than the thermal expansion of the glass, as with the Houskeeper seal, a high contact angle (bad wetting) means that there is a high tensile stress in the surface of the glass near the metal. Such seals usually break inside the glass and leave a thin cover of glass on the metal. If the contact angle is low (good wetting), the surface of the glass is everywhere under compression stress, like an enamel coating. Ordinary soda-lime glass does not flow on copper at temperatures below the melting point of the copper and thus does not give a low contact angle. The solution is to cover the copper with a solder glass which has a low melting point and does flow on copper, and then to press the soft soda-lime glass onto the copper. The solder glass must have a coefficient of thermal expansion equal to or a little lower than that of the soda-lime glass. Classically, high-lead-content glasses are used, but it is also possible to substitute these with multi-component glasses, e.g. based on the system Li2O-Na2O-K2O-CaO-SiO2-B2O3-ZnO-TiO2-BaO-Al2O3.
**Second inversion**
Second inversion:
The second inversion of a chord is the voicing of a triad, seventh chord, or ninth chord in which the fifth of the chord is the bass note. In this inversion, the bass note and the root of the chord are a fourth apart, which traditionally qualifies as a dissonance. There is therefore a tendency for movement and resolution. In notation, it is referred to with a c following the chord position (e.g., Ic, Vc or IVc). In figured bass, a second-inversion triad is a 6/4 chord (as in I6/4), while a second-inversion seventh chord is a 4/3 chord.
Second inversion:
Inversions are not restricted to the same number of tones as the original chord, nor to any fixed order of tones except with regard to the interval between the root, or its octave, and the bass note; hence, great variety results.
Note that any voicing above the bass is allowed. A second inversion chord must have the fifth chord factor in the bass, but it may have any arrangement of the root and third above that, including doubled notes, compound intervals, and omission (G-C-E, G-C-E-G', G-E-G-C'-E', etc.)
Examples:
In the second inversion of a C-major triad, the bass is G — the fifth of the triad — with the root and third stacked above it, forming the intervals of a fourth and a sixth above the inverted bass of G, respectively.
In the second inversion of a G dominant seventh chord, the bass note is D, the fifth of the seventh chord.
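The two examples above can also be read as a simple rotation of the chord's tones so that the fifth comes first. The short sketch below illustrates this; the helper function is purely illustrative and not part of any music library.

```python
# Illustrative sketch: obtaining a second inversion by rotating the chord tones
# so that the fifth chord factor becomes the bass. Not a standard library API.

def invert(chord, inversion):
    """Return the chord with the given inversion (0 = root position, 2 = second)."""
    return chord[inversion:] + chord[:inversion]

c_major = ["C", "E", "G"]                  # root, third, fifth
g_dominant_seventh = ["G", "B", "D", "F"]  # root, third, fifth, seventh

print(invert(c_major, 2))             # ['G', 'C', 'E'] - fifth (G) in the bass
print(invert(g_dominant_seventh, 2))  # ['D', 'F', 'G', 'B'] - fifth (D) in the bass
```

Any voicing of the upper tones above the rotated bass note still counts as the same inversion, as noted earlier.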
Types:
There are four types of second-inversion chords: cadential, passing, auxiliary, and bass arpeggiation.
Types:
Cadential Cadential second-inversion chords are typically used in the authentic cadence I6/4–V–I, or one of its variations, such as I6/4–V7–I. In this form, the chord is sometimes referred to as a cadential 6/4 chord. The chord preceding I6/4 is most often a chord that would introduce V as a weak-to-strong progression, for example, making II–V into II–I6/4–V or making IV–V into IV–I6/4–V.
Types:
The cadential 6/4 can be analyzed in two ways: the first labels it as a second-inversion chord, while the second treats it instead as part of a horizontal progression involving voice leading above a stationary bass.
In the first designation, the cadential 6/4 chord features the progression: I6/4–V–I. Most older harmony textbooks use this label, and it can be traced back to the early 19th century.
Types:
In the second designation, this chord is not considered an inversion of a tonic triad but as a dissonance resolving to a consonant dominant harmony. This is notated as V 6–5/4–3 – I, in which the 6/4 is not the inversion of the V chord but a double appoggiatura on the V that resolves down by step to V 5/3 (that is, V6/4–V). This function is very similar to the resolution of a 4–3 suspension. Several modern textbooks prefer this conception of the cadential 6/4, which can also be traced back to the early 19th century.
Types:
Passing In a progression with a passing second-inversion chord, the bass passes between two tones a third apart (usually of the same harmonic function). When moving from I to I6, the passing chord V6/4 is placed between them – though some prefer VII6 to V6/4 – creating stepwise motion in the bass (scale degrees 1–2–3). It can also be used in the reverse direction: I6–V6/4–I. The important point is that the V6/4 chord functions as a passing chord between the two more stable chords. It occurs on the weaker beat between these two chords. The upper voices usually move in step (or remain stationary) in this progression.
Types:
Auxiliary (or pedal) In a progression with an auxiliary (or pedal) second-inversion chord, the IV6/4 chord functions as the harmonization of a neighbor note in the progression I–IV6/4–I. In this progression, the third and fifth rise a step each and then fall back, harmonizing the neighbor-note motion in the top voice.
Bass arpeggiation In this progression, the bass arpeggiates the root, third, and fifth of the chord. This is essentially a florid bass movement, but since the fifth is present in the bass, it is referred to as the bass-arpeggiation type of second inversion.
**Leukoencephalopathy with vanishing white matter**
Leukoencephalopathy with vanishing white matter:
Leukoencephalopathy with vanishing white matter (VWM disease) is an autosomal recessive neurological disease. It is caused by mutations in any of the five genes encoding subunits of the translation initiation factor eIF2B: EIF2B1, EIF2B2, EIF2B3, EIF2B4, or EIF2B5. The disease belongs to a family of conditions called the leukodystrophies.
Symptoms and signs:
Onset usually occurs in childhood; however, some adult cases have been found. Generally, physicians look for the symptoms in children. Symptoms include cerebellar ataxia, spasticity, optic atrophy, epilepsy, loss of motor functions, irritability, vomiting, and coma; even fever has been tied to VWM. The neurological disorders and symptoms which occur with VWM are not specific to particular countries; they are the same all over the world. Neurological abnormalities may not always be present in those who experience onset as adults. Symptoms generally appear in young children or infants who were previously developing fairly normally.
Causes:
VWM is a leukodystrophy with unique biochemical abnormalities. A distinctive characteristic of VWM is that only oligodendrocytes and astrocytes are negatively affected, while other glial cells and neurons seem to be unaffected. Why this is so is the central question behind VWM; the real reasons for this behavior are unknown, since these cells are in the brain and have rarely been studied. However, there is a theory which is generally accepted by most experts in the field. The main characteristic of these cells is that they synthesize a large amount of protein from a small amount of precursors and so are constantly working and under a considerable amount of stress. A mutation in eIF2B therefore slightly increases the stress these cells experience, making them more susceptible to failure under stress. The large number of oligodendrocytes which display apoptotic characteristics and express apoptotic proteins suggests a reduction in cell numbers in the early stages of the disease. Premature ovarian failure has also been associated with diminishing white matter. However, an intensive survey determined that even if an individual has premature ovarian failure, she does not necessarily have VWM.
Causes:
eIF2B's role eIF2B is the guanine nucleotide-exchange factor for eIF2, and is composed of 5 subunits. The largest subunit, eIF2B5, contains the most mutations for VWM. eIF2B is a complex heavily involved in the regulation of the translation of mRNA into proteins. eIF2B is essential for the exchange of guanosine diphosphate (GDP) for guanosine-5'-triphosphate (GTP) in the initiation of translation via eIF2, because eIF2 is regenerated through this exchange. A decrease in eIF2B activity has been correlated with the onset of VWM. A common factor among VWM patients is mutations in the five subunits of eIF2B (21 discovered thus far), expressed in over 60% of the patients. These mutations lead to decreased activity of eIF2B. The most common mutation is R113H, the mutation of arginine to histidine. The homozygous form of this mutation is the least severe form. The mutation has also been documented in rodents, but they do not acquire VWM, while humans do. Another common mutation is G584A, found in the eIF2B5 subunit. A correlation with stress has also been made, as eIF2B plays a central role in stress management – it is essential in downregulating protein synthesis under different stress conditions – and VWM patients are highly sensitive to stress. The protein eIF2B exists in all cells, and if this protein is reduced enough the cell will be negatively affected; if it is reduced to zero, the cell will die. In affected cells, the protein is reduced to about 50%, which is acceptable for functionality in most cells, but not in glial cells, since they constantly synthesize a large amount of proteins and need as many functioning proteins within them as possible. This lowers the baseline of the amount of stress a cell can handle, and thus in a stressed environment it has detrimental effects on these cells. Mutations in three of the subunits of eIF2B (2, 4, and 5) have been seen in both VWM and premature ovarian failure. The North American Cree population has also been found to have a distinctive mutation, R195H, which can lead to VWM. All patients who have been studied have only one mutation present in the gene, so the eIF2B remains partially active, which leads to VWM. If two mutations occurred, then eIF2B activity would be stopped.
Neuropathology:
Upon autopsy, the full effect of VWM has been documented. The gray matter remains normal in all characteristics while the white matter changes texture, becoming soft and gelatinous. Rarefaction of the white matter is seen through light microscopy and the small number of axons and U-fibers that were affected can also be seen. Numerous small cavities in the white matter are also apparent. The key characteristic that sets VWM apart from the other leukodystrophies is the presence of foamy oligodendrocytes. These foamy oligodendrocytes tend to have increased cytoplasmic structures, a greater number of irregular mitochondria and a higher rate of apoptosis. Abnormally shaped astrocytes with fibrile infections are very prevalent throughout the capillaries in the brain. Strangely, astrocytes are affected more than oligodendrocytes; there is even a reduction in the astrocyte progenitors, yet axons remain relatively unharmed.
Diagnosis:
Most diagnoses occur in the early years of life, around 2 to 6 years of age. There have been cases in which onset and diagnosis occurred late into adulthood. Those with onset at this time have different signs, particularly the lack of cognitive deterioration. Overall, detection of adult forms of VWM is difficult, as MRI was not a common tool when they were diagnosed. Common signs to look for include chronic progressive neurological deterioration with cerebellar ataxia, spasticity, mental decline, decline of vision, mild epilepsy, hand tremor, difficulty chewing and swallowing food, rapid deterioration and febrile infections following head trauma or fright, loss of motor functions, irritability, behavioural changes, vomiting, and even coma. Those who go into a coma, if they come out of it, usually die within a few years. The diagnosis can be difficult if the physician does not obtain an MRI.
Diagnosis:
Case report on diagnosis of adult-onset VWM The individual was examined at age 32, but he stated that he had started noting differences 5 years before. He noticed sexual impotence, social isolation, unexplained aggression and sadness, loss of motivation, inert laughs, auditory hallucinations, thought insertion, delusions, and imperative commenting. He showed only minimal physical impairments of the kind commonly seen in childhood-onset cases. However, his MRI showed characteristic signs of VWM disease.
Diagnosis:
MRI The MRI of patients with VWM shows a well-defined leukodystrophy. These MRIs display reversal of the signal intensity of the white matter in the brain. Recovery sequences and holes in the white matter are also visible. Over time, MRI is excellent at showing rarefaction and cystic degeneration of the white matter as it is replaced by fluid. To show this change, T2-weighted, proton density, and fluid-attenuated inversion recovery (FLAIR) images, which display the affected white matter as a high signal, are the best approach; T2-weighted images also display cerebrospinal fluid and rarefied/cystic white matter. To view the remaining tissue and get perspective on the damage done (also helpful in determining the rate of deterioration), T1-weighted, proton density, and FLAIR images are ideal, as they show radiating stripe patterns in the degenerating white matter. A shortcoming of MRI is that the images are difficult to interpret in infants, since the brain has not fully developed yet. Though some patterns and signs may be visible, it is still difficult to diagnose conclusively. This often leads to misdiagnosis in infants, particularly if the MRI results in equivocal patterns or because of the high water content in infants' brains. The easiest way to fix this problem is a follow-up MRI in the following weeks. A potentially similar MRI appearance, with white matter abnormalities and cystic changes, may be seen in some patients with hypomelanosis of Ito, some forms of Lowe's (oculocerebrorenal) disease, or some of the mucopolysaccharidoses.
Diagnosis:
Common misdiagnosis Often with VWM, the lack of knowledge of the disease causes a misdiagnosis among physicians. As VWM is a member of the large group of leukodystrophy syndromes, it is often misdiagnosed as another type, such as metachromatic leukodystrophy. More often than not, it is simply classified as a non-specific leukodystrophy. The characteristics of the brain upon autopsy are often very similar to atypical diffuse sclerosis, such as the presence of fibrillary astrocytes and scant sudanophilic lipids. Adult-onset VWM disease can present with psychosis and may be hard to differentiate from schizophrenia. Common misdiagnoses from misinterpreting the MRI include asphyxia, congenital infections, and metabolic diseases. Multiple sclerosis is often a misdiagnosis, but only in children, due to its neurological characteristics, onset in the early years, and MRI abnormalities. However, there are many differences between the two diseases. The glial cells show a loss of myelin, but this loss of myelin is different from that seen in other diseases where hypomyelination occurs: in VWM, the cells never produce the normal amounts, whereas in diseases like MS, the normal amounts deteriorate. Also, in MS the demyelination occurs due to inflammation, which is not the case in VWM. Cellular differences include a lower penetration of macrophages and microglia, as well as the lack of T cells and B cells in VWM. Finally, patients with MS have widespread demyelination, but those with VWM only show demyelination in a localized area. Some atypical forms of multiple sclerosis (multiple sclerosis with cavitary lesions) can be especially difficult to differentiate, but there are some clues in MRI imaging that can help.
Treatment:
There are no treatments, only precautions which can be taken, mainly reducing trauma to the head and avoiding physiological stress. Melatonin has been shown to provide cytoprotective traits to glial cells exposed to stressors such as excitotoxicity and oxidative stress. These stressors would be detrimental to cells with a genetically reduced activity of the protein eIF2B. However, research connecting these ideas has not yet been conducted.
Epidemiology:
Extensive pathological and biochemical tests were performed; however, the cause was found by studying a small population in which mutations in the eIF2B genes were found. No systematic studies have been conducted to determine the incidence around the world, but from the studies conducted thus far, it appears to be more prevalent in white populations. VWM appears to have a lower number of cases in the Middle East, and Turkey has not yet had a reported case. Its apparent prevalence is limited by physicians' ability to identify the disease. As of 2006, more than 200 people had been identified with VWM, many of whom were originally diagnosed with an unclassified leukodystrophy.
History:
The first time this disease was documented was in 1962, when Eickle studied a 36-year-old woman. Her first symptoms, gait difficulties and secondary amenorrhoea, occurred when she was 31 years old. Throughout her life, she experienced chronic episodes with extensive deterioration of her brain following minor physical trauma. Upon her death, an autopsy was performed in which the cerebral white matter displayed dispersed cystic areas. These areas were surrounded by a dense net of oligodendrocytes in which only mild fibrillary astrocytes and scant sudanophilic lipids were found. As the years progressed, more accounts of similar patients with similar symptoms were documented; however, no one classified all the accounts as the same disease. It was not until 1993–94 that Dr. Hanefeld and Dr. Schiffmann and their colleagues identified the disease as childhood-onset progressive leukoencephalopathy. They determined it was autosomal recessive. They too saw that head trauma was a trigger for the onset of VWM. The key factor which allowed them to connect these patients was the result of magnetic-resonance spectroscopy, in which the normal white matter signals were gone and often replaced with resonances indicative of lactate and glucose. They determined the cause was hypomyelination. In 1997–98, Dr. Marjo S. van der Knaap and colleagues saw the same characteristics in another set of patients, but these patients also had febrile infections. Dr. van der Knaap used MRI as well as magnetic-resonance spectroscopy and determined that ongoing cystic degeneration of the cerebral white matter and matter rarefaction was more descriptive of the disease than hypomyelination, and proposed the name vanishing white matter. The name proposed by Dr. Schiffmann in 1994, childhood ataxia with central hypomyelination (CACH), is another commonly accepted name.
**Thioacetone**
Thioacetone:
Thioacetone is an organosulfur compound belonging to the group of thioketones (-thione compounds), with the chemical formula (CH3)2CS. It is an unstable orange or brown substance that can be isolated only at low temperatures. Above −20 °C (−4 °F), thioacetone readily converts to a polymer and a trimer, trithioacetone. It has an extremely potent, unpleasant odor and is considered one of the worst-smelling chemicals known to humanity. Thioacetone was first obtained in 1889 by Baumann and Fromm, as a minor impurity in their synthesis of trithioacetone.
Preparation:
Thioacetone is usually obtained by cracking the cyclic trimer trithioacetone, [(CH3)2CS]3. The trimer is prepared by pyrolysis of allyl isopropyl sulfide or by treating acetone with hydrogen sulfide in the presence of a Lewis acid. The trimer cracks at 500–600 °C (932–1,112 °F) to give the thione.
Polymerization:
Unlike its oxygen analogue acetone, which does not polymerize easily, thioacetone spontaneously polymerizes even at very low temperatures, pure or dissolved in ether or ethylene oxide, yielding a white solid that is a varying mixture of a linear polymer ···–[C(CH3)2–S–]n–··· and the cyclic trimer trithioacetone. Infrared absorption of this product occurs mainly at 2950, 2900, 1440, 1150, 1360, and 1375 cm−1 due to the geminal methyl pairs, and at 1085 and 643 cm−1 due to the C–S bond. The 1H NMR spectrum shows a single peak at τ = 8.1. The mean molecular weight of the polymer varies from 2000 to 14000 depending on the preparation method, temperature, and presence of the thioenol tautomer. The polymer melts in the range of about 70 °C to 125 °C. Polymerization is promoted by free radicals and light. The cyclic trimer of thioacetone (trithioacetone) is a white or colorless compound with a melting point of 24 °C (75 °F), near room temperature. It also has a disagreeable odor.
Odor:
Thioacetone has an intensely foul odour. Like many low molecular weight organosulfur compounds, the smell is potent and can be detected even when highly diluted. In 1889, an attempt to distill the chemical in the German city of Freiburg was followed by cases of vomiting, nausea, and unconsciousness in an area with a radius of 0.75 kilometres (0.47 mi) around the laboratory due to the smell. In an 1890 report, British chemists at the Whitehall Soap Works in Leeds noted that dilution seemed to make the smell worse and described the smell as "fearful".
Odor:
In 1967, Esso researchers repeated the experiment of cracking trithioacetone at a laboratory south of Oxford, UK. They reported their experience as follows: Recently we found ourselves with an odour problem beyond our worst expectations. During early experiments, a stopper jumped from a bottle of residues, and, although replaced at once, resulted in an immediate complaint of nausea and sickness from colleagues working in a building two hundred yards [180 m] away. Two of our chemists who had done no more than investigate the cracking of minute amounts of trithioacetone found themselves the object of hostile stares in a restaurant and suffered the humiliation of having a waitress spray the area around them with a deodorant. The odours defied the expected effects of dilution since workers in the laboratory did not find the odours intolerable ... and genuinely denied responsibility since they were working in closed systems. To convince them otherwise, they were dispersed with other observers around the laboratory, at distances up to a quarter of a mile [0.40 km], and one drop of either acetone gem-dithiol or the mother liquors from crude trithioacetone crystallisations were placed on a watch glass in a fume cupboard. The odour was detected downwind in seconds.
Odor:
Thioacetone is sometimes considered a dangerous chemical due to its extremely foul odor and its supposed ability to render people unconscious, induce vomiting, and be detected over long distances. However, modern-day tests suggest that these risks could be somewhat exaggerated. As of 2023, at least two YouTube personalities (Nigel of "NileRed" and Zach of "LabCoatz") have published videos showing themselves smelling freshly-prepared thioacetone, both up-close and from a distance. In both cases, the individuals simply described the smell as "sulfurous", with none of the side effects (nausea, vomiting, or unconsciousness) being observed.
**Zeeman's comparison theorem**
Zeeman's comparison theorem:
In homological algebra, Zeeman's comparison theorem, introduced by Christopher Zeeman (Zeeman (1957)), gives conditions for a morphism of spectral sequences to be an isomorphism.
Illustrative example:
As an illustration, we sketch the proof of Borel's theorem, which says the cohomology ring of a classifying space is a polynomial ring. First of all, with G a Lie group and with $\mathbf{Q}$ as the coefficient ring, we have the Serre spectral sequence $E_2^{p,q}$ for the fibration $G \to EG \to BG$. We have $E_\infty \simeq \mathbf{Q}$ since $EG$ is contractible. We also have a theorem of Hopf stating that $H^*(G;\mathbf{Q}) \simeq \Lambda(u_1, \dots, u_n)$, an exterior algebra generated by finitely many homogeneous elements.
Illustrative example:
Next, we let $E(i)$ be the spectral sequence whose second page is $E(i)_2 = \Lambda(x_i) \otimes \mathbf{Q}[y_i]$ and whose nontrivial differentials on the $r$-th page are given by $d(x_i) = y_i$ and the graded Leibniz rule. Let ${}'E_r = \bigotimes_i E(i)_r$. Since cohomology commutes with tensor products as we are working over a field, ${}'E_r$ is again a spectral sequence such that ${}'E_\infty \simeq \mathbf{Q} \otimes \cdots \otimes \mathbf{Q} \simeq \mathbf{Q}$. Then we let $f \colon {}'E_r \to E_r,\ x_i \mapsto u_i$.
Illustrative example:
Note that, by definition, $f$ gives the isomorphism ${}'E_2^{0,q} \simeq E_2^{0,q} = H^q(G;\mathbf{Q})$.
A crucial point is that $f$ is a "ring homomorphism"; this rests on the technical condition that the $u_i$ are "transgressive" (cf. Hatcher for a detailed discussion of this matter). After this technical point is taken care of, we conclude that $E_2^{p,0} \simeq {}'E_2^{p,0}$ as rings by the comparison theorem; that is, $E_2^{p,0} = H^p(BG;\mathbf{Q}) \simeq \mathbf{Q}[y_1, \dots, y_n]$.
**Attention**
Attention:
Attention is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is a process of selectively concentrating on a discrete aspect of information, whether considered subjective or objective. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck, in terms of the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data (arriving at around one megabyte per second) can enter the bottleneck, leading to inattentional blindness. Attention remains a crucial area of investigation within education, psychology, neuroscience, cognitive neuroscience, and neuropsychology. Areas of active investigation involve determining the source of the sensory cues and signals that generate attention, the effects of these sensory cues and signals on the tuning properties of sensory neurons, and the relationship between attention and other behavioral and cognitive processes, which may include working memory and psychological vigilance. A relatively new body of research, which expands upon earlier research within psychopathology, is investigating the diagnostic symptoms associated with traumatic brain injury and its effects on attention. Attention also varies across cultures. The relationships between attention and consciousness are complex enough that they have warranted perennial philosophical exploration. Such exploration is both ancient and continually relevant, as it can have effects in fields ranging from mental health and the study of disorders of consciousness to artificial intelligence and its domains of research.
Contemporary definition and research:
Prior to the founding of psychology as a scientific discipline, attention was studied in the field of philosophy. Thus, many of the discoveries in the field of attention were made by philosophers. Psychologist John B. Watson calls Juan Luis Vives the father of modern psychology because, in his book De Anima et Vita (The Soul and Life), he was the first to recognize the importance of empirical investigation. In his work on memory, Vives found that the more closely one attends to stimuli, the better they will be retained.
Contemporary definition and research:
By the 1990s, psychologists began using positron emission tomography (PET) and later functional magnetic resonance imaging (fMRI) to image the brain while monitoring tasks involving attention. Considering this expensive equipment was generally only available in hospitals, psychologists sought cooperation with neurologists. Psychologist Michael Posner (then already renowned for his influential work on visual selective attention) and neurologist Marcus Raichle pioneered brain imaging studies of selective attention. Their results soon sparked interest from the neuroscience community, which until then had simply been focused on monkey brains. With the development of these technological innovations, neuroscientists became interested in this type of research that combines sophisticated experimental paradigms from cognitive psychology with these new brain imaging techniques. Although the older technique of electroencephalography (EEG) had long been used to study the brain activity underlying selective attention by cognitive psychophysiologists, the ability of the newer techniques to actually measure precisely localized activity inside the brain generated renewed interest by a wider community of researchers. A growing body of such neuroimaging research has identified a frontoparietal attention network which appears to be responsible for control of attention.
Selective and visual:
In cognitive psychology there are at least two models which describe how visual attention operates. These models may be considered metaphors which are used to describe internal processes and to generate hypotheses that are falsifiable. Generally speaking, visual attention is thought to operate as a two-stage process. In the first stage, attention is distributed uniformly over the external visual scene and processing of information is performed in parallel. In the second stage, attention is concentrated to a specific area of the visual scene (i.e., it is focused), and processing is performed in a serial fashion.
Selective and visual:
The first of these models to appear in the literature is the spotlight model. The term "spotlight" was inspired by the work of William James, who described attention as having a focus, a margin, and a fringe. The focus is an area that extracts information from the visual scene with high resolution; its geometric center is where visual attention is directed. Surrounding the focus is the fringe of attention, which extracts information in a much cruder fashion (i.e., low resolution). This fringe extends out to a specified area, and the cut-off is called the margin.
Selective and visual:
The second model is called the zoom-lens model and was first introduced in 1986. This model inherits all properties of the spotlight model (i.e., the focus, the fringe, and the margin), but it has the added property of changing in size. This size-change mechanism was inspired by the zoom lens one might find on a camera, and any change in size can be described by a trade-off in the efficiency of processing. The zoom-lens of attention can be described in terms of an inverse trade-off between the size of the focus and the efficiency of processing: because attentional resources are assumed to be fixed, it follows that the larger the focus is, the slower the processing of that region of the visual scene will be, since this fixed resource will be distributed over a larger area. It is thought that the focus of attention can subtend a minimum of 1° of visual angle; however, the maximum size has not yet been determined.
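The inverse trade-off can be made concrete with a toy calculation. The fixed resource pool and the numbers below are purely illustrative assumptions, not values from the attention literature.

```python
# Toy illustration of the zoom-lens trade-off: with a fixed pool of attentional
# resources, enlarging the attended region dilutes processing per unit area.
import math

FIXED_RESOURCE = 1.0  # arbitrary units; an assumption made for this sketch

def processing_efficiency(focus_diameter_deg):
    """Relative processing efficiency per unit area of the attended region."""
    area = math.pi * (focus_diameter_deg / 2) ** 2
    return FIXED_RESOURCE / area

for diameter in (1, 2, 4, 8):  # degrees of visual angle
    print(f"{diameter:>2} deg focus -> relative efficiency {processing_efficiency(diameter):.3f}")
```

Doubling the diameter of the focus quadruples its area, so per-area efficiency drops fourfold in this sketch, which is the qualitative behavior the zoom-lens model describes.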
Selective and visual:
A significant debate emerged in the last decade of the 20th century, in which Treisman's 1993 Feature Integration Theory (FIT) was compared to Duncan and Humphrey's 1989 attentional engagement theory (AET). FIT posits that "objects are retrieved from scenes by means of selective spatial attention that picks out objects' features, forms feature maps, and integrates those features that are found at the same location into forming objects." Treisman's theory is based on a two-stage process to help solve the binding problem of attention. These two stages are the preattentive stage and the focused attention stage.
Selective and visual:
Preattentive Stage: The unconscious detection and separation of the features of an item (color, shape, size). Treisman suggests that this happens early in cognitive processing and that individuals are not aware of it, because separating a whole into its parts is counter-intuitive. Evidence for the preattentive stage comes from the occurrence of illusory conjunctions.
Selective and visual:
Focused Attention Stage: The combining of all feature identifiers to perceive all parts as one whole. This is possible through prior knowledge and cognitive mapping. When an item is seen within a known location and has features that people have knowledge of, prior knowledge helps bring the features together to make sense of what is perceived. The case of R.M.'s damage to his parietal lobe, also known as Balint's syndrome, illustrates the role of focused attention in the combination of features. Through sequencing these steps, parallel and serial search is better exhibited through the formation of conjunctions of objects. Conjunctive searches, according to Treisman, are done through both stages in order to create selective and focused attention on an object, though Duncan and Humphrey would disagree. Duncan and Humphrey's AET understanding of attention maintained that "there is an initial pre-attentive parallel phase of perceptual segmentation and analysis that encompasses all of the visual items present in a scene. At this phase, descriptions of the objects in a visual scene are generated into structural units; the outcome of this parallel phase is a multiple-spatial-scale structured representation. Selective attention intervenes after this stage to select information that will be entered into visual short-term memory." The contrast of the two theories placed a new emphasis on the separation of visual attention tasks alone and those mediated by supplementary cognitive processes. As Rastophopoulos summarizes the debate: "Against Treisman's FIT, which posits spatial attention as a necessary condition for detection of objects, Humphreys argues that visual elements are encoded and bound together in an initial parallel phase without focal attention, and that attention serves to select among the objects that result from this initial grouping."
Neuropsychological model:
In the twentieth century, the pioneering research of Lev Vygotsky and Alexander Luria led to the three-part model of neuropsychology defining the working brain as being represented by three co-active processes: Attention, Memory, and Activation. A.R. Luria published his well-known book The Working Brain in 1973 as a concise adjunct volume to his previous 1962 book Higher Cortical Functions in Man. In this volume, Luria summarized his three-part global theory of the working brain as being composed of three constantly co-active processes which he described as (1) the Attention system, (2) the Mnestic (memory) system, and (3) the Cortical activation system. The two books together are considered by Homskaya's account as "among Luria's major works in neuropsychology, most fully reflecting all the aspects (theoretical, clinical, experimental) of this new discipline." The product of the combined research of Vygotsky and Luria has determined a large part of the contemporary understanding and definition of attention as it is understood at the start of the 21st century.
Multitasking and divided:
Multitasking can be defined as the attempt to perform two or more tasks simultaneously; however, research shows that when multitasking, people make more mistakes or perform their tasks more slowly. Attention must be divided among all of the component tasks to perform them. In divided attention, individuals attend to multiple sources of information at once or perform more than one task at the same time. Older research involved looking at the limits of people performing simultaneous tasks like reading stories while listening and writing something else, or listening to two separate messages through different ears (i.e., dichotic listening). Generally, classical research into attention investigated the ability of people to learn new information when there were multiple tasks to be performed, or to probe the limits of our perception (cf. Donald Broadbent). There is also older literature on people's performance on multiple tasks performed simultaneously, such as driving a car while tuning a radio or driving while being on the phone. The vast majority of current research on human multitasking is based on performance of two tasks done simultaneously, usually involving driving while performing another task, such as texting, eating, or even speaking to passengers in the vehicle, or with a friend over a cellphone. This research reveals that the human attentional system has limits for what it can process: driving performance is worse while engaged in other tasks; drivers make more mistakes, brake harder and later, get into more accidents, veer into other lanes, and/or are less aware of their surroundings when engaged in the previously discussed tasks. There has been little difference found between speaking on a hands-free cell phone and a hand-held cell phone, which suggests that it is the strain on the attentional system that causes problems, rather than what the driver is doing with his or her hands. While speaking with a passenger is as cognitively demanding as speaking with a friend over the phone, passengers are able to adjust the conversation based upon the needs of the driver. For example, if traffic intensifies, a passenger may stop talking to allow the driver to navigate the increasingly difficult roadway; a conversation partner over a phone would not be aware of the change in environment.
Multitasking and divided:
There have been multiple theories regarding divided attention. One, conceived by Kahneman, explains that there is a single pool of attentional resources that can be freely divided among multiple tasks. This model seems oversimplified, however, due to the different modalities (e.g., visual, auditory, verbal) that are perceived. When the two simultaneous tasks use the same modality, such as listening to a radio station and writing a paper, it is much more difficult to concentrate on both because the tasks are likely to interfere with each other. The specific modality model was theorized by Navon and Gopher in 1979. However, more recent research using well controlled dual-task paradigms points at the importance of tasks.As an alternative, resource theory has been proposed as a more accurate metaphor for explaining divided attention on complex tasks. Resource theory states that as each complex task is automatized, performing that task requires less of the individual's limited-capacity attentional resources. Other variables play a part in our ability to pay attention to and concentrate on many tasks at once. These include, but are not limited to, anxiety, arousal, task difficulty, and skills.
Simultaneous:
Simultaneous attention is a type of attention, classified by attending to multiple events at the same time. Simultaneous attention is demonstrated by children in Indigenous communities, who learn through this type of attention to their surroundings. Simultaneous attention is present in the ways in which children of indigenous backgrounds interact both with their surroundings and with other individuals. Simultaneous attention requires focus on multiple simultaneous activities or occurrences. This differs from multitasking, which is characterized by alternating attention and focus between multiple activities, or halting one activity before switching to the next.
Simultaneous:
Simultaneous attention involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. Indigenous heritage toddlers and caregivers in San Pedro were observed to frequently coordinate their activities with other members of a group in ways parallel to a model of simultaneous attention, whereas middle-class European-descent families in the U.S. would move back and forth between events. Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially wide, keen observers. This points to a strong cultural difference in attention management.
Alternative topics and discussions:
Overt and covert orienting Attention may be differentiated into "overt" versus "covert" orienting. Overt orienting is the act of selectively attending to an item or location over others by moving the eyes to point in that direction. Overt orienting can be directly observed in the form of eye movements. Although overt eye movements are quite common, there is a distinction that can be made between two types of eye movements: reflexive and controlled. Reflexive movements are commanded by the superior colliculus of the midbrain. These movements are fast and are activated by the sudden appearance of stimuli. In contrast, controlled eye movements are commanded by areas in the frontal lobe. These movements are slow and voluntary.
Alternative topics and discussions:
Covert orienting is the act of mentally shifting one's focus without moving one's eyes. Simply, it is a change in attention that is not attributable to overt eye movements. Covert orienting has the potential to affect the output of perceptual processes by governing attention to particular items or locations (for example, the activity of a V4 neuron whose receptive field lies on an attended stimulus will be enhanced by covert attention), but it does not influence the information that is processed by the senses. Researchers often use "filtering" tasks to study the role of covert attention in selecting information. These tasks often require participants to observe a number of stimuli, but attend to only one. The current view is that visual covert attention is a mechanism for quickly scanning the field of view for interesting locations. This shift in covert attention is linked to eye movement circuitry that sets up a slower saccade to that location. There are studies that suggest the mechanisms of overt and covert orienting may not be controlled separately and independently as previously believed. Central mechanisms that may control covert orienting, such as the parietal lobe, also receive input from subcortical centres involved in overt orienting. In support of this, general theories of attention actively assume that bottom-up (reflexive) processes and top-down (voluntary) processes converge on a common neural architecture, in that they control both covert and overt attentional systems. For example, if individuals attend to the right-hand corner of the field of view, movement of the eyes in that direction may have to be actively suppressed.
Alternative topics and discussions:
Exogenous and endogenous orienting Orienting attention is vital and can be controlled through external (exogenous) or internal (endogenous) processes. However, comparing these two processes is challenging because external signals do not operate completely exogenously, but will only summon attention and eye movements if they are important to the subject. Exogenous (from Greek exo, meaning "outside", and genein, meaning "to produce") orienting is frequently described as being under the control of a stimulus. Exogenous orienting is considered to be reflexive and automatic and is caused by a sudden change in the periphery. This often results in a reflexive saccade. Since exogenous cues are typically presented in the periphery, they are referred to as peripheral cues. Exogenous orienting can even be observed when individuals are aware that the cue will not relay reliable, accurate information about where a target is going to occur. This means that the mere presence of an exogenous cue will affect the response to other stimuli that are subsequently presented in the cue's previous location. Several studies have investigated the influence of valid and invalid cues. They concluded that valid peripheral cues benefit performance, for instance when the peripheral cues are brief flashes at the relevant location before the onset of a visual stimulus. Posner and Cohen (1984) noted that a reversal of this benefit takes place when the interval between the onset of the cue and the onset of the target is longer than about 300 ms. The phenomenon of valid cues producing longer reaction times than invalid cues is called inhibition of return.
Alternative topics and discussions:
Endogenous (from Greek endo, meaning "within" or "internally") orienting is the intentional allocation of attentional resources to a predetermined location or space. Simply stated, endogenous orienting occurs when attention is oriented according to an observer's goals or desires, allowing the focus of attention to be manipulated by the demands of a task. In order to have an effect, endogenous cues must be processed by the observer and acted upon purposefully. These cues are frequently referred to as central cues. This is because they are typically presented at the center of a display, where an observer's eyes are likely to be fixated. Central cues, such as an arrow or digit presented at fixation, tell observers to attend to a specific location. When examining differences between exogenous and endogenous orienting, some researchers suggest that there are four differences between the two kinds of cues: exogenous orienting is less affected by cognitive load than endogenous orienting; observers are able to ignore endogenous cues but not exogenous cues; exogenous cues have bigger effects than endogenous cues; and expectancies about cue validity and predictive value affect endogenous orienting more than exogenous orienting. There exist both overlaps and differences in the areas of the brain that are responsible for endogenous and exogenous orienting. Another approach to this discussion has been covered under the topic heading of "bottom-up" versus "top-down" orientations to attention. Researchers of this school have described two different aspects of how the mind focuses attention to items present in the environment. The first aspect is called bottom-up processing, also known as stimulus-driven attention or exogenous attention. These describe attentional processing which is driven by the properties of the objects themselves. Some processes, such as motion or a sudden loud noise, can attract our attention in a pre-conscious, or non-volitional way. We attend to them whether we want to or not. These aspects of attention are thought to involve parietal and temporal cortices, as well as the brainstem. More recent experimental evidence supports the idea that the primary visual cortex creates a bottom-up saliency map, which is received by the superior colliculus in the midbrain area to guide attention or gaze shifts.
Alternative topics and discussions:
The second aspect is called top-down processing, also known as goal-driven, endogenous attention, attentional control or executive attention. This aspect of our attentional orienting is under the control of the person who is attending. It is mediated primarily by the frontal cortex and basal ganglia as one of the executive functions. Research has shown that it is related to other aspects of the executive functions, such as working memory, and conflict resolution and inhibition.
Alternative topics and discussions:
Influence of processing load A "hugely influential" theory regarding selective attention is the perceptual load theory, which states that there are two mechanisms that affect attention: cognitive and perceptual. The perceptual mechanism concerns the subject's ability to perceive or ignore stimuli, both task-related and non-task-related. Studies show that if there are many stimuli present (especially if they are task-related), it is much easier to ignore the non-task-related stimuli, but if there are few stimuli the mind will perceive the irrelevant stimuli as well as the relevant. The cognitive mechanism refers to the actual processing of the stimuli. Studies regarding this showed that the ability to process stimuli decreased with age, meaning that younger people were able to perceive more stimuli and fully process them, but were likely to process both relevant and irrelevant information, while older people could process fewer stimuli, but usually processed only relevant information. Some people can process multiple stimuli; for example, trained Morse code operators have been able to copy 100% of a message while carrying on a meaningful conversation. This relies on the reflexive response due to "overlearning" the skill of Morse code reception/detection/transcription, so that it becomes an automatic function requiring no specific attention to perform. This overtraining of the brain comes as the "practice of a skill [surpasses] 100% accuracy," allowing the activity to become automatic while the mind has room to process other actions simultaneously. The perceptual load theory assumes that attentional resources have a limited capacity and that all of that capacity is allocated on every task. A limitation of this account is that performance is usually assessed only in terms of accuracy and reaction time (RT); because the literature rarely measures how attention is distributed over time and space, analyses concentrate on how accurately and how quickly a task is completed, which gives only a partial picture of how multiple stimuli are perceived and processed.
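A toy capacity model can make the perceptual side of load theory concrete. The fixed capacity of eight "units" and the spill-over rule below are purely illustrative assumptions, not parameters from the literature: relevant items consume capacity first, and whatever is left over is available to process irrelevant stimuli.

```python
def distractor_processing(n_relevant: int, capacity: int = 8) -> float:
    """Toy illustration of perceptual load theory (assumed capacity units).

    Relevant (task) items consume perceptual capacity first; any spare
    capacity "spills over" onto irrelevant items.  Low load -> distractors
    get processed; high load -> they are effectively filtered out.
    """
    spare = max(capacity - n_relevant, 0)
    return spare / capacity      # fraction of capacity left for distractors

if __name__ == "__main__":
    for load in (1, 4, 8, 12):
        print(f"{load} relevant items -> "
              f"{distractor_processing(load):.2f} of capacity free for distractors")
```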
Alternative topics and discussions:
Clinical model Attention is best described as the sustained focus of cognitive resources on information while filtering or ignoring extraneous information. Attention is a very basic function that often is a precursor to all other neurological/cognitive functions. As is frequently the case, clinical models of attention differ from investigation models. One of the most used models for the evaluation of attention in patients with very different neurologic pathologies is the model of Sohlberg and Mateer. This hierarchic model is based on the recovery of attention processes in brain-damaged patients after coma. Five different kinds of activities of growing difficulty are described in the model, corresponding to the activities those patients could perform as their recovery process advanced.
Alternative topics and discussions:
Focused attention: The ability to respond discretely to specific sensory stimuli.
Sustained attention (vigilance and concentration): The ability to maintain a consistent behavioral response during continuous and repetitive activity.
Selective attention: The ability to maintain a behavioral or cognitive set in the face of distracting or competing stimuli. Therefore, it incorporates the notion of "freedom from distractibility."
Alternating attention: The mental flexibility that allows individuals to shift their focus of attention and move between tasks having different cognitive requirements.
Alternative topics and discussions:
Divided attention: This refers to the ability to respond simultaneously to multiple tasks or multiple task demands. This model has been shown to be very useful in evaluating attention in very different pathologies, correlates strongly with daily difficulties, and is especially helpful in designing stimulation programs such as attention process training, a rehabilitation program for neurological patients developed by the same authors.
Other descriptors for types of attention:
Mindfulness: Mindfulness has been conceptualized as a clinical model of attention. Mindfulness practices are clinical interventions that emphasize training attention functions.
Other descriptors for types of attention:
Vigilant attention: Remaining focused on a non-arousing stimulus or uninteresting task for a sustained period is far more difficult than attending to arousing stimuli and interesting tasks, and requires a specific type of attention called 'vigilant attention'. Thereby, vigilant attention is the ability to give sustained attention to a stimulus or task that might ordinarily be insufficiently engaging to prevent our attention being distracted by other stimuli or tasks.
Other descriptors for types of attention:
Neural correlates Most experiments show that one neural correlate of attention is enhanced firing. If a neuron has a certain response to a stimulus when the animal is not attending to the stimulus, then when the animal does attend to the stimulus, the neuron's response will be enhanced even if the physical characteristics of the stimulus remain the same.
In a 2007 review, Knudsen describes a more general model which identifies four core processes of attention, with working memory at the center: Working memory temporarily stores information for detailed analysis.
Competitive selection is the process that determines which information gains access to working memory.
Other descriptors for types of attention:
Through top-down sensitivity control, higher cognitive processes can regulate signal intensity in information channels that compete for access to working memory, and thus give them an advantage in the process of competitive selection. Through top-down sensitivity control, the momentary content of working memory can influence the selection of new information, and thus mediate voluntary control of attention in a recurrent loop (endogenous attention).
Other descriptors for types of attention:
Bottom-up saliency filters automatically enhance the response to infrequent stimuli, or stimuli of instinctive or learned biological relevance (exogenous attention). Neurally, at different hierarchical levels, spatial maps can enhance or inhibit activity in sensory areas, and induce orienting behaviors like eye movement.
At the top of the hierarchy, the frontal eye fields (FEF) and the dorsolateral prefrontal cortex contain a retinocentric spatial map. Microstimulation in the FEF induces monkeys to make a saccade to the relevant location. Stimulation at levels too low to induce a saccade will nonetheless enhance cortical responses to stimuli located in the relevant area.
At the next lower level, a variety of spatial maps are found in the parietal cortex. In particular, the lateral intraparietal area (LIP) contains a saliency map and is interconnected both with the FEF and with sensory areas.
In humans and monkeys, exogenous attentional guidance is provided by a bottom-up saliency map in the primary visual cortex. In lower vertebrates, this saliency map is more likely to reside in the superior colliculus (optic tectum).
Certain automatic responses that influence attention, like orienting to a highly salient stimulus, are mediated subcortically by the superior colliculi.
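Knudsen's four-process account lends itself to a schematic sketch. The following is a minimal illustration with made-up channel names and numbers: each information channel's strength is scaled by top-down sensitivity control and boosted by bottom-up salience, and competitive (winner-take-all) selection then determines which channel gains access to working memory.

```python
def select_for_working_memory(signals: dict[str, float],
                              salience: dict[str, float],
                              top_down_gain: dict[str, float]) -> str:
    """Toy sketch of Knudsen's attention framework (illustrative only).

    Effective strength = raw signal x top-down gain + bottom-up salience;
    the strongest channel wins competitive selection and enters working memory.
    """
    strength = {
        ch: signals[ch] * top_down_gain.get(ch, 1.0) + salience.get(ch, 0.0)
        for ch in signals
    }
    return max(strength, key=strength.get)   # competitive selection

if __name__ == "__main__":
    signals = {"left_visual_field": 0.6, "right_visual_field": 0.5, "auditory": 0.4}
    salience = {"auditory": 0.3}              # e.g. a sudden, infrequent sound
    goal_gain = {"right_visual_field": 1.5}   # voluntary (endogenous) bias
    print(select_for_working_memory(signals, salience, goal_gain))
```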
Other descriptors for types of attention:
At the neural network level, it is thought that processes like lateral inhibition mediate the process of competitive selection. In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity. Another commonly used model for the attention system has been put forth by researchers such as Michael Posner. He divides attention into three functional components: alerting, orienting, and executive attention, which can also interact and influence each other.
Other descriptors for types of attention:
Alerting is the process involved in becoming and staying attentive toward the surroundings. It appears to exist in the frontal and parietal lobes of the right hemisphere, and is modulated by norepinephrine.
Orienting is the directing of attention to a specific stimulus.
Other descriptors for types of attention:
Executive attention is used when there is a conflict between multiple attention cues. It is essentially the same as the central executive in Baddeley's model of working memory. The Eriksen flanker task has shown that the executive control of attention may take place in the anterior cingulate cortex.
Cultural variation Children appear to develop patterns of attention related to the cultural practices of their families, communities, and the institutions in which they participate. In 1955, Jules Henry suggested that there are societal differences in sensitivity to signals from many ongoing sources that call for the awareness of several levels of attention simultaneously. He tied his speculation to ethnographic observations of communities in which children are involved in a complex social community with multiple relationships. Many Indigenous children in the Americas predominantly learn by observing and pitching in. There are several studies to support that the use of keen attention towards learning is much more common in Indigenous communities of North and Central America than in a middle-class European-American setting. This is a direct result of the Learning by Observing and Pitching In model.
Other descriptors for types of attention:
Keen attention is both a requirement and a result of learning by observing and pitching in. Incorporating the children in the community gives them the opportunity to keenly observe and contribute to activities that were not directed towards them. It can be seen from different Indigenous communities and cultures, such as the Mayans of San Pedro, that children can simultaneously attend to multiple events. Most Maya children have learned to pay attention to several events at once in order to make useful observations. One example is simultaneous attention, which involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. San Pedro toddlers and caregivers frequently coordinated their activities with other members of a group in multiway engagements rather than in a dyadic fashion. Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially keen observers. This learning by observing and pitching-in model requires active levels of attention management. The child is present while caretakers engage in daily activities and responsibilities such as weaving, farming, and other skills necessary for survival. Being present allows the child to focus their attention on the actions being performed by their parents, elders, and/or older siblings. In order to learn in this way, keen attention and focus are required. Eventually the child is expected to be able to perform these skills themselves.
Other descriptors for types of attention:
Modelling In the domain of computer vision, efforts have been made to model the mechanism of human attention, especially the bottom-up attentional mechanism and its semantic significance in the classification of video contents. Both spatial attention and temporal attention have been incorporated in such classification efforts.
Other descriptors for types of attention:
Generally speaking, there are two kinds of models to mimic the bottom-up salience mechanism in static images. One way is based on spatial contrast analysis. For example, a center–surround mechanism has been used to define salience across scales, inspired by the putative neural mechanism. It has also been hypothesized that some visual inputs are intrinsically salient in certain background contexts and that these are actually task-independent. This model has established itself as the exemplar for salience detection and is consistently used for comparison in the literature. The other way is based on frequency-domain analysis. This approach was first proposed by Hou et al. and is called the spectral residual (SR) method. The PQFT method was introduced later. Both SR and PQFT only use the phase information. In 2012, the HFT method was introduced, which makes use of both the amplitude and the phase information. The Neural Abstraction Pyramid is a hierarchical recurrent convolutional model, which incorporates bottom-up and top-down flow of information to iteratively interpret images.
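As a rough sketch of the frequency-domain idea, a spectral-residual-style saliency map can be written in a few lines of NumPy/SciPy. The filter sizes, the smoothing width, and the use of log1p to avoid log(0) are illustrative choices rather than the published parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(image: np.ndarray) -> np.ndarray:
    """Sketch of a spectral residual (SR) saliency map for a 2-D grayscale image."""
    spectrum = np.fft.fft2(image.astype(float))
    log_amplitude = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    # Spectral residual: log amplitude minus its local average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    # Recombine with the original phase, return to the image domain,
    # and smooth the squared magnitude to obtain the saliency map.
    recombined = np.fft.ifft2(np.exp(residual + 1j * phase))
    saliency = gaussian_filter(np.abs(recombined) ** 2, sigma=2.5)
    return saliency / saliency.max()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    img[20:28, 30:38] += 2.0   # an "odd" bright patch should dominate the map
    print(spectral_residual_saliency(img).argmax())
```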
Other descriptors for types of attention:
Hemispatial neglect Hemispatial neglect, also called unilateral neglect, often occurs when people have damage to their right hemisphere. This damage often leads to a tendency to ignore the left side of one's body or even the left side of an object that can be seen. Damage to the left side of the brain (the left hemisphere) rarely yields significant neglect of the right side of the body or of objects in the person's local environment. The effects of spatial neglect, however, may vary and differ depending on what area of the brain was damaged. Damage to different neural substrates can result in different types of neglect. Attention disorders (lateralized and nonlateralized) may also contribute to the symptoms and effects. Much research has asserted that damage to gray matter within the brain results in spatial neglect. New technology has yielded more information, such that there is a large, distributed network of frontal, parietal, temporal, and subcortical brain areas that have been tied to neglect. This network can be related to other research as well; the dorsal attention network is tied to spatial orienting. The effect of damage to this network may result in patients neglecting their left side when distracted by their right side or by an object on their right side.
Other descriptors for types of attention:
Attention in social contexts Social attention is one special form of attention that involves the allocation of limited processing resources in a social context. Previous studies on social attention often regard how attention is directed toward socially relevant stimuli such as faces and gaze directions of other individuals. In contrast to attending-to-others, a different line of research has shown that self-related information such as one's own face and name automatically captures attention and is preferentially processed compared to other-related information. These contrasting effects between attending-to-others and attending-to-self prompt a synthetic view in a recent Opinion article proposing that social attention operates at two polarizing states: in one extreme, individuals tend to attend to the self and prioritize self-related information over others', and, in the other extreme, attention is allocated to other individuals to infer their intentions and desires. Attending-to-self and attending-to-others mark the two ends of an otherwise continuous spectrum of social attention. For a given behavioral context, the mechanisms underlying these two polarities might interact and compete with each other in order to determine a saliency map of social attention that guides our behaviors. An imbalanced competition between these two behavioral and cognitive processes may be associated with cognitive disorders and neurological symptoms such as autism spectrum disorders and Williams syndrome.
Other descriptors for types of attention:
Distracting factors According to Daniel Goleman's book, Focus: The Hidden Driver of Excellence, there are two types of distracting factors affecting focus – sensory and emotional.
A sensory distracting factor would be, for example, the white field surrounding this text, which a reader neglects while focusing on the words.
Other descriptors for types of attention:
An emotional distracting factor would be when someone is focused on answering an email, and somebody shouts their name. It would be almost impossible to neglect the voice speaking it; attention is immediately directed toward the source. Positive emotions have also been found to affect attention. Induction of happiness has led to increased response times and an increase in inaccurate responses in the face of irrelevant stimuli. Two possible theories have been proposed as to why emotions might make one more susceptible to distracting stimuli. One is that emotions take up too much of one's cognitive resources, making it harder to control one's focus of attention. The other is that emotions make it harder to filter out distractions, specifically with positive emotions due to a feeling of security. Another distracting factor to attention processes is insufficient sleep. Sleep deprivation is found to impair cognition, specifically performance in divided attention. Divided attention is possibly linked with circadian processes.
Other descriptors for types of attention:
Failure to attend Inattentional blindness was first introduced in 1998 by Arien Mack and Irvin Rock. Their studies show that when people are focused on specific stimuli, they often miss other stimuli that are clearly present. Though actual blindness is not occurring here, the blindness that happens is due to the perceptual load of what is being attended to. Building on the experiment performed by Mack and Rock, Ula Finch and Nilli Lavie tested participants with a perceptual task. They presented subjects with a cross, one arm being longer than the other, for 5 trials. On the sixth trial, a white square was added to the top left of the screen. Out of 10 participants, only 2 (20%) actually saw the square. This suggests that the more attention was focused on the length of the crossed arms, the more likely someone was to miss altogether an object that was in plain sight. Change blindness was first tested by Rensink and coworkers in 1997. Their studies show that people have difficulty detecting changes from scene to scene due to the intense focus on one thing, or lack of attention overall. This was tested by Rensink through a presentation of a picture, then a blank field, and then the same picture but with an item missing. The results showed that the pictures had to be alternated back and forth a good number of times for participants to notice the difference. This idea is well illustrated by films that have continuity errors: many people do not pick up on differences even when the changes are significant.
History of the study:
Philosophical period Psychologist Daniel E. Berlyne credits the first extended treatment of attention to philosopher Nicolas Malebranche in his work "The Search After Truth". "Malebranche held that we have access to ideas, or mental representations of the external world, but not direct access to the world itself." Thus in order to keep these ideas organized, attention is necessary. Otherwise we will confuse these ideas. Malebranche writes in "The Search After Truth", "because it often happens that the understanding has only confused and imperfect perceptions of things, it is truly a cause of our errors.... It is therefore necessary to look for means to keep our perceptions from being confused and imperfect. And, because, as everyone knows, there is nothing that makes them clearer and more distinct than attentiveness, we must try to find the means to become more attentive than we are". According to Malebranche, attention is crucial to understanding and keeping thoughts organized.
History of the study:
Philosopher Gottfried Wilhelm Leibniz introduced the concept of apperception to this philosophical approach to attention. Apperception refers to "the process by which new experience is assimilated to and transformed by the residuum of past experience of an individual to form a new whole." Apperception is required for a perceived event to become a conscious event. Leibniz emphasized a reflexive involuntary view of attention known as exogenous orienting. However, there is also endogenous orienting, which is voluntary and directed attention. Philosopher Johann Friedrich Herbart agreed with Leibniz's view of apperception; however, he expounded on it by saying that new experiences had to be tied to ones already existing in the mind. Herbart was also the first person to stress the importance of applying mathematical modeling to the study of psychology. Throughout the philosophical era, various thinkers made significant contributions to the field of attention studies, beginning with research on the extent of attention and how attention is directed. In the beginning of the 19th century, it was thought that people were not able to attend to more than one stimulus at a time. However, with research contributions by Sir William Hamilton, 9th Baronet, this view was changed. Hamilton proposed a view of attention that likened its capacity to holding marbles: you can only hold a certain number of marbles at a time before they start to spill over. His view states that we can attend to more than one stimulus at once. William Stanley Jevons later expanded this view and stated that we can attend to up to four items at a time.
History of the study:
1860–1909 This period of attention research took the focus from conceptual findings to experimental testing. It also involved psychophysical methods that allowed measurement of the relation between physical stimulus properties and the psychological perceptions of them. This period covers the development of attentional research from the founding of psychology to 1909.
History of the study:
Wilhelm Wundt introduced the study of attention to the field of psychology. Wundt measured mental processing speed by likening it to differences in stargazing measurements. Astronomers at the time would measure the time it took for stars to travel, and when different astronomers recorded these times there were consistent personal differences in their readings. These different readings resulted in different reports from each astronomer. To correct for this, a personal equation was developed. Wundt applied this to mental processing speed. Wundt realized that the time it takes to see the stimulus of the star and write down the time was being called an "observation error" but actually was the time it takes to switch voluntarily one's attention from one stimulus to another. Wundt called his school of psychology voluntarism. It was his belief that psychological processes can only be understood in terms of goals and consequences.
History of the study:
Franciscus Donders used mental chronometry to study attention and it was considered a major field of intellectual inquiry by authors such as Sigmund Freud. Donders and his students conducted the first detailed investigations of the speed of mental processes. Donders measured the time required to identify a stimulus and to select a motor response. This was the time difference between stimulus discrimination and response initiation. Donders also formalized the subtractive method which states that the time for a particular process can be estimated by adding that process to a task and taking the difference in reaction time between the two tasks. He also differentiated between three types of reactions: simple reaction, choice reaction, and go/no-go reaction.
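Donders' subtractive logic can be shown with a short worked example. The reaction times below are hypothetical round numbers chosen only to illustrate the subtraction; they are not Donders' measurements.

```latex
% Illustrative (made-up) reaction times for Donders' three reaction types:
%   simple reaction   (respond to any stimulus)            RT_simple    = 200 ms
%   go/no-go reaction (respond to one stimulus type only)  RT_go/no-go  = 250 ms
%   choice reaction   (different response per stimulus)    RT_choice    = 285 ms
\begin{align*}
  t_{\text{stimulus discrimination}} &= RT_{\text{go/no-go}} - RT_{\text{simple}}
                                      = 250 - 200 = 50\ \text{ms},\\
  t_{\text{response selection}}      &= RT_{\text{choice}} - RT_{\text{go/no-go}}
                                      = 285 - 250 = 35\ \text{ms}.
\end{align*}
```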
History of the study:
Hermann von Helmholtz also contributed to the field of attention relating to the extent of attention. Von Helmholtz stated that it is possible to focus on one stimulus and still perceive or ignore others. An example of this is being able to focus on the letter u in the word house while still perceiving the letters h, o, s, and e.
History of the study:
One major debate in this period was whether it was possible to attend to two things at once (split attention). Walter Benjamin described this experience as "reception in a state of distraction." This disagreement could only be resolved through experimentation.
History of the study:
In 1890, William James, in his textbook The Principles of Psychology, remarked: Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.
History of the study:
James differentiated between sensorial attention and intellectual attention. Sensorial attention is when attention is directed to objects of sense, stimuli that are physically present. Intellectual attention is attention directed to ideal or represented objects; stimuli that are not physically present. James also distinguished between immediate and derived attention: attention to the present versus to something not physically present. According to James, attention has five major effects. Attention works to make us perceive, conceive, distinguish, remember, and shorten reaction time.
History of the study:
1910–1949 During this period, research in attention waned and interest in behaviorism flourished, leading some to believe, like Ulric Neisser, that in this period, "There was no research on attention". However, Jersild published very important work on "Mental Set and Shift" in 1927. He stated, "The fact of mental set is primary in all conscious activity. The same stimulus may evoke any one of a large number of responses depending upon the contextual setting in which it is placed". This research found that the time to complete a list was longer for mixed lists than for pure lists. For example, a list containing only names of animals is processed faster than a list of the same length that mixes names of animals, books, makes and models of cars, and types of fruit; the difference reflects task switching.
History of the study:
In 1931, Telford discovered the psychological refractory period. The stimulation of neurons is followed by a refractory phase during which neurons are less sensitive to stimulation. In 1935 John Ridley Stroop developed the Stroop Task, which elicited the Stroop Effect. Stroop's task showed that irrelevant stimulus information can have a major impact on performance. In this task, subjects looked at a list of color words, each printed in an ink color different from the color the word names. For example, the word Blue would be printed in orange ink, Pink in black, and so on.
History of the study:
Example: Blue Purple Red Green Purple Green
Subjects were then instructed to say the name of the ink color and ignore the text. It took 110 seconds to complete a list of this type, compared with 63 seconds to name the colors when presented in the form of solid squares. The naming time nearly doubled in the presence of conflicting color words, an effect known as the Stroop Effect.
History of the study:
1950–1974 In the 1950s, research psychologists renewed their interest in attention when the dominant epistemology shifted from positivism (i.e., behaviorism) to realism during what has come to be known as the "cognitive revolution". The cognitive revolution admitted unobservable cognitive processes like attention as legitimate objects of scientific study.
History of the study:
Modern research on attention began with the analysis of the "cocktail party problem" by Colin Cherry in 1953. At a cocktail party how do people select the conversation that they are listening to and ignore the rest? This problem is at times called "focused attention", as opposed to "divided attention". Cherry performed a number of experiments which became known as dichotic listening and were extended by Donald Broadbent and others.: 112 In a typical experiment, subjects would use a set of headphones to listen to two streams of words in different ears and selectively attend to one stream. After the task, the experimenter would question the subjects about the content of the unattended stream.
History of the study:
Broadbent's Filter Model of Attention states that information is held in a pre-attentive temporary store, and only sensory events that have some physical feature in common are selected to pass into the limited capacity processing system. This implies that the meaning of unattended messages is not identified. Also, a significant amount of time is required to shift the filter from one channel to another. Experiments by Gray and Wedderburn and later Anne Treisman pointed out various problems in Broadbent's early model and eventually led to the Deutsch–Norman model in 1968. In this model, no signal is filtered out, but all are processed to the point of activating their stored representations in memory. The point at which attention becomes "selective" is when one of the memory representations is selected for further processing. At any time, only one can be selected, resulting in the attentional bottleneck.: 115–116 This debate became known as the early-selection vs. late-selection models. In the early selection models (first proposed by Donald Broadbent), attention shuts down (in Broadbent's model) or attenuates (in Treisman's refinement) processing in the unattended ear before the mind can analyze its semantic content. In the late selection models (first proposed by J. Anthony Deutsch and Diana Deutsch), the content in both ears is analyzed semantically, but the words in the unattended ear cannot access consciousness. Lavie's perceptual load theory, however, "provided [an] elegant solution to" what had once been a "heated debate". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Strephosymbolia**
Strephosymbolia:
Strephosymbolia was Samuel Orton's theory of dyslexia which he first published in 1925. The root strepho is Ancient Greek for "twisted" or "reversed" and he used this in preference to the phrase "word blindness", which he thought inaccurate as the difficulty was not that those with strephosymbolia could not see the words but that they had difficulty comprehending them. As he developed his theory, he attributed the difficulty to an imperfect dominance of the hemisphere of the brain which processed the symbols when reading, being confused by a residual but reversed equivalent in the other hemisphere. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Argument from beauty**
Argument from beauty:
The argument from beauty (also the aesthetic argument) is an argument for the existence of a realm of immaterial ideas or, most commonly, for the existence of God, that roughly states that the elegance of the laws of physics or the elegant laws of mathematics is evidence of a creator deity who has arranged these things to be beautiful (aesthetically pleasing, or "good") and not ugly. Plato argued there is a transcendent plane of abstract ideas, or universals, which are more perfect than real-world examples of those ideas. Later philosophers connected this plane to the idea of goodness, beauty, and then the Christian God.
Argument from beauty:
Various observers have also argued that the experience of beauty is evidence of the existence of a universal God. Depending on the observer, this might include artificially beautiful things like music or art, natural beauty like landscapes or astronomical bodies, or the elegance of abstract ideas like the laws of mathematics or physics.
The best-known defender of the aesthetic argument is Richard Swinburne.
History of the argument from Platonic universals:
The argument from beauty has two aspects. The first is connected with the independent existence of what philosophers term a "universal" (see Universal (metaphysics) and also Problem of universals). Plato argued that particular examples of, say a circle, all fall short of the perfect exemplar of a circle that exists outside the realm of the senses as an eternal Idea. Beauty for Plato is a particularly important type of universal. Perfect beauty exists only in the eternal Form of beauty (see Platonic epistemology). For Plato the argument for a timeless idea of beauty does not involve so much whether the gods exist (Plato was not a monotheist) but rather whether there is an immaterial realm independent and superior to the imperfect world of sense. Later Greek thinkers such as Plotinus (c. 204/5–270 CE) expanded Plato's argument to support the existence of a totally transcendent "One", containing no parts. Plotinus identified this "One" with the concept of "Good" and the principle of "Beauty". Christianity adopted this Neo-Platonic conception and saw it as a strong argument for the existence of a supreme God. In the early fifth century, for example, Augustine of Hippo discusses the many beautiful things in nature and asks "Who made these beautiful changeable things, if not one who is beautiful and unchangeable?" This second aspect is what most people today understand as the argument from beauty.
Richard Swinburne:
A contemporary British philosopher of religion, Richard Swinburne, known for philosophical arguments about the existence of God, advocates a variation of the argument from beauty: God has reason to make a basically beautiful world, although also reason to leave some of the beauty or ugliness of the world within the power of creatures to determine; but he would seem to have overriding reason not to make a basically ugly world beyond the powers of creatures to improve. Hence, if there is a God there is more reason to expect a basically beautiful world than a basically ugly one. A priori, however, there is no particular reason for expecting a basically beautiful rather than a basically ugly world. In consequence, if the world is beautiful, that fact would be evidence for God's existence. For, in this case, if we let k be 'there is an orderly physical universe', e be 'there is a beautiful universe', and h be 'there is a God', P(e/h.k) will be greater than P(e/k)... Few, however, would deny that our universe (apart from its animal and human inhabitants, and aspects subject to their immediate control) has that beauty. Poets and painters and ordinary men down the centuries have long admired the beauty of the orderly procession of the heavenly bodies, the scattering of the galaxies through the heavens (in some ways random, in some ways orderly), and the rocks, sea, and wind interacting on earth, The spacious firmament on high, and all the blue ethereal sky, the water lapping against 'the old eternal rocks', and the plants of the jungle and of temperate climates, contrasting with the desert and the Arctic wastes. Who in his senses would deny that here is beauty in abundance? If we confine ourselves to the argument from the beauty of the inanimate and plant worlds, the argument surely works."
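Swinburne's probabilistic step can be made explicit with Bayes' theorem. The derivation below is a standard confirmation-theoretic reading using his own symbols, not a quotation from Swinburne.

```latex
% With k = "there is an orderly physical universe", e = "there is a beautiful
% universe", h = "there is a God" (Swinburne's notation), Bayes' theorem gives:
\[
  P(h \mid e \wedge k)
    = \frac{P(e \mid h \wedge k)\, P(h \mid k)}{P(e \mid k)}
    > P(h \mid k)
  \quad\text{whenever } P(e \mid h \wedge k) > P(e \mid k).
\]
% That is, if beauty is more probable given theism than not, observing beauty
% raises the probability of theism -- which is the structure of Swinburne's claim.
```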
Art as a route to God:
The most frequent invocation of the argument from beauty today involves the aesthetic experience one obtains from great literature, music or art. In the concert hall or museum one can easily feel carried away from the mundane. For many people this feeling of transcendence approaches the religious in intensity. It is a commonplace to regard concert halls and museums as the cathedrals of the modern age because they seem to translate beauty into meaning and transcendence. Dostoevsky was a proponent of the transcendent nature of beauty. His enigmatic statement: "Beauty will save the world" is frequently cited. Aleksandr Solzhenitsyn in his Nobel Prize lecture reflected upon this phrase: And so perhaps that old trinity of Truth and Good and Beauty is not just the formal outworn formula it used to seem to us during our heady, materialistic youth. If the crests of these three trees join together, as the investigators and explorers used to affirm, and if the too obvious, too straight branches of Truth and Good are crushed or amputated and cannot reach the light—yet perhaps the whimsical, unpredictable, unexpected branches of Beauty will make their way through and soar up to that very place and in this way perform the work of all three. And in that case it was not a slip of the tongue for Dostoyevsky to say that "Beauty will save the world" but a prophecy. After all, he was given the gift of seeing much, he was extraordinarily illumined. And consequently perhaps art, literature, can in actual fact help the world of today.
Philosophical basis of science and mathematics:
Exactly what role to attribute to beauty in mathematics and science is hotly contested; see Philosophy of mathematics. The argument from beauty in science and mathematics is an argument for philosophical realism against nominalism. The debate revolves around the question, "Do things like scientific laws, numbers and sets have an independent 'real' existence outside individual human minds?". The argument is quite complex and still far from settled. Scientists and philosophers often marvel at the congruence between nature and mathematics. In 1960 the Nobel Prize–winning physicist and mathematician Eugene Wigner wrote an article entitled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". He pointed out that "the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it." In applying mathematics to understand the natural world, scientists often employ aesthetic criteria that seem far removed from science. Albert Einstein once said that "the only physical theories that we are willing to accept are the beautiful ones." Conversely, beauty can sometimes be misleading; Thomas Huxley wrote that "Science is organized common sense, where many a beautiful theory was killed by an ugly fact." When developing hypotheses, scientists use beauty and elegance as valuable selective criteria. The more beautiful a theory, the more likely it is to be true. The mathematical physicist Hermann Weyl said with evident amusement, "My work has always tried to unite the true with the beautiful and when I had to choose one or the other, I usually chose the beautiful." The quantum physicist Werner Heisenberg wrote to Einstein, "You may object that by speaking of simplicity and beauty I am introducing aesthetic criteria of truth, and I frankly admit that I am strongly attracted by the simplicity and beauty of the mathematical schemes which nature presents us."
Criticisms:
The argument implies beauty is something immaterial instead of being a subjective neurological response to stimuli. Philosophers since Immanuel Kant increasingly argue that beauty is an artifact of individual human minds. A 'beautiful' sunset is, according to this perspective, aesthetically neutral in itself. It is our cognitive response that interprets it as 'beautiful.' Others would argue that this cognitive response has been developed through the evolutionary development of the brain and its exposure to particular stimuli over long ages. Others point to the existence of evil and various types of ugliness as invalidating the argument. Joseph McCabe, a freethought writer of the early 20th century, questioned the argument in The Existence of God, when he asked whether God also created parasitic microbes. In his book, The God Delusion, Richard Dawkins describes the argument thus: Another character in the Aldous Huxley novel just mentioned proved the existence of God by playing Beethoven's string quartet no. 15 in A minor ('Heiliger Dankgesang') on a gramophone. Unconvincing as that sounds, it does represent a popular strand of argument. I have given up counting the number of times I receive the more or less truculent challenge: 'How do you account for Shakespeare, then?' (Substitute Schubert, Michelangelo, etc. to taste.) The argument will be so familiar, I needn’t document it further. But the logic behind it is never spelled out, and the more you think about it the more vacuous you realize it to be. Obviously Beethoven's late quartets are sublime. So are Shakespeare's sonnets. They are sublime if God is there and they are sublime if he isn't. They do not prove the existence of God; they prove the existence of Beethoven and of Shakespeare. A great conductor is credited with saying: 'If you have Mozart to listen to, why would you need God?' Bertrand Russell had no trouble seeing beauty in mathematics but he did not see it as a valid argument for the existence of God. In "The Study of Mathematics", he wrote: Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as in poetry.
Criticisms:
However, he also wrote: "My conclusion is that there is no reason to believe any of the dogmas of traditional theology and, further, that there is no reason to wish that they were true. Man, in so far as he is not subject to natural forces, is free to work out his own destiny. The responsibility is his, and so is the opportunity." H. L. Mencken stated that humans have created things of greater beauty when he wrote, "I also pass over the relatively crude contrivances of this Creator in the aesthetic field, wherein He has been far surpassed by man, as, for example, for adroitness of design, for complexity or for beauty, the sounds of an orchestra." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vacuum modulator**
Vacuum modulator:
A vacuum modulator is an engine-load-sensing device that converts engine vacuum into an input signal for the transmission valve body.
Most vacuum modulators operate on manifold vacuum (taken below the throttle blades), which is highest at idle and changes proportionately (rising and falling) with engine load rather than with engine speed.
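A toy model can illustrate the load-sensing relationship just described: high manifold vacuum (light load) corresponds to lower transmission line pressure, while low vacuum (heavy load) raises it. The pressure range, the linear mapping, and the effect per adjuster turn are all assumptions made for illustration, not specifications of any particular transmission.

```python
def modulator_line_pressure(manifold_vacuum_inhg: float,
                            preload_turns: float = 0.0) -> float:
    """Toy model of a vacuum modulator's effect on transmission line pressure.

    High manifold vacuum (idle / light load) -> low line pressure;
    low vacuum (open throttle, heavy load)   -> high line pressure.
    The adjusting screw is modelled as a simple preload offset.
    All numbers are illustrative assumptions, not calibration data.
    """
    max_vacuum = 20.0                      # in. Hg, assumed idle vacuum
    base, span = 60.0, 90.0                # psi, assumed pressure range
    load_fraction = 1.0 - min(max(manifold_vacuum_inhg / max_vacuum, 0.0), 1.0)
    return base + span * load_fraction + 5.0 * preload_turns

if __name__ == "__main__":
    for vac in (18.0, 10.0, 2.0):          # idle, part throttle, heavy load
        print(f"{vac:4.1f} in. Hg -> {modulator_line_pressure(vac):5.1f} psi")
```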
Vacuum modulators were essential to the proper operation of many automatic transmissions. A broken spring or diaphragm required the unit to be repaired or replaced; early units were repairable, whereas later models needed complete replacement.
As the throttle blades open, manifold (engine) vacuum drops while ported vacuum (above the throttle blades) increases. Many vacuum modulators also allow tuning via a small flat-blade screwdriver that turns a threaded adjuster to increase or decrease spring pressure against the diaphragm inside. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kawasaki Versys 650**
Kawasaki Versys 650:
The Kawasaki Versys 650 (codenamed KLE650) is a middleweight motorcycle. It borrows design elements from dual-sport bikes, standards, adventure tourers and sport bikes; sharing characteristics of all, but not neatly fitting into any of those categories. The name Versys is a portmanteau of the words versatile and system.
Kawasaki Versys 650:
It was introduced by Kawasaki to the European and Canadian markets as a 2007 model and to the US market in 2008. A California emissions compliant version was released in 2009. In 2010 new styling was applied to the headlight and fairings and several functional changes made including enlarged mirrors and improved rubber engine mounts. In 2015, a new model was introduced with a new fairing style that abandoned the older, stacked headlights for the more conventional twin headlight style commonly found on sportbikes.
Technical details:
The Versys is based on the same platform as Kawasaki's other 650cc twin motorcycles, the Ninja 650R and the ER-6n. It shares the same electronics, engine, wheels, brakes and main frame as its siblings. Where it differs is in riding position, rear sub frame, suspension components, and engine tuning. The Versys' 650 cc liquid cooled, four-stroke, parallel-twin engine has been retuned for more bottom-end and mid-range torque. This is achieved with different camshafts and fuel injection mapping. These changes cause peak torque to occur at a lower engine speed and provide better throttle response in the 3,000 to 6,000 rpm range. In addition, a balance tube has been added between the exhaust headers to smooth out power delivery. Power is 68 hp (51 kW) at 8,500 rpm, compared with the Ninja's 67 hp (50 kW) at 8,000 rpm. Torque is 47.2 lb⋅ft (64.0 N⋅m), compared with the Ninja's 48.45 lb⋅ft (65.69 N⋅m). Improving the engine's low and mid range response comes at the expense of a slight reduction in peak torque, however. A similar approach was recently deployed by Honda with their CBF1000 model. The engine uses a 180 degree crankshaft. This in turn requires an uneven firing interval (180 degrees, 540 degrees) which gives the engine note a distinctive "throbbing" sound at idle. The suspension has greater vertical travel and more adjustability than the suspension on the Versys 650's siblings. On the front, thicker/stiffer 41 mm inverted telescopic forks are externally adjustable for preload and rebound damping. The right fork leg carries a damping cartridge while both legs contain springs. The rear shock absorber is adjustable for rebound damping. Suspension preload is adjustable in the rear via a screw collar on the shock. 2015+ models have an external adjuster. The rear shock/spring is directly connected, without linkages, to a non-symmetrical, gull-wing, aluminium swing arm instead of the more basic steel swing arm used on the Ninja and ER-6.
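The relationship between the quoted peak-power and peak-torque figures can be checked with the standard imperial conversion (power in hp = torque in lb·ft × rpm / 5252); the calculation below simply applies it to the numbers given above.

```latex
% Torque implied at the 8,500 rpm power peak:
\[
  \tau_{\text{@ peak power}}
    = \frac{5252 \times P_{\text{hp}}}{\text{rpm}}
    = \frac{5252 \times 68}{8500}
    \approx 42\ \text{lb·ft},
\]
% which is below the quoted 47.2 lb·ft peak torque -- consistent with the
% retuned cams placing peak torque at a lower engine speed.
```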
Local variants:
In Australia, the Learner Approved Motorcycle Scheme (LAMS) is in place for riders on a restricted license during the first period after passing their motorbike test. For this market the Versys 650L is manufactured and sold with output power restrictions put into place using a custom program on the ECU and a screw installed near the throttle wheel on the right side of the bike which prevents it from fully rotating. The specifications for the Australian LAMS and non-LAMS variants for the 2017 model year differ, and the different rpm measurement points should also be noted. In addition to those differences, Australian-market Versys 650 and 650L models are sold with a fuel capacity of 21 L and have a curb mass of 216 kg (wet) for both the 650L and 650.
2022 update:
In 2022, it received a color TFT display, Bluetooth, LED lighting, a 2-level traction control system, and a manually adjustable windshield. The Tourer Plus was released at a price of ฿329,500 (US$10,303.31) in Thailand. In Germany, the Versys 650 starts at €8,595 (US$10,165.31).
Reception:
The Versys was reviewed by motorcycling media and received the following notable reactions.
2008 Motorcycle of the Year award by Motorcyclist magazine
2008 Best in Class "Allrounder class" award by Motor Cycle News
2015 Comparison Winner: Kawasaki Versys 650 LT vs. Suzuki V-Strom 650XT by Motorcyclist magazine | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Novum**
Novum:
Novum (Latin for new thing) is a term used by science fiction scholar Darko Suvin and others to describe the scientifically plausible innovations used by science fiction narratives. Frequently used science fictional nova include aliens, time travel, the technological singularity, artificial intelligence, and psychic powers.
Origin:
Suvin learned the term from Ernst Bloch, whose work is cited frequently in Metamorphoses of Science Fiction.
Origin:
Suvin argues that the genre of science fiction is distinguished from fantasy by the story being driven by a novum validated by logic he calls cognitive estrangement. This means that the hypothetical "new thing" which the story is about can be imagined to exist by scientific means rather than by magic, i.e., by the factual reporting of fictions and by relating them in a plausible way to reality.
General references:
Cambridge Companion to Science Fiction
Metamorphoses of Science Fiction: On the Poetics and History of a Literary Genre by Darko Suvin. 1979. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stainless steel**
Stainless steel:
Stainless steel, also known as inox or corrosion-resistant steel (CRES), is an alloy of iron that is resistant to rusting and corrosion. It contains at least 10.5% chromium and usually nickel, and may also contain other elements, such as carbon, to obtain the desired properties. Stainless steel's resistance to corrosion results from the chromium, which forms a passive film that can protect the material and self-heal in the presence of oxygen.: 3 The alloy's properties, such as luster and resistance to corrosion, are useful in many applications. Stainless steel can be rolled into sheets, plates, bars, wire, and tubing. These can be used in cookware, cutlery, surgical instruments, major appliances, vehicles, construction material in large buildings, industrial equipment (e.g., in paper mills, chemical plants, water treatment), and storage tanks and tankers for chemicals and food products.
Stainless steel:
The biological cleanability of stainless steel is superior to both aluminium and copper, and comparable to glass. Its cleanability, strength, and corrosion resistance have prompted the use of stainless steel in pharmaceutical and food processing plants.Different types of stainless steel are labeled with an AISI three-digit number. The ISO 15510 standard lists the chemical compositions of stainless steels of the specifications in existing ISO, ASTM, EN, JIS, and GB standards in a useful interchange table.
Properties:
Conductivity Like steel, stainless steels are relatively poor conductors of electricity, with significantly lower electrical conductivities than copper. In particular, the electrical contact resistance (ECR) of stainless steel arises as a result of the dense protective oxide layer and limits its functionality in applications as electrical connectors. Copper alloys and nickel-coated connectors tend to exhibit lower ECR values and are preferred materials for such applications. Nevertheless, stainless steel connectors are employed in situations where ECR is a lesser design criterion and corrosion resistance is required, for example in high temperatures and oxidizing environments.
Properties:
Melting point As with most alloys, the melting point of stainless steel is expressed as a range of temperatures rather than a single temperature. This range goes from 1,400 to 1,530 °C (2,550 to 2,790 °F; 1,670 to 1,800 K; 3,010 to 3,250 °R) depending on the specific composition of the alloy in question.
Properties:
Hardness Stainless steel is a highly durable metal known for its impressive hardness. This quality is primarily due to the presence of two key components: chromium and nickel. Chromium forms an oxide layer on the metal's surface, protecting it from corrosion and wear. Meanwhile, nickel contributes to the metal's strength and ductility, enhancing its overall hardness. Some stainless steels can also be hardened through heat treatment processes such as quenching and tempering, further improving their hardness.
Properties:
Thermal conduction The thermal conductivity of stainless steel depends on its composition and structure. Typically, stainless steel has a thermal conductivity ranging from 15 to 20 W/(m·K) (watts per meter-kelvin), which is low for a metal; as a result it transfers heat relatively slowly and tends to retain it.
Properties:
Magnetism Martensitic, duplex and ferritic stainless steels are magnetic, while austenitic stainless steel is usually non-magnetic. Ferritic steel owes its magnetism to its body-centered cubic crystal structure, in which iron atoms are arranged in cubes (with one iron atom at each corner) and an additional iron atom in the center. This central iron atom is responsible for ferritic steel's magnetic properties. This arrangement also limits the amount of carbon the steel can absorb to around 0.025%. Grades with low coercive field have been developed for electro-valves used in household appliances and for injection systems in internal combustion engines. Some applications require non-magnetic materials, such as magnetic resonance imaging. Austenitic stainless steels, which are usually non-magnetic, can be made slightly magnetic through work hardening. Sometimes, if austenitic steel is bent or cut, magnetism occurs along the edge of the stainless steel because the crystal structure rearranges itself.
Properties:
Corrosion The addition of nitrogen also improves resistance to pitting corrosion and increases mechanical strength. Thus, there are numerous grades of stainless steel with varying chromium and molybdenum contents to suit the environment the alloy must endure. Corrosion resistance can be increased further by the following means:
increasing chromium content to more than 11%
adding nickel to at least 8%
adding molybdenum (which also improves resistance to pitting corrosion)
Wear Galling, sometimes called cold welding, is a form of severe adhesive wear, which can occur when two metal surfaces are in relative motion to each other and under heavy pressure. Austenitic stainless steel fasteners are particularly susceptible to thread galling, though other alloys that self-generate a protective oxide surface film, such as aluminum and titanium, are also susceptible. Under high contact-force sliding, this oxide can be deformed, broken, and removed from parts of the component, exposing the bare reactive metal. When the two surfaces are of the same material, these exposed surfaces can easily fuse. Separation of the two surfaces can result in surface tearing and even complete seizure of metal components or fasteners. Galling can be mitigated by the use of dissimilar materials (bronze against stainless steel) or using different stainless steels (martensitic against austenitic). Additionally, threaded joints may be lubricated to provide a film between the two parts and prevent galling. Nitronic 60, made by selective alloying with manganese, silicon, and nitrogen, has demonstrated a reduced tendency to gall.
Properties:
Density The density of stainless steel ranges from about 7,500 kg/m3 to 8,000 kg/m3, depending on the alloy.
History:
The invention of stainless steel followed a series of scientific developments, starting in 1798 when chromium was first shown to the French Academy by Louis Vauquelin. In the early 1800s, British scientists James Stoddart, Michael Faraday, and Robert Mallet observed the resistance of chromium-iron alloys ("chromium steels") to oxidizing agents. Robert Bunsen discovered chromium's resistance to strong acids. The corrosion resistance of iron-chromium alloys may have been first recognized in 1821 by Pierre Berthier, who noted their resistance against attack by some acids and suggested their use in cutlery. In the 1840s, both Britain's Sheffield steelmakers and then Krupp of Germany were producing chromium steel, with the latter employing it for cannons in the 1850s. In 1861, Robert Forester Mushet took out a patent on chromium steel in Britain. These events led to the first American production of chromium-containing steel by J. Baur of the Chrome Steel Works of Brooklyn for the construction of bridges; a US patent for the product was issued in 1869. This was followed with recognition of the corrosion resistance of chromium alloys by Englishmen John T. Woods and John Clark, who noted ranges of chromium from 5–30%, with added tungsten and "medium carbon". They pursued the commercial value of the innovation via a British patent for "Weather-Resistant Alloys". In the late 1890s, German chemist Hans Goldschmidt developed an aluminothermic (thermite) process for producing carbon-free chromium. Between 1904 and 1911, several researchers, particularly Leon Guillet of France, prepared alloys that would be considered stainless steel today. In 1908, the Essen firm Friedrich Krupp Germaniawerft built the 366-ton sailing yacht Germania in Germany, featuring a chrome-nickel steel hull. In 1911, Philip Monnartz reported on the relationship between chromium content and corrosion resistance. On 17 October 1912, Krupp engineers Benno Strauss and Eduard Maurer patented as Nirosta the austenitic stainless steel known today as 18/8 or AISI type 304. Similar developments were taking place in the United States, where Christian Dantsizen of General Electric and Frederick Becket (1875–1942) at Union Carbide were industrializing ferritic stainless steel. In 1912, Elwood Haynes applied for a US patent on a martensitic stainless steel alloy, which was not granted until 1919.
History:
Harry Brearley While seeking a corrosion-resistant alloy for gun barrels in 1912, Harry Brearley of the Brown-Firth research laboratory in Sheffield, England, discovered and subsequently industrialized a martensitic stainless steel alloy, today known as AISI type 420. The discovery was announced two years later in a January 1915 newspaper article in The New York Times. The metal was later marketed under the "Staybrite" brand by Firth Vickers in England and was used for the new entrance canopy for the Savoy Hotel in London in 1929. Brearley applied for a US patent during 1915 only to find that Haynes had already registered one. Brearley and Haynes pooled their funding and, with a group of investors, formed the American Stainless Steel Corporation, with headquarters in Pittsburgh, Pennsylvania. Rustless steel Brearley initially called his new alloy "rustless steel". The alloy was sold in the US under different brand names such as "Allegheny metal" and "Nirosta steel". Even within the metallurgy industry, the name remained unsettled; in 1921, one trade journal called it "unstainable steel". Brearley worked with a local cutlery manufacturer, who gave it the name "stainless steel". As late as 1932, Ford Motor Company continued calling the alloy rustless steel in automobile promotional materials. In 1929, before the Great Depression, over 25,000 tons of stainless steel were manufactured and sold in the US annually. Major technological advances in the 1950s and 1960s allowed the production of large tonnages at an affordable cost: the AOD process (argon oxygen decarburization) for the removal of carbon and sulfur; continuous casting and hot strip rolling; the Z-Mill, or Sendzimir cold rolling mill; and the Creusot-Loire Uddeholm (CLU) and related processes, which use steam instead of some or all of the argon.
Types:
Stainless steel is classified into five main families that are primarily differentiated by their crystalline structure: austenitic, ferritic, martensitic, duplex, and precipitation hardening. Austenitic Austenitic stainless steel is the largest family of stainless steels, making up about two-thirds of all stainless steel production. They possess an austenitic microstructure, which is a face-centered cubic crystal structure. This microstructure is achieved by alloying steel with sufficient nickel and/or manganese and nitrogen to maintain an austenitic microstructure at all temperatures, ranging from the cryogenic region to the melting point. Thus, austenitic stainless steels are not hardenable by heat treatment since they possess the same microstructure at all temperatures. However, "forming temperature is an essential factor for metastable austenitic stainless steel (M-ASS) products to accommodate microstructures and cryogenic mechanical performance. ... Metastable austenitic stainless steels (M-ASSs) are widely used in manufacturing cryogenic pressure vessels (CPVs), owing to their high cryogenic toughness, ductility, strength, corrosion-resistance, and economy." Cryogenic cold-forming of austenitic stainless steel is an extension of the heating-quenching-tempering cycle, where the final temperature of the material before full-load use is taken down to a cryogenic temperature range. This can remove residual stresses and improve wear resistance. Austenitic stainless steel has two sub-groups, the 200 series and the 300 series: 200 series are chromium-manganese-nickel alloys that maximize the use of manganese and nitrogen to minimize the use of nickel. Due to their nitrogen addition, they possess approximately 50% higher yield strength than 300-series stainless steels.
Types:
Type 201 is hardenable through cold working.
Type 202 is a general-purpose stainless steel. Decreasing nickel content and increasing manganese results in weaker corrosion resistance.
300 series are chromium-nickel alloys that achieve their austenitic microstructure almost exclusively by nickel alloying; some very highly alloyed grades include some nitrogen to reduce nickel requirements. 300 series is the largest group and the most widely used.
Type 304: The most common is type 304, also known as 18/8 and 18/10 for its composition of 18% chromium and 8% or 10% nickel, respectively.
Type 316: The second most common austenitic stainless steel is type 316. The addition of 2% molybdenum provides greater resistance to acids and localized corrosion caused by chloride ions. Low-carbon versions, such as 316L or 304L, have carbon contents below 0.03% and are used to avoid corrosion problems caused by welding.
Types:
Ferritic Ferritic stainless steels possess a ferrite microstructure like carbon steel, which is a body-centered cubic crystal structure, and contain between 10.5% and 27% chromium with very little or no nickel. This microstructure is present at all temperatures due to the chromium addition, so they are not hardenable by heat treatment. They cannot be strengthened by cold work to the same degree as austenitic stainless steels. They are magnetic. Additions of niobium (Nb), titanium (Ti), and zirconium (Zr) to type 430 allow good weldability. Due to the near-absence of nickel, they are less expensive than austenitic steels and are present in many products, including: automobile exhaust pipes (type 409 and 409 Cb are used in North America; stabilized grades type 439 and 441 are used in Europe); architectural and structural applications (type 430, which contains 17% Cr); building components, such as slate hooks, roofing, and chimney ducts; and power plates in solid oxide fuel cells operating at temperatures around 700 °C (1,300 °F) (high-chromium ferritics containing 22% Cr). Martensitic Martensitic stainless steels have a body-centered cubic crystal structure, offer a wide range of properties, and are used as stainless engineering steels, stainless tool steels, and creep-resistant steels. They are magnetic, and not as corrosion-resistant as ferritic and austenitic stainless steels due to their low chromium content. They fall into four categories (with some overlap): Fe-Cr-C grades. These were the first grades used and are still widely used in engineering and wear-resistant applications.
Types:
Fe-Cr-Ni-C grades. Some carbon is replaced by nickel. They offer higher toughness and higher corrosion resistance. Grade EN 1.4303 (Casting grade CA6NM) with 13% Cr and 4% Ni is used for most Pelton, Kaplan, and Francis turbines in hydroelectric power plants because it has good casting properties, good weldability and good resistance to cavitation erosion.
Precipitation hardening grades. Grade EN 1.4542 (also known as 17-4 PH), the best-known grade, combines martensitic hardening and precipitation hardening. It achieves high strength and good toughness and is used in aerospace among other applications.
Types:
Creep-resisting grades. Small additions of niobium, vanadium, boron, and cobalt increase the strength and creep resistance up to about 650 °C (1,200 °F). Martensitic stainless steels can be heat treated to provide better mechanical properties. The heat treatment typically involves three steps: Austenitizing, in which the steel is heated to a temperature in the range 980–1,050 °C (1,800–1,920 °F), depending on grade. The resulting austenite has a face-centered cubic crystal structure.
Types:
Quenching. The austenite is transformed into martensite, a hard body-centered tetragonal crystal structure. The quenched martensite is very hard and too brittle for most applications. Some residual austenite may remain.
Types:
Tempering. Martensite is heated to around 500 °C (930 °F), held at temperature, then air-cooled. Higher tempering temperatures decrease yield strength and ultimate tensile strength but increase the elongation and impact resistance. Replacing some carbon in martensitic stainless steels by nitrogen is a recent development. The limited solubility of nitrogen is increased by the pressure electroslag refining (PESR) process, in which melting is carried out under high nitrogen pressure. Steel containing up to 0.4% nitrogen has been achieved, leading to higher hardness and strength and higher corrosion resistance. As PESR is expensive, lower but significant nitrogen contents have been achieved using the standard AOD process.
Types:
Duplex Duplex stainless steels have a mixed microstructure of austenite and ferrite, the ideal ratio being a 50:50 mix, though commercial alloys may have ratios of 40:60. They are characterized by higher chromium (19–32%) and molybdenum (up to 5%) and lower nickel contents than austenitic stainless steels. Duplex stainless steels have roughly twice the yield strength of austenitic stainless steel. Their mixed microstructure provides improved resistance to chloride stress corrosion cracking in comparison to austenitic stainless steel types 304 and 316. Duplex grades are usually divided into three sub-groups based on their corrosion resistance: lean duplex, standard duplex, and super duplex. The properties of duplex stainless steels are achieved with an overall lower alloy content than similar-performing super-austenitic grades, making their use cost-effective for many applications. The pulp and paper industry was one of the first to extensively use duplex stainless steel. Today, the oil and gas industry is the largest user and has pushed for more corrosion resistant grades, leading to the development of super duplex and hyper duplex grades. More recently, the less expensive (and slightly less corrosion-resistant) lean duplex has been developed, chiefly for structural applications in building and construction (concrete reinforcing bars, plates for bridges, coastal works) and in the water industry.
Types:
Precipitation hardening Precipitation hardening stainless steels have corrosion resistance comparable to austenitic varieties, but can be precipitation hardened to even higher strengths than other martensitic grades. There are three types of precipitation hardening stainless steels: Martensitic 17-4 PH (AISI 630 EN 1.4542) contains about 17% Cr, 4% Ni, 4% Cu, and 0.3% Nb. Solution treatment at about 1,040 °C (1,900 °F) followed by quenching results in a relatively ductile martensitic structure. Subsequent aging treatment at 475 °C (887 °F) precipitates Nb- and Cu-rich phases that increase the yield strength to above 1000 MPa. This outstanding strength level is used in high-tech applications such as aerospace (usually after remelting to eliminate non-metallic inclusions, which increases fatigue life). Another major advantage of this steel is that aging, unlike tempering treatments, is carried out at a temperature that can be applied to (nearly) finished parts without distortion and discoloration.
Types:
Semi-austenitic 17-7 PH (AISI 631 EN 1.4568) contains about 17% Cr, 7.2% Ni, and 1.2% Al. Typical heat treatment involves solution treatment and quenching. At this point, the structure remains austenitic. Martensitic transformation is then obtained either by a cryogenic treatment at −75 °C (−103 °F) or by severe cold work (over 70% deformation, usually by cold rolling or wire drawing). Aging at 510 °C (950 °F), which precipitates the Ni3Al intermetallic phase, is carried out as above on nearly finished parts. Yield stress levels above 1400 MPa are then reached.
Types:
Austenitic A286 (ASTM 660 EN 1.4980) contains about 15% Cr, 25% Ni, 2.1% Ti, 1.2% Mo, 1.3% V, and 0.005% B. The structure remains austenitic at all temperatures.
Types:
Typical heat treatment involves solution treatment and quenching, followed by aging at 715 °C (1,319 °F). Aging forms Ni3Ti precipitates and increases the yield strength to about 650 MPa (94 ksi) at room temperature. Unlike the above grades, the mechanical properties and creep resistance of this steel remain very good at temperatures up to 700 °C (1,300 °F). As a result, A286 is classified as an Fe-based superalloy, used in jet engines, gas turbines, and turbo parts.
Grades:
Over 150 grades of stainless steel are recognized, of which 15 are the most widely used. Many grading systems are in use, including US SAE steel grades. The Unified Numbering System for Metals and Alloys (UNS) was developed by the ASTM in 1970. The Europeans have adopted EN 10088.
Corrosion resistance:
Unlike carbon steel, stainless steels do not suffer uniform corrosion when exposed to wet environments. Unprotected carbon steel rusts readily when exposed to a combination of air and moisture. The resulting iron oxide surface layer is porous and fragile. In addition, as iron oxide occupies a larger volume than the original steel, this layer expands and tends to flake and fall away, exposing the underlying steel to further attack. In comparison, stainless steels contain sufficient chromium to undergo passivation, spontaneously forming a microscopically thin inert surface film of chromium oxide by reaction with the oxygen in the air and even the small amount of dissolved oxygen in the water. This passive film prevents further corrosion by blocking oxygen diffusion to the steel surface and thus prevents corrosion from spreading into the bulk of the metal. This film is self-repairing, even when scratched or temporarily disturbed by an upset condition in the environment that exceeds the inherent corrosion resistance of that grade. The resistance of this film to corrosion depends upon the chemical composition of the stainless steel, chiefly the chromium content. It is customary to distinguish between four forms of corrosion: uniform, localized (pitting), galvanic, and SCC (stress corrosion cracking). Any of these forms of corrosion can occur when the grade of stainless steel is not suited for the working environment.
Corrosion resistance:
The designation "CRES" refers to corrosion-resistant (stainless) steel.
Uniform Uniform corrosion takes place in very aggressive environments, typically where chemicals are produced or heavily used, such as in the pulp and paper industries. The entire surface of the steel is attacked, and the corrosion is expressed as corrosion rate in mm/year (usually less than 0.1 mm/year is acceptable for such cases). Corrosion tables provide guidelines.
Corrosion resistance:
This is typically the case when stainless steels are exposed to acidic or basic solutions. Whether stainless steel corrodes depends on the kind and concentration of acid or base and the solution temperature. Uniform corrosion is typically easy to avoid because of extensive published corrosion data or easily performed laboratory corrosion testing. Acidic solutions can be put into two general categories: reducing acids, such as hydrochloric acid and dilute sulfuric acid, and oxidizing acids, such as nitric acid and concentrated sulfuric acid. Increasing chromium and molybdenum content provides increased resistance to reducing acids, while increasing chromium and silicon content provides increased resistance to oxidizing acids. Sulfuric acid is one of the most-produced industrial chemicals. At room temperature, type 304 stainless steel is only resistant to 3% acid, while type 316 is resistant to 3% acid up to 50 °C (120 °F) and 20% acid at room temperature. Thus type 304 is rarely used in contact with sulfuric acid. Type 904L and Alloy 20 are resistant to sulfuric acid at even higher concentrations above room temperature. Concentrated sulfuric acid possesses oxidizing characteristics like nitric acid, and thus silicon-bearing stainless steels are also useful. Hydrochloric acid damages any kind of stainless steel and should be avoided. All types of stainless steel resist attack from phosphoric acid and nitric acid at room temperature. At high concentrations and elevated temperatures, attack will occur, and higher-alloy stainless steels are required. In general, organic acids are less corrosive than mineral acids such as hydrochloric and sulfuric acid; as the molecular weight of organic acids increases, their corrosivity decreases. Formic acid has the lowest molecular weight and is among the more corrosive organic acids. Type 304 can be used with formic acid, though it tends to discolor the solution. Type 316 is commonly used for storing and handling acetic acid, a commercially important organic acid. Type 304 and type 316 stainless steels are unaffected by weak bases such as ammonium hydroxide, even in high concentrations and at high temperatures. The same grades exposed to stronger bases such as sodium hydroxide at high concentrations and high temperatures will likely experience some etching and cracking. Increasing chromium and nickel contents provide increased resistance.
Corrosion resistance:
All grades resist damage from aldehydes and amines, though in the latter case type 316 is preferable to type 304; cellulose acetate damages type 304 unless the temperature is kept low. Fats and fatty acids only affect type 304 at temperatures above 150 °C (300 °F) and type 316 SS above 260 °C (500 °F), while type 317 SS is unaffected at all temperatures. Type 316L is required for the processing of urea.
Corrosion resistance:
Localized Localized corrosion can occur in several ways, e.g. pitting corrosion and crevice corrosion. These localized attacks are most common in the presence of chloride ions. Higher chloride levels require more highly alloyed stainless steels.
Corrosion resistance:
Localized corrosion can be difficult to predict because it is dependent on many factors, including: Chloride ion concentration. Even when chloride solution concentration is known, it is still possible for localized corrosion to occur unexpectedly. Chloride ions can become unevenly concentrated in certain areas, such as in crevices (e.g. under gaskets) or on surfaces in vapor spaces due to evaporation and condensation.
Corrosion resistance:
Temperature: increasing temperature increases susceptibility.
Acidity: increasing acidity increases susceptibility.
Stagnation: stagnant conditions increase susceptibility.
Corrosion resistance:
Oxidizing species: the presence of oxidizing species, such as ferric and cupric ions, increases susceptibility. Pitting corrosion is considered the most common form of localized corrosion. The corrosion resistance of stainless steels to pitting corrosion is often expressed by the pitting resistance equivalent number (PREN), obtained through the formula: PREN = %Cr + 3.3·%Mo + 16·%N, where the terms correspond to the proportion of the contents by mass of chromium, molybdenum, and nitrogen in the steel. For example, if the steel consisted of 15% chromium, %Cr would be equal to 15.
Corrosion resistance:
The higher the PREN, the higher the pitting corrosion resistance. Thus, increasing chromium, molybdenum, and nitrogen contents provide better resistance to pitting corrosion.
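The following minimal Python sketch (not from the original text) applies the PREN formula above to a few grades; the compositions are nominal values assumed for illustration only, not specification limits.

```python
# Illustrative PREN calculation: PREN = %Cr + 3.3*%Mo + 16*%N (contents in mass percent).
def pren(cr: float, mo: float, n: float) -> float:
    return cr + 3.3 * mo + 16.0 * n

# Nominal compositions assumed for the example only (mass %).
grades = {
    "Type 304": {"cr": 18.0, "mo": 0.0, "n": 0.05},
    "Type 316": {"cr": 17.0, "mo": 2.1, "n": 0.05},
    "2205 duplex": {"cr": 22.0, "mo": 3.1, "n": 0.16},
}

for name, composition in grades.items():
    print(f"{name}: PREN = {pren(**composition):.1f}")
# Higher PREN indicates better resistance to pitting corrosion.
```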
Corrosion resistance:
Though the PREN of a given steel may be theoretically sufficient to resist pitting corrosion, crevice corrosion can still occur when poor design has created confined areas (overlapping plates, washer-plate interfaces, etc.) or when deposits form on the material. In these select areas, the PREN may not be high enough for the service conditions. Good design, good fabrication techniques, alloy selection, and proper operating conditions (based on the concentration of active compounds present in the solution causing corrosion, pH, etc.) can prevent such corrosion.
Corrosion resistance:
Stress Stress corrosion cracking (SCC) is a sudden cracking and failure of a component without deformation. It may occur when three conditions are met: The part is stressed (by an applied load or by residual stress).
The environment is aggressive (high chloride level, temperature above 50 °C (120 °F), presence of H2S).
The stainless steel is not sufficiently SCC-resistant. The SCC mechanism results from the following sequence of events: Pitting occurs.
Cracks start from a pit initiation site.
Cracks then propagate through the metal in a transgranular or intergranular mode.
Failure occurs. Whereas pitting usually leads to unsightly surfaces and, at worst, to perforation of the stainless sheet, failure by SCC can have severe consequences. It is therefore considered as a special form of corrosion.
As SCC requires several conditions to be met, it can be counteracted with relatively easy measures, including: Reducing the stress level (the oil and gas specifications provide requirements for maximal stress level in H2S-containing environments).
Assessing the aggressiveness of the environment (high chloride content, temperature above 50 °C (120 °F), etc.).
Selecting the right type of stainless steel: super austenitic such as grade 904L or super-duplex (ferritic stainless steels and duplex stainless steels are very resistant to SCC).
Corrosion resistance:
Galvanic Galvanic corrosion (also called "dissimilar-metal corrosion") refers to corrosion damage induced when two dissimilar materials are coupled in a corrosive electrolyte. The most common electrolyte is water, ranging from freshwater to seawater. When a galvanic couple forms, one of the metals in the couple becomes the anode and corrodes faster than it would alone, while the other becomes the cathode and corrodes slower than it would alone. Stainless steel, due to having a more positive electrode potential than for example carbon steel and aluminium, becomes the cathode, accelerating the corrosion of the anodic metal. An example is the corrosion of aluminium rivets fastening stainless steel sheets in contact with water. The relative surface areas of the anode and the cathode are important in determining the rate of corrosion. In the above example, the surface area of the rivets is small compared to that of the stainless steel sheet, resulting in rapid corrosion. However, if stainless steel fasteners are used to assemble aluminium sheets, galvanic corrosion will be much slower because the galvanic current density on the aluminium surface will be many orders of magnitude smaller. A frequent mistake is to assemble stainless steel plates with carbon steel fasteners; whereas using stainless steel to fasten carbon-steel plates is usually acceptable, the reverse is not. Providing electrical insulation between the dissimilar metals, where possible, is effective at preventing this type of corrosion.
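As a rough illustration of the area-ratio effect described above, the sketch below assumes (as a simplification, not from the text) that the galvanic current collected by the cathode is concentrated on the anode, so the anodic current density, and hence the corrosion rate, scales with the cathode-to-anode area ratio; the numeric values are arbitrary.

```python
# Simplified model: anodic current density ~ cathodic current density * (A_cathode / A_anode).
def anodic_current_density(i_cathodic: float, area_cathode: float, area_anode: float) -> float:
    return i_cathodic * (area_cathode / area_anode)

# Small aluminium rivets (anode) fastening a large stainless steel sheet (cathode): rapid attack.
print(anodic_current_density(i_cathodic=0.1, area_cathode=1.0, area_anode=0.001))   # 100.0 A/m^2

# Small stainless fasteners (cathode) in a large aluminium sheet (anode): much slower attack.
print(anodic_current_density(i_cathodic=0.1, area_cathode=0.001, area_anode=1.0))   # 0.0001 A/m^2
```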
Corrosion resistance:
High-temperature At elevated temperatures, all metals react with hot gases. The most common high-temperature gaseous mixture is air, of which oxygen is the most reactive component. To avoid corrosion in air, carbon steel is limited to approximately 480 °C (900 °F). Oxidation resistance in stainless steels increases with additions of chromium, silicon, and aluminium. Small additions of cerium and yttrium increase the adhesion of the oxide layer on the surface. The addition of chromium remains the most common method to increase high-temperature corrosion resistance in stainless steels; chromium reacts with oxygen to form a chromium oxide scale, which reduces oxygen diffusion into the material. The minimum 10.5% chromium in stainless steels provides resistance to approximately 700 °C (1,300 °F), while 16% chromium provides resistance up to approximately 1,200 °C (2,200 °F). Type 304, the most common grade of stainless steel with 18% chromium, is resistant to approximately 870 °C (1,600 °F). Other gases, such as sulfur dioxide, hydrogen sulfide, carbon monoxide, and chlorine, also attack stainless steel. Resistance to other gases depends on the type of gas, the temperature, and the alloying content of the stainless steel. With the addition of up to 5% aluminium, ferritic Fe-Cr-Al grades are designed for electrical resistance and oxidation resistance at elevated temperatures. Such alloys include Kanthal, produced in the form of wire or ribbons.
Standard finishes:
Standard mill finishes can be applied to flat rolled stainless steel directly by the rollers and by mechanical abrasives. Steel is first rolled to size and thickness and then annealed to change the properties of the final material. Any oxidation that forms on the surface (mill scale) is removed by pickling, and a passivation layer is created on the surface. A final finish can then be applied to achieve the desired aesthetic appearance. The following designations are used in the U.S. to describe stainless steel finishes by ASTM A480/A480M-18 (DIN):
No. 0: Hot-rolled, annealed, thicker plates
No. 1 (1D): Hot-rolled, annealed and passivated
No. 2D (2D): Cold rolled, annealed, pickled and passivated
No. 2B (2B): Same as above with an additional pass through highly polished rollers
No. 2BA (2R): Bright annealed (BA or 2R), same as above then bright annealed under oxygen-free atmospheric conditions
No. 3 (G-2G): Coarse abrasive finish applied mechanically
No. 4 (1J-2J): Brushed finish
No. 5: Satin finish
No. 6 (1K-2K): Matte finish (brushed but smoother than No. 4)
No. 7 (1P-2P): Reflective finish
No. 8: Mirror finish
No. 9: Bead blast finish
No. 10: Heat colored finish, offering a wide range of electropolished and heat colored surfaces
Joining:
A wide range of joining processes are available for stainless steels, though welding is by far the most common. The ease of welding largely depends on the type of stainless steel used. Austenitic stainless steels are the easiest to weld by electric arc, with weld properties similar to those of the base metal (not cold-worked). Martensitic stainless steels can also be welded by electric arc but, as the heat-affected zone (HAZ) and the fusion zone (FZ) form martensite upon cooling, precautions must be taken to avoid cracking of the weld. Improper welding practices can additionally cause sugaring (oxide scaling) and/or heat tint on the back side of the weld. This can be prevented with the use of back-purging gases, backing plates, and fluxes. Post-weld heat treatment is almost always required, while preheating before welding is also necessary in some cases. Electric arc welding of type 430 ferritic stainless steel results in grain growth in the HAZ, which leads to brittleness. This has largely been overcome with stabilized ferritic grades, where niobium, titanium, and zirconium form precipitates that prevent grain growth. Duplex stainless steel welding by electric arc is a common practice but requires careful control of the process parameters; otherwise, the precipitation of unwanted intermetallic phases occurs, which reduces the toughness of the welds. Electric arc welding processes include gas metal arc welding (also known as MIG/MAG welding), gas tungsten arc welding (also known as tungsten inert gas or TIG welding), plasma arc welding, flux-cored arc welding, shielded metal arc welding (covered electrode), and submerged arc welding. MIG, MAG, and TIG welding are the most common methods.
Joining:
Other welding processes include stud welding, resistance spot welding, resistance seam welding, flash welding, laser beam welding, and oxy-acetylene welding. Stainless steel may be bonded with adhesives such as silicone, silyl modified polymers, and epoxies. Acrylic and polyurethane adhesives are also used in some situations.
Production:
Most of the world's stainless steel is produced by the following processes: Electric arc furnace (EAF): stainless steel scrap, other ferrous scrap, and ferroalloys (FeCr, FeNi, FeMo, FeSi) are melted together. The molten metal is then poured into a ladle and transferred into the AOD process (see below).
Argon oxygen decarburization (AOD): carbon in the molten steel is removed (by turning it into carbon monoxide gas) and other compositional adjustments are made to achieve the desired chemical composition.
Continuous casting (CC): the molten metal is solidified into slabs for flat products (a typical section is 20 centimetres (7.9 in) thick and 2 metres (6.6 ft) wide) or blooms (sections vary widely but 25 by 25 centimetres (9.8 in × 9.8 in) is the average size).
Hot rolling (HR): slabs and blooms are reheated in a furnace and hot-rolled. Hot rolling reduces the thickness of the slabs to produce about 3 mm (0.12 in)-thick coils. Blooms, on the other hand, are hot-rolled into bars, which are cut into lengths at the exit of the rolling mill, or wire rod, which is coiled.
Production:
Cold finishing (CF) depends on the type of product being finished: Hot-rolled coils are pickled in acid solutions to remove the oxide scale on the surface, then subsequently cold rolled in Sendzimir rolling mills and annealed in a protective atmosphere until the desired thickness and surface finish is obtained. Further operations such as slitting and tube forming can be performed in downstream facilities.
Production:
Hot-rolled bars are straightened, then machined to the required tolerance and finish.
Production:
Wire rod coils are subsequently processed to produce cold-finished bars on drawing benches, fasteners on boltmaking machines, and wire on single or multipass drawing machines. World stainless steel production figures are published yearly by the International Stainless Steel Forum. Of the EU production figures, Italy, Belgium, and Spain were notable, while Canada and Mexico produced none. China, Japan, South Korea, Taiwan, India, the US, and Indonesia were large producers, while Russia reported little production.
Production:
Breakdown of production by stainless steel family in 2017: Austenitic stainless steels Cr-Ni (also called 300-series, see "Grades" section above): 54%. Austenitic stainless steels Cr-Mn (also called 200-series): 21%. Ferritic and martensitic stainless steels (also called 400-series): 23%.
Applications:
Stainless steel is used in a multitude of fields including architecture, art, chemical engineering, food and beverage manufacture, vehicles, medicine, energy and firearms.
Life cycle cost:
Life cycle cost (LCC) calculations are used to select the design and the materials that will lead to the lowest cost over the whole life of a project, such as a building or a bridge. The formula, in a simple form, is the following: LCC = AC + IC + Σ (OC + LP + RC) / (1 + i)^n, summed over the years n = 1 to N, where LCC is the overall life cycle cost, AC is the acquisition cost, IC the installation cost, OC the operating and maintenance costs, LP the cost of lost production due to downtime, and RC the replacement materials cost.
Life cycle cost:
In addition, N is the planned life of the project, i the interest rate, and n the year in which a particular OC or LP or RC takes place. The interest rate (i) is used to convert expenses from different years to their present value (a method widely used by banks and insurance companies) so that they can be added and compared fairly. The usage of the sum formula (Σ) captures the fact that expenses over the lifetime of a project must be cumulated after they are corrected for interest rate (a short numerical sketch follows below). Application of LCC in materials selection Stainless steel used in projects often results in lower LCC values compared to other materials. The higher acquisition cost (AC) of stainless steel components is often offset by improvements in operating and maintenance costs, reduced loss of production (LP) costs, and the higher resale value of stainless steel components. LCC calculations are usually limited to the project itself. However, there may be other costs that a project stakeholder may wish to consider: Utilities, such as power plants, water supply & wastewater treatment, and hospitals, cannot be shut down. Any maintenance will require extra costs associated with continuing service.
Life cycle cost:
Indirect societal costs (with possible political fallout) may be incurred in some situations such as closing or reducing traffic on bridges, creating queues, delays, loss of working hours to the people, and increased pollution by idling vehicles.
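The discounting in the LCC formula above can be illustrated with a minimal sketch; all cost figures and the interest rate below are invented placeholders, not values from the text.

```python
# LCC = AC + IC + sum over years n = 1..N of (OC + LP + RC) / (1 + i)^n
def life_cycle_cost(ac, ic, yearly_costs, i):
    """yearly_costs: list of (OC, LP, RC) tuples, one per year n = 1..N."""
    lcc = ac + ic
    for n, (oc, lp, rc) in enumerate(yearly_costs, start=1):
        lcc += (oc + lp + rc) / (1 + i) ** n   # discount each year's expenses to present value
    return lcc

# Toy example: 3-year life, 5% interest rate, identical yearly operating costs.
print(round(life_cycle_cost(ac=100.0, ic=20.0, yearly_costs=[(5.0, 0.0, 0.0)] * 3, i=0.05), 2))
```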
Sustainability–recycling and reuse:
The average carbon footprint of stainless steel (all grades, all countries) is estimated to be 2.90 kg of CO2 per kg of stainless steel produced, of which 1.92 kg are emissions from raw materials (Cr, Ni, Mo); 0.54 kg from electricity and steam, and 0.44 kg are direct emissions (i.e., by the stainless steel plant). Note that stainless steel produced in countries that use cleaner sources of electricity (such as France, which uses nuclear energy) will have a lower carbon footprint. Ferritics without Ni will have a lower CO2 footprint than austenitics with 8% Ni or more. Carbon footprint must not be the only sustainability-related factor for deciding the choice of materials: Over any product life, maintenance, repairs or early end of life (planned obsolescence) can increase its overall footprint far beyond initial material differences. In addition, loss of service (typically for bridges) may induce large hidden costs, such as queues, wasted fuel, and loss of man-hours.
Sustainability–recycling and reuse:
How much material is used to provide a given service varies with the performance, particularly the strength level, which allows lighter structures and components. Stainless steel is 100% recyclable. An average stainless steel object is composed of about 60% recycled material, of which approximately 40% originates from end-of-life products, while the remaining 60% comes from manufacturing processes. What prevents a higher recycling content is the availability of stainless steel scrap, in spite of a very high recycling rate. According to the International Resource Panel's Metal Stocks in Society report, the per capita stock of stainless steel in use in society is 80 to 180 kg (180 to 400 lb) in more developed countries and 15 kg (33 lb) in less-developed countries. There is a secondary market that recycles usable scrap for many stainless steel markets. The product is mostly coil, sheet, and blanks. This material is purchased at a less-than-prime price and sold to commercial quality stampers and sheet metal houses. The material may have scratches, pits, and dents but is made to the current specifications. The stainless steel cycle starts with carbon steel scrap, primary metals, and slag. The next step is the production of hot-rolled and cold-finished steel products in steel mills. Some scrap is produced, which is directly reused in the melting shop. The manufacturing of components is the third step. Some scrap is produced and enters the recycling loop. Assembly of final goods and their use does not generate any material loss. The fourth step is the collection of stainless steel for recycling at the end of life of the goods (such as kitchenware, pulp and paper plants, or automotive parts). This is where it is most difficult to get stainless steel to enter the recycling loop.
Nanoscale stainless steel:
Stainless steel nanoparticles have been produced in the laboratory. These may have applications as additives for high-performance applications. For example, sulfurization, phosphorization, and nitridation treatments to produce nanoscale stainless steel based catalysts could enhance the electrocatalytic performance of stainless steel for water splitting.
Health effects:
There is extensive research indicating some probable increased risk of cancer (particularly lung cancer) from inhaling fumes while welding stainless steel. Stainless steel welding is suspected of producing carcinogenic fumes from cadmium oxides, nickel, and chromium. According to Cancer Council Australia, "In 2017, all types of welding fumes were classified as a Group 1 carcinogen." Stainless steel is generally considered to be biologically inert. However, during cooking, small amounts of nickel and chromium can leach out of new stainless steel cookware into highly acidic food. Nickel can contribute to cancer risks, particularly lung cancer and nasal cancer. However, no connection between stainless steel cookware and cancer has been established. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wordfilter**
Wordfilter:
A wordfilter (sometimes referred to as just "filter" or "censor") is a script typically used on Internet forums or chat rooms that automatically scans users' posts or comments as they are submitted and automatically changes or censors particular words or phrases.
The most basic wordfilters search only for specific strings of letters, and remove or overwrite them regardless of their context. More advanced wordfilters make some exceptions for context (such as filtering "butt" but not "butter"), and the most advanced wordfilters may use regular expressions.
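As a minimal illustration of the difference between plain string matching and a regular-expression filter with word boundaries, the sketch below uses a placeholder word list; it is not the code of any particular forum package.

```python
import re

BANNED = ["butt"]  # placeholder word list for illustration

def naive_filter(text: str) -> str:
    # Replaces the banned string wherever it appears, even inside "butter".
    for word in BANNED:
        text = text.replace(word, "*" * len(word))
    return text

def boundary_filter(text: str) -> str:
    # Only replaces whole words, so "butter" is left alone.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, BANNED)) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: "*" * len(m.group(0)), text)

print(naive_filter("Pass the butter"))      # Pass the ****er
print(boundary_filter("Pass the butter"))   # Pass the butter
print(boundary_filter("Nice butt"))         # Nice ****
```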
Functions:
Wordfilters can serve any of a number of functions.
Functions:
Removal of vulgar language A swear filter, also known as a profanity filter or language filter, is a software subsystem which modifies text to remove words deemed offensive by the administrator or community of an online forum. Swear filters are common in custom-programmed chat rooms and online video games, primarily MMORPGs. This is not to be confused with content filtering, which is usually built into internet browsing programs by third-party developers to filter or block specific websites or types of websites. Swear filters are usually created or implemented by the developers of the Internet service.
Functions:
Most commonly, wordfilters are used to censor language considered inappropriate by the operators of the forum or chat room. Expletives are typically partially replaced, completely replaced, or replaced by nonsense words. This relieves the administrators or moderators of the task of constantly patrolling the board to watch for such language. This may also help the message board avoid content-control software installed on users' computers or networks, since such software often blocks access to Web pages that contain vulgar language.
Functions:
Filtered phrases may be permanently replaced as they are saved (example: phpBB 1.x), or the original phrase may be saved but displayed as the censored text. In some software, users can view the text behind the wordfilter by quoting the post.
Swear filters typically take advantage of string replacement functions built into the programming language used to create the program, to swap out a list of inappropriate words and phrases with a variety of alternatives. Alternatives can include: Grawlix nonsense characters, such as !@#$%^&*
Replacing a certain letter with a shift-number character or a similar-looking one.
Asterisks (* or #) of either a set length, or the length of the original word being filtered. Alternatively, posters often replace certain letters with an asterisk.
Minced oaths such as "heck" or "darn", or invented words such as "flum".
Family friendly words or phrases, or euphemisms, like "LOVE" or "I LOVE YOU", or completely different words which have nothing to do with the original word.
Deletion of the post. In this case, the entire post is blocked and there is usually no way to fix it.
Functions:
Nothing at all. In this case, the offending word is simply deleted. Some swear filters do a simple search for a string. Others have measures that ignore whitespace, and still others go as far as ignoring all non-alphanumeric characters and then filtering the plain text. This means that if the word "you" was set to be filtered, "y o u" or "y.o!u" would also be filtered.
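The replacement alternatives listed above and the whitespace/punctuation-ignoring match can be sketched as follows; the word list, the grawlix string, and the masking strategy are assumptions for illustration only.

```python
BANNED = ["you"]          # placeholder word list
GRAWLIX = "!@#$%^&*"

def grawlix_for(word: str) -> str:
    # One grawlix character per letter of the filtered word.
    return "".join(GRAWLIX[i % len(GRAWLIX)] for i in range(len(word)))

def mask_ignoring_punctuation(text: str, mask: str = "*") -> str:
    # Build a normalized string of alphanumerics only, remembering original positions.
    positions = [i for i, ch in enumerate(text) if ch.isalnum()]
    normalized = "".join(text[i] for i in positions).lower()
    chars = list(text)
    for word in BANNED:
        start = 0
        while (hit := normalized.find(word, start)) != -1:
            for k in range(hit, hit + len(word)):
                chars[positions[k]] = mask      # mask the matching letters in the original
            start = hit + len(word)
    return "".join(chars)

print(grawlix_for("darn"))                        # !@#$
print(mask_ignoring_punctuation("y.o!u there"))   # *.*!* there
```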
Functions:
Cliché control Clichés—particular words or phrases constantly reused in posts, also known as "memes"—often develop on forums. Some users find that these clichés add to the fun, but other users find them tedious, especially when overused. Administrators may configure the wordfilter to replace the annoying cliché with a more embarrassing phrase, or remove it altogether.
Vandalism control Internet forums are sometimes attacked by vandals who try to fill the forum with repeated nonsense messages, or by spammers who try to insert links to their commercial web sites. The site's wordfilter may be configured to remove the nonsense text used by the vandals, or to remove all links to particular websites from posts.
Functions:
Lameness filter Lameness filters are text-based wordfilters used by Slash-based websites (i.e. Textboards and Imageboards) to stop junk comments from being posted in response to stories. Some of the things they are designed to filter include: too many capital letters; too much repetition; ASCII art; comments which are too short or too long; use of HTML tags that try to break web pages; comment titles consisting solely of "first post"; and any occurrence of a word or term deemed (by the programmers) to be offensive or vulgar. A hedged sketch of the kind of heuristics such a filter might apply is shown below.
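The function name, thresholds, and rules here are invented for illustration and do not reproduce Slash's actual implementation.

```python
def is_lame(comment: str, title: str = "") -> bool:
    letters = [c for c in comment if c.isalpha()]
    if not letters or len(comment) < 10 or len(comment) > 10_000:
        return True                                    # too short or too long
    if sum(c.isupper() for c in letters) / len(letters) > 0.7:
        return True                                    # too many capital letters
    words = comment.lower().split()
    if words and len(set(words)) / len(words) < 0.3:
        return True                                    # too much repetition
    if title.strip().lower() in {"first post", "first post!"}:
        return True                                    # "first post" titles
    return False

print(is_lame("AAAAAAAAAA BBBBBBBBBB"))            # True (all capitals)
print(is_lame("spam spam spam spam spam spam"))    # True (repetition)
print(is_lame("A perfectly ordinary comment."))    # False
```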
Circumventing filters:
Since wordfilters are automated and look only for particular sequences of characters, users aware of the filters will sometimes try to circumvent them by changing their lettering just enough to avoid the filters. A user trying to avoid a vulgarity filter might replace one of the characters in the offending word with an asterisk, dash, or something similar. Some administrators respond by revising the wordfilters to catch common substitutions; others may make filter evasion a punishable offense of its own. A simple example of evading a wordfilter would be entering symbols between letters or using leet. More advanced techniques of wordfilter evasion include the use of images, hidden tags, or Cyrillic characters (i.e. a homograph spoofing attack).
Circumventing filters:
Another method is to use a soft hyphen. A soft hyphen is only used to indicate where a word can be split when breaking text lines and is not otherwise displayed. By placing one halfway through a word, the word is broken up and will in some cases not be recognised by the wordfilter.
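A minimal sketch of stripping the soft hyphen (and other zero-width characters) before matching is shown below; the character set and word list are assumptions for illustration.

```python
INVISIBLE = {"\u00ad", "\u200b", "\u200c", "\u200d"}   # soft hyphen and zero-width characters
BANNED = {"darn"}                                      # placeholder word list

def strip_invisible(text: str) -> str:
    return "".join(ch for ch in text if ch not in INVISIBLE)

def contains_banned(text: str) -> bool:
    cleaned = strip_invisible(text).lower()
    return any(word in cleaned for word in BANNED)

print(contains_banned("da\u00adrn"))   # True: the soft hyphen is removed before matching
print(contains_banned("da rn"))        # False: this simple version does not ignore spaces
```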
Some more advanced filters, such as those in the online game RuneScape, can detect bypassing. However, the downside of sensitive wordfilters is that legitimate phrases get filtered out as well.
Censorship aspects:
Wordfilters are coded into the Internet forums or chat rooms, and operate only on material submitted to the forum or chat room in question. This distinguishes wordfilters from content-control software, which is typically installed on an end user's PC or computer network, and which can filter all Internet content sent to or from the PC or network in question. Since wordfilters alter users' words without their consent, some users still consider them to be censorship, while others consider them an acceptable part of a forum operator's right to control the contents of the forum.
False positives:
A common quirk with wordfilters, often considered either comical or aggravating by users, is that they often affect words that are not intended to be filtered. This is a typical problem when short words are filtered. For example, with the word "ass" censored, one may see, "Do you need istance for playing clical music?" instead of "Do you need assistance for playing classical music?" Multiple words may be filtered if whitespace is ignored, resulting in "as suspected" becoming " uspected". Prohibiting a phrase such as "hard on" will result in filtering innocuous statements such as "That was a hard one!" and "Sorry I was hard on you," into "That was a e!" and "Sorry I was you." Some words that have been filtered accidentally can become replacements for profane words. One example of this is found on the Myst forum Mystcommunity. There, the word 'manuscript' was accidentally censored for containing the word 'anus', which resulted in 'm****cript'. The word was adopted as a replacement swear and carried over when the forum moved, and many substitutes, such as " 'scripting ", are used (though mostly by the older community members).
False positives:
Place names may be filtered out unintentionally due to containing portions of swear words. In the early years of the internet, the British place name Penistone was often filtered out from spam and swear filters.
Implementation:
Many games, such as World of Warcraft, and more recently, Habbo Hotel and RuneScape allow users to turn the filters off. Other games, especially free Massively multiplayer online games, such as Knight Online do not have such an option. Other games such as Medal of Honor and Call of Duty (except Call of Duty: World at War, Call of Duty: Black Ops, Call of Duty: Black Ops 2, and Call of Duty: Black Ops 3) do not give users the option to turn off scripted foul language, while Gears of War does.
Implementation:
In addition to games, profanity filters can be used to moderate user-generated content in forums, blogs, social media apps, kids' websites, and product reviews. There are many profanity filter APIs, such as WebPurify, that help replace swear words with other characters (e.g. "@#$!"). These profanity filter APIs work by searching for profanity and replacing it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Failure of electronic components**
Failure of electronic components:
Electronic components have a wide range of failure modes. These can be classified in various ways, such as by time or cause. Failures can be caused by excess temperature, excess current or voltage, ionizing radiation, mechanical shock, stress or impact, and many other causes. In semiconductor devices, problems in the device package may cause failures due to contamination, mechanical stress of the device, or open or short circuits.
Failure of electronic components:
Failures most commonly occur near the beginning and near the end of the lifetime of the parts, resulting in the bathtub curve graph of failure rates. Burn-in procedures are used to detect early failures. In semiconductor devices, parasitic structures, irrelevant for normal operation, become important in the context of failures; they can be both a source of and a protection against failure.
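As an illustration of the bathtub curve mentioned above, the sketch below models the overall hazard rate as the sum of a decreasing infant-mortality term, a constant random-failure term, and an increasing wear-out term, using Weibull hazard functions; all parameters are invented for illustration.

```python
def weibull_hazard(t: float, shape: float, scale: float) -> float:
    # h(t) = (shape / scale) * (t / scale) ** (shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t: float) -> float:
    infant = weibull_hazard(t, shape=0.5, scale=100.0)     # decreasing early-failure rate
    random = 1e-3                                           # constant useful-life rate
    wearout = weibull_hazard(t, shape=3.0, scale=2000.0)    # increasing wear-out rate
    return infant + random + wearout

for hours in (1, 10, 100, 1000, 3000):
    print(f"{hours:5d} h: {bathtub_hazard(hours):.5f} failures/h")
```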
Failure of electronic components:
Applications such as aerospace systems, life support systems, telecommunications, railway signals, and computers use great numbers of individual electronic components. Analysis of the statistical properties of failures can give guidance in designs to establish a given level of reliability. For example, power-handling ability of a resistor may be greatly derated when applied in high-altitude aircraft to obtain adequate service life.
Failure of electronic components:
A sudden fail-open fault can cause multiple secondary failures if it is fast and the circuit contains an inductance; this causes large voltage spikes, which may exceed 500 volts. A broken metallisation on a chip may thus cause secondary overvoltage damage. Thermal runaway can cause sudden failures including melting, fire or explosions.
Packaging failures:
The majority of electronic parts failures are packaging-related. Packaging, as the barrier between electronic parts and the environment, is very susceptible to environmental factors. Thermal expansion produces mechanical stresses that may cause material fatigue, especially when the thermal expansion coefficients of the materials are different. Humidity and aggressive chemicals can cause corrosion of the packaging materials and leads, potentially breaking them and damaging the inside parts, leading to electrical failure. Exceeding the allowed environmental temperature range can cause overstressing of wire bonds, thus tearing the connections loose, cracking the semiconductor dies, or causing packaging cracks. Humidity and subsequent high temperature heating may also cause cracking, as may mechanical damage or shock.
Packaging failures:
During encapsulation, bonding wires can be severed, shorted, or touch the chip die, usually at the edge. Dies can crack due to mechanical overstress or thermal shock; defects introduced during processing, like scribing, can develop into fractures. Lead frames may contain excessive material or burrs, causing shorts. Ionic contaminants like alkali metals and halogens can migrate from the packaging materials to the semiconductor dies, causing corrosion or parameter deterioration. Glass-metal seals commonly fail by forming radial cracks that originate at the pin-glass interface and permeate outwards; other causes include a weak oxide layer on the interface and poor formation of a glass meniscus around the pin. Various gases may be present in the package cavity, either as impurities trapped during manufacturing, outgassing of the materials used, or products of chemical reactions, as happens when the packaging material gets overheated (the products are often ionic and facilitate corrosion with delayed failure). To detect this, helium is often included in the inert atmosphere inside the packaging as a tracer gas to detect leaks during testing. Carbon dioxide and hydrogen may form from organic materials, moisture is outgassed by polymers, and amine-cured epoxies outgas ammonia. Formation of cracks and intermetallic growth in die attachments may lead to formation of voids and delamination, impairing heat transfer from the chip die to the substrate and heatsink and causing a thermal failure. As some semiconductors like silicon and gallium arsenide are infrared-transparent, infrared microscopy can check the integrity of die bonding and under-die structures. Red phosphorus, used as a charring-promoter flame retardant, facilitates silver migration when present in packaging. It is normally coated with aluminium hydroxide; if the coating is incomplete, the phosphorus particles oxidize to the highly hygroscopic phosphorus pentoxide, which reacts with moisture to form phosphoric acid. This is a corrosive electrolyte that, in the presence of electric fields, facilitates dissolution and migration of silver, short-circuiting adjacent packaging pins, lead frame leads, tie bars, chip mount structures, and chip pads. The silver bridge may be interrupted by thermal expansion of the package; thus, disappearance of the shorting when the chip is heated and its reappearance after cooling is an indication of this problem. Delamination and thermal expansion may move the chip die relative to the packaging, deforming and possibly shorting or cracking the bonding wires.
Contact failures:
Electrical contacts exhibit ubiquitous contact resistance, the magnitude of which is governed by surface structure and the composition of surface layers. Ideally, contact resistance should be low and stable; however, weak contact pressure, mechanical vibration, corrosion, and the formation of passivating oxide layers on the contacts can alter contact resistance significantly, leading to resistance heating and circuit failure.
Contact failures:
Soldered joints can fail in many ways, such as electromigration and the formation of brittle intermetallic layers. Some failures show up only at extreme joint temperatures, hindering troubleshooting. Thermal expansion mismatch between the printed circuit board material and its packaging strains the part-to-board bonds; while leaded parts can absorb the strain by bending, leadless parts rely on the solder to absorb stresses. Thermal cycling may lead to fatigue cracking of the solder joints, especially with elastic solders; various approaches are used to mitigate such incidents. Loose particles, like bonding wire and weld flash, can form in the device cavity and migrate inside the packaging, causing often intermittent and shock-sensitive shorts. Corrosion may cause buildup of oxides and other nonconductive products on the contact surfaces. When closed, these contacts then show unacceptably high resistance; the corrosion products may also migrate and cause shorts. Tin whiskers can form on tin-coated metals like the internal side of the packaging; loose whiskers then can cause intermittent short circuits inside the packaging. Cables, in addition to the methods described above, may fail by fraying and fire damage.
Printed circuit board failures:
Printed circuit boards (PCBs) are vulnerable to environmental influences; for example, the traces are corrosion-prone and may be improperly etched, leaving partial shorts, while the vias may be insufficiently plated through or filled with solder. The traces may crack under mechanical loads, often resulting in unreliable PCB operation. Residues of solder flux may facilitate corrosion; residues of other materials on PCBs can cause electrical leaks. Polar covalent compounds, such as antistatic agents, can attract moisture, forming a thin layer of conductive moisture between the traces; ionic compounds like chlorides tend to facilitate corrosion. Alkali metal ions may migrate through plastic packaging and influence the functioning of semiconductors. Chlorinated hydrocarbon residues may hydrolyze and release corrosive chlorides; these are problems that occur after years. Polar molecules may dissipate high-frequency energy, causing parasitic dielectric losses.
Printed circuit board failures:
Above the glass transition temperature of PCBs, the resin matrix softens and becomes susceptible to contaminant diffusion. For example, polyglycols from the solder flux can enter the board and increase its humidity intake, with corresponding deterioration of dielectric and corrosion properties. Multi-layer substrates using ceramics suffer from many of the same problems.
Printed circuit board failures:
Conductive anodic filaments (CAFs) may grow within the boards along the fibers of the composite material. Metal is introduced to a vulnerable surface, typically from plating the vias, and then migrates in the presence of ions, moisture, and electrical potential; drilling damage and poor glass-resin bonding promote such failures. The formation of CAFs usually begins with poor glass-resin bonding; a layer of adsorbed moisture then provides a channel through which ions and corrosion products migrate. In the presence of chloride ions, the precipitated material is atacamite; its semiconductive properties lead to increased current leakage, deteriorated dielectric strength, and short circuits between traces. Absorbed glycols from flux residues aggravate the problem. The difference in thermal expansion of the fibers and the matrix weakens the bond when the board is soldered; lead-free solders, which require higher soldering temperatures, increase the occurrence of CAFs. Besides this, CAFs depend on absorbed humidity; below a certain threshold, they do not occur. Delamination may occur, separating the board layers, cracking the vias and conductors, and introducing pathways for corrosive contaminants and the migration of conductive species.
Relay failures:
Every time the contacts of an electromechanical relay or contactor are opened or closed, there is a certain amount of contact wear. An electric arc occurs between the contact points (electrodes) both during the transition from closed to open (break) and from open to closed (make). The arc caused during the contact break (break arc) is akin to arc welding, as the break arc is typically more energetic and more destructive. The heat and current of the electrical arc across the contacts creates specific cone-and-crater formations from metal migration. In addition to the physical contact damage, there also appears a coating of carbon and other matter. This degradation drastically limits the overall operating life of a relay or contactor to a range of perhaps 100,000 operations, a level representing 1% or less of the mechanical life expectancy of the same device.
Semiconductor failures:
Many failures result in generation of hot electrons. These are observable under an optical microscope, as they generate near-infrared photons detectable by a CCD camera. Latchups can be observed this way. If visible, the location of the failure may present clues to the nature of the overstress. Liquid crystal coatings can be used for localization of faults: cholesteric liquid crystals are thermochromic and are used for visualising the locations of heat production on the chips, while nematic liquid crystals respond to voltage and are used for visualising current leaks through oxide defects and charge states on the chip surface (particularly logical states). Laser marking of plastic-encapsulated packages may damage the chip if glass spheres in the packaging line up and direct the laser onto the chip. Examples of semiconductor failures relating to semiconductor crystals include: Nucleation and growth of dislocations. This requires an existing defect in the crystal, such as one created by radiation, and is accelerated by heat, high current density and emitted light. With LEDs, gallium arsenide and aluminium gallium arsenide are more susceptible to this than gallium arsenide phosphide and indium phosphide; gallium nitride and indium gallium nitride are insensitive to this defect.
Semiconductor failures:
Accumulation of charge carriers trapped in the gate oxide of MOSFETs. This introduces permanent gate biasing, influencing the transistor's threshold voltage; it may be caused by hot carrier injection, ionizing radiation or nominal use. With EEPROM cells, this is the major factor limiting the number of erase-write cycles.
Migration of charge carriers from floating gates. This limits the lifetime of stored data in EEPROM and flash EPROM structures.
Improper passivation. Corrosion is a significant source of delayed failures; semiconductors, metallic interconnects, and passivation glasses are all susceptible. Semiconductor surfaces subjected to moisture oxidize; the hydrogen liberated in the process attacks deeper layers of the material, yielding volatile hydrides.
Semiconductor failures:
Parameter failures Vias are a common source of unwanted series resistance on chips; defective vias show unacceptably high resistance and therefore increase propagation delays. As their resistivity drops with increasing temperature, a chip whose maximum operating frequency improves rather than degrades as temperature rises is an indicator of such a fault. Mousebites are regions where metallization has a decreased width; such defects usually do not show up during electrical testing but present a major reliability risk. The increased current density in a mousebite can aggravate electromigration problems, although a large degree of voiding is needed to create a temperature-sensitive propagation delay. Sometimes, circuit tolerances can make erratic behaviour difficult to trace; for example, a weak driver transistor, a higher series resistance and a larger capacitance of the gate of the subsequent transistor may each be within tolerance but together can significantly increase signal propagation delay. These delays may manifest only under specific environmental conditions, high clock speeds, low power supply voltages, and sometimes specific circuit signal states; significant variations can occur on a single die. Overstress-induced damage, such as ohmic shunts or reduced transistor output current, can increase such delays, leading to erratic behavior. As propagation delays depend heavily on supply voltage, tolerance-bound fluctuations of the latter can trigger such behavior.
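One rough way to see how several in-tolerance variations can stack into an out-of-spec delay is a first-order RC delay model. The 0.69·R·C formula and every component value below are illustrative assumptions, not figures taken from this text.

```python
# Rough RC-delay illustration of how in-tolerance variations stack up.
# The 0.69*R*C first-order delay model and all component values are assumptions
# chosen for illustration, not figures from the text.

def rc_delay_ps(r_ohm, c_ff):
    """First-order propagation delay (ps) of a driver resistance charging a gate capacitance."""
    return 0.69 * r_ohm * c_ff * 1e-15 * 1e12  # ohms * farads -> seconds -> picoseconds

nominal = rc_delay_ps(r_ohm=2000, c_ff=5)       # nominal driver and load (assumed)
worst   = rc_delay_ps(r_ohm=2000 * 1.2,         # weak driver: +20 %, still "in tolerance"
                      c_ff=5 * 1.2)             # larger gate load: +20 %, still "in tolerance"

print(f"nominal delay: {nominal:.2f} ps, worst-case delay: {worst:.2f} ps "
      f"(+{100 * (worst / nominal - 1):.0f} %)")
```

Two individually acceptable 20 % deviations combine here into a roughly 44 % longer delay, which is the kind of tolerance stacking described above.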
Semiconductor failures:
Gallium arsenide monolithic microwave integrated circuits can have these failures: Degradation of IDSS by gate sinking and hydrogen poisoning. This failure is the most common and easiest to detect, and is caused by reduction of the active channel of the transistor (in gate sinking) and by depletion of the donor density in the active channel (in hydrogen poisoning).
Degradation in gate leakage current. This occurs during accelerated life tests or at high temperatures and is suspected to be caused by surface-state effects.
Degradation in pinch-off voltage. This is a common failure mode for gallium arsenide devices operating at high temperature, and primarily stems from semiconductor-metal interactions and degradation of gate metal structures, with hydrogen being another reason. It can be hindered by a suitable barrier metal between the contacts and gallium arsenide.
Increase in drain-to-source resistance. It is observed in high-temperature devices, and is caused by metal-semiconductor interactions, gate sinking and ohmic contact degradation.
Semiconductor failures:
Metallisation failures Metallisation failures are more common and more serious causes of FET degradation than material processes; amorphous materials have no grain boundaries, hindering interdiffusion and corrosion. Examples of such failures include: Electromigration moving atoms out of active regions, causing dislocations and point defects that act as nonradiative recombination centers and produce heat. This may occur with aluminium gates in MESFETs carrying RF signals, causing erratic drain current; electromigration in this case is called gate sinking. This issue does not occur with gold gates. In structures having aluminium over a refractory metal barrier, electromigration primarily affects the aluminium but not the refractory metal, causing the structure's resistance to increase erratically. Displaced aluminium may cause shorts to neighbouring structures; adding 0.5–4% of copper to the aluminium increases electromigration resistance, as the copper accumulates at the alloy grain boundaries and increases the energy needed to dislodge atoms from them. Indium tin oxide and silver are also subject to electromigration, causing leakage current and (in LEDs) nonradiative recombination along chip edges. In all cases, electromigration can cause changes in dimensions and parameters of the transistor gates and semiconductor junctions.
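Electromigration lifetime is often estimated with Black's equation, an empirical model widely used in reliability engineering; it is not referenced in this text, and the prefactor, current-density exponent and activation energy in the sketch below are placeholder values.

```python
# Black's equation, a commonly used empirical model for electromigration lifetime:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# The prefactor, exponent and activation energy below are placeholder values.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def electromigration_mttf(j_a_cm2, temp_k, a=1e10, n=2.0, ea_ev=0.7):
    """Mean time to failure (arbitrary units) of a metal line at current density j and temperature T."""
    return a * j_a_cm2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# Doubling the current density roughly quarters the lifetime when n = 2:
base = electromigration_mttf(j_a_cm2=1e5, temp_k=358)
hot  = electromigration_mttf(j_a_cm2=2e5, temp_k=358)
print(f"lifetime ratio at 2x current density: {hot / base:.2f}")
```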
Semiconductor failures:
Mechanical stresses, high currents, and corrosive environments causing the formation of whiskers and short circuits. These effects can occur both within the packaging and on circuit boards.
Formation of silicon nodules. Aluminium interconnects may be silicon-doped to saturation during deposition to prevent alloy spikes. During thermal cycling, the silicon atoms may migrate and clump together forming nodules that act as voids, increasing local resistance and lowering device lifetime.
Semiconductor failures:
Ohmic contact degradation between metallisation and semiconductor layers. With gallium arsenide, a layer of gold-germanium alloy (sometimes with nickel) is used to achieve low contact resistance; an ohmic contact is formed by diffusion of germanium, forming a thin, highly n-doped region under the metal facilitating the connection, leaving gold deposited over it. Gallium atoms may migrate through this layer and get scavenged by the gold above, creating a defect-rich gallium-depleted zone under the contact; gold and oxygen then migrate oppositely, resulting in increased resistance of the ohmic contact and depletion of effective doping level. Formation of intermetallic compounds also plays a role in this failure mode.
Semiconductor failures:
Electrical overstress Microscopically, most stress-related semiconductor failures are electrothermal in nature; locally increased temperatures can lead to immediate failure by melting or vaporising metallisation layers, melting the semiconductor, or changing structures. Diffusion and electromigration tend to be accelerated by high temperatures, shortening the lifetime of the device; damage to junctions that does not lead to immediate failure may manifest as altered current–voltage characteristics of the junctions. Electrical overstress failures can be classified as thermally induced, electromigration-related and electric field-related failures; examples of such failures include: Thermal runaway, where defect clusters in the substrate cause localised loss of thermal conductivity, leading to damage that produces more heat; the most common causes are voids caused by incomplete soldering, electromigration effects and Kirkendall voiding. A clustered (non-uniform) distribution of current density over the junction, or current filaments, leads to current crowding and localised hot spots, which may evolve into thermal runaway.
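The temperature acceleration mentioned above is commonly summarised with an Arrhenius acceleration factor. This is a standard reliability-engineering convention rather than anything stated in this text, and the 0.7 eV activation energy used below is an assumed, purely illustrative value.

```python
# Arrhenius acceleration factor, a standard way to express how elevated temperature
# accelerates diffusion- and electromigration-driven degradation:
#   AF = exp(Ea / k * (1 / T_use - 1 / T_stress))
# The 0.7 eV activation energy is an assumed, purely illustrative value.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    """How many times faster degradation proceeds at t_stress_c than at t_use_c (degrees C)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1 / t_use - 1 / t_stress))

# A junction running at 125 degC instead of 55 degC degrades roughly this many times faster:
print(f"acceleration factor: {acceleration_factor(55, 125):.0f}x")
```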
Semiconductor failures:
Reverse bias. Some semiconductor devices are diode junction-based and are nominally rectifiers; however, the reverse-breakdown mode may be at a very low voltage, with a moderate reverse bias voltage causing immediate degradation and vastly accelerated failure. 5 V is a maximum reverse-bias voltage for typical LEDs, with some types having lower figures.
Semiconductor failures:
Shorting of severely overloaded Zener diodes in reverse bias. A sufficiently high voltage causes avalanche breakdown of the Zener junction; this, together with a large current being passed through the diode, causes extreme localised heating, melting the junction and metallisation and forming a silicon-aluminium alloy that shorts the terminals. This is sometimes exploited intentionally as a way of making permanent hard-wired connections.
Semiconductor failures:
Latchups (when the device is subjected to an over- or undervoltage pulse); a parasitic structure acting as a triggered SCR then may cause an overcurrent-based failure. In ICs, latchups are classified as internal (like transmission line reflections and ground bounces) or external (like signals introduced via I/O pins and cosmic rays); external latchups can be triggered by an electrostatic discharge while internal latchups cannot. Latchups can be triggered by charge carriers injected into chip substrate or another latchup; the JEDEC78 standard tests susceptibility to latchups.
Semiconductor failures:
Electrostatic discharge Electrostatic discharge (ESD) is a subclass of electrical overstress and may cause immediate device failure, permanent parameter shifts and latent damage causing an increased degradation rate. It has at least one of three components: localized heat generation, high current density and high electric field gradient; prolonged presence of currents of several amperes transfers enough energy to the device structure to cause damage. ESD in real circuits causes a damped wave with rapidly alternating polarity, with the junctions stressed in the same alternating manner; it has four basic mechanisms: Oxide breakdown, occurring at field strengths above 6–10 MV/cm.
Semiconductor failures:
Junction damage manifesting as reverse-bias leakage increases to the point of shorting.
Metallisation and polysilicon burnout, where damage is limited to metal and polysilicon interconnects, thin film resistors and diffused resistors.
Semiconductor failures:
Charge injection, where hot carriers generated by avalanche breakdown are injected into the oxide layer. Catastrophic ESD failure modes include junction burnout, where a conductive path forms through the junction and shorts it; metallisation burnout, where melting or vaporizing of part of a metal interconnect interrupts it; and oxide punch-through, the formation of a conductive path through the insulating layer between two conductors or semiconductors. The gate oxides are the thinnest and therefore the most sensitive; a transistor damaged this way shows a low-ohmic junction between its gate and drain terminals. A parametric failure only shifts the device parameters and may manifest in stress testing; sometimes, the degree of damage can lessen over time. Latent ESD failure modes occur in a delayed fashion and include: Insulator damage by weakening of the insulator structures.
Semiconductor failures:
Junction damage by lowering minority carrier lifetimes, increasing forward-bias resistance and increasing reverse-bias leakage.
Semiconductor failures:
Metallisation damage by conductor weakening. Catastrophic failures require the highest discharge voltages, are the easiest to test for and are the rarest to occur. Parametric failures occur at intermediate discharge voltages and occur more often, with latent failures the most common; for each parametric failure, there are 4–10 latent ones. Modern VLSI circuits are more ESD-sensitive, with smaller features, lower capacitance and a higher voltage-to-charge ratio. Silicide deposition on the conductive layers makes them more conductive, reducing the ballast resistance that otherwise plays a protective role.
Semiconductor failures:
The gate oxide of some MOSFETs can be damaged by as little as 50 volts of potential; the gate is isolated from the junction, so potential accumulating on it places extreme stress on the thin dielectric layer, and stressed oxide can shatter and fail immediately. The gate oxide does not always fail immediately: its degradation can be accelerated by stress-induced leakage current, with the oxide damage leading to a delayed failure after prolonged hours of operation; on-chip capacitors using oxide or nitride dielectrics are also vulnerable. Smaller structures are more vulnerable because of their lower capacitance, meaning that the same amount of charge charges the capacitor to a higher voltage. All thin layers of dielectrics are vulnerable; hence, chips made by processes employing thicker oxide layers are less vulnerable. Current-induced failures are more common in bipolar junction devices, where Schottky and PN junctions are predominant. The high power of the discharge, above 5 kilowatts for less than a microsecond, can melt and vaporise materials. Thin-film resistors may have their value altered by a discharge path forming across them, or by having part of the thin film vaporized; this can be problematic in precision applications where such values are critical. Newer CMOS output buffers using lightly doped silicide drains are more ESD-sensitive; the N-channel driver usually suffers damage in the oxide layer or the n+/p well junction. This is caused by current crowding during the snapback of the parasitic NPN transistor. In P/NMOS totem-pole structures, the NMOS transistor is almost always the one damaged. The structure of the junction influences its ESD sensitivity; corners and defects can lead to current crowding, reducing the damage threshold. Forward-biased junctions are less sensitive than reverse-biased ones because the Joule heat of a forward-biased junction is dissipated through a thicker layer of the material, compared to the narrow depletion region of a reverse-biased junction.
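The 6–10 MV/cm oxide breakdown range quoted earlier can be combined with V = Q/C to see why smaller (lower-capacitance) structures are more vulnerable. In the sketch below, the injected charge, the two gate capacitances and the 5 nm oxide thickness are assumed, illustrative values rather than figures from this text.

```python
# Why thin gate oxides and small structures are ESD-sensitive:
#   V = Q / C, and the oxide field is E = V / t_ox, to be compared with the
#   6-10 MV/cm breakdown range quoted above. The charge, capacitances and
#   oxide thickness below are assumed, purely illustrative values.

def gate_voltage(charge_c, capacitance_f):
    """Voltage reached when a fixed charge lands on an isolated gate capacitance."""
    return charge_c / capacitance_f

def oxide_field_mv_per_cm(voltage_v, t_ox_nm):
    """Electric field across the gate oxide, in MV/cm."""
    return voltage_v / (t_ox_nm * 1e-7) / 1e6   # nm -> cm, then V/cm -> MV/cm

# The same small transferred charge on a larger versus a smaller gate:
for c_f in (100e-15, 10e-15):                                # 100 fF vs 10 fF gate
    v = gate_voltage(charge_c=50e-15, capacitance_f=c_f)     # 50 fC of injected charge
    e = oxide_field_mv_per_cm(v, t_ox_nm=5)                  # 5 nm gate oxide
    print(f"C = {c_f * 1e15:.0f} fF -> V = {v:.2f} V, field = {e:.1f} MV/cm")
```

With these assumed numbers, the ten-times-smaller gate reaches 10 MV/cm from the same charge, landing inside the breakdown range, while the larger gate stays at 1 MV/cm.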
Passive element failures:
Resistors Resistors can fail open or short, and their value can drift under environmental conditions or when operated outside their performance limits. Examples of resistor failures include: Manufacturing defects causing intermittent problems. For example, improperly crimped caps on carbon or metal resistors can loosen and lose contact, and the resistor-to-cap resistance can change the value of the resistor. Surface-mount resistors delaminating where dissimilar materials join, such as between the ceramic substrate and the resistive layer.
Passive element failures:
Nichrome thin-film resistors in integrated circuits attacked by phosphorus from the passivation glass, corroding them and increasing their resistance.
SMD resistors with silver metallization of contacts suffering open-circuit failure in a sulfur-rich environment, due to buildup of silver sulfide.
Copper dendrites growing from Copper(II) oxide present in some materials (like the layer facilitating adhesion of metallization to a ceramic substrate) and bridging the trimming kerf slot.
Passive element failures:
Potentiometers and trimmers Potentiometers and trimmers are three-terminal electromechanical parts, containing a resistive path with an adjustable wiper contact. Along with the failure modes of ordinary resistors, mechanical wear on the wiper and the resistive layer, corrosion, surface contamination, and mechanical deformations may lead to intermittent changes of the path-wiper resistance, which are a particular problem with audio amplifiers. Many types are not perfectly sealed, with contaminants and moisture entering the part; an especially common contaminant is solder flux. Mechanical deformations (such as an impaired wiper-path contact) can occur through housing warpage during soldering or mechanical stress during mounting. Excess stress on the leads can cause substrate cracking and an open failure when the crack penetrates the resistive path.
Passive element failures:
Capacitors Capacitors are characterized by their capacitance, parasitic series and parallel resistance, breakdown voltage and dissipation factor; both parasitic parameters are often frequency- and voltage-dependent. Structurally, capacitors consist of electrodes separated by a dielectric, connecting leads, and housing; deterioration of any of these may cause parameter shifts or failure. Shorted failures and leakage due to a decrease of the parallel parasitic resistance are the most common failure modes of capacitors, followed by open failures. Some examples of capacitor failures include: Dielectric breakdown due to overvoltage or aging of the dielectric, occurring when the breakdown voltage falls below the operating voltage. Some types of capacitors "self-heal", as internal arcing vaporizes parts of the electrodes around the failed spot. Others form a conductive pathway through the dielectric, leading to shorting or partial loss of dielectric resistance.
Passive element failures:
Electrode materials migrating across the dielectric, forming conductive paths.
Leads separated from the capacitor by rough handling during storage, assembly or operation, leading to an open failure. The failure can occur invisibly inside the packaging but is measurable.
Increase of dissipation factor due to contamination of capacitor materials, particularly from flux and solvent residues.
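The dissipation factor mentioned above is directly tied to the equivalent series resistance (ESR) of the part. The sketch below assumes the standard relation DF = 2πfC·ESR; the capacitance, frequency and ESR figures are made-up example values, not measurements from this text.

```python
# Relation between dissipation factor and equivalent series resistance (ESR):
#   DF = tan(delta) = ESR * 2*pi*f*C
# The capacitance, frequency and ESR values are assumed, illustrative figures.
import math

def dissipation_factor(capacitance_f, freq_hz, esr_ohm):
    """Dissipation factor (dimensionless) of a capacitor at a given frequency."""
    return esr_ohm * 2 * math.pi * freq_hz * capacitance_f

healthy  = dissipation_factor(capacitance_f=100e-6, freq_hz=120, esr_ohm=0.1)
degraded = dissipation_factor(capacitance_f=100e-6, freq_hz=120, esr_ohm=1.5)  # e.g. contamination or aging

print(f"DF healthy:  {healthy:.3f}")
print(f"DF degraded: {degraded:.3f}")
```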
Passive element failures:
Electrolytic capacitors In addition to the problems listed above, electrolytic capacitors suffer from these failures: Aluminium versions having their electrolyte dry out, causing a gradual increase in leakage and equivalent series resistance and a loss of capacitance. Power dissipation by high ripple currents and internal resistances causes an increase of the capacitor's internal temperature beyond specifications, accelerating the deterioration rate; such capacitors usually fail short.
Passive element failures:
Electrolyte contamination (like from moisture) corroding the electrodes, leading to capacitance loss and shorts.
Electrolytes evolving a gas, increasing pressure inside the capacitor housing and sometimes causing an explosion; an example is the capacitor plague.
Tantalum versions being electrically overstressed, permanently degrading the dielectric and sometimes causing open or short failure. Sites that have failed this way are usually visible as a discolored dielectric or as a locally melted anode.
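The ripple-current self-heating feedback described above for aluminium electrolytics can be sketched with P = I²·ESR and a thermal resistance; the ripple current, ESR values and thermal resistance below are assumed example figures, not data from this text.

```python
# Self-heating of an electrolytic capacitor from ripple current:
#   P = I_ripple**2 * ESR,  delta_T = P * R_th
# Ripple current, ESR and thermal resistance are assumed example values.

def core_temp_rise(i_ripple_a, esr_ohm, r_th_k_per_w):
    """Steady-state internal temperature rise (K) caused by ripple-current heating."""
    power_w = i_ripple_a ** 2 * esr_ohm
    return power_w * r_th_k_per_w

# As the electrolyte dries out, ESR rises and the same ripple current heats the part more,
# which in turn accelerates further drying:
for esr in (0.05, 0.5):
    print(f"ESR = {esr} ohm -> temperature rise = {core_temp_rise(2.0, esr, 20):.1f} K")
```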
Passive element failures:
Metal oxide varistors Metal oxide varistors typically have lower resistance as they heat up; if connected directly across a power bus, for protection against voltage spikes, a varistor with a lowered trigger voltage can slide into catastrophic thermal runaway and sometimes a small explosion or fire. To prevent this, the fault current is typically limited by a thermal fuse, circuit breaker, or other current limiting device.
MEMS failures:
Microelectromechanical systems suffer from various types of failures: Stiction causing moving parts to stick; an external impulse sometimes restores functionality. Non-stick coatings, reduction of contact area, and increased awareness mitigate the problem in contemporary systems.
Particles migrating in the system and blocking their movements. Conductive particles may short out circuits like electrostatic actuators. Wear damages the surfaces and releases debris that can be a source of particle contamination.
Fractures causing loss of mechanical parts.
Material fatigue inducing cracks in moving structures.
Dielectric charging leading to changes of functionality and, eventually, parameter failures.
Recreating failure modes:
In order to reduce failures, precise knowledge of bond strength, and of how it is measured during product design and subsequent manufacture, is of vital importance. The best place to start is with the failure mode: it is assumed that there is a particular failure mode, or range of modes, that may occur within a product, and it is therefore reasonable to expect the bond test to replicate the mode or modes of interest. However, exact replication is not always possible. The test load must be applied to some part of the sample and transferred through the sample to the bond; if this part of the sample is the only option and is weaker than the bond itself, the sample will fail before the bond.