**Champions Oncology** Champions Oncology: Champions Oncology is an American technology company that develops mouse avatars. Called TumorGrafts, they are used to test a panel of chemotherapy regimens, targeted therapies and monoclonal antibodies to identify potential therapeutic options for cancer patients. The company was founded in 2007 by David Sidransky, M.D., a Johns Hopkins University oncologist. TumorGrafts: Champions TumorGrafts maintain the microenvironment surrounding the tumor and have been shown to have a high correlation with the patient's tumor. Due to this close resemblance to the human tumor, TumorGrafts are highly predictive of treatment outcomes in patients. Studies have shown the mouse avatars predict clinical benefit in 80% of patients. Approximately 450 TumorGrafts had been established as of April 2014. TumorGrafts are also being used as a pre-clinical research tool to improve clinical drug development. Compared to traditional xenograft models, TumorGrafts have a greater degree of accuracy in predicting the clinical effectiveness of oncology drugs and thus can decrease clinical risk for drug developers. Champions has formed partnerships with multiple drug developers, including Teva and Pfizer. TumorGrafts: The process When a cancer patient undergoes surgery or biopsy, a living sample of the tumor is obtained and implanted into a mouse, creating a mouse avatar. Once the TumorGraft has successfully grown, the tumor is then propagated in a second generation of mice and tested against a panel of cancer drugs and drug combinations to help identify more accurately which treatment regimen is likely to be most effective in a specific patient. In this way, various drugs are tested on a live sample of the actual patient's tumor, rather than on the patient. This reduces the likelihood of treatment with ineffective drugs and their associated side effects, and it increases the likelihood of finding a treatment that will work against the patient's tumor. For patients whose tumors have also undergone molecular testing, such as next-generation sequencing, the selection of potential drugs is further guided by any and all applicable results. In the event the cancer progresses or recurs, Champions also banks, or stores, each successful TumorGraft for potential future patient use.
**Charged Particle Lunar Environment Experiment** Charged Particle Lunar Environment Experiment: The Charged Particle Lunar Environment Experiment (CPLEE), placed on the lunar surface by the Apollo 14 mission as part of the Apollo Lunar Surface Experiments Package (ALSEP), was designed to measure the energy spectra of low-energy charged particles striking the lunar surface. It measured the fluxes of electrons and ions with energies from 40 eV to 20 keV. The primary purpose of the experiment was to examine plasma particles originating from the Sun and the low-energy particle flux in the Earth's magnetic tail. Design: The CPLEE had a mass of 2.7 kg (6.0 lb), a stowed volume of 2540 cubic cm, and used 3.0 W of power normally and 6.0 W at night when the survival heater was on. The main part of the instrumentation consisted of two electrostatic analyzers. One of these (analyzer A) pointed toward local lunar vertical, and the other (analyzer B) to a point 60 deg from vertical toward lunar west. Both detectors had fields of view of 4 x 20 degrees; for analyzer A the long axis of the field of view was oriented N-S, and for analyzer B, E-W. As a first approximation, both detectors could be considered to point in the ecliptic plane. Each analyzer consisted of a set of direction-defining slits, deflection plates, five small-aperture (1 mm nominal) C-shaped channel electron multipliers, one large-aperture (8 mm nominal) helical channel electron multiplier and 6 accumulators. For a given voltage applied to the deflection plates, the five small-aperture multipliers were arranged to count particles of one polarity with differing energies, while the large-aperture multiplier simultaneously made a wide-band measurement of particles of the opposite polarity. During each 19.2-s interval in the automatic mode of experiment operation, deflection voltages of zero (twice, for background and calibration) and plus and minus 35, 350, and 3500 volts were applied to the deflection plates for 2.4 s at each voltage. Each analyzer would make measurements for 1.2 s and transmit while the other analyzer was operating. The little-used manual mode permitted the continuous application of a single deflection voltage, thus increasing temporal resolution for particles in a limited portion of the spectrum. Useful data obtained during each 19.2-s interval (automatic mode) were, for each analyzer, 1.2-s accumulated counts of electrons and ions in 18 energy windows between 40 eV and 20 keV. The windows utilizing all 6 detectors at 35 V are centered roughly at 40, 50, 65, 70, 95, and 200 eV; the windows at 350 V are 10x these values, and at 3500 V, 100x. The instrument was protected by a dust cover with a 63Ni radioactive source on its underside over each aperture for calibration. Design: The instrument was designed by Australian Professor Brian J. O'Brien, who was a professor in the Department of Space Science at Rice University. After he left Rice University in 1968, his postdoctoral student David L. Reasoner (PhD, 1968) took over as PI of the instrument and its data analysis. Two Rice University students earned PhDs analyzing CPLEE data: Frederick J. Rich (PhD, 1973) and Patricia H. Reiff (PhD, 1975). Timelines: The ALSEP central station was located at 3.6440°S, 17.4775°W. The charged particle lunar environment experiment was deployed approximately 3 meters northeast of the central station.
Leveling to 1.7 degrees, tipped to the east, was accomplished with a bubble level and east-west alignment to within 1 degree with a Sun compass. The instrument was deployed at approximately 18:00 UT on 5 February 1971 and commanded on at 19:00 UT for 5 minutes of functional tests. A checkout procedure was conducted on 6 February from 4:00 to 6:10 UT. Following LM ascent on 6 February at 18:49 UT the dust cover was commanded to be removed at 19:30 UT. Timelines: The experiment worked normally from deployment until April 8, 1971, when the power supply for the analyzer pointing away from lunar vertical (analyzer B) failed. The other analyzer continued to function normally until June 6, 1971, when a partial failure of the power supply occurred. Operation of this analyzer was intermittent for the rest of 1971. During most of 1972, operation was continuous during lunar night and intermittent during lunar day because high temperatures caused a low voltage condition in the power supply. From December 1972 to February 1973 operation was continuous, after which time the voltage problems occurred again. The Apollo 14 central station signal was lost on 1 March 1975 and reacquired on 5 March. Loss and reacquisition of signal happened sporadically until termination of the ALSEP experiment. Loss-reacquisition occurred in 1976 on 18 January – 19 February, 17 March – 20 May, 8 June – 10 June, 9 October – 12 November and in 1977 on 30 July – 4 August. The CPLEE experiment was in standby mode when the ALSEP stations were turned off on 30 September 1977.
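As a small illustration of the energy-window arithmetic described in the Design section above (the six centre values and the 10x/100x scaling are the ones quoted there), the 18 windows follow directly from scaling the 35 V windows:

```python
# Energy-window arithmetic from the CPLEE Design section: the six window
# centres measured at a 35 V deflection voltage are scaled by 10x at 350 V
# and by 100x at 3500 V, giving 18 windows between 40 eV and 20 keV.
BASE_WINDOWS_EV = (40, 50, 65, 70, 95, 200)     # centres at 35 V, in eV
SCALE_BY_VOLTAGE = {35: 1, 350: 10, 3500: 100}  # deflection voltage -> scale

windows = {
    voltage: [scale * energy for energy in BASE_WINDOWS_EV]
    for voltage, scale in SCALE_BY_VOLTAGE.items()
}

for voltage, centres in windows.items():
    print(f"{voltage:>4} V: {centres} eV")
# 18 window centres in total, spanning 40 eV up to 20,000 eV (20 keV).
```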
**Auroral kilometric radiation** Auroral kilometric radiation: Auroral kilometric radiation (AKR) is the intense radio radiation emitted in the acceleration zone of the polar lights (at a height of about three times the radius of the Earth). The radiation mainly comes from cyclotron radiation from electrons orbiting around the magnetic field lines of the Earth. The radiation has a frequency of between 50 and 500 kHz and a total power of between about 1 million and 10 million watts. The radiation is absorbed by the ionosphere and therefore can only be measured by satellites positioned at high altitudes, such as the Fast Auroral Snapshot Explorer (FAST). According to data from the Cluster mission, it is beamed out into the cosmos in a narrow plane tangent to the magnetic field at the source. The sound produced by playing AKR over an audio device has been described as "whistles", "chirps", and even "screams". Auroral kilometric radiation: As some other planets emit cyclotron radiation too, the study of AKR could be used to learn more about Jupiter, Saturn, Uranus and Neptune, and to detect extrasolar planets.
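Because the emission is cyclotron radiation, its frequency tracks the local electron cyclotron frequency f_ce = eB / (2π m_e). The sketch below is only a back-of-the-envelope illustration: the field strengths of roughly 2-18 µT are assumed values chosen to show how such fields map onto the quoted 50-500 kHz band; they are not taken from the article.

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def electron_cyclotron_frequency(b_tesla: float) -> float:
    """Electron cyclotron frequency f_ce = e*B / (2*pi*m_e), in hertz."""
    return E_CHARGE * b_tesla / (2 * math.pi * M_ELECTRON)

# Assumed (illustrative) magnetic field strengths in the source region.
for b in (2e-6, 5e-6, 18e-6):  # tesla
    f_khz = electron_cyclotron_frequency(b) / 1e3
    print(f"B = {b * 1e6:4.1f} uT -> f_ce ~ {f_khz:5.0f} kHz")
```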
**Split TEV** Split TEV: The split TEV technique is a molecular method to monitor protein-protein interactions in living cells. It is based on the functional reconstitution of two previously inactive fragments derived from the NIa protease of the tobacco etch virus (TEV protease). These fragments, either an N-terminal (NTEV) or C-terminal part (CTEV), are fused to protein interaction partners of choice. Upon interaction of the two candidate proteins, the NTEV and CTEV fragments are brought into close proximity, regain proteolytic activity, and activate specific TEV reporters, which indicate that a protein-protein interaction has occurred.
**Negobot** Negobot: Negobot, also referred to as Lolita or the Lolita chatbot, is a chatterbot that was introduced to the public in 2013, designed by researchers from the University of Deusto and Optenet to catch online pedophiles. It is a conversational agent that utilizes natural language processing (NLP), information retrieval (IR) and automatic learning. Because the bot poses as a young female in order to entice and track potential predators, it became known in the media as the "virtual Lolita", in reference to Vladimir Nabokov's novel. Background: In 2013, the University of Deusto researchers published a paper on their work with Negobot and disclosed the text online. In their abstract, the researchers addressed the issue that an increasing number of children are using the internet and that these young users are more susceptible to existing internet risks. Their main objective was to create a chatterbot with the ability to trap online predators that posed a threat to children. They intended to deploy the bot into sites frequented by predators such as social networks and chatrooms. The university researchers used information provided by the anti-pedophilia activist organization Perverted-Justice, including examples of online encounters and conversations with sexual predators, to supplement the program's artificial intelligence system. Features: Programmed persona The chatterbot takes the guise of a naive and vulnerable 14-year-old girl. The bot's programmers used methods of artificial intelligence and natural language processing to create a conversational agent fluent in typical teenage slang, misspellings, and knowledge of pop culture. Through these linguistic features, the bot is able to mimic the conversational style of young teenagers. It also features split personalities and seven different patterns of conversation. Negobot's primary creator, Dr. Carlos Laorden, emphasized the significance of the bot's distinguishable style of communication, stating that normally, "chatbots tend to be very predictable. Their behavior and interest in a conversation are flat, which is a problem when attempting to detect untrustworthy targets like paedophiles." What makes Negobot different is its game theory feature, which makes it able to "maintain a much more realistic conversation." Apart from being able to imitate a stereotypical teenager, the program is also able to translate messages into different languages. Features: Game theory Negobot's designers programmed it with the ability to treat conversations with potential predators as a game, the objective being to collect as much information on the suspect as possible that could provide evidence of pedophilic characteristics and motives. The use of game theory shapes the decisions the bot makes and the overall direction of the conversation. The bot initiates its undercover operations by entering a chat as a passive participant, waiting to be approached by a user. Once a user initiates conversation, the bot frames the conversation in such a way as to keep the target engaged, extracting personal information and discouraging the target from leaving the chat. The information is then recorded to be potentially sent to the police. If the target seems to lose interest, the bot attempts to make them feel guilty by expressing sentiments of loneliness and emotional need through strategic, formulated responses, ultimately prolonging the interaction. In addition, the bot may provide fake information about itself in an attempt to lure the target into physical meetings.
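The published paper's exact decision model is not reproduced in this article; the sketch below is only a hypothetical illustration of the kind of game-driven policy described above, in which estimates of the target's suspicion and engagement decide whether the bot stays passive, extracts information, or plays on guilt to keep a waning conversation alive. All names, fields and thresholds are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    suspicion: float   # estimated likelihood the user is a predator (0..1)
    engagement: float  # how interested the user currently seems (0..1)
    evidence: list = field(default_factory=list)  # logged messages for possible referral

def choose_strategy(state: ConversationState) -> str:
    """Pick a conversational strategy from the current game state.

    Hypothetical policy loosely mirroring the behaviour described above:
    stay passive at first, extract information while the target is engaged,
    and express loneliness to re-engage a target who is losing interest.
    """
    if state.suspicion < 0.2:
        return "passive"              # neutral replies; wait to be approached
    if state.engagement < 0.3:
        return "guilt"                # express loneliness and emotional need
    return "extract_information"      # ask questions that elicit personal details

# Example: a suspicious but disengaging user triggers the guilt strategy.
print(choose_strategy(ConversationState(suspicion=0.7, engagement=0.2)))
```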
Features: Limitations Despite being able to carry out a realistic conversation, Negobot is still unable to detect linguistic subtleties in the messages of others, including sarcasm. Controversy: John Carr, a specialist in online child safety, expressed his concern to the BBC over the legality of this undercover investigation. He claimed that using the bot on unsuspecting internet users could be considered a form of entrapment or harassment. The type of information that Negobot collects from potential online predators, he said, is unlikely to stand up in court. Furthermore, he warned that relying on software alone, without any real-world policing, risks enticing individuals into doing or saying things that they would not otherwise have done.
**LINPACK benchmarks** LINPACK benchmarks: The LINPACK Benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n by n system of linear equations Ax = b, which is a common task in engineering. LINPACK benchmarks: The latest version of these benchmarks is used to build the TOP500 list, ranking the world's most powerful supercomputers. The aim is to approximate how fast a computer will perform when solving real problems. It is a simplification, since no single computational task can reflect the overall performance of a computer system. Nevertheless, the LINPACK benchmark performance can provide a useful corrective to the peak performance figure provided by the manufacturer. The peak performance is the maximal theoretical performance a computer can achieve, calculated as the machine's frequency, in cycles per second, times the number of operations per cycle it can perform. The actual performance will always be lower than the peak performance. The performance of a computer is a complex issue that depends on many interconnected variables. The performance measured by the LINPACK benchmark consists of the number of 64-bit floating-point operations, generally additions and multiplications, a computer can perform per second, also known as FLOPS. However, a computer's performance when running actual applications is likely to be far behind the maximal performance it achieves running the appropriate LINPACK benchmark. The name of these benchmarks comes from the LINPACK package, a collection of linear algebra Fortran subroutines widely used in the 1980s and initially tightly linked to the LINPACK benchmark. The LINPACK package has since been replaced by other libraries. History: The LINPACK benchmark report first appeared in 1979 as an appendix to the LINPACK user's manual. LINPACK was designed to help users estimate the time required by their systems to solve a problem using the LINPACK package, by extrapolating the performance results obtained by 23 different computers solving a matrix problem of size 100. History: This matrix size was chosen due to memory and CPU limitations at that time: 10,000 floating-point entries from -1 to 1 are randomly generated to fill a general, dense matrix; then LU decomposition with partial pivoting is used for the timing. Over the years, additional versions with different problem sizes, like matrices of order 300 and 1000, and constraints were released, allowing new optimization opportunities as hardware architectures started to implement matrix-vector and matrix-matrix operations. Parallel processing was also introduced in the LINPACK Parallel benchmark in the late 1980s. In 1991, LINPACK was modified to solve problems of arbitrary size, enabling high performance computers (HPC) to get near to their asymptotic performance. History: Two years later this benchmark was used for measuring the performance of the first TOP500 list. The benchmarks: LINPACK 100 LINPACK 100 is very similar to the original benchmark published in 1979 along with the LINPACK users' manual. The benchmarks: The solution is obtained by Gaussian elimination with partial pivoting, with 2/3n³ + 2n² floating-point operations, where n is 100, the order of the dense matrix A that defines the problem. Its small size and the lack of software flexibility don't allow most modern computers to reach their performance limits.
However, it can still be useful to predict performance of numerically intensive user-written code using compiler optimization. The benchmarks: LINPACK 1000 LINPACK 1000 can provide a performance nearer to the machine's limit because, in addition to offering a bigger problem size, a matrix of order 1000, changes in the algorithm are possible. The only constraints are that the relative accuracy can't be reduced and the number of operations will always be considered to be 2/3n³ + 2n², with n = 1000. The benchmarks: HPLinpack The previous benchmarks are not suitable for testing parallel computers, so the so-called Linpack Highly Parallel Computing benchmark, or HPLinpack benchmark, was introduced. In HPLinpack the size n of the problem can be made as large as needed to optimize the performance results of the machine. Once again, 2/3n³ + 2n² is taken as the operation count, regardless of the algorithm used. Use of the Strassen algorithm is not allowed because it distorts the real execution rate. The benchmarks: The accuracy must be such that the following expression is satisfied: ‖Ax − b‖ / (‖A‖ ‖x‖ n ϵ) ≤ O(1), where ϵ is the machine precision, n is the size of the problem, ‖⋅‖ is the matrix norm and O(1) corresponds to the big-O notation. For each computer system, the following quantities are reported: Rmax: the performance in GFLOPS for the largest problem run on a machine. Nmax: the size of the largest problem run on a machine. N1/2: the size where half the Rmax execution rate is achieved. Rpeak: the theoretical peak performance in GFLOPS for the machine. These results are used to compile the TOP500 list twice a year, listing the world's most powerful computers. TOP500 measures these in double-precision floating-point format (FP64). LINPACK benchmark implementations: The previous section describes the ground rules for the benchmarks. The actual implementation of the program can diverge, with some examples being available in Fortran, C or Java. LINPACK benchmark implementations: HPL HPL is a portable implementation of HPLinpack that was written in C, originally as a guideline, but it is now widely used to provide data for the TOP500 list, though other technologies and packages can be used. HPL generates a linear system of equations of order n and solves it using LU decomposition with partial row pivoting. It requires installed implementations of MPI and either BLAS or VSIPL to run. Coarsely, the algorithm has the following characteristics: cyclic data distribution in 2D blocks; LU factorization using the right-looking variant with various depths of look-ahead; recursive panel factorization; six different panel broadcasting variants; a bandwidth-reducing swap-broadcast algorithm; and backward substitution with look-ahead of depth 1. Criticism: The LINPACK benchmark is said to have succeeded because of the scalability of HPLinpack, the fact that it generates a single number, making the results easily comparable, and the extensive historical database associated with it.
Criticism: However, soon after its release, the LINPACK benchmark was criticized for providing performance levels "generally unobtainable by all but a very few programmers who tediously optimize their code for that machine and that machine alone", because it only tests the solution of dense linear systems, which are not representative of all the operations usually performed in scientific computing. Jack Dongarra, the main driving force behind the LINPACK benchmarks, said that, while they only emphasize "peak" CPU speed and number of CPUs, not enough stress is given to local bandwidth and the network. Thom Dunning, Jr., director of the National Center for Supercomputing Applications, had this to say about the LINPACK benchmark: "The Linpack benchmark is one of those interesting phenomena -- almost anyone who knows about it will deride its utility. They understand its limitations but it has mindshare because it's the one number we've all bought into over the years." According to Dongarra, "the organizers of the Top500 are actively looking to expand the scope of the benchmark reporting" because "it is important to include more performance characteristic and signatures for a given system". Criticism: One of the possibilities being considered to extend the benchmark for the TOP500 is the HPC Challenge Benchmark Suite. With the advent of petascale computers, traversed edges per second have started to emerge as a complementary metric to the FLOPS measured by LINPACK. Another such metric is the HPCG benchmark, proposed by Dongarra. The running time issue: According to Jack Dongarra, the running time required to obtain good performance results with HPLinpack is expected to increase. At a conference held in 2010, he said he expects running times of 2.5 days in "a few years".
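As a rough, single-node illustration of the quantities HPLinpack reports (this is not the HPL code itself), the sketch below times a dense solve with NumPy's LU-based solver, converts the conventional 2/3n³ + 2n² operation count into GFLOPS, and evaluates a scaled residual of the form ‖Ax − b‖ / (‖A‖ ‖x‖ n ϵ). The problem size and the choice of norms here are arbitrary; real HPL distributes the factorization over MPI ranks and tuned BLAS kernels.

```python
import time
import numpy as np

def linpack_style_run(n: int = 2000, seed: int = 0):
    """Time a dense solve and report GFLOPS plus the scaled residual."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-1.0, 1.0, size=(n, n))
    b = rng.uniform(-1.0, 1.0, size=n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)        # LU decomposition with partial pivoting
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # conventional operation count
    gflops = flops / elapsed / 1e9

    eps = np.finfo(a.dtype).eps
    residual = np.linalg.norm(a @ x - b, np.inf) / (
        np.linalg.norm(a, np.inf) * np.linalg.norm(x, np.inf) * n * eps
    )
    return gflops, residual

if __name__ == "__main__":
    gflops, residual = linpack_style_run()
    print(f"performance ~ {gflops:.1f} GFLOPS, scaled residual = {residual:.3f}")
```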
**Syntrophin, alpha 1** Syntrophin, alpha 1: Alpha-1-syntrophin is a protein that in humans is encoded by the SNTA1 gene. Alpha-1 syntrophin is a signal transducing adaptor protein and serves as a scaffold for various signaling molecules. Alpha-1 syntrophin contains a PDZ domain, two Pleckstrin homology domains and a 'syntrophin unique' domain. Function: Dystrophin is a large, rod-like cytoskeletal protein found at the inner surface of muscle fibers. Dystrophin is missing in Duchenne Muscular Dystrophy patients and is present in reduced amounts in Becker Muscular Dystrophy patients. The protein encoded by this gene is a peripheral membrane protein found associated with dystrophin and dystrophin-related proteins. This gene is a member of the syntrophin gene family, which contains at least two other structurally related genes. The PDZ domain of syntrophin-α1 (SNTA1), the most abundant isoform in the heart, has been reported to bind to the C-terminal domain of murine cardiac voltage-gated sodium channels (SkM2), altering ion channel activity and leading to Long QT syndrome. Interactions: Syntrophin, alpha 1 has been shown to interact with Dystrophin, Nav1.1 and Nav1.5, and Aquaporin 4.
**Extramammary Paget's disease** Extramammary Paget's disease: Extramammary Paget's Disease (EMPD) is a rare and slow-growing malignancy which occurs within the epithelium and accounts for 6.5% of all Paget's disease. The clinical presentation of this disease is similar to the characteristics of mammary Paget's disease (MPD). However, unlike MPD, which occurs in large lactiferous ducts and then extends into the epidermis, EMPD originates in glandular regions rich in apocrine secretions outside the mammary glands. EMPD incidence is increasing by 3.2% every year, affecting hormonally-targeted tissues such as the vulva and scrotum. In women, 81.3% of EMPD cases are related to the vulva, while for men, 43.2% of the manifestations present at the scrotum. The disease can be classified as being either primary or secondary depending on the presence or absence of associated malignancies. EMPD presents with typical symptoms such as scaly, erythematous, eczematous lesions accompanied by itchiness. In addition, 10% of patients are asymptomatic. As a consequence, EMPD has high rates of misdiagnoses and delayed diagnoses. There are a variety of treatment options available, but most are unsuccessful. If caught early and treated, prognosis is generally good. Presentation: Patients with EMPD present with typical symptoms, similar to MPD, such as severe itchiness (also called pruritus), rash, plaque formation, burning sensation, pain and tenderness. These symptoms are often confused for dermatitis or eczema. 10% of patients are asymptomatic, resulting in delayed diagnosis. In rare cases bleeding can also be seen. Presentation: Disease of the vulva Vulvar Paget's disease affects women and presents as erythematous (red), eczematous lesions. It is itchy and sometimes pain can be associated with the affected area. The lesion is clearly separated from normal skin in most cases, and sometimes scattered areas of white scale can be present, giving a "strawberries and cream" appearance. Involvement may be extensive, including the perianal region, genitocrural, and inguinal folds. Clinical examination should determine the presence of periurethral and perianal lesions. In these cases an involvement of the skin by a noncutaneous internal neoplasm may occur. Pathophysiology: EMPD occurs due to an invasion of the epidermis by Paget cells. The cause of the disease is still under debate, with recent research indicating that the disease may be associated with Toker cells. Disease of the vulva Originates from local organs such as the Bartholin gland, the urethra, or the rectum. Predilection towards postmenopausal women. Metastatic disease Metastasis of Paget cells from the epidermis to distant regions is a multistep process that involves: invasion of local lymph nodes and the venous system; movement out from the lymph nodes and venous system; and proliferation at the new site. Protein molecules HER2 and mTOR expressed in Paget cells are responsible for providing characteristics of proliferation and survival. Diagnosis: Due to the rarity of EMPD and lack of clinical knowledge, the disease is not very commonly diagnosed. Patients are often misdiagnosed with eczema or dermatitis, and a delay of 2 years is expected from the onset of symptoms before a definitive diagnosis has been reached. It is important to determine whether the lesion is associated with another cancer. A biopsy will establish the diagnosis.
Punch biopsies are not effective in differentially diagnosing EMPD, and as a result, excisional biopsies of the affected area are taken [XX]. A positive test result for EMPD shows increased numbers of large polygonal cells with a pale bluish cytoplasm, large nucleus and nucleolus, infiltrating the epidermal layer. These neoplastic cells can be found singly scattered or can appear in groups called nests. Paget cells contain mucin and cytokeratins, which can be used in the diagnosis of EMPD [8]. MUC5A2 is found in EMPD of the vulvar and male genitalia regions, whereas MUC2 is expressed in perianal EMPD. Loss of MUC5A2 can indicate an invasive spread. Immunohistochemistry (IHC) can be used to determine whether EMPD is either primary or secondary. Primary EMPD tests positive for CK7 but negative for CK20, whereas secondary is positive for both. There is a lack of positivity for hormone receptors, and the HER2 protein is overexpressed, meaning that the cells are dividing rapidly, which can indicate an aggressive and more recurrent disease. Diagnosis: Classification Primary EMPD is of cutaneous origin and is found within the epidermis or the underlying apocrine glands. Although it is limited to the epithelium, it has the potential to spread and progress into an invasive tumour, metastasising to the local lymph nodes and distant organs. This form of EMPD is not associated with an adenocarcinoma. The secondary form results from an underlying adenocarcinoma spreading to the epidermis. Similar to the primary form, if secondary EMPD invades the dermis, the neoplastic cells can metastasise to the lymph nodes and, in some cases, the dermis. According to the Wilkinson and Brown subclassification system, there are three subtypes for each classification. Treatment: Many chemotherapy treatments have been used; however, the results are not desirable, as prognosis remains poor. Surgery remains the preferred treatment of choice. Wide local excision with a 3 cm margin and intra-operative frozen sections are suggested, due to the high risk of local extension despite normal-appearing tissue. In cases where Paget cells have invaded the dermis and metastasized, complete removal is often unsuccessful. Recurrence is a common result. Lymphadenectomy is often performed for infiltrative cases. In lieu of surgery, radiotherapy is also an option and is especially preferred for elderly patients or for inoperable cases where the tumour size is too large. This form of treatment is also considered as possible adjuvant therapy following excision to combat the high recurrence rate. However, there are side effects of radiotherapy, including but not limited to: vulvitis, post-radiation atrophy of mucous membranes, vaginal stenosis and sexual dysfunction. Laser therapy and photodynamic therapies were also used in the past, but it was discovered the carbon dioxide laser did not penetrate deep enough, and both treatment modalities resulted in high recurrence rates. Topical chemotherapy treatments are effective, with imiquimod showing promising results. However, overall survival begins to decline 10 months following treatment with chemotherapy. Patients with metastatic EMPD only survive for a median of 1.5 years, and have a 7% 5-year survival rate. Prognosis: Prognosis is generally good, but factors such as depth of invasion and duration of disease need to be considered.
In primary EMPD, if invasion of the underlying tissue is non-existent or even minimal, treatment options are more likely to be effective; however, if there are signs that the disease has metastasised, the prognosis is usually poor. Epidemiology: EMPD is most prevalent in Caucasian women and Asian men over the age of 60. The invasive form occurs in 5–25% of all EMPD patients, and 17–30% of the cases involve an underlying adenocarcinoma. 10–20% of EMPD is of the secondary form. Approximately 10% of patients develop invasive adenocarcinoma that may progress to metastatic disease. The disease affects regions that are rich in apocrine secretions. 65% of EMPD occurs at the vulva, followed by 15% at perianal areas and 14% at the male genitalia. Within the vulva, the labia majora is the site that is most often involved, followed by the labia minora, clitoris and perineum. EMPD originating at the vulva can spread to the upper vaginal mucosa and cervix. Other areas where EMPD can be found, although very rarely, include: the axillae, eyelids, external auditory canal, umbilical region, trunk and extremities. History: The first case of Paget's disease was described by James Paget in 1874. Radcliffe Crocker, in 1889, then described EMPD following an observation of a patient with urinary carcinoma affecting the penis and scrotum, showing symptoms that were almost identical to those of MPD as described by Paget. Later, in 1893, Darier and Coulillaud described the perianal location of EMPD.
**Vapour density** Vapour density: Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as the mass of a certain volume of a substance divided by the mass of the same volume of hydrogen:
vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas
vapour density = molar mass of gas / molar mass of H2
vapour density = molar mass of gas / 2.016
vapour density = 1⁄2 × molar mass (and thus: molar mass ≈ 2 × vapour density)
For example, the vapour density of a mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity. Alternative definition: In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of 2 in relation to air. That means acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of acetone, 58.1, by that of air, 28.97, which equals approximately 2. Alternative definition: With this definition, the vapour density indicates whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety—if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space.
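A minimal sketch of the two conventions above (the molar mass values are the ones quoted in the text: 58.1 g/mol for acetone, 28.97 g/mol for air, 2.016 g/mol for H2):

```python
# Vapour density from molar mass, using the two conventions described above.
MOLAR_MASS_H2 = 2.016    # g/mol
MOLAR_MASS_AIR = 28.97   # g/mol, average value quoted in the article

def vapour_density(molar_mass: float, reference: float = MOLAR_MASS_H2) -> float:
    """Dimensionless vapour density: molar mass of the gas over that of the reference."""
    return molar_mass / reference

acetone = 58.1  # g/mol
print(vapour_density(acetone))                  # ~28.8, relative to hydrogen
print(vapour_density(acetone, MOLAR_MASS_AIR))  # ~2.0, relative to air
```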
**Dioleoyl-3-trimethylammonium propane** Dioleoyl-3-trimethylammonium propane: 1,2-Dioleoyl-3-trimethylammonium propane (often abbreviated DOTAP or 18:1 TAP) is a di-chain, or gemini, cationic surfactant. It is most commonly encountered as an active ingredient in certain fabric softeners. The pure material can also be used for the liposomal transfection of DNA, RNA and other negatively charged molecules. Synthesis: The commercial material used for fabric softening is formed by the di-esterification of 2,3-epoxypropyltrimethylammonium chloride (EPTAC) with partially hydrogenated palm oil and as such contains a mixture of fatty acid tails: palmitic (saturated C16), stearic (saturated C18), oleic (monounsaturated C18) and linoleic (polyunsaturated C18). In practice the saturated distearate compound tends to be the major component of these mixtures. Material intended for transfection is prepared similarly from high-purity oleic acid. Applications: Fabric softener It was originally introduced into European markets during the 1990s due to concerns over the environmental effects of DODAC, which was the principal softener used at the time. The main difference was the incorporation of cleavable ester groups intended to accelerate its biodegradation. It is a superior softener to di- and triethanolamine based softeners but suffers from an increased tendency to hydrolyse. Small patch test studies have not shown any clear evidence of it acting as a skin irritant. Applications: Liposomal transfection agent DOTAP is a cationic surfactant and is able to form stable cationic liposomes in solution; these readily absorb DNA and other negatively charged organic compounds. The DNA-laden liposomes can then be added directly to cell culture medium, where they will combine with the cell membrane and release their payload into the cell.
**Sensacell** Sensacell: Sensacell is an interactive interface technology developed by the Sensacell Corporation. A Sensacell surface functions as an interactive touchscreen display, but on a large-scale framework. Individual tile-like modules—each containing LED (light-emitting diode) lighting and capacitive sensors—are connected in an open-ended array. Because the sensors can read through solid materials, a constructed surface essentially functions as a multi-touch touchscreen, but with additional capabilities due to the nature of the capacitive sensors used in the tiles. The sensing electrodes can detect, without physical contact, persons and objects moving in proximity to the surface, to a distance of 150 mm. The ability to detect proximity provides a third variable of user input. A traditional touchscreen collects information on the two-dimensional plane of the surface itself; a "touch" or other input is translated into x-axis and y-axis coordinates on a Cartesian grid. Sensacell surfaces can track the relative distance of an object, adding a three-dimensional, or z-axis, coordinate that can be captured and processed. The technology was developed by Leo Fernekes and architect Joakim Hannerz in 2004.
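Sensacell's actual interface is not documented in this article; as a purely hypothetical sketch of the input model described above, a proximity-aware event would simply carry a z (distance) value alongside the usual x/y grid coordinates. All names and fields below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProximityEvent:
    """Hypothetical input event from a proximity-sensing LED surface."""
    x: int        # column of the sensing module in the array
    y: int        # row of the sensing module in the array
    z_mm: float   # estimated distance of the hand/object, 0-150 mm

def brightness_for(event: ProximityEvent) -> float:
    """Map proximity to an LED brightness level: closer objects light up more."""
    return max(0.0, 1.0 - event.z_mm / 150.0)

print(brightness_for(ProximityEvent(x=3, y=7, z_mm=40.0)))  # ~0.73
```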
**Sega Smash Pack** Sega Smash Pack: Sega Smash Pack (Sega Archives from USA in Japan) is a series of game compilations featuring mostly Sega Genesis games. Pack 1 (Windows): The first pack, titled Sega Smash Pack (Sega Archives from USA Vol. 1 in Japan), featured eight games: Altered Beast, Columns, Golden Axe, Out Run, Phantasy Star II, Sonic Spinball, The Revenge of Shinobi and Vectorman. Pack 2 (Windows): The second pack, titled Sega Puzzle Pack (Sega Archives from USA Vol. 2 in Japan), featured three games: Columns III, Dr. Robotnik's Mean Bean Machine and Lose Your Marbles. Pack 3 (Windows): The third pack, titled Sega Smash Pack 2 (Sega Archives from USA Vol. 3 in Japan), featured eight games: Comix Zone, Flicky, Kid Chameleon, Sega Swirl, Shining Force, Sonic the Hedgehog 2, Super Hang-On and Vectorman 2. Console (Dreamcast): The console version of Sega Smash Pack was released for Dreamcast, titled Sega Smash Pack Volume 1, and featured the following twelve games: Altered Beast, Columns, Golden Axe, Phantasy Star II, The Revenge of Shinobi, Sega Swirl, Shining Force, Sonic the Hedgehog, Streets of Rage 2, Vectorman, Virtua Cop 2 and Wrestle War. Jeff Gerstmann from GameSpot gave the console version a 4.5/10. He criticised the console version for its patchy performance and poorly emulated music. The Genesis emulator built inside the compilation gained popularity with homebrew groups, as Echelon released a kit that allowed users to add and load their own Genesis ROMs. Gary Lake, the programmer, had deliberately left documentation of the built-in emulator, seemingly intended for them given the filename (ECHELON.TXT). Additionally, Sega Swirl and Virtua Cop 2 were the only non-Genesis games in the compilation. Handheld (Game Boy Advance): The handheld version of Sega Smash Pack was released for Game Boy Advance, simply titled Sega Smash Pack, and featured three games, two of which had been included in the first Smash Pack: Ecco the Dolphin, Golden Axe and Sonic Spinball. While Ecco the Dolphin and Sonic Spinball were developed using the original source code, Golden Axe had to be recreated from scratch. Craig Harris from IGN gave the handheld version a 6/10. He criticised the handheld version for several technical issues and the lack of cooperative multiplayer in Golden Axe. It was nominated for GameSpot's annual "Most Disappointing Game on Game Boy Advance" award, which went to The Revenge of Shinobi.
**EVM Pilot Project** EVM Pilot Project: The EVM Pilot Project is an in-progress Electronic Voting Machine initiative for the forthcoming general elections in Pakistan, which would also give the right to vote to Pakistanis living abroad. The ruling party has already passed the bill in the National Assembly on the basis of its majority in a joint session. Controversy: Opposition parties have stated they will not run in the elections, saying the use of electronic voting machines is tantamount to rigging the upcoming elections. The Election Commission has also expressed its concerns over the use of this machine and has submitted 37 points in this regard in writing to the Standing Committee.
**Low-ball** Low-ball: The low-ball is a persuasion, negotiation, and selling technique. Overview: By buyers When used by a buyer, the low-ball is an offer for goods or services far lower than the price the buyer is willing to pay, made in the hope that the seller will at least counter-offer a price lower than the original asking price. Sellers looking to maximize profit but expecting would-be buyers to haggle may conversely make a "high-ball" offer and/or asking price. Overview: By sellers When a seller makes a low-ball offer, an item or service is offered at a lower price than is actually needed for the desired profit margin to be realized. The seller makes the offer with the intent of quickly raising the price in order to increase profits and/or with the intent of selling would-be buyers additional, more profitable products and services. An explanation for the effect is provided by cognitive dissonance theory. If a person is already enjoying the prospect of an excellent deal and the future benefits of the item or idea, then backing out would create cognitive dissonance, which is prevented by playing down the negative effect of the "extra" costs. Overview: The converse offer from a buyer, a "high-ball" offer, is an offer at a price the buyer hopes is not quickly accepted, made with the intention of being replaced with a reduced price to pressure a reluctant seller. By taxpayers Low-balling is also a form of tax evasion where a filer misrepresents the amount of taxable income on a tax return. It is most common in situations where the tax authorities reasonably expect taxable income to exist but cannot, without the taxpayer's cooperation, independently determine the amount for want of any reliable paper trail and/or other documentation. Overview: For example, most jurisdictions legally require taxpayers to report gratuities and pay taxes on the full amount. However, if a taxpayer receives all of his or her gratuities in cash, (s)he may low-ball on his or her tax return by declaring only a portion of the gratuities received. Unless the taxpayer has failed to disclose anything at all (or declared an unrealistically low figure), tax authorities and the governments they serve, lacking reliable documentation to prove their suspicions, face a dilemma – they can either choose not to pursue their suspicions or they can employ highly subjective and/or arbitrary enforcement methods (such as so-called "lifestyle audits") to provide a legal basis for their claims. Either approach carries the risk of damaging public confidence in the integrity and/or fairness of the tax system with a segment of the population. Overview: Tax authorities employ various methods to deter such activities. For example, the Internal Revenue Service in the United States requires employers in industries where tipping is common to maintain meticulous records of all tips earned and to account for tips when calculating payroll deductions, and also levies heavy penalties against employers and employees alike in cases of noncompliance. Even absent such rigorous and targeted recordkeeping requirements, the increasing prevalence of tipping using electronic payment methods makes it far easier today for tax authorities to obtain credible evidence of low-balling compared to past years. Overview: Taxpayers able to claim deductions may sometimes "high-ball" these figures to low-ball their taxable income.
For example, a taxpayer who is allowed to deduct fuel expenses may high-ball this write-off by also claiming fuel purchased for personal use. Especially if the taxpayer has falsified a mileage log and/or purchases personal-use fuel from the same vendors (s)he uses for legitimate business fuel purchases (and obtains the same sort of receipts for both), proving that a taxpayer has illegally claimed such personal expenses can often be extremely difficult. In response, tax authorities suspecting such activities sometimes forgo criminal charges in favor of civil proceedings, since these have a much lower standard of proof. Overview: To further deter low-balling, lawmakers in some jurisdictions have even enacted measures to apply reverse onus in civil tax proceedings, meaning that when the tax authorities choose to pursue civil proceedings it is up to the taxpayer to prove that (s)he did not earn the disputed income and/or incurred the disputed expenses legitimately, and not the other way around. Negotiation: In negotiation, an ambit claim is an initial demand made over and above what is expected in counter-offers and settlement. Studies: Cialdini, Cacioppo, Bassett, and Miller (1978) demonstrated the technique of low-balling in a university setting. They asked an initial group of first-year psychology students to volunteer to be part of a study on cognition. The researchers were clear about the meeting time being 7 a.m. Only 31 per cent (the control group) of the first-year college students were willing to sacrifice and wake up early to support research in psychology. In a second condition (the lowballed group), the subjects were asked the same favour, but this time they were not told a time. Of them, 56 per cent agreed to take part. After agreeing to help in the study, they were told that they would have to meet at 7 a.m. and that they could back out if they so wished. None backed out of their commitment. Studies: On the day of the actual meeting, 95% of the lowballed group who had agreed to participate showed up for their 7 a.m. appointment, as did 79% of the control group who had agreed to participate. Hence, when people have already shown commitment, they are less likely to back out, as they have already made up their minds.
**Deletion (music industry)** Deletion (music industry): Deletion is a music industry term referring to the removal of a record or records from a label's official catalog, so that it is out of print. This is usually done when a title becomes unprofitable to manufacture, but it may also occur at a recording artist's request. Process: Deletion can be for a variety of reasons, but usually reflects a decline in sales so that distributing the record is no longer profitable. Singles are routinely deleted after a period of weeks, but an album by a major artist may remain in the catalog indefinitely. Process: When titles are deleted in the US, the remaining stock would be defaced with a cut-out through the sleeve or case. Cut-out records formed a grey market outside the major distribution channels. In the 1993 book Stiffed: A True Story of MCA, the Music Business, and the Mafia, Bill Knoedelseder wrote of how MCA Records became the subject of a federal investigation of its cut-out sales practices after a deal allegedly involving organized crime. Effects: Deletion in the music industry differs from print publishing in that recording contracts generally do not return the rights to the artist when a title ceases to be manufactured. When PolyGram took over JMT Records, a small jazz label, in 1995, it was understood to have announced that the entire JMT catalogue would be deleted, shocking dozens of artists. According to Tim Berne, "this means that the majority of my work simply vanishes." According to Louis Barfe, "many deleted gems are locked in archives, unheard and quite possibly deteriorating." Although he recommends that they digitize this music and offer it for download, he notes that "niche labels have sprung up specialising in reissuing out-of-copyright recordings". Some bootlegs have been issued just so fans can obtain deleted recordings without having to search the second-hand market for them. Digital media: More recently, the rise of digital media has eliminated much of the cost of music distribution, and companies have begun to see deleted records for their long tail potential, selling via iTunes and other online means. A single company, ArkivMusic, has struck deals with all four major publishers (and numerous minor ones) of classical music recordings to make their deleted records available via a burn-on-demand service. Exceptions: A prominent exception to the practice was the label Folkways Records, whose founder Moe Asch "never deleted a single title from the ... catalogue". According to Asch, "Just because the letter J is less popular than the letter S, you don't take it out of the dictionary." When the label was disbanded, Asch enlisted the Smithsonian Institution to maintain the catalogue "in perpetuity". Examples: In July 1972, the British music paper Melody Maker reported that a cut-price LP issued by Virgin Records was facing deletion because, ironically, it was too popular. Faust's The Faust Tapes, then at number 18 in Melody Maker's chart, actually cost more to produce than its selling price (49p), and so Virgin supposedly lost £2,000 on sales of 60,000. It has since been argued that this move was merely a publicity stunt by Virgin's owner, Richard Branson. Examples: On November 16, 1990, Arista Records deleted Milli Vanilli's album Girl You Know It's True very quickly after Frank Farian admitted that Rob Pilatus and Fab Morvan did not sing on the record.
In addition to this, the duo's Grammy Award was revoked a few days later. American heavy metal band Pantera's first four albums have been notably deleted from label catalogs: Metal Magic, Projects in the Jungle, I Am the Night, and Power Metal. The largely glam metal-oriented albums are not favorites of the band, who transitioned to groove and thrash metal from the release of Cowboys From Hell onward. They are only available in bootleg form. Rex Brown himself said that there will never be a reissue of them, citing the fact that every member of the band's most well-known lineup had been against it. The British duo The KLF summarily deleted their entire back catalogue when they 'retired' from the music industry in 1992. Manic Street Preachers' 2000 single "The Masses Against The Classes" was deleted on the day of release as a promotional gimmick. However, copies of the single continued to be available until supplies ran out, which allowed it to reach Number 1 and remain in the charts for 7 weeks. Examples: The 2006 Gnarls Barkley single "Crazy" was deleted by Warner Music after six weeks at #1 in the UK as a deliberate move to protect it from overexposure. Deleted singles could not then remain on the UK Singles Chart, so the physical single no longer charted after two weeks. However, it remained a high-selling download single and has continued to receive heavy airplay well after the single was deleted. Examples: On 20 April 2013, Dutch composer John Ewbank deleted his song "Koningslied" ("The King's Song") only two days after its initial release, citing an overload of criticism aimed at him personally and at the song itself from the general public and the media. The song had been commissioned to act as the official song of Willem-Alexander, Prince of Orange's upcoming investiture as the new King of the Netherlands on 30 April 2013. The song, already at number one in the iTunes download charts on the day of its release, was performed by a large number of well-known Dutch artists.
**Holton Taxol total synthesis** Holton Taxol total synthesis: The Holton Taxol total synthesis, published by Robert A. Holton and his group at Florida State University in 1994, was the first total synthesis of Taxol (generic name: paclitaxel). The Holton Taxol total synthesis is a good example of a linear synthesis. The synthesis starts from patchoulene oxide, a commercially available natural compound. Holton Taxol total synthesis: This epoxide can be obtained in two steps from the terpene patchoulol and also from borneol. The reaction sequence is also enantioselective, synthesizing (+)-Taxol from (−)-patchoulene oxide or (−)-Taxol from (−)-borneol, with a reported specific rotation of ±47° (c = 0.19, MeOH). The Holton sequence to Taxol is relatively short compared to that of the other groups (46 linear steps from patchoulene oxide). One of the reasons is that patchoulene oxide already contains 15 of the 20 carbon atoms required for the Taxol ABCD ring framework. Other raw materials required for this synthesis include 4-pentenal, m-chloroperoxybenzoic acid, methyl magnesium bromide and phosgene. Two key chemical transformations in this sequence are a Chan rearrangement and a sulfonyloxaziridine enolate oxidation. Retrosynthesis: It was envisaged that Taxol (51) could be accessed through tail addition of the Ojima lactam 48 to alcohol 47. Of the four rings of Taxol, the D ring was formed last, the result of a simple intramolecular SN2 reaction of hydroxytosylate 38, which could be synthesized from hydroxyketone 27. Formation of the six-membered C ring took place through a Dieckmann condensation of lactone 23, which could be obtained through a Chan rearrangement of carbonate ester 15. Substrate 15 could be derived from ketone 6, which, after several oxidations and rearrangements, could be furnished from commercially available patchoulene oxide 1. AB ring synthesis: As shown in Scheme 1, the first steps in the synthesis created the bicyclo[5.3.1]undecane AB ring system of Taxol. Reaction of epoxide 1 with tert-butyllithium removed the acidic α-epoxide proton, leading to an elimination reaction and simultaneous ring-opening of the epoxide to give allylic alcohol 2. The allylic alcohol was epoxidized to epoxyalcohol 3 using tert-butyl hydroperoxide and titanium(IV) tetraisopropoxide. In the subsequent reaction, the Lewis acid boron trifluoride catalyzed the ring opening of the epoxide followed by skeletal rearrangement and an elimination reaction to give unsaturated diol 4. The newly created hydroxyl group was protected as the triethylsilyl ether (5). A tandem epoxidation with meta-chloroperbenzoic acid and Lewis acid-catalyzed Grob fragmentation gave ketone 6, which was then protected as the tert-butyldimethylsilyl ether 7 in 94% yield over three steps. C ring preparation: As shown in Scheme 2, the next phase involved addition of the carbon atoms required for the formation of the C ring. Ketone 7 was treated with magnesium bromide diisopropylamide and underwent an aldol reaction with 4-pentenal (8) to give β-hydroxyketone 9. The hydroxyl group was protected as the asymmetric carbonate ester (10). Oxidation of the enolate of ketone 10 with (-)-camphorsulfonyl oxaziridine (11) gave α-hydroxyketone 12. Reduction of the ketone group with 20 equivalents of sodium bis(2-methoxyethoxy)aluminum hydride (Red-Al) gave triol 13, which was immediately converted to carbonate 14 by treatment with phosgene. Swern oxidation of alcohol 14 gave ketone 15.
The next step formed the final carbon-carbon bond between the B and C rings. This was achieved through a Chan rearrangement of 15 using lithium tetramethylpiperidide to give α-hydroxylactone 16 in 90% yield. The hydroxyl group was reductively removed using samarium(II) iodide to give an enol, and chromatography of this enol on silica gel gave the separable diastereomers cis 17c (77%) and trans 17t (15%), which could be recycled to 17c through treatment with potassium tert-butoxide. Treatment of pure 17c with lithium tetramethylpiperidide and (±)-camphorsulfonyl oxaziridine gave separable α-hydroxyketones 18c (88%) and 18t (8%) in addition to some recovered starting material (3%). Reduction of pure ketone 18c using Red-Al followed by basic work-up resulted in epimerization to give the required trans-fused diol 19 in 88% yield. C ring synthesis: As shown in Scheme 3, diol 19 was protected with phosgene as a carbonate ester (20). The terminal alkene group of 20 was next converted to a methyl ester using ozonolysis followed by oxidation with potassium permanganate and esterification with diazomethane. Ring expansion to give the cyclohexane C ring 24 was achieved using a Dieckmann condensation of lactone 23 with lithium diisopropylamide as a base at -78 °C. Decarboxylation of 24 required protection of the hydroxyl group as the 2-methoxy-2-propyl (MOP) ether (25). With the protecting group in place, decarboxylation was effected with potassium thiophenolate in dimethylformamide to give protected hydroxy ketone 26. In the next two steps the MOP protecting group was removed under acidic conditions, and alcohol 27 was reprotected as the more robust benzyloxymethyl ether 28. The ketone was converted to the trimethylsilyl enol ether 29, which was subsequently oxidized in a Rubottom oxidation using m-chloroperbenzoic acid to give the trimethylsilyl-protected acyloin 30. At this stage the final missing carbon atom in the Taxol ring framework was introduced in a Grignard reaction of ketone 30 using a 10-fold excess of methylmagnesium bromide to give tertiary alcohol 31. Treatment of this tertiary alcohol with the Burgess reagent (32) gave exocyclic alkene 33. D ring synthesis and AB ring elaboration: In this section of the Holton Taxol synthesis (Scheme 4), the oxetane D ring was completed and ring B was functionalized with the correct substituents. Allylic alcohol 34, obtained from deprotection of silyl enol ether 33 with hydrofluoric acid, was oxidized with osmium tetroxide in pyridine to give triol 35. After protection of the primary hydroxyl group, the secondary hydroxyl group in 36 was converted to a good leaving group using p-toluenesulfonyl chloride. Subsequent deprotection of the trimethylsilyl ether 37 gave tosylate 38, which underwent cyclization to give oxetane 39 by nucleophilic displacement of the tosylate that occurred with inversion of configuration. The remaining unprotected tertiary alcohol was acylated, and the triethylsilyl group was removed to give allylic alcohol 41. The carbonate ester was cleaved by reaction with phenyllithium in tetrahydrofuran at -78 °C to give alcohol 42. The unprotected secondary alcohol was oxidized to ketone 43 using tetrapropylammonium perruthenate (TPAP) and N-methylmorpholine N-oxide (NMO). This ketone was deprotonated with potassium tert-butoxide in tetrahydrofuran at low temperature and further oxidized by reaction with benzeneseleninic anhydride to give α-hydroxyketone 44.
Further treatment of 44 with potassium tert-butoxide furnished α-hydroxyketone 45 through a Lobry de Bruyn–van Ekenstein rearrangement. Substrate 45 was subsequently acylated to give α-acetoxyketone 46. Tail addition: In the final stages of the synthesis (Scheme 5), the hydroxyl group in 46 was deprotected to give alcohol 47. Reaction of the lithium alkoxide of 47 with the Ojima lactam 48 added the tail in 49. Deprotection of the triethylsilyl ether with hydrofluoric acid and removal of the BOM group under reductive conditions gave (−)-Taxol 51 in 46 steps. Precursor synthesis: Patchoulene oxide (1) could be accessed from the terpene patchoulol (52) through a series of acid-catalyzed carbocation rearrangements followed by an elimination according to Zaitsev's rule to give patchoulene (53). The driving force for the rearrangement is relief of ring strain. Epoxidation of 53 with peracetic acid gave patchoulene oxide 1. Protecting groups: The total synthesis makes use of multiple protecting groups, including the triethylsilyl (TES) ether, the tert-butyldimethylsilyl (TBS) ether, cyclic carbonate esters, the 2-methoxy-2-propyl (MOP) ether, the benzyloxymethyl (BOM) ether and the trimethylsilyl (TMS) ether, all introduced and removed in the steps described above.
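The significance of describing this as a "linear synthesis" of 46 steps is easier to appreciate numerically: in a linear route the overall yield is the product of all the individual step yields, so even good per-step yields compound into a small overall figure. The short sketch below illustrates this compounding; the per-step yields are hypothetical round numbers, not the yields reported for the Holton route.

```python
# Illustration of how per-step yields compound in a linear synthesis.
# The step yields below are hypothetical, not the reported Holton yields.

def overall_yield(step_yields):
    """Return the fractional overall yield of a linear sequence of steps."""
    total = 1.0
    for y in step_yields:
        total *= y
    return total

# 46 linear steps at a hypothetical average of 90% per step:
print(f"46 steps at 90% each: {overall_yield([0.90] * 46):.2%} overall")  # ~0.8%

# A modest improvement per step changes the outcome dramatically:
print(f"46 steps at 95% each: {overall_yield([0.95] * 46):.2%} overall")  # ~9.4%
```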
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plasma sheet** Plasma sheet: In the magnetosphere, the plasma sheet is a sheet-like region of denser, hotter plasma (0.3-0.5 ions/cm3, versus 0.01-0.02 in the lobes) and weaker magnetic field located in the magnetotail near the equatorial plane, between the magnetosphere's north and south lobes. The origin of the plasma sheet is still a subject of discussion in magnetospheric physics, but the region is thought to play an important role in the transport of plasma around the Earth, from the magnetotail towards the Sun. The plasma sheet is closely related to the convective motion of plasma in the magnetotail that occurs as a result of magnetic field reconnection.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Adrenochrome** Adrenochrome: Adrenochrome is a chemical compound produced by the oxidation of adrenaline (epinephrine). It was the subject of limited research from the 1950s through to the 1970s as a potential cause of schizophrenia. While it has no current medical application, the related derivative compound, carbazochrome, is a hemostatic medication. Despite this compound's name, it is unrelated to the element chromium; instead, the ‑chrome suffix indicates a relationship to color, as pure adrenochrome is deep violet. Chemistry: The oxidation reaction that converts adrenaline into adrenochrome occurs both in vivo and in vitro. Silver oxide (Ag2O) was among the first reagents employed for this, but a variety of other oxidising agents have been used successfully. In solution, adrenochrome is pink, and further oxidation of the compound causes it to polymerize into brown or black melanin compounds. History: Several small-scale studies (involving 15 or fewer test subjects) conducted in the 1950s and 1960s reported that adrenochrome triggered psychotic reactions such as thought disorder and derealization. In 1954, researchers Abram Hoffer and Humphry Osmond claimed that adrenochrome is a neurotoxic, psychotomimetic substance and may play a role in schizophrenia and other mental illnesses. In what Hoffer called the "adrenochrome hypothesis", he and Osmond in 1967 speculated that megadoses of vitamin C and niacin could cure schizophrenia by reducing brain adrenochrome. The treatment of schizophrenia with such potent anti-oxidants is highly contested. In 1973, the American Psychiatric Association reported methodological flaws in Hoffer's work on niacin as a schizophrenia treatment and referred to follow-up studies that did not confirm any benefits of the treatment. Multiple additional studies in the United States, Canada, and Australia similarly failed to find benefits of megavitamin therapy to treat schizophrenia. History: The adrenochrome theory of schizophrenia waned, despite some evidence that it may be psychotomimetic, as adrenochrome was not detectable in people with schizophrenia. In the early 2000s, interest was renewed by the discovery that adrenochrome may be produced normally as an intermediate in the formation of neuromelanin. This finding may be significant because adrenochrome is detoxified at least partially by glutathione-S-transferase. Some studies have found genetic defects in the gene for this enzyme. Adrenochrome is also believed to have cardiotoxic properties. In popular culture: In his 1954 book The Doors of Perception, Aldous Huxley mentioned the discovery and the alleged effects of adrenochrome, which he likened to the symptoms of mescaline intoxication, although he had never consumed it. In popular culture: Anthony Burgess mentions adrenochrome as "drencrom" at the beginning of his 1962 novel A Clockwork Orange. The protagonist and his friends are drinking drug-laced milk: "They had no license for selling liquor, but there was no law yet against prodding some of the new veshches which they used to put into the old moloko, so you could peet it with vellocet or synthemesc or drencrom or one or two other veshches [...]" Hunter S. Thompson mentioned adrenochrome in his 1971 book Fear and Loathing in Las Vegas. This is the likely origin of current myths surrounding this compound, because a character states that "There's only one source for this stuff ... the adrenaline glands from a living human body. It's no good if you get it out of a corpse."
The adrenochrome scene also appears in the novel's film adaptation. In the DVD commentary, director Terry Gilliam admits that his and Thompson's portrayal is a fictional exaggeration. Gilliam insists that the drug is entirely fictional and seems unaware of the existence of a substance with the same name. Hunter S. Thompson also mentions adrenochrome in his book Fear and Loathing on the Campaign Trail '72. In the footnotes in chapter April, page 140, he says: "It was sometime after midnight in a ratty hotel room and my memory of the conversation is hazy, due to massive ingestion of booze, fatback, and forty cc's of adrenochrome." Adrenochrome is a component of several far-right conspiracy theories, such as QAnon and Pizzagate, in which the chemical plays a role similar to that of earlier blood libel and Satanic ritual abuse stories. According to QAnon, which has incorporated and expanded Pizzagate's claims about child sex abuse rings, a cabal of Satanists rapes and murders children, using the adrenochrome they "harvest" from their victims' blood as a drug or as an elixir of youth. In reality, adrenochrome is synthesized, solely for research purposes, by biotechnology companies.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Apple–Intel architecture** Apple–Intel architecture: The Apple–Intel architecture, or Mactel, is an unofficial name used for Macintosh personal computers developed and manufactured by Apple Inc. that use Intel x86 processors, rather than the PowerPC and Motorola 68000 ("68k") series processors used in their predecessors or the ARM-based Apple silicon SoCs used in their successors. As Apple changed the architecture of its products, they changed the firmware from the Open Firmware used on PowerPC-based Macs to the Intel-designed Extensible Firmware Interface (EFI). With the change in processor architecture to x86, Macs gained the ability to boot into x86-native operating systems (such as Microsoft Windows), while Intel VT-x brought near-native virtualization with macOS as the host OS. Technologies: Background Apple uses a subset of the standard PC architecture, which provides support for Mac OS X and support for other operating systems. Hardware and firmware components that must be supported to run an operating system on Apple-Intel hardware include the Extensible Firmware Interface. Technologies: The EFI and GUID Partition Table With the change in architecture, a change in firmware became necessary. Extensible Firmware Interface (EFI) is the firmware-based replacement for the PC BIOS from Intel. Designed by Intel, it was chosen by Apple to replace Open Firmware, used on PowerPC architectures. Since many operating systems, such as Windows XP and many versions of Windows Vista, are incompatible with EFI, Apple released a firmware upgrade with a Compatibility Support Module that provides a subset of traditional BIOS support with its Boot Camp product. Technologies: GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk. It is a part of the Extensible Firmware Interface (EFI) standard proposed by Intel as a substitute for the earlier PC BIOS. The GPT replaces the Master Boot Record (MBR) used with BIOS. Booting: To Mac operating systems Intel Macs can boot in two ways: directly via EFI, or in a "legacy" BIOS compatibility mode. For multibooting, holding down "Option" gives a choice of bootable devices, while the rEFInd bootloader is commonly used for added configurability. Booting: Legacy Live USBs cannot be used on Intel Macs; the EFI firmware can recognize and boot from USB drives, but it can only do this in EFI mode–when the firmware switches to BIOS mode, it no longer recognizes USB drives, due to lack of a BIOS-mode USB driver. Many operating systems, such as earlier versions of Windows and Linux, could only be booted in BIOS mode, or were more easily booted or perform better when booted in BIOS mode, and thus USB booting on Intel-based Macs was for a time largely limited to Mac OS X, which can easily be booted via EFI. Booting: To non-Mac operating systems On April 5, 2006, Apple made available for download a public beta version of Boot Camp, a collection of technologies that allows users of Intel-based Macs to boot Windows XP Service Pack 2. The first non-beta version of Boot Camp is included in Mac OS X v10.5, "Leopard." Before the introduction of Boot Camp, which provides most hardware drivers for Windows XP, drivers for XP were difficult to find.Linux can also be booted with Boot Camp. Differences from standard PCs: Intel-based Mac computers use very similar hardware to PCs from other manufacturers that ship with Microsoft Windows or Linux operating systems. In particular, CPUs, chipsets, and GPUs are entirely compatible. 
However, Apple computers also include some custom hardware and design choices not found in competing systems: System Management Controller is a custom Apple chip that controls various functions of the computer related to power management, including handling the power button, management of battery and thermal sensors, among others. It also plays a part in the protection scheme deployed to restrict booting macOS to Apple hardware (see Digital Rights Management below). Intel-based Macs do not implement a TPM. Differences from standard PCs: Laptop input devices. Early MacBook and MacBook Pro computers used an internal variant of USB as a keyboard and trackpad interconnect. Since the 2013 revision of the MacBook Air, Apple started to use a custom Serial Peripheral Interface controller instead. The 2016 MacBook Pro additionally uses a custom internal USB device dubbed "iBridge" as an interface to the Touch Bar and Touch ID components, as well as the FaceTime Camera. PC laptops generally use an internal variant of the legacy PS/2 keyboard interconnect. PS/2 also used to be the standard for PC laptop pointing devices, although a variety of other interfaces, including USB, SMBus, and I2C, may also be used. Differences from standard PCs: Additional custom hardware may include a GMUX chip that controls GPU switching, non-compliant implementations of solid-state storage and non-standard configurations of the HD Audio subsystem. Differences from standard PCs: Keyboard layout has significant differences between Apple and IBM PC keyboards. While PC keyboards can be used in macOS, as well as Mac keyboards in Microsoft Windows, some functional differences occur. For example, the Alt (PC) and ⌥ Option (Mac) keys function equivalently; the same is true for ⊞ Win (PC) and ⌘ Command (Mac) – however, the physical location of those keys is reversed. There are also keys exclusive to each platform (e.g. Prt Sc), some of which may require software remapping to achieve the desired function. Compact and laptop keyboards from Apple also lack some keys considered essential on PCs, such as the forward Delete key, although some of them are accessible through the Fn key. Differences from standard PCs: Boot process. All Intel-based Macs have been using some version of EFI as the boot firmware. At the time the platform debuted in 2006, it was in stark contrast to PCs, which almost universally employed legacy BIOS, and Apple's implementation of EFI did not initially implement the Compatibility Support Module that would allow booting contemporary standard PC operating systems. Apple updated the firmware with CSM support with the release of Boot Camp in April 2006, and since the release of Windows 8 in 2012, Microsoft has required its OEM partners to use the UEFI boot process on PCs, which made the differences smaller. However, Apple's version of EFI also includes some custom extensions that are utilized during the regular macOS boot process, which include the following: Drivers for the HFS Plus and APFS file systems with support for locating the bootloader based on the "blessed directory" and "blessed file" properties of HFS+ and APFS volumes. The EFI System Partition is thus not used or necessary for the regular macOS boot process. Differences from standard PCs: Rudimentary pre-boot GUI framework, including support for image drawing, mouse cursor and events. This is used by FileVault 2 to present the login screen before loading the operating system.
Differences from standard PCs: Other non-standard EFI services for managing various firmware features such as the computer's NVRAM and boot arguments. Some of these differences can pose obstacles both to running macOS on non-Apple hardware and to booting alternative operating systems on Mac computers – Apple only provides drivers for its custom hardware for macOS and Microsoft Windows (as part of Boot Camp); drivers for other operating systems such as Linux need to be written by third parties, usually volunteer free software enthusiasts. Digital rights management: Digital rights management in the Apple–Intel architecture is accomplished via the "Dont Steal Mac OS X.kext," sometimes referred to as DSMOS or DSMOSX, a file present in Intel-capable versions of the Mac OS X operating system. Its presence enforces a form of digital rights management, preventing Mac OS X from being installed on stock PCs. The name of the kext is a reference to the Mac OS X license conditions, which allow installation on Apple hardware only. According to Apple, anything else is stealing Mac OS X. The kext is located at /System/Library/Extensions on the volume containing the operating system. The extension contains a kernel function called page_transform() that performs AES decryption of "apple-protected" programs. A system lacking a proper key will not be able to run the Apple-restricted binaries, which include Dock, Finder, loginwindow, SystemUIServer, mds, ATSServer, backupd, fontd, translate, and translated. If the check fails, a short poem is displayed, reading "Your karma check for today: There once was a user that whined, his existing OS was so blind, he'd do better to pirate an OS that ran great, but found his hardware declined. Digital rights management: Please don't steal Mac OS! Really, that's way uncool. Digital rights management: (C) Apple Computer, Inc." After the initial announcement of the first Intel-based Mac hardware configurations, which reported a Trusted Platform Module among the system components, it was believed that the TPM was responsible for handling the DRM protection. It was later proven not to be the case. The keys are actually contained within the System Management Controller, a component exclusive to Apple computers, and can be easily retrieved from it. These two 32-byte keys form a human-readable ASCII string copyrighted by Apple, establishing another possible line of legal defence against prospective clone makers. Virtualization: The processors found in Intel Macs support Intel VT-x, which allows for high performance (near-native) virtualization that gives the user the ability to run and switch between two or more operating systems simultaneously, rather than having to dual-boot and run only one operating system at a time. Virtualization: The first virtualization software for Intel Macs was Parallels Desktop for Mac, released in June 2006. The Parallels virtualization products allow users to use installations of Windows XP and later in a virtualized mode while running macOS. VirtualBox is another piece of virtualization software originally from Innotek (now Oracle Corporation), which had a first public beta release for Mac OS X in April 2007. It supports VT-x and can run multiple other guest operating systems, including Windows XP and later. It is available free of charge under either a proprietary license or the GPL. VMware also offers a Mac virtualization product competing with Parallels called Fusion, released August 2007.
VMware's virtualization product also allows users to use installations of Windows XP and later under macOS. Virtualization: Regardless of the product used, there are inherent limitations and performance penalties in using a virtualized guest OS versus the native macOS or booting an alternative OS solution offered via Boot Camp.
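The GUID Partition Table mentioned earlier in this article has a fixed on-disk layout, so its primary header can be inspected directly from a raw disk image. The following sketch is a minimal read-only parser, assuming 512-byte logical sectors; "disk.img" is a placeholder path, not something referenced by the article.

```python
# Minimal read-only parser for the primary GPT header (assumes 512-byte sectors).
# "disk.img" is a placeholder path to a raw disk image.
import struct
import uuid

SECTOR = 512

def read_gpt_header(path):
    with open(path, "rb") as f:
        f.seek(1 * SECTOR)          # the primary GPT header lives at LBA 1
        hdr = f.read(92)            # standard header size
    if hdr[0:8] != b"EFI PART":
        raise ValueError("no GPT signature found")
    current_lba, backup_lba, first_usable, last_usable = struct.unpack_from("<4Q", hdr, 0x18)
    disk_guid = uuid.UUID(bytes_le=hdr[0x38:0x48])
    (entries_lba,) = struct.unpack_from("<Q", hdr, 0x48)
    num_entries, entry_size = struct.unpack_from("<2I", hdr, 0x50)
    return {
        "current_lba": current_lba,
        "backup_lba": backup_lba,
        "first_usable_lba": first_usable,
        "last_usable_lba": last_usable,
        "disk_guid": str(disk_guid),
        "partition_entries_lba": entries_lba,
        "num_entries": num_entries,
        "entry_size": entry_size,
    }

if __name__ == "__main__":
    print(read_gpt_header("disk.img"))
```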
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Parietal eminence** Parietal eminence: The parietal eminence (parietal tuber, parietal tuberosity) is a convex, smooth eminence on the external surface of the parietal bone of the skull. It is the site where intramembranous ossification of the parietal bone begins during embryological development. It tends to be slightly more prominent in women than in men, so may be used to help to identify the sex of a skull.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Väyrynenite** Väyrynenite: Väyrynenite is a rare phosphate mineral with formula MnBe(PO4)(OH,F). It was first described in 1954 for an occurrence in Viitaniemi, Eräjärvi, Finland, and named for mineralogist Heikki Allan Väyrynen of Helsinki, Finland. It occurs in pegmatites as an alteration of beryl and triphylite. It occurs in association with eosphorite, moraesite, hurlbutite, beryllonite, amblygonite, apatite, tourmaline, topaz, muscovite, microcline and quartz.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Subsidence** Subsidence: Subsidence is a general term for downward vertical movement of the Earth's surface, which can be caused by both natural processes and human activities. Subsidence involves little or no horizontal movement, which distinguishes it from slope movement. Processes that lead to subsidence include dissolution of underlying carbonate rock by groundwater; gradual compaction of sediments; withdrawal of fluid lava from beneath a solidified crust of rock; mining; pumping of subsurface fluids, such as groundwater or petroleum; or warping of the Earth's crust by tectonic forces. Subsidence resulting from tectonic deformation of the crust is known as tectonic subsidence and can create accommodation for sediments to accumulate and eventually lithify into sedimentary rock. Ground subsidence is of global concern to geologists, geotechnical engineers, surveyors, engineers, urban planners, landowners, and the public in general. Pumping of groundwater or petroleum has led to subsidence of as much as 9 meters (30 ft) in many locations around the world, incurring costs measured in hundreds of millions of US dollars. Causes: Dissolution of limestone Subsidence frequently causes major problems in karst terrains, where dissolution of limestone by fluid flow in the subsurface creates voids (i.e., caves). If the roof of a void becomes too weak, it can collapse and the overlying rock and earth will fall into the space, causing subsidence at the surface. This type of subsidence can cause sinkholes which can be many hundreds of meters deep. Causes: Mining Several types of sub-surface mining, and specifically methods which intentionally cause the extracted void to collapse (such as pillar extraction, longwall mining and any metalliferous mining method which uses "caving" such as "block caving" or "sub-level caving") will result in surface subsidence. Mining-induced subsidence is relatively predictable in its magnitude, manifestation and extent, except where a sudden pillar or near-surface tunnel collapse occurs (usually very old workings). Mining-induced subsidence is nearly always very localized to the surface above the mined area, plus a margin around the outside. The vertical magnitude of the subsidence itself typically does not cause problems, except in the case of drainage (including natural drainage)–rather, it is the associated surface compressive and tensile strains, curvature, tilts and horizontal displacement that are the cause of the worst damage to the natural environment, buildings and infrastructure. Where mining activity is planned, mining-induced subsidence can be successfully managed if there is co-operation from all of the stakeholders. This is accomplished through a combination of careful mine planning, the taking of preventive measures, and the carrying out of repairs post-mining. Causes: Extraction of petroleum and natural gas If natural gas is extracted from a natural gas field the initial pressure (up to 60 MPa (600 bar)) in the field will drop over the years. The pressure helps support the soil layers above the field. If the gas is extracted, the pressure falls and the sediment compacts under the weight of the overburden, which may lead to earthquakes and subsidence at the ground level. Causes: Since exploitation of the Slochteren (Netherlands) gas field started in the late 1960s, the ground level over a 250 km2 area has dropped by a current maximum of 30 cm. Extraction of petroleum likewise can cause significant subsidence.
The city of Long Beach, California, has experienced 9 meters (30 ft) of subsidence over the course of 34 years of petroleum extraction, resulting in damage of over $100 million to infrastructure in the area. The subsidence was brought to a halt when secondary recovery wells pumped enough water into the oil reservoir to stabilize it. Causes: Earthquake Land subsidence can occur in various ways during an earthquake. Large areas of land can subside drastically during an earthquake because of offset along fault lines. Land subsidence can also occur as a result of settling and compacting of unconsolidated sediment from the shaking of an earthquake. The Geospatial Information Authority of Japan reported immediate subsidence caused by the 2011 Tōhoku earthquake. In Northern Japan, subsidence of 0.50 m (1.64 ft) was observed on the coast of the Pacific Ocean in Miyako, Tōhoku, while Rikuzentakata, Iwate measured 0.84 m (2.75 ft). In the south at Sōma, Fukushima, 0.29 m (0.95 ft) was observed. The maximum amount of subsidence was 1.2 m (3.93 ft), coupled with horizontal diastrophism of up to 5.3 m (17.3 ft) on the Oshika Peninsula in Miyagi Prefecture. Causes: Groundwater-related subsidence Groundwater-related subsidence is the subsidence (or the sinking) of land resulting from groundwater extraction. It is a growing problem in the developing world as cities increase in population and water use, without adequate pumping regulation and enforcement. One estimate has 80% of serious land subsidence problems associated with the excessive extraction of groundwater, making it a growing problem throughout the world. Causes: Groundwater fluctuations can also indirectly affect the decay of organic material. The habitation of lowlands, such as coastal or delta plains, requires drainage. The resulting aeration of the soil leads to the oxidation of its organic components, such as peat, and this decomposition process may cause significant land subsidence. This applies especially when groundwater levels are periodically adapted to subsidence, in order to maintain desired unsaturated zone depths, exposing more and more peat to oxygen. In addition to this, drained soils consolidate as a result of increased effective stress. In this way, land subsidence has the potential of becoming self-perpetuating, having rates up to 5 cm/yr. Water management used to be tuned primarily to factors such as crop optimization but, to varying extents, avoiding subsidence has come to be taken into account as well. Causes: Faulting induced When differential stresses exist in the Earth, these can be accommodated either by geological faulting in the brittle crust, or by ductile flow in the hotter and more fluid mantle. Where faults occur, absolute subsidence may occur in the hanging wall of normal faults. In reverse, or thrust, faults, relative subsidence may be measured in the footwall. Causes: Isostatic subsidence The crust floats buoyantly in the asthenosphere, with a ratio of mass below the "surface" in proportion to its own density and the density of the asthenosphere. If mass is added to a local area of the crust (e.g., through deposition), the crust subsides to compensate and maintain isostatic balance. The opposite of isostatic subsidence is known as isostatic rebound—the action of the crust returning (sometimes over periods of thousands of years) to a state of isostasy, such as after the melting of large ice sheets or the drying-up of large lakes after the last ice age. Lake Bonneville is a famous example of isostatic rebound.
Due to the weight of the water once held in the lake, the Earth's crust subsided nearly 200 feet (61 m) to maintain equilibrium. When the lake dried up, the crust rebounded. Today at Lake Bonneville, the center of the former lake is about 200 feet (61 m) higher than the former lake edges. Causes: Seasonal effects Many soils contain significant proportions of clay. Because of the very small particle size, they are affected by changes in soil moisture content. Seasonal drying of the soil results in a lowering of both the volume and the surface of the soil. If building foundations are above the level reached by seasonal drying, they move, possibly resulting in damage to the building in the form of tapering cracks. Causes: Trees and other vegetation can have a significant local effect on seasonal drying of soils. Over a number of years, a cumulative drying occurs as the tree grows. That can lead to the opposite of subsidence, known as heave or swelling of the soil, when the tree declines or is felled. As the cumulative moisture deficit is reversed, a process which can take up to 25 years, the surface level around the tree will rise and expand laterally. That often damages buildings unless the foundations have been strengthened or designed to cope with the effect. Impacts: Sinking cities Isfahan, Iran, has one of the most critical rates of land subsidence among Iranian cities.
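As a rough numerical illustration of the isostatic balance described above, a simple Airy-type calculation estimates how far the surface subsides when new load is deposited on crust floating in the denser asthenosphere: the added column is compensated when its extra weight equals the weight of displaced asthenosphere. The densities below are typical textbook values and the load is hypothetical.

```python
# Simple Airy-type isostasy estimate: depositing a load of thickness h and
# density rho_load on floating crust depresses the original surface by about
# h * rho_load / rho_asthenosphere (local compensation, no flexural strength).
# Densities are typical textbook values; the load itself is hypothetical.

def isostatic_subsidence(load_thickness_m, rho_load, rho_asthenosphere=3300.0):
    return load_thickness_m * rho_load / rho_asthenosphere

# Example: 100 m of water-saturated sediment (~2000 kg/m^3) deposited locally.
print(f"Estimated subsidence: {isostatic_subsidence(100.0, 2000.0):.1f} m")  # ~60.6 m
```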
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Autoguider** Autoguider: An autoguider is an automatic electronic guidance tool used in astronomy to keep a telescope pointed precisely at an object being observed. This prevents the object from drifting across the field of view during long-exposures which would create a blurred or elongated image. Usage: Imaging of dim celestial targets, usually deep sky objects, requires exposure times of many minutes, particularly when narrowband images are being taken. In order for the resulting image to maintain usable clarity and sharpness during these exposures, the target must be held at the same position within the telescope's field of view during the whole exposure; any apparent motion would cause point sources of light (such as stars) to appear as streaks, or the object being photographed to appear blurry. Even computer-tracked mounts and GoTo telescopes do not eliminate the need for tracking adjustments for exposures beyond a few minutes, as astrophotography demands an extremely high level of precision that these devices typically cannot achieve, especially if the mount is not properly polar aligned.To accomplish this automatically an autoguider is usually attached to either a guidescope or finderscope, which is a smaller telescope oriented in the same direction as the main telescope, or an off-axis guider, which uses a prism to divert some of the light originally headed towards the eyepiece. Usage: The device has a CCD or CMOS sensor that regularly takes short exposures of an area of sky near the object. After each image is captured, a computer measures the apparent motion of one or more stars within the imaged area and issues the appropriate corrections to the telescope's computerized mount. Usage: Some computer controlled telescope mounts have an autoguiding port that connects directly to the autoguider (usually referred to as an ST-4 port, which works with analog signals). In this configuration, a guide camera will detect any apparent drift in the field of view. It will then send this signal to a computer which can calculate the required correction. This correction is then sent back to the camera which relays it back to the mount.An autoguider need not be an independent unit; some high-end CCD imaging units (such as those offered by SBIG) have a second, integrated CCD sensor on the same plane as the main imaging chip that is dedicated to autoguiding. Astronomical video cameras or modified webcams can also serve as an autoguiding unit when used with guiding software such as Guidedog or PHD2, or general-purpose astronomical programs such as MaxDSLR. However, these setups are generally not as sensitive as specialized units. Usage: Since an image of a star can take up more than one pixel on an image sensor due to lens imperfections and other effects, autoguiders use the amount of light falling on each pixel to calculate where the star should actually be located. As a result, most autoguiders have subpixel accuracy. In other words, the star can be tracked to an accuracy better than the angular size represented by one CCD pixel. However, atmospheric effects (astronomical seeing) typically limit accuracy to one arcsecond in most situations. To prevent the telescope from moving in response to changes in the guide star's apparent position caused by seeing, the user can usually adjust a setting called "aggressiveness".
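The subpixel measurement described above can be sketched in a few lines: the guide star's position is estimated as a flux-weighted mean of pixel coordinates, and the drift from a reference position is converted into a correction scaled by an "aggressiveness" factor. This is a simplified illustration, not the algorithm of any particular guiding package; real software also handles background subtraction, mount calibration and axis geometry.

```python
# Flux-weighted centroid of a guide-star image plus a simple proportional
# correction. Simplified illustration only; real autoguiding software also
# performs background subtraction, calibration and axis transformations.

def centroid(image):
    """Return the (x, y) flux-weighted centroid of a 2-D list of pixel values."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, flux in enumerate(row):
            total += flux
            sx += x * flux
            sy += y * flux
    return sx / total, sy / total

def correction(reference, measured, aggressiveness=0.7):
    """Proportional correction (in pixels) to relay towards the mount."""
    dx = reference[0] - measured[0]
    dy = reference[1] - measured[1]
    return aggressiveness * dx, aggressiveness * dy

# A tiny synthetic star image; the centroid falls between pixel centres.
star = [
    [0,  1,  2,  1, 0],
    [1,  5, 12,  6, 1],
    [2, 12, 40, 14, 2],
    [1,  6, 13,  7, 1],
    [0,  1,  2,  1, 0],
]
pos = centroid(star)
print("measured centroid:", pos)                      # subpixel star position
print("correction to apply:", correction((2.0, 2.0), pos))
```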
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Small nucleolar RNA SNORD47** Small nucleolar RNA SNORD47: In molecular biology, SNORD47 (also known as U47) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. Small nucleolar RNA SNORD47: snoRNA U47 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. This snoRNA was originally cloned from HeLa cells and expression verified by northern blotting. It is predicted to guide 2'-O-ribose methylation of ribosomal RNA (rRNA) 28S at residue C3866. The mouse orthologue was also cloned. This snoRNA is encoded in the introns of the same genes as other C/D box snoRNAs U44, U74, U75, U76, U77, U78, U79, U80 and U81.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Colossally abundant number** Colossally abundant number: In mathematics, a colossally abundant number (sometimes abbreviated as CA) is a natural number that, in a particular, rigorous sense, has many divisors. More precisely, it is defined by the ratio between the sum of an integer's divisors and that integer raised to a power greater than one. For any such exponent, whichever integer has the highest ratio is a colossally abundant number. It is a stronger restriction than that of a superabundant number, but not strictly stronger than that of an abundant number. Colossally abundant number: Formally, a number n is said to be colossally abundant if there is an ε > 0 such that for all k > 1, σ(n)/n^(1+ε) ≥ σ(k)/k^(1+ε), where σ denotes the sum-of-divisors function. The first 15 colossally abundant numbers, 2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720, 1441440, 4324320, 21621600, 367567200, 6983776800 (sequence A004490 in the OEIS), are also the first 15 superior highly composite numbers, but neither set is a subset of the other. History: Colossally abundant numbers were first studied by Ramanujan and his findings were intended to be included in his 1915 paper on highly composite numbers. Unfortunately, the publisher of the journal to which Ramanujan submitted his work, the London Mathematical Society, was in financial difficulties at the time and Ramanujan agreed to remove aspects of the work to reduce the cost of printing. His findings were mostly conditional on the Riemann hypothesis and with this assumption he found upper and lower bounds for the size of colossally abundant numbers and proved that what would come to be known as Robin's inequality (see below) holds for all sufficiently large values of n. The class of numbers was reconsidered in a slightly stronger form in a 1944 paper of Leonidas Alaoglu and Paul Erdős in which they tried to extend Ramanujan's results. Properties: Colossally abundant numbers are one of several classes of integers that try to capture the notion of having many divisors. For a positive integer n, the sum-of-divisors function σ(n) gives the sum of all those numbers that divide n, including 1 and n itself. Paul Bachmann showed that on average, σ(n) is around π²n/6. Grönwall's theorem, meanwhile, says that the maximal order of σ(n) is ever so slightly larger, specifically there is an increasing sequence of integers n such that for these integers σ(n) is roughly the same size as e^γ n log(log(n)), where γ is the Euler–Mascheroni constant. Hence colossally abundant numbers capture the notion of having many divisors by requiring them to maximise, for some ε > 0, the value of the function σ(n)/n^(1+ε) over all values of n. Bachmann and Grönwall's results ensure that for every ε > 0 this function has a maximum and that as ε tends to zero these maxima will increase. Thus there are infinitely many colossally abundant numbers, although they are rather sparse, with only 22 of them less than 10^18. Just like with superior highly composite numbers, an effective construction of the set of all colossally abundant numbers is given by the following monotonic mapping from the positive real numbers. Let e_p(ε) = ⌊(log(p^(1+ε) − 1) − log(p^ε − 1)) / log p⌋ − 1 for any prime number p and positive real ε. Then s(ε) = ∏_p p^(e_p(ε)), with the product taken over all primes p, is a colossally abundant number. For every ε the above function has a maximum, but it is not obvious, and in fact not true, that for every ε the maximising value of n is unique. Alaoglu and Erdős studied how many different values of n could give the same maximal value of the above function for a given value of ε.
They showed that for most values of ε there would be a single integer n maximising the function. Later, however, Erdős and Jean-Louis Nicolas showed that for a certain set of discrete values of ε there could be two or four different values of n giving the same maximal value. In their 1944 paper, Alaoglu and Erdős conjectured that the ratio of two consecutive colossally abundant numbers was always a prime number. They showed that this would follow from a special case of the four exponentials conjecture in transcendental number theory, specifically that for any two distinct prime numbers p and q, the only real numbers t for which both p^t and q^t are rational are the positive integers. Using the corresponding result for three primes—a special case of the six exponentials theorem that Siegel claimed to have proven—they managed to show that the quotient of two consecutive colossally abundant numbers is always either a prime or a semiprime (that is, a number with just two prime factors). The quotient can never be the square of a prime. Properties: Alaoglu and Erdős's conjecture remains open, although it has been checked up to at least 10^7. If true it would mean that there was a sequence of not necessarily distinct prime numbers p1, p2, p3, ... such that the nth colossally abundant number was of the form c_n = p1 · p2 · ⋯ · pn. Assuming the conjecture holds, this sequence of primes begins 2, 3, 2, 5, 2, 3, 7, 2 (sequence A073751 in the OEIS). Alaoglu and Erdős's conjecture would also mean that no value of ε gives four different integers n as maxima of the above function. Properties: Relation to abundant numbers Like superabundant numbers, colossally abundant numbers are a generalization of abundant numbers. Also like superabundant numbers, it is not a strict generalization; a number can be colossally abundant without being abundant. This is true in the case of 6; 6's divisors are 1, 2, 3, and 6, but an abundant number is defined to be one where the sum of the divisors, excluding itself, is greater than the number itself; 1 + 2 + 3 = 6, so this condition is not met (and 6 is instead a perfect number). However all colossally abundant numbers are also superabundant numbers. Relation to the Riemann hypothesis: In the 1980s Guy Robin showed that the Riemann hypothesis is equivalent to the assertion that the following inequality is true for all n > 5040 (where γ is the Euler–Mascheroni constant and e^γ ≈ 1.781072418): σ(n) < e^γ n log(log(n)). This inequality is known to fail for 27 numbers (sequence A067698 in the OEIS): 2, 3, 4, 5, 6, 8, 9, 10, 12, 16, 18, 20, 24, 30, 36, 48, 60, 72, 84, 120, 180, 240, 360, 720, 840, 2520, 5040. Robin showed that if the Riemann hypothesis is true then n = 5040 is the last integer for which it fails. The inequality is now known as Robin's inequality after his work. It is known that Robin's inequality, if it ever fails to hold, will fail for a colossally abundant number n; thus the Riemann hypothesis is in fact equivalent to Robin's inequality holding for every colossally abundant number n > 5040. Relation to the Riemann hypothesis: In 2001–2 Lagarias demonstrated an alternate form of Robin's assertion which requires no exceptions, using the harmonic numbers instead of log: σ(n) ≤ H_n + exp(H_n) log(H_n) for all n ≥ 1. Or, other than the 8 exceptions of n = 1, 2, 3, 4, 6, 12, 24, 60: σ(n) < H_n + exp(H_n) log(H_n).
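The prime-exponent formula given above lends itself to direct computation. The sketch below builds s(ε) for a few values of ε using that formula; it is a naive illustration rather than an efficient implementation, and the ε values were chosen so that the output lands on the small colossally abundant numbers quoted earlier.

```python
# Naive construction of colossally abundant numbers from the exponent formula
#   e_p(eps) = floor((log(p**(1+eps) - 1) - log(p**eps - 1)) / log(p)) - 1
# and s(eps) = product over primes p of p**e_p(eps).
from math import floor, log

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def exponent(p, eps):
    return floor((log(p ** (1 + eps) - 1) - log(p ** eps - 1)) / log(p)) - 1

def s(eps, prime_bound=100):
    n = 1
    for p in primes_up_to(prime_bound):
        e = exponent(p, eps)
        if e > 0:
            n *= p ** e
    return n

for eps in (0.5, 0.25, 0.2, 0.1, 0.08):
    print(eps, s(eps))      # prints 2, 6, 12, 60, 120 for these eps values
```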
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cost-of-production theory of value** Cost-of-production theory of value: In economics, the cost-of-production theory of value is the theory that the price of an object or condition is determined by the sum of the cost of the resources that went into making it. The cost can comprise any of the factors of production (including labor, capital, or land) and taxation. Cost-of-production theory of value: The theory makes the most sense under assumptions of constant returns to scale and the existence of just one non-produced factor of production. With these assumptions, the minimal price theorem, a dual version of the so-called non-substitution theorem by Paul Samuelson, holds. Under these assumptions, the long-run price of a commodity is equal to the sum of the cost of the inputs into that commodity, including interest charges. Historical development of the theory: Historically, the best-known proponent of such theories is probably Adam Smith. Piero Sraffa, in his introduction to the first volume of the "Collected Works of David Ricardo", referred to Smith's "adding-up" theory. Smith contrasted natural prices with market prices. Smith theorized that market prices would tend toward natural prices, where outputs would stand at what he characterized as the "level of effectual demand". At this level, Smith's natural prices of commodities are the sum of the natural rates of wages, profits, and rent that must be paid for inputs into production. (Smith is ambiguous about whether rent is price determining or price determined. The latter view is the consensus of later classical economists, with the Ricardo-Malthus-West theory of rent.) David Ricardo mixed this cost-of-production theory of prices with the labor theory of value, as that latter theory was understood by Eugen von Böhm-Bawerk and others. This is the theory that prices tend toward proportionality to the socially necessary labor embodied in a commodity. Ricardo sets this theory at the start of the first chapter of his Principles of Political Economy and Taxation, but contextualizes it as only relating to commodities with elastic supply. Taknaga advances a new interpretation, that Ricardo held a cost-of-production theory of value from the start, and presents a more coherent reading based on the texts of Principles of Political Economy and Taxation. This alleged refutation leads to what later became known as the transformation problem. Karl Marx later takes up that theory in the first volume of Capital, while indicating that he is quite aware that the theory is untrue at lower levels of abstraction. This has led to all sorts of arguments over what both David Ricardo and Karl Marx "really meant". Nevertheless, it seems undeniable that all the major classical economists and Marx explicitly rejected the labor theory of price ([1]). Historical development of the theory: A somewhat different theory of cost-determined prices is provided by the "neo-Ricardian School" [2] of Piero Sraffa and his followers. Yoshinori Shiozawa presented a modern interpretation of Ricardo's cost-of-production theory of value. The Polish economist Michał Kalecki [3] distinguished between sectors with "cost-determined prices" (such as manufacturing and services) and those with "demand-determined prices" (such as agriculture and raw material extraction). Market price: Market price is a familiar economic concept: it is the price that a good or service is offered at, or will fetch, in the marketplace. It is of interest mainly in the study of microeconomics.
Market value and market price are equal only under conditions of market efficiency, equilibrium, and rational expectations. In economics, returns to scale and economies of scale are related terms that describe what happens as the scale of production increases. They are different, non-interchangeable concepts. Labor theory of value: The labor theories of value are economic theories according to which the true values of commodities are related to the labor needed to produce them. Labor theory of value: There are many accounts of labor value, with the common element that the "value" of an exchangeable good or service is, or ought to be, or tends to be, or can be considered as, equal or proportional to the amount of labor required to produce it (including the labor required to produce the raw materials and machinery used in production). Labor theory of value: Different labor theories of value prevailed among classical economists through the mid-19th century. This theory is especially associated with Adam Smith and David Ricardo. Since that time, it has been most often associated with Marxian economics, while among modern mainstream economists it is considered to be superseded by the marginal utility approach. Taxes and subsidies: Taxes and subsidies change the price of goods and services. A marginal tax on the sellers of a good will shift the supply curve to the left until the vertical distance between the two supply curves is equal to the per unit tax; other things remaining equal, this will increase the price paid by the consumers (which is equal to the new market price) and decrease the price received by the sellers. Marginal subsidies on production will shift the supply curve to the right until the vertical distance between the two supply curves is equal to the per unit subsidy; other things remaining equal, this will decrease the price paid by the consumers (which is equal to the new market price) and increase the price received by the producers.
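The supply-curve shift described in the last paragraph can be made concrete with a small linear supply-and-demand example: a per-unit tax on sellers raises the price consumers pay and lowers the price sellers keep, with the split depending on the relative slopes. The curves and the tax rate below are hypothetical numbers chosen only for illustration.

```python
# Per-unit tax on sellers with linear demand and supply.
#   Demand: Qd = a - b*P          (P is the price consumers pay)
#   Supply: Qs = c + d*(P - t)    (sellers respond to the price net of the tax t)
# All parameter values are hypothetical.

def equilibrium(a, b, c, d, t=0.0):
    # a - b*P = c + d*(P - t)  =>  P = (a - c + d*t) / (b + d)
    price_consumers = (a - c + d * t) / (b + d)
    price_sellers = price_consumers - t
    quantity = a - b * price_consumers
    return price_consumers, price_sellers, quantity

a, b, c, d = 100.0, 1.0, 20.0, 1.0
p0, _, q0 = equilibrium(a, b, c, d)
pc, ps, q1 = equilibrium(a, b, c, d, t=10.0)

print(f"no tax:   price {p0:.1f}, quantity {q0:.1f}")                       # 40.0, 60.0
print(f"with tax: consumers pay {pc:.1f}, sellers keep {ps:.1f}, quantity {q1:.1f}")
# Consumers pay 45.0 and sellers keep 35.0: the 10-unit tax is split between them,
# and the traded quantity falls from 60.0 to 55.0.
```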
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tryptic soy broth** Tryptic soy broth: Tryptic soy broth or Trypticase soy broth (frequently abbreviated as TSB) is used in microbiology laboratories as a culture broth to grow aerobic bacteria. It is a complex, general purpose medium that is routinely used to grow certain pathogenic bacteria, which tend to have high nutritional requirements (i.e., they are fastidious). Its agar counterpart is tryptic soy agar (TSA). One of the components of Tryptic soy broth is Phytone, which is an enzymatic digest of soybean meal. Tryptic soy broth: TSB is frequently used in commercial diagnostics in conjunction with the additive sodium thioglycolate, which promotes growth of anaerobes. Preparation: To prepare 1 liter of TSB, the following ingredients are dissolved under gentle heat. Adjustments to pH should be made using 1N HCl or 1N NaOH to reach a final target pH of 7.3 ± 0.2 at 25 °C (77 °F). The solution is then autoclaved for 15 minutes at 121 °C (250 °F).
17 grams (0.60 oz) of Trypticase peptone (Tryptone)
3 grams (0.11 oz) of Phytone peptone (Soytone)
5 grams (0.18 oz) of sodium chloride (NaCl)
2.5 grams (0.088 oz) of dipotassium phosphate (K2HPO4)
2.5 grams (0.088 oz) of dextrose (glucose)
made up to 1 liter (35 imp fl oz; 34 U.S. fl oz) with distilled water
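Because the formulation above is specified per liter, scaling it to other batch volumes is simple arithmetic. The sketch below scales the listed amounts to an arbitrary volume; it is a convenience illustration of the stated recipe, not a laboratory protocol.

```python
# Scale the tryptic soy broth formulation (grams per liter) to a batch volume.
TSB_GRAMS_PER_LITER = {
    "Trypticase peptone (Tryptone)": 17.0,
    "Phytone peptone (Soytone)": 3.0,
    "Sodium chloride (NaCl)": 5.0,
    "Dipotassium phosphate (K2HPO4)": 2.5,
    "Dextrose (glucose)": 2.5,
}

def scale_recipe(volume_liters):
    return {name: grams * volume_liters for name, grams in TSB_GRAMS_PER_LITER.items()}

for name, grams in scale_recipe(0.25).items():    # a 250 mL batch
    print(f"{grams:5.2f} g  {name}")
print("Dissolve under gentle heat, adjust to pH 7.3 ± 0.2, autoclave 15 min at 121 °C.")
```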
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Retread** Retread: Retread, also known as "recap" or "remold", is a re-manufacturing process for tires that replaces the tread on worn tires. Retreading is applied to casings of spent tires that have been inspected and repaired. It preserves about 90% of the material in spent tires and the material cost is about 20% compared to manufacturing a new one. Applications: United States Some applications for retreaded tires are airplanes, racing cars, buses and delivery trucks. Use of retreaded tires was common historically, but as of 2008 they were seldom used for passenger vehicles, mainly due to discomfort on the road, safety issues and cheaper tire brands surfacing on the market. About 17.6 million retreaded tires were sold in North America in 2006. Process: There are two main processes used for retreading tires, called Mold Cure and Pre Cure. Both processes start with the inspection of the tire, followed by a non-destructive inspection method such as shearography to locate non-visible damage and embedded debris and nails. Some casings are repaired and some are discarded. Tires can be retreaded multiple times if the casing is in usable condition. Tires used for short delivery vehicles are retreaded more times than long haul tires over the life of the tire body. Casings fit for retreading have the old tread buffed away to prepare for retreading. Process: Material cost for a retreaded tire is about 20% that of making a new tire. About 90% of the original tire by weight is retained in retreaded tires. A 1997 study estimated that the then-current generation of commercial vehicle tires could last up to 600,000 miles if they are retreaded two to three times. Pre cure A previously prepared tread strip is applied to the tire casing with cement. This method allows more flexibility in tire sizes and it is the most commonly used method, but results in a seam where the ends of the strip meet. Mold cure Raw rubber is applied to the tire casing and it is then placed in a mold where the tread is formed. A dedicated mold is required for each tire size and tread design. Bead to Bead molding In this subtype, retreading is also applied to the side walls. These tires are given entirely new branding and stamps. Regulations: Some jurisdictions have regulations concerning tire retreading. Europe In Europe all retreads, by law, must be manufactured according to EC Regulation 108 (car tires) or 109 (commercial vehicle tires). As part of this regulation all tires must be tested according to the same load and speed criteria as those undergone by new tires. The Landfill Directive of 1999 banned tires in landfills in 2003, and banned shredded tires in 2006. United States The Department of Transportation requires marking of a "DOTR number" which shows the name of the retreader and when it was retreaded. Safety: The United States National Highway Traffic Safety Administration recognizes the public perception that retread tires frequently used by heavy vehicles are less safe than new tires, as evidenced by tire debris frequently found on highways. The NHTSA is continuing research to determine the proportion of tire debris from retreads in comparison to new tires.
Additionally, the NHTSA is researching the cause of tire failure and the crash safety problem posed by tire failures. Federal Executive Order 13149, signed by President Bill Clinton, supports the use of retread tires for economic and environmental efficiency by requiring federal vehicles to use retread tires after original factory equipped tires become non serviceable, but only when "such products are reasonably available and meet applicable performance standards". Environmental impact: Retread tires in service lower the volume of raw materials required for the manufacturing of a new tire. This includes a pronounced reduction in the use of oil. In fact, the US EPA estimated a greater than 75% savings in oil used for a retread as compared to a new tire. This also means significant reductions in greenhouse gas emissions. A car tire has 40% natural rubber and 60% oil-based rubber, so retreading tires reduces the need for natural rubber significantly. Environmental impact: In addition to reducing the amount of raw materials extracted, retread tires also minimize the amount of waste that ends up in landfills. The latest figures by the US EPA indicate that over 11.2 M waste tires were dumped into the U.S. municipal solid waste stream. To put this figure in perspective, it is equivalent to lining up passenger tires tread to tread from roughly Los Angeles to San Diego or Philadelphia to Washington DC. Because a retread tire prevents the need for manufacturing a new tire, significant environmental benefits are achieved.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Protocrystalline** Protocrystalline: A protocrystalline phase is a distinct phase occurring during crystal growth, which evolves into a microcrystalline form. The term is typically associated with silicon films in optical applications such as solar cells. Applications: Silicon solar cells Amorphous silicon (a-Si) is a popular solar cell material owing to its low cost and ease of production. Owing to its disordered structure (Urbach tail), its absorption extends to energies below the band gap, resulting in a wide-range spectral response; however, it has a relatively low solar cell efficiency. Protocrystalline Si (pc-Si:H) also has a relatively low absorption near the band gap, owing to its more ordered crystalline structure. Thus, protocrystalline and amorphous silicon can be combined in a tandem solar cell, where the top thin layer of a-Si:H absorbs short-wavelength light whereas the longer wavelengths are absorbed by the underlying protocrystalline silicon layer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SELFOC Microlens** SELFOC Microlens: SELFOC Microlenses are flat-ended gradient-index lenses. The refractive index variation in the material is created by ion exchange. They are used as collimators or lenses for filter components. The flat ends make alignment easy. They were developed by Nippon Sheet Glass.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Abietadiene synthase** Abietadiene synthase: The enzyme abieta-7,13-diene synthase (EC 4.2.3.18) catalyzes the chemical reaction (+)-copalyl diphosphate ⇌ abieta-7,13-diene + diphosphateThis enzyme belongs to the family of lyases, specifically those carbon-oxygen lyases acting on phosphates. The systematic name of this enzyme class is (+)-copalyl-diphosphate diphosphate-lyase [cyclizing, abieta-7,13-diene-forming]. This enzyme is also called copalyl-diphosphate diphosphate-lyase (cyclizing). This enzyme participates in diterpenoid biosynthesis. Abietadiene synthase: It has recently been shown (Keeling, et al., 2011) that the orthologous gene in Norway spruce (Picea abies) does not produce abietadiene directly, but instead produces a thermally unstable allylic tertiary alcohol 13-hydroxy-8(14)- abietene, which readily dehydrates to abietadiene, levopimaradiene, palustradiene, and neoabietadiene, when analyzed by the commonly used gas chromatography. This has been confirmed in the other conifer species, lodgepole pine (Pinus contorta) and Jack pine (Pinus banksiana) (Hall et al., 2013).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mir-16 microRNA precursor family** Mir-16 microRNA precursor family: The miR-16 microRNA precursor family is a group of related small non-coding RNA genes that regulates gene expression. miR-16, miR-15, mir-195 and miR-497 are related microRNA precursor sequences from the mir-15 gene family ([1]). This microRNA family appears to be vertebrate specific and its members have been predicted or experimentally validated in a wide range of vertebrate species (MIPF0000006). Background: The human miR-16 precursor was discovered through detailed expression profile and karyotype analyses of patients by Calin and colleagues. Karyotyping of chromosome structures from individuals with B-cell chronic lymphocytic leukaemias (B-CLL) found that more than half have alterations in the 13q14 region. Deletions of this well characterised 1 megabase region of the genome were also observed in approximately 50% of mantle cell lymphoma, up to 40% of multiple myeloma, and 60% of prostate cancers. Comprehensive screenings of the region at the time did not provide consistent evidence of involvement from any of the genes known at the time. Using CD5+ B-lymphocytes, which are known to accumulate with B-CLL progression, the minimal region lost from the 13q14 region was scrutinised for regulatory elements. Publicly available sequence databases were used to identify a gene cluster which encodes the homologue to the human miR15 and miR16 from Caenorhabditis elegans. Gene targets: In the original publication which identified the action of miR15 and miR16 in the development of B-CLL, Calin and colleagues proposed that miR16 could target, with imperfect base pairing, 14 genes. Increased CD5+ B-lymphocytes in CLL suggest that miR16 may be involved in cellular differentiation. In animal models single-stranded microRNA species act by binding to imperfect mRNA complements, typically to the 3' UTR, although targets have also been observed in the coding sequence of the mRNA. Downregulation of miR16 (as well as miR15) was observed in diffuse large B-cell lymphoma. miR16 has been shown to bind over nine base pairs to a complementary sequence in the 3' UTR region of BCL2, which is an anti-apoptotic gene involved in an evolutionarily conserved pathway in programmed cell death. In the nasopharyngeal carcinoma cell line, miR-16 has been shown to target the 3' UTR of vascular endothelial growth factor (VEGF) and repress the expression of VEGF, which is an important angiogenic factor. Clinical relevance: Altered expression of microRNA-16 has been observed in cancer, including malignancies of the breast, colon, brain, lung, lymphatic system, ovaries, pancreas, prostate and stomach. This difference in expression levels can be used to distinguish between cancerous and healthy tissues and to determine clinical prognosis. The fact that pathology is associated with a different expression profile has led to the proposal that disease-specific biomarkers can provide potential targets for directed clinical intervention. More recently, there is evidence that in colorectal cancer the efficacy of treatment with the monoclonal antibody cetuximab can be assessed by the expression pattern of the colorectal carcinoma after therapy. miR-16 and miR-15a are clustered within a 0.5 kbp region in Chromosome 13 (13q14) in humans, a chromosomal region shown to be deleted or down-regulated in more than half of cases of B-CLL, the most prevalent form of leukemia in adults.
Carcinogenesis is a gradual process, involving multiple genetic mutations, thus every patient with malignancy presents with a heterogeneous population of cells. The fact that mir-16 microRNA loss is observed in a large proportion of cells indicates that the change occurred early in cancer development, and makes it a target for therapeutic intervention.
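Target relationships like the miR-16/BCL2 pairing described above are often screened computationally by searching a 3' UTR for sites complementary to the miRNA "seed" region (nucleotides 2-8). The sketch below finds perfect reverse-complement seed matches; both sequences are hypothetical placeholders rather than the actual miR-16 or BCL2 sequences, and real target prediction tools also weigh imperfect pairing, conservation and site context.

```python
# Find 3' UTR sites complementary to a miRNA seed (nucleotides 2-8 of the miRNA).
# Both sequences are hypothetical placeholders, not real miR-16 / BCL2 data.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def seed_matches(mirna, utr, seed_start=1, seed_len=7):
    """Return 0-based UTR positions that pair perfectly with the miRNA seed."""
    seed = mirna[seed_start:seed_start + seed_len]
    site = reverse_complement(seed)          # what the target site must look like
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna_placeholder = "UAGCAGCACGUAAAUAUUGGCG"          # hypothetical miRNA
utr_placeholder = "AAAUGCUGCUAUUCGCCAGUGCUGCUAA"      # hypothetical 3' UTR fragment

print(seed_matches(mirna_placeholder, utr_placeholder))   # e.g. [3, 19]
```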
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dover's powder** Dover's powder: Dover's powder was a traditional medicine against cold and fever developed by Thomas Dover. It is no longer in use in modern medicine, but may have been in use at least through the 1960s. Dover's powder: A 1958 source describes Dover's Powder as follows: "Powder of Ipecacuanha and Opium (B.P., Egyp. P., Ind. P.). Pulv. Ipecac. et Opii; Ipecac and Opium Powder (U.S.N.F.); Dover's Powder; Compound Ipecacuanha Powder. Prepared ipecacuanha, 10 g., powdered opium 10 g., lactose 80 g. It contains 1% of anhydrous morphine. Dose: 320 to 640 mg. (5 to 10 grains). Many foreign pharmacies include a similar powder, sometimes with potassium sulphate or with equal parts of potassium nitrate and potassium sulphate in place of lactose; max. single dose 1 to 1.5 g. and max. in 24 hours 4 to 6 g." Named after Doctor Thomas Dover, an English physician of the eighteenth century who first prepared it, the powder was an old preparation of powder of ipecacuanha (which was formerly used to produce syrup of ipecac), opium in powder, and potassium sulfate. The powder was largely used in domestic practice to induce sweating, to defeat the advance of a "cold" and at the beginning of any attack of fever. It was also known by the name pulvis ipecacuanhae et opii. Dover's powder: To obtain the greatest benefits from its use as a sudorific, it was recommended that copious drafts of some warm and harmless drink be ingested after the use of the powder. Dover's powder: The following excerpt from a report penned by a Doctor Sharp, employed in the British naval service in the West Indies, in this case, in Trinidad, in 1818, illustrates its use. He writes: At this period, thirty cases of acute dysentery also occurred amongst them and although nineteen of the number were men who arrived in the island from Europe on the 1st and 12th of June, yet, the symptoms even in them were equally as mild as in the assimilated soldier, and the disease yielded to the common remedies – viz – bleeding when the state of the vascular system appeared to indicate the use of it, but in general, saline purgatives in small and repeated quantities were only necessary with small doses at bed time, of calomel and opium, infusion of ipecacuanha or Dover’s powder, and this with tonics, moderate use of port wine and a light farinaceous diet generally and speedily accomplished a perfect case. India: Dover's powder was banned in India in 1994.
**Pontoon bridge** Pontoon bridge: A pontoon bridge (or ponton bridge), also known as a floating bridge, uses floats or shallow-draft boats to support a continuous deck for pedestrian and vehicle travel. The buoyancy of the supports limits the maximum load that they can carry. Pontoon bridge: Most pontoon bridges are temporary and used in wartime and civil emergencies. There are permanent pontoon bridges in civilian use that can carry highway traffic. Permanent floating bridges are useful for sheltered water crossings if it is not considered economically feasible to suspend a bridge from anchored piers. Such bridges can require a section that is elevated or can be raised or removed to allow waterborne traffic to pass. Pontoon bridge: Pontoon bridges have been in use since ancient times and have been used to great advantage in many battles throughout history, such as the Battle of Garigliano, the Battle of Oudenarde, the crossing of the Rhine during World War II, the Iran–Iraq War's Operation Dawn 8, and most recently, in the 2022 Russian invasion of Ukraine, after crossings over the Dnipro river had been destroyed. Definition: A pontoon bridge is a collection of specialized, shallow draft boats or floats, connected together to cross a river or canal, with a track or deck attached on top. The water buoyancy supports the boats, limiting the maximum load to the total and point buoyancy of the pontoons or boats. The supporting boats or floats can be open or closed, temporary or permanent in installation, and made of rubber, metal, wood, or concrete. The decking may be temporary or permanent, and constructed out of wood, modular metal, or asphalt or concrete over a metal frame. Definition: Etymology The spelling "ponton" in English dates from at least 1870. The use continued in references found in U.S. patents during the 1890s. It continued to be spelled in that fashion through World War II, when temporary floating bridges were used extensively throughout the European theatre. U.S. combat engineers commonly pronounced the word "ponton" rather than "pontoon" and U.S. military manuals spelled it using a single 'o'. The U.S. military differentiated between the bridge itself ("ponton") and the floats used to provide buoyancy ("pontoon"). The original word was derived from Old French ponton, from Latin ponto ("ferryboat"), from pons ("bridge"). Design: When designing a pontoon bridge, the civil engineer must take into consideration Archimedes' principle: Each pontoon can support a load equal to the mass of the water that it displaces. This load includes the mass of the bridge and the pontoon itself. If the maximum load of a bridge section is exceeded, one or more pontoons become submerged. Flexible connections have to allow for one section of the bridge to be weighted down more heavily than the other parts. The roadway across the pontoons should be relatively light, so as not to limit the carrying capacity of the pontoons. The connection of the bridge to shore requires the design of approaches that are not too steep, protect the bank from erosion and provide for movements of the bridge during (tidal) changes of the water level. Design: Floating bridges were historically constructed using wood. Pontoons were formed by simply lashing several barrels together, by rafts of timbers, or by using boats. Each bridge section consisted of one or more pontoons, which were maneuvered into position and then anchored underwater or on land. The pontoons were linked together using wooden stringers called balks.
The balks were covered by a series of cross planks called chesses to form the road surface, and the chesses were secured with side guard rails. Design: A floating bridge can be built in a series of sections, starting from an anchored point on the shore. Modern pontoon bridges usually use pre-fabricated floating structures. Most pontoon bridges are designed for temporary use, but bridges across water bodies with a constant water level can remain in place much longer. Hobart Bridge, a long pontoon bridge built in 1943 in Hobart, was only replaced after 21 years. The fourth Galata Bridge that spans the Golden Horn in Istanbul, Turkey was built in 1912 and operated for 80 years. Design: Provisional and lightweight pontoon bridges are easily damaged. The bridge can be dislodged or inundated when the load limit of the bridge is exceeded. The bridge can be induced to sway or oscillate in a hazardous manner from the swell, from a storm, a flood or a fast moving load. Ice or floating objects (flotsam) can accumulate on the pontoons, increasing the drag from river current and potentially damaging the bridge. See below for floating pontoon failures and disasters. Historic uses: Ancient China In ancient China, the Zhou Dynasty Chinese text of the Shi Jing (Book of Odes) records that King Wen of Zhou was the first to create a pontoon bridge in the 11th century BC. However, the historian Joseph Needham has pointed out that in all likelihood the temporary pontoon bridge was invented during the 9th or 8th century BC in China, as this part was perhaps a later addition to the book (considering how the book had been edited up until the Han Dynasty, 202 BC – 220 AD). Although earlier temporary pontoon bridges had been made in China, the first secure and permanent ones in China (linked with iron chains) came during the Qin Dynasty (221–207 BC). The later Song Dynasty (960–1279 AD) Chinese statesman Cao Cheng once wrote of early pontoon bridges in China (spelling of Chinese in Wade-Giles format): The Chhun Chhiu Hou Chuan says that in the 58th year of the Zhou King Nan (257 BC), there was invented in the Qin State the floating bridge (fou chhiao) with which to cross rivers. But the Ta Ming ode in the Shih Ching (Book of Odes) says (of King Wen) that he 'joined boats and made of them a bridge' over the River Wei. Sun Yen comments that this shows that the boats were arranged in a row, like the beams (of a house) with boards laid (transversely) across them, which is just the same as the pontoon bridge of today. Tu Yu also thought this. ... Cheng Khang Chheng says that the Zhou people invented it and used it whenever they had occasion to do so, but the Qin people, to whom they handed it down, were the first to fasten it securely together (for permanent use). Historic uses: During the Eastern Han Dynasty (25–220 AD), the Chinese created a very large pontoon bridge that spanned the width of the Yellow River. There was also the rebellion of Gongsun Shu in 33 AD, where a large pontoon bridge with fortified posts was constructed across the Yangtze River, eventually broken through with ramming ships by official Han troops under Commander Cen Peng. During the late Eastern Han into the Three Kingdoms period, during the Battle of Chibi in 208 AD, the Prime Minister Cao Cao once linked the majority of his fleet together with iron chains, which proved to be a fatal mistake when he was thwarted by a fire attack from Sun Quan's fleet.
Historic uses: The armies of Emperor Taizu of Song had a large pontoon bridge built across the Yangtze River in 974 in order to secure supply lines during the Song Dynasty's conquest of the Southern Tang. On October 22, 1420, Ghiyasu'd-Din Naqqah, the official diarist of the embassy sent by the Timurid ruler of Persia, Mirza Shahrukh (r. 1404–1447), to the Ming Dynasty of China during the reign of the Yongle Emperor (r. 1402–1424), recorded his sight and travel over a large floating pontoon bridge at Lanzhou (constructed earlier in 1372) as he crossed the Yellow River on this day. He wrote that it was: ... composed of twenty three boats, of great excellence and strength attached together by a long chain of iron as thick as a man's thigh, and this was moored on each side to an iron post as thick as a man's waist extending a distance of ten cubits on the land and planted firmly in the ground, the boats being fastened to this chain by means of big hooks. There were placed big wooden planks over the boats so firmly and evenly that all the animals were made to pass over it without difficulty. Historic uses: Greco-Roman era The Greek writer Herodotus, in his Histories, records several pontoon bridges. Emperor Caligula built a 2-mile (3.2 km) bridge at Baiae in 37 AD. For Emperor Darius I the Great of Persia (522–485 BC), the Greek Mandrocles of Samos once engineered a 2-kilometre (1.2 mi) pontoon bridge that stretched across the Bosporus, linking Asia to Europe, so that Darius could pursue the fleeing Scythians as well as move his army into position in the Balkans to overwhelm Macedon. Other spectacular pontoon bridges were Xerxes' Pontoon Bridges, built across the Hellespont by Xerxes I in 480 BC to transport his huge army into Europe: and meanwhile other chief-constructors proceeded to make the bridges; and thus they made them: They put together fifty-oared galleys and triremes, three hundred and sixty to be under the bridge towards the Euxine Sea, and three hundred and fourteen to be under the other, the vessels lying in the direction of the stream of the Hellespont (though crosswise in respect to the Pontus), to support the tension of the ropes. They placed them together thus, and let down very large anchors, those on the one side towards the Pontus because of the winds which blow from within outwards, and on the other side, towards the West and the Egean, because of the South-East and South Winds. They left also an opening for a passage through, so that any who wished might be able to sail into the Pontus with small vessels, and also from the Pontus outwards. Having thus done, they proceeded to stretch tight the ropes, straining them with wooden windlasses, not now appointing the two kinds of rope to be used apart from one another, but assigning to each bridge two ropes of white flax and four of the papyrus ropes. The thickness and beauty of make was the same for both, but the flaxen ropes were heavier in proportion, and of this rope a cubit weighed one talent. When the passage was bridged over, they sawed up logs of wood, and making them equal in length to the breadth of the bridge they laid them above the stretched ropes, and having set them thus in order they again fastened them above.
When this was done, they carried on brushwood, and having set the brushwood also in place, they carried on to it earth; and when they had stamped down the earth firmly, they built a barrier along on each side, so that the baggage-animals and horses might not be frightened by looking out over the sea. According to John Hale's Lords of the Sea, to celebrate the onset of the Sicilian Expedition (415–413 BC), the Athenian general, Nicias, paid builders to engineer an extraordinary pontoon bridge composed of gilded and tapestried ships for a festival that drew Athenians and Ionians across the sea to the sanctuary of Apollo on Delos. On the occasion when Nicias was a sponsor, young Athenians paraded across the boats, singing as they walked, to give the armada a spectacular farewell. The late Roman writer Vegetius, in his work De Re Militari, wrote: But the most commodious invention is that of the small boats hollowed out of one piece of timber and very light both by their make and the quality of the wood. The army always has a number of these boats upon carriages, together with a sufficient quantity of planks and iron nails. Thus with the help of cables to lash the boats together, a bridge is instantly constructed, which for the time has the solidity of a bridge of stone. Historic uses: The emperor Caligula is said to have ridden a horse across a pontoon bridge stretching two miles between Baiae and Puteoli while wearing the armour of Alexander the Great to mock a soothsayer who had claimed he had "no more chance of becoming emperor than of riding a horse across the Bay of Baiae". Caligula's construction of the bridge cost a massive sum of money and added to discontent with his rule. Historic uses: Middle Ages During the Middle Ages, pontoons were used alongside regular boats to span rivers during campaigns, or to link communities which lacked resources to build permanent bridges. The Hun army of Attila built a bridge across the Nišava during the siege of Naissus in 442 to bring heavy siege towers within range of the city. Sassanid forces crossed the Euphrates on a quickly built pontoon bridge during the siege of Kallinikos in 542. The Ostrogothic Kingdom constructed a fortified bridge across the Tiber during the siege of Rome in 545 to block Byzantine general Belisarius' relief flotillas to the city. The Avar Khaganate forced Syriac-Roman engineers to construct two pontoon bridges across the Sava during the siege of Sirmium in 580 to completely surround the city with their troops and siege works. Emperor Heraclius crossed the Bosporus on horseback on a large pontoon bridge in 638. The army of the Umayyad Caliphate built a pontoon bridge over the Bosporus in 717 during the siege of Constantinople (717–718). The Carolingian army of Charlemagne constructed a portable pontoon bridge of anchored boats bound together and used it to cross the Danube during campaigns against the Avar Khaganate in the 790s. Charlemagne's army built two fortified pontoon bridges across the Elbe in 789 during a campaign against the Slavic Veleti. The German army of Otto the Great employed three pontoon bridges, made from pre-fabricated materials, to rapidly cross the Recknitz river at the Battle on the Raxa in 955 and win decisively against the Slavic Obotrites. Tenth-century German Ottonian capitularies demanded that royal fiscal estates maintain watertight, river-fordable wagons for purposes of war. The Danish Army of Cnut the Great completed a pontoon bridge across the Helge River during the Battle of Helgeå in 1026.
Crusader forces constructed a pontoon bridge across the Orontes to expedite resupply during the siege of Antioch in December 1097. According to the chronicles, the earliest floating bridge across the Dnieper was built in 1115. It was located near Vyshhorod, Kiev. Bohemian troops under the command of Frederick I, Holy Roman Emperor crossed the Adige in 1157 on a pontoon bridge built in advance by the people of Verona on orders of the German Emperor. Historic uses: The French Royal Army of King Philip II of France constructed a pontoon bridge across the Seine to seize Les Andelys from the English at the siege of Château Gaillard in 1203. During the Fifth Crusade, the Crusaders built two pontoon bridges across the Nile at the siege of Damietta (1218–1219), including one supported by 38 boats. On 27 May 1234, Crusader troops crossed the river Ochtum in Germany on a pontoon bridge during the fight against the Stedingers. Imperial Mongol troops constructed a pontoon bridge at the Battle of Mohi in 1241 to outflank the Hungarian army. The French army of King Louis IX of France crossed the Charente on multiple pontoon bridges during the Battle of Taillebourg on 21 July 1242. Louis IX had a pontoon bridge built across the Nile to provide unimpeded access to troops and supplies in early March 1250 during the Seventh Crusade. Historic uses: A Florentine army erected a pontoon bridge across the Arno during the siege of Pisa in 1406. The English army of John Talbot, 1st Earl of Shrewsbury crossed the Oise in 1441 on a pontoon bridge of portable leather vessels. Ottoman engineers built a pontoon bridge across the Golden Horn during the siege of Constantinople (1453), using over a thousand barrels. The bridge was strong enough to support carts. The Ottoman Army constructed a pontoon bridge during the siege of Rhodes (1480). Venetian pioneers built a floating bridge across the Adige at the Battle of Calliano (1487). Historic uses: Early modern period Before the Battle of Worcester, the final battle of the English Civil War, on 30 August 1651, Oliver Cromwell delayed the start of the battle to give time for two pontoon bridges to be constructed, one over the River Severn and the other over the River Teme, close to their confluence. This allowed Cromwell to move his troops west of the Severn during the action on 3 September 1651 and was crucial to the victory by his New Model Army. Historic uses: The Spanish Army constructed a pontoon bridge at the Battle of Río Bueno in 1654. However, the bridge broke apart, and the engagement ended in a sound defeat of the Spanish by local Mapuche-Huilliche forces. Historic uses: French general Jean Lannes's troops built a pontoon bridge to cross the Po river prior to the Battle of Montebello (1800). Napoleon's Grande Armée made extensive use of pontoon bridges at the battles of Aspern-Essling and Wagram under the supervision of General Henri Gatien Bertrand. General Jean Baptiste Eblé's engineers erected four pontoon bridges in a single night across the Dnieper during the Battle of Smolensk (1812). Working in cold water, Eblé's Dutch engineers constructed a 100-meter-long pontoon bridge during the Battle of Berezina to allow the Grande Armée to escape to safety. During the Peninsular War the British army transported "tin pontoons": 353  that were lightweight and could be quickly turned into a floating bridge.
Historic uses: Lt Col Charles Pasley of the Royal School of Military Engineering at Chatham, England developed a new form of pontoon which was adopted in 1817 by the British Army. Each pontoon was split into two halves, and the two pointed ends could be connected together in locations with tidal flow. Each half was enclosed, reducing the risk of swamping, and the sections bore multiple lashing points. The "Pasley pontoon" lasted until 1836, when it was replaced by the "Blanshard pontoon", which comprised tin cylinders 3 feet wide and 22 feet long, placed 11 feet apart, making the pontoon very buoyant. The pontoon was tested with the Pasley pontoon on the Medway. An alternative proposed by Charles Pasley comprised two copper canoes, each 2 feet 8 inches wide and 22 feet long and coming in two sections which were fastened side by side to make a double canoe raft. Copper was used in preference to fast-corroding tin. Lashed at 10 foot centres, these were good for cavalry, infantry and light guns; lashed at 5 foot centres, heavy cannon could cross. The canoes could also be lashed together to form rafts. One cart pulled by two horses carried two half canoes and stores. A comparison of pontoons used by each nation's army shows that almost all were open boats coming in one, two or even three pieces, mainly wood, some with canvas and rubber protection. Belgium used an iron boat; the United States used cylinders split into three. In 1862 the Union forces commanded by Major General Ambrose Burnside were stuck on the wrong side of the Rappahannock River at the Battle of Fredericksburg because the pontoon train had failed to arrive, resulting in severe losses.: 115  The report of this disaster resulted in Britain forming and training a Pontoon Troop of Engineers.: 116–8 During the American Civil War various forms of pontoon bridges were tried and discarded. Wooden pontoons and India rubber bag pontoons shaped like a torpedo proved impractical until the development of cotton-canvas covered pontoons, which required more maintenance but were lightweight and easier to work with and transport. From 1864 a lightweight design known as Cumberland Pontoons, a folding boat system, was widely used during the Atlanta Campaign to transport soldiers and artillery across rivers in the South. In 1872 at a military review before Queen Victoria, a pontoon bridge was thrown across the River Thames at Windsor, Berkshire, where the river was 250 feet (76 m) wide. The bridge, comprising 15 pontoons held by 14 anchors, was completed in 22 minutes and then used to move five battalions of troops across the river. It was removed in 34 minutes the next day.: 122–124 At Prairie du Chien, Wisconsin, the Pile-Pontoon Railroad Bridge was constructed in 1874 over the Mississippi River to carry a railroad track connecting that city with Marquette, Iowa. Because the river level could vary by as much as 22 feet, the track was laid on an adjustable platform above the pontoons. This unique structure remained in use until the railroad was abandoned in 1961, when it was removed. Historic uses: The British Blanshard Pontoon stayed in use until the late 1870s, when it was replaced by the "Blood Pontoon". The Blood Pontoon returned to the open boat system, which enabled use as boats when not needed as pontoons. Side carrying handles helped transportation.
The new pontoon proved strong enough to support loaded elephants and siege guns as well as military traction engines.: 119 Early 20th century The British Blood Pontoon MkII, which took the original and cut it into two halves, was still in use with the British Army in 1924. The First World War saw developments on "trestles" to form the link between a river bank and the pontoon bridge. Some infantry bridges in WW1 used any material available, including petrol cans as flotation devices. The Kapok Assault Bridge for infantry was developed for the British Army, using kapok fibre-filled canvas floats and timber foot walks. America created its own version. Folding Boat Equipment was developed in 1928 and went through several versions until it was used in WW2 to complement the Bailey Pontoon. It had a continuous canvas hinge and could fold flat for storage and transportation. When assembled it could carry 15 men, and with two boats and some additional toppings it could transport a 3-ton truck. Further upgrades during WW2 resulted in it moving to a Class 9 bridge. World War II: Pontoon bridges were used extensively during World War II, mainly in the European Theater of Operations. The United States was the principal user, with Britain next. World War II: United States In the United States, combat engineers were responsible for bridge deployment and construction. These were formed principally into Engineer Combat Battalions, which had a wide range of duties beyond bridging, and specialized units, including Light Ponton Bridge Companies, Heavy Ponton Bridge Battalions, and Engineer Treadway Bridge Companies; any of these could be organically attached to infantry units or directly at the divisional, corps, or army level. American engineers built three types of floating bridges: M1938 infantry footbridges, M1938 ponton bridges, and M1940 treadway bridges, with numerous subvariants of each. These were designed to carry troops and vehicles of varying weight, using either an inflatable pneumatic ponton or a solid aluminum-alloy ponton bridge. Both types of bridges were supported by pontons (known today as "pontoons") fitted with a deck built of balks, which were square, hollow aluminum beams. World War II: American Light Ponton Bridge Company An Engineer Light Ponton Company consisted of three platoons: two bridge platoons, each equipped with one unit of M3 pneumatic bridge, and a lightly equipped platoon which had one unit of footbridge and equipment for ferrying. The bridge platoons were equipped with the M3 pneumatic bridge, which was constructed of heavy inflatable pneumatic floats and could handle up to 10 short tons (9.1 t); this was suitable for all normal infantry division loads without reinforcement, and greater loads with it. World War II: American Heavy Ponton Bridge Battalion A Heavy Ponton Bridge Battalion was provided with equipage required to provide stream crossing for heavy military vehicles that could not be supported by a light ponton bridge. The Battalion had two lettered companies of two bridge platoons each. Each platoon was equipped with one unit of heavy ponton equipage. The battalion was an organic unit of army and higher echelons. The M1940 could carry up to 25 short tons (23 t). The M1 Treadway Bridge could support up to 20 short tons (18 t). The roadway, made of steel, could carry up to 50 short tons (45 t), while the center section made of 4 inches (100 mm) thick plywood could carry up to 30 short tons (27 t).
The wider, heavier tanks used the outside steel treadway while the narrower, lighter jeeps and trucks drove across the bridge with one wheel in the steel treadway and the other on the plywood. World War II: American Engineer Treadway Bridge Company An Engineer Treadway Bridge Company consisted of company headquarters and two bridge platoons. It was an organic unit of the armored force, and normally was attached to an Armored Engineer Battalion. Each bridge platoon transported one unit of steel treadway bridge equipage for construction of ferries and bridges in river-crossing operations of the armored division. Stream-crossing equipment included utility powerboats, pneumatic floats, and two units of steel treadway bridge equipment, each of which allowed the engineers to build a floating bridge about 540 feet (160 m) in length. World War II: Materials and equipment Pneumatic ponton The United States Army Corps of Engineers designed a self-contained bridge transportation and erection system. The Brockway model B666 6 short tons (5.4 t) 6x6 truck chassis (also built under license by Corbitt and White) was used to transport both the bridge's steel and rubber components. A single Brockway truck could carry material for 30 feet (9.1 m) of bridge, including two pontons, two steel saddles that were attached to the pontons, and four treadway sections. Each treadway was 15 feet (4.6 m) long with high guardrails on either side of the 2 feet (0.61 m) wide track. The truck was mounted with a 4 short tons (3.6 t) hydraulic crane that was used to unload the 45 inches (110 cm) wide steel treadways. A custom designed twin boom arm was attached to the rear of the truck bed and helped unroll and place the heavy inflatable rubber pontoons upon which the bridge was laid. The 220 inches (560 cm) wheelbase chassis included a 25,000 pounds (11,000 kg) front winch and extra-large air-brake tanks that also served to inflate the rubber pontoons before they were placed in the water. A pneumatic float was made of rubberized fabric separated by bulkheads into 12 airtight compartments and inflated with air. The pneumatic float consisted of an outer perimeter tube, a floor, and a removable center tube. The 18 short tons (16 t) capacity float was 8 feet 3 inches (2.51 m) wide, 33 feet (10 m) long, 2 feet 9 inches (0.84 m) deep. World War II: Solid ponton Solid aluminum-alloy pontons were used in place of pneumatic floats to support heavier bridges and loads. They were also pressed into service for lighter loads as needed. World War II: Treadway A treadway bridge was a multi-section, prefabricated floating steel bridge supported by pontoons carrying two metal tracks (or "tread ways") forming a roadway. Depending on its weight class, the treadway bridge was supported either by heavy inflatable pneumatic pontons or by aluminum-alloy half-pontons. The aluminum half-pontons were 29 feet 7 inches (9.02 m) long overall, 6 feet 11 inches (2.11 m) wide at the gunwales, and 3 feet 4 inches (1.02 m) deep except at the bow where the gunwale was raised. The gunwales were 6 feet 8 inches (2.03 m) center-to-center. At 6 inches (150 mm) freeboard, the half-ponton had a displacement of 26,500 pounds (12,000 kg). The sides and bow of the half-ponton sloped inward, permitting two or more to be nested for transporting or storing. A treadway bridge could be built of floating spans or fixed spans. An M2 treadway bridge was designed to carry artillery, heavy duty trucks, and medium tanks up to 40 short tons (36 t).
This could be of any length, and was what was used over major river obstacles such as the Rhine and Moselle. Doctrine stated that it would take 5 1/2 hours to place a 362-foot section of M2 treadway during daylight and 7 1/2 hours at night. Pergrin says that in practice 50 ft/hour of treadway construction was expected, which is a little slower than the speed specified by doctrine. By 1943, combat engineers faced the need for bridges to bear weights of 35 tons or more. To increase weight bearing capacity, they used bigger floats to add buoyancy. This overcame the capacity limitation, but the larger floats were both more difficult to transport to the crossing site and required more and larger trucks in the divisional and corps trains. World War II: Britain Donald Bailey invented the Bailey bridge, which was made up of modular, pre-fabricated steel trusses capable of carrying up to 40 short tons (36 t) over spans up to 180 feet (55 m). While typically constructed point-to-point over piers, they could be supported by pontoons as well. The Bailey bridge was used for the first time in 1942. The first version put into service was a Bailey Pontoon and Raft with a 30 feet (9.1 m) single-single Bailey bay supported on two pontoons. A key feature of the Bailey Pontoon was the use of a single span from the bank to the bridge level which eliminated the need for bridge trestles. For lighter vehicle bridges the Folding Boat Equipment could be used and the Kapok Assault Bridge was available for infantry. Another British wartime invention was an open-sea type of pontoon, the Mulberry harbours, which were floated across the English Channel to provide harbours for the June 1944 Allied invasion of Normandy. The dock piers were code named "Whale". These piers were the floating roadways that connected the "Spud" pier heads to the land. These pier heads or landing wharves, at which ships were unloaded, each consisted of a pontoon with four legs that rested on the sea bed to anchor the pontoon, yet allowed it to float up and down freely with the tide. "Beetles" were pontoons that supported the "Whale" piers. They were moored in position using wires attached to "Kite" anchors which were also designed by Allan Beckett. These anchors had a high holding power as was demonstrated in the D+13 Normandy storm where the British Mulberry survived most of the storm damage whereas the American Mulberry, which only had 20% of its Kite Anchors deployed, was destroyed. World War II: Gallery: Pontoon bridges during World War II Modern military uses: Pontoon bridges were extensively used by both armies and civilians throughout the latter half of the 20th century. Modern military uses: From the post-war period into the early 1980s the U.S. Army and its NATO and other allies employed three main types of pontoon bridge/raft. The M4 bridge featured a lightweight aluminum balk deck supported by rigid aluminum hull pontoons. The M4T6 bridge used the same aluminum balk deck as the M4, but supported instead by inflatable rubber pontoons. The Class 60 bridge consisted of a more robust steel girder and grid deck supported by inflatable rubber pontoons. All three pontoon bridge types were cumbersome to transport and deploy, and slow to assemble, encouraging the development of an easier to transport, deploy and assemble floating bridge.
Modern military uses: Amphibious float bridges Several alternatives featured a self-propelled amphibious integrated transporter, floating pontoon, and bridge deck section that could be delivered and assembled in the water under its own power, linking as many units as required to bridge a gap or form a raft ferry. Modern military uses: An early example was the Engin de Franchissement de l'Avant (EFA), an amphibious forward crossing apparatus (mobile bridge) conceived by French General Jean Gillois in 1955. The system consisted of a wheeled amphibious truck equipped with inflatable outboard flotation sponsons and a rotating vehicle bridge deck section. The system was developed by the West German firm Eisenwerke Kaiserslautern (EWK) and was put into production by the French-German consortium Pontesa. The EFA system was first deployed by the French Army in 1965, and subsequently by the West German Bundeswehr, British Army, and on a very limited basis by the U.S. Army, where it was referred to as Amphibious River Crossing Equipment (ARCE). Production ended in 1973. The EFA was used in combat by the Israel Defense Forces (IDF), which employed former U.S. Army equipment to cross the Suez Canal in their counterattack into Egypt during the Yom Kippur War of 1973. Modern military uses: EWK further developed the EFA system into the M2 "Alligator" Amphibious Bridging Vehicle equipped with fold-out aluminum flotation pontoons, which was produced from 1967 to 1970 and sold to the West German, British and Singapore militaries. The M2 was followed by the revised M3 version, entering service in 1996 with Germany, Britain, Taiwan and Singapore. The M3 was used in combat by British Forces during the Iraq War. More recently, Turkey has developed a similar system in the FNSS Samur wheeled amphibious assault bridge, while the Russian PMM-2 and Chinese GZM003 armoured amphibious assault bridges ride on tracks. Modern military uses: A similar amphibious system, the Mobile Floating Assault Bridge-Ferry (MFAB-F), was developed in the U.S. by Chrysler between 1959 and 1962. As with the French EFA, the MFAB-F consisted of an amphibious truck with a rotating bridge deck section, but there were no outboard flotation sponsons. The MFAB-F was first deployed by the U.S. Army in 1964 and later by Belgium. An improved version was produced by FMC from 1970 to 1976. The MFAB-F remained in service into the early 1980s before being replaced by a simpler continuous pontoon or "ribbon bridge" system. Modern military uses: Ribbon float bridges In the early Cold War period the Soviet Red Army began development of a new kind of continuous pontoon bridge made up of short folding sections or bays that could be transported and deployed rapidly, automatically unfold in the water, and quickly be assembled into a floating bridge of variable length. Known as the PMP Folding Float Bridge, it was first deployed in 1962 and subsequently adopted by Warsaw Pact countries and other states employing Soviet military equipment. The PMP proved its viability in combat when it was used by Egyptian forces to cross the Suez Canal in 1973. Operation Badr, which opened the Yom Kippur War between Egypt and Israel, involved the erection of at least 10 pontoon bridges to cross the Canal. Modern military uses: Beginning in 1969 the U.S. Army Mobility Equipment Research and Development Command (MERADCOM) reverse-engineered the Russian PMP design to develop the improved float bridge (IFB), later known as the standard ribbon bridge (SRB).
The IFB/SRB was type-classified in 1972 and first deployed in service in 1976. It was very similar to the PMP but was constructed of lightweight aluminum instead of heavier steel. Modern military uses: In 1977 the West German Bundeswehr decided to adopt the SRB with some modifications and improvements, entering service in 1979 as the Faltschwimmbrücke, or Foldable Floating Bridge (FSB). Work on designing an improved version of the U.S. SRB incorporating features of the German FSB began in the 1990s, with first deployment by the U.S. Army in the early 2000s as the improved ribbon bridge (IRB). Modern military uses: In addition to the U.S. and Germany, the IFB/SRB/FSB/IRB has been adopted by the Armed Forces of Australia, Brazil, Canada, the Netherlands, Portugal, South Korea and Sweden, among others. Modern military uses: Yugoslav wars During the Yugoslav wars of the 1990s, the Maslenica Bridge was destroyed and a short pontoon bridge was built by Croatian civilian and military authorities in July 1993 over a narrow sea outlet in the town of Maslenica, after the territory was retaken from Serbian Krajina. Between 1993 and 1995 the pontoon served as one of the two operational land links toward Dalmatia and Croat- and Bosnian Muslim-held areas of Bosnia-Herzegovina that did not go through Serb-held territory. In 1995 the 502nd and 38th Engineer Companies of the U.S. Army's 130th Engineer Brigade, and the 586th Engineer Company from Ft. Benning, GA, operating as part of IFOR, assembled a standard ribbon bridge under adverse weather conditions across the Sava River near Županja (between Croatia and Bosnia), with a total length of 2,034 feet (620 m). It was dismantled in 1996. Modern military uses: Iran–Iraq war Numerous pontoon bridges were constructed by the Iranians and Iraqis to cross the various rivers and marshes alongside the Iraqi border. Notable instances include one constructed over the Karkheh river to ambush Iraqi armor during Operation Nasr, and another used to cross marshes during Operation Dawn 8. They were extremely prominent because they allowed tanks and transports to cross rivers. Modern military uses: Invasion of Iraq The United States Army's 299th Multi-role Bridge Company (USAR) deployed a standard ribbon bridge across the Euphrates river at Objective Peach near Al Musayib on the night of 3 April 2003. The 185-meter bridge was built to support retrograde operations because of the heavy-armor traffic crossing a partially destroyed adjacent highway span. "By dawn on 4 April 2003, the 299th Engineer Company had emplaced a 185-meter long Assault Float Bridge—the first time in history that a bridge of its type was built in combat." This took place during the 2003 invasion of Iraq by American and British forces. That same night, the 299th also constructed a 40-metre (130 ft) single-story Medium Girder Bridge to patch the damage done to the highway span. The 299th was part of the U.S. Army's 3rd Infantry Division as they crossed the border into Iraq on 20 March 2003. Modern military uses: Syrian civil war In February 2018, pro-regime fighters used a pontoon bridge to cross the Euphrates river during the Battle of Khasham. Eastern Ukraine offensive In May 2022, Ukrainian forces repelled an attempted Russian military crossing of the Donets river, west of Sievierodonetsk in Luhansk Oblast, during the Eastern Ukraine offensive. At least one Russian battalion tactical group was reportedly destroyed, as well as the pontoon bridge deployed in the crossing.
Permanent pontoon bridges in civilian use: This design for bridges is also used for permanent bridges designed for highway traffic, pedestrian traffic and bicycles, with sections for boats to ply the waterway being crossed. Seattle in the United States and Kelowna in British Columbia, Canada are two places with permanent pontoon bridges; see the William R. Bennett Bridge in British Columbia and these in Seattle: Lacey V. Murrow Memorial Bridge, Evergreen Point Floating Bridge and Homer M. Hadley Memorial Bridge. There are five pontoon bridges across the Suez Canal. Nordhordland Bridge is a combined cable-stayed and pontoon highway bridge in Norway. Failures and disasters: The Saint Isaac's Bridge across the Neva River in Saint Petersburg suffered two disasters, one natural, a gale in 1733, and then a fire in 1916. Failures and disasters: Floating bridges can be vulnerable to inclement weather, especially strong winds. The U.S. state of Washington is home to some of the longest permanent floating bridges in the world, and two of these failed in part due to strong winds. In 1979, the longest floating bridge crossing salt water, the Hood Canal Bridge, was subjected to winds of 80 miles per hour (130 km/h), gusting up to 120 miles per hour (190 km/h). Waves of 10–15 feet (3.0–4.6 m) battered the sides of the bridge, and within a few hours the western 3⁄4 mile (1.2 km) of the structure had sunk. It has since been rebuilt. Failures and disasters: In 1990, the 1940 Lacey V. Murrow Memorial Bridge was closed for renovations. Specifically, the sidewalks were being removed to widen the traffic lanes to the standards mandated by the Interstate Highway System. Engineers realized that jackhammers could not be employed to remove the sidewalks without the risk of compromising the structural integrity of the entire bridge. As such, a unique process called hydrodemolition was employed, in which powerful jets of water are used to blast away concrete, bit by bit. The water used in this process was temporarily stored in the hollow chambers in the pontoons of the bridge in order to prevent it from contaminating the lake. During a week of rain and strong winds, the watertight doors were not closed and the pontoons filled with water from the storm, in addition to the water from the hydrodemolition. The inundated bridge broke apart and sank. The bridge was rebuilt in 1993. Failures and disasters: A minor disaster occurs if anchors or connections between the pontoon bridge segments fail. This may happen because of overloading, extreme weather or flood. The bridge disintegrates and parts of it start to float away. Many cases are known. When the Lacey V. Murrow Memorial Bridge sank, it severed the anchor cables of the bridge parallel to it. A powerful tugboat pulled on that bridge against the wind during a subsequent storm, and prevented further damage.
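The load limits discussed in the Design and Failures sections above follow directly from Archimedes' principle, and a quick sizing estimate can make the numbers concrete. The sketch below is illustrative only: it treats a float as a simple rectangular box (which overstates the volume of a real, rounded float), reuses the World War II pneumatic-float dimensions quoted earlier, and assumes a placeholder value for the float's own mass, which the text does not give.

```python
# Archimedes' principle applied to one bridge float: the float can carry a
# payload equal to the mass of water it displaces, minus its own mass.

WATER_DENSITY = 1000.0  # kg per cubic metre (fresh water)

def float_payload_kg(width_m: float, length_m: float, immersed_depth_m: float,
                     float_mass_kg: float) -> float:
    """Payload one box-shaped float can support at the given immersion depth."""
    displaced_volume_m3 = width_m * length_m * immersed_depth_m
    displaced_water_kg = displaced_volume_m3 * WATER_DENSITY
    return displaced_water_kg - float_mass_kg

if __name__ == "__main__":
    # Dimensions quoted above for the WWII pneumatic float: 2.51 m x 10 m x 0.84 m.
    # The 500 kg float mass is an assumed placeholder, not a documented figure.
    payload = float_payload_kg(2.51, 10.0, 0.84, 500.0)
    print(f"Payload at full immersion: {payload:,.0f} kg "
          f"(about {payload / 907.2:.1f} short tons)")
    # Keeping some freeboard in reserve brings this down toward the quoted
    # 18-short-ton (16 t) rating for that float; exceeding it submerges the
    # pontoon, which is exactly the overload failure mode described above.
```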
**Reabsorption** Reabsorption: In renal physiology, reabsorption or tubular reabsorption is the process by which the nephron removes water and solutes from the tubular fluid (pre-urine) and returns them to the circulating blood. It is called reabsorption (and not absorption) because these substances have already been absorbed once (particularly in the intestines) and the body is reclaiming them from a postglomerular fluid stream that is on its way to becoming urine (that is, they will soon be lost to the urine unless they are reabsorbed from the tubule into the peritubular capillaries). This happens as a result of sodium transport from the lumen into the blood by the Na+/K+-ATPase in the basolateral membrane of the epithelial cells. Thus, the glomerular filtrate becomes more concentrated, which is one of the steps in forming urine. Nephrons are divided into five segments, with different segments responsible for reabsorbing different substances. Reabsorption allows many useful solutes (primarily glucose and amino acids), salts and water that have passed through Bowman's capsule to return to the circulation. These solutes are reabsorbed isotonically, in that the osmotic potential of the fluid leaving the proximal convoluted tubule is the same as that of the initial glomerular filtrate. However, glucose, amino acids, inorganic phosphate, and some other solutes are reabsorbed via secondary active transport through cotransport channels driven by the sodium gradient. Reabsorption: Renin–angiotensin system: The kidneys sense low blood pressure and release renin into the blood. Renin causes production of angiotensin I. Angiotensin-converting enzyme (ACE) converts angiotensin I to angiotensin II. Angiotensin II stimulates the release of aldosterone, ADH, and thirst. Aldosterone causes kidneys to reabsorb sodium; ADH increases the uptake of water. Water follows sodium. As blood volume increases, pressure also increases.
**Epigenetic theories of homosexuality** Epigenetic theories of homosexuality: Epigenetic theories of homosexuality concern the studies of changes in gene expression or cellular phenotype caused by mechanisms other than changes in the underlying DNA sequence, and their role in the development of homosexuality. Epigenetics examines the set of chemical reactions that switch parts of the genome on and off at strategic times and locations in the organism's life cycle. However, epigenetic theories entangle a multiplicity of initiating causes and resulting final effects and will never lead to a single cause or a single result. Hence, any interpretation of such theories should not focus on just one isolated cause or effect among the many. Instead of affecting the organism's DNA sequence, non-genetic factors may cause the organism's genes to express themselves differently. DNA in the human body is wrapped around histones, which are proteins that package and order DNA into structural units. DNA and histones are covered with chemical tags known as the epigenome, which shapes the physical structure of the genome. It tightly wraps inactive genes on the DNA sequence, making those genes unreadable, while loosely wrapping active genes, making them more readily expressed. The more tightly wrapped the gene, the less it will be expressed in the organism. These epigenetic tags react to stimuli presented from the outside world. They adjust specific genes in the genome to respond to humans' rapidly changing environments. The idea of epigenetics and gene expression has been a theory applied to the origins of homosexuality in humans. One team of researchers examined the effects of epi-marks buffering XX fetuses and XY fetuses from certain androgen exposure and used published data on fetal androgen signaling and gene regulation through non-genetic changes in DNA packaging to develop a new model for homosexuality. The researchers found that stronger than average epi-marks, epigenomes that are wrapped tightly around the DNA sequence, can alter sexual preference in individuals without altering genitalia or sexual identity. However, a later study found that male homosexuality is not linked to low androgen sensitivity or "sex-reversed" epi-marks. Epigenetic marks: Epigenetic marks (epi-marks) are temporary "switches" that control how our genes are expressed during gestation and after birth. Epi-marks are modifications of histone proteins, specifically of the methyl and acetyl groups that bind to DNA histones, thereby changing how the histones function and, as a result, altering gene expression. Epigenetic marks promote normal sexual development during fetal development. However, they can be passed on to offspring through the process of meiosis. When they are transferred from one parent to an offspring of the opposite sex, they can contribute to altered sexual development, thus leading to masculinization of female offspring and feminization of male offspring. However, the strength of these epi-marks varies between individuals. Twin studies: Identical twins have nearly identical DNA, which might lead to the expectation that identical twin pairs would always share the same sexual orientation. However, it is evident that this is not the case, consequently leaving a gap in the explanation for homosexuality. A "gay" gene does not produce homosexuality.
Rather, epigenetic modifications act as temporary "switches" that regulate how the genes are expressed. Of the pairs of identical twins in which one twin is homosexual, the other twin, despite having the same genome, only has a 20-50% chance of being homosexual as well. This leads to the hypothesis that homosexuality is created by something other than the genes alone. Epigenetic transformation allows the on and off switching of certain genes, subsequently shaping how cells respond to androgen signaling, which is critical in sexual development. Twin studies: Another example of epigenetic consequences is evident in multiple sclerosis in monozygotic (identical) twins. There are pairs of twins that are discordant for multiple sclerosis and do not both show the trait. After gene testing, it was suggested that the DNA was identical and that epigenetic differences contributed to the difference in gene expression between the identical twins. Effects of fetal androgen exposure: During the fetal stages, hormonal influences of androgens, specifically testosterone, shape sexual development, producing feminine qualities in females and masculine qualities in males. In typical sexual development, females are exposed to minimal amounts of testosterone, thus feminizing their sexual development, while males are typically exposed to high levels of testosterone, which masculinize their development. Epi-marks play a critical role in this development by acting as a buffer between the fetus and androgen exposure. Moreover, they predominantly protect XY fetuses from androgen underexposure while protecting XX fetuses from androgen overexposure. However, when androgen overexposure happens in XX fetuses, research suggests they can show masculinized behavior in comparison to females who undergo normal levels of androgen exposure. The research also suggests that females exposed to excess androgen showed reduced heterosexual interest in adulthood compared with females exposed to normal levels of androgen. Heritability: New epi-marks are usually produced with each generation, but these marks sometimes carry over between generations. Sex-specific epi-marks are produced in early fetal development that protect each sex from the natural disparity in testosterone that occurs during later stages of fetal development. Different epi-marks protect different sex-specific traits from being masculinized or feminized: some affect the genitals, others affect sexual identity, and yet others affect sexual preference. However, when these epi-marks are transmitted across generations from fathers to daughters or mothers to sons, they may cause reversed effects, such as the feminization of some traits in sons and similarly a partial masculinization of daughters. Furthermore, the reversed effects of feminization and masculinization can lead to a reversed sexual preference. For example, sex-specific epi-marks normally prevent female fetuses from being masculinized through exposure to atypically high testosterone, and vice versa for male fetuses. Sex-specific epi-marks are normally erased and not passed between generations. However, they can sometimes escape erasure and are then transferred from a father's genes to a daughter or from a mother's genes to a son. When this happens, this may lead to an altered sexual preference. Epi-marks normally protect parents from variation in sex hormone levels during fetal development, but can carry over across generations and subsequently lead to homosexuality in opposite-sex offspring.
This demonstrates that genes coding for these epi-marks can spread in the population because they benefit the development and fitness of the parent but only rarely escape erasure, leading to same-sex sexual preference in offspring. Limitations of the hypothesis: Epigenetic explanations for sexual orientation are still purely speculative. W. Rice and colleagues say that they "cannot provide definitive evidence that homosexuality has an epigenetic underpinning". Tuck C. Ngun and Eric Vilain published a paper in 2014 in which they evaluated and critiqued the epigenetic model proposed by Rice and colleagues in 2012. Ngun and Vilain agreed with much of Rice's model, but disagreed that "sex-reversing sensitivity to androgen signaling via epigenetic markers will result in homosexuality in both sexes", saying that there is no evidence that same-sex attraction in men is linked to low androgenic receptivity. Limitations of the hypothesis: Also, a report of a study of 34 male monozygotic twin pairs discordant for sexual orientation revealed no support for the epigenetic hypothesis.
**Longevity medicine (aging)** Longevity medicine (aging): Longevity medicine is a set of preventive healthcare practices that rely on biomarkers of aging, such as aging clocks, to keep the patient's biological and psychological age as near to peak performance as feasible throughout life. Biogerontology and precision medicine are some of the related fields. As of the early 2020s it is a "fast developing field", according to an article in a Lancet specialty journal. Longevity medicine (aging): In the first decade of the 21st century, what was called "age management medicine" was considered a field of alternative medicine, and, as of 2007, was not recognized by the American Medical Association. Other names at this time included "antiaging medicine" and "regenerative medicine". Age management medicine is controversial. The field is underregulated and supported by insufficient scientific evidence. People who practice it open themselves up to legal liability on grounds of negligence–malpractice, warranty issues, and product liability. The use of growth hormone has been frequently recommended; however, such use is associated with cancer. Longevity medicine (aging): Age management medicine is often promoted by anti-aging practitioners specializing in nutritional supplements and hormone-replacement, a practice that may lead to harmful side-effects.
**Michaelis–Gutmann bodies** Michaelis–Gutmann bodies: Michaelis–Gutmann bodies (M-G bodies) are concentrically layered basophilic inclusions found in Hansemann cells in the urinary tract. They are 2 to 10 μm in diameter, and are thought to represent remnants of phagosomes mineralized by iron and calcium deposits. M-G bodies are a pathognomonic feature of malakoplakia, an inflammatory condition that affects the genitourinary tract. They were discovered in 1902 by Leonor Michaelis and Carl Gutmann.
**Simeon (email client)** Simeon (email client): Simeon was an IMAP4 email client by The Esys Corporation with support for IMSP and LDAP. Simeon was available for several platforms, including Windows (3.x, 95 and NT), Macintosh (both 68k and PowerPC), and multiple Unix variants. Although Simeon was commended for its rich features as an early IMAP client, its interface was regarded as more complex to use than those of POP-based mail clients. Lack of advanced filtering of mail and inability to easily manage multiple mail accounts (Simeon required editing of configuration files) were also criticized. Simeon was the default email client installed on Heriot-Watt University's IT infrastructure. It was also formerly used at the University of East Anglia.
**Media Block** Media Block: A Media Block or Integrated Media Block is a component in a digital cinema projection system. Its purpose is to convert the Digital Cinema Package (DCP) content into data that ultimately produces picture and sound in a theater in compliance with DCI anti-piracy encryption requirements. Terminology: The DCI specification allows for two different security system architectures. Terminology: In the first, the Media Block is outside of the projector. This design is simply referred to as a "Media Block" and is typically a device attached directly to the motherboard of a Digital Cinema server. The media block is usually connected to the projector by dual-link SDI cables. Such a media block is limited to processing 2K output, downscaling 4K DCPs if necessary. Terminology: The second architecture describes an "Integrated Media Block" (IMB). This refers to a device attached and integrated directly into the projector, which receives image data from the server, usually via a cat6 Ethernet connection. These can process 2K and 4K output. Some hardware implementations integrate the entire server on a single board and are able to work both as an MB and as an IMB. Security Features: Upon ingestion into a DCP server, KDMs are stored on flash memory in the media block or IMB. A KDM is written to enable the playback of a specific DCP during a specific time window and on a specific media block or IMB, identified by its serial number during the authoring process. Media blocks and IMBs also contain a secure clock that is set at the factory and cannot be altered by the end user, which the DCP servers to which they are attached use to determine showtimes. The secure clock prevents theaters from showing encrypted movies outside the times authorized by the KDM (e.g. after it has expired), since simply changing the date and time in the server's BIOS has no effect on it. Media blocks and IMBs also typically include anti-tamper devices, designed to render the unit inoperable if unauthorized modification of its hardware, software or secure clock is attempted.
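The playback restrictions described above reduce, at show time, to a pair of checks: the KDM must have been issued for this media block's serial number and this DCP, and the secure clock must fall inside the KDM's validity window. A minimal sketch of that logic, using hypothetical, simplified field names rather than the real signed-XML KDM structure or any actual server API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class KDM:
    # Hypothetical, simplified fields; a real KDM is a signed XML document.
    dcp_id: str
    media_block_serial: str
    not_valid_before: datetime
    not_valid_after: datetime

def playback_allowed(kdm: KDM, dcp_id: str, serial: str,
                     secure_clock_now: datetime) -> bool:
    """True only if this KDM authorises this DCP on this media block right now."""
    if kdm.dcp_id != dcp_id:
        return False          # KDM was issued for a different composition
    if kdm.media_block_serial != serial:
        return False          # KDM targets another media block / IMB
    # The time is read from the tamper-protected secure clock, not the server
    # BIOS, so resetting the BIOS date cannot revive an expired KDM.
    return kdm.not_valid_before <= secure_clock_now <= kdm.not_valid_after
```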
**AirQ+** AirQ+: AirQ+ is free software for Windows and Linux operating systems developed by the World Health Organization (WHO) Regional Office for Europe. The program calculates the magnitude of several health effects associated with exposure to the most relevant air pollutants in a given population. AirQ+ has been used in the BreatheLife campaign and in numerous studies aimed at measuring long-term exposure to ambient particulate matter PM2.5. The first version of the program, AirQ, was distributed as a Microsoft Excel spreadsheet program in 1999, followed by another version of AirQ for Windows in 2000. A substantial difference between AirQ and AirQ+ is that AirQ+ contains a new graphical user interface with several help texts and various features to input and analyse data and illustrate results. AirQ+ version 1.3 was released in October 2018, version 2.0 in November 2019 and version 2.1 in May 2021. It is available in English, French, German and Russian. Purpose: AirQ+ is intended as a tool to ascertain the magnitude of the burden and impacts of air pollution on health in a given locality. It performs this function by featuring data analysis, graphing tools, tables and quantitative information for prominent pollutants such as particulate matter (PM), nitrogen dioxide (NO2), and tropospheric ozone (O3). AirQ+ also has the capacity to perform calculations for black carbon (BC) and provides rough estimates of impacts of household (indoor) air pollution on health. AirQ+ can be applied to long- and short-term exposure to ambient air pollution and to long-term household air pollution exposure caused by solid fuel use. Data input: For most prominent air pollutants, the user needs to input the following data: air quality data (concentration of air pollutants); relative risk (RR) values for the pollutant being assessed (source: epidemiological studies; default values are provided); data for population at risk (population distribution); health data (the health effect in question, like mortality); a concentration cut-off value for consideration. For household (indoor) air pollution, the user needs to provide the following input: relative risk (RR) values; data for population at risk; health data; percentage of solid fuel use. A minimum working knowledge of epidemiological concepts, in particular exposure–response relationship, relative risk, attributable risk and life table calculations is required to run the software. AirQ+ includes default values users can use for running impact assessments. Users: Users include students, scientists, environmental experts, decision-makers, planners, and policy analysts. Advanced users can customize runtime parameters to meet their needs. Related software: Other online software tools that calculate the impacts of air pollution include BenMAP, developed by the United States Environmental Protection Agency.
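Tools of this kind typically combine the inputs listed above in a standard attributable-fraction calculation: the relative risk is scaled to the exposure above the cut-off, and the resulting population attributable fraction is applied to the baseline health data. The sketch below illustrates that arithmetic only; it is not AirQ+'s actual code or interface, and the log-linear risk form and the example numbers are assumptions made for illustration.

```python
import math

def attributable_cases(concentration: float, cutoff: float,
                       rr_per_10: float, baseline_cases: float) -> float:
    """Cases attributable to exposure above the cut-off.

    rr_per_10 is the relative risk per 10 ug/m3 increase, applied log-linearly;
    this functional form is a common choice but is assumed here, not taken
    from AirQ+ documentation.
    """
    excess = max(concentration - cutoff, 0.0)
    rr = math.exp(math.log(rr_per_10) * excess / 10.0)  # RR at the observed excess
    paf = (rr - 1.0) / rr                               # population attributable fraction
    return paf * baseline_cases

# Made-up example: 25 ug/m3 annual PM2.5, 10 ug/m3 cut-off,
# RR of 1.08 per 10 ug/m3, and 2,000 baseline deaths in the population.
print(round(attributable_cases(25.0, 10.0, 1.08, 2000.0)))  # roughly 218
```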
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ADDML** ADDML: Archival Data Description Mark-up Language (ADDML) is a standard describing a collection of data files. The standard was originally developed by the National Archives of Norway (NAN), and existed in several different versions until a constant form was reached with 8.2, the present de facto standard. Scope: ADDML is a standard describing a collection of data files organised as flat files. A flat file in this context is a file existing as plain text, internally organised either by fixed positioning or delimiter separation. Such a collection of files is called a dataset. A file containing the description of a dataset is called a dataset description. It is also possible to describe other types of files, but not in detail. This can be used to describe relations between files and metadata about them. Usage: ADDML serves several purposes. Its main task is to describe the technical structure of a dataset designated for repository submissions. Today’s standard sees an extension of its original purpose, but the technical structure remains, making it possible to describe a flat file structure when it is to be exchanged from one system to another (and not only for archival purposes). Usage: Version 8.3 additionally facilitates the description of other types of files, but not in detail, since other standards available are already handling this kind of description. Emphasis in the implementation of describing other files than flat files in ADDML has been put on the option of describing the file types, the relation between them and so forth. Both the reference part and the data objects part are generic, making an expansion possible according to individual needs. In addition, the option of including properties has been developed and implemented from version 8.0 and on. Usage: The implementation of ADDML requires limitations. The use of the generic parts of the standard depends on individual definitions.
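As a rough illustration of the two flat-file layouts ADDML describes (fixed positioning and delimiter separation), the sketch below parses one record of each kind. The layout table is a hypothetical simplification for illustration; real ADDML dataset descriptions are XML documents:

```python
import csv
import io

# Hypothetical field layout for a fixed-position flat file: (name, start, stop).
fixed_layout = [("person_id", 0, 6), ("name", 6, 26), ("year", 26, 30)]

def parse_fixed(record: str, layout) -> dict:
    """Cut a fixed-position record into named fields according to the layout description."""
    return {name: record[start:stop].strip() for name, start, stop in layout}

print(parse_fixed("000123Ola Nordmann        1975", fixed_layout))
# {'person_id': '000123', 'name': 'Ola Nordmann', 'year': '1975'}

# The other layout ADDML covers: delimiter-separated records.
delimited = io.StringIO("000124;Kari Nordmann;1980\n")
print(next(csv.reader(delimited, delimiter=";")))  # ['000124', 'Kari Nordmann', '1980']
```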
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**South-up map orientation** South-up map orientation: South-up map orientation is the orientation of a map with south up, at the top of the map, amounting to a 180-degree rotation of the map from the standard convention of north-up. Maps in this orientation are sometimes called upside down maps or reversed maps.Other maps with non-standard orientation include T and O maps, polar maps, and Dymaxion maps. Psychological significance: Research suggests that north-south positions on maps have psychological consequences. In general, north is associated with richer people, more expensive real estate, and higher altitude, while south is associated with poorer people, cheaper prices, and lower altitude (the "north-south bias"). When participants were presented with south-up oriented maps, this north-south bias disappeared.Researchers posit the observed association between map-position and goodness/badness (north=good; south=bad) is caused by the combination of (i) the convention of consistently placing north at the top of maps, and (ii) a much more general association between vertical position and goodness/badness (up=good, down=bad), which has been documented in numerous contexts (e.g., power/status, profits/prices, affect/emotion, and even the divine). Common English idioms support the notion that many English speakers conflate or associate north with up and south with down (e.g., "heading up north", "down south", Down Under), a conflation that can only be understood as learned by repeated exposure to a particular map-orientation convention (i.e., north put at the top of maps). Related idioms used in popular song lyrics provide further evidence for the pervasiveness of "north-south bias" among English speakers, in particular with regard to wealth. Examples include, using "Uptown" to mean "high class or rich" (as in "Uptown Girl" by Billy Joel), or using "Downtown" to convey lower socioeconomic status (as in "Bad, Bad Leroy Brown" by Jim Croce). Cultural diversity education: Cultural diversity and media literacy educators use south-up oriented world maps to help students viscerally experience the frequently disorienting effect of seeing something familiar from a different perspective. Having students consider the privileged position given to the Northern hemisphere (especially Europe and North America) on most world maps can help students confront their more general potential for culturally biased perceptions. History of south-up oriented maps as political statements: Throughout history, maps have been made with varied orientations, and reversing the orientation of maps is technically very easy to do. As such, some cartographers maintain that the issue of south-up map orientation is itself trivial. More noteworthy than the technical matter of orientation, per se, is the history of explicitly using south-up map orientation as a political statement, that is, creating south-up oriented maps with the express rationale of reacting to the north-up oriented world maps that have dominated map publication during the modern age. History of south-up oriented maps as political statements: The history of south-up map orientation as political statement can be traced back to the early 1900s. Joaquín Torres García, a Uruguayan modernist painter, created one of the first maps to make a political statement related to north-south map positions entitled "América Invertida". 
"Torres-García placed the South Pole at the top of the earth, thereby suggesting a visual affirmation of the importance of the (South American) continent."A popular example of a south-up oriented map designed as a political statement is "McArthur's Universal Corrective Map of the World" (1979). An insert on this map explains that the Australian, Stuart McArthur, sought to confront "the perpetual onslaught of 'downunder' jokes—implications from Northern nations that the height of a country's prestige is determined by its equivalent spatial location on a conventional map of the world". McArthur's Universal Corrective Map of the World (1979) has sold over 350,000 copies to date. In popular culture: South-up maps are commonly available as novelties or sociopolitical statements in southern hemisphere locales, particularly Australia. A south-up oriented world map appears in episode "Somebody's Going to Emergency, Somebody's Going to Jail" of The West Wing, and issues of cultural bias are discussed in relation to it. The cartoon strip Mafalda by Argentinian cartoonist Quino once posed the question "Why are we down?" American cartoonist Leo Cullum published a cartoon in The New Yorker titled, "Happy penguin looking at upside-down globe; Antarctica is on top" (April 20, 1992). In popular culture: The computer strategy game Neocolonialism developed by Seth Alter uses a south-up map, with the developer stating it is intended to "evoke discomfort" and to "exemplify the north-south dichotomy of the world, wherein the southern hemisphere is generally poorer than the northern hemisphere."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**AOC2** AOC2: Amine oxidase, copper containing 2 (AOC2) is a protein that in humans is encoded by the AOC2 gene. The protein is a copper-containing primary amine oxidase enzyme. Function: Copper amine oxidases catalyze the oxidative conversion of amines to aldehydes and ammonia in the presence of copper and quinone cofactor. This gene shows high sequence similarity to copper amine oxidases from various species ranging from bacteria to mammals. The protein contains several conserved motifs including the active site of amine oxidases and the histidine residues that likely bind copper. It may be a critical modulator of signal transmission in retina, possibly by degrading the biogenic amines dopamine, histamine, and putrescine. This gene may be a candidate gene for hereditary ocular diseases. Alternate splicing results in multiple transcript variants.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aston Medal** Aston Medal: The Aston Medal is awarded by the British Mass Spectrometry Society to individuals who have worked in the United Kingdom and have made outstanding contributions to our understanding of the biological, chemical, engineering, mathematical, medical, or physical sciences relating directly to mass spectrometry. The medal is named after one of Britain's founders of mass spectrometry and 1922 Nobel prize winner Francis William Aston.The award is made sporadically, with no more than one medal being awarded each year. Recipients of this honour receive a gold-plated medal with a portrait of Francis Aston as well as an award certificate. Recipients: 1989 – Allan Maccoll 1990 – John H. Beynon 1996 – Brian Green 1998 – Keith Jennings 1999 – Dai Games 2003 – Colin Pillinger 2005 – Tom Preston 2006 – John Todd 2008 – Robert Bateman 2010 – Richard Evershed 2011 – Carol Robinson 2013 – Tony Stace 2017 – R. Graham Cooks
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dummy (football)** Dummy (football): In association football, rugby league, rugby union and Australian rules football, a dummy or feint is a player deceiving the opposition into believing he is going to pass, shoot, move in a certain direction, or receive the ball and instead doing something different, thus gaining an advantage. Association football: In association football, a dummy (feint) is often used when dribbling, in offensive situations. Examples used in order to deceive an opponent into what direction you will move, include: the step over as used by Ronaldo and Cristiano Ronaldo; the flip flap (also known as "elastico") used by Rivellino, Ronaldo and Ronaldinho; the Marseille turn (also known as the "360" or "roulette") used by Zinedine Zidane, and Diego Maradona; the rainbow flick as used by Neymar; the Cruyff turn named after Johan Cruyff; and scoop turn (dragging the ball around a defender without it leaving your foot) as used by Romário.The next most common instance is also an offensive situation, in which a player, in a reasonable shooting area, fakes a shot to trick a defender coming in for a tackle and have him flinch away. This allows the player to go around the defender and shoot from a closer distance. This dummy can also be used on a goalkeeper in a one-on-one situation: a notable example being The Goal of the Century scored by Diego Maradona where, having run half the length of the field past several outfield players, he faced goalkeeper Peter Shilton and left him on his backside with a feint, before slotting the ball into the net.There is another situation that is used often enough that "dummy" becomes a verb. In this scenario, a player goes toward the path of passing ball, pretends to trap it and lets it goes through the legs. This is to allow his teammate—who is also moving toward the ball but further away—to retrieve it. Another common scenario is the "dummying" player running after the ball after letting it go through their legs, a move which is known as the nutmeg. This is very effective if the trap fake is convincing because the stop/start on the defending player is always slower than the attacking player, who has the momentum. Luis Suárez is known to execute these types of moves quite often. Rugby league and rugby union: In rugby league football and rugby union football, a dummy has a similar meaning, but is generally confined to a player leading their opposing players into believing that they are about to pass or sometimes kick the ball, but instead retaining and running with the ball. This has the effect of drawing defending players to the apparent recipient of the dummy pass. If successful, the defender is said to have been "sold the dummy". One of the first rugby players to be credited with using the dummy, or at least taking the technique to New Zealand, was Tommy Haslam. Haslam played for Batley before the rugby schism and was a member of the 1888 British Isles tour of New Zealand and Australia. Australian rules football: In Australian rules football the term 'dummy' again has a similar meaning to other football codes. A dummy is used to evade a tackler by feigning a hand pass or foot pass to a teammate and then changing direction suddenly to escape the opponent who has been fooled by the move. The term is also described as baulking or 'selling candy'.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Boom (navigational barrier)** Boom (navigational barrier): A boom or a chain (also boom defence, harbour chain, river chain, chain boom, boom chain or variants) is an obstacle strung across a navigable stretch of water to control or block navigation. In modern times they usually have civil uses, such as to prevent access to a dangerous river channel. But, especially historically, they have been used militarily, with the goal of denying access to an enemy's ships: a modern example is the anti-submarine net. Booms have also been used to force passing vessels to pay a toll. Description: A boom generally floats on the surface, while a chain can be on the surface or below the water. A chain could be made to float with rafts, logs, ships or other wood, making the chain a boom as well. Historical uses: Especially in medieval times, the end of a chain could be attached to a chain tower or boom tower. This allowed safe raising or lowering of the chain, as they were often heavily fortified. By raising or lowering a chain or boom, access could be selectively granted rather than simply rendering the stretch of water completely inaccessible. The raising and lowering could be accomplished by a windlass mechanism or a capstan.Booms or chains could be broken by a sufficiently large or heavy ship, and this occurred on many occasions, including the siege of Damietta, the raid on the Medway and the Battle of Vigo Bay.A Frequently, however, attackers instead seized the defences and cut the chain or boom by more conventional methods. The boom at the siege of Derry, for example, was cut by sailors in a longboat. Historical uses: As a key portion of defences, booms were usually heavily defended. This involved shore-based chain towers, artillery batteries, or forts. In the Age of Sail, a boom protecting a harbour could have several ships defending it with their broadsides, discouraging assaults on the boom. On some occasions, multiple booms spanned a single stretch of water. Examples: Historical The chain at Fort Blockhouse, protecting Portsmouth Harbour from 1431 to 1539. Examples: The Leonine Wall included a chain blocking the Tiber A chain spanned the Golden Horn A chain and boom blocked the River Medway during the Raid on the Medway Hudson River Chain The chain blocking the Parana River during the Battle of Vuelta de Obligado A chain was placed from Columbus, Kentucky across the Mississippi River to Missouri in order to block Union ships during the American Civil War Between the A Palma Castle in Mugardos and Saint Philip Castle, in ria of Ferrol, to defend the city and naval base. Notes: A.^ Some sources have the chain being dismantled instead of broken by a ship in the siege of Damietta and in the raid on the Medway.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CCL13** CCL13: Chemokine (C-C motif) ligand 13 (CCL13) is a small cytokine belonging to the CC chemokine family. Its gene is located on human chromosome 17 within a large cluster of other CC chemokines. CCL13 induces chemotaxis in monocytes, eosinophils, T lymphocytes, and basophils by binding cell surface G-protein linked chemokine receptors such as CCR2, CCR3 and CCR5. Activity of this chemokine has been implicated in allergic reactions such as asthma. CCL13 can be induced by the inflammatory cytokines interleukin-1 and TNF-α.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ITap** ITap: iTap is a predictive text technology developed for mobile phones, developed by Motorola employees as a competitor to T9. It was designed as a replacement for the old letter mappings on phones to help with word entry. This makes some of the modern mobile phones features like text messaging and note-taking easier. ITap: When entering three or more characters in a row, iTap guesses the rest of the word. For example, entering "prog" will suggest "program". If a different word is desired, such as "progress" or words formed with different letters but requiring the same keypresses like "prohibited" or "spoil", an arrow key can be pressed to highlight other words in a menu for selection, in order of descending commonality of their use. Enter words: Press keypad keys (one press per letter) to begin entering a word. As the user types, the phone automatically shows additional letters that form a suggested combination. Scroll right to view other possible combinations, and highlight the combination one wants. Enter words: Press direction key "up" to enter the highlighted combination when it spells a word. A space is automatically inserted after the word. In some implementations, pressing the button assigned the "space" character, usually the star (*) key, results in retaining the current stem, without inserting the rest of the offered completion.If the phone does not recognize a word it then stores the word as an optional choice. When the memory space is filled the phone deletes the oldest word to make space for the new word. Comparison with T9: Similar to XT9 (the most recent version of T9), iTap is also able to complete words and phrases. iTap will guess the best match based upon a built in dictionary, including words sharing the typed prefix. This dictionary also contains phrases and commonly used sentences. This way the predictive guesses iTap offers are enhanced based upon context of the word that is being typed. Comparison with T9: iTap typically uses a different user interface (UI) than T9 does. However, T9 provides an API that can be used to create a similar UI if phone manufacturers decide to do so. iTap provides suggestions for word completions after only one key press in all cases. However, T9 completes custom words after one key press and on most phones other words that users have entered previously can be retrieved after three key presses. T9 enables these UI decisions to be largely up to the phone manufacturer and so far none of them have chosen to mimic the UI of iTap with T9.
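The prefix-completion behaviour described above can be sketched with the standard phone keypad letter groups. This is only a toy illustration of the general idea, not Motorola's actual algorithm; the dictionary and its ordering are invented:

```python
# Standard phone keypad letter groups (the mapping iTap and T9 both build on).
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_KEY = {ch: key for key, letters in KEYPAD.items() for ch in letters}

def key_sequence(word: str) -> str:
    """Translate a word into the digits a user would press (one press per letter)."""
    return "".join(LETTER_TO_KEY[ch] for ch in word.lower())

def suggestions(keys: str, dictionary: list[str]) -> list[str]:
    """All dictionary words whose key sequence starts with the typed digits,
    so the keys for 'prog' also reach 'progress', 'prohibited' and 'spoil'."""
    return [w for w in dictionary if key_sequence(w).startswith(keys)]

# Invented toy dictionary, ordered from most to least common.
words = ["program", "progress", "prohibited", "spoil", "hello"]
print(suggestions(key_sequence("prog"), words))  # ['program', 'progress', 'prohibited', 'spoil']
```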
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Volume-weighted average price** Volume-weighted average price: In finance, volume-weighted average price (VWAP) is the ratio of the value of a security or financial asset traded to the total volume of transactions during a trading session. It is a measure of the average trading price for the period. Typically, the indicator is computed for one day, but it can be measured between any two points in time. Volume-weighted average price: VWAP is often used as a trading benchmark by investors who aim to be as passive as possible in their execution. Many pension funds, and some mutual funds, fall into this category. The aim of using a VWAP trading target is to ensure that the trader executing the order does so in line with the volume on the market. It is sometimes argued that such execution reduces transaction costs by minimizing market impact costs (the additional cost due to the market impact, i.e. the adverse effect of a trader's activities on the price of a security). VWAP is often used in algorithmic trading. A broker may guarantee the execution of an order at the VWAP and have a computer program enter the orders into the market to earn the trader's commission and create P&L. This is called a guaranteed VWAP execution. The broker can also trade in a best-effort way and answer the client with the realized price. This is called a VWAP target execution; it incurs more dispersion in the answered price compared to the VWAP price for the client, but a lower received/paid commission. Trading algorithms that use VWAP as a target belong to a class of algorithms known as volume participation algorithms. Volume-weighted average price: The first execution based on the VWAP was in 1984 for the Ford Motor Company by James Elkins, then head trader at Abel Noser. Formula: VWAP is calculated using the following formula: $P_{\mathrm{VWAP}} = \frac{\sum_{j} P_j \cdot Q_j}{\sum_{j} Q_j}$ where $P_{\mathrm{VWAP}}$ is the volume-weighted average price; $P_j$ is the price of trade $j$; $Q_j$ is the quantity of trade $j$; and $j$ indexes each individual trade that takes place over the defined period of time, excluding cross trades and basket cross trades. Using the VWAP: The VWAP can be used similarly to moving averages, where prices above the VWAP reflect a bullish sentiment and prices below the VWAP reflect a bearish sentiment. Traders may initiate short positions as a stock price moves below VWAP for a given time period or initiate long positions as the price moves above VWAP. Institutional buyers and algorithms often use VWAP to plan entries and initiate larger positions without disturbing the stock price. VWAP slippage is a common measure of a broker's execution performance, and many buy-side firms now use a MiFID wheel to direct their flow to the best broker.
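A minimal sketch of the formula above, with invented trade data:

```python
def vwap(trades):
    """Volume-weighted average price: sum(price * quantity) / sum(quantity)."""
    total_value = sum(price * qty for price, qty in trades)
    total_volume = sum(qty for _, qty in trades)
    return total_value / total_volume

# Invented example trades as (price, quantity) pairs over one session.
trades = [(100.0, 50), (101.0, 30), (99.5, 20)]
print(round(vwap(trades), 2))  # (100*50 + 101*30 + 99.5*20) / 100 = 100.2
```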
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Picogen** Picogen: Picogen is a rendering system for the creation and rendering of artificial terrain, based on ray tracing. It is free software. Overview: While the primary purpose of picogen is to display realistic 3D terrain, both in terms of terrain formation and image plausibility, it also is a heightmap-creation tool, in which heightmaps are programmed in a syntax reminiscent of Lisp.The shading system is partially programmable. Example features: Whitted-Style ray tracer for quick previews Rudimentary path tracer for high quality results Partial implementation of Preetham's Sun-/Skylight Model Procedural heightmaps, though before rendering they are tesselated Frontends: Currently there is a frontend to picogen, called picogen-wx (based on wxWidgets). It is encapsulated from picogen and thus communicates with it on command-line level. Picogen-wx provides several panels to design the different aspects of a landscape, e.g. the Sun/Sky- or the Terrain-Texture-Panel. Each panel has its own preview window, though each preview window can be reached from any other panel. Frontends: Landscapes can be loaded and saved through an own, simple XML-based file format, and images of arbitrary size (including antialiasing) can be saved.
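Picogen's own heightmap language is Lisp-like; as a loose illustration of the general idea of a procedural heightmap (not Picogen's actual syntax or algorithm), a small sketch:

```python
import math

def height(x: float, y: float) -> float:
    """Toy procedural heightmap: a few summed sine/cosine octaves over the plane."""
    h = 0.0
    for octave in range(4):
        freq = 2.0 ** octave
        h += math.sin(x * freq) * math.cos(y * freq) / freq
    return h

# Sample the function on a small grid, roughly as a renderer would tessellate it.
grid = [[round(height(i * 0.3, j * 0.3), 3) for i in range(5)] for j in range(5)]
for row in grid:
    print(row)
```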
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Well-being contributing factors** Well-being contributing factors: Well-being is a topic studied in psychology, especially positive psychology. Related concepts are eudaimonia, happiness, flourishing, quality of life, contentment, and meaningful life. Theories: Central theories are Diener's tripartite model of subjective well-being, Ryff's Six-factor Model of Psychological Well-being, Corey Keyes' work on flourishing, and Seligman's contributions to positive psychology and his theories on authentic happiness and P.E.R.M.A. Theories: Positive psychology is concerned with eudaimonia, "the good life" or flourishing, living according to what holds the greatest value in life – the factors that contribute the most to a well-lived and fulfilling life. While not attempting a strict definition of the good life, positive psychologists agree that one must live a happy, engaged, and meaningful life in order to experience "the good life". Martin Seligman referred to "the good life" as "using your signature strengths every day to produce authentic happiness and abundant gratification". According to Christopher Peterson, "eudaimonia trumps hedonism".Research on positive psychology, well-being, eudaimonia and happiness, and the theories of Diener, Ryff, Keyes and Seligmann cover a broad range of levels and topics, including "the biological, personal, relational, institutional, cultural, and global dimensions of life."The pursuit of happiness predicts both positive emotions and less depressive symptoms. People who prioritize happiness are more psychologically able, all else held equal. Methodology of study: Well-being measurement Different ways of measuring well-being reveal different contributing factors. The correlation between two of these, life satisfaction and happiness, in the World Values Survey (1981–2005) is only 0.47. These are different, but related concepts which are used interchangeably outside of academia. Typically, life satisfaction, or evaluative wellbeing is measured with Cantril's self-anchoring ladder, a questionnaire where wellbeing is rated on a scale from 1–10. Happiness or hedonic/Affective well-being measurement is measured with the positive and negative affect schedule (PANAS), a more complex scale. Methodology of study: Limitations The UK Government's Department of Health compiled a factsheet in 2014, in which it is stated that the key limitations to well-being, quality of life and life satisfaction research are that: There are numerous associations and correlations in the body of evidence, but few causal relationships, since existing longitudinal datasets "do not use consistent wellbeing and predictor measures at different time points"; After controlling for mental health status, not many of the found associations are still significant; Subgroup analyses are rare; There are too few studies to conduct meta-analyses; There are too few interventional studies. Major factors: For evaluative well-being (life satisfaction) Mental health is the strongest individual predictor of life satisfaction. Mental illness is associated with poorer well-being. In fact, mental health is the strongest determinant of quality of life at a later age. Studies have documented the relationship between anxiety and quality of life. The VOXEU analysis of happiness showed the principal determinants of an adult's life satisfaction to be income, parenting, family break up, mother's mental health and schooling. 
The factors that explain life satisfaction roughly map (negatively) to those factors that explain misery. They are first and foremost diagnosed depression/anxiety, which explains twice as much as the next factor, physical health (number of medical conditions), that explains just as much variance in subjective well-being between people, as income and whether someone is partnered. These factors count twice as much as each of whether someone is employed and whether they are a non-criminal, which in turn are 3 times as important as years of education. Major factors: Overall, the best predictor of an adult's life satisfaction is their emotional health as a child as reported by the mother and child. It trumps factors like the qualifications that someone gets and their behaviour at 16 as reported by the mother. A child and therefore an adult's emotional health is most affected itself by a mother's mental health, which is just over twice as important as family income. Major factors: 2/3 as important as family income is parent's involvement, which is 0.1 partial correlation coefficients more important than aggressive parenting (negative), father's unemployment (negative), family conflict (negative) and whether the mother worked in the subject's 1st year of life. Whether the mother worked thereafter has 0 correlation with well-being, however. In terms of non-family factors, the place where someone goes to secondary school matters a fair bit more than observed family background altogether, which in turn is slightly more important than the place where someone went to primary school. Major factors: For affective well-being (happiness) The main determinants of affective well-being, by correlation and effect size are: Corruption index (-0.54) Control of corruption (0.47) Bureaucratic quality (0.40) PPP-adjusted GDP per capita (although there is evidence of publication bias) (0.39) Economic freedom (0.35) Human rights violations (-0.33) Political and ethnic violence (-0.28) Civil liberties (0.28) Life expectancy at birth (0.27) Satisfaction with standard of living (0.24) Biological factors: Gender Over the last 33 years, a significant decrease in women's happiness leads researchers to believe that men are happier than women. In contrast, a Pew Research Centre survey found that more women are satisfied with their lives than men, overall. Other research has found no gender gap in happiness.Part of these findings could be due to the way men and women differ in calculating their happiness. Women calculate the positive self-esteem, closeness in their relationships and religion. Men calculate positive self-esteem, active leisure and mental control. Therefore, neither men nor women are at greater risk of being less happy than the other. Earlier in life, women are more likely than men to fulfill their goals (material goals and family life aspirations), thereby increasing their life satisfaction and overall happiness. However, it is later in life that men fulfill their goals, are more satisfied with their family life and financial situation and, as a result, their overall happiness surpasses that of women. Possible explanations include the unequal division of labor within the household, or that women experience more variance (more extremes) in emotion but are generally happier. 
Effects of gender on well-being are paradoxical: men report feeling less happy than women; however, women are more susceptible to depression. A study was conducted by Siamak Khodarahimi to determine the roles of gender and age in positive psychology constructs – psychological hardiness, emotional intelligence, self-efficacy and happiness – among 200 Iranian adolescents and 200 young adults who were questioned through various tests. The study found that the males of the sample showed significantly higher levels of psychological hardiness, emotional intelligence, self-efficacy and happiness than females, regardless of age. Biological factors: Genetics Happiness is partly genetically based. Based on twin studies, 50 percent of a given human's happiness level is genetically determined, 10 percent is affected by life circumstances and situation, and the remaining 40 percent of happiness is subject to self-control. Whether emotions are genetically determined or not was studied by David Lykken and Auke Tellegen. They found that up to 80% of the variance in long-term sense of well-being among Minnesotan twins separated at birth was attributable to heredity. The remaining theoretical 20%, however, still leaves room for significant change in thoughts and behavior from environmental/learned sources that should not be understated, and the interpretation of variance in twin studies is controversial, even among clinical psychologists. Individual differences in both overall eudaimonia, identified loosely with self-control, and in the facets of eudaimonia are heritable. Evidence from one study supports five independent genetic mechanisms underlying the Ryff facets of this trait, leading to a genetic construct of eudaimonia in terms of general self-control, and four subsidiary biological mechanisms enabling the psychological capabilities of purpose, agency, growth, and positive social relations. Biological factors: Neurology It is generally accepted that happiness is at least in part mediated through dopaminergic, adrenergic and serotonergic metabolism. A correlation has been found between hormone levels and happiness. SSRIs, such as Prozac, are used to adjust the levels of serotonin in the clinically unhappy. Researchers, such as Alexander, have indicated that many people's use of narcotics may be the unwitting result of attempts to readjust hormone levels to cope with situations that make them unhappy. A positive relationship has been found between the volume of gray matter in the right precuneus area of the brain and the subject's subjective happiness score. Meditation-based interventions, including mindfulness, have been found to correlate with a significant gray matter increase within the precuneus. Biological factors: Neuroscience's findings Neuroscience and brain imaging have shown increasing potential for helping science understand happiness and sadness. Though it may be impossible to achieve any comprehensive objective measure of happiness, some physiological correlates of happiness can be measured. Stefan Klein, in his book The Science of Happiness, links the dynamics of neurobiological systems (i.e., dopaminergic, opiate) to the concepts and findings of positive psychology and social psychology. Nobel prize winner Eric Kandel and researcher Cynthia Fu described very accurate diagnoses of depression just by looking at fMRI brain scans. 
Biological factors: By identifying neural correlates for emotions, scientists may be able to use methods like brain scans to tell us more about the different ways of being "happy". Richard Davidson has conducted research to determine which parts of the brain are involved in positive emotions. He found that the left prefrontal cortex is more activated when we are happy and is also associated with greater ability to recover from negative emotions as well as enhanced ability to suppress negative emotions. Davidson found that people can train themselves to increase activation in this area of their brains. It is thought that our brain can change throughout our lives as a result of our experiences; this is known as neuroplasticity. Biological factors: The evolutionary perspective offers an alternative approach to understanding happiness and quality of life. Key guiding questions are: What features are included in the brain that allow humans to distinguish between positive and negative states of mind? How do these features improve humans' ability to survive and reproduce? The evolutionary perspective claims that the answers to these questions point towards an understanding of what happiness is about and how to best exploit the capacities of the brain with which humans are endowed. This perspective is presented formally and in detail by the evolutionary biologist Bjørn Grinde in his book Darwinian Happiness. Personal factors: In relation with age In adolescence There has been a significant focus in past research on adulthood, in regards to well-being and development and although eudaimonia is not a new field of study, there has been little research done in the areas of adolescence and youth. Research that has been done on this age group had previously explored more negative aspects than well-being, such as problem and risk behaviours (i.e. drug and alcohol use). Personal factors: Researchers who conducted a study in 2013 recognized the absence of adolescents in eudaimonic research and the importance of this developmental stage. Adolescents rapidly face cognitive, social and physical changes, making them prime subjects to study for development and well-being. The eudaimonic identity theory was used in their research to examine the development of identity through self-discovery and self-realization. They emphasize the personal value found in discovering and appeasing one's “daimon” (daemon) through subjective experiences that develop eudaimonic happiness from aligning with one's true self.: 250 Researchers focused their studies on PYD (positive youth development) and the eudaimonic identity theory in the context of three developmental elements: self-defining activities, personal expressiveness and goal-directed behaviours. Personal factors: They determined that adolescents sample multiple self-defining activities; these activities aid in identity formation, as individuals choose activities that they believe represents who they are. These self-defining activities also help determine the adolescent's social environments. For example, an adolescent involved in sports, would likely surround themselves with like-minded active and competitive people. Personal factors: Personal expressiveness, as coined by psychologist A. S. Waterman, are the activities that we choose to express and connect with our “daimon” through subjective experiences.Finally, goal-directed behaviours, are developed through goal setting, where individuals work towards identity establishment. 
Adolescents recognize their passions, abilities and talents and aim to fulfill their goals and behave in a way that appeases their true self.: 251 The study on adolescents was conducted in Italy, Chile and the United States, which produced slightly varied outcomes. Outcomes were contingent on availability, access and choice of opportunities (activities).: 254  Socioeconomic context also affected the results, as not all individuals could access the activities that may be more in-line with their true selves. Personal factors: The Personally Expressive Activities Questionnaire (PEAQ) was used to conduct the study. Adolescence was the youngest age group that the PEAQ was used on. The PEAQ asked adolescents to self-report on activities they participate in and describe themselves with self-defining activities.: 260  It was reported that 80% of adolescents defined themselves with two to four self-defining activities signifying an understanding in adolescence of self-concept through the domains of leisure, work and academia.: 255 Leisure activities were found to have the largest impact on individuals because these activities were the most self-directed of the three domains, as adolescents had the choice of activity, and were more likely to be able to align it with their true selves. The study found that subjective experiences were more important than the activities themselves and that adolescents reported higher levels of well-being. They reported that when adolescents express themselves through self-defining activities across multiple domains, they have a clearer image of themselves, of what they want to achieve and higher wellness. Goal-setting was found to be a unique predictor; when adolescents work towards goals set by themselves and accomplish them, they are likely to have a clearer emerging identity and higher well-being. Researchers found that more adolescents were happy when they were involved in self-chosen activities because the activities were chosen in line with their true self.: 257–259 In midlife The midlife crisis may mark the first reliable drop in happiness during an average human's life. Evidence suggests most people generally become happier with age, with the exception of the years 40 – 50, which is the typical age at which a crisis might occur. Researchers specify that people in both their 20s and 70s are happier than during midlife, although the extent of happiness changes at different rates. For example, feelings of stress and anger tend to decline after age 20, worrying drops after age 50, and enjoyment very slowly declines in adulthood but finally starts to rise after age 50. Personal factors: Well-being in late life is more likely to be related to other contextual factors including proximity to death. However, most of this terminal decline in well-being could be attributed to other changes in age-normative functional declines including physical health and function. Also, there is growing debate that assumptions that a single population estimate of age-related changes in well-being truly reflects the lived experiences of older adults has been questioned. The use of growth mixture modelling frameworks has allowed researchers to identify homogenous groups of individuals who are more similar to each other than the population based on their level and change in well-being and has shown that most report stable well-being in their late life and in the decade prior to death. 
These findings are based on decades of data, and control for cohort groups; the data avoids the risk that the drops in happiness during midlife are due to populations' unique midlife experiences, like a war. The studies have also controlled for income, job status and parenting (as opposed to childlessness) to try to isolate the effects of age. Personal factors: Researchers found support for the notion of age changes inside the individual that affect happiness. Personal factors: This could be for any number of reasons. Psychological factors could include greater awareness of one's self and preferences; an ability to control desires and have more realistic expectations – unrealistic expectations tend to foster unhappiness; moving closer to death may motivate people to pursue personal goals; improved social skills, like forgiveness, may take years to develop – the practice of forgiveness seems linked to higher levels of happiness; or happier people may live longer and are slightly overrepresented in the elderly population. Age-related chemical changes might also play a role.Other studies have found older individuals reported more health problems, but fewer problems overall. Young adults reported more anger, anxiety, depression, financial problems, troubled relationships and career stress. Researchers also suggest depression in the elderly is often due largely to passivity and inaction – they recommend people continue to undertake activities that bring happiness, even in old age.The activity restriction model of depressed affect suggests that stressors that disrupt traditional activities of daily life can lead to a decrease in mental health. The elderly population is vulnerable to activity restriction because of the disabling factors related to age. Increases in scheduled activity, as well as social support, can decrease the chances of activity restriction. Personal factors: In relation with depression and languishing A study by Keyes found that there are major costs of depression, which 14% of adults experience annually: it impairs social roles; it costs billions each year due to work absenteeism, diminished productivity, and healthcare costs; finally, depression accounts for at least one-third of suicides. Therefore, it is important to study flourishing to learn about what is possible if issues such as depression are tackled and how the ramifications of focusing on the positive make life better not just for one person, but also for others around them.Flourishing has significant positive aspects magnified when compared to languishing adults and when languishing adults are compared to depressed adults, as explained by Keyes. For example, languishing adults have the same amount of chronic disease as those that are depressed whereas flourishing adults are in exceptionally better physical health. Languishing adults miss as many days at work as depressed adults and, in fact, visit doctors and therapists more than depressed adults. Personal factors: Positive psychology interventions (PPI) in patients A strengths-based approach to personal positive change aims to have clinical psychology place an equal weight on both positive and negative functioning when attempting to understand and treat distress. This rationale is based on empirical findings. Because positive characteristics interact with negative life events to predict disorder the exclusive study of negative life events could produce misleading results.Thus, psychologists are looking to use positive psychology to treat patients. 
Amy Krentzman, among the others, discussed positive intervention as a way to treat patients. She defined positive intervention as a therapy or activity primarily aimed at increasing positive feelings, positive behaviors, or positive cognitions, as opposed to focusing on negative thoughts or dysfunctional behaviors. A way of using positive intervention as a clinical treatment is to use positive activity interventions. Positive activity interventions, or PAIs, are brief self-administered exercises that promote positive feelings, thoughts, and behaviors. Two widely used PAIs are “Three Good Things” and “Best Future Self.” “Three Good Things” requires a patient to daily document, for a week, three events that went well during the day, and the respective cause, or causes (this exercise can be modified with counterfactual thinking, that is, adding the imagination of things had them been worse). “Best Future Self” has a patient “think about their life in the future, and imagine that everything has gone as well as it possibly could. They have worked hard and succeeded at accomplishing all of their life goals. Think of this as the realization of all of their life dreams.” The patient is then asked to write down what they imagined. These positive interventions have been shown to decrease depression, and interventions focusing on strengths and positive emotions can, in fact, be as effective in treating disorder as other more commonly used approaches such as cognitive behavioral therapy. Moreover, the apparent effect of PPIs cannot be caused by publication bias, according to a meta-analysis on 49 studies (2009). PPIs studied included producing gratitude letters, performing optimistic thinking, replaying positive life experiences, and socializing with people.Also, in a newer meta-analysis (39 studies, 6,139 participants, 2012), the standardized mean difference was 0.34 higher for subjective well-being, 0.20 for psychological well-being and 0.23 for depression. Three to six months after the intervention, the effects for subjective well-being and psychological well-being were still significant, so effects seem fairly sustainable. However, in high-quality studies, the positive effect was weaker, though positive, so authors considered further high-quality studies necessary to strengthen the evidence. They claimed that the above-mentioned meta-analysis (2009) did not put enough weight on the quality of studies. Personal factors: PPIs found positive included blessings, kindness practices, taking personal goals, and showing gratitude.The interventions called "Gratitude Journaling" and "Three Good Things" seem to operate via gratitude. There is evidence that, when gratitude journaling, focussing on quality over quantity as well as people more than possessions, yields greater benefits. There is also evidence of a diminished effect from gratitude journaling if it is done more than once or twice a week. Journaling sans gratitude is effective in decreasing negative emotions in general, which suggests that the act of journaling, rather than gratitude alone, is involved in the treatment effect.Positive psychology seeks to inform clinical psychology of the potential to expand its approach, and of the merit of the possibilities. Given a fair opportunity, positive psychology might well change priorities to better address the breadth and depth of the human experience in clinical settings. 
Personal factors: Post-traumatic growth Posttraumatic growth (PTG) is a possible outcome after a traumatic event, besides posttraumatic stress disorder (PTSD). Following a traumatic event, for instance rape, incest, cancer, attack, or combat, "it is normal to experience debilitating symptoms of depression and anxiety." A person who shows PTG however, will experience these negative outcomes for a time and then show an increase in well-being, higher than it was before the trauma occurred. Martin Seligman, a founder of positive psychology, emphasizes that "arriving at a higher level of psychological functioning than before" is a key point in PTG. If instead an individual experiences a depressive period but recovers from an incident and returns to their normal level of psychological functioning, they are demonstrating resilience. This suggests that in PTG, the trauma acts as a turning point for the person to achieve greater well-being. Seligman recognizes "the fact that trauma often sets the stage for growth" and given the right tools, individuals can make the most of that opportunity."When reflecting on a traumatic growth, Seligman suggests using the following five elements to facilitate PTG: understand the response to trauma, reduce anxiety, utilize constructive disclosure, create a trauma narrative, and articulate life principles and stances that are more robust to challenge. Someone experiencing PTG will achieve elements of Seligman's "good life" theory, including a more meaningful and purposeful valuing of life, improved positive relationships, accomplishment, and a more optimistic and open mindset according to the broaden-and-build theory. Personal factors: Post-traumatic growth in constructive journalism The phenomenon of PTG is applicable to many disciplines. The construct is important not only for just soldiers, emergency responders, and survivors of traumatic events, but on average, for everyday citizens facing typical adversity. One way to expose citizens to stories of PTG is through constructive journalism. Constructive journalism, as defined by PhD student Karen McIntyre at University of North Carolina Chapel Hill, is "an emerging style of journalism in which positive psychology techniques are applied to news work with the aim of engaging readers by creating more productive news stories, all while maintaining core journalistic functions". Cathrine Gyldensted, an experienced reporter with a Masters in applied positive psychology and coauthor of two books, demonstrated that typical news reporting, which is associated with negative valence, harms mood. Using PTG to focus on victims' strengths and instances of overcoming adversity encourages readers to implement similar ideals in their own lives. "So the goal of positive psychology in well-being theory is to measure and to build human flourishing." Combining positive psychology constructs like PTG, PERMA, and "broaden and build" with journalism could potentially improve affect and inspire individuals about the benefits of positive psychology. Personal factors: PERMA not only plays a role in our own personal lives but also can be used for public major news stories. With this model, journalists can instead focus on the positives of a story and ask questions about how conflicts or even tragedies have brought people together, how someone has experienced post-traumatic growth, and more. News stories then shift the perspective from a victimizing one to an uplifting one. 
Positive psychology is slowly but steadily making its way through news reporting via constructive journalism. PERMA helps journalists ask the right questions to continue that progress by bringing the focus of a potentially negative story to the positives and solutions. Personal factors: Affect - ratio of positive to negative affect Fredrickson and Losada postulated in 2005 that the ratio of positive to negative affect, known as the critical positivity ratio, can distinguish individuals that flourish from those that do not. Languishing was characterized by a ratio of positive to negative affect of 2.5. Optimal functioning or flourishing was argued to occur at a ratio of 4.3. The point at which flourishing changes to languishing is called the Losada line and is placed at the positivity ratio of 2.9. Those with higher ratios were claimed to have broader behavioral repertoires, greater flexibility and resilience to adversity, more social resources, and more optimal functioning in many areas of their life. The model also predicted the existence of an upper limit to happiness, reached at a positivity ratio of 11.5. Fredrickson and Losada claimed that at this limit, flourishing begins to disintegrate and productivity and creativity decrease. They suggested as positivity increased, so to "appropriate negativity" needs to increase. This was described as time-limited, practicable feedback connected to specific circumstances, i.e. constructive criticism.This positivity ratio theory was widely accepted until 2013, when Nick Brown, a graduate student in applied positive psychology, co-authored a paper with Alan Sokal and Harris Friedman, showing that the mathematical basis of the paper was invalid. Fredrickson partially retracted the paper, agreeing that the math may be flawed, but maintaining that the empirical evidence is still valid. Brown and colleagues insist there is no evidence for the critical positivity ratio whatsoever. Personal factors: In relation with basic emotions Most psychologists focus on a person's most basic emotions. There are thought to be between seven and fifteen basic emotions. The emotions can be combined in many ways to create more subtle variations of emotional experience. This suggests that any attempt to wholly eliminate negative emotions from our life would have the unintended consequence of losing the variety and subtlety of our most profound emotional experiences. Efforts to increase positive emotions will not automatically result in decreased negative emotions, nor will decreased negative emotions necessarily result in increased positive emotions. Russell and Feldman Barrett (1992) described emotional reactions as core affects, which are primitive emotional reactions that are consistently experienced but often not acknowledged; they blend pleasant and unpleasant as well as activated and deactivated dimensions that we carry with us at an almost unconscious level.While a 2012 study found that wellbeing was higher for people who experienced both positive and negative emotions, evidence suggests negative emotions can be damaging. In an article titled "The undoing effect of positive emotions", Barbara Fredrickson et al. hypothesized positive emotions undo the cardiovascular effects of negative emotions. When people experience stress, they show increased heart rate, higher blood sugar, immune suppression, and other adaptations optimized for immediate action. 
If unregulated, the prolonged physiological activation can lead to illness, coronary heart disease, and heightened mortality. Both lab and survey research substantiate that positive emotions help people under stress to return to a preferable, healthier physiological baseline. Other research shows that improved mood is one of the various benefits of physical exercise. Personal factors: Behavioral repertoire The broaden-and-build theory of positive emotions suggests positive emotions (e.g. happiness, interest, anticipation) broaden one's awareness and encourage novel, varied, and exploratory thoughts and actions. Over time, this broadened behavioral repertoire builds skills and resources. For example, curiosity about a landscape becomes valuable navigational knowledge; pleasant interactions with a stranger become a supportive friendship; aimless physical play becomes exercise and physical excellence. Positive emotions are contrasted with negative emotions, which prompt narrow survival-oriented behaviors. For example, the negative emotion of anxiety leads to the specific fight-or-flight response for immediate survival. Personal factors: Elevation After several years of researching disgust, Jonathan Haidt, and others, studied its opposite; the term "elevation" was coined. Elevation is a pleasant moral emotion, triggered by witnessing virtuous acts of remarkable moral goodness and resulting in a desire to act morally and do "good". As an emotion it has a biological basis, and is sometimes characterized by a feeling of expansion in the chest or a tingling feeling on the skin. Personal factors: In relation with experience Thomas Nagel has said that "There are elements which, if added to one's experience, make life better; there are other elements which if added to one's experience, make life worse. But what remains when these are set aside is not merely neutral: it is emphatically positive."Experiences are central to a proposed dimension of well-being called psychological richness. This additional dimension of well-being was proposed as an empirically-supported expansion to the hedonic vs. eudaimonic well-being dichotomy. Whereas hedonic well-being can be measured via life satisfaction, and eudaimonic well-being can be measured via one’s perceptions of the meaning of their life, psychological richness is measured via characteristic experiences. Psychological richness is cultivated through having psychologically rich experiences, which are characterized as varying, interesting, novel, challenging, and perspective-changing, as subjectively measured by the experiencer. One line of evidence for this comes from studies conducted with college students, where students who went on trips (new and unusual experiences), whether they be short excursions or semester-length study abroad programs, reported increased psychological richness, but not increases in happiness or meaning (Oishi et al., 2021). In contrast to hedonic well-being, which is thought to result in personal satisfaction, and eudaimonic well-being, which is thought to result in societal contribution, psychological richness is thought to result in wisdom. Personal factors: The concept of "flourishing" The term flourishing, in positive psychology, refers to optimal human functioning. It comprises four parts: goodness, generativity, growth, and resilience (Fredrickson, 2005). 
According to Fredrickson (2005), goodness is made up of: happiness, contentment, and effective performance; generativity is about making life better for future generations, and is defined by “broadened thought-action repertoires and behavioral flexibility”; growth involves the use of personal and social assets; and resilience reflects survival and growth after enduring a hardship. A flourishing life stems from mastering all four of these parts. Two contrasting ideologies are languishing and psychopathology. On the mental health continuum, these are considered intermediate mental health disorders, reflecting someone living an unfulfilled and perhaps meaningless life. Those who languish experience more emotional pain, psychosocial deficiency, restrictions in regular activities, and missed workdays.Fredrickson & Losada (2005) conducted a study on university students, operationalizing positive and negative affect. Based on a mathematical model which has been strongly criticized, and now been formally withdrawn by Fredrickson as invalid, Fredrickson & Losada claimed to have discovered a critical positivity ratio, above which people would flourish and below which they would not. Although Fredrickson claims that her experimental results are still valid, these experimental results have also been questioned due to poor statistical methodology, and Alan Sokal has pointed out that "given [Fredrickson and Losada's] experimental design and method of data analysis, no data whatsoever could possibly give any evidence of any nonlinearity in the relationship between "flourishing" and the positivity ratio — much less evidence for a sharp discontinuity."Another study surveyed a U.S. sample of 3,032 adults, aged 25–74. Results showed 17.2 percent of adults were flourishing, while 56.6 percent were moderately mentally healthy. Some common characteristics of a flourishing adult included: educated, older, married and wealthy. The study findings suggest there is room for adults to improve as less than 20 percent of Americans are living a flourishing life. (Keyes, 2002).Benefits from living a flourishing life emerge from research on the effects of experiencing a high ratio of positive to negative affect. The studied benefits of positive affect are increased responsiveness, "broadened behavioral repertoires", increased instinct, and increased perception and imagination. In addition, the good feelings associated with flourishing result in improvements to immune system functioning, cardiovascular recovery, lessened effects of negative affect, and frontal brain asymmetry. Other benefits to those of moderate mental health or moderate levels of flourishing were: stronger psychological and social performance, high resiliency, greater cardiovascular health, and an overall healthier lifestyle (Keyes, 2007). The encountered benefits of flourishing suggest a definition: "[flourishing] people experience high levels of emotional, psychological and social well being due to vigor and vitality, self-determination, continuous self- growth, close relationships and a meaningful and purposeful life" (Siang-Yang, 2006, p. 70). Personal factors: Happiness Happiness measurement Oxford Happiness Questionnaire Psychologists Peter Hills and Michael Argyle developed the Oxford Happiness Questionnaire as a broad measure of psychological well-being. 
The approach was criticized for lacking a theoretical model of happiness and for overlapping too much with related concepts such as self-esteem, sense of purpose, social interest, kindness, sense of humor and aesthetic appreciation. Personal factors: Satisfaction with Life Scale "Happiness" encompasses different emotional and mental phenomena. One method of assessment is Ed Diener's Satisfaction with Life Scale. According to Diener, this five-question survey corresponds well with impressions from friends and family, and with low incidence of depression. Rather than long-term, big-picture appraisals, some methods attempt to identify the amount of positive affect from one activity to the next. Scientists use beepers to remind volunteers to write down the details of their current situation. Alternatively, volunteers complete detailed diary entries each morning about the day before. A discrepancy arises when researchers compare the results of these short-term "experience sampling" methods with long-term appraisals. Namely, the latter may not be very accurate; people may not know what makes their life pleasant from one moment to the next. For instance, parents' appraisals mention their children as sources of pleasure, while "experience sampling" indicates parents were not enjoying caring for their children, compared to other activities. Psychologist Daniel Kahneman explains this discrepancy by differentiating between happiness according to the "experiencing self" compared to the "remembering self": when asked to reflect on experiences, memory biases like the Peak-End effect (e.g. we mostly remember the dramatic parts of a vacation, and how it was at the end) play a large role. A striking finding came from a study of colonoscopy patients. When 60 seconds were added to this invasive procedure, Kahneman found participants reported the colonoscopy as more pleasant. This was attributed to making sure the colonoscopy instrument was not moved during the extra 60 seconds – movement is the source of the most discomfort. Thus, Kahneman was appealing to the remembering self's tendency to focus on the end of the experience. Such findings help explain human error in affective forecasting – people's ability to predict their future emotional states. Personal factors: Changes in happiness levels Humans exhibit a capacity for hedonic adaptation, an idea suggesting that beauty, fame and money do not generally have lasting effects on happiness (this effect has also been called the hedonic treadmill). In this vein, some research has suggested that only recent events, meaning those that occurred within the last 3 months, affect happiness levels. The tendency to adapt, and therefore return to an earlier level of happiness, is illustrated by studies showing lottery winners are no happier in the years after they've won. Other studies have shown paraplegics are nearly as happy as control groups that are not paralyzed after a similarly small number of years. Daniel Kahneman explains: "they are not paraplegic full time... It has to do with allocation of attention". Thus, contrary to our impact biases, lotteries and paraplegia do not change experiences to as great a degree as we would believe. Personal factors: However, in a newer study (2007), winning a medium-sized lottery prize had a lasting mental wellbeing effect of 1.4 GHQ points on Britons even two years after the event. Moreover, adaptation can be a very slow and incomplete process.
Distracting life changes such as the death of a spouse or losing one's job can show measurable changes in happiness levels for several years. Even the "adapted" paraplegics mentioned above did ultimately report lower levels of pleasure (again, they were happier than one would expect, but not fully adapted). Thus, adaptation is a complex process, and while it does mitigate the emotional effects of many life events it cannot mitigate them entirely. Personal factors: Happiness set point The happiness set point idea is that most people return to an average level of happiness – or a set point – after temporary highs and lows in emotionality. People whose set points lean toward positive emotionality tend to be cheerful most of the time, and those whose set points lean toward negative emotionality tend to gravitate toward pessimism and anxiety. Lykken found that we can influence our level of well-being by creating environments more conducive to feelings of happiness and by working with our genetic makeup. One reason that subjective well-being is for the most part stable is the great influence genetics have. Although the events of life have some effect on subjective well-being, the general population returns to their set point. Personal factors: In her book The How of Happiness, Sonja Lyubomirsky similarly argued people's happiness varies around a genetic set point. Diener warns, however, that it is nonsensical to claim that "happiness is influenced 30–50% by genetics". Diener explains that the recipe for happiness for an individual always requires genetics, environment, and behaviour too, so it is nonsensical to claim that an individual's happiness is due to only one ingredient. Personal factors: Only differences in happiness can be attributed to differences in factors. In other words, Lyubomirsky's research does not discuss happiness in one individual; it discusses differences in happiness between two or more people. Specifically, Lyubomirsky suggests that 30–40% of the difference in happiness levels is due to genetics (i.e. heritable). In other words, Diener says it makes no sense to say one person's happiness is "due 50% to genetics", but it does make sense to say the difference in happiness between people is 50% due to differences in their genetics (and the rest is due to behaviour and environment). Findings from twin studies support the findings just mentioned. Twins reared apart had nearly the same levels of happiness, thereby suggesting the environment is not entirely responsible for differences in people's happiness. Importantly, an individual's baseline happiness is not entirely determined by genetics, and not even by early life influences on one's genetics. Whether or not a person manages to elevate their baseline to the heights of their genetic possibilities depends partly on several factors, including actions and habits. Some happiness-boosting habits seem to include gratitude, appreciation, and even altruistic behavior. Other research-based habits and techniques for increasing happiness are discussed below. Personal factors: Besides the development of new habits, the use of antidepressants, effective exercise, and a healthier diet have proven to affect mood significantly. There is evidence that a vegan diet reduces stress and anxiety. Exercise is sometimes called the "miracle" or "wonder" drug – alluding to the wide variety of proven benefits it provides.
Personal factors: It is worth mentioning that a recent book, Anatomy of an Epidemic, challenges the non-conservative use of medications for mental patients, especially with respect to their long-term positive feedback effects. Yongey Mingyur Rinpoche has said that neuroscientists have found that, with meditation, an individual's happiness baseline can change, and meditation has been found to increase happiness in several studies. A study on Brahma Kumaris Raja yoga meditators showed them having higher happiness (Oxford happiness questionnaire) than the control group. Personal factors: Evidence against the happiness set point theory In recent large panel studies, divorce, death of a spouse, unemployment, disability and similar events have been shown to change long-term subjective well-being, even though some adaptation does occur and inborn factors affect this. Fujita and Diener found that 24% of people changed significantly between the first five years of the study and the last five years. Almost one in four people showed changes in their well-being over the years; indeed, sometimes those changes were quite dramatic. Bruce Headey found that 5–6% of people dramatically increased their life satisfaction over a 15- to 20-year period and that the goals people pursued significantly affected their life satisfaction. Personal factors: Personal training to increase happiness The easiest and best possible way to increase one's happiness is by doing something that increases the ratio of positive to negative emotions. Contrary to some beliefs, in many scenarios people are actually very good at determining what will increase their positive emotions. Many techniques have been developed to help increase one's happiness. Personal factors: A first technique is known as the "Sustainable Happiness Model (SHM)". This model proposes that long-term happiness is determined by (1) one's genetically determined set-point, (2) circumstantial factors, and (3) intentional activities. Lyubomirsky, Sheldon and Schkade suggest making these changes in the correct way in order to have long-term happiness. Another suggestion for how to increase one's happiness is a procedure called "Hope Training." Hope Training is primarily focused on hope due to the belief that hope drives the positive emotions of well-being. This training is based on hope theory, which states that well-being can increase once people have developed goals and believe themselves able to achieve those goals. One of the main purposes of hope training is to free individuals from false hope syndrome. False hope syndrome occurs when one believes that changing their behavior is easy and that the outcomes of the change will be evident in a short period of time. There are coaching procedures based on positive psychology, which are backed by scientific research, with intervention tools and assessments available that positive psychology trained coaches can utilize to support the coaching process. Positive psychology coaching uses scientific evidence and insights gained in these areas to work with clients on their goals. Personal factors: Time and happiness Philip Zimbardo suggests we might also analyze happiness from a "time perspective". He suggested sorting people's focus in life by valence (positive or negative) and also by their time perspective (past, present, or future orientation).
Doing so may reveal some individual conflicts, not over whether an activity is enjoyed, but over whether one prefers to risk delaying gratification further. Zimbardo also believes research reveals an optimal balance of perspectives for a happy life, commenting that our focus on reliving positive aspects of our past should be high, followed by time spent believing in a positive future, and finally a moderate (but not excessive) amount of time spent in enjoyment of the present. Personal factors: The "flow" In the 1970s Csikszentmihalyi started to study flow, a state of absorption where one's abilities are well-matched to the demands at hand. Flow is characterized by intense concentration, loss of self-awareness, a feeling of being perfectly challenged (neither bored nor overwhelmed), and a sense that "time is flying". Flow is intrinsically rewarding; it can also assist in the achievement of goals (e.g., winning a game) or improving skills (e.g., becoming a better chess player). Anyone can experience flow, in different domains, such as play, creativity, and work. Personal factors: Flow is achieved when the challenge of the situation meets one's personal abilities. A mismatch of challenge for someone of low skills results in a state of anxiety; insufficient challenge for someone highly skilled results in boredom. The effect of challenging situations means that flow is often temporarily exciting and somewhat stressful, but this is considered eustress, also known as "good" stress. Eustress is arguably less harmful than chronic stress, although the pathways of stress-related systems are similar. Both can create a "wear and tear" effect; however, the differing physiological elements and added psychological benefits of eustress might well balance any wear and tear experienced. Personal factors: Csikszentmihalyi identified nine indicator elements of flow: 1. Clear goals exist every step of the way, 2. Immediate feedback guides one's action, 3. There is a balance between challenges and abilities, 4. Action and awareness are merged, 5. Distractions are excluded from consciousness, 6. Failure is not worrisome, 7. Self-consciousness disappears, 8. Sense of time is distorted, and 9. The activity becomes "autotelic" (an end in itself, done for its own sake). His studies also show that flow is greater during work while happiness is greater during leisure activities. Personal factors: Health Addiction Arguably, some people pursue ineffective shortcuts to feeling good. These shortcuts create positive feelings, but are problematic, in part because of the lack of effort involved. Some examples of these shortcuts include shopping, drugs, chocolate, loveless sex, and TV. These are problematic pursuits because all of these examples have the ability to become addictive. When happiness comes to us so easily, it comes with a price we may not realize. This price comes when taking these shortcuts is the only way to become happy, otherwise viewed as an addiction. A review by Amy Krentzman on the Application of Positive Psychology to Substance Use, Addiction, and Recovery Research identified, in the field of positive psychology, three domains that allow an individual to thrive and contribute to society. Personal factors: One of these, A Pleasant Life, involves good feelings about the past, present, and future. To tie this to addiction, they chose the example of alcoholism. Research on positive affect and alcohol showed a majority of the population associates drinking with pleasure.
The pleasure one feels from alcohol is known as somatic pleasure, which is an immediate but short-lived sensory delight. The researchers wanted to make clear that pleasure alone does not amount to a life well lived; there is more to life than pleasure. Secondly, the Engaged Life is associated with positive traits such as strength of character. A few examples of character strengths according to Character Strengths and Virtues: A Handbook and Classification by Peterson and Seligman (2004) are bravery, integrity, citizenship, humility, prudence, gratitude, and hope, all of which are shown in the rise to recovery. To descend into an addiction shows a lack of character strength; however, rising to recovery shows the reinstatement of character strengths, including the examples mentioned above. Thirdly, the Meaningful Life is service and membership to positive organizations. Examples of positive organizations include family, workplace, social groups, and society in general. Organizations like Alcoholics Anonymous can be viewed as positive organizations. Membership fosters positive affect, while also promoting character strengths, which, as seen in the Engaged Life, can aid in beating addiction. Personal factors: Emotional health Researcher Dianne Hales described an emotionally healthy person as someone who exhibits flexibility and adaptability to different circumstances, a sense of meaning and affirmation in life, an "understanding that the self is not the center of the universe", compassion and the ability to be unselfish, an increased depth and satisfaction in intimate relationships, and a sense of control over the mind and body. Personal factors: Mental health Layard and others show that the most important influence on happiness is mental health. L.M. Keyes and Shane Lopez illustrate the four typologies of mental health functioning: flourishing, struggling, floundering and languishing. However, complete mental health is a combination of high emotional well-being, high psychological well-being, and high social well-being, along with low mental illness. Although health is part of well-being, some people are able to maintain satisfactory wellbeing despite the presence of psychological symptoms. Personal factors: Physical health Meta-analyses published between 2013 and 2017 show that exercise is associated with reductions in depressive symptoms and fatigue, and with improvements in quality of life, attention, hyperactivity, impulsivity, social functioning, schizophrenic symptoms, and verbal fluency in various special populations. However, aerobic exercise has no significant effect on anxiety disorders. In 2005 a study conducted by Andrew Steptoe and Michael Marmot at University College London found that happiness is related to biological markers that play an important role in health. The researchers aimed to analyze whether there was any association between well-being and three biological markers: heart rate, cortisol levels, and plasma fibrinogen levels. The participants who rated themselves the least happy had cortisol levels that were 48% higher than those who rated themselves as the most happy. The least happy subjects also had a large plasma fibrinogen response to two stress-inducing tasks: the Stroop test, and tracing a star seen in a mirror image.
Repeating their studies three years later, Steptoe and Marmot found that participants who scored high in positive emotion continued to have lower levels of cortisol and fibrinogen, as well as a lower heart rate. In Happy People Live Longer (2011), Bruno Frey reported that happy people live 14% longer, increasing longevity by 7.5 to 10 years, and Richard Davidson's bestseller (2012) The Emotional Life of Your Brain argues that positive emotion and happiness benefit long-term health. However, in 2015 a study building on earlier research found that happiness has no effect on mortality: "This basic belief that if you're happier you're going to live longer. That's just not true." Consistent results are that "apart from good health, happy people were more likely to be older, not smoke, have fewer educational qualifications, do strenuous exercise, live with a partner, do religious or group activities and sleep for eight hours a night." Happiness does however seem to have a protective impact on immunity. The tendency to experience positive emotions was associated with greater resistance to colds and flu in interventional studies irrespective of other factors such as smoking, drinking, exercise, and sleep. Positive emotional states have a favorable effect on mortality and survival in both healthy and diseased populations. Even at the same level of smoking, drinking, exercise, and sleep, happier people seem to live longer. Interventional trials conducted to establish a cause-effect relationship indicate positive emotions to be associated with greater resistance to objectively verifiable colds and flu. Personal factors: Alternative medicine Health consumers sometimes confuse the terms "wellness" and "well-being". Wellness is a term more commonly associated with alternative medicine, which may or may not coincide with gains in subjective well-being. In 2014, the Australian Government reviewed the effectiveness of numerous complementary therapies: they found low-to-moderate quality evidence that the Alexander technique, Buteyko, massage therapy (remedial massage), tai chi and yoga are helpful for certain health conditions. On the other hand, on the balance of evidence, homeopathy, aromatherapy, Bowen therapy, Feldenkrais, herbalism, iridology, kinesiology, Pilates, reflexology, rolfing and shiatsu were classed as ineffective. Personal factors: Fruit and vegetable consumption There is growing evidence that a diet rich in fruits and vegetables is related to greater happiness, life satisfaction, and positive mood as well. This evidence cannot be entirely explained by demographic or health variables including socio-economic status, exercise, smoking, and body mass index, suggesting a causal link. Further studies have found that fruit and vegetable consumption predicted improvements in positive mood the next day, not vice versa. On days when people ate more fruits and vegetables, they reported feeling calmer, happier, and more energetic than normal, and they also felt more positive the next day. Cross-sectional studies worldwide support a relationship between happiness and fruit and vegetable intake. Those eating fruits and vegetables each day have a higher likelihood of being classified as “very happy,” suggesting a strong and positive correlation between fruit and vegetable consumption and happiness.
Whether in South Korea, Iran, Chile, the USA, or the UK, greater fruit and vegetable consumption had a positive association with greater happiness, independent of factors such as smoking, exercise, body mass index, and socio-economic factors. This could be due to protective benefits against chronic diseases and a greater intake of nutrients important for psychological health. Other food and drink practices associated with well-being are probiotics, alcohol, and binge drinking. Gluten and FODMAPs can negatively impact mood in some people. Bupa recommends, for wellbeing, oily fish; foods with tryptophan such as milk, nuts, lentils, whole grain breads, cereals, pasta, soy and chocolate; dark chocolate; and the Mediterranean diet overall, including vegetables, fruits, whole grains, nuts and olive oil. Personal factors: The documentary ‘Food Matters’ includes claims of well-being benefits of raw foods, which have been disputed as pseudoscience. Hedonic well-being Eudaimonic well-being has been found to be empirically distinguishable from hedonic well-being. Personal factors: Identity Individual roles play a part in cognitive well-being. Not only does having social ties improve cognitive well-being, it also improves psychological health. Having multiple identities and roles helps individuals to relate to their society and provides the opportunity for each to contribute more as they increase their roles, therefore creating enhanced levels of cognitive well-being. Each individual role is ranked internally within a hierarchy of salience. Salience is “...the subjective importance that a person attaches to each identity”. The different roles an individual holds have different impacts on their well-being. Within this hierarchy, higher-ranked roles offer more of a source of well-being and lend more meaning to one's overall role as a human being. Personal factors: Ethnic identity may play a role in an individual's cognitive well-being. Studies have shown that “...both social psychological and developmental perspectives suggest that a strong, secure ethnic identity makes a positive contribution to cognitive well-being”. Those in an acculturated society may feel more equal as a human being within their culture, therefore experiencing increased well-being. Personal factors: Optimism and helplessness Learned optimism refers to the development of one's potential for a sanguine outlook. Optimism is learned as personal efforts and abilities are linked to personally desired outcomes. In short, it is the belief that one can influence the future in tangible and meaningful ways. Learned optimism contrasts with learned helplessness, which consists of a belief, or beliefs, that one has no control over what occurs, and that something external dictates outcomes, e.g., success. Optimism is learned by consciously challenging negative self talk. This includes self talk on any event viewed as a personal failure that permanently affects all areas of the person's life. Intrapersonal, or internal, dialogues influence one's feelings. In fact, reports of happiness are correlated with the general ability to "rationalize or explain" social and economic inequalities. Hope is a powerful positive feeling, linked to a learned style of goal-directed thinking. Hope is fostered when a person utilizes both pathways thinking (the perceived capacity to find routes to desired goals) and agency thinking (the requisite motivations to use those routes). Author and journalist J.B.
MacKinnon suggested the cognitive tool of "Vertical Agitation" can assist in avoiding helplessness (e.g., paralysis in the face of Earth's many problems). The concept stemmed from research on denial by sociologist Stanley Cohen. Cohen explained that in the face of massive problems people tend towards learned helplessness rather than confronting the dissonant facts of the matter. Vertical Agitation involves focusing on one part of a problem at a time, while holding oneself accountable for solving the problem – all the way to the highest levels of government, business and society (such as advocating strongly for eco-friendly lightbulbs). This allows each individual in society to make vital "trivial" (read: small) changes, without being intimidated by the work needed to be done as a whole. MacKinnon added that a piecemeal approach also keeps individuals from becoming too 'holier than thou' (harassing friends and family about every possible improvement), while widespread practice of Vertical Agitation would still lead to much improvement. Personal factors: Personal Finance Well-being has traditionally focused on improving physical, emotional and mental quality of life, with little understanding of how dependent they all are on financial health. However, financial stress often manifests itself in physical and emotional difficulties that lead to increased healthcare costs and reduced productivity. A more inclusive paradigm for well-being would acknowledge money as a source of empowerment that maximizes physical and emotional health by reducing financial stress. Such a model would provide individuals with the financial knowledge they need, and enable them to gain valuable insight and understanding regarding their financial habits, as well as their thoughts, feelings, fears and attitudes about money. Through this work, individuals would be better equipped to manage their money and achieve the financial wellness that is essential for their overall well-being. It has been argued that money cannot effectively "buy" much happiness unless it is used in certain ways, and that "Beyond the point at which people have enough to comfortably feed, clothe, and house themselves, having more money – even a lot more money – makes them only a little bit happier." In his book Stumbling on Happiness, psychologist Daniel Gilbert described research suggesting money makes a significant difference to the poor (where basic needs are not yet met), but has a greatly diminished effect once one reaches middle class (i.e. the Easterlin paradox). Every dollar earned is roughly as valuable to happiness as the last up to an annual income of about $75,000; thereafter, each additional dollar yields a diminishing amount of happiness. According to the latest systematic review of the economic literature on life satisfaction, one's perception of their financial circumstances fully mediates the effects of objective circumstances on one's well-being. People overestimate the influence of wealth by 100%. Professor of Economics Richard Easterlin noted that job satisfaction does not depend on salary. In other words, having extra money for luxuries does not increase happiness as much as enjoying one's job or social network. Gilbert is thus adamant that people should go to great lengths to figure out which jobs they would enjoy, and to find a way to do one of those jobs for a living (that is, provided one is also attentive to social ties). Personal factors: Unemployment is detrimental to individual well-being.
However, that does not hold true in countries where unemployment is widespread. Psychology Today reports that the impact of unemployment is dampened in those for whom work is less central to their identity, those who receive less criticism and fewer negative judgments from others, those who can meet their immediate financial obligations and those who do not see their unemployment as high-stress and negative. Other protective factors include the expectation of reemployment, routines that structure one's time and evaluating oneself as worthy, competent and successful. According to the latest systematic review of the economic literature on life satisfaction, unemployment is worse for wellbeing for those who are right-wing or live in high-income countries. Not all unemployment is bad, however: international data from sixteen Western countries indicate that retirement at any age yields large increases in subjective well-being that return to trend by age 70. Executive coaching, a workplace intervention for well-being and performance, is proven to work in certain contexts, according to a 2013 independent quantitative scientific summary synthesising high-quality scientific research on coaching. It tells us that standard effect sizes for the outcomes of performance/skills, well-being, coping, goal-attainment and work/career attitudes range from 0.43 to 0.74. Personal factors: A more recent study has challenged the Easterlin paradox. Using recent data from a broader collection of countries, a positive link was found between GDP and well-being, and there was no point at which wealthier countries' subjective well-being ceased to increase. It was concluded that economic growth does indeed increase happiness. Wealth is strongly correlated with life satisfaction, but the correlation between money and emotional well-being is weak. The pursuit of money may lead people to ignore leisure time and relationships, both of which may contribute to happiness. The pursuit of money at the risk of jeopardizing one's personal relationships and sacrificing enjoyment from one's leisure activities seems an unwise approach to finding happiness. Personal factors: Money, or its hectic pursuit, has been shown to hinder people's savoring ability, or the act of enjoying everyday positive experiences and emotions. In a study looking at working adults, wealthy individuals reported lower levels of savoring ability (the ability to prolong positive emotion) relative to their poorer peers. Studies have routinely shown that nations are happier when people's needs are met. Personal factors: Some studies suggest, however, that people are happier after spending money on experiences, rather than physical things, and after spending money on others, rather than themselves. However, purchases that buy ‘time’, for instance hiring cleaners or cooks, typically increase individual well-being. Lottery winners report higher levels of happiness immediately following the event. But research shows winners' happiness levels drop and return to normal baseline rates within months to years. This finding suggests money does not cause long-term happiness (1978). However, in a more recent British study on lottery prizes between £1,000 and £120,000, a positive effect even two years after the event was found, the return to normal being only partial and varying. One 2011 study of 600 women shows that house owners are no happier than renters. Degree of ownership also matters: “...housing property rights matter for subjective well-being.
Specifically, using subjective well-being data from China, the authors find that homeownership is associated with higher levels of life satisfaction, although this happiness premium is larger for people who have full ownership compared to those who have only a minor ownership stake in their home.” According to the latest systematic review of the economic literature on life satisfaction, living in rural areas seems to have some association with well-being, because the included studies tend to control for income and rural areas tend to be poor. Income has a large effect on happiness and incomes are higher in urban areas, so chasing a rural lifestyle at the expense of income may be a ‘grass is always greener’ move. Personal factors: Adults who live with parents also tend to have poorer levels of well-being. Personal factors: Mindfulness Mindfulness is an intentionally focused awareness of one's immediate experience. "Focused awareness" is a conscious moment-by-moment attention to situational elements of an experience: i.e., thoughts, emotions, physical sensations, and surroundings. An aim of mindfulness is to become grounded in the present moment; one learns to observe the arising and passing of experience. One does not judge the experiences and thoughts, try to "figure things out" and draw conclusions, or change anything – the challenge during mindfulness is simply to observe. Benefits of mindfulness practice include reduction of stress, anxiety, depression, and chronic pain. Personal factors: Ellen J. Langer argued people slip into a state of "mindlessness" by engaging in rote behavior, performing familiar, scripted actions without much cognition, as if on autopilot. Advocates of focusing on present experiences also mention research by psychologist Daniel Gilbert, who suggested daydreaming, instead of a focus on the present, may impede happiness. Fellow researcher Matt Killingsworth found evidence to support the harm of daydreaming. Fifteen thousand participants from around the world provided over 650,000 reports (using an online application on their phones that requested data at random times). Killingsworth found people who reported daydreaming soon afterward reported less happiness; daydreaming is extremely common. Zimbardo (see "Time and happiness" above) extolled the merits of a present focus, and recommended occasional recall of past positive experiences. Reflecting on past positive experiences can influence current mood, and assist in building positive expectations for the future. Personal factors: There is research suggesting a person's focus influences their level of happiness, and that thinking too much about happiness can be counter-productive. Rather than asking "Am I happy?" – a question which, when posed just four times a day, starts to decrease happiness – it might well be better to reflect on one's values (e.g., "Can I muster any hope?"). Asking different questions can assist in redirecting personal thoughts, and perhaps lead to taking steps to better apply one's energies. The personal answer to any particular question can lead to positive actions and hopefulness, which is a powerful and positive feeling. Hopefulness is more likely to foster happiness, while feelings of hopelessness tend to undermine happiness. Personal factors: Todd Kashdan, researcher and author of "Designing Positive Psychology", explained early science's findings should not be overgeneralized or adopted too uncritically.
Mindfulness, to Kashdan, is very resource-intensive processing; he warned it is not simply better at all times. To illustrate, some tasks are best performed with very little conscious thought (e.g., a paramedic performing practiced emergency maneuvers). Nevertheless, development of the skill lends itself to application at certain times, which can be useful for the reasons just described; Professor of Psychology and Psychiatry Richard J. Davidson highly recommends "mindfulness meditation" for use in the accurate identification and management of emotions. Personal factors: Personality The modifiable personality traits which might cause greater well-being have yet to be critically synthesised. However, there is evidence that certain traits are beneficial for individual happiness or performance: locus of control, curiosity, religiousness, spirituality, spiritual striving, sense of urgency, self-compassion, authenticity, growth mindset, positive mental attitudes, grit, and goal orientation, with a meta-analysis concluding that approach rather than avoidance goals are superior for performance, as are prosocial rather than zero-sum goals. Personal factors: Researchers who have reported on the character traits of people with high and low life satisfaction found that the character strengths which predict life satisfaction are zest, curiosity, hope, and humour. Character strengths that do not predict life satisfaction include appreciation of beauty and excellence, creativity, kindness, love of learning, and perspective. Meanwhile, research on character strengths separated by gender indicates the character strengths that predict life satisfaction in men are humour, fairness, perspective, and creativity, while the character strengths that predict life satisfaction in women are zest, gratitude, hope, appreciation of beauty, and love. Personal factors: Certain traits are specifically beneficial to those with certain health issues. Believing in yourself (high self-efficacy) matters for eating disorders, immune response, stress management, pain management and healthy living. Personal factors: In the literature, the positive psychological approach to personality is often correlated with the concepts of personal/psychosocial development and human development; a balanced, strong, mature and proactive personality; character strengths and virtues, evidenced by traits like optimism and energy; pragmatism; active consciousness; assertiveness; free and powerful will; self-determination and self-realization; personal and social autonomy; social adaptability; personal and social efficiency; interpersonal development and professional development; proactive and positive thinking; humanity, empathy and love; emotional intelligence; subjective/psychological well-being; extraversion; happiness; and positive emotions. Many tools for psychological wellness have entered popular culture via the personal development and self-help industry. Positive music will lower distress and pain, but news media consumption is detrimental to happiness. One exception is motivational media, for it has been found that inspiration helps with creativity, productivity and happiness. Reading self-help books is associated with higher well-being; however, there is poor evidence on life coaching. Proactive laughter, as in laughter yoga, increases mood and improves pain tolerance.
In summary, smiling increases attractiveness, calm in stressful situations, retrieval of happy memories, likeability, happiness, perceived happiness (by others), perceived politeness, relaxedness and carefreeness, and perceived honesty, but also perceived stupidity. However, proactively smiling only increases happiness among those who believe smiling is a reaction to feeling happy, rather than a positive intervention. Ed Diener et al. (1999) suggested this equation: positive emotion – negative emotion = subjective well-being. Since the tendency to positive emotion has a correlation of 0.8 with extroversion and the tendency towards negative emotion is indistinguishable from neuroticism, the above equation could also be written as extroversion – neuroticism = happiness (this relation is restated in the brief formal note below). These two traits could account for between 50% and 75% of happiness. These are all referring to the Big Five personality traits model of personality. Personal factors: An emotionally stable (the opposite of neurotic) personality correlates well with happiness. Not only does emotional stability make one less prone to negative emotions, it also predicts higher social intelligence – which helps to manage relationships with others (an important part of being happy, discussed below). Cultivating an extroverted temperament may correlate with happiness for the same reason: it builds relationships and support groups. Some people may be fortunate in this respect, from the standpoint of personality theories that suggest individuals have control over their long-term behaviors and cognitions. Genetic studies indicate genes for personality (specifically extroversion, neuroticism and conscientiousness), and a general factor linking all five traits, account for the heritability of subjective well-being. Recent research suggests there is a happiness gene, the 5-HTT gene. Personal factors: Purpose in life Purpose in life refers broadly to the pursuit of life satisfaction. It has also been found that those with high purpose-in-life scores have strong goals and a sense of direction. They feel there is meaning to their past and present life, and hold beliefs that continue to give their life purpose. Research in the past has focused on purpose in the face of adversity (what is awful, difficult, or absurd in life). Recently, research has shifted to include a focus on the role of purpose in personal fulfillment and self-actualization. Personal factors: The self-control approach, as expounded by C. R. Snyder, focuses on exercising self-control to achieve self-esteem by fulfilling goals and feeling in control of one's own success. This is further reinforced by a sense of intentionality in both efforts and outcomes. The intrinsic motivation approach of Viktor Frankl emphasized finding value in three main areas: creative, experiential, and attitudinal. Creative values are expressed in acts of creating or producing something. Experiential values are actualized through the senses, and may overlap with the hedonistic view of happiness. Attitudinal values are prominent for individuals who are unable to pursue the preceding two classes of values. Attitudinal values are believed to be primarily responsible for allowing individuals to endure suffering with dignity. A personal sense of responsibility is required for the pursuit of the values that give life meaning, but it is the realization that one holds sole responsibility for rendering life meaningful that allows the values to be actualized and life to be given true purpose.
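The following is a minimal formal restatement of the Diener et al. (1999) relation described earlier in this passage. The symbols SWB, PA, NA, E and N are shorthand introduced here for this sketch; they are not notation taken from the original paper.

```latex
% Hedged sketch: the prose relation above, written in symbols.
% SWB = subjective well-being, PA = positive affect, NA = negative affect,
% E = extraversion, N = neuroticism (labels chosen here, not the authors' own).
\[
  \mathrm{SWB} \;=\; \mathrm{PA} - \mathrm{NA},
  \qquad
  \operatorname{corr}(\mathrm{PA},\,\mathrm{E}) \approx 0.8,
  \qquad
  \mathrm{NA} \approx \mathrm{N}
  \;\;\Longrightarrow\;\;
  \text{happiness} \;\approx\; \mathrm{E} - \mathrm{N}.
\]
```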
Determining what is meaningful for one's self provides a sense of autonomy and control, which promotes self-esteem. Purpose in life is positively correlated with education level and volunteerism. However, it has also been found to decrease with age. Personal factors: Purpose in life is highly individual, and what specifically provides purpose will change over the course of one's lifetime. All three of the above theories have self-esteem at their core. Self-esteem is often viewed as the most significant measure of psychological well-being, and is highly correlated with many life-regulating skills. Purpose in life promotes and is a source of self-esteem; it is not a by-product of self-esteem. Personal factors: Self-efficacy Self-efficacy refers to a belief that one's ability to accomplish a task is a function of personal effort. Low self-efficacy, or a disconnect between ability and personal effort, is associated with depression; by comparison, high self-efficacy is associated with positive change, including overcoming abuse, overcoming eating disorders, and maintaining a healthy lifestyle. High self-efficacy also has positive benefits for one's immune system, aids in stress management, and decreases pain. A related concept, personal effectiveness, is primarily concerned with planning and the implementation of methods of accomplishment. Personal factors: Sports According to Bloodworth and McNamee, sports and physical activities are a key contributor to the development of people's well-being. The influence of sports on well-being is conceptualized within a framework which includes impermanence, its hedonistic shallowness and its epistemological inadequacy. Researching the effect of sport on well-being is difficult because some societies are unable to access sports, a deficiency in studying this phenomenon. Personal factors: Suffering Suffering can indicate behavior worthy of change, as well as ideas that require a person's careful attention and consideration. Generally, psychology acknowledges suffering cannot be completely eliminated, but it is possible to successfully manage and reduce suffering. The University of Pennsylvania's Positive Psychology Center explains: "Psychology's concern with remedying human problems is understandable and should certainly not be abandoned. Human suffering demands scientifically informed solutions. Suffering and well being, however, are both part of the human condition, and psychologists should be concerned with both." Positive psychology, inspired by empirical evidence, focuses on productive approaches to pain and suffering, as well as the importance of cultivating strengths and virtues to keep suffering to a minimum (see also Character strengths and virtues (book)). Personal factors: In reference to the Buddhist saying "Life is suffering", researcher and clinical psychologist Jordan Peterson suggested this view is realistic rather than pessimistic: accepting the reality that life is harsh provides freedom from the expectation that one should always be happy. This realization can assist in the management of inevitable suffering. To Peterson, virtues are important because they provide people with essential tools to escape suffering (e.g., the strength to admit dissonant truths to themselves).
Peterson maintained suffering is made worse by false philosophy (i.e., denial that natural suffering is inevitable). Similarly, Seligman believes positive psychology is "not a luxury", saying "most of Positive Psychology is for all of us, troubled or untroubled, privileged or in privation, suffering or carefree. The pleasures of a good conversation, the strength of gratitude, the benefits of kindness or wisdom or spirituality or humility, the search for meaning and the antidote to "fidgeting until we die" are the birthrights of us all." Positive coping is defined as "a response aimed at diminishing the physical, emotional, and psychological burden that is linked to stressful life events and daily hassles". It is found that proper coping strategies will reduce the burden of short-term stress and will help relieve long-term stress. Stress can be reduced by building resources that inhibit or buffer future challenges. For some people, these effective resources could be physiological, psychological or social. Personal factors: Terror management Terror management theory maintains that people suffer cognitive dissonance (anxiety) when they are reminded of their inevitable death. Through terror management, individuals are motivated to seek consonant elements – symbols which make sense of mortality and death in satisfactory ways (i.e. boosting self-esteem). Personal factors: Research has found that strong belief in religious or secular meaning systems affords psychological security and hope. It is moderates (e.g. agnostics, slightly religious individuals) who likely suffer the most anxiety from their meaning systems. Religious meaning systems are especially adapted to manage anxiety about death or dying because they are unlikely to be disconfirmed (for various reasons), they are all-encompassing, and they promise literal immortality. Whether emotional effects are beneficial or adverse seems to vary with the nature of the belief. Belief in a benevolent God is associated with lower incidence of general anxiety, social anxiety, paranoia, obsession, and compulsion, whereas belief in a punitive God is associated with greater symptoms. (An alternative explanation is that people seek out beliefs that fit their psychological and emotional states.) Citizens of the world's poorest countries are the most likely to be religious, and researchers suggest this is because of religion's powerful coping abilities. Luke Galen also supports terror management theory as a partial explanation of the above findings. Galen describes evidence (including his own research) that the benefits of religion are due to strong convictions and membership in a social group. Relational factors: Love and caring The capacity for loving attachments and relationships, especially with parents, is the strongest predictor of well-being later in life. Relational factors: Marriage Seligman writes: "Unlike money, which has at most a small effect, marriage is robustly related to happiness... In my opinion, the jury is still out on what causes the proven fact married people are happier than unmarried people." (pp. 55–56). Married persons report higher levels of happiness and well-being than single people. Other data have shown a spouse's happiness depends on the happiness of their partner. When asked, spouses reported similar happiness levels to each other. The data also show the spouses' happiness levels fluctuate similarly to one another.
If the husband is having a bad week, the wife will similarly report she had a bad week. There is little data on alternatives like polyamory, although one study stated wife order in polygyny did not have a substantial effect on life or marital satisfaction overall. This study also found younger wives were happier than older wives. Relational factors: On the other hand, at least one large study in Germany found no difference in happiness between married and unmarried people. Studies have shown that married couples are consistently happier and more satisfied with their life than those who are single. Some research findings have indicated that marriage is the only real significant bottom-up predictor of life satisfaction for men and women, and that those people who have a higher life satisfaction prior to marriage tend to have a happier marriage. Self-reported satisfaction typically drops as the years of marriage roll on, particularly for couples who have children compared to those who do not. The reasons for this decline include a drop in affectionate behaviour. One team of researchers from Northwestern University, who summarised the literature in 2013, identified that this trend does not reverse throughout the marital period. Surprisingly, there has been a steady decline in the positive relationship between marriage and well-being in the United States since the 1970s. This decline is due to women reporting being less happy than previously and single men reporting being happier than previously. Research does exist, however, suggesting that compared to single people, married people have better physical and psychological health and tend to live longer. With this, a two-factor theory of love was developed by Barnes and Sternberg. This theory is composed of two components: passionate love and companionate love. Passionate love is considered to be an intense longing for a loved one. This love is often experienced through joy and sexual fulfillment, or even through rejection. On the other hand, companionate love is associated with affection, friendship and commitment. Stutzer and Frey (2006) found that the absence of loneliness and the emotional support that promotes self-esteem are both important aspects that contribute to individual well-being within marriage. Both passionate and companionate love are the foundations for every variety of love that one may experience. When passionate and companionate love are compromised in a marital relationship, satisfaction decreases and the likelihood of divorce increases. In other words, the lack of positive support and validation increases the risk of divorce. Relational factors: Because of the expansive research done on the significance of social support within a marriage, it is important to understand that this research was inspired by the attachment theory perspective. Attachment theory stresses the importance of support and caregiving in a relationship for the development of trust and security. Attachment theory, as conceptualized by Collins and Feeney (2000), is an interpersonal, transactional process that involves one partner's caregiving responses. Relational factors: Parenthood While the mantle of parenting is sometimes held up as the necessary path of adulthood, study findings are actually mixed as to whether parents report higher levels of happiness relative to non-parents. Folk wisdom suggests a child brings partners closer; research has found couples actually become less satisfied after the birth of the first child.
The joys of having a child are overshadowed by the responsibilities of parenthood. Based on quantitative self-reports, researchers found parents prefer doing almost anything else to looking after their children. By contrast, parents' self-reported levels of happiness are higher than those of non-parents. This may be due to already happy people having more children than unhappy people. In addition, it might also be that, in the long term, having children gives more meaning to life. One study found having up to three children increased happiness among married couples, but not among other groups with children. Proponents of childfreedom maintain this is because one can enjoy a happy, productive life without the trouble of ever being a parent. In a research study by Pollmann-Schult (2014) on 13,093 Germans, it was found that when finances and time costs are held constant, parents are happier and show greater life satisfaction than non-parents. By contrast, many studies found having children makes parents less happy. Compared with non-parents, parents with children have lower levels of well-being and life satisfaction until children move out of the household, at which point parents have higher well-being and satisfaction. In addition, parents report more feelings of depression and anxiety than non-parents. However, when adults without children are compared to empty-nest parents, parenthood is positively associated with emotional well-being. People found parenthood to be more stressful in the 1970s than they did in the 1950s. This is thought to be because of social changes in regard to employment and marital status. Males apparently become less happy after the birth of a child due to added economic pressure and taking on the role of being a parent. A conflict between partners can arise when the couple does not desire traditional roles, or has an increasing number of roles. Unequal responsibilities of child-rearing between men and women account for this difference in satisfaction. Fathers who worked and shared an equal part in child-raising responsibilities were found to be the least satisfied. Research shows that single parents have higher levels of distress and report more mental health problems than married persons. Researchers implemented the Huta & Ryan Scale: Four Eudaimonic Measurement Questionnaire to analyze participants' eudaimonic motives, through motivation towards activities. The investigation was conducted on Canadian university undergraduates. The four eudaimonic pursuits as described by Huta & Ryan are: "Seeking to pursue excellence or a personal ideal"; "Seeking to use the best in yourself"; "Seeking to develop a skill, learn, or gain insight into something"; and "Seeking to do what you believe in". The study determined that participants derived well-being from eudaimonic pursuits only if their parents had role-modeled eudaimonia, but not if their parents had merely verbally endorsed eudaimonia. Studies were also conducted on responsiveness and demandingness. The studies' participants were American university undergraduates. The terms are described as follows: responsiveness satisfies the basic psychological need for autonomy. This is relevant to eudaimonia because it supports and implements the values of initiative, effort, and persistence, and the integration of one's behaviour, values, and true self. Autonomy is an important psychological factor because it provides the individual with independence.
Demandingness cultivates many of the qualities needed for eudaimonia, including structure, self-discipline, responsibility, and vision. Responsiveness and demandingness are reported to be good aspects of parenting. The studies report both of these qualities as important factors in well-being. The study addressed parenting style by assessing and using adaptations of Baumrind's Parent Behaviour Rating Interview. Adaptations of this interview were made into a seventy-five-question survey; participants answered questions organized into fifteen subscales. The study determined that eudaimonically oriented participants reported their parents had been both demanding and responsive towards them. A multiple regression showed that demandingness and responsiveness together explained as much as twenty-eight percent of the variance in eudaimonia, suggesting parenting played a major role in the development of this pursuit. This supported the expectation that eudaimonia is cultivated when parents encourage internal structure, self-discipline, responsibility, and vision, and simultaneously fulfill a child's needs for autonomy. The research concludes that parents who want their children to experience eudaimonia must first themselves "mentor" their children in the approaches to attaining eudaimonia. Encouraging eudaimonia verbally is not sufficient to carry it into adulthood; parents must clearly role-model eudaimonia for it to truly be present in the child's life. Relational factors: Social ties In the article "Finding Happiness after Harvard", George Vaillant concluded a study on what aspects of life are important for "successful living". In the 1940s, Arlie Bock, while in charge of the Harvard Health Services, started a study, selecting 268 Harvard students from the graduating classes of 1942, '43, and '44. He sought to identify the aspects of life contributing to "successful living". In 1967, the psychiatrist George Vaillant continued the study, undertaking follow-up interviews to gauge the lives of many of the students. In 2000, Vaillant again interviewed these students as to their progress in life. Vaillant observed health, close relationships, and how participants dealt with their troubles, and found a key aspect of successful living is healthy and strong relationships. A widely publicized study from 2008 in the British Medical Journal reported happiness in social networks may spread from person to person. Researchers followed nearly 5000 individuals for 20 years in the long-standing Framingham Heart Study and found clusters of happiness and unhappiness that spread up to 3 degrees of separation on average. Happiness tended to spread through close relationships like friends, siblings, spouses, and next-door neighbors; researchers reported happiness spread more consistently than unhappiness through the network. Moreover, the structure of the social network appeared to affect happiness, as people who were very central (with many friends, and friends of friends) were significantly happier than those on the network periphery. People closer to others are more likely to be happy themselves. Overall, the results suggest happiness can spread through a population like a virus. Having a best friend buffers one's negative life experiences. When one's best friend is present, cortisol levels are decreased and feelings of self-worth increase. Neuroeconomist Paul Zak studies morality, oxytocin, and trust, among other variables.
Based on his research findings, Zak recommends that people hug others more often to get into the habit of feeling trust. He explains: "eight hugs a day, you'll be happier, and the world will be a better place". Recently, Anderson et al. found that sociometric status (the amount of respect one has from one's face-to-face peer group) is significantly and causally related to happiness as measured by subjective well-being. Institutional factors: Education Education and intelligence Research suggests neither a good education nor a high IQ reliably increases happiness. Anders Ericsson argued an IQ above 120 has a diminishing influence on success. Presumably, IQs above 120 do not additionally cause other happiness indicators like success (with the exception of careers like theoretical physics, where high IQs are more predictive of success). Above that IQ level, other factors, like social skills and a good mentor, matter more. As these relate to happiness, intelligence and education may simply allow one to reach a middle-class level of need satisfaction (as mentioned above, being richer than this seems to hardly affect happiness). In the study Using Theatrical Concepts for Role-plays with Educational Agents, Klesen describes how role-playing embeds information and educational goals and causes people to learn unintentionally. Studies have shown that enjoyment in things as simple as role-playing increases a person's IQ and happiness. Martin Seligman has said: "As a professor, I don't like this, but the cerebral virtues—curiosity, love of learning—are less strongly tied to happiness than interpersonal virtues like kindness, gratitude and capacity for love." Educational goals John White (2013) investigated the educational goals at public schools in Britain. School education involves not only cognitive and conceptual learning but also the development of social skills and personal growth. Ideally, children develop self-confidence and create purpose for themselves. According to White, in the past schools only focused on knowledge and education, but Britain has now moved in a broader direction. White's Every Child Matters initiative seeks to enhance children's well-being across the range of children's services. Institutional factors: Physical education As a basic building block to a better existence, positive psychology aims to improve the quality of experiences. Within its framework, students could learn to become excited about physical activity. Playing comes naturally to children; positive psychology seeks to preserve this zest (a sense of excitement and motivation for life) for movement in growing and developing children. If offered in an interesting, challenging and pleasurable way, physical activity could thus help students internalize an authentic feeling of happiness. Positive psychology's approach to physical activity could give students the means of acquiring an engaged, pleasant and meaningful life. Institutional factors: School education Positive psychology is beneficial to schools and students because it encourages individuals to strive to do their best, whereas scolding has the opposite effect. Clifton and Rath discussed research conducted by Dr. Elizabeth Hurlock in 1925, where fourth, fifth and sixth graders were either praised, criticized or ignored, based on their work on math problems. Praised students improved by 71%, those criticized improved by 19%, and students provided with no feedback improved a mere 5%. Praise seems an effective method of fostering improvement. 
Institutional factors: According to Clifton and Rath, ninety-nine out of one hundred people prefer the influence of positive people. The benefits include increased productivity and contagious positive emotions, which assist one in working to the best of one's abilities. Even a single negative person can ruin the entire positive vibe in an environment. Clifton and Rath cited ‘positive emotions as an essential daily requirement for survival’. Institutional factors: In 2008, in conjunction with the Positive Psychology Center at the University of Pennsylvania, a whole-of-school implementation of Positive Psychology was undertaken by Geelong Grammar School (Victoria, Australia). This involved training of teaching staff in the principles and skills of positive psychology. Ongoing support was provided by The Positive Psychology Center staff, who remained in residence for the entire year. Staats, Hupp and Hagley (2008) used positive psychology to explore academic honesty. They identified positive traits displayed by heroes, then determined if the presence of these traits in students predicted future intent to cheat. The result of their research was ‘an effective working model of heroism in the context of the academic environment’ (Staats, Hupp & Hagley, 2008). Institutional factors: School grades of children According to a study reported in the NY Post newspaper, 48% of parents reward their children's good grades with cash or something else of meaning. Among many families in the United States, this is controversial. Although psychology experts support the offer of rewards for good behavior as a better alternative to the use of punishment for bad behavior, in some circumstances families cannot afford to give their children an average of 16 dollars for every good grade earned. Alternatives to money include allowing a child extra time on a computer or staying up later than usual. Some psychology experts believe the best reward is praise and encouragement, because material rewards can cause long-term negative effects for children. Institutional factors: A study regarding rewards for children, conducted in 1971 by psychologist Edward L. Deci at the University of Rochester, is still referenced today. Featured in the New York Times, it focused on the short- and long-term effects of rewards for positive behavior. Deci suggested rewards for positive behavior are an effective incentive for only a short period. At the outset, rewards can support motivation to work hard and strive towards personal goals. However, once rewards ceased, children showed less interest in the task relative to participants who never received rewards. Deci pointed out that, at a young age, children's natural instinct is to resist people who try to control their behavior, which he cited as support for his conclusion that rewards for good behavior have limited effectiveness. Institutional factors: In contrast, the New York Times featured research findings that supported the merits of offering rewards to children for good behavior. Expert economists argued children experiencing trouble with their behavior or schoolwork should have numerous helpful options, including rewards. Although children might well experience an initial attraction to financial or material rewards, a love for learning could develop subsequently. 
Despite the controversy regarding the use of rewards, some experts believe the best way to motivate a child is to offer rewards at the beginning of the school year, but if unsuccessful, they recommend teachers and parents stop using the reward system. Because of individual differences among children, no one method will work for everyone. Some children respond well to the use of rewards for positive behavior, while others show negative effects. The results seem to depend on the person. Institutional factors: Youth development Positive youth development focuses on the promotion of healthy development rather than viewing youth as prone to problems needing to be addressed. This is accomplished through programs and efforts by communities, schools, and government agencies. Institutional factors: Work It has been argued that happiness at work is one of the driving forces behind positive outcomes at work, rather than just being a resultant product. Despite a large body of positive psychological research into the relationship between happiness and productivity, happiness at work has traditionally been seen as a potential by-product of positive outcomes at work, rather than a pathway to success in business. However, a growing number of scholars, including Boehm and Lyubomirsky, argue that it should be viewed as one of the major sources of positive outcomes in the workplace. Institutional factors: Human Resource Management A practical application of positive psychology is to assist individuals and organizations in identifying strengths so as to increase and sustain well-being. Therapists, counselors, coaches, various psychological professionals, HR departments, business strategists, and others are using new methods and techniques to broaden and build upon the strengths of a wide population of individuals. This includes those not suffering from mental illness or disorder. Institutional factors: Workplace Positive psychology has been implemented in business management practice, but has faced challenges. Wong & Davey (2007) noted managers can introduce positive psychology to a workplace, but they might struggle with positive ways to apply it to employees. Furthermore, for employees to welcome and commit to positive psychology, its application within an organization must be transparent. Managers must also understand the implementation of positive psychology will not necessarily combat any commitment challenges that exist. However, with its implementation employees might become more optimistic and open to new concepts or management practices. In their article "The Benefits of Frequent Positive Affect: Does Happiness Lead to Success?", S. Lyubomirsky et al. report: "Study after study shows that happiness precedes important outcomes and indicators of thriving, including fulfilling and productive work". Institutional factors: Positive psychology, when applied correctly, can provide employees with a greater opportunity to use skills and vary work duties. However, changing work conditions and roles can lead to stress among employees if they are improperly supported by management. This is particularly true for employees who must meet the expectations of organizations with unrealistic goals and targets. Thomas and Tasker (2010) showed that less worker autonomy, fewer opportunities for development, less-enriched work roles, and lower levels of supervisor support reflected the effect of industry growth on job satisfaction. Can an organization implement positive change? Lewis et al. 
(2007) developed appreciative inquiry (AI), which is an integrated, organizational-level methodology for approaching organizational development. Appreciative inquiry is based on the generation of organizational resourcefulness, which is accomplished by accessing a variety of human psychological processes, such as positive emotional states, imagination, social cohesion, and the social construction of reality. A relatively new practice in the workplace is recruiting and developing people based on their strengths (what they love to do, are naturally good at, and what energises them). Standard Chartered Bank pioneered this approach in the early 2000s. More and more organisations are realising the benefit of recruiting people who are in their element in the job, as opposed to simply having the right competencies for the job. Aviva, Morrisons (a large UK supermarket) and Starbucks have all adopted this approach. Psychologist Howard Gardner has extensively researched the merit of undertaking good work at one's job. He suggested young generations (particularly in the United States) are taught to focus on the selfish pursuit of money for its own sake, although having money does not engender happiness, and psychological studies show a strong correlation between wealth and the experience of intensely negative emotions. Gardner's proposed alternatives loosely follow the pleasant/good/meaningful life classifications outlined above; he believes young people should be trained to pursue excellence in their field, as well as engagement (see flow, above) in accordance with their moral belief systems. Societal factors: Criminology Offender rehabilitation Traditional work with offenders has focused on their deficits (e.g., with respect to socialization and schooling) and other "criminogenic" risk factors. Rehabilitation more often than not has taken the form of forced treatment or training, ostensibly for the good of the offender and the community. Arguably, this approach has shortcomings, suggesting a need to make available additional positive options to treatment staff so they can best assist offenders, and so that offenders can better find their way forward. Positive psychology has made recent inroads with the advent of the "Good Lives Model", developed by Tony Ward, Shadd Maruna, and others. With respect to rehabilitation: "Individuals take part ... because they think that such activities might either improve the quality of their life (an intrinsic goal) or at least look good to judges, parole boards and family members (an extrinsic goal)." Positive criminology and positive victimology Positive criminology and positive victimology are conceptual approaches, developed by the Israeli criminologist Natti Ronel and his research team, that follow principles of positive psychology and apply them to the fields of criminology and victimology, respectively. Positive criminology and victimology both place an emphasis on social inclusion and on unifying and integrating forces at individual, group, social and spiritual levels that are associated with the limiting of crime and recovery from victimization. In traditional approaches, the study of crime, violence and related behaviors emphasizes the negative aspects in people's lives that are associated with deviance, criminality and victimization. A common understanding is that human relationships are affected more by destructive encounters than by constructive or positive ones. 
Positive criminology and victimology argue that a different approach is viable, based on three dimensions – social integration, emotional healing and spirituality – that constitute positive direction indicators. Societal factors: Economics In economics, the term well-being is used for one or more quantitative measures intended to assess the quality of life of a group, for example, in the capabilities approach and the economics of happiness. As with the related cognate terms 'wealth' and 'welfare', economics sources often contrast the state with its opposite. The study of well-being is divided into subjective well-being and objective well-being. Societal factors: Political views Psychologists in the happiness community feel politics should promote population happiness. Politics should also consider the level of human happiness among future generations, concern itself with life expectancy, and focus on the reduction of suffering. Based on political affiliation, some studies argue conservatives, on average, are happier than liberals. A potential explanation is that greater acceptance of income inequalities in society leads to a less worried nature. Luke Galen, Associate Professor of Psychology at Grand Valley State University, mentioned political commitments as important because they are a sort of secular world view that, like religion, can be generally beneficial in coping with death anxiety (see also Terror management theory and religion and happiness). Environmental factors: Living in an environment with more green spaces is associated with higher well-being, partly due to the beneficial effects on psychological relaxation, stress alleviation, increased physical activity, and reduced exposure to air pollutants and noise, among others. According to the latest systematic review of the economic literature on life satisfaction, pollution is bad for one's well-being. Exposure to outdoor air pollution and smoke from fireplace chimneys is linked to dementia and other health risks. Climate change mitigation measures have mostly positive direct effects on human well-being. Cultural factors: Culture People base their own well-being in relation to their environment and the lives of others around them. Well-being is also subject to how one feels other people in their environment perceive them, whether that perception is positive or negative. Whether a culture evaluates well-being against internal or external standards depends on that culture's type. According to Diener and Suh, collectivistic cultures are more likely to use norms and the social appraisals of others in evaluating their subjective well-being, whereas those [individualistic] societies are more likely to heavily weight the internal [frame of reference] arising from one's own happiness. Cultural factors: Different views on well-being Various cultures have various perspectives on the nature of positive human functioning. For example, studies on aversion to happiness, or fear of happiness, indicate that some individuals and cultures are averse to the experience of happiness, because they believe happiness may cause bad things to happen. Empirical evidence indicates that there are fundamental differences in the ways well-being is construed in Western and non-Western cultures, including Islamic and East Asian cultures. Exploring various cultural perspectives on well-being, Joshanloo (2014) identifies and discusses six broad differences between Western and non-Western conceptions of well-being. 
For example, whereas Western cultures tend to emphasize the absence of negative emotions and autonomy in defining well-being, Eastern cultures tend to emphasize virtuous or religious activity, self-transcendence, and harmony. Eunkook M. Suh (University of California) and Shigehiro Oishi (University of Minnesota; now at University of Virginia) examined differences in happiness at an international level and different cultures' views on what creates well-being and happiness. In a study of over 6,000 students from 43 nations that measured mean life satisfaction on a scale of 1–7, the Chinese ranked lowest at 3.3 and the Dutch scored the highest at 5.4. When asked how much subjective well-being was ideal, the Chinese ranked lowest at 4.5, and Brazilians highest at 6.2, on a scale of 1–7. The study had three main findings: (1) People living in individualistic, rather than collectivist, societies are happier; (2) Psychological attributes referencing the individual are more relevant to Westerners; (3) Self-evaluations of happiness depend on different cues and experiences from one's culture. The results of a study by Chang E. C. showed that Asian Americans and Caucasian Americans have similar levels of optimism but Asian Americans are far more pessimistic than Caucasian Americans. However, there were no major differences in depression across cultures. On the other hand, pessimism was positively linked to problem-solving behaviors for Asian Americans, but was negatively linked for Caucasian Americans. Cultural factors: Religion and spirituality Religiousness and spirituality are closely related but distinct topics. Religion is any organized, and often institutionalized, system of cultural practices and beliefs pertaining to the meaning of human existence. It occurs within a traditional context such as a formal religious institution. Spirituality, on the other hand, is a general term applied to the process of finding meaning and a better understanding of one's place in the universe. It is the individual or collective search for that which is sacred or meaningful in life. One may therefore be religious but not spiritual, and vice versa. Cultural factors: Religion There have been some studies of how religion relates to happiness. Causal relationships remain unclear, but more religion is seen in happier people. Consistent with PERMA, religion may provide a sense of meaning and connection to something bigger, beyond the self. Religion may also provide community membership and hence relationships. Another component may have to do with ritual. Religion and happiness have been studied by a number of researchers, and religion features many elements addressing the components of happiness, as identified by positive psychology. Its association with happiness is facilitated in part by the social connections of organized religion, and by the neuropsychological benefits of prayer and belief. Cultural factors: There are a number of mechanisms through which religion may make a person happier, including social contact and support that result from religious pursuits, the mental activity that comes with optimism and volunteering, learned coping strategies that enhance one's ability to deal with stress, and psychological factors such as a "reason for being." 
It may also be that religious people engage in behaviors related to good health, such as less substance abuse, since the use of psychotropic substances is sometimes considered abuse. The Handbook of Religion and Health describes a survey by Feigelman (1992) that examined happiness in Americans who have given up religion, in which it was found that there was little relationship between religious disaffiliation and unhappiness. A survey by Kosmin & Lachman (1993), also cited in this handbook, indicates that people with no religious affiliation appear to be at greater risk for depressive symptoms than those affiliated with a religion. A review of studies by 147 independent investigators found that "the correlation between religiousness and depressive symptoms was -.096, indicating that greater religiousness is mildly associated with fewer symptoms." The Legatum Prosperity Index reflects the repeated finding of research on the science of happiness that there is a positive link between religious engagement and well-being: people who report that God is very important in their lives are on average more satisfied with their lives, after accounting for their income, age and other individual characteristics. Surveys by Gallup, the National Opinion Research Centre and the Pew Organisation conclude that spiritually committed people are twice as likely to report being "very happy" as the least religiously committed people. An analysis of over 200 social studies contends that "high religiousness predicts a lower risk of depression and drug abuse and fewer suicide attempts, and more reports of satisfaction with sex life and a sense of well-being. Cultural factors: However, the links between religion and happiness are always very broad in nature, highly reliant on scripture and small sample number. To that extent, there is a much larger connection between religion and suffering (Lincoln 1034)." And a review of 498 studies published in peer-reviewed journals concluded that a large majority of them showed a positive correlation between religious commitment and higher levels of perceived well-being and self-esteem and lower levels of hypertension, depression, and clinical delinquency. A meta-analysis of 34 recent studies published between 1990 and 2001 found that religiosity has a salutary relationship with psychological adjustment, being related to less psychological distress, more life satisfaction, and better self-actualization. Finally, a recent systematic review of 850 research papers on the topic concluded that "the majority of well-conducted studies found that higher levels of religious involvement are positively associated with indicators of psychological well-being (life satisfaction, happiness, positive affect, and higher morale) and with less depression, suicidal thoughts and behaviour, drug/alcohol use/abuse." However, there remains strong disagreement among scholars about whether the effects of religious observance, particularly attending church or otherwise belonging to religious groups, are due to the spiritual or the social aspects—i.e. those who attend church or belong to similar religious organizations may well be receiving only the effects of the social connections involved. While these benefits are real enough, they may thus be the same one would gain by joining other, secular groups, clubs, or similar organizations. Religiousness has often been found to correlate with positive health attributes. 
People who are more religious show better emotional well-being and lower rates of delinquency, alcoholism, drug abuse, and other social problems. Six separate factors are cited as evidence for religion's effect on well-being: religion (1) provides social support, (2) supports healthy lifestyles, (3) promotes personality integration, (4) promotes generativity and altruism, (5) provides unique coping strategies, and (6) provides a sense of meaning and purpose. Many religious individuals experience emotions that create positive connections among people and allow them to express their values and potential. Four such emotions are known as "sacred emotions," which are said to be (1) gratitude and appreciation, (2) forgiveness, (3) compassion and empathy, and (4) humility. Social interaction is necessarily a part of the religious experience. Religiosity has been found to correlate positively with prosocial behavior in trauma patients, and prosocial behavior is furthermore associated with well-being. It also has stronger associations with well-being in individuals genetically predisposed towards social sensitivity in environments where religion prioritizes social affiliation. It has also been linked to greater resilience against stress as well as higher measures of self-actualization and success in romantic relationships and parental responsibilities. These benefits, while being correlational, may come about as a result of becoming more religiously involved. The benefit of having a secure social group likely plays a key part in religion's positive effects. One form of Christian counseling uses religion through talk therapy and assessments to promote mental health. In another instance, people who were not Buddhist, but were exposed to Buddhist concepts, scored higher on measures of outgroup acceptance and prosociality. This effect was found not only in Western countries, but also in places where Buddhism is prevalent, indicating a general association of Buddhism with acceptance. This finding seems to indicate that merely encountering a religious belief system such as Buddhism may allow some of its effects to be transferred to nonbelievers. Cultural factors: However, many disagree that the benefits of the religious experience are due to beliefs, and some find there to be no conclusive psychological benefits of belief at all. For example, the health benefit that the elderly gain from going to church may in fact be the reason they are able to go to church; the less healthy cannot leave their homes. Meta-analyses have found that studies purporting to show the beneficial results of religiosity often fail to fully represent data correctly due to a number of issues such as self-report bias, the use of inappropriate comparison groups, and the presence of criterion contamination. Other studies have disputed the efficacy of intercessory prayer in positively affecting the health of those being prayed for. They have shown that, when scientifically rigorous studies are performed (by randomizing the patients and preventing them from knowing that they are being prayed for), there is no discernible effect. Religion has power as a cohesive social force, and whether or not it is always beneficial is debated. Irrespective of a group's beliefs, many find that simply belonging to a tight social group reduces anxiety and mental health problems. 
In addition, there may be a degree of self-selectivity amongst the religious; the behavioral benefits they display may simply be common aspects of those who choose to or are able to practice religion. As a result, whether or not religion can be prescribed scientifically as a means of self-betterment is unclear. Cultural factors: Spirituality While religion is often formalised and community-oriented, spirituality tends to be individually based and not as formalised. In a 2014 study, 320 children, ages 8–12, in both public and private schools, were given a Spiritual Well-Being Questionnaire assessing the correlation between spirituality and happiness. Spirituality – and not religious practices (praying, attending church services) – correlated positively with the child's happiness; the more spiritual the child was, the happier the child was. Spirituality accounted for about 3–26% of the variance in happiness. Meditation has been found to lead to high activity in the brain's left prefrontal cortex, which in turn has been found to correlate with happiness. A study using the Oxford happiness questionnaire on Brahma Kumaris Raja yoga meditators showed them having higher happiness than the control group. Yongey Mingyur Rinpoche has said that neuroscientists have found that, with meditation, an individual's happiness baseline can change. Many people describe themselves as both religious and spiritual, but spirituality represents just one particular function of religion. Spirituality as related to positive psychology can be defined as "a search for the sacred". What is defined as sacred can be related to God, life itself, or almost any other facet of existence. It simply must be viewed as having spiritual implications which are transcendent of the individual. Spiritual well-being addresses this human need for transcendence and involves social as well as existential well-being. Spiritual well-being is associated with various positive outcomes such as better physical and psychological well-being, lower anxiety, less depression, self-actualization, positive relationships with parents, and higher rates of positive personality traits and acceptance. Researchers have cautioned that one should differentiate between correlative and causal associations between spirituality and psychology. Reaching the sacred as a personal goal, also called spiritual striving, has been found to correlate highest with well-being compared to other forms of striving. This type of striving can improve a sense of self and relationships and creates a connection to the transcendent. Additionally, multiple studies have shown that self-reported spirituality is related to lower rates of mortality and depression and higher rates of happiness. Currently, most research on spirituality examines ways in which spirituality can help in times of crisis. Spirituality has been found to remain constant when experiencing traumatic events and/or life stressors such as accidents, war, sickness, and death of a loved one. When confronted with an obstacle, people might turn to prayer or meditation. Coping mechanisms involving spirituality include meditation, creating boundaries to preserve the sacred, spiritual purification to return to the righteous path, and spiritual reframing which focuses on maintaining belief. One clinical application of spirituality and positive psychology research is the "psychospiritual intervention," which represents the potential that spirituality has to increase well-being. 
These coping mechanisms that aim to preserve the sacred have been found by researchers to increase well-being and return the individual to the sacred. Overall, spirituality is a process that occurs over a lifetime and includes searching, conserving, and redefining what is sacred in an extremely individualized manner. It does not always have a positive effect and, in fact, has been associated with very negative events and life changes. Research on spirituality is lacking, but it is necessary because spirituality can assist in enhancing the experiences of the uncontrollable parts of life. Other factors: Modernity Much research has pointed at the rising rates of depression, leading people to speculate that modernization may be a factor in the growing percentage of depressed people. One study found that women in urban America were much more likely to experience depression than those in rural Nigeria. Other studies found that the positive correlation between a country's GDP per capita, as a quantitative measure of modernization, and lifetime risk of a mood disorder trended toward significance (p=0.06). Many people believe it is the increased number of pressures and expectations, increased isolation, increased individualism, and increased inactivity that contribute to higher rates of depression in modern societies. Other factors: Weather Some evidence suggests sunnier climates do not predict happiness. In one study, both Californians and Midwesterners expected the former's happiness ratings to be higher due to a sunnier environment. In fact, the Californian and Midwestern happiness ratings did not show a significant difference. Other research has found that wind, sunlight, precipitation and air temperature have a small impact on mood, though some people appear to be affected to a large degree (a sensitivity not explained by five-factor personality traits). A study of Dutch teenagers found that the effect of weather on mood depends on whether they were summer lovers, summer haters, rain haters, or unaffected by weather. Other researchers say the necessary minimum daily dose of sunlight is as little as 30 minutes. That is not to say weather is never a factor for happiness. Perhaps the changing norms of sunlight cause seasonal affective disorder, which undermines one's level of happiness. Additional future research: Positive psychology research and practice is currently conducted and developed in various countries throughout the world. To illustrate, in Canada, Charles Hackney of Briercrest College applies positive psychology to the topic of personal growth through martial arts training; Paul Wong, president of the International Network on Personal Meaning, is developing an existential approach to positive psychology, which is framed within second-wave positive psychology (PP 2.0). The research program ‘Understanding Positive Emotions’ at Human Science Lab, London, investigates how material well-being and perceptual well-being work as relative determinants in conditioning our mind for positive emotions. Cognitive and behavioral change, although sometimes slight and complex, can produce an 'intense affect'. Additional future research: Isen (2009) remarked that further progress requires suitable research methods and appropriate theories on which to base contemporary research. 
Chang (2008) suggested that researchers have a number of paths to pursue regarding the enhancement of emotional intelligence, even though emotional intelligence does not guarantee the development of positive affect; in short, more study is required to track the gradient of positive affect in psychology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Clash of Codes** Clash of Codes: Clash of Codes is a term in sports used to describe a match played between two teams who play different codes of the same sport. Games are usually played with the codes changing at half-time, or across two matches of the different codes with an aggregate score. Usually associated with the codes of football, and especially rugby, several games have occurred throughout history. American Football vs Rugby League: Jacksonville Axemen vs Jacksonville Knights The first clash of codes match between American football and rugby league was played between the Jacksonville Axemen and the Jacksonville Knights. The league side Axemen defeated the American football side Knights 38–27. Rugby League vs Rugby Union: Kangaroos vs Wallabies In September 1909 the national league and union sides of Australia played a four-match test series resulting in two wins apiece for either side. The exact format and rules of the games are unknown. All games were played at the Agricultural Oval in Sydney. Bath vs Wigan The first clash of codes game in the UK between rugby league and rugby union received a lot of media attention and was labelled The Clash of the Codes. The game was between Bath and Wigan and saw league side Wigan win with an aggregate score of 101–50 across two games. Rugby League vs Rugby Union: St Helens vs Sale Sharks In January 2003, St. Helens took on Sale Sharks in a single game played at Knowsley Road, which had one half under league rules and the other under union rules. At the time, Sale had been a professional side for almost a decade, which helped build the strength and fitness necessary for them to adapt to the constant tackling required in rugby league; they were also able to call on the services of a number of ex-league players, most notably Jason Robinson, who had played for Wigan in 1996, factors which were thought to have resulted in a much closer game compared to that of Bath vs Wigan. Having built up a 41–0 lead under union rules, St Helens were restricted to only 39 points under league rules. Rugby League vs Rugby Union: Salford Red Devils vs Sale Sharks In February 2014, eleven years after the first dual-code single game, it was announced that the AJ Bell Stadium would see another fixture, scheduled for 26 August 2014, between the facility's two tenants, Salford Red Devils and Sale Sharks, to raise money for various charities. However, in July the same year it was subsequently announced that the game was being postponed owing to the difficulties of the two clubs' respective league schedules - the original date was between two important fixtures towards the end of Salford's league season, while Sale had yet to start their own league season. Rugby League vs Rugby Union: Western Suburbs vs Randwick In October 2015, Western Suburbs Magpies played Randwick DRUFC in Australia's first clash of codes game between domestic teams in what was described as "hybrid rugby". The game was 13-a-side and featured league rules when the teams were in their own half and union rules when in the opposition half, as well as 60-second transitions. The league side won 47–19, with union points for tries, and league points for conversions, penalties, and drop goals.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ethylamine** Ethylamine: Ethylamine, also known as ethanamine, is an organic compound with the formula CH3CH2NH2. This colourless gas has a strong ammonia-like odor. It condenses just below room temperature to a liquid miscible with virtually all solvents. It is a nucleophilic base, as is typical for amines. Ethylamine is widely used in the chemical industry and in organic synthesis. Synthesis: Ethylamine is produced on a large scale by two processes. Most commonly, ethanol and ammonia are combined in the presence of an oxide catalyst: CH3CH2OH + NH3 → CH3CH2NH2 + H2O. In this reaction, ethylamine is coproduced together with diethylamine and triethylamine. In aggregate, approximately 80 million kilograms per year of these three amines are produced industrially. It is also produced by reductive amination of acetaldehyde: CH3CHO + NH3 + H2 → CH3CH2NH2 + H2O. Ethylamine can be prepared by several other routes, but these are not economical. Ethylene and ammonia combine to give ethylamine in the presence of a sodium amide or related basic catalysts. Synthesis: H2C=CH2 + NH3 → CH3CH2NH2. Hydrogenation of acetonitrile, acetamide, and nitroethane affords ethylamine. These reactions can be effected stoichiometrically using lithium aluminium hydride. In another route, ethylamine can be synthesized via nucleophilic substitution of a haloethane (such as chloroethane or bromoethane) with ammonia, utilizing a strong base such as potassium hydroxide. This method affords significant amounts of byproducts, including diethylamine and triethylamine. Synthesis: CH3CH2Cl + NH3 + KOH → CH3CH2NH2 + KCl + H2O. Ethylamine is also produced naturally in the cosmos; it is a component of interstellar gases. Reactions: Like other simple aliphatic amines, ethylamine is a weak base: the pKa of [CH3CH2NH3]+ has been determined to be 10.8. Ethylamine undergoes the reactions anticipated for a primary alkyl amine, such as acylation and protonation. Reaction with sulfuryl chloride followed by oxidation of the sulfonamide gives diethyldiazene, EtN=NEt. Ethylamine may be oxidized using a strong oxidizer such as potassium permanganate to form acetaldehyde. Reactions: Ethylamine, like some other small primary amines, is a good solvent for lithium metal, giving the ion [Li(amine)4]+ and the solvated electron. Such solutions are used for the reduction of unsaturated organic compounds, such as naphthalenes and alkynes. Applications: Ethylamine is a precursor to many herbicides including atrazine and simazine. It is found in rubber products as well. Ethylamine is used as a precursor chemical along with benzonitrile (as opposed to o-chlorobenzonitrile and methylamine in ketamine synthesis) in the clandestine synthesis of cyclidine dissociative anesthetic agents (the analogue of ketamine which is missing the 2-chloro group on the phenyl ring, and its N-ethyl analog) which are closely related to the well-known anesthetic agent ketamine and the recreational drug phencyclidine and have been detected on the black market, being marketed for use as a recreational hallucinogen and tranquilizer. This produces a cyclidine with the same mechanism of action as ketamine (NMDA receptor antagonism) but with a much greater potency at the PCP binding site, a longer half-life, and significantly more prominent parasympathomimetic effects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Streaming algorithm** Streaming algorithm: In computer science, streaming algorithms are algorithms for processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes, typically just one. These algorithms are designed to operate with limited memory, generally logarithmic in the size of the stream and/or in the maximum value in the stream, and may also have limited processing time per item. Streaming algorithm: As a result of these constraints, streaming algorithms often produce approximate answers based on a summary or "sketch" of the data stream. History: Though streaming algorithms had already been studied by Munro and Paterson as early as 1978, as well as Philippe Flajolet and G. Nigel Martin in 1982/83, the field of streaming algorithms was first formalized and popularized in a 1996 paper by Noga Alon, Yossi Matias, and Mario Szegedy. For this paper, the authors later won the Gödel Prize in 2005 "for their foundational contribution to streaming algorithms." There has since been a large body of work centered around data streaming algorithms that spans a diverse spectrum of computer science fields such as theory, databases, networking, and natural language processing. History: Semi-streaming algorithms were introduced in 2005 as a relaxation of streaming algorithms for graphs, in which the space allowed is linear in the number of vertices n, but only logarithmic in the number of edges m. This relaxation is still meaningful for dense graphs, and can solve interesting problems (such as connectivity) that are insoluble in o(n) space. Models: Data stream model In the data stream model, some or all of the input is represented as a finite sequence of integers (from some finite domain) which is generally not available for random access, but instead arrives one at a time in a "stream". If the stream has length n and the domain has size m, algorithms are generally constrained to use space that is logarithmic in m and n. They can generally make only some small constant number of passes over the stream, sometimes just one. Models: Turnstile and cash register models Much of the streaming literature is concerned with computing statistics on frequency distributions that are too large to be stored. For this class of problems, there is a vector a = (a1, …, an) (initialized to the zero vector 0) that has updates presented to it in a stream. The goal of these algorithms is to compute functions of a using considerably less space than it would take to represent a precisely. There are two common models for updating such streams, called the "cash register" and "turnstile" models. In the cash register model, each update is of the form ⟨i, c⟩, so that ai is incremented by some positive integer c. A notable special case is when c = 1 (only unit insertions are permitted). Models: In the turnstile model, each update is of the form ⟨i, c⟩, so that ai is incremented by some (possibly negative) integer c. In the "strict turnstile" model, no ai at any time may be less than zero. Sliding window model Several papers also consider the "sliding window" model. In this model, the function of interest is computing over a fixed-size window in the stream. As the stream progresses, items from the end of the window are removed from consideration while new items from the stream take their place. Models: Besides the above frequency-based problems, some other types of problems have also been studied. 
Many graph problems are solved in the setting where the adjacency matrix or the adjacency list of the graph is streamed in some unknown order. There are also some problems that are very dependent on the order of the stream (i.e., asymmetric functions), such as counting the number of inversions in a stream and finding the longest increasing subsequence. Evaluation: The performance of an algorithm that operates on data streams is measured by three basic factors: the number of passes the algorithm must make over the stream, the available memory, and the running time of the algorithm. Evaluation: These algorithms have many similarities with online algorithms since they both require decisions to be made before all data are available, but they are not identical. Data stream algorithms only have limited memory available but they may be able to defer action until a group of points arrive, while online algorithms are required to take action as soon as each point arrives. Evaluation: If the algorithm is an approximation algorithm then the accuracy of the answer is another key factor. The accuracy is often stated as an (ε, δ) approximation, meaning that the algorithm achieves an error of less than ε with probability 1 − δ. Applications: Streaming algorithms have several applications in networking such as monitoring network links for elephant flows, counting the number of distinct flows, estimating the distribution of flow sizes, and so on. They also have applications in databases, such as estimating the size of a join. Some streaming problems: Frequency moments The kth frequency moment of a set of frequencies a is defined as Fk(a) = ∑i=1..n ai^k. The first moment F1 is simply the sum of the frequencies (i.e., the total count). The second moment F2 is useful for computing statistical properties of the data, such as the Gini coefficient of variation. F∞ is defined as the frequency of the most frequent item. Some streaming problems: The seminal paper of Alon, Matias, and Szegedy dealt with the problem of estimating the frequency moments. Some streaming problems: Calculating frequency moments A direct approach to finding the frequency moments requires maintaining a register mi for all distinct elements ai ∈ (1, 2, 3, ..., N), which requires memory of order at least Ω(N). But we have space limitations and require an algorithm that computes in much lower memory. This can be achieved by using approximations instead of exact values: an algorithm computes an (ε, δ)-approximation F'k of Fk, where ε is the approximation parameter and δ is the confidence parameter. Some streaming problems: Calculating F0 (distinct elements in a data stream) FM-Sketch algorithm Flajolet et al. introduced a probabilistic method of counting which was inspired by a paper by Robert Morris. Morris in his paper says that if the requirement of accuracy is dropped, a counter n can be replaced by a counter log n, which can be stored in log log n bits. Flajolet et al. improved this method by using a hash function h which is assumed to uniformly distribute the elements in the hash space (a binary string of length L). Some streaming problems: h : [m] → [0, 2^L − 1]. Let bit(y, k) represent the kth bit in the binary representation of y, so that y = ∑k≥0 bit(y, k)·2^k. Let ρ(y) represent the position of the least significant 1-bit in the binary representation of y, with a suitable convention for ρ(0): ρ(y) = min{k : bit(y, k) = 1} if y > 0, and ρ(y) = L if y = 0. Let A be the sequence of the data stream of length M whose cardinality needs to be determined. 
Let BITMAP[0...L − 1] be the hash space where the ρ(hashed values) are recorded. The algorithm below then determines the approximate cardinality of A. Procedure FM-Sketch:

for i in 0 to L − 1 do
    BITMAP[i] := 0
end for
for x in A do
    index := ρ(hash(x))
    if BITMAP[index] = 0 then
        BITMAP[index] := 1
    end if
end for
B := position of leftmost 0 bit of BITMAP[]
return 2^B

Suppose there are N distinct elements in the data stream. Some streaming problems: If i is much greater than log(N), then BITMAP[i] is certainly 0; if i is much less than log(N), then BITMAP[i] is certainly 1; if i ≈ log(N), then BITMAP[i] is a fringe of 0's and 1's. K-minimum value algorithm The previous algorithm describes the first attempt, by Flajolet and Martin, to approximate F0 in the data stream. Their algorithm picks a random hash function which they assume uniformly distributes the hash values in the hash space. Some streaming problems: Bar-Yossef et al. introduced the k-minimum value algorithm for determining the number of distinct elements in a data stream. They used a similar hash function h which can be normalized to [0,1], as h : [m] → [0,1]. But they fixed a limit t on the number of values kept from the hash space. The value of t is assumed to be of the order O(1/ε^2) (i.e., a smaller approximation error ε requires a larger t). The KMV algorithm keeps only the t smallest hash values seen. After all m values of the stream have arrived, υ = Max(KMV), the largest of the t retained hash values, is used to calculate F0′ = t/υ. That is, in a close-to-uniform hash space, they expect at least t elements to be smaller than O(t/F0). Procedure 2 K-Minimum Value:

Initialize first t values of KMV
for a in a1 to an do
    if h(a) < Max(KMV) then
        Remove Max(KMV) from KMV set
        Insert h(a) to KMV
    end if
end for
return t/Max(KMV)

Complexity analysis of KMV The KMV algorithm can be implemented in O((1/ε^2)·log(m)) bits of memory: each hash value requires space of order O(log(m)) memory bits, and there are O(1/ε^2) hash values. The access time can be reduced if we store the t hash values in a binary tree; the time complexity is then reduced to O(log(1/ε)·log(m)). Calculating Fk Alon et al. estimate Fk by defining random variables that can be computed within the given space and time. The expected value of these random variables gives the approximate value of Fk. Some streaming problems: Assume the length of the sequence m is known in advance. Then construct a random variable X as follows: Select ap, a random member of the sequence A with index p, where ap = l ∈ (1, 2, 3, …, n). Let r = |{q : q ≥ p, aq = l}| represent the number of occurrences of l among the members of the sequence A following ap. The random variable is X = m(r^k − (r − 1)^k). Assume S1 is of the order O(n^(1 − 1/k)/λ^2) and S2 is of the order O(log(1/ε)). The algorithm takes S2 random variables Y1, Y2, ..., YS2 and outputs the median Y, where Yi is the average of Xij for 1 ≤ j ≤ S1. Now calculate the expectation of the random variable, E(X). Some streaming problems: E(X) = ∑p=1..m (1/m)·m·(rp^k − (rp − 1)^k) = (m/m)·[(1^k + (2^k − 1^k) + … + (m1^k − (m1 − 1)^k)) + (1^k + (2^k − 1^k) + … + (m2^k − (m2 − 1)^k)) + … + (1^k + (2^k − 1^k) + … + (mn^k − (mn − 1)^k))] = ∑i=1..n mi^k = Fk. Complexity of Fk From the algorithm to calculate Fk discussed above, we can see that each random variable X stores the value of ap and r. So, to compute X we need to maintain only log(n) bits for storing ap and log(n) bits for storing r. The total number of random variables X will be S1·S2. Hence the total space complexity the algorithm takes is of the order of O((k·log(1/ε)/λ^2)·n^(1 − 1/k)·(log n + log m)). Simpler approach to calculate F2 The previous algorithm calculates F2 using memory on the order of O(√n·(log m + log n)) bits. Alon et al. 
simplified this algorithm using four-wise independent random variables with values mapped to {−1, 1}. This further reduces the complexity of calculating F2 to O(log(1/ε)·(log n + log m)). Frequent elements In the data stream model, the frequent elements problem is to output a set of elements that constitute more than some fixed fraction of the stream. A special case is the majority problem, which is to determine whether or not any value constitutes a majority of the stream. Some streaming problems: More formally, fix some positive constant c > 1, let the length of the stream be m, and let fi denote the frequency of value i in the stream. The frequent elements problem is to output the set { i | fi > m/c }. Some notable algorithms are: the Boyer–Moore majority vote algorithm, Count-Min sketch, lossy counting, multi-stage Bloom filters, the Misra–Gries heavy hitters algorithm, and the Misra–Gries summary. Event detection Detecting events in data streams is often done using a heavy hitters algorithm as listed above: the most frequent items and their frequency are determined using one of these algorithms, then the largest increase over the previous time point is reported as a trend. This approach can be refined by using exponentially weighted moving averages and variance for normalization. Some streaming problems: Counting distinct elements Counting the number of distinct elements in a stream (sometimes called the F0 moment) is another problem that has been well studied. Some streaming problems: The first algorithm for it was proposed by Flajolet and Martin. In 2010, Daniel Kane, Jelani Nelson and David Woodruff found an asymptotically optimal algorithm for this problem. It uses O(ε^−2 + log d) space, with O(1) worst-case update and reporting times, as well as universal hash functions and an r-wise independent hash family where r = Ω(log(1/ε) / log log(1/ε)). Some streaming problems: Entropy The (empirical) entropy of a set of frequencies a is defined as H(a) = ∑i=1..n (ai/m)·log(m/ai), where m = ∑i=1..n ai. Online learning Learn a model (e.g. a classifier) by a single pass over a training set; relevant techniques include feature hashing and stochastic gradient descent. Lower bounds: Lower bounds have been computed for many of the data streaming problems that have been studied. By far, the most common technique for computing these lower bounds has been using communication complexity.
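To make the FM-Sketch procedure above concrete, here is a minimal single-hash sketch in Python. It is an illustration under assumptions rather than a reference implementation: the helper names (h, rho, fm_estimate) are invented here, a 32-bit hash stands in for the L-bit hash space, and a practical estimator would average many independent sketches (and apply a bias-correction constant) to tighten the 2^B estimate.

```python
import hashlib

L = 32  # length of the binary hash values (assumed for this sketch)

def h(x) -> int:
    # Stand-in for the uniform hash h : [m] -> [0, 2^L - 1] described above.
    digest = hashlib.sha1(str(x).encode()).digest()
    return int.from_bytes(digest[:4], "big")

def rho(y: int) -> int:
    # Position of the least significant 1-bit of y, with rho(0) = L by convention.
    if y == 0:
        return L
    r = 0
    while (y >> r) & 1 == 0:
        r += 1
    return r

def fm_estimate(stream) -> int:
    # One pass: mark BITMAP[rho(h(x))] for every item, then return 2^B where B is
    # the position of the leftmost 0 bit of BITMAP.
    bitmap = [0] * L
    for x in stream:
        bitmap[rho(h(x))] = 1
    b = next((i for i, bit in enumerate(bitmap) if bit == 0), L)
    return 2 ** b

if __name__ == "__main__":
    stream = [i % 1000 for i in range(100_000)]  # 1000 distinct values, many repeats
    print(fm_estimate(stream))  # a rough, order-of-magnitude estimate of 1000
```

Because a single BITMAP gives only a coarse, power-of-two estimate, averaging the B values from several independent hash functions is the usual way to reduce variance.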
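The frequent elements problem can likewise be illustrated with the Misra–Gries summary mentioned in the list above. The short Python sketch below is a simplified rendering (the function name misra_gries is ours, not from the article): it keeps at most k − 1 counters so that, taking c = k, every value with frequency above m/k is guaranteed to survive the single pass as a candidate heavy hitter.

```python
def misra_gries(stream, k: int) -> dict:
    # One-pass summary using at most k - 1 counters: every item whose true
    # frequency exceeds m/k (m = stream length) must remain a key at the end.
    counters: dict = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # No free counter: decrement all counters and drop those hitting zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

if __name__ == "__main__":
    stream = [1, 1, 2, 3, 1, 2, 1, 4, 1, 5, 1]
    # With k = 3, any value occurring more than 11/3 times must appear among the
    # keys; a second pass over the stream can confirm exact counts for candidates.
    print(misra_gries(stream, k=3))
```

The surviving counters are only candidates (their counts underestimate the true frequencies), which is why a verification pass is typically used when exact heavy-hitter counts are required.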
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Epitope mapping** Epitope mapping: In immunology, epitope mapping is the process of experimentally identifying the binding site, or epitope, of an antibody on its target antigen (usually, on a protein). Identification and characterization of antibody binding sites aid in the discovery and development of new therapeutics, vaccines, and diagnostics. Epitope characterization can also help elucidate the binding mechanism of an antibody and can strengthen intellectual property (patent) protection. Experimental epitope mapping data can be incorporated into robust algorithms to facilitate in silico prediction of B-cell epitopes based on sequence and/or structural data. Epitopes are generally divided into two classes: linear and conformational/discontinuous. Linear epitopes are formed by a continuous sequence of amino acids in a protein. Conformational epitopes are formed by amino acids that are nearby in the folded 3D structure but distant in the protein sequence. Note that conformational epitopes can include some linear segments. B-cell epitope mapping studies suggest that most interactions between antigens and antibodies, particularly autoantibodies and protective antibodies (e.g., in vaccines), rely on binding to discontinuous epitopes. Importance for antibody characterization: By providing information on mechanism of action, epitope mapping is a critical component in therapeutic monoclonal antibody (mAb) development. Epitope mapping can reveal how a mAb exerts its functional effects - for instance, by blocking the binding of a ligand or by trapping a protein in a non-functional state. Many therapeutic mAbs target conformational epitopes that are only present when the protein is in its native (properly folded) state, which can make epitope mapping challenging. Epitope mapping has been crucial to the development of vaccines against prevalent or deadly viral pathogens, such as chikungunya, dengue, Ebola, and Zika viruses, by determining the antigenic elements (epitopes) that confer long-lasting immunization effects. Complex target antigens, such as membrane proteins (e.g., G protein-coupled receptors [GPCRs]) and multi-subunit proteins (e.g., ion channels), are key targets of drug discovery. Mapping epitopes on these targets can be challenging because of the difficulty in expressing and purifying these complex proteins. Membrane proteins frequently have short antigenic regions (epitopes) that fold correctly only when in the context of a lipid bilayer. As a result, mAb epitopes on these membrane proteins are often conformational and, therefore, are more difficult to map. Importance for intellectual property (IP) protection: Epitope mapping has become prevalent in protecting the intellectual property (IP) of therapeutic mAbs. Knowledge of the specific binding sites of antibodies strengthens patents and regulatory submissions by distinguishing between current and prior art (existing) antibodies. The ability to differentiate between antibodies is particularly important when patenting antibodies against well-validated therapeutic targets (e.g., PD1 and CD20) that can be drugged by multiple competing antibodies. In addition to verifying antibody patentability, epitope mapping data have been used to support broad antibody claims submitted to the United States Patent and Trademark Office. Epitope data have been central to several high-profile legal cases involving disputes over the specific protein regions targeted by therapeutic antibodies. In this regard, the Amgen v. 
Sanofi/Regeneron Pharmaceuticals PCSK9 inhibitor case hinged on the ability to show that both the Amgen and Sanofi/Regeneron therapeutic antibodies bound to overlapping amino acids on the surface of PCSK9. Methods: There are several methods available for mapping antibody epitopes on target antigens: X-ray co-crystallography and cryogenic electron microscopy (cryo-EM). X-ray co-crystallography has historically been regarded as the gold-standard approach for epitope mapping because it allows direct visualization of the interaction between the antigen and antibody. Cryo-EM can similarly provide high-resolution maps of antibody-antigen interactions. However, both approaches are technically challenging, time-consuming, and expensive, and not all proteins are amenable to crystallization. Moreover, these techniques are not always feasible due to the difficulty in obtaining sufficient quantities of correctly folded and processed protein. Finally, neither technique can distinguish key epitope residues (energetic "hot spots") for mAbs that bind to the same group of amino acids. Methods: Array-based oligo-peptide scanning. Also known as overlapping peptide scan or pepscan analysis, this technique uses a library of oligo-peptide sequences from overlapping and non-overlapping segments of a target protein, and tests for their ability to bind the antibody of interest. This method is fast, relatively inexpensive, and specifically suited to profile epitopes for large numbers of candidate antibodies against a defined target. The epitope mapping resolution depends on the number of overlapping peptides that are used. The main disadvantage of this approach is that discontinuous epitopes are deconstructed into smaller peptides, which can cause lower binding affinities. However, advances have been made with technologies such as constrained peptides, which can be used to mimic conformational as well as discontinuous epitopes. For example, an antibody against CD20 was mapped in a study using array-based oligo-peptide scanning, by combining non-adjacent peptide sequences from different parts of the target protein and enforcing conformational rigidity onto this combined peptide (e.g., by using CLIPS scaffolds). Replacement analysis on peptides also allows single amino acid resolution, and can therefore pinpoint key epitope residues. Methods: Site-directed mutagenesis mapping. The molecular biological technique of site-directed mutagenesis (SDM) can be used to enable epitope mapping. In SDM, systematic mutations of amino acids are introduced into the sequence of the target protein. Binding of an antibody to each mutated protein is tested to identify the amino acids that comprise the epitope. This technique can be used to map both linear and conformational epitopes but is labor-intensive and time-consuming, typically limiting analysis to a small number of amino-acid residues. Methods: High-throughput shotgun mutagenesis epitope mapping. Shotgun mutagenesis is a high-throughput approach for mapping the epitopes of mAbs. The shotgun mutagenesis technique begins with the creation of a mutation library of the entire target antigen, with each clone containing a unique amino acid mutation (typically an alanine substitution). Hundreds of plasmid clones from the library are individually arrayed in 384-well microplates, expressed in human cells, and tested for antibody binding. Amino acids of the target required for antibody binding are identified by a loss of immunoreactivity. 
These residues are mapped onto structures of the target protein to visualize the epitope. Benefits of high-throughput shotgun mutagenesis epitope mapping include: 1) the ability to identify both linear and conformational epitopes, 2) a shorter assay time than other methods, 3) the presentation of properly folded and post-translationally modified proteins, and 4) the ability to identify key amino acids that drive the energetic interactions (energetic "hot spots" of the epitope). Methods: Hydrogen–deuterium exchange (HDX). This method gives information about the solvent accessibility of various parts of the antigen and antibody, demonstrating reduced solvent accessibility in regions of protein-protein interactions. One of its advantages is that it determines the interaction site of the antigen-antibody complex in its native solution, and does not introduce any modifications (e.g. mutation) to either the antigen or the antibody. HDX epitope mapping has also been demonstrated to be an effective method for rapidly supplying complete information on epitope structure. It does not usually provide data at the amino-acid level, although this limitation is being addressed by new technological advances. It has recently been recommended as a fast and cost-effective epitope mapping approach, using the complex protein system influenza hemagglutinin as an example. Methods: Cross-linking-coupled mass spectrometry. Antibody and antigen are bound to a labeled cross-linker, and complex formation is confirmed by high-mass MALDI detection. The binding location of the antibody to the antigen can then be identified by mass spectrometry (MS). The cross-linked complex is highly stable and can be exposed to various enzymatic and digestion conditions, allowing many different peptide options for detection. MS or MS/MS techniques are used to detect the amino-acid locations of the labeled cross-linkers and the bound peptides (both epitope and paratope are determined in one experiment). The key advantage of this technique is the high sensitivity of MS detection, which means that very little material (hundreds of micrograms or less) is needed. Other methods, such as yeast display, phage display, and limited proteolysis, provide high-throughput monitoring of antibody binding but lack resolution, especially for conformational epitopes.
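The array-based oligo-peptide scanning approach described above starts from a library of overlapping peptides tiled across the antigen sequence. The short Python sketch below shows one way such a library could be generated; the window length, step size, and example sequence are arbitrary illustrative choices, not values from the text.

```python
# Minimal sketch: build an overlapping peptide library for a pepscan-style array.
# Window length and step size are illustrative choices only.
def overlapping_peptides(sequence: str, length: int = 15, step: int = 3):
    """Return (start_position, peptide) pairs tiling the sequence."""
    peptides = []
    for start in range(0, max(len(sequence) - length + 1, 1), step):
        peptides.append((start + 1, sequence[start:start + length]))
    return peptides

if __name__ == "__main__":
    antigen = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"  # made-up sequence
    for pos, pep in overlapping_peptides(antigen):
        print(f"peptide starting at residue {pos}: {pep}")
```

In a real pepscan experiment each such peptide would be synthesized on an array and probed with the antibody of interest, with the resolution set by how much neighboring peptides overlap.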
**Cerebellar granule cell** Cerebellar granule cell: Cerebellar granule cells form the thick granular layer of the cerebellar cortex and are among the smallest neurons in the brain. (The term granule cell is used for several unrelated types of small neurons in various parts of the brain.) Cerebellar granule cells are also the most numerous neurons in the brain: in humans, estimates of their total number average around 50 billion, which means that they constitute about 3/4 of the brain's neurons. Structure: The cell bodies are packed into a thick granular layer at the bottom of the cerebellar cortex. A granule cell emits only four to five dendrites, each of which ends in an enlargement called a dendritic claw. These enlargements are sites of excitatory input from mossy fibers and inhibitory input from Golgi cells. Structure: The thin, unmyelinated axons of granule cells rise vertically to the upper (molecular) layer of the cortex, where they split in two, with each branch traveling horizontally to form a parallel fiber; the splitting of the vertical branch into two horizontal branches gives rise to a distinctive "T" shape. A parallel fiber runs for an average of 3 mm in each direction from the split, for a total length of about 6 mm (about 1/10 of the total width of the cortical layer). As they run along, the parallel fibers pass through the dendritic trees of Purkinje cells, contacting one of every 3–5 that they pass, making a total of 80–100 synaptic connections with Purkinje cell dendritic spines. Granule cells use glutamate as their neurotransmitter, and therefore exert excitatory effects on their targets. Structure: Development In normal development, endogenous Sonic hedgehog signaling stimulates rapid proliferation of cerebellar granule neuron progenitors (CGNPs) in the external granule layer (EGL). Cerebellum development occurs during late embryogenesis and the early postnatal period, with CGNP proliferation in the EGL peaking during early development (P7, postnatal day 7, in the mouse). As CGNPs terminally differentiate into cerebellum granule cells (also called cerebellar granule neurons, CGNs), they migrate to the internal granule layer (IGL), forming the mature cerebellum (by P20, post-natal day 20 in the mouse). Mutations that abnormally activate Sonic hedgehog signaling predispose to cancer of the cerebellum (medulloblastoma) in humans with Gorlin syndrome and in genetically engineered mouse models. Function: Granule cells receive all of their input from mossy fibers, but outnumber them 200 to 1 (in humans). Thus, the information in the granule cell population activity state is the same as the information in the mossy fibers, but recoded in a much more expansive way. Because granule cells are so small and so densely packed, it has been very difficult to record their spike activity in behaving animals, so there is little data to use as a basis of theorizing. The most popular concept of their function was proposed by David Marr, who suggested that they could encode combinations of mossy fiber inputs. The idea is that with each granule cell receiving input from only 4–5 mossy fibers, a granule cell would not respond if only a single one of its inputs was active, but would respond if more than one were active. This "combinatorial coding" scheme would potentially allow the cerebellum to make much finer distinctions between input patterns than the mossy fibers alone would permit.
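To make the combinatorial-coding idea concrete, the small calculation below counts how many distinct four-fiber input combinations are possible for different mossy-fiber pool sizes; the pool sizes are hypothetical round numbers chosen only to illustrate the expansion, not figures from the literature.

```python
from math import comb

# Illustrative only: if each granule cell samples 4 mossy fibers out of a pool of N,
# the number of possible distinct input combinations grows combinatorially,
# which is the intuition behind the expansion recoding proposed by Marr.
for n_mossy in (50, 200, 1000):            # hypothetical pool sizes
    print(n_mossy, comb(n_mossy, 4))       # e.g. 1000 fibers -> about 4.1e10 combinations
```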
**Leap second** Leap second: A leap second is a one-second adjustment that is occasionally applied to Coordinated Universal Time (UTC), to accommodate the difference between precise time (International Atomic Time (TAI), as measured by atomic clocks) and imprecise observed solar time (UT1), which varies due to irregularities and long-term slowdown in the Earth's rotation. The UTC time standard, widely used for international timekeeping and as the reference for civil time in most countries, uses TAI and consequently would run ahead of observed solar time unless it is reset to UT1 as needed. The leap second facility exists to provide this adjustment. The leap second was introduced in 1972 and since then 27 leap seconds have been added to UTC. Leap second: Because the Earth's rotational speed varies in response to climatic and geological events, UTC leap seconds are irregularly spaced and unpredictable. Insertion of each UTC leap second is usually decided about six months in advance by the International Earth Rotation and Reference Systems Service (IERS), to ensure that the difference between the UTC and UT1 readings will never exceed 0.9 seconds. This practice has proven disruptive, particularly in the twenty-first century and especially in services that depend on precise timestamping or time-critical process control. Because not all computers apply leap seconds, their displayed times can differ from those of systems that have been adjusted. After many years of discussions by different standards bodies, in November 2022, at the 27th General Conference on Weights and Measures, it was decided to abandon the leap second by or before 2035. History: In about 140 CE, Ptolemy, the Alexandrian astronomer, sexagesimally subdivided both the mean solar day and the true solar day to at least six places after the sexagesimal point, and he used simple fractions of both the equinoctial hour and the seasonal hour, none of which resemble the modern second. Muslim scholars, including al-Biruni in 1000, subdivided the mean solar day into 24 equinoctial hours, each of which was subdivided sexagesimally, that is into the units of minute, second, third, fourth and fifth, creating the modern second as 1⁄60 of 1⁄60 of 1⁄24 = 1⁄86,400 of the mean solar day in the process. With this definition, the second was proposed in 1874 as the base unit of time in the CGS system of units. Soon afterwards Simon Newcomb and others discovered that Earth's rotation period varied irregularly, so in 1952, the International Astronomical Union (IAU) defined the second as a fraction of the sidereal year. In 1955, considering the tropical year to be more fundamental than the sidereal year, the IAU redefined the second as the fraction 1⁄31,556,925.975 of the 1900.0 mean tropical year. In 1956, a slightly more precise value of 1⁄31,556,925.9747 was adopted for the definition of the second by the International Committee for Weights and Measures, and in 1960 by the General Conference on Weights and Measures, becoming a part of the International System of Units (SI). Eventually, this definition too was found to be inadequate for precise time measurements, so in 1967, the SI second was again redefined as 9,192,631,770 periods of the radiation emitted by a caesium-133 atom in the transition between the two hyperfine levels of its ground state. That value agreed to 1 part in 10¹⁰ with the astronomical (ephemeris) second then in use. It was also close to 1⁄86,400 of the mean solar day as averaged between years 1750 and 1892. 
History: However, for the past several centuries, the length of the mean solar day has been increasing by about 1.4–1.7 ms per century, depending on the averaging time. By 1961, the mean solar day was already a millisecond or two longer than 86400 SI seconds. Therefore, time standards that change the date after precisely 86400 SI seconds, such as the International Atomic Time (TAI), would become increasingly ahead of time standards tied to the mean solar day, such as Universal Time (UT). History: When the Coordinated Universal Time (UTC) standard was instituted in 1960, based on atomic clocks, it was felt necessary to maintain agreement with UT, which, until then, had been the reference for broadcast time services. From 1960 to 1971, the rate of UTC atomic clocks was offset from a pure atomic time scale by the BIH to remain synchronized with UT2, a practice known as the "rubber second". The rate of UTC was decided at the start of each year, and was offset from the rate of atomic time by −150 parts per 10¹⁰ for 1960–1962, by −130 parts per 10¹⁰ for 1962–63, by −150 parts per 10¹⁰ again for 1964–65, and by −300 parts per 10¹⁰ for 1966–1971. Alongside the shift in rate, an occasional 0.1 s step (0.05 s before 1963) was needed. This predominantly frequency-shifted rate of UTC was broadcast by MSF, WWV, and CHU among other time stations. In 1966, the CCIR approved "stepped atomic time" (SAT), which adjusted atomic time with more frequent 0.2 s adjustments to keep it within 0.1 s of UT2, because it had no rate adjustments. SAT was broadcast by WWVB among other time stations. In 1972, the leap-second system was introduced so that the UTC seconds could be set exactly equal to the standard SI second, while still maintaining the UTC time of day and changes of UTC date synchronized with those of UT1. By then, the UTC clock was already 10 seconds behind TAI, which had been synchronized with UT1 in 1958, but had been counting true SI seconds since then. After 1972, both clocks have been ticking in SI seconds, so the difference between their displays at any time is 10 seconds plus the total number of leap seconds that have been applied to UTC as of that time; as of May 2023, 27 leap seconds have been applied to UTC, so the difference is 10 + 27 = 37 seconds. Insertion of leap seconds: The scheduling of leap seconds was initially delegated to the Bureau International de l'Heure (BIH), but passed to the International Earth Rotation and Reference Systems Service (IERS) on 1 January 1988. IERS usually decides to apply a leap second whenever the difference between UTC and UT1 approaches 0.6 s, in order to keep the difference between UTC and UT1 from exceeding 0.9 s. Insertion of leap seconds: The UTC standard allows leap seconds to be applied at the end of any UTC month, with first preference to June and December and second preference to March and September. As of May 2023, all of them have been inserted at the end of either 30 June or 31 December. IERS publishes announcements every six months, whether leap seconds are to occur or not, in its "Bulletin C". Such announcements are typically published well in advance of each possible leap second date – usually in early January for 30 June and in early July for 31 December. Some time signal broadcasts give voice announcements of an impending leap second. Insertion of leap seconds: Between 1972 and 2020, a leap second has been inserted about every 21 months, on average. 
However, the spacing is quite irregular and apparently increasing: there were no leap seconds in the six-year interval between 1 January 1999 and 31 December 2004, but there were nine leap seconds in the eight years 1972–1979. Since the introduction of leap seconds, 1972 has been the longest year on record: 366 days and two seconds. Insertion of leap seconds: Unlike leap days, which begin after 28 February, 23:59:59 local time, UTC leap seconds occur simultaneously worldwide; for example, the leap second on 31 December 2005, 23:59:60 UTC was 31 December 2005, 18:59:60 (6:59:60 p.m.) in U.S. Eastern Standard Time and 1 January 2006, 08:59:60 (a.m.) in Japan Standard Time. Insertion of leap seconds: Process When it is mandated, a positive leap second is inserted between second 23:59:59 of a chosen UTC calendar date and second 00:00:00 of the following date. The definition of UTC states that the last day of December and June are preferred, with the last day of March or September as second preference, and the last day of any other month as third preference. All leap seconds (as of 2019) have been scheduled for either 30 June or 31 December. The extra second is displayed on UTC clocks as 23:59:60. On clocks that display local time tied to UTC, the leap second may be inserted at the end of some other hour (or half-hour or quarter-hour), depending on the local time zone. A negative leap second would suppress second 23:59:59 of the last day of a chosen month so that second 23:59:58 of that date would be followed immediately by second 00:00:00 of the following date. Since the introduction of leap seconds, the mean solar day has outpaced atomic time only for very brief periods and has not triggered a negative leap second. Slowing rotation of the Earth: Leap seconds are irregularly spaced because the Earth's rotation speed changes irregularly. Indeed, the Earth's rotation is quite unpredictable in the long term, which explains why leap seconds are announced only six months in advance. Slowing rotation of the Earth: A mathematical model of the variations in the length of the solar day was developed by F. R. Stephenson and L. V. Morrison, based on records of eclipses for the period 700 BCE to 1623 CE, telescopic observations of occultations for the period 1623 until 1967 and atomic clocks thereafter. The model shows a steady increase of the mean solar day by 1.70 ms (±0.05 ms) per century, plus a periodic shift of about 4 ms amplitude and period of about 1,500 yr. Over the last few centuries, the rate of lengthening of the mean solar day has been about 1.4 ms per century, being the sum of the periodic component and the overall rate. The main reason for the slowing down of the Earth's rotation is tidal friction, which alone would lengthen the day by 2.3 ms/century. Other contributing factors are the movement of the Earth's crust relative to its core, changes in mantle convection, and any other events or processes that cause a significant redistribution of mass. These processes change the Earth's moment of inertia, affecting the rate of rotation due to the conservation of angular momentum. Some of these redistributions increase Earth's rotational speed, shorten the solar day and oppose tidal friction. 
For example, glacial rebound shortens the solar day by 0.6 ms/century and the 2004 Indian Ocean earthquake is thought to have shortened it by 2.68 microseconds. It is a mistake, however, to consider leap seconds as indicators of a slowing of Earth's rotation rate; they are indicators of the accumulated difference between atomic time and time measured by Earth rotation. Length-of-day records show that in 1972 the average length of day was approximately 86400.003 seconds and in 2016 it was approximately 86400.001 seconds, indicating an overall increase in Earth's rotation rate over that time period. Positive leap seconds were inserted during that time because the annual average length of day remained greater than 86400 SI seconds, not because of any slowing of Earth's rotation rate. In 2021, it was reported that Earth was spinning faster in 2020 and experienced the 28 shortest days since 1960, each of which lasted less than 86399.999 seconds. This caused engineers worldwide to discuss a negative leap second and other possible timekeeping measures, some of which could eliminate leap seconds. Future of leap seconds: The TAI and UT1 time scales are precisely defined, the former by atomic clocks (and thus independent of Earth's rotation) and the latter by astronomical observations (that measure actual planetary rotation and thus the solar time at the Greenwich meridian). UTC (on which civil time is usually based) is a compromise, stepping with atomic seconds but periodically reset by a leap second to match UT1. Future of leap seconds: The irregularity and unpredictability of UTC leap seconds is problematic for several areas, especially computing (see below). With increasing requirements for accuracy in automation systems and high-frequency trading, this raises a number of issues. Consequently, the long-standing practice of inserting leap seconds is under review by the relevant international standards body. Future of leap seconds: International proposals for elimination of leap seconds On 5 July 2005, the Head of the Earth Orientation Center of the IERS sent a notice to IERS Bulletins C and D subscribers, soliciting comments on a U.S. proposal before the ITU-R Study Group 7's WP7-A to eliminate leap seconds from the UTC broadcast standard before 2008 (the ITU-R is responsible for the definition of UTC). It was expected to be considered in November 2005, but the discussion has since been postponed. Under the proposal, leap seconds would be technically replaced by leap hours as an attempt to satisfy the legal requirements of several ITU-R member nations that civil time be astronomically tied to the Sun. Future of leap seconds: A number of objections to the proposal have been raised. P. Kenneth Seidelmann, editor of the Explanatory Supplement to the Astronomical Almanac, wrote a letter lamenting the lack of consistent public information about the proposal and adequate justification. Steve Allen of the University of California, Santa Cruz cited what he claimed to be the large impact on astronomers in a Science News article. He has an extensive online site devoted to the issues and the history of leap seconds, including a set of references about the proposal and arguments against it. At the 2014 General Assembly of the International Union of Radio Scientists (URSI), Demetrios Matsakis, the United States Naval Observatory's Chief Scientist for Time Services, presented the reasoning in favor of the redefinition and rebuttals to the arguments made against it. 
He stressed the practical inability of software programmers to allow for the fact that leap seconds make time appear to go backwards, particularly when most of them do not even know that leap seconds exist. The possibility of leap seconds being a hazard to navigation was presented, as well as the observed effects on commerce. Future of leap seconds: The United States formulated its position on this matter based upon the advice of the National Telecommunications and Information Administration and the Federal Communications Commission (FCC), which solicited comments from the general public. This position is in favor of the redefinition.In 2011, Chunhao Han of the Beijing Global Information Center of Application and Exploration said China had not decided what its vote would be in January 2012, but some Chinese scholars consider it important to maintain a link between civil and astronomical time due to Chinese tradition. The 2012 vote was ultimately deferred. At an ITU/BIPM-sponsored workshop on the leap second, Han expressed his personal view in favor of abolishing the leap second, and similar support for the redefinition was again expressed by Han, along with other Chinese timekeeping scientists, at the URSI General Assembly in 2014. Future of leap seconds: At a special session of the Asia-Pacific Telecommunity Meeting on 10 February 2015, Chunhao Han indicated China was now supporting the elimination of future leap seconds, as were all the other presenting national representatives (from Australia, Japan, and the Republic of Korea). At this meeting, Bruce Warrington (NMI, Australia) and Tsukasa Iwama (NICT, Japan) indicated particular concern for the financial markets due to the leap second occurring in the middle of a workday in their part of the world. Subsequent to the CPM15-2 meeting in March/April 2015 the draft gives four methods which the WRC-15 might use to satisfy Resolution 653 from WRC-12.Arguments against the proposal include the unknown expense of such a major change and the fact that universal time will no longer correspond to mean solar time. It is also answered that two timescales that do not follow leap seconds are already available, International Atomic Time (TAI) and Global Positioning System (GPS) time. Computers, for example, could use these and convert to UTC or local civil time as necessary for output. Inexpensive GPS timing receivers are readily available, and the satellite broadcasts include the necessary information to convert GPS time to UTC. It is also easy to convert GPS time to TAI, as TAI is always exactly 19 seconds ahead of GPS time. Examples of systems based on GPS time include the CDMA digital cellular systems IS-95 and CDMA2000. In general, computer systems use UTC and synchronize their clocks using Network Time Protocol (NTP). Systems that cannot tolerate disruptions caused by leap seconds can base their time on TAI and use Precision Time Protocol. However, the BIPM has pointed out that this proliferation of timescales leads to confusion.At the 47th meeting of the Civil Global Positioning System Service Interface Committee in Fort Worth, Texas, in September 2007, it was announced that a mailed vote would go out on stopping leap seconds. 
The plan for the vote was: in April 2008, ITU Working Party 7A would submit to ITU Study Group 7 a project recommendation on stopping leap seconds; during 2008, Study Group 7 would conduct a vote by mail among member states; and in January 2012, the ITU would make a decision. In October 2011, the ITU-R released its status paper, Status of Coordinated Universal Time (UTC) study in ITU-R, in preparation for the January 2012 meeting in Geneva; the paper reported that, to date, in response to the UN agency's 2010 and 2011 web-based surveys requesting input on the topic, it had received 16 responses from the 192 Member States, with "13 being in favor of change, 3 being contrary." In January 2012, rather than decide yes or no per this plan, the ITU decided to postpone a decision on leap seconds to the World Radiocommunication Conference in November 2015. At this conference, it was again decided to continue using leap seconds, pending further study and consideration at the next conference in 2023. In October 2014, Włodzimierz Lewandowski, chair of the timing subcommittee of the Civil GPS Interface Service Committee and a member of the ESA Navigation Program Board, presented a CGSIC-endorsed resolution to the ITU that supported the redefinition and described leap seconds as a "hazard to navigation". Some of the objections to the proposed change have been addressed by its supporters. For example, Felicitas Arias, who, as Director of the International Bureau of Weights and Measures (BIPM)'s Time, Frequency, and Gravimetry Department, was responsible for generating UTC, noted in a press release that the drift of about one minute every 60–90 years could be compared to the 16-minute annual variation between true solar time and mean solar time, the one hour offset by use of daylight time, and the several-hours offset in certain geographically extra-large time zones. A proposed alternative to the leap second is the leap hour or leap minute, which requires changes only once every few centuries. On 18 November 2022, the General Conference on Weights and Measures (CGPM) resolved to eliminate leap seconds by or before 2035. The difference between atomic and astronomical time will be allowed to grow to a larger value yet to be determined. A suggested possible future measure would be to let the discrepancy increase to a full minute, which would take 50 to 100 years, and then have the last minute of the day taking two minutes in a "kind of smear" with no discontinuity. The year 2035 for eliminating leap seconds was chosen considering Russia's request to extend the timeline to 2040, since, unlike the United States's global navigation satellite system, GPS, which does not adjust its time with leap seconds, Russia's system, GLONASS, does adjust its time with leap seconds. Issues created by insertion (or removal) of leap seconds: Calculation of time differences and sequence of events Computing the elapsed time in seconds between two given UTC dates requires consulting a table of leap seconds, which needs to be updated whenever a new leap second is announced. Since leap seconds are known only 6 months in advance, time intervals for UTC dates further in the future cannot be computed. Issues created by insertion (or removal) of leap seconds: Missing leap seconds announcement Although BIPM announces a leap second 6 months in advance, most time distribution systems (SNTP, IRIG-B, PTP) announce leap seconds at most 12 hours in advance, sometimes only in the last minute and some even not at all (DNP 03). 
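As a rough sketch of the bookkeeping described in this article (the 10-second 1972 offset plus one second per applied leap second, the fixed 19-second TAI−GPS difference, and the need for an up-to-date leap-second table when computing elapsed time), consider the following Python fragment. The leap-second list is deliberately truncated and purely illustrative; a real implementation would use the full, regularly updated IERS table.

```python
from datetime import datetime, timezone

# Truncated, illustrative list of UTC instants at which a leap second took effect
# (the real IERS list has 27 entries as of May 2023).
LEAP_SECONDS_UTC = [
    datetime(1972, 7, 1, tzinfo=timezone.utc),
    datetime(1973, 1, 1, tzinfo=timezone.utc),
    # ... remaining announced entries omitted ...
    datetime(2015, 7, 1, tzinfo=timezone.utc),
    datetime(2017, 1, 1, tzinfo=timezone.utc),
]

def tai_minus_utc(t: datetime) -> int:
    """TAI - UTC in seconds: the 10 s offset of 1972 plus one second for every
    leap second that has taken effect at or before time t."""
    return 10 + sum(1 for ls in LEAP_SECONDS_UTC if ls <= t)

def gps_to_utc_offset(t: datetime) -> int:
    """Seconds by which GPS time leads UTC: (TAI - UTC) - 19, since TAI = GPS + 19 s."""
    return tai_minus_utc(t) - 19

def elapsed_seconds(t1: datetime, t2: datetime) -> float:
    """Elapsed SI seconds between two UTC instants: the naive difference plus any
    leap seconds inserted in between.  Only valid while both instants are covered
    by the published leap-second table."""
    naive = (t2 - t1).total_seconds()
    inserted = sum(1 for ls in LEAP_SECONDS_UTC if t1 < ls <= t2)
    return naive + inserted

now = datetime(2023, 5, 1, tzinfo=timezone.utc)
print(tai_minus_utc(now))      # 37 with the complete table (this truncated stub gives 14)
print(gps_to_utc_offset(now))  # 18 with the complete table (37 - 19)
```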
Issues created by insertion (or removal) of leap seconds: Implementation differences Not all clocks implement leap seconds in the same manner. Leap seconds in Unix time are commonly implemented by repeating 23:59:59 or adding the time-stamp 23:59:60. The Network Time Protocol (SNTP) freezes time during the leap second, and some time servers declare an "alarm condition". Other schemes smear time in the vicinity of a leap second, spreading out the second of change over a longer period. This aims to avoid any negative effects of a substantial (by modern standards) step in time. This approach has led to differences between systems, as leap smear is not standardized and several different schemes are used in practice. Issues created by insertion (or removal) of leap seconds: Textual representation of the leap second The textual representation of a leap second is defined by BIPM as "23:59:60". There are programs that are not familiar with this format and may report an error when dealing with such input. Issues created by insertion (or removal) of leap seconds: Binary representation of the leap second Most computer operating systems and most time distribution systems represent time with a binary counter indicating the number of seconds elapsed since an arbitrary epoch; for instance, since 1970-01-01 00:00:00 in POSIX machines or since 1900-01-01 00:00:00 in NTP. This counter does not count positive leap seconds, and has no indicator that a leap second has been inserted, therefore two seconds in sequence will have the same counter value. Some computer operating systems, in particular Linux, assign to the leap second the counter value of the preceding, 23:59:59 second (59–59–0 sequence), while other computers (and the IRIG-B time distribution) assign to the leap second the counter value of the next, 00:00:00 second (59–0–0 sequence). Since there is no standard governing this sequence, the timestamp of values sampled at exactly the same time can vary by one second. This may explain flaws in time-critical systems that rely on timestamped values. Issues created by insertion (or removal) of leap seconds: Other reported software problems associated with the leap second Several models of global navigation satellite receivers have software flaws associated with leap seconds: Some older versions of Motorola Oncore VP, UT, GT, and M12 GPS receivers had a software bug that would cause a single timestamp to be off by a day if no leap second was scheduled for 256 weeks. On 28 November 2003, this happened. At midnight, the receivers with this firmware reported 29 November 2003, for one second and then reverted to 28 November 2003. Issues created by insertion (or removal) of leap seconds: Older Trimble GPS receivers had a software flaw that would insert a leap second immediately after the GPS constellation started broadcasting the next leap second insertion time (some months in advance of the actual leap second), rather than waiting for the next leap second to happen. This left the receiver's time off by a second in the interim. Issues created by insertion (or removal) of leap seconds: Older Datum Tymeserve 2100 GPS receivers and Symmetricom Tymeserve 2100 receivers apply a leap second as soon as the leap second notification is received, instead of waiting for the correct date. The manufacturers no longer support these models and no corrected software is available. A workaround has been described and tested, but if the GPS system rebroadcasts the announcement, or the unit is powered off, the problem will occur again. 
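The two labeling conventions described above can be illustrated with a toy example; the counter values below are simplified seconds-of-day numbers, not real epoch timestamps.

```python
# Around a positive leap second the UTC second labels are:
labels = ["23:59:59", "23:59:60 (leap)", "00:00:00"]

# A binary counter with no leap-second indicator must give two of these three
# seconds the same value.  Using N for the counter value of 23:59:59:
N = 86399  # illustrative seconds-of-day counter
linux_style = [N, N, N + 1]       # leap second repeats the preceding value (the "59-59-0" sequence)
irig_b_style = [N, N + 1, N + 1]  # leap second takes the following value (the "59-0-0" sequence)

for label, a, b in zip(labels, linux_style, irig_b_style):
    print(f"{label:16s}  Linux-style: {a}   IRIG-B-style: {b}")
```

Two samples taken at exactly the same instant by systems using different conventions can therefore carry timestamps one second apart, which is the ambiguity the text describes.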
Issues created by insertion (or removal) of leap seconds: Four different brands of navigational receivers that use data from BeiDou satellites were found to implement leap seconds one day early. This was traced to a bug related to how the BeiDou protocol numbers the days of the week. Several software vendors have distributed software that has not properly functioned with the concept of leap seconds: NTP specifies a flag to inform the receiver that a leap second is imminent. However, some NTP server implementations have failed to set their leap second flag correctly. Some NTP servers have responded with the wrong time for up to a day after a leap second insertion. Issues created by insertion (or removal) of leap seconds: A number of organizations reported problems caused by flawed software following the leap second that occurred on 30 June 2012. Among the sites which reported problems were Reddit (Apache Cassandra), Mozilla (Hadoop), Qantas, and various sites running Linux. Issues created by insertion (or removal) of leap seconds: Despite the publicity given to the 2015 leap second, a small number of network failures occurred due to leap second-related software errors in some routers. Several older versions of the Cisco Systems NEXUS 5000 Series Operating System NX-OS (versions 5.0, 5.1, 5.2) are affected by leap second bugs. Some businesses and service providers have been impacted by leap-second related software bugs: In 2015, interruptions occurred with Twitter, Instagram, Pinterest, Netflix, Amazon, and Apple's music streaming service Beats 1. Issues created by insertion (or removal) of leap seconds: Leap second software bugs in Linux reportedly affected the Altea airlines reservation system, used by Qantas and Virgin Australia, in 2015. Cloudflare was affected by a leap second software bug. Its DNS resolver implementation incorrectly calculated a negative number when subtracting two timestamps obtained from the Go programming language's time.Now() function, which then used only a real-time clock source. This could have been avoided by using a monotonic clock source, which has since been added to Go 1.9. Issues created by insertion (or removal) of leap seconds: The Intercontinental Exchange, parent body to 7 clearing houses and 11 stock exchanges including the New York Stock Exchange, chose to cease operations for 61 minutes at the time of the 30 June 2015 leap second. There were misplaced concerns that farming equipment using GPS navigation during harvests occurring on 31 December 2016 would be affected by the 2016 leap second. GPS navigation makes use of GPS time, which is not impacted by the leap second. Due to a software error, the UTC time broadcast by the NavStar GPS system was incorrect by about 13 microseconds on 25–26 January 2016. Workarounds for leap second problems: The most obvious workaround is to use the TAI scale for all operational purposes and convert to UTC for human-readable text. UTC can always be derived from TAI with a suitable table of leap seconds. The Society of Motion Picture and Television Engineers (SMPTE) video/audio industry standards body selected TAI for deriving timestamps of media. IEC/IEEE 60802 (Time sensitive networks) specifies TAI for all operations. Grid automation is planning to switch to TAI for global distribution of events in electrical grids. 
Bluetooth mesh networking also uses TAI.Instead of inserting a leap second at the end of the day, Google servers implement a "leap smear", extending seconds slightly over a 24-hour period centered on the leap second. Amazon followed a similar, but slightly different, pattern for the introduction of the 30 June 2015, leap second, leading to another case of the proliferation of timescales. They later released an NTP service for EC2 instances which performs leap smearing. UTC-SLS was proposed as a version of UTC with linear leap smearing, but it never became standard.It has been proposed that media clients using the Real-time Transport Protocol inhibit generation or use of NTP timestamps during the leap second and the second preceding it.NIST has established a special NTP time server to deliver UT1 instead of UTC. Such a server would be particularly useful in the event the ITU resolution passes and leap seconds are no longer inserted. Those astronomical observatories and other users that require UT1 could run off UT1 – although in many cases these users already download UT1-UTC from the IERS, and apply corrections in software.
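A minimal sketch of a linear leap smear of the kind described above, in which the extra second is absorbed gradually over a 24-hour window centered on the leap second; the linear ramp and window length follow the general description in the text and should not be read as any particular provider's exact algorithm.

```python
# Linear "leap smear": spread one extra second over a 24-hour window centered on
# the leap second, so clocks never display 23:59:60 and never step backwards.
WINDOW = 24 * 3600  # smear window length in seconds

def smear_offset(t_since_window_start: float) -> float:
    """Fraction of the inserted second (0..1 s) that has been absorbed at a given
    point inside the smear window, as a linear ramp."""
    frac = min(max(t_since_window_start / WINDOW, 0.0), 1.0)
    return frac * 1.0

for t in (0, WINDOW // 2, WINDOW):
    print(t, smear_offset(t))   # 0.0, 0.5 and 1.0 s of the leap second absorbed
```

Because the smear is not standardized, a different provider using a different window or ramp would report slightly different times during the smear period, which is the proliferation-of-timescales concern mentioned above.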
**Delafossite** Delafossite: Delafossite is a copper iron oxide mineral with formula CuFeO2 or Cu1+Fe3+O2. It is a member of the delafossite mineral group, which has the general formula ABO2, a group characterized by sheets of linearly coordinated A cations stacked between edge-shared octahedral layers (BO6). Delafossite, along with other minerals of the ABO2 group, is known for its wide range of electrical properties, its conductivity varying from insulating to metallic. Delafossite is usually a secondary mineral that crystallizes in association with oxidized copper and rarely occurs as a primary mineral. Composition: The chemical formula for delafossite is CuFeO2, which was first determined through chemical analysis of the pure mineral by G. S. Bohart. The ratio he determined was very close to Cu:Fe:O=1:1:2, with slightly more iron than copper. Rogers attributed this fact to a small amount of hematite in the sample. In order to determine the composition of delafossite, Rogers used the Ziervogel process. The Ziervogel process is used to test for the presence of cuprous oxides by looking for the "spangle reaction" which produces thin flakes of metallic silver when cuprous oxide is mixed with silver sulfate. When Rogers heated powdered delafossite with silver sulfate solution, the spangle reaction occurred. The only oxide possibilities to consider for delafossite are cuprous copper and ferrous iron. Rogers concluded that the iron was combining with the oxygen as a radical and that it only acted as a radical. This indicated that the copper in delafossite was in the cuprous rather than the cupric form. Hence he concluded that the composition of delafossite was probably cuprous metaferrite, CuFe3+O2. This composition was later confirmed by Pabst by the determination of interionic distances in the crystal lattice. Structure: The atomic structure of delafossite and the delafossite group consists of a sheet of linearly coordinated A cations stacked between edge-shared octahedral layers (BO6). In the delafossite atomic structure there are two alternating planar layers. The two layers consist of one layer of triangular-patterned A cations and a layer of edge-sharing BO6 octahedra compacted with respect to the c axis. The delafossite structure can have two polytypes according to the orientation of the planar layer stacking. Hexagonal 2H types that have a space group of P63/mmc are formed when two A layers are stacked with each layer rotated 180° in relation to one another. Alternatively, when the layers are stacked with each layer in the same direction in relation to one another, they make a rhombohedral 3R type with a space group of R3m. Physical properties: The color of delafossite is black, with a hardness of 5.5, and imperfect cleavage in the {1011} direction. Pabst calculated the density of delafossite to be 5.52. Contact twinning has been observed in the {0001} direction. The unit cell parameters were calculated to be a = 3.0351 Å, c = 17.166 Å, V = 136.94 ų. Delafossite is tabular to equidimensional in habit and has a black streak and a metallic luster. Delafossite has hexagonal symmetry that can have the space groups R3m or P63/mmc depending on the stacking of A cation layers. Delafossite compounds can have magnetic properties when magnetic ions are in the B cation position. Delafossite compounds also have properties dealing with electric conductivity such as insulation and/or metallic conduction. 
Delafossite compounds can exhibit p- or n-type conductivity based on their composition. Rhombohedral (3R) CuFeO2 properties: p-type semiconductor; bandgap of 1.47 eV; high light absorption coefficient of 7.5×10⁴ cm⁻¹ near the band gap edge at 700 nm; high hole mobility of 34 cm² V⁻¹ s⁻¹ even at doping levels as high as 1.8 × 10¹⁹ cm⁻³; good stability in aqueous environments. Hexagonal (2H) CuFeO2 properties: unknown, as pure 2H CuFeO2 is very difficult to synthesize. Synthesis: 3R CuFeO2 is often synthesized by solid state reactions, sol-gel methods, vapor deposition, and hydrothermal synthesis. Pure 2H CuFeO2 and other 2H delafossite-type oxides are difficult to synthesize. The only pure 2H CuFeO2 crystals were pure 2H CuFeO2 nanoplates with a thickness of about 100 nm, which were synthesized at temperatures as low as 100 °C from CuI and FeCl3·6H2O. Application: Solar cells: 2H CuFeO2 has a band gap of 1.33 eV and a high absorption coefficient of 3.8×10⁴ cm⁻¹ near the band gap edge at 700 nm. It demonstrated a photovoltaic effect when placed into thin film structures composed of ITO/ZnO/2H CuFeO2/graphite/carbon black. Other applications: CuFeO2 is made of earth abundant elements and has good stability in aqueous environments, and as such was investigated as a photocathode for photoelectrochemical reduction of CO2 and solar water reduction, and as a cathode material in lithium batteries. Whereas the 3R phase was somewhat characterized, only X-ray diffraction and theoretical calculation of eg and t2g occupancies of the Fe3+ are available for 2H CuFeO2. Geologic occurrence: In 1873, delafossite was discovered by Charles Friedel in a region of Ekaterinburg, Siberia. Since its discovery it has been identified as a fairly common mineral found in such places as the copper mines of Bisbee, Arizona. Delafossite is usually a secondary mineral often found in oxidized areas of copper deposits although it can be a primary mineral as well. Delafossite can be found as massive, relatively distinct crystals on hematite. Delafossite has since been found in mines around the world from Germany to Chile. Origin of the name: Delafossite was first noted by Charles Friedel in 1873 and given the composition Cu2O·Fe2O3. The mineral was given the name delafossite in honor of the French mineralogist and crystallographer Gabriel Delafosse (1796–1878). Delafosse is known for noting the close relationship between crystal symmetry and physical properties.
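As a quick consistency check of the unit-cell parameters quoted above for delafossite (a = 3.0351 Å, c = 17.166 Å), the volume of a hexagonal cell is V = (√3/2)·a²·c; the short calculation below reproduces the quoted V = 136.94 ų.

```python
import math

# Hexagonal unit-cell volume: V = (sqrt(3)/2) * a^2 * c, using the quoted parameters.
a = 3.0351   # Å
c = 17.166   # Å
volume = (math.sqrt(3) / 2) * a**2 * c
print(round(volume, 2))   # ≈ 136.94 Å^3, matching the quoted value
```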
**Input Field Separators** Input Field Separators: For many command line interpreters (“shell”) of Unix operating systems, the input field separators or internal field separators or $IFS shell variable holds characters used to separate text into tokens. Input Field Separators: The value of IFS, (in the bash shell) typically includes the space, tab, and the newline characters by default. These whitespace characters can be visualized by issuing the "declare" command in the bash shell or printing IFS with commands like printf %s "$IFS" | od -c, printf "%q\n" "$IFS" or printf %s "$IFS" | cat -A (the latter two commands being only available in some shells and on some systems). Input Field Separators: From the Bash, version 4 man page: The shell treats each character of $IFS as a delimiter, and splits the results of the other expansions into words on these characters. If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters space and tab are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs. IFS abbreviation: According to the Open Group Base Specifications, IFS is an abbreviation for "input field separators." A newer version of this specification mentions that "this name is misleading as the IFS characters are actually used as field terminators." However IFS is often referred to as "internal field separators." Exploits: IFS was usable as an exploit in some versions of Unix. A program with root permissions could be fooled into executing user-supplied code if it ran (for instance) system("/bin/mail") and was called with $IFS set to "/", in which case it would run the program "bin" (in the current directory and thus writable by the user) with root permissions. This has been fixed by making the shells not inherit the IFS variable.
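The word-splitting rules quoted above from the bash man page are dense, so the following Python sketch models them approximately; it is an illustration of the quoted behavior, not the shell's actual implementation, and edge cases may differ.

```python
import re

def ifs_split(text: str, ifs: str = " \t\n"):
    """Approximate model of the IFS word-splitting rules quoted above."""
    if ifs == "":                                   # null IFS: no word splitting
        return [text]
    ws = "".join(c for c in ifs if c in " \t\n")        # "IFS whitespace" characters
    other = "".join(c for c in ifs if c not in " \t\n") # non-whitespace IFS characters
    if ws:                                          # leading/trailing IFS whitespace is ignored
        text = text.strip(ws)
    if text == "":
        return []
    ws_cls, other_cls = re.escape(ws), re.escape(other)
    if other and ws:
        # a non-whitespace IFS char, with adjacent IFS whitespace, delimits a field;
        # a run of IFS whitespace alone also delimits
        delim = f"[{ws_cls}]*[{other_cls}][{ws_cls}]*|[{ws_cls}]+"
    elif other:
        delim = f"[{other_cls}]"
    else:
        delim = f"[{ws_cls}]+"
    return re.split(delim, text)

print(ifs_split("  foo  bar baz "))            # ['foo', 'bar', 'baz']
print(ifs_split("a::b", ifs=":"))              # ['a', '', 'b']  (empty field preserved)
print(ifs_split(" a : b :: c ", ifs=": "))     # ['a', 'b', '', 'c']
```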
**Tangsuyuk** Tangsuyuk: Tangsuyuk (탕수육) is a Korean Chinese meat dish with sweet and sour sauce. It can be made with either pork or beef. History and etymology: Tangsuyuk is a dish that was first made by Chinese merchants in the port city of Incheon, where the majority of the ethnic Chinese population in contemporary South Korea live. It is derived from the Shandong-style sweet and sour pork, as Chinese immigrants in Korea, including those that had first migrated to Northeastern China, mostly had Shandong ancestry.Although the Chinese characters meaning "sugar" (糖), "vinegar" (醋), and "meat" (肉) in the original Chinese name "糖醋肉 (pronounced tángcù ròu in Chinese)" are pronounced dang, cho, and yuk in Korean, the dish is called tangsuyuk, not dangchoyuk, because the word tangsu derived from the transliteration of Chinese pronunciation tángcù [tʰǎŋ.tsʰû], with the affricate c [tsʰ] in the second syllable weakened into fricative s [s]. Transliterated loanwords like tangsu do not comprise Sino-Korean vocabulary and do not carry hanja. History and etymology: The third syllable ròu (肉) was not transliterated, as Sino-Korean word yuk (육; 肉) meaning "meat" was also commonly used in Korean dish names.As the word tangsuyuk is the combination of transliterated loanword tangsu and Sino-Korean yuk, it was not a Sino-Korean vocabulary that could be written in hanja. However, Koreans back-formed the second syllable with hanja su (수; 水), meaning "water", perhaps because the sauce was considered soupy. Preparation: Bite-size pieces of pork or beef loin are coated with batter, usually made by soaking a mixture of potato or sweet potato starch and corn starch in water for several hours and draining the excess water. Glutinous rice flour may also be used. Egg white or cooking oil is added to the batter to change its consistency. Similarly to other Korean deep fried dishes, battered tangsuyuk meat is double-fried.Tangsuyuk is served with sweet and sour sauce, which is typically made by boiling vinegar, sugar and water, with variety of fruits and vegetables like carrot, cucumber, onion, water chestnut, wood ear mushroom and pineapple. Starch slurry is used to thicken the sauce.
**FreeON** FreeON: In computer software, FreeON is an experimental, open source (GPL) suite of programs for linear scaling quantum chemistry, formerly known as MondoSCF. It is highly modular, and has been written from scratch for N-scaling SCF theory in Fortran95 and C. Platform independent IO is supported with HDF5. FreeON should compile with most modern Linux distributions. FreeON performs Hartree–Fock, pure density functional, and hybrid HF/DFT calculations (e.g. B3LYP) in a Cartesian-Gaussian LCAO basis. All algorithms are O(N) or O(N lg N) for non-metallic systems. Periodic boundary conditions in 1, 2 and 3 dimensions have been implemented through the Lorentz field ( Γ -point), and an internal coordinate geometry optimizer allows full (atom+cell) relaxation using analytic derivatives. Effective core potentials for energies and forces have been implemented, but Effective Core Potential (ECP) lattice forces do not work yet. Advanced features include O(N) static and dynamic response, as well as time reversible Born Oppenheimer Molecular Dynamics (MD).
**Puddling (civil engineering)** Puddling (civil engineering): Puddling is both the material and the process of lining a water body such as a channel or pond with puddle clay (puddle, puddling) – a watertight (low hydraulic conductivity) material based on clay and water mixed to be workable. Puddle clay as a lining: Puddling is used in maintaining canals or reservoirs on permeable ground. The technique of puddling and its use was developed by early canal engineer James Brindley; it is considered his greatest contribution to engineering. This processed material was used extensively in UK canal construction in the period starting circa 1780. Starting about 1840 puddle clay was used more widely as the water-retaining element (or core) within earthfill dams, particularly in the Pennines. Its usage in UK dams was superseded about 1960 by the use of rolled clay in the core, and better control of moisture content. Puddle clay as a lining: A considerable number of early notable dams were built in that era and they are now sometimes referred to as the 'Pennines embankment' type. These dams are characterized by a slender vertical puddle clay core supported on both sides by earthfill shoulders of more heterogeneous material. To control under-seepage through the natural foundation below the dam, the Pennines embankments generally constructed a puddle clay-filled cutoff trench in rock directly below the central core. Later construction often used concrete to fill the cutoff trench.To make puddle, clay or heavy loam is chopped with a spade and mixed into a plastic state with water and sometimes coarse sand or grit to discourage excavation by moles or water voles. The puddle is laid about 10 inches (25 cm) thick at the sides and nearly 3 ft (0.91 m) thick at the bottom of a canal, built up in layers. Puddle has to be kept wet in order to remain waterproof so it is important for canals to be kept filled with water. Puddle clay as a lining: The clay is laid down with a tool called a 'punner', or 'pun', a large rectangular block on a handle about 5 feet (1.5 m) long, or trodden down, or compacted by some other means (e.g. by an excavator using the convex outside of its scoop, or, historically, by driving cattle across the area). Puddle as a building material: Puddle clay or puddle adobe is often called cob. Cob has added ingredients of a fibrous material to act as a mechanical binder.
**Carboalkoxylation** Carboalkoxylation: In industrial chemistry, carboalkoxylation is a process for converting alkenes to esters. This reaction is a form of carbonylation. A closely related reaction is hydrocarboxylation, which employs water in place of alcohols. A commercial application is the carbomethoxylation of ethylene to give methyl propionate: C2H4 + CO + MeOH → MeO2CC2H5. The process is catalyzed by Pd[C6H4(CH2PBu-t)2]2. Under similar conditions, other Pd-diphosphines catalyze formation of polyethyleneketone. Carboalkoxylation: The methyl propionate ester is a precursor to methyl methacrylate, which is used in plastics and adhesives. Carboalkoxylation has been incorporated into various telomerization schemes. For example, carboalkoxylation has been coupled with the dimerization of 1,3-butadiene. This step produces a doubly unsaturated C9-ester: 2 CH2=CH-CH=CH2 + CO + CH3OH → CH2=CH(CH2)3CH=CHCH2CO2CH3 Hydroesterification: Related to carboalkoxylation is hydroesterification, the insertion of alkenes and alkynes into the H-O bond of carboxylic acids. Vinyl acetate is produced industrially by the addition of acetic acid to acetylene; presently, zinc acetate is used as the catalyst: CH3CO2H + C2H2 → CH3CO2CHCH2
**History of Philosophy Quarterly** History of Philosophy Quarterly: The History of Philosophy Quarterly (HPQ) is a peer-reviewed academic journal dedicated to the history of philosophy. The journal is indexed by PhilPapers and the Philosopher's Index.The History of Philosophy Quarterly was founded in 1984 by Nicholas Rescher of the University of Pittsburgh. In the first issue, the editors of the journal announced that a focus would be on looking to the history of philosophy to help solve contemporary issues, advocating "that approach to philosophical history, increasingly prominent in recent years, which refuses to see the boundary between philosophy and its history as an impassable barrier, but regards historical studies as a way of dealing with problems of continued interest and importance." The journal is published by the University of Illinois Press and the current editor is Brian Copenhaver at University of California, Los Angeles.
**Prismanes** Prismanes: The prismanes are a class of hydrocarbon compounds consisting of prism-like polyhedra of various numbers of sides on the polygonal base. Chemically, each prismane is a series of fused cyclobutane rings (a ladderane, with all-cis/all-syn geometry) that wraps around to join its ends and form a band, with cycloalkane edges. Their chemical formula is (C2H2)n, where n is the number of cyclobutane sides (the size of the cycloalkane base), and that number also forms the basis for a system of nomenclature within this class. The first few chemicals in this class include triprismane, tetraprismane, and pentaprismane, which have been synthesized and studied experimentally, and many higher members of the series have been studied using computer models. The first several members do indeed have the geometry of a regular prism, with flat n-gon bases. As n becomes increasingly large, however, modeling experiments find that highly symmetric geometry is no longer stable, and the molecule distorts into less-symmetric forms. One series of modelling experiments found that starting with [11]prismane, the regular-prism form is not a stable geometry. For example, the structure of [12]prismane would have the cyclobutane chain twisted, with the dodecagonal bases non-planar and non-parallel. Nonconvex prismanes: For large base-sizes, some of the cyclobutanes can be fused anti to each other, giving a non-convex polygon base. These are geometric isomers of the prismanes. Two isomers of [12]prismane that have been studied computationally are named helvetane and israelane, based on the star-like shapes of the rings that form their bases. This was explored computationally after originally being proposed as an April Fools' joke. Their names refer to the shapes found on the flags of Switzerland and Israel, respectively. Polyprismanes: The polyprismanes consist of multiple prismanes stacked base-to-base. The carbons at each intermediate level (the n-gon bases where the prismanes fuse to each other) have no hydrogen atoms attached to them. Related structures: The asteranes contain a methylene group bridge on each edge between the two n-gon bases. Each side is thus a cyclohexane rather than a cyclobutane.
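Since the general formula (C2H2)n fixes the composition of each member, the molecular formula and approximate molar mass of the named prismanes can be written down directly; the standard atomic masses used below are the only values not taken from the text.

```python
# Molecular formula and approximate molar mass of [n]prismane, (C2H2)n.
ATOMIC_MASS = {"C": 12.011, "H": 1.008}

def prismane(n: int):
    formula = f"C{2 * n}H{2 * n}"
    mass = 2 * n * (ATOMIC_MASS["C"] + ATOMIC_MASS["H"])
    return formula, round(mass, 2)

for n, name in [(3, "triprismane"), (4, "tetraprismane"), (5, "pentaprismane")]:
    print(name, *prismane(n))   # e.g. triprismane C6H6 78.11
```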
**Quincha** Quincha: Quincha is a traditional construction system that uses, fundamentally, wood and cane or giant reed forming an earthquake-resistant framework that is covered in mud and plaster. History: Quincha is a Spanish term widely known in Latin America, borrowed from Quechua qincha (kincha in Kichwa). Even though Spanish and Portuguese are closely related languages, in this case, the Portuguese equivalent is completely different: Pau-a-pique. Historically, quincha has been utilized in the Spanish and Portuguese colonies throughout the different regions of the Americas. The construction technology is said to have existed for at least 8,000 years. In Peru, it is a popular construction design in the coastal regions. It was also adopted in urban centers after major earthquakes, as in the rebuilding of the city of Trujillo after the 1759 earthquake. Construction: The framework or wattle is a main feature of traditional quincha. It is constructed by interweaving pieces of wood, cane, or bamboo and is covered with a mixture of mud and straw (or daub). It is then covered on both sides with a thin lime plaster finish, which serves as a sort of wall or ceiling panel. Quincha is known for its flexibility, since it can be shaped into different designs. For example, the builders of the church at San Jose at Ingenio, Nazca modified quincha to construct its ornate twin-towered facade. Its earthquake resistance is attributed to the combination of heavy mass (used for thermal insulation) and timber-frame structure. The lattice design of its framework also gives a quincha building stability, allowing it to shake during an earthquake without damage. A modern iteration of quincha is called quincha metallica, a method developed by the Chilean architect Marcelo Cortés. In this system, steel and welded wire mesh are used instead of bamboo or cane to create the matrix that holds the mud, which is also improved through the addition of lime to control the clay's expansion and water impermeability.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Salt bridge** Salt bridge: In electrochemistry, a salt bridge or ion bridge is a laboratory device used to connect the oxidation and reduction half-cells of a galvanic cell (voltaic cell), a type of electrochemical cell. It maintains electrical neutrality within the internal circuit. If no salt bridge were present, the solution in one-half cell would accumulate a negative charge and the solution in the other half cell would accumulate a positive charge as the reaction proceeded, quickly preventing further reaction, and hence the production of electricity. Salt bridges usually come in two types: glass tubes and filter paper. Glass tube bridges: One type of salt bridge consists of a U-shaped glass tube filled with a relatively inert electrolyte. It is usually a combination of potassium or ammonium cations and chloride or nitrate anions, which have similar mobility in solution. The combination is chosen which does not react with any of the chemicals used in the cell. The electrolyte is often gelified with agar-agar to help prevent the intermixing of fluids that might otherwise occur. Glass tube bridges: The conductivity of a glass tube bridge depends mostly on the concentration of the electrolyte solution. At concentrations below saturation, an increase in concentration increases conductivity. Beyond-saturation electrolyte content and narrow tube diameter may both lower conductivity. Filter paper bridges: Porous paper such as filter paper may be used as a salt bridge if soaked in an appropriate electrolyte such as the electrolytes used in glass tube bridges. No gelification agent is required as the filter paper provides a solid medium for conduction. The conductivity of this kind of salt bridge depends on a number of factors: the concentration of the electrolyte solution, the texture of the paper, and the absorbing ability of the paper. Generally, smoother texture and higher absorbency equate to higher conductivity. A porous disk or other porous barriers between the two half-cells may be used instead of a salt bridge; these allow ions to pass between the two solutions while preventing bulk mixing of the solutions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**F6 (classification)** F6 (classification): F6, also SP6, is a wheelchair sport classification that corresponds to the neurological level L2 - L5. Historically, this class has been known as Lower 4, Upper 5. People in this class have good sitting balance and good forward and backward movement of their trunk. They have some use of their thighs and can press their knees together. Sports open to people in this class include archery, adaptive rowing, ten-pin bowling, swimming, wheelchair basketball, wheelchair fencing and athletics. Classification into this class involves both a medical and a functional classification process. This process is often sport specific. Definition: This is a wheelchair sport classification that corresponds to the neurological level L2 - L5. Historically, this class has been known as Lower 4, Upper 5. In 2002, USA Track & Field defined this class as, "These athletes also put the shot and throw the discus and javelin. They have very good balance and movements in the forward and backward plane, with good trunk rotation. They can lift their thighs off the chair and press the knees together. Some have the ability to straighten and bend their knees. Neurological level: L2-L5." Disabled Sports USA defined the functional definition of this class in 2003 as, "Have very good balance and movements in the backwards and forwards plane. Have good trunk rotation. Can lift the thighs, i.e. off the chair (hip flexion). Can press the knees together (hip abduction). May have the ability to straighten the knees (knee extension). May have some ability to bend the knees (knee flexion)." Definition: Neurological The neurological definition of this class is L2 - L5. The location of lesions on different vertebrae tends to be associated with disability levels and functionality issues. L2 is associated with hip flexors. L3 is associated with knee extensors. L4 is associated with ankle dorsiflexors. L5 is associated with long toe extensors. Definition: Anatomical People with lesions at L4 have issues with their lower back muscles, hip flexors and their quadriceps. People with lesions at L4 to S2 who are complete paraplegics may have motor function issues in their glutes and hamstrings. Their quadriceps are likely to be unaffected. They may have absent sensation below the knees and in the groin area. Definition: Functional People in this class have good sitting balance. People with lesions at L4 have trunk stability, can lift a leg and can flex their hips. They can walk independently with the use of longer leg braces. They may use a wheelchair for the sake of convenience. Recommended sports include many standing-related sports. People in this class have a total respiratory capacity of 88% compared to people without a disability. Governance: In general, classification for spinal cord injuries and wheelchair sport is overseen by the International Wheelchair and Amputee Sports Federation (IWAS), which took over this role following the 2005 merger of ISMWSF and ISOD. From the 1950s to the early 2000s, wheelchair sport classification was handled by the International Stoke Mandeville Games Federation (ISMGF). Some sports have classification managed by other organizations. In the case of athletics, classification is handled by IPC Athletics.
The International Paralympic Committee manages classification for a number of spinal cord injury and wheelchair sports including alpine skiing, biathlon, cross country skiing, ice sledge hockey, powerlifting, shooting, swimming, and wheelchair dance. Some sports specifically for people with disabilities, like race running, have two governing bodies that work together to allow different types of disabilities to participate. Classification is also handled at the national level or at the national sport-specific level. In the United States, this has been handled by Wheelchair Sports, USA (WSUSA), which managed wheelchair track, field, slalom, and long-distance events. For wheelchair basketball in Canada, classification is handled by Wheelchair Basketball Canada. History: Early in this class's history, the class had a different name, was based on medical classification, and was originally intended for athletics. During the 1960s and 1970s, classification involved being examined in a supine position on an examination table, where multiple medical classifiers would often stand around the player and poke and prod their muscles with their hands and with pins. The system had no built-in privacy safeguards, and players being classified were not assured privacy during medical classification or for their medical records. During the late 1960s, people often tried to cheat classification in order to be classified more favorably. The group most likely to try to cheat at classification was wheelchair basketball players with complete spinal cord injuries located at the high thoracic transection of the spine. Starting in the 1980s and going into the 1990s, this class began to be defined more around functional classification than around a medical one. Sports: Athletics Under the IPC Athletics classification system, this class competes in F56. Field events open to this class have included shot put, discus and javelin. In pentathlon, the events for this class have included Shot, Javelin, 200m, Discus, 1500m. F6 athletes throw from a seated position, and the javelin they use weighs 0.6 kilograms (1.3 lb). The shot put used by women in this class weighs less than the traditional one at 3 kilograms (6.6 lb). There are performance differences and similarities between this class and other wheelchair classes. A 1999 study of discus throwers found that for F5 to F8 discus throwers, the upper arm tends to be near horizontal at the moment of release of the discus. F5 to F7 discus throwers have greater angular speed of the shoulder girdle during release of the discus than the lower number classes of F2 to F4. F5 and F8 discus throwers have less average angular forearm speed than F2 and F4 throwers. F2 and F4 speed is caused by use of elbow flexion to compensate for the shoulder flexion advantage of F5 to F8 throwers. A study of javelin throwers in 2003 found that F6 throwers have angular speeds of the shoulder girdle similar to that of F4, F5, F3, F7, F8 and F9 throwers. A study was done comparing the performance of athletics competitors at the 1984 Summer Paralympics. It found there was little significant difference in performance in distance between women in 2 (SP4), 3 (SP4, SP5) and 4 (SP5, SP6) in the discus. It found there was little significant difference in performance in time between men in 2 (SP4), 3 and 4 in the 100 meters. It found there was little significant difference in performance in distance between women in 2 (SP4), 3, 4, 5 and 6 in the discus.
It found there was little significant difference in performance in time between men in 3, 4, 5 and 6 in the 200 meters. It found there was little significant difference in performance in time between women in 3, 4 and 5 in the 60 meters. It found there was little significant difference in performance in distance between men in 3 and 4 in the javelin. It found there was little significant difference in performance in distance between men in 3 and 4 in the shot put. It found there was little significant difference in performance in distance between women in 4, 5 and 6 in the discus. It found there was little significant difference in performance in distance between women in 4, 5 and 6 in the javelin. It found there was little significant difference in performance in distance between women in 4, 5 and 6 in the shot put. It found there was little significant difference in performance in time between women in 4, 5 and 6 in the 60 meters. It found there was little significant difference in performance in time between women in 4, 5 and 6 in the 800 meters. It found there was little significant difference in performance in time between women in 4, 5 and 6 in the 1,500 meters. It found there was little significant difference in performance in time between women in 4, 5 and 6 in the slalom. It found there was little significant difference in performance in distance between men in 4, 5 and 6 in the discus. It found there was little significant difference in performance in distance between men in 4, 5 and 6 in the shot put. It found there was little significant difference in performance in time between men in 4, 5 and 6 in the 100 meters. It found there was little significant difference in performance in time between men in 4, 5 and 6 in the 800 meters. It found there was little significant difference in performance in time between men in 4, 5 and 6 in the 1,500 meters. It found there was little significant difference in performance in time between men in 4, 5 and 6 in the slalom. It found there was little significant difference in performance in distance between women in 5 and 6 in the discus. It found there was little significant difference in performance in time between women in 5 and 6 in the 60 meters. It found there was little significant difference in performance in time between women in 5 and 6 in the 100 meters. It found there was little significant difference in performance in distance between men in 5 and 6 in the javelin. It found there was little significant difference in performance in distance between men in 5 and 6 in the shot put. It found there was little significant difference in performance in time between men in 5 and 6 in the 100 meters. Sports: Swimming Swimmers in this class compete in a number of IPC swimming classes. These include S5, SB5, S7 and S8. People in SB5 tend to be complete paraplegics below T11 to L1 who cannot use their legs for swimming, or complete paraplegics at L2 to L3 with surgical rods put in their spinal column from T4 to T6 which affects their balance. S7 swimmers with spinal cord injuries tend to be complete paraplegics with lesions below L2 to L3. When swimming, they are able to do an effective catch phase because of good hand control. They can use their arms to get power and maintain control. Their hips are higher in the water than lower numbered classes for people with spinal cord injuries.
While they have no kick movement in their legs, they are able to keep their legs in a streamlined position. They use their hands for turns. They either do a sitting dive start or start in the water. S8 swimmers with spinal cord injuries tend to be complete paraplegics with lesions below L4 to L5. When swimming, they are able to kick, but limited use of their ankles means that their propulsion from kicking can be limited. They normally do diving starts from the platform but are not able to get full power because of limited use of their legs. They do leg turns but have limited propulsion power off the wall. A study was done comparing the performance of swimming competitors at the 1984 Summer Paralympics. It found there was little significant difference in performance times between women in 4 (SP5, SP6), 5 (SP6, SP7) and 6 (SP7) in the 100m breaststroke. It found there was little significant difference in performance times between women in 4 (SP5, SP6), 5 (SP6, SP7) and 6 (SP7) in the 100m backstroke. It found there was little significant difference in performance times between women in 4 (SP5, SP6), 5 (SP6, SP7) and 6 (SP7) in the 100m freestyle. It found there was little significant difference in performance times between women in 4 (SP5, SP6), 5 (SP6, SP7) and 6 (SP7) in the 4 x 50 m individual medley. It found there was little significant difference in performance times between men in 4 (SP5, SP6), 5 (SP6, SP7) and 6 (SP7) in the 100m backstroke. It found there was little significant difference in performance times between men in 4 (SP5, SP6), 5 (SP6, SP7) and 6 (SP7) in the 100m breaststroke. It found there was little significant difference in performance times between women in 2 (SP4), 3 (SP4, SP5) and 4 (SP5, SP6) in the 25 m butterfly. It found there was little significant difference in performance times between men in 2 (SP4), 3 (SP4, SP5) and 4 (SP5, SP6) in the 25 m butterfly. It found there was little significant difference in performance times between women in 5 (SP6, SP7) and 6 (SP7) in the 50 m butterfly. It found there was little significant difference in performance times between men in 5 (SP6, SP7) and 6 (SP7) in the 4 x 50 m individual medley. It found there was little significant difference in performance times between men in 5 (SP6, SP7) and 6 (SP7) in the 100 m freestyle. Sports: Wheelchair basketball The original wheelchair basketball classification system in 1966 had 5 classes: A, B, C, D, S. Each class was worth so many points. A was worth 1, B and C were worth 2. D and S were worth 3 points. A team could have a maximum of 12 points on the floor. This system was the one in place for the 1968 Summer Paralympics. Class A was for T1-T9 complete. Class B was for T1-T9 incomplete. Class C was for T10-L2 complete. Class D was for T10-L2 incomplete. Class S was for Cauda equina paralysis. This class would have been part of Class C or Class D. From 1969 to 1973, a classification system designed by Australian Dr. Bedwell was used. This system used some muscle testing to determine which class incomplete paraplegics should be classified in. It used a point system based on the ISMGF classification system. Class IA, IB and IC were worth 1 point. Class II, for people with lesions between T1-T5 and no balance, was also worth 1 point. Class III, for people with lesions at T6-T10 and fair balance, was worth 1 point. Class IV was for people with lesions at T11-L3 and good trunk muscles. They were worth 2 points.
Class V was for people with lesions at L4 to L5 with good leg muscles. Class VI was for people with lesions at S1-S4 with good leg muscles. Classes V and VI were worth 3 points. The Daniels/Worthington muscle test was used to determine who was in Class V and who was in Class VI. Paraplegics with 61 to 80 points on this scale were not eligible. A team could have a maximum of 11 points on the floor. The system was designed to keep out people with less severe spinal cord injuries, and had no medical basis in many cases. This class would have been IV or V. In 1982, wheelchair basketball finally made the move to a functional classification system internationally. While the traditional medical system of where a spinal cord injury was located could be part of classification, it was only one advisory component. With this system, players in this class became Class II and 3 or 3.5 point players. A maximum of 14 points was allowed on the court at a time. Under the current system, they would likely be classified as a 3-point player if they are L2 to L4. They are likely to be classified as a 4-point player if they are L5 to S2. Sports: Wheelchair fencing Generally, people in this class are classified as 3 or 4. Wheelchair fencers from this class who are classified as 3 are paraplegics from D10 to L2, scoring between 5 and 9 points on Type 1 and Type 2 function tests. For class 4, fencers tend to have a lesion below L4. They tend to score at least 5 points on Type 3 and Type 4 of the function test. For international IWF-sanctioned competitions, classes are combined: 3 and 4 compete together as Category A. Sports: Other sports One of the sports open to people in this class is archery. People in this class compete in ARW2. This class is for people who have limited to good trunk function and normal functioning in their arms. It includes paraplegic archers, while ARW1 includes tetraplegic archers. Rowing is another sport open to people in this class. Currently, people with a complete spinal cord injury at the L3 level or an incomplete lesion at L1 compete in TA. This class is for people with trunk and arm function. In 1991, the first internationally accepted adaptive rowing classification system was established and put into use. People from this class were initially classified as P2, a class for people with lesions at T10-L4. Ten-pin bowling is another sport open to people in this class, where they compete in TPB8. People in this class do not have more than 70 points for functionality, have a normal arm pitch for throwing and use a wheelchair. Getting classified: Classification is often sport specific, and has two parts: a medical classification process and a functional classification process. Medical classification for wheelchair sport can consist of medical records being sent to medical classifiers at the international sports federation. The sportsperson's physician may be asked to provide extensive medical information, including the medical diagnosis and any loss of function related to their condition. This includes whether the condition is progressive or stable, and whether it is an acquired or congenital condition. It may include a request for information on any future anticipated medical care. It may also include a request for any medications the person is taking. Documentation that may be required includes X-rays, ASIA scale results, or Modified Ashworth Scale scores. One of the standard means of assessing functional classification is the bench test, which is used in swimming, lawn bowls and wheelchair fencing.
Using the adapted Medical Research Council (MRC) measurements, muscle strength is tested using the bench press for a variety of spinal cord-related injuries, with a muscle being assessed on a scale of 0 to 5. A 0 is for no muscle contraction. A 1 is for a flicker or trace of contraction in a muscle. A 2 is for active movement in a muscle with gravity eliminated. A 3 is for movement against gravity. A 4 is for active movement against gravity with some resistance. A 5 is for normal muscle movement. During functional and medical classification, a number of tests may be run for people in this class. For the trunk rotation test, people in this class are expected to have abdominal function and lower limb function demonstrated by having hip flexors and abductors. Wheelchair fencing classification has 6 tests for functionality during classification, along with a bench test. Each test gives 0 to 3 points. A 0 is for no function. A 1 is for minimum movement. A 2 is for fair movement but weak execution. A 3 is for normal execution. The first test is an extension of the dorsal musculature. The second test is for lateral balance of the upper limbs. The third test measures trunk extension of the lumbar muscles. The fourth test measures lateral balance while holding a weapon. The fifth test measures the trunk movement in a position between that recorded in tests one and three, and tests two and four. The sixth test measures the trunk extension involving the lumbar and dorsal muscles while leaning forward at a 45 degree angle. In addition, a bench test is required to be performed.
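The grading scales described above can be written out compactly. The sketch below simply encodes the 0-5 bench-test muscle grades and the 0-3 wheelchair fencing function-test grades as lookup tables; it is an illustration of the scales quoted in the text, not an official classification tool, and the helper names are invented for this example.

```python
# The two grading scales described above, encoded as lookup tables.
# Illustrative only; wording paraphrased from the text, not from an
# official classification manual.

MRC_BENCH_TEST_GRADES = {
    0: "no muscle contraction",
    1: "flicker or trace of contraction",
    2: "active movement with gravity eliminated",
    3: "movement against gravity",
    4: "active movement against gravity with some resistance",
    5: "normal muscle movement",
}

FENCING_FUNCTION_TEST_GRADES = {
    0: "no function",
    1: "minimum movement",
    2: "fair movement but weak execution",
    3: "normal execution",
}

def describe_bench_grade(grade: int) -> str:
    """Return the description for a 0-5 bench-test (MRC-style) muscle grade."""
    return MRC_BENCH_TEST_GRADES[grade]

if __name__ == "__main__":
    print(describe_bench_grade(4))
    # Total of the six wheelchair-fencing function tests (each graded 0-3):
    example_scores = [3, 3, 2, 2, 1, 1]
    print("total function-test score:", sum(example_scores))
```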
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Photoionisation cross section** Photoionisation cross section: In condensed matter physics, the photoionisation cross section describes the probability that a particle (usually an electron) is emitted from its electronic state when the material is irradiated with photons. Cross section in photoemission: Photoemission is a useful experimental method for the determination and study of electronic states. Sometimes a small amount of material deposited on a surface contributes only weakly to the photoemission spectra, which makes its identification very difficult. Knowledge of the cross section of a material can help to detect thin layers or 1D nanowires on a substrate. A suitable choice of photon energy can enhance the signal from a small amount of material deposited on a surface; otherwise it is not possible to distinguish the different contributions to the spectra.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Morse–Palais lemma** Morse–Palais lemma: In mathematics, the Morse–Palais lemma is a result in the calculus of variations and theory of Hilbert spaces. Roughly speaking, it states that a smooth enough function near a critical point can be expressed as a quadratic form after a suitable change of coordinates. The Morse–Palais lemma was originally proved in the finite-dimensional case by the American mathematician Marston Morse, using the Gram–Schmidt orthogonalization process. This result plays a crucial role in Morse theory. The generalization to Hilbert spaces is due to Richard Palais and Stephen Smale. Statement of the lemma: Let (H,⟨⋅,⋅⟩) be a real Hilbert space, and let U be an open neighbourhood of the origin in H. Let f:U→R be a (k+2)-times continuously differentiable function with k≥1; that is, f∈Ck+2(U;R). Assume that f(0)=0 and that 0 is a non-degenerate critical point of f; that is, the second derivative D2f(0) defines an isomorphism of H with its continuous dual space H∗ by x↦D2f(0)(x,⋅). Then there exists a subneighbourhood V of 0 in U, a diffeomorphism φ:V→V that is Ck with Ck inverse, and an invertible symmetric operator A:H→H, such that f(x)=⟨Aφ(x),φ(x)⟩ for all x∈V. Corollary: Let f:U→R be f∈Ck+2 such that 0 is a non-degenerate critical point. Then there exists a Ck-with-Ck-inverse diffeomorphism ψ:V→V and an orthogonal decomposition H=G⊕G⊥ such that, if one writes ψ(x)=u+v with u∈G and v∈G⊥, then f(x)=⟨u,u⟩−⟨v,v⟩ for all x∈V.
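For orientation, the classical finite-dimensional case that Morse originally proved can be written out explicitly. The display below is a standard textbook formulation, included here only as an illustration of what the lemma says when H is Rn.

```latex
% Classical (finite-dimensional) Morse lemma, i.e. the special case H = R^n.
% Standard textbook formulation, included for illustration only.
In suitable local coordinates $(y_1, \dots, y_n)$ centred at the non-degenerate
critical point, and with $f(0) = 0$ as above,
\[
  f \;=\; -\,y_1^{2} - \cdots - y_{\lambda}^{2} \;+\; y_{\lambda+1}^{2} + \cdots + y_{n}^{2},
\]
where $\lambda$ (the number of negative squares) is the Morse index of $f$ at $0$.
```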
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scofflaw** Scofflaw: Scofflaw is a noun coined during the Prohibition era which originally denoted a person who drank illegally or otherwise ignored anti-drinking laws. It is a compound of the words scoff and law. Its use has been extended to mean one who flouts any law, especially those difficult to enforce, and particularly traffic laws. Etymology: "Scofflaw" was the winning entry of a nationwide competition to create a new word for "the lawless drinker," with a prize of $200 in gold, sponsored by Delcevare King, a banker and enthusiastic supporter of Prohibition, in 1923. Two separate entrants, Henry Irving Dale and Kate L. Butler, submitted the word, and split the $200 prize equally. Scofflaw was deemed the best and most suitable out of over 25,000 entries. The word was frequently used from the outset until the eventual repeal of Prohibition in 1933. It experienced a revival in the 1950s, as a term for anyone who displays disdain for laws difficult to enforce. The word itself remains a symbol of the Prohibition era. Use: "The Scofflaw" is the name of the 99th episode of Seinfeld. The second part of the three-part documentary Prohibition is titled A Nation of Scofflaws and documents the origin and use of the word. A New York Times investigation into the ship Dona Liberta is titled Stowaways and Crimes Aboard a Scofflaw Ship. It was later incorporated into chapter four of The Outlaw Ocean (2019) by Ian Urbina as The Scofflaw Fleet. Concerns have also been raised about derelict and scofflaw vessels in Vancouver's False Creek.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rheumatoid factor** Rheumatoid factor: Rheumatoid factor (RF) is the autoantibody that was first found in rheumatoid arthritis. It is defined as an antibody against the Fc portion of IgG, and different RFs can recognize different parts of the IgG-Fc. RF and IgG join to form immune complexes that contribute to the disease process, such as chronic inflammation and joint destruction at the synovium and cartilage. Rheumatoid factor can also be a cryoglobulin (antibody that precipitates on cooling of a blood sample); it can be either a type 2 (monoclonal IgM to polyclonal IgG) or type 3 (polyclonal IgM to polyclonal IgG) cryoglobulin. Rheumatoid factor: Although predominantly encountered as IgM, rheumatoid factor can be of any isotype of immunoglobulins, i.e. IgA, IgG, IgM, IgE, IgD. Testing: RF is tested by collecting blood in a plain tube (5 mL is often enough). The serum is tested for the presence of RF. There are different methods available, which include nephelometry, turbidimetry, and agglutination of gamma globulin-coated latex particles or erythrocytes. RF is often evaluated in patients suspected of having any form of arthritis, even though positive results can be due to other causes and negative results do not rule out disease. However, in combination with signs and symptoms, it can play a role in both diagnosis and disease prognosis. It is part of the usual disease criteria of rheumatoid arthritis. The presence of rheumatoid factor in serum can also indicate the occurrence of suspected autoimmune activity unrelated to rheumatoid arthritis, such as that associated with tissue or organ rejection. In such instances, RF may serve as one of several serological markers for autoimmunity. The sensitivity of RF for established rheumatoid arthritis is only 60–70%, with a specificity of 78%. Rheumatoid factor is part of the 2010 ACR/EULAR classification criteria for rheumatoid arthritis. RF positivity combines well with anti-CCP and/or 14-3-3η (YWHAH) to inform diagnosis. RF positivity at baseline has also been described as a useful prognostic marker for future radiographic damage. Interpretation: High levels of rheumatoid factor (in general, above 20 IU/mL, 1:40, or over the 95th percentile; there is some variation among labs) occur in rheumatoid arthritis (present in 80%) and Sjögren's syndrome (present in 70%). The higher the level of RF, the greater the probability of destructive articular disease. It is also found in Epstein–Barr virus or Parvovirus infection and in 5–10% of healthy persons, especially the elderly. Interpretation: There is an association between rheumatoid factor and more persistently active synovitis, more joint damage and greater eventual disability. Other than in rheumatoid arthritis, rheumatoid factor may also be elevated in other conditions, including: Systemic lupus erythematosus (SLE) Sjögren syndrome Hepatitis B and C, herpes, HIV, and other viral infections Primary biliary cirrhosis Infectious mononucleosis and any chronic viral infection Leprosy Sarcoidosis Tuberculosis, syphilis and other chronic bacterial infections Visceral leishmaniasis Malaria and other parasitic infections Cancer History: The test was first described by Norwegian Dr Erik Waaler in 1940 and redescribed by Dr Harry M. Rose and colleagues in 1948. The redescription is said to be due to the uncertainties caused by World War II. It is still referred to as the Waaler–Rose test.
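The sensitivity and specificity figures quoted above can be turned into a short worked example of what a positive RF result implies on its own. The Python sketch below applies Bayes' rule; the 1% pre-test prevalence is an assumed value chosen only for illustration and is not a figure from the text.

```python
# Worked example: predictive value of a positive RF test using the
# sensitivity/specificity quoted above. The prevalence is an assumption
# made only for illustration.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability of disease given a positive test (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

sens = 0.65   # mid-point of the 60-70% sensitivity quoted for established RA
spec = 0.78   # specificity quoted above
prev = 0.01   # assumed pre-test prevalence (illustrative only)

ppv = positive_predictive_value(sens, spec, prev)
print(f"PPV at {prev:.0%} prevalence: {ppv:.1%}")   # roughly 3%: most positives are false
```

At such a low assumed prevalence most positive results are false positives, which is consistent with the point made above that RF is interpreted together with signs and symptoms rather than in isolation.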
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Data Reference Model** Data Reference Model: The Data Reference Model (DRM) is one of the five reference models of the Federal Enterprise Architecture. Overview: The DRM is a framework whose primary purpose is to enable information sharing and reuse across the United States federal government via the standard description and discovery of common data and the promotion of uniform data management practices. The DRM describes artifacts which can be generated from the data architectures of federal government agencies. The DRM provides a flexible and standards-based approach to accomplish its purpose. The scope of the DRM is broad, as it may be applied within a single agency, within a community of interest, or across communities of interest. Data Reference Model topics: DRM structure The DRM provides a standard means by which data may be described, categorized, and shared. These are reflected within each of the DRM's three standardization areas: Data Description: Provides a means to uniformly describe data, thereby supporting its discovery and sharing. Data Context: Facilitates discovery of data through an approach to the categorization of data according to taxonomies. Additionally, enables the definition of authoritative data assets within a community of interest. Data Sharing: Supports the access and exchange of data, where access consists of ad hoc requests (such as a query of a data asset), and exchange consists of fixed, recurring transactions between parties. Enabled by capabilities provided by both the Data Context and Data Description standardization areas. DRM Version 2 The Data Reference Model version 2, released in November 2005, is a 114-page document with detailed architectural diagrams and an extensive glossary of terms. The DRM also makes many references to ISO standards, specifically the ISO/IEC 11179 metadata registry standard. DRM usage Although the DRM is not technically a published interoperability standard such as web services, it is an excellent starting point for data architects within federal and state agencies. Any federal or state agencies that are involved with exchanging information with other agencies or that are involved in data warehousing efforts should use this document as a guide.
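To make the three standardization areas concrete, the sketch below models a minimal data-asset record carrying a description, a context (taxonomy topics), and a sharing entry. This is only an illustrative structure suggested by the summary above; it is not the DRM's normative schema, and the field names and example values are invented.

```python
# Minimal illustrative structure for the DRM's three standardization areas.
# Field names and example values are invented; this is not the DRM's
# normative schema.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    description: str                                            # Data Description: uniform description for discovery
    context_topics: list[str] = field(default_factory=list)     # Data Context: taxonomy categories
    sharing: dict[str, str] = field(default_factory=dict)       # Data Sharing: query/exchange services

asset = DataAsset(
    name="GrantAwards",
    description="Awards issued by a federal grant-making agency",
    context_topics=["Financial Assistance", "Grants"],
    sharing={"query_service": "ad hoc query endpoint", "exchange": "monthly fixed exchange"},
)
print(asset.context_topics)
```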
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Teichmüller modular form** Teichmüller modular form: In mathematics, a Teichmüller modular form is an analogue of a Siegel modular form on Teichmüller space.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hyperorgasmia** Hyperorgasmia: Hyperorgasmia is the experience of a significantly larger number of orgasms in a short period of time than what is normal. It has been reported to occur as a side effect of the antidepressant drug moclobemide.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shakespeare's writing style** Shakespeare's writing style: William Shakespeare's style of writing was borrowed from the conventions of the day and adapted to his needs. Overview: William Shakespeare's first plays were written in the conventional style of the day. He wrote them in a stylised language that does not always spring naturally from the needs of the characters or the drama. The poetry depends on extended, elaborate metaphors and conceits, and the language is often rhetorical—written for actors to declaim rather than speak. For example, the grand speeches in Titus Andronicus, in the view of some critics, often hold up the action, while the verse in The Two Gentlemen of Verona has been described as stilted.Soon, however, William Shakespeare began to adapt the traditional styles to his own purposes. The opening soliloquy of Richard III has its roots in the self-declaration of Vice in medieval drama. At the same time, Richard's vivid self-awareness looks forward to the soliloquies of Shakespeare's mature plays. No single play marks a change from the traditional to the freer style. Shakespeare combined the two throughout his career, with Romeo and Juliet perhaps the best example of the mixing of the styles. By the time of Romeo and Juliet, Richard II, and A Midsummer Night's Dream in the mid-1590s, Shakespeare had begun to write a more natural poetry. He increasingly tuned his metaphors and images to the needs of the drama itself. Overview: Shakespeare's standard poetic form was blank verse, composed in iambic pentameter with clever use of puns and imagery. In practice, this meant that his verse was usually unrhymed and consisted of ten syllables to a line, spoken with a stress on every second syllable. The blank verse of his early plays is quite different from that of his later ones. It is often beautiful, but its sentences tend to start, pause, and finish at the end of lines, with the risk of monotony. Once Shakespeare mastered traditional blank verse, he began to interrupt and vary its flow. This technique releases the new power and flexibility of the poetry in plays such as Julius Caesar and Hamlet. Shakespeare uses it, for example, to convey the turmoil in Hamlet's mind: After Hamlet, Shakespeare varied his poetic style further, particularly in the more emotional passages of the late tragedies. The literary critic A. C. Bradley described this style as "more concentrated, rapid, varied, and, in construction, less regular, not seldom twisted or elliptical". In the last phase of his career, Shakespeare adopted many techniques to achieve these effects. These included enjambments, irregular pauses and stops, and extreme variations in sentence structure and length. In Macbeth, for example, the language darts from one unrelated metaphor or simile to another in one of Lady Macbeth's well-known speeches: And in Macbeth's preceding speech: The audience is challenged to complete the sense. The late romances, with their shifts in time and surprising turns of plot, inspired a last poetic style in which long and short sentences are set against one another, clauses are piled up, subject and object are reversed, and words are omitted, creating an effect of spontaneity.Shakespeare's poetic genius was allied with a practical sense of the theatre. Like all playwrights of the time, Shakespeare dramatised stories from sources such as Petrarch and Holinshed. He reshaped each plot to create several centres of interest and show as many sides of a narrative to the audience as possible. 
This strength of design ensures that a Shakespeare play can survive translation, cutting and wide interpretation without loss to its core drama. As Shakespeare's mastery grew, he gave his characters clearer and more varied motivations and distinctive patterns of speech. He preserved aspects of his earlier style in the later plays, however. In his late romances, he deliberately returned to a more artificial style, which emphasised the illusion of theatre. Form: In some of Shakespeare's early works, punctuation at the end of the lines strengthens the rhythm. He and other dramatists at the time used this form of blank verse for much of the dialogue between characters to elevate the poetry of drama. To end many scenes in his plays he used a rhyming couplet, thus creating suspense. A typical example occurs in Macbeth as Macbeth leaves the stage to murder Duncan: His plays make effective use of the soliloquy, in which a character makes a solitary speech, giving the audience insight to the character's motivations and inner conflict. The character either speaks to the audience directly (in the case of choruses, or characters that become epilogues), or more commonly, speaks to himself or herself in the fictional realm. Shakespeare's writing features extensive wordplay of double entendres and clever rhetorical flourishes. Humour is a key element in all of Shakespeare's plays. His works have been considered controversial through the centuries for his use of bawdy punning, to the extent that "virtually every play is shot through with sexual puns." Indeed, in the nineteenth century, popular censored versions of the plays were produced as The Family Shakspeare [sic] by Henrietta Bowdler (writing anonymously) and later by her brother Thomas Bowdler. Comedy is not confined to Shakespeare's comedies, and is a core element of many of the tragedy and history plays. For example, comic scenes dominate over historical material in Henry IV, Part 1. Similarities to contemporaries: Besides following the popular forms of his day, Shakespeare's general style is comparable to several of his contemporaries. His works have many similarities to the writing of Christopher Marlowe, and seem to reveal strong influences from the Queen's Men's performances, especially in his history plays. His style is also comparable to Francis Beaumont's and John Fletcher's, other playwrights of the time.Shakespeare often borrowed plots from other plays and stories. Hamlet, for example, is comparable to Saxo Grammaticus' Gesta Danorum. Romeo and Juliet is thought to be based on Arthur Brooke's narrative poem The Tragical History of Romeus and Juliet. King Lear is based on the story of King Leir in Historia Regum Britanniae by Geoffrey of Monmouth, which was retold in 1587 by Raphael Holinshed. Borrowing plots in this way was not uncommon at the time. After Shakespeare's death, playwrights quickly began borrowing from his works, a tradition that continues to this day. Differences from contemporaries: Shakespeare's works express the complete range of human experience. His characters were human beings who commanded the sympathy of audiences when many other playwrights' characters were flat or archetypes. Macbeth, for example, commits six murders by the end of the fourth act, and is responsible for many deaths offstage, yet still commands an audience's sympathy until the very end because he is seen as a flawed human being, not a monster. 
Hamlet knows that he must avenge the death of his father, but he is too indecisive, too self-doubting, to carry this out until he has no choice. His failings cause his downfall, and he exhibits some of the most basic human reactions and emotions. Shakespeare's characters were complex and human in nature. By making the protagonist's character development central to the plot, Shakespeare changed what could be accomplished with drama.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neutron backscattering** Neutron backscattering: Neutron backscattering is one of several inelastic neutron scattering techniques. Backscattering from monochromator and analyzer crystals is used to achieve an energy resolution in the order of μeV. Neutron backscattering experiments are performed to study atomic or molecular motion on a nanosecond time scale. History: Neutron backscattering was proposed by Heinz Maier-Leibnitz in 1966, and realized by some of his students in a test setup at the research reactor FRM I in Garching bei München, Germany. Following this successful demonstration of principle, permanent spectrometers were built at Forschungszentrum Jülich and at the Institut Laue-Langevin (ILL). Later instruments brought an extension of the accessible momentum transfer range (IN13 at ILL), the introduction of focussing optics (IN16 at ILL), and a further increase of intensity by a compact design with a phase-space transform chopper (HFBS at NIST, SPHERES at FRM II, IN16B at the Institut Laue-Langevin). Backscattering spectrometers: Operational backscattering spectrometers at reactors include IN10, IN13, and IN16B at the Institut Laue-Langevin, the High Flux Backscattering Spectrometer (HFBS) at the NIST Center for Neutron Research, the SPHERES instrument of Forschungszentrum Jülich at FRM II and EMU at ANSTO. Inverse geometry spectrometers: Inverse geometry spectrometers at spallation sources include IRIS and OSIRIS at the ISIS neutron source at Rutherford-Appleton, BASIS at the Spallation Neutron Source, and MARS at the Paul Scherrer Institute. Historic instruments: Historic instruments are the first backscattering spectrometer that was a temporary setup at FRM I and the backscattering spectrometer BSS (also called PI) at the DIDO reactor of the Forschungszentrum Jülich (decommissioned).
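The pairing of μeV energy resolution with nanosecond dynamics quoted above follows from the energy-time relation; the back-of-the-envelope estimate below is included only as an illustration.

```latex
% Back-of-the-envelope link between energy resolution and accessible time scale.
\[
  \tau \;\sim\; \frac{\hbar}{\Delta E}
  \;\approx\; \frac{6.58\times 10^{-16}\ \mathrm{eV\,s}}{1\ \mu\mathrm{eV}}
  \;\approx\; 0.66\ \mathrm{ns},
\]
so an instrument with an energy resolution of order $1\ \mu\mathrm{eV}$ probes motions
on the nanosecond time scale, as stated above.
```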
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perlfee** Perlfee: The Perlfee rabbit is a rare breed originating in Germany. They are found only in a blueish-grey colour, with dark, light and medium shades accepted (medium is preferred); the belly and the area around the eyes should be lighter in colour. It is a recognized breed by the British Rabbit Council but not the American Rabbit Breeders Association. Behavior: Perlfee rabbits are rather docile and friendly. They are lively rabbits who make excellent pets for the beginner.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GDP-4-dehydro-D-rhamnose reductase** GDP-4-dehydro-D-rhamnose reductase: In enzymology, a GDP-4-dehydro-D-rhamnose reductase (EC 1.1.1.187) is an enzyme that catalyzes the chemical reaction GDP-6-deoxy-D-mannose + NAD(P)+ ⇌ GDP-4-dehydro-6-deoxy-D-mannose + NAD(P)H + H+The 3 substrates of this enzyme are GDP-6-deoxy-D-mannose, NAD+, and NADP+, whereas its 4 products are GDP-4-dehydro-6-deoxy-D-mannose, NADH, NADPH, and H+. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is GDP-6-deoxy-D-mannose:NAD(P)+ 4-oxidoreductase. Other names in common use include GDP-4-keto-6-deoxy-D-mannose reductase, GDP-4-keto-D-rhamnose reductase, and guanosine diphosphate-4-keto-D-rhamnose reductase. This enzyme participates in fructose and mannose metabolism.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Color volume** Color volume: A color solid is the three-dimensional representation of a color model, an analog of the two-dimensional color wheel. The added spatial dimension allows a color solid to depict an added dimension of color variation. Whereas a two-dimensional color wheel typically depicts the variables of hue (red, green, blue, etc.) and lightness (gradations of light and dark, tints or shades), a color solid adds the variable of colorfulness (either chroma or saturation), allowing the solid to depict all conceivable colors in an organized three-dimensional structure. Organization: Different color theorists have each designed unique color solids. Many are in the shape of a sphere, whereas others are warped three-dimensional ellipsoid figures—these variations being designed to express some aspect of the relationship of the colors more clearly. The color spheres conceived by Philipp Otto Runge and Johannes Itten are typical examples and prototypes for many other color solid schematics. Pure, saturated hues of equal brightness are located around the equator at the periphery of the color sphere. As in the color wheel, contrasting (or complementary) hues are located opposite each other. Moving toward the center of the color sphere on the equatorial plane, colors become less and less saturated, until all colors meet at the central axis as a neutral gray. Moving vertically in the color sphere, colors become lighter (toward the top) and darker (toward the bottom). At the upper pole, all hues meet in white; at the bottom pole, all hues meet in black. The vertical axis of the color sphere, then, is gray all along its length, varying from black at the bottom to white at the top. All pure (saturated) hues are located on the surface of the sphere, varying from light to dark down the color sphere. All impure (unsaturated) hues, created by mixing contrasting colors, comprise the sphere's interior, likewise varying in brightness from top to bottom. Usage: Artists and art critics find the color solid to be a useful means of organizing the three variables of color—hue, lightness, and saturation (or chroma), as modelled in the HCL and HSL color models—in a single schematic, using it as an aid in the composition and analysis of visual art. Color volume: Color volume is the set of all available colors at all available hues, saturations and brightnesses. It is the result of a 2D color space or 2D color gamut (representing chromaticity) combined with the dynamic range. The term has been used to describe HDR's higher color volume than SDR (i.e. a peak brightness of at least 1,000 cd/m2, well above SDR's 100 cd/m2 limit, and a wider color gamut than Rec. 709 / sRGB).
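The organization described above, with hue around the equator, lightness along the vertical axis and saturation as the distance from that axis, maps directly onto the HSL coordinates mentioned in the Usage section. The short Python sketch below samples a few points of such a solid using the standard-library colorsys module; it is a minimal illustration of the coordinate system, not a rendering of any particular color model.

```python
# Minimal illustration of a hue/lightness/saturation color solid using the
# standard-library colorsys module (colorsys.hls_to_rgb takes hue, lightness
# and saturation, each in the range 0-1, and returns an RGB triple in 0-1).
import colorsys

def sample_color_solid(hue_steps: int = 6, lightness_steps: int = 3):
    """Sample the surface of an HSL color solid (full saturation)."""
    samples = []
    for i in range(hue_steps):                      # hue: angle around the equator
        hue = i / hue_steps
        for j in range(1, lightness_steps + 1):     # lightness: position along the axis
            lightness = j / (lightness_steps + 1)   # avoid the black/white poles
            r, g, b = colorsys.hls_to_rgb(hue, lightness, 1.0)  # saturation = 1 (surface)
            samples.append((hue, lightness, (round(r, 2), round(g, 2), round(b, 2))))
    return samples

for hue, lightness, rgb in sample_color_solid():
    print(f"hue={hue:.2f} lightness={lightness:.2f} rgb={rgb}")
```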
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polyonychia** Polyonychia: Polyonychia, also known as supernumerary nails, is a condition in which two or more nails grow on the same finger or toe. Signs and symptoms: The signs and symptoms of polyonychia are very easy to detect: two or more nails growing on the same finger or toe. The nails can either be separate, small nails (micronychia) or one wide, almost complete nail; the affected digit may also be wider than normal. Causes: Polyonychia is generally caused by a congenital duplication of the distal phalange of the affected digit(s); this can be caused by congenital factors (sporadic, without a genetic link) or by genetic factors (sporadic or familial, with a genetic link). It can also be caused by polysyndactyly, which is characterized as one normal digit being connected/webbed (syndactyly) to an extra digit (polydactyly). Polyonychia can also be acquired, such as after an accident that affected the nail bed, causing it to split. This type of polyonychia is just referred to as a "post-traumatic split nail". Polyonychia's syndromic causes include: Isolated congenital onychodysplasia. Polyonychia's non-syndromic causes include: Polyphalangism (more specifically of the distal phalange) Polysyndactyly
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fish counter** Fish counter: Automatic fish counters are automatic devices for measuring the number of fish passing along a particular river in a particular period of time. Usually one particular species is of interest. One important species studied by fish counters are Atlantic salmon. This species is of interest owing to its ecologically vulnerable status and anadromous lifestyles. Methods of operation: Fish counters can be divided into three principal types: resistive counters, optical counters, and hydroacoustic counters. Methods of operation: Resistive counters A resistive counter is associated with an in-river structure, an example constituting a Crump weir. The resistivity of a fish is lower than that of water. So, as fish cross this barrier, they pass embedded electrodes, and the difference in resistivity disturbs the field established in the vicinity of the electrodes, altering inter-electrode resistance. With three electrodes these disturbances can then be measured by a Wheatstone bridge, or other means, to detect the size and direction of travel of the fish. Methods of operation: Fish counters of this type are used widely in Scotland to census populations of Atlantic salmon, where comparison with closed circuit television shows around a 97% detection rate. Optical counters An optical counter is also associated with an in-river structure. However, rather than pass electrodes, in an optical counter the fish interrupt some of a number of vertically arranged beams of light. The pattern of beam-breaks can be used to determine the size, profile, and direction of motion of the fish. Methods of operation: Infrared light is used for minimizing the disturbance of the fish as they will not see the light when passing through the counter. When a fish swims through the net of light beams, the resulting silhouette image is used for counting as well as estimating the size of each fish. Each individual image is memorized in the control unit so that the counting can be verified afterwards. Methods of operation: Some systems such as the Riverwatcher use the infrared scanner to trigger a digital camera to capture between 1 and 5 photos or a short video clip of each fish. The computer then automatically links the images to other information contained in the database for that individual fish such as size, passing hour, speed, silhouette image, temperature etc. The camera is installed in a special tunnel that contains both the camera and lights providing constant light, and same distance from the camera for the fish. That way, it is possible to get good images of the fish regardless of time of day. The performance of optical counters has been determined by studies, under various conditions, to be greater than 90%. Optical counters can also distinguish the size of fish more accurately than other counter types and so are particularly useful where a mixture of species inhabit a river (for example rivers where salmon mix with sea trout). The key disadvantage of optical counters is the small penetration of the beams through the water, restricting their use to narrow river features or in-river structures, an example being fish ladders. Hydroacoustic counters Hydroacoustic counters operate using the principles of sonar. A fish is insonified by a sound source and reflections from the fish are detected by an underwater microphone. The reflection occurs because of the sudden change in impedance to sound waves within the fish, particularly at the swimbladder (90% of the reflection). 
Hydroacoustic counters do not require in-river structures, but require skilled installation and operators. Without skilled installation at ideal sites hydroacoustic counters can be inaccurate. Studies typically indicate detection rates of 50% to 80%, though one study found detection rates as low as 3%. Careful planning and pre-siting study must be used to determine effectiveness. Methods of operation: The lack of a requirement for any in-river structure makes the counters an attractive proposition. Generally used for short-term or seasonal studies, some situations require a long-term count which is accurate in absolute terms, not only in relative change (for example, no hydroacoustic sensors are routinely used in the detection of Scottish Atlantic salmon). In these instances resistivity or optical sensors tend to be preferred. Such methods usually require significant habitat modification, such as construction of a weir to funnel the fish through the counter. Methods of operation: Recent advances in automated hydroacoustic monitoring systems has allowed continuous monitoring for periods exceeding 18 months. These systems include intelligent monitoring and real-time data processing, ensuring proper operation and publication of status and results (e.g. fish counts) on a routine basis. Siting counters: In river structures Resistivity and (particularly) optical fish counters require in-river structures to direct the fish through the detection aperture of the counter. Fish ladders and Borland fish passes are effective structures for this purpose and occasionally a natural restriction within the river may be used for a similar purpose. However, for most counters a custom in-river structure will be required. One of the most effective such structures is the Crump weir, a triangular profile weir designed to ensure rapid planar flow over the detector. Siting counters: Siting within the river system A species of anadromous fish, such as the Atlantic salmon, may return to a particular breeding ground throughout its life. This means that within the larger rivers a number of quite distinct populations may cross a counter together, in aggregate. A population which uses a particular tributary may collapse whilst the overall numbers are not clearly affected. Issues with the management of that particular tributary and population therefore go unnoticed. Counters should be placed to count individual populations, rather than the species in aggregate, in order that population collapses and recoveries can be detected. Alternative methods: The results of automatic fish counters can be supplemented, confirmed, or replaced by a number of alternative techniques, varying in accuracy, cost, complexity, and skew effects. Electrofishing Traps Net and rod counts Redd counts (disturbances in gravel caused by mating activities of some fish) Closed circuit television
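Returning to the counting principles described earlier, a counter with multiple sensing points can infer direction of travel from the order in which those points are disturbed, whether the points are the electrodes of a resistive counter or the vertical light beams of an optical counter. The Python sketch below is a simplified illustration of that idea and of using disturbance amplitude as a crude size proxy; it is not the signal-processing used in any commercial counter, and the function names are invented.

```python
# Simplified illustration of direction and size estimation from two sensing
# points (e.g. upstream/downstream electrodes or beam pairs). Not the
# algorithm of any particular commercial fish counter.

def travel_direction(upstream_time: float, downstream_time: float) -> str:
    """Infer direction from the order in which the two sensors are disturbed."""
    if upstream_time < downstream_time:
        return "downstream"     # the upstream sensor was disturbed first
    if downstream_time < upstream_time:
        return "upstream"       # the downstream sensor was disturbed first
    return "indeterminate"

def apparent_size(signal_amplitude: float, baseline: float) -> float:
    """Crude size proxy: how strongly the fish perturbs the baseline signal."""
    return abs(signal_amplitude - baseline)

# Example: a fish trips the upstream sensor at t=1.20 s and the downstream one at t=1.45 s.
print(travel_direction(1.20, 1.45))                          # -> downstream
print(apparent_size(signal_amplitude=3.7, baseline=5.0))     # -> 1.3
```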
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Puppy Linux** Puppy Linux: Puppy Linux is an operating system and family of light-weight Linux distributions that focus on ease of use and minimal memory footprint. The entire system can be run from random-access memory (RAM) with current versions generally taking up about 600 MB (64-bit), 300 MB (32-bit), allowing the boot medium to be removed after the operating system has started. Applications such as AbiWord, Gnumeric and MPlayer are included, along with a choice of lightweight web browsers and a utility for downloading other packages. The distribution was originally developed by Barry Kauler and other members of the community, until Kauler retired in 2013. The tool Woof can build a Puppy Linux distribution from the binary packages of other Linux distributions. History: Barry Kauler started Puppy Linux in response to a trend of other distributions becoming stricter on system requirements over time. His own distribution, with an emphasis on speed and efficiency and being lightweight, started from "Boot disk HOWTO" and gradually included components file-by-file until Puppy Linux was completed. Puppy Linux was initially based on Vector Linux but then became a fully independent distribution. History: Release versions Puppy 0.1 is the initial release of Puppy Linux. It has no UnionFS, extremely minimal persistence support, and has no package manager or ability to install applications.Puppy 1.0 series runs comfortably on very dated hardware, such as a Pentium computer with at least 32 MB RAM. For newer systems, the USB key drive version might be better (although if USB device booting is not directly supported in the BIOS, the Puppy floppy boot disk can be used to kick-start it). It is possible to run Puppy Linux with Windows 9x/Me. It is also possible if the BIOS does not support booting from USB drive, to boot from the CD and keep user state on a USB key drive; this is saved on shutdown and read from the USB device on bootup.Puppy 2.0 uses the Mozilla-based SeaMonkey as its Internet suite (primarily a web browser and e-mail client).Puppy 3.0 features Slackware 12 compatibility. This is accomplished by the inclusion of almost all the dependencies needed for the installation of Slackware packages. However, Puppy Linux is not a Slackware-based distribution.Puppy 4.0 is built from scratch using the T2 SDE and no longer features native Slackware 12 compatibility in order to reduce the size and include newer package versions than those found in 3. To compensate for this, an optional "compatibility collection" of packages was created that restores some of the lost compatibility.Puppy 4.2.0–4.3.0 feature changes to the user interface and backend, upgraded packages, language and character support, new in-house software and optimizations, while still keeping the ISO image size under 100 MB.Puppy 5.0.0–5.7.0 are based on a project called Woof, which is designed to assemble a Puppy Linux distribution from the packages of other Linux distributions. Woof includes some binaries and software derived from Ubuntu, Debian, Slackware, T2 SDE, or Arch repositories. Puppy 5 came with a stripped down version of the Midori browser to be used for reading help files and a choice of web browsers to be installed, including Chromium, Firefox, SeaMonkey Internet Suite, Iron and Opera.Puppy 6.0.5 is built from Ubuntu 14.04 "Trusty Tahr" packages, has binary compatibility with Ubuntu 14.04 and access to the Ubuntu package repositories. 
Tahrpup is built from the woof-CE build system, forked from Barry Kauler's Woof late last year after he announced his retirement from Puppy development. It is built from the latest testing branch, incorporates all the latest woof-CE features and is released in PAE and noPAE ISOs, with the option to switch kernels.Puppy 6.3.2 is built with Slackware packages instead of Ubuntu 14.04 "Trusty Tahr" packages but is very similar to its predecessor. History: Puppy 7.5 is built from Ubuntu 16.04 "Xenial Xerus" packages, which has binary compatibility with Ubuntu 16.04 and access to the Ubuntu package repositories. XenialPup is built from the woof-CE build system, forked from Barry Kauler's Woof. It is built from the latest testing branch, incorporates all the latest woof-CE features and is released in PAE and noPAE ISOs, with the option to switch kernels. It has a new UI, a new kernel update for greater hardware compatibility, redesigned Puppy Package Manager, some bugfixes and base packages inclusion into the woof structure.Puppy 8.0 is built from Ubuntu "Bionic Beaver" 18.04.2 packages, has binary compatibility with Ubuntu 18.04.2 and access to the Ubuntu package repositories. BionicPup is built from the woof-CE build system, forked from Barry Kauler's Woof. It is built from the latest testing branch and incorporates all the latest woof-CE features.Puppy 8.2.1 is built from Raspberry Pi OS packages, has full support for the Raspberry Pi 0 to the Raspberry Pi 4, and is relatively similar to its predecessor. Raspberry Pi OS is based on Debian, meaning that Puppy Linux still has Debian/Ubuntu support. This version of Puppy Linux is not compatible with personal computers, like desktops or laptops.Puppy 9.5 is built from Ubuntu "Focal Fossa" 20.04 (64-bit) packages, has binary compatibility with Ubuntu 20.04 and access to the Ubuntu repositories. FossaPup64 comes with JWM as the default window manager. Also, at this release, Puppy Linux has dropped support for 32-bit (x86) computers, due to Ubuntu dropping 32-bit support at this release as well. Features: Puppy Linux is a complete operating system bundled with a collection of applications suited to general use tasks. It can be used as a rescue disk, a demonstration system that leaves the previous installation unaltered, as an accommodation for a system with a blank or missing hard drive, or for using modern software on legacy computers.Puppy's compact size allows it to boot from any media that the computer can support. It can function as a live USB for flash devices or other USB mediums, a CD, an internal hard disk drive, an SD card, a Zip drive or LS-120/240 SuperDisk, through PXE, and through a floppy boot disk that chainloads the data from other storage media. It has also been ported to ARM and can run on a single-board computer such as the Raspberry Pi.Puppy Linux features built-in tools which can be used to create bootable USB drives, create new Puppy CDs, or remaster a new live CD with different packages. It also uses a sophisticated write-caching system with the purpose of extending the life of live USB flash drives.Puppy Linux includes the ability to use a normal persistent updating environment on a write-once multisession CD/DVD that does not require a rewritable disc; this is a unique feature that sets it apart from other Linux distributions. While other distributions offer live CD versions of their operating systems, none offer a similar feature. 
Features: Puppy's bootloader does not mount hard drives or connect to the network automatically. This ensures that a bug or even unknowingly incompatible software won't corrupt the contents of such devices.Puppy Linux offers a session save on shutdown. Since Puppy Linux fundamentally runs in RAM, any files and configurations made or changed in a session would disappear otherwise. This feature enables the user to either save the contents to a writable storage medium, or write the file system to the same CD containing Puppy, if "multisession" was used to create the booted CD and if the disc drive supports burning. This applies to CD-Rs, CD-RWs, and DVDs. Features: It is also possible to save all files to an external hard drive, USB stick, or even a floppy disk instead of the root file system. Puppy can also be installed to a hard disk. User interface: The default window manager in most Puppy releases is JWM.Packages of the IceWM desktop, Fluxbox and Enlightenment are also available via Puppy's PetGet package (application) management system (see below). Some derivative distributions, called puplets, come with default window managers other than JWM.When the operating system boots, everything in the Puppy package uncompresses into a RAM area, the "ramdisk". The PC needs to have at least 128 MB of RAM (with no more than 8 MB shared video) for all of Puppy to load into the ramdisk. However, it is possible for it to run on a PC with only about 48 MB of RAM because part of the system can be kept on the hard drive, or less effectively, left on the CD. User interface: Puppy is fairly full-featured for a system that runs entirely in a ramdisk, when booted as Live system or from a "frugal" installation. However, Puppy also supports the "full" installation mode, which enables Puppy to run from a hard drive partition, without a ramdisk. Applications were chosen that met various constraints, size in particular. Because one of the aims of the distribution is to be extremely easy to set up, there are many wizards that guide the user through a wide variety of common tasks. Package and distribution management: Puppy Linux's package manager, Puppy Package Manager, installs packages in PET (Puppy Enhanced Tarball) format by default but it also accepts packages from other distros (such as .deb, .rpm, .txz, and .tgz packages) or by using third-party tools to convert packages from other distros to PET packages. Puppy Package Manager can also trim the software bloat of a package to reduce the disk space used. Building the distribution: On earlier releases of Puppy Linux, Puppy Unleashed was used to create Puppy ISO images. It consists of more than 500 packages that are put together according to the user's needs. However, on later versions starting with Puppy Linux version 5.0, it was replaced by Woof. It is an advanced tool for creating Puppy installations. It requires an Internet connection and some knowledge of Linux to use. It is able to download the binary source packages from another Linux distribution and process them into Puppy Linux packages by just defining the name of that Linux distro. It is equipped with a simpler version control named Bones on earlier releases but on later versions of woof, Fossil version control is used.Puppy also comes with a remastering tool that takes a "snapshot" of the current system and lets the user create a live CD from it, and an additional remastering tool that is able to remove installed components.Puppy Linux uses the T2 SDE build scripts to build the base binary packages. 
Official variants: Because of the relative ease with which the Woof tool and the remaster tool can be used to build variants of Puppy Linux, there are many variants available. Variants of Puppy Linux are known as puplets. After Barry Kauler reduced his involvement with the Puppy Project, he designed two new distributions within the same Puppy Linux family, Quirky and Wary.
Quirky – An embedded, less-stable distro with all files contained in an initramfs built into the kernel. It has simple module loading management but fewer drivers are included. It is used for experimental purposes.
Racy – A variant of Puppy optimized for newer PCs.
Wary – A Puppy variant targeted at users with old hardware. It uses an older Linux kernel, which has long-term support, together with the newest applications.
Easy – A Puppy variant in which the init script is completely rewritten and which uses originally developed application containers alongside the conventional package management.
Reception: DistroWatch reviewer Robert Storey concluded about Puppy 5.2.5 in April 2011: "A lot of people like Puppy — it's in the top 10 of the DistroWatch page-hit ranking. I enjoy Puppy too, and it's what I run exclusively on my netbook. Maybe the only thing wrong with Puppy is that users' expectations tend to exceed the developer's intentions." In a detailed review of Puppy Linux in May 2011, Howard Fosdick of OS News addressed the fact that in Puppy Linux the user runs as the root UID: "In theory this could be a problem — but in practice it presents no downside. I've never heard of a single Puppy user suffering a problem due to this." Fosdick concluded "I like Puppy because it's the lightest Linux distro I've found that is still suitable for end users. Install it on an old P-III or P-IV computer and your family or friends will use it just as effectively for common tasks as any expensive new machine." In December 2011 Jesse Smith, writing in DistroWatch, reviewed Puppy 5.3.0 Slacko Puppy. He praised its simplicity, flexibility and clear explanations, while noting the limitations of running as root. He concluded "I would also like to see an option added during the boot process which would give the user the choice of running in unprivileged mode as opposed to running as root. Always being the administrator has its advantages for convenience, but it means that the user is always one careless click away from deleting their files and one exploit away from a compromised operating system. As a live CD it's hard to beat Puppy Linux for both performance and functional software. It has minimal hardware requirements and is very flexible. It's a great distro as long as you don't push it too far out of its niche." In December 2011 Howard Fosdick reviewed the versions of Puppy Linux then available. He concluded, "Puppy's diversity and flexibility make it a great community-driven system for computer enthusiasts, hobbyists, and tinkerers. They also make for a somewhat disorderly world. You might have to read a bit to figure out which Puppy release or Puplet is for you. Puppy's online documentation is extensive but can be confusing. It's not always clear which docs pertain to which releases. Most users rely on the active, friendly forum for support." He also noted "Those of us who enjoy computers sometimes forget that many view them with disdain. What's wrong with it now? Why do I have to buy a new one every four years? Why on earth do they change the interface in every release? Can't it just work? Puppy is a great solution for these folks.
It's up-to-date, free, and easy to use. And now, it supports free applications from the Ubuntu, Slackware, or Puppy repositories. Now that's user-friendly."An April 2020 review of Bionic 8.0 by Igor Ljubuncic in Dedoimedo concluded, "Puppy Linux delivered on its happy message, and even exceeded my expectations. Now, I've always been a fan, and rarely had anything bad to say, so a positive result was kind of warranted. What really amazed me was not that this is a lean and fast little distro - it's the fact it manages to keep its relevance despite the obvious lethargy in the Linux desktop space. You may say, well, why bother - but if you have older hardware or travel a lot, Puppy gives you your own, complete work session that will boot and run pretty much anywhere, with tons of goodies and excellent configuration tools."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BERT (language model)** BERT (language model): Bidirectional Encoder Representations from Transformers (BERT) is a family of language models introduced in 2018 by researchers at Google. A 2020 literature survey concluded that "in a little over a year, BERT has become a ubiquitous baseline in Natural Language Processing (NLP) experiments counting over 150 research publications analyzing and improving the model."BERT was originally implemented in the English language at two model sizes: (1) BERTBASE: 12 encoders with 12 bidirectional self-attention heads totaling 110 million parameters, and (2) BERTLARGE: 24 encoders with 16 bidirectional self-attention heads totaling 340 million parameters. Both models were pre-trained on the Toronto BookCorpus (800M words) and English Wikipedia (2,500M words). Architecture: BERT is based on the transformer architecture. Specifically, BERT is composed of Transformer encoder layers. BERT uses WordPiece to convert each English word into an integer code. Its vocabulary has size 30,000. Any token not appearing in its vocabulary is replaced by [UNK] for "unknown". Architecture: BERT was pre-trained simultaneously on two tasks:language modeling: 15% of tokens were selected for prediction, and the training objective was to predict the selected token given its context. The selected token is replaced with a [MASK] token with probability 80%, replaced with a random word token with probability 10%, not replaced with probability 10%.For example, the sentence "my dog is cute" may have the 4-th token selected for prediction. The model would have input text "my dog is [MASK]" with probability 80%, "my dog is happy" with probability 10%, "my dog is cute" with probability 10%.After processing the input text, the model's 4-th output vector is passed to a separate neural network, which outputs a probability distribution over its 30,000-large vocabulary. Architecture: next sentence prediction: Given two spans of text, the model predicts if these two spans appeared sequentially in the training corpus, outputting either [IsNext] or [NotNext]. The first span starts with a special token [CLS] (for "classify"). The two spans are separated by a special token [SEP] (for "separate"). After processing the two spans, the 1-st output vector (the vector coding for [CLS]) is passed to a separate neural network for the binary classification into [IsNext] and [NotNext]. Architecture: For example, given "[CLS] my dog is cute [SEP] he likes playing" the model should output token [IsNext]. Architecture: Given "[CLS] my dog is cute [SEP] how do magnets work" the model should output token [NotNext].As a result of this training process, BERT learns latent representations of words and sentences in context. After pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks such as NLP tasks (language inference, text classification) and sequence-to-sequence based language generation tasks (question-answering, conversational response generation). The pre-training stage is significantly more computationally expensive than fine-tuning. 
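The masked-token objective described above can be tried directly with the publicly released BERT weights. The following sketch uses the third-party Hugging Face `transformers` library rather than the original training code; the checkpoint name "bert-base-uncased" corresponds to the released BERTBASE model, and the example sentence mirrors the "my dog is [MASK]" example above.

```python
# Minimal sketch of BERT's masked-language-modelling objective using the
# Hugging Face `transformers` library (not the original training code).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# WordPiece tokenization adds the special [CLS] and [SEP] tokens automatically.
inputs = tokenizer("my dog is [MASK]", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Locate the [MASK] position and list the model's most likely replacements.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```

Fine-tuning for a downstream task such as sentiment classification follows the same pattern but attaches a small task-specific head on top of the pre-trained encoder, as the article notes.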
Performance: When BERT was published, it achieved state-of-the-art performance on a number of natural language understanding tasks: GLUE (General Language Understanding Evaluation) task set (consisting of 9 tasks) SQuAD (Stanford Question Answering Dataset) v1.1 and v2.0 SWAG (Situations With Adversarial Generations) Analysis: The reasons for BERT's state-of-the-art performance on these natural language understanding tasks are not yet well understood. Current research has focused on investigating the relationship behind BERT's output as a result of carefully chosen input sequences, analysis of internal vector representations through probing classifiers, and the relationships represented by attention weights. Analysis: The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained. This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. For example, the word fine can have two different meanings depending on the context (I feel fine today, She has fine blond hair). BERT considers the words surrounding the target word fine from the left and right side. Analysis: However it comes at a cost: due to encoder-only architecture lacking a decoder, BERT can't be prompted and can't generate text, while bidirectional models in general do not work effectively without the right side, thus being difficult to prompt, with even short text generation requiring sophisticated computationally expensive techniques.In contrast to deep learning neural networks which require very large amounts of data, BERT has already been pre-trained which means that it has learnt the representations of the words and sentences as well as the underlying semantic relations that they are connected with. BERT can then be fine-tuned on smaller datasets for specific tasks such as sentiment classification. The pre-trained models are chosen according to the content of the given dataset one uses but also the goal of the task. For example, if the task is a sentiment classification task on financial data, a pre-trained model for the analysis of sentiment of financial text should be chosen. The weights of the original pre-trained models were released on GitHub. History: BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The design has its origins from pre-training contextual representations, including semi-supervised sequence learning, generative pre-training, ELMo, and ULMFit. Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context for each occurrence of a given word. For instance, whereas the vector for "running" will have the same word2vec vector representation for both of its occurrences in the sentences "He is running a company" and "He is running a marathon", BERT will provide a contextualized embedding that will be different according to the sentence.On October 25, 2019, Google announced that they had started applying BERT models for English language search queries within the US. On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages. 
In October 2020, almost every single English-based query was processed by a BERT model. Recognition: The research paper describing BERT won the Best Long Paper Award at the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radiometer** Radiometer: A radiometer or roentgenometer is a device for measuring the radiant flux (power) of electromagnetic radiation. Generally, a radiometer is an infrared radiation detector or an ultraviolet detector. Microwave radiometers operate in the microwave wavelengths. Radiometer: While the term radiometer can refer to any device that measures electromagnetic radiation (e.g. light), the term is often used to refer specifically to a Crookes radiometer ("light-mill"), a device invented in 1873 in which a rotor (having vanes which are dark on one side, and light on the other) in a partial vacuum spins when exposed to light. A common belief (one originally held even by Crookes) is that the momentum of the absorbed light on the black faces makes the radiometer operate. If this were true, however, the radiometer would spin away from the non-black faces, since the photons bouncing off those faces impart more momentum than the photons absorbed on the black faces. Photons do exert radiation pressure on the faces, but those forces are dwarfed by other effects. Radiometer: The currently accepted explanation depends on having just the right degree of vacuum, and relates to the transfer of heat rather than the direct effect of photons.A Nichols radiometer demonstrates photon pressure. It is much more sensitive than the Crookes radiometer and it operates in a complete vacuum, whereas operation of the Crookes radiometer requires an imperfect vacuum. The MEMS radiometer can operate on the principles of Nichols or Crookes and can operate over a wide spectrum of wavelength and particle energy levels.
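To see why the photon-momentum explanation fails, it helps to estimate how small the radiation-pressure forces actually are. The sketch below is a rough back-of-the-envelope calculation, assuming a 1 cm² vane under direct sunlight of about 1000 W/m²; the vane size and irradiance are illustrative assumptions, not figures from the article.

```python
# Rough order-of-magnitude estimate of the radiation-pressure force on one
# radiometer vane, to illustrate why photon momentum alone cannot spin a
# Crookes radiometer.  The vane area and irradiance are assumed values.
c = 2.998e8          # speed of light, m/s
irradiance = 1000.0  # W/m^2, roughly direct sunlight
area = 1.0e-4        # m^2, a 1 cm x 1 cm vane

force_absorbing = irradiance * area / c        # black (absorbing) face: F = I*A/c
force_reflecting = 2 * irradiance * area / c   # ideal mirror face: F = 2*I*A/c

print(f"absorbing face : {force_absorbing:.2e} N")   # ~3e-10 N
print(f"reflecting face: {force_reflecting:.2e} N")  # ~7e-10 N
```

Forces of a few tenths of a nanonewton are far smaller than the thermal, gas-mediated effects that actually drive the vanes in a partial vacuum.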
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Herglotz–Zagier function** Herglotz–Zagier function: In mathematics, the Herglotz–Zagier function, named after Gustav Herglotz and Don Zagier, is the function

F(x) = \sum_{n=1}^{\infty} \frac{\psi(nx) - \log(nx)}{n},

where \psi denotes the digamma function. It was introduced by Zagier (1975), who used it to obtain a Kronecker limit formula for real quadratic fields.
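Assuming the reconstruction above, with ψ the digamma function, the terms behave like −1/(2xn²) for large n, so naive partial sums converge. The sketch below is only an illustrative numerical approximation using SciPy's digamma; it is not tied to Zagier's paper.

```python
# Naive numerical approximation of the Herglotz–Zagier function
# F(x) = sum_{n>=1} (psi(n*x) - log(n*x)) / n, assuming the reconstruction above.
# Since psi(y) - log(y) ~ -1/(2y), the terms decay roughly like 1/n^2 and the
# partial sums converge, though slowly; no series acceleration is attempted.
import numpy as np
from scipy.special import digamma

def herglotz_zagier(x: float, terms: int = 100_000) -> float:
    n = np.arange(1, terms + 1, dtype=float)
    return float(np.sum((digamma(n * x) - np.log(n * x)) / n))

if __name__ == "__main__":
    for x in (0.5, 1.0, 2.0):
        print(x, herglotz_zagier(x))
```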
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Folate-biopterin transporter family** Folate-biopterin transporter family: The folate-biopterin transporter (FBT) family (TC# 2.A.71) is a distant family within the major facilitator superfamily, most closely related to drug resistance permeases. Proteins of the FBT family are reported to contain about 480 to 650 amino acyl residues. All probably have 12 transmembrane α-helical segments (TMSs). They may function by H+ symport. Transport reaction: The probable transport reaction catalyzed by characterized FBT family members is: [folate, biopterin, or AdoMet] (out) + H+ (out) → [folate, biopterin, or AdoMet] (in) + H+ (in) Functionally characterized members: The FBT family includes functionally characterized members from protozoa, cyanobacteria and plants. Functionally characterized members of the family include FT1, the major folate transporter; BT1, the biopterin/folate transporter; and AdoMetT1, the major S-adenosylmethionine uptake porter. A related protein in Trypanosoma brucei, ESAG10, shows weak folate/biopterin transport activity. There are at least 6 homologues of the FT1 transporter in Leishmania encoded by tandem genes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tipster** Tipster: A tipster is someone who regularly provides information (tips) on the likely outcomes of sporting events on internet sites or at special betting places. History: In the past, tips were bartered for and traded, but nowadays, thanks largely to the Internet and premium rate telephone lines, they are usually exchanged for money, and many tipsters operate websites. Some of them are free and some require a subscription. In the past tipping was mostly associated with horse racing, but it can apply to any sport that has odds offered on it. The relaxed cultural attitude towards gambling in the UK is increasingly resulting in a gambling element being promoted alongside sport coverage in the media. System: A tip in gambling is a bet suggested by a third party who is perceived to be more knowledgeable about that subject than the bookmaker who sets the initial odds. (A bookmaker will vary his odds according to the amount of money wagered, but has to start with a blank book and himself set an initial price to encourage betting.) Thus a tip is not regarded even by the tipster as a certainty, but rather as a case where the bookmaker has set a price too low (or too high) relative to the true risk: it is a form of financial derivative, since the tipster himself risks none of his own money but sells his expert knowledge to others to try to "beat the bookie". System: The tipster must overcome the profit margin that bookmakers' trading teams integrate into sports betting odds, and then also obtain an additional edge, to deliver profit over the long term. Role: Tipsters are sometimes insiders of a particular sport able to provide bettors with information not publicly available. There are other tipsters who provide equally respectable results through analysis of commonly accessible information. Role: Some tipsters use statistically based estimates of the outcome of a game, and compare this estimate with the bookmaker's odds. If there is a gap between the estimated odds and the bookmaker's odds, the tipster is said to identify "value", and a person who bets on such odds when they perceive not a certainty but a "gap in the book" is said to be a "value bettor". When value is found, the tipster recommends that the bettor place a bet. Role: A tip that is considered to be a racing certainty, that is, almost completely certain to be true, is also called a nap, and tipsters in newspapers will tend to indicate the "nap". United Kingdom: Newspapers Most national newspapers in the UK employ a tipster or columnist who provides horse racing tips. Rather than pick a tip for each race that occurs on a given day, the normal protocol is to provide a Nap and nb selection. Nap (derived from the card game Napoleon) indicates this is the tipster's most confident selection of the day. nb = "Next best" and indicates another selection that the tipster rates highly. Both types of selections would be counted in calculating the tipster's running profit/loss figure, which states how far in profit or loss an individual would be if they had backed every tip with a level stake (£1). United Kingdom: Television The popular Channel 4 television program The Morning Line, when it was on air (up to 2016), previewed weekend horse racing on a Saturday morning, culminating in the panel of experts and guests providing their selections for the day. In 2017 ITV took over UK horse racing coverage, and it has a similar show to the one Channel 4 had, called "The Opening Show". It usually airs at 9.30 am on Saturdays on ITV4 and is presented by Oli Bell.
Sky Sports News runs a similar preview segment including expert analysis of the teams and betting odds relating to Premier League football fixtures on a Saturday. United Kingdom: Radio In the United Kingdom, the morning national Radio 4 Today Program usually includes a couple of racing tips in its short sports section (Garry Richardson is the usual presenter, although others fill in when he is away), but these are not taken too seriously (in fact the tips are supplied by a well-known newspaper tipster). The program does, however, track Richardson's performance as a tipster for amusement value: he is usually quite well "down" but just very occasionally is "up" after a correct tip at a long price. United Kingdom: Scams Premium tipping services charge a fee for accessing a tip or tips by telephone, internet or post. The more reputable companies will keep an accurate record of their tipping activities, enabling a prospective client to assess their past form and so anticipate potential future performance. There is a lot of scope for less reputable operations to massage these figures or even to fabricate figures in order to attract new customers. In 2008, the Office of Fair Trading stated that the figure lost to tipster scams each year is between £1 million and £4 million in the UK alone. Derren Brown's Channel 4 program The System exposed one method by which tipping services operate: by giving out different tips to different people (unknown to each other) for the same horse race, at least one recipient must receive a winning tip (essentially, a sweepstake). The bettor who won might then assume that they received real insight into the race outcome from the tipster and may then pay for subsequent tips. Australia: Australia has led the way in the emergence of tipping competitions where the object is to win prizes for registering virtual bets. The focus of the majority of these competitions has been Australian rules football, but the commonly used term for the activity, footy tipping, now also covers soccer, rugby league and rugby union. In the UK there are a growing number of such competitions, but most relate to the horse racing industry. Australia: In theory, tipping for prizes in a free competition provides a viable alternative to gambling for real. However, many will take the opposite view that it makes gambling more accessible to a wider audience by creating what is perceived to be a safe route in. There is also a lot of scope for gamblers looking to identify good tips to use such competitions as an information resource, given that some competitions publish current tips entered and historical records for the tipsters involved. Internet: Internet forums are increasingly being used as a means to share ideas and information within web communities, and many such forums exist in the gambling arena as a means of discussing views on events or simply offering advice and tips. While many in the gambling community view this as a way in which they can earn respect from their peers in an otherwise isolated profession, tipping services also use these areas to attract users to their premium schemes. Stocks and shares: While the term gambling is often considered to be confined to sports betting or at least the services offered by a bookmaker, the classification can also be applied to investing in stocks where the gamble relates to a share or commodity price moving in a certain direction.
Stock tips, as publicised in the financial sections of the media, are largely directed at the casual investor, but their interrelation with and interest to the business sector have proven to be controversial. The increase in spread betting as a financial derivative also blurs the distinction between financial investment and gambling: since in the United Kingdom a win on a bet pays no tax, but another form of investment might require payment of Capital Gains Tax, there may be a financial advantage to "betting". Stocks and shares: Derivatives Many newspapers and other betting journals such as the Racing Post track the leading newspapers' tipsters and see how well their predictions match the actual outcome, by assuming a nominal £1 bet on every tip that the tipster makes and calculating the theoretical return. Thus, tipsters themselves can be "tipped" as being good or bad tipsters. Therefore, it is actually possible in theory to bet on whether a tipster's prediction will be correct (rather than bet on the prediction itself). Other uses: Tipster is also a term used in the United Kingdom for a person who gives information regarding potential news stories, particularly those involving celebrities, to journalists, often in exchange for cash; or more generally an informant.
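As a rough illustration of the "value" idea described under Role above, the following sketch compares a tipster's estimated win probability with the bookmaker's decimal odds; the probability, odds, and stake are all hypothetical.

```python
# Toy illustration of a "value" bet: a tip has value when the tipster's
# estimated probability implies a higher expected return than the bookmaker's
# price.  All numbers here are hypothetical.
def expected_value(estimated_probability: float, decimal_odds: float, stake: float = 1.0) -> float:
    """Expected profit of a bet placed at the given decimal odds."""
    win_profit = stake * (decimal_odds - 1)
    return estimated_probability * win_profit - (1 - estimated_probability) * stake

# The tipster believes the horse wins 50% of the time; the bookmaker offers 2.20.
ev = expected_value(0.50, 2.20)
print(f"Expected profit per 1 unit staked: {ev:+.2f}")  # +0.10 -> a "value" bet
```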
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non-adjacent form** Non-adjacent form: The non-adjacent form (NAF) of a number is a unique signed-digit representation, in which non-zero values cannot be adjacent. For example:
(0 1 1 1)2 = 4 + 2 + 1 = 7
(1 0 −1 1)2 = 8 − 2 + 1 = 7
(1 −1 1 1)2 = 8 − 4 + 2 + 1 = 7
(1 0 0 −1)2 = 8 − 1 = 7
All are valid signed-digit representations of 7, but only the final representation, (1 0 0 −1)2, is in non-adjacent form. Non-adjacent form: The non-adjacent form is also known as "canonical signed digit" representation. Properties: NAF assures a unique representation of an integer, but the main benefit of it is that the Hamming weight of the value will be minimal. For regular binary representations of values, half of all bits will be non-zero, on average, but with NAF this drops to only one-third of all digits. This leads to efficient implementations of add/subtract networks (e.g. multiplication by a constant) in hardwired digital signal processing. Since at most half of the digits are non-zero, the representation was introduced by G. W. Reitwiesner for speeding up early multiplication algorithms, much like Booth encoding. Properties: Because every non-zero digit has to be adjacent to two 0s, the NAF representation can be implemented such that it only takes a maximum of m + 1 bits for a value that would normally be represented in binary with m bits. Properties: The properties of NAF make it useful in various algorithms, especially some in cryptography; e.g., for reducing the number of multiplications needed for performing an exponentiation. In the exponentiation by squaring algorithm, the number of multiplications depends on the number of non-zero bits. If the exponent here is given in NAF form, a digit value 1 implies a multiplication by the base, and a digit value −1 by its reciprocal. Properties: Other ways of encoding integers that avoid consecutive 1s include Booth encoding and Fibonacci coding. Converting to NAF: There are several algorithms for obtaining the NAF representation of a value given in binary. One such is the following method using repeated division; it works by choosing non-zero coefficients such that the resulting quotient is divisible by 2 and hence the next coefficient is zero. Converting to NAF:
Input E = (e_{m−1} e_{m−2} ··· e_1 e_0)_2
Output Z = (z_m z_{m−1} ··· z_1 z_0)_NAF
i ← 0
while E > 0 do
    if E is odd then
        z_i ← 2 − (E mod 4)
        E ← E − z_i
    else
        z_i ← 0
    E ← E/2
    i ← i + 1
return Z
A faster way is given by Prodinger, where x is the input, np the string of positive bits and nm the string of negative bits:
Input x
Output np, nm
xh = x >> 1;
x3 = x + xh;
c = xh ^ x3;
np = x3 & c;
nm = xh & c;
which is used, for example, in A184616.
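The repeated-division method above translates almost directly into code. The following sketch is an illustrative implementation (not taken from any particular library) that reproduces the worked example NAF(7) = (1 0 0 −1).

```python
# Illustrative implementation of the repeated-division NAF conversion
# described above.  Digits are returned most-significant first.
def to_naf(e: int) -> list[int]:
    digits = []  # built least-significant digit first
    while e > 0:
        if e % 2 == 1:
            z = 2 - (e % 4)   # +1 if e ≡ 1 (mod 4), −1 if e ≡ 3 (mod 4)
            e -= z
        else:
            z = 0
        digits.append(z)
        e //= 2
    return digits[::-1] or [0]

def from_signed_digits(digits: list[int]) -> int:
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

naf7 = to_naf(7)
print(naf7)                                   # [1, 0, 0, -1], matching the example
assert from_signed_digits(naf7) == 7          # round-trips back to 7
assert all(a == 0 or b == 0 for a, b in zip(naf7, naf7[1:]))  # non-adjacency holds
```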
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Agile usability engineering** Agile usability engineering: Agile usability engineering is a method created from a combination of agile software development and usability engineering practices. Agile usability engineering attempts to apply the principles of rapid and iterative development to the field of user interface design. Early implementations of usability engineering in user-centered design came into professional practice during the mid–late 1980s. Early implementations of agile software development evolved in the mid-1990s. It has only been within the past few years that the human-computer interaction community have seen widespread acceptance of agile usability engineering. History: When methods such as extreme programming and test driven development were introduced by Kent Beck, usability engineering had to become light-weight in order to work with agile environments. Individuals like Kent Beck have helped to shape the methodology of agile usability engineering by working on projects such as the Chrysler Comprehensive Compensation System. Such time-driven projects have helped individuals experience and understand the best methodologies to practice while working in an agile environment. History: An early example of usability engineering in an agile software development environment can be found in the work of Larry Constantine and Lucy Lockwood who designed a browser-resident classroom information management system. During this process, the design team worked directly with an education team, which served as both subject-matter experts and representative end users to develop initial user role models and an inventory of task cases. This process mimics participatory design. With this material, mock-ups were iteratively designed to achieve the desired goal of “the stringent design objective of enabling immediate, productive use of the system based on a single-page tutorial.”The following table displays the differences and similarities of light-weight processes compared to heavy-weight processes as suggested by Thomas Memmel. Methods: Many projects that are used in the agile software development process can benefit from agile usability engineering. Any project that cannot use models and representatives will have issues in an agile usability engineering environment, as the projects must be as light-weight as possible. Methods: Throughout the usability engineering phase in agile development, users work with the product or service in order to provide feedback, problem reports and new requirements to the developers. The process is done interactively with focus directed first on basic functionality and later on with more advanced features. As the process progresses to advanced stages, more users work with the product or service. Solutions are quickly applied based on severity. The phase ends with a milestone. Methods: Paul McInerney and Frank Maurer administered a case study confirming that UI design practices required adjustments; especially in order to adapt an iterative development. However, it was concluded that the resulting UI designs are certainly not worse than what would have been made with the standard heavyweight approach.The core practices in agile modeling as described by Scott Ambler, help to describe the focus in agile usability engineering. 
The core practices include Validation, Teamwork, Simplicity, Motivation, Productivity, Documentation, and Iterative & Incremental. A modified agile development process, with usability instruments included, was developed and presented in the CHI '08 Extended Abstracts on Human Factors in Computing Systems. The usability instruments include extended unit tests for usability evaluations, extreme personas to extend the typical extreme programming user story, user studies to extend the extreme programming concept of the on-site customer, usability expert evaluations to solve ad hoc problems, and usability tests to solve on-site customer representative problems. Issues: Due to the difficulty of incorporating traditional usability engineering methods into an agile environment, many issues have arisen. Without comprehensive resources, practitioners have tried to follow the patterns of others who have previously been successful. Table 2 presents the Problems, Symptoms, and Possible Solutions developed by Lynn Miller and Desirée Sy and presented in the CHI '09 Extended Abstracts on Human Factors in Computing Systems. Issues: The following table is a summary of the main problems experienced by User Experience practitioners while doing Agile UCD.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Congener (beverages)** Congener (beverages): In the alcoholic beverages industry, congeners are substances, other than the desired type of alcohol, ethanol, produced during fermentation. These substances include small amounts of chemicals such as methanol and other alcohols (known as fusel alcohols), acetone, acetaldehyde, esters, tannins, and aldehydes (e.g. furfural). Congeners are responsible for most of the taste and aroma of distilled alcoholic beverages, and contribute to the taste of non-distilled drinks. It has been suggested that these substances contribute to the symptoms of a hangover. Brandy, rum and red wine have the highest amount of congeners, while vodka and beer have the least. Congener (beverages): Congeners are the basis of alcohol congener analysis, a sub-discipline of forensic toxicology which determines what a person drank. There is some evidence that high-congener drinks induce more severe hangovers, but the effect is not well studied and is still secondary to the total amount of ethanol consumed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Semantic Web Stack** Semantic Web Stack: The Semantic Web Stack, also known as Semantic Web Cake or Semantic Web Layer Cake, illustrates the architecture of the Semantic Web. Semantic Web Stack: The Semantic Web is a collaborative movement led by international standards body the World Wide Web Consortium (W3C). The standard promotes common data formats on the World Wide Web. By encouraging the inclusion of semantic content in web pages, the Semantic Web aims at converting the current web, dominated by unstructured and semi-structured documents into a "web of data". The Semantic Web stack builds on the W3C's Resource Description Framework (RDF). Overview: The Semantic Web Stack is an illustration of the hierarchy of languages, where each layer exploits and uses capabilities of the layers below. It shows how technologies that are standardized for Semantic Web are organized to make the Semantic Web possible. It also shows how Semantic Web is an extension (not replacement) of classical hypertext web. The illustration was created by Tim Berners-Lee. The stack is still evolving as the layers are concretized. (Note: A humorous talk on the evolving Semantic Web stack was given at the 2009 International Semantic Web Conference by James Hendler.) Semantic Web technologies: As shown in the Semantic Web Stack, the following languages or technologies are used to create Semantic Web. The technologies from the bottom of the stack up to OWL are currently standardized and accepted to build Semantic Web applications. It is still not clear how the top of the stack is going to be implemented. All layers of the stack need to be implemented to achieve full visions of the Semantic Web. Semantic Web technologies: Hypertext Web technologies The bottom layers contain technologies that are well known from hypertext web and that without change provide basis for the semantic web. Internationalized Resource Identifier (IRI), generalization of URI, provides means for uniquely identifying semantic web resources. Semantic Web needs unique identification to allow provable manipulation with resources in the top layers. Unicode serves to represent and manipulate text in many languages. Semantic Web should also help to bridge documents in different human languages, so it should be able to represent them. XML is a markup language that enables creation of documents composed of semi-structured data. Semantic web gives meaning (semantics) to semi-structured data. XML Namespaces provides a way to use markups from more sources. Semantic Web is about connecting data together, and so it is needed to refer more sources in one document. Standardized Semantic Web technologies Middle layers contain technologies standardized by W3C to enable building semantic web applications. Resource Description Framework (RDF) is a framework for creating statements in a form of so-called triples. It enables to represent information about resources in the form of graph - the semantic web is sometimes called Giant Global Graph. RDF Schema (RDFS) provides basic vocabulary for RDF. Using RDFS it is for example possible to create hierarchies of classes and properties. Web Ontology Language (OWL) extends RDFS by adding more advanced constructs to describe semantics of RDF statements. It allows stating additional constraints, such as for example cardinality, restrictions of values, or characteristics of properties such as transitivity. It is based on description logic and so brings reasoning power to the semantic web. 
SPARQL is an RDF query language; it can be used to query any RDF-based data (i.e., including statements involving RDFS and OWL). A query language is necessary to retrieve information for semantic web applications. RIF is a rule interchange format. It is important, for example, to allow the description of relations that cannot be directly described using the description logic used in OWL. Unrealized Semantic Web technologies Top layers contain technologies that are not yet standardized or contain just ideas that should be implemented in order to realize the Semantic Web. Cryptography is important to ensure and verify that semantic web statements are coming from a trusted source. This can be achieved by an appropriate digital signature of RDF statements. Trust in derived statements will be supported by (a) verifying that the premises come from a trusted source and (b) relying on formal logic when deriving new information. User interface is the final layer that will enable humans to use semantic web applications.
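As a concrete taste of the RDF and SPARQL layers described above, the sketch below builds a tiny graph and queries it using the third-party Python library rdflib; the example namespace and facts are made up, and rdflib is only one implementation of these W3C standards.

```python
# Small illustration of the RDF and SPARQL layers of the stack using rdflib.
# The http://example.org/ namespace and the facts in it are invented.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# RDF triples: statements of the form (subject, predicate, object).
g.add((EX.Dog, RDF.type, RDFS.Class))
g.add((EX.Rex, RDF.type, EX.Dog))
g.add((EX.Rex, RDFS.label, Literal("Rex")))

# A SPARQL query over the same graph.
query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?name WHERE {
        ?thing a <http://example.org/Dog> ;
               rdfs:label ?name .
    }
"""
for row in g.query(query):
    print(row.name)   # -> "Rex"
```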
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Y-DNA haplogroups in populations of the Caucasus** Y-DNA haplogroups in populations of the Caucasus: Various Y-DNA haplogroups have differing frequencies within each ethnolinguistic group in the Caucasus region. Table: The table below lists the frequencies – identified by major studies – of various haplogroups amongst selected ethnic groups from the Caucasus. The first two columns list the ethnic and linguistic affiliations of the individuals studied, the third column gives the sample size studied, and the other columns give the percentage of the particular haplogroup. Language family abbreviations: IE Indo-European NEC Northeast Caucasian NWC Northwest Caucasian SC South Caucasian
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Estradiol dipropionate/hydroxyprogesterone caproate** Estradiol dipropionate/hydroxyprogesterone caproate: Estradiol dipropionate/hydroxyprogesterone caproate (EDP/OHPC), sold under the brand name EP Hormone Depot, is a combined estrogen–progestogen medication which is used in Japan. It is manufactured by Teikoku Zoki Pharmaceutical Co., Tokyo and contains 1 mg/mL estradiol dipropionate and 50 mg/mL hydroxyprogesterone caproate.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ripretinib** Ripretinib: Ripretinib, sold under the brand name Qinlock, is a medication for the treatment of adults with advanced gastrointestinal stromal tumor (GIST), a type of tumor that originates in the gastrointestinal tract. It is taken by mouth. Ripretinib inhibits the activity of the kinases KIT and PDGFRA, which helps keep cancer cells from growing.The most common side effects include alopecia (hair loss), fatigue, nausea, abdominal pain, constipation, myalgia (muscle pain), diarrhea, decreased appetite, palmar-plantar erythrodysesthesia syndrome (a skin reaction in the palms and soles) and vomiting.Ripretinib was approved for medical use in the United States in May 2020, in Australia in July 2020, and in the European Union in November 2021. Ripretinib is the first new drug specifically approved in the United States as a fourth-line treatment for advanced gastrointestinal stromal tumor (GIST). Medical uses: Ripretinib is indicated for the treatment of adults with advanced gastrointestinal stromal tumor (GIST), a type of tumor that originates in the gastrointestinal tract, who have received prior treatment with three or more kinase inhibitor therapies, including imatinib. GIST is type of stomach, bowel, or esophagus tumor. Adverse effects: The most common side effects include alopecia (hair loss), fatigue, nausea, abdominal pain, constipation, myalgia (muscle pain), diarrhea, decreased appetite, palmar-plantar erythrodysesthesia syndrome (a skin reaction in the palms and soles) and vomiting.Ripretinib can also cause serious side effects including skin cancer, hypertension (high blood pressure) and cardiac dysfunction manifested as ejection fraction decrease (when the muscle of the left ventricle of the heart is not pumping as well as normal).Ripretinib may cause harm to a developing fetus or a newborn baby. History: Ripretinib was approved for medical use in the United States in May 2020.The approval of ripretinib was based on the results of an international, multi-center, randomized, double-blind, placebo-controlled clinical trial (INVICTUS/NCT03353753) that enrolled 129 participants with advanced gastrointestinal stromal tumor (GIST) who had received prior treatment with imatinib, sunitinib, and regorafenib. The trial compared participants who were randomized to receive ripretinib to participants who were randomized to receive placebo, to determine whether progression free survival (PFS) – the time from initial treatment in the clinical trial to growth of the cancer or death – was longer in the ripretinib group compared to the placebo group. During treatment in the trial, participants received ripretinib 150 mg or placebo once a day in 28-day cycles, repeated until tumor growth was found (disease progression), or the participant experienced intolerable side effects. After disease progression, participants who were randomized to placebo were given the option of switching to ripretinib. The trial was conducted at 29 sites in the United States, Australia, Belgium, Canada, France, Germany, Italy, the Netherlands, Poland, Singapore, Spain, and the United Kingdom.The major efficacy outcome measure was progression-free survival (PFS) based on assessment by blinded independent central review (BICR) using modified RECIST 1.1 in which lymph nodes and bone lesions were not target lesions and a progressively growing new tumor nodule within a pre-existing tumor mass must meet specific criteria to be considered unequivocal evidence of progression. 
Additional efficacy outcome measures included overall response rate (ORR) by BICR and overall survival (OS). The trial demonstrated a statistically significant improvement in PFS for participants in the ripretinib arm compared with those in the placebo arm (HR 0.15; 95% CI: 0.09, 0.25; p<0.0001).The U.S. Food and Drug Administration (FDA) granted the application for ripretinib priority review and fast track designations, as well as breakthrough therapy designation and orphan drug designation. The FDA granted approval of Qinlock to Deciphera Pharmaceuticals, Inc. Society and culture: Legal status Ripretinib was approved for medical use in the United States in May 2020, and in Australia in July 2020.On 16 September 2021, the Committee for Medicinal Products for Human Use (CHMP) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Qinlock, intended for the treatment of advanced gastrointestinal stromal tumour (GIST) in people who have received prior treatment with three or more kinase inhibitors. The applicant for this medicinal product is Deciphera Pharmaceuticals (Netherlands) B.V. Ripretinib was approved for medical use in the European Union in November 2021. Names: Ripretinib is the International nonproprietary name (INN) and the United States Adopted Name (USAN).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kepler-90i** Kepler-90i: Kepler-90i (also known by its Kepler Object of Interest designation KOI-351.08) is a super-Earth exoplanet with a radius 1.32 times that of Earth, orbiting the early G-type main sequence star Kepler-90 every 14.45 days, discovered by NASA's Kepler spacecraft. It is located about 2,840 light-years (870 parsecs, or nearly 2.4078×10^16 km) from Earth in the constellation Draco. The exoplanet is the eighth in the star's multiplanetary system. As of December 2017, Kepler-90 is the star hosting the most exoplanets found. Kepler-90i was found with the transit method, in which the dimming effect that a planet causes as it crosses in front of its star is measured, and by a newly utilized computer tool, deep learning, a class of machine learning algorithms. Characteristics: Mass, radius and temperature Kepler-90i is a super-Earth exoplanet with a radius of 1.32 R⊕, indicating that it is small enough to be rocky. With an Earth-like composition, Kepler-90i would have a mass of about 2.3 M⊕, since its volume is 1.32³ ≈ 2.3 times that of Earth's. It has an equilibrium temperature of 709 K (436 °C; 817 °F), similar to the average temperature of Venus. Characteristics: Host star The planet orbits Kepler-90, a G-type main sequence star. The star has a mass of 1.2 M☉ and a radius of 1.2 R☉. It has a surface temperature of 6080 K and an estimated age of around 2 billion years, with considerable uncertainty. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14. It is too dim to be seen with the naked eye. Characteristics: Orbital characteristics Kepler-90i orbits its host star about every 14.45 days with a semi-major axis of 0.107 AU. Due to its very close distance to its host star, it is likely to be tidally locked, meaning that one side permanently faces the star in eternal daylight and the other side permanently faces away from the star in eternal darkness. Discovery: In 2009, NASA's Kepler spacecraft was observing stars with its photometer, the instrument it uses to detect transit events, in which a planet crosses in front of and dims its host star for a brief and roughly regular period of time. In its last test, Kepler observed 50,000 stars in the Kepler Input Catalog, including Kepler-90; the preliminary light curves were sent to the Kepler science team for analysis, who chose obvious planetary companions from the bunch for follow-up at observatories. Discovery of the exoplanet was aided by a newly utilized computer tool, deep learning, a class of machine learning algorithms.
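The mass estimate quoted above is a simple scaling argument: at an Earth-like mean density, mass scales with volume, i.e. with the cube of the radius. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope check of the mass estimate quoted above: assuming an
# Earth-like mean density, mass scales with volume, i.e. with radius cubed.
radius_ratio = 1.32                 # planet radius in Earth radii (from the text)
volume_ratio = radius_ratio ** 3    # ~2.30 Earth volumes
mass_earth_like = volume_ratio      # ~2.3 Earth masses at Earth-like density

print(f"volume ratio: {volume_ratio:.2f}")
print(f"implied mass: {mass_earth_like:.1f} Earth masses")
```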
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Satellite radio** Satellite radio: Satellite radio is defined by the International Telecommunication Union (ITU)'s ITU Radio Regulations (RR) as a broadcasting-satellite service. The satellite's signals are broadcast nationwide, across a much wider geographical area than terrestrial radio stations, and the service is primarily intended for the occupants of motor vehicles. It is available by subscription, mostly commercial free, and offers subscribers more stations and a wider variety of programming options than terrestrial radio.Satellite radio technology was inducted into the Space Foundation Space Technology Hall of Fame in 2002. Satellite radio uses the 2.3 GHz S band in North America for nationwide digital radio broadcasting. In other parts of the world, satellite radio uses the 1.4 GHz L band allocated for DAB. History and overview: The first satellite radio broadcasts occurred in Africa and the Middle East in 1999. The first US broadcasts were in 2001 followed by Japan in 2004 and Canada in 2005. History and overview: There have been three (not counting MobaHo! of Japan) major satellite radio companies: WorldSpace, Sirius Satellite Radio and XM Satellite Radio, all founded in the 1990s in the United States. WorldSpace operated in the Africa and Asia region, whereas Sirius and XM competed in the North American (USA and Canada) market. Of the three companies, WorldSpace went bankrupt in 2009 and Sirius and XM merged in 2008 to form Sirius XM. The merger was done to avoid bankruptcy. The new company had financial problems and was within days of bankruptcy in 2009, but was able to find investors. The company did not go bankrupt and Sirius XM Satellite radio continues (as of 2023) to operate. History and overview: Africa and Eurasia WorldSpace was founded by Ethiopia-born lawyer Noah Samara in Washington, D.C., in 1990, with the goal of making satellite radio programming available to the developing world. On June 22, 1991, the FCC gave WorldSpace permission to launch a satellite to provide digital programming to Africa and the Middle East. WorldSpace first began broadcasting satellite radio on October 1, 1999, in Africa. India would ultimately account for over 90% of WorldSpace’s subscriber base. In 2008, WorldSpace announced plans to enter Europe, but those plans were set aside when the company filed for Chapter 11 bankruptcy in November 2008. In March 2010, the company announced it would be de-commissioning its two satellites (one served Asia, the other served Africa). Liberty Media, which owns 50% of Sirius XM Radio, had considered purchasing WorldSpace’s assets, but talks between the companies collapsed. The satellites are now transmitting educational data and operate under the name of Yazmi USA, LLC. History and overview: Ondas Media was a Spanish company which had proposed to launch a subscription-based satellite radio system to serve Spain and much of Western Europe, but failed to acquire licenses throughout Europe.Onde Numérique was a French company which had proposed to launch a subscription-based satellite radio system to serve France and several other countries in Western Europe but has suspended its plans indefinitely, effective December, 2016. History and overview: United States Sirius Satellite Radio was founded by Martine Rothblatt, who served as the new company's Chairman of the Board. Co-founder David Margolese served as Chief Executive Officer with former NASA engineer Robert Briskman serving as President and Chief Operating Officer. 
In June 1990, Rothblatt's shell company, Satellite CD Radio, Inc., petitioned the Federal Communications Commission (FCC) to assign new frequencies for satellites to broadcast digital sound to homes and cars. The company identified and argued in favor of the use of the S-band frequencies that the FCC subsequently decided to allocate to digital audio broadcasting. The National Association of Broadcasters contended that satellite radio would harm local radio stations.In April 1992, Rothblatt resigned as CEO of Satellite CD Radio; Briskman, who designed the company's satellite technology, was then appointed chairman and CEO. Six months later, Rogers Wireless co-founder Margolese, who had provided financial backing for the venture, acquired control of the company and succeeded Briskman. Margolese renamed the company CD Radio, and spent the next five years lobbying the FCC to allow satellite radio to be deployed, and the following five years raising $1.6 billion, which was used to build and launch three satellites into elliptical orbit from Kazakhstan in July 2000. In 1997, after Margolese had obtained regulatory clearance and "effectively created the industry," the FCC also sold a license to the American Mobile Radio Corporation, which changed its name to XM Satellite Radio in October 1998. XM was founded by Lon Levin and Gary Parsons, who served as chairman until November 2009.CD Radio purchased their license for $83.3 million, and American Mobile Radio Corporation bought theirs for $89.9 million. Digital Satellite Broadcasting Corporation and Primosphere were unsuccessful in their bids for licenses. Sky Highway Radio Corporation had also expressed interest in creating a satellite radio network, before being bought out by CD Radio in 1993 for $2 million. In November 1999, Margolese changed the name of CD Radio to Sirius Satellite Radio. In November 2001, Margolese stepped down as CEO, remaining as chairman until November 2003, with Sirius issuing a statement thanking him "for his great vision, leadership and dedication in creating both Sirius and the satellite radio industry."XM’s first satellite was launched on March 18, 2001 and its second on May 8, 2001. Its first broadcast occurred on September 25, 2001, nearly four months before Sirius. Sirius launched the initial phase of its service in four cities on February 14, 2002, expanding to the rest of the contiguous United States on July 1, 2002. The two companies spent over $3 billion combined to develop satellite radio technology, build and launch the satellites, and for various other business expenses. Stating that it was the only way satellite radio could survive, Sirius and XM announced their merger on February 19, 2007, becoming Sirius XM. The FCC approved the merger on July 25, 2008, concluding that it was not a monopoly, primarily due to Internet audio-streaming competition. History and overview: Japan MobaHo! was a mobile satellite digital audio/video broadcasting service based in Japan which offered different services to Japan and the Republic of Korea and whose services began on October 20, 2004, and ended on March 31, 2009. Canada XM satellite radio was launched in Canada on November 29, 2005. Sirius followed two days later on December 1, 2005. Sirius Canada and XM Radio Canada announced their merger into Sirius XM Canada on November 24, 2010. It was approved by the Canadian Radio-television and Telecommunications Commission on April 12, 2011. 
System design: Satellite radio uses the 2.3 GHz S band in North America for nationwide digital radio broadcasting. MobaHO! operated at 2.6 GHz. In other parts of the world, satellite radio uses part of the 1.4 GHz L band allocated for DAB. Satellite radio subscribers purchase a receiver and pay a monthly subscription fee to listen to programming. They can listen through built-in or portable receivers in automobiles; in the home and office with a portable or tabletop receiver equipped to connect the receiver to a stereo system; or on the Internet. Reception is activated by obtaining the radio's unique ID and giving this to the service provider. Ground stations transmit signals to the satellites, which are 35,786 kilometers (22,236 miles) above the Equator in geostationary orbits. The satellites send the signals back down to radio receivers in cars and homes. This signal contains scrambled broadcasts, along with metadata about each specific broadcast. The signals are unscrambled by the radio receiver modules, which display the broadcast information. In urban areas, ground repeaters enable signals to be available even if the satellite signal is blocked. The technology allows for nationwide broadcasting, so that, for instance, US listeners can hear the same stations anywhere in the country. Content, availability and market penetration: Satellite radio in the US offers commercial-free music stations, as well as news, sports, and talk, some of which include commercials. In 2004, satellite radio companies in the United States began providing background music to hotels, retail chains, restaurants, airlines and other businesses. On April 30, 2013, SiriusXM CEO Jim Meyer stated that the company would be pursuing opportunities over the next few years to provide in-car services through their existing satellites, including telematics (automated security and safety, such as stolen vehicle tracking and roadside assistance) and entertainment (such as weather and gas prices). As of December 2020, SiriusXM had 34.7 million subscribers. This was primarily due to the company's partnerships with automakers and car dealers. Roughly 60% of new cars sold come equipped with SiriusXM, and just under half of those units gain paid subscriptions. The company has long-term deals with General Motors, Ford, Toyota, Kia, Bentley, BMW, Volkswagen, Nissan, Hyundai and Mitsubishi. The presence of Howard Stern, whose show attracts over 12 million listeners per week, has also been a factor in the company's steady growth. As of 2013, the main competition to satellite radio is streaming Internet services, such as Pandora and Spotify, as well as FM and AM radio. Satellite radio vs. other formats: Satellite radio differs from AM radio, FM radio, and digital television radio (DTR) chiefly in its coverage area, subscription model, and programming variety; the comparison applies primarily to the United States.
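The 35,786 km altitude quoted in the system-design paragraph above follows from requiring an orbital period of one sidereal day. The following is a minimal illustrative sketch (not from the source) that recovers that figure from Kepler's third law; the gravitational parameter, Earth radius, and sidereal-day length are standard physical constants.

```python
# Sketch: recover the geostationary altitude cited above from Kepler's third law.
# T^2 = 4*pi^2 * a^3 / GM  =>  a = (GM * T^2 / (4*pi^2))^(1/3); altitude = a - R_earth.
import math

GM_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378_137.0       # m, equatorial radius
SIDEREAL_DAY = 86_164.1     # s, one rotation of Earth relative to the stars

semi_major_axis = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (semi_major_axis - R_EARTH) / 1000.0

print(f"geostationary altitude ≈ {altitude_km:,.0f} km")  # ≈ 35,786 km above the Equator
```

A satellite at this altitude keeps pace with the Earth's rotation, which is why fixed receivers and car antennas need no tracking.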
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Carbon monoxide** Carbon monoxide: Carbon monoxide (chemical formula CO) is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes, the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry. The most common source of carbon monoxide is the partial combustion of carbon-containing compounds. Numerous environmental and biological sources generate carbon monoxide. In industry, carbon monoxide is important in the production of many compounds, including drugs, fragrances, and fuels. Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change. Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis, where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic, resulting in carbon monoxide poisoning. It is isoelectronic with the cyanide anion CN−. History: Prehistory Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Early humans probably discovered the toxicity of carbon monoxide upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies emerging circa 6,000 BC through the Bronze Age likewise exposed humankind to carbon monoxide. Apart from the toxicity of carbon monoxide, indigenous Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals. History: Ancient history Early civilizations developed mythological tales to explain the origin of fire, such as Prometheus from Greek mythology, who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. The Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and many others of the era developed a basis of knowledge about carbon monoxide in the context of coal fume toxicity. Cleopatra may have died from carbon monoxide poisoning. History: Pre-Industrial Revolution Georg Ernst Stahl mentioned carbonarii halitus in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s. Joseph Priestley is considered to have first synthesized carbon monoxide in 1772. Carl Wilhelm Scheele similarly isolated carbon monoxide from charcoal in 1773 and thought it could be the carbonic entity making fumes toxic. Torbern Bergman isolated carbon monoxide from oxalic acid in 1775. Later, in 1776, the French chemist de Lassone produced CO by heating zinc oxide with coke, but mistakenly concluded that the gaseous product was hydrogen, as it burned with a blue flame. In the presence of oxygen, including at atmospheric concentrations, carbon monoxide burns with a blue flame, producing carbon dioxide. Antoine Lavoisier conducted similarly inconclusive experiments to Lassone's in 1777.
The gas was identified as a compound containing carbon and oxygen by William Cruickshank in 1800. Thomas Beddoes and James Watt recognized in 1793 that carbon monoxide (as hydrocarbonate) brightens venous blood. Watt suggested coal fumes could act as an antidote to the oxygen in blood, and Beddoes and Watt likewise suggested in 1796 that hydrocarbonate has a greater affinity for animal fiber than oxygen. In 1854, Adrien Chenot similarly suggested that carbon monoxide removes the oxygen from blood and is then oxidized by the body to carbon dioxide. The mechanism for carbon monoxide poisoning is widely credited to Claude Bernard, whose memoirs, begun in 1846 and published in 1857, stated that it "prevents arterial blood from becoming venous". Felix Hoppe-Seyler independently published similar conclusions in the following year. History: Advent of industrial chemistry Carbon monoxide gained recognition as an essential reagent in the 1900s. Three industrial processes illustrate its evolution in industry. In the Fischer–Tropsch process, coal and related carbon-rich feedstocks are converted into liquid fuels via the intermediacy of CO. Originally developed as part of the German war effort to compensate for their lack of domestic petroleum, this technology continues today. Also in Germany, a mixture of CO and hydrogen was found to combine with olefins to give aldehydes. This process, called hydroformylation, is used to produce many large-scale chemicals such as surfactants as well as specialty compounds that are popular fragrances and drugs. For example, CO is used in the production of vitamin A. In a third major process, attributed to researchers at Monsanto, CO combines with methanol to give acetic acid. Most acetic acid is produced by the Cativa process. Hydroformylation and the acetic acid syntheses are two of myriad carbonylation processes. Physical and chemical properties: Carbon monoxide is the simplest oxocarbon and is isoelectronic with other triply-bonded diatomic species possessing 10 valence electrons, including the cyanide anion, the nitrosonium cation, boron monofluoride and molecular nitrogen. It has a molar mass of 28.0, which, according to the ideal gas law, makes it slightly less dense than air, whose average molar mass is 28.8. Physical and chemical properties: The carbon and oxygen are connected by a triple bond that consists of a net two pi bonds and one sigma bond. The bond length between the carbon atom and the oxygen atom is 112.8 pm. This bond length is consistent with a triple bond, as in molecular nitrogen (N2), which has a similar bond length (109.76 pm) and nearly the same molecular mass. Carbon–oxygen double bonds are significantly longer, 120.8 pm in formaldehyde, for example. The boiling point (82 K) and melting point (68 K) are very similar to those of N2 (77 K and 63 K, respectively). The bond-dissociation energy of 1072 kJ/mol is stronger than that of N2 (942 kJ/mol) and represents the strongest chemical bond known. The ground electronic state of carbon monoxide is a singlet state, since there are no unpaired electrons. Physical and chemical properties: Bonding and dipole moment Carbon and oxygen together have a total of 10 electrons in the valence shell. Following the octet rule for both carbon and oxygen, the two atoms form a triple bond, with six shared electrons in three bonding molecular orbitals, rather than the usual double bond found in organic carbonyl compounds.
Since four of the shared electrons come from the oxygen atom and only two from carbon, one bonding orbital is occupied by two electrons from oxygen, forming a dative or dipolar bond. This causes a C←O polarization of the molecule, with a small negative charge on carbon and a small positive charge on oxygen. The other two bonding orbitals are each occupied by one electron from carbon and one from oxygen, forming (polar) covalent bonds with a reverse C→O polarization, since oxygen is more electronegative than carbon. In the free carbon monoxide molecule, a net negative charge δ– remains at the carbon end and the molecule has a small dipole moment of 0.122 D. The molecule is therefore asymmetric: oxygen has more electron density than carbon, yet it carries a slight positive charge while carbon carries a slight negative one. By contrast, the isoelectronic dinitrogen molecule has no dipole moment. Physical and chemical properties: Carbon monoxide has a computed fractional bond order of 2.6, indicating that the "third" bond is important but constitutes somewhat less than a full bond. Thus, in valence bond terms, –C≡O+ is the most important structure, while :C=O is non-octet, but has a neutral formal charge on each atom and represents the second most important resonance contributor. Because of the lone pair and divalence of carbon in this resonance structure, carbon monoxide is often considered to be an extraordinarily stabilized carbene. Isocyanides are compounds in which the O is replaced by an NR (R = alkyl or aryl) group and have a similar bonding scheme. Physical and chemical properties: If carbon monoxide acts as a ligand, the polarity of the dipole may reverse with a net negative charge on the oxygen end, depending on the structure of the coordination complex. See also the section "Coordination chemistry" below. Physical and chemical properties: Bond polarity and oxidation state Theoretical and experimental studies show that, despite the greater electronegativity of oxygen, the dipole moment points from the more-negative carbon end to the more-positive oxygen end. The three bonds are in fact polar covalent bonds that are strongly polarized. The calculated polarization toward the oxygen atom is 71% for the σ-bond and 77% for both π-bonds. The oxidation state of carbon in carbon monoxide is +2 in each of these structures. It is calculated by counting all the bonding electrons as belonging to the more electronegative oxygen. Only the two non-bonding electrons on carbon are assigned to carbon. In this count, carbon then has only two valence electrons in the molecule, compared to four in the free atom. Occurrence: Carbon monoxide occurs in various natural and artificial environments. Photochemical degradation of plant matter, for example, generates an estimated 60 billion kilograms per year. Typical concentrations vary widely between these environments, from parts per billion in the open atmosphere to hundreds of parts per million or more near combustion sources. Atmospheric presence Carbon monoxide (CO) is present in small amounts (about 80 ppb) in the Earth's atmosphere. Much of atmospheric CO comes from chemical reactions with organic compounds emitted by human activities and from natural photochemical reactions in the troposphere, which generate about 5 × 10¹² kilograms per year. Other natural sources of CO include volcanoes, forest and bushfires, and other miscellaneous forms of combustion such as fossil fuels. Small amounts are also emitted from the ocean, and from geological activity, because carbon monoxide occurs dissolved in molten volcanic rock at high pressures in the Earth's mantle.
Because natural sources of carbon monoxide vary from year to year, it is difficult to accurately measure natural emissions of the gas. Occurrence: Carbon monoxide has an indirect effect on radiative forcing by elevating concentrations of direct greenhouse gases, including methane and tropospheric ozone. CO can react chemically with other atmospheric constituents (primarily the hydroxyl radical, •OH) that would otherwise destroy methane. Through natural processes in the atmosphere, it is oxidized to carbon dioxide and ozone. Carbon monoxide is short-lived in the atmosphere (with an average lifetime of about one to two months) and spatially variable in concentration. Because that lifetime is still long enough for transport through the mid-troposphere, carbon monoxide is also used as a tracer for pollutant plumes. Occurrence: Pollution Urban pollution Carbon monoxide is a temporary atmospheric pollutant in some urban areas, chiefly from the exhaust of internal combustion engines (including vehicles, portable and back-up generators, lawnmowers, power washers, etc.), but also from incomplete combustion of various other fuels (including wood, coal, charcoal, oil, paraffin, propane, natural gas, and trash). Large CO pollution events can be observed from space over cities. Occurrence: Role in ground level ozone formation Carbon monoxide is, along with aldehydes, part of the series of cycles of chemical reactions that form photochemical smog. It reacts with hydroxyl radical (•OH) to produce the radical intermediate •HOCO, which rapidly transfers its radical hydrogen to O2 to form the peroxy radical (HO2•) and carbon dioxide (CO2). The peroxy radical subsequently reacts with nitric oxide (NO) to form nitrogen dioxide (NO2) and hydroxyl radical. NO2 gives O(3P) via photolysis, thereby forming O3 following reaction with O2. Occurrence: Since hydroxyl radical is formed during the formation of NO2, the balance of the sequence of chemical reactions starting with carbon monoxide and leading to the formation of ozone is: CO + 2O2 + hν → CO2 + O3 (where hν refers to the photon of light absorbed by the NO2 molecule in the sequence). Although the creation of NO2 is the critical step leading to low-level ozone formation, it also increases this ozone in another, somewhat mutually exclusive way, by reducing the quantity of NO that is available to react with ozone. Occurrence: Indoor pollution In closed environments, the concentration of carbon monoxide can rise to lethal levels. On average, 170 people in the United States die every year from carbon monoxide produced by non-automotive consumer products. Occurrence: These products include malfunctioning fuel-burning appliances such as furnaces, ranges, water heaters, and gas and kerosene room heaters; engine-powered equipment such as portable generators (and cars left running in attached garages); fireplaces; and charcoal that is burned in homes and other enclosed areas. Many deaths have occurred during power outages due to severe weather such as Hurricane Katrina and the 2021 Texas power crisis. Occurrence: Mining Miners refer to carbon monoxide as "whitedamp" or the "silent killer". It can be found in confined areas of poor ventilation in both surface mines and underground mines. The most common sources of carbon monoxide in mining operations are the internal combustion engine and explosives; however, in coal mines, carbon monoxide can also be found due to the low-temperature oxidation of coal. The idiom "canary in the coal mine" refers to the use of canaries to give early warning of carbon monoxide in mines.
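Returning to the ground-level ozone sequence described above: written out step by step (in LaTeX, for readability), the radical intermediates •OH, HO2•, NO, NO2 and O(3P) all cancel, leaving exactly the net balance quoted in the text.

```latex
\begin{align*}
\mathrm{CO} + {}^{\bullet}\mathrm{OH} &\longrightarrow {}^{\bullet}\mathrm{HOCO} \\
{}^{\bullet}\mathrm{HOCO} + \mathrm{O_2} &\longrightarrow \mathrm{HO_2}^{\bullet} + \mathrm{CO_2} \\
\mathrm{HO_2}^{\bullet} + \mathrm{NO} &\longrightarrow \mathrm{NO_2} + {}^{\bullet}\mathrm{OH} \\
\mathrm{NO_2} + h\nu &\longrightarrow \mathrm{NO} + \mathrm{O}(^{3}P) \\
\mathrm{O}(^{3}P) + \mathrm{O_2} &\longrightarrow \mathrm{O_3} \\[4pt]
\text{net:}\quad \mathrm{CO} + 2\,\mathrm{O_2} + h\nu &\longrightarrow \mathrm{CO_2} + \mathrm{O_3}
\end{align*}
```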
Occurrence: Astronomy Beyond Earth, carbon monoxide is the second-most common diatomic molecule in the interstellar medium, after molecular hydrogen. Because of its asymmetry, this polar molecule produces far brighter spectral lines than the hydrogen molecule, making CO much easier to detect. Interstellar CO was first detected with radio telescopes in 1970. It is now the most commonly used tracer of molecular gas in general in the interstellar medium of galaxies, as molecular hydrogen can only be detected using ultraviolet light, which requires space telescopes. Carbon monoxide observations provide much of the information about the molecular clouds in which most stars form. Beta Pictoris, the second brightest star in the constellation Pictor, shows an excess of infrared emission compared to normal stars of its type, which is caused by large quantities of dust and gas (including carbon monoxide) near the star. Occurrence: In the atmosphere of Venus, carbon monoxide occurs as a result of the photodissociation of carbon dioxide by electromagnetic radiation of wavelengths shorter than 169 nm. It has also been identified spectroscopically on the surface of Neptune's moon Triton. Solid carbon monoxide is a component of comets. The volatile or "ice" component of Halley's Comet is about 15% CO. At room temperature and at atmospheric pressure, carbon monoxide is actually only metastable (see Boudouard reaction), and the same is true at low temperatures where CO and CO2 are solid, but nevertheless it can exist for billions of years in comets. There is very little CO in the atmosphere of Pluto, which seems to have been formed from comets. This may be because there is (or was) liquid water inside Pluto. Occurrence: Carbon monoxide can react with water to form carbon dioxide and hydrogen: CO + H2O → H2 + CO2. This is called the water-gas shift reaction when occurring in the gas phase, but it can also take place (very slowly) in an aqueous solution. If the hydrogen partial pressure is high enough (for instance in an underground sea), formic acid will be formed: CO + H2O → HCOOH. These reactions can take place in a few million years even at temperatures such as those found on Pluto. Chemistry: Carbon monoxide has a wide range of functions across all disciplines of chemistry. The four premier categories of reactivity involve metal-carbonyl catalysis, radical chemistry, and cation and anion chemistry. Chemistry: Coordination chemistry Most metals form coordination complexes containing covalently attached carbon monoxide. Only metals in lower oxidation states will complex with carbon monoxide ligands. This is because there must be sufficient electron density to facilitate back-donation from the metal dxz orbital to the π* molecular orbital of CO. The lone pair on the carbon atom in CO also donates electron density to the metal dx2−y2 orbital to form a sigma bond. This electron donation is also exhibited with the cis effect, or the labilization of CO ligands in the cis position. Nickel carbonyl, for example, forms by the direct combination of carbon monoxide and nickel metal: Ni + 4 CO → Ni(CO)4 (1 bar, 55 °C). For this reason, nickel in any tubing or part must not come into prolonged contact with carbon monoxide.
Nickel carbonyl decomposes readily back to Ni and CO upon contact with hot surfaces, and this method is used for the industrial purification of nickel in the Mond process. In nickel carbonyl and other carbonyls, the electron pair on the carbon interacts with the metal; the carbon monoxide donates the electron pair to the metal. In these situations, carbon monoxide is called the carbonyl ligand. One of the most important metal carbonyls is iron pentacarbonyl, Fe(CO)5. Many metal–CO complexes are prepared by decarbonylation of organic solvents, not from CO. For instance, iridium trichloride and triphenylphosphine react in boiling 2-methoxyethanol or DMF to afford IrCl(CO)(PPh3)2. Chemistry: Metal carbonyls in coordination chemistry are usually studied using infrared spectroscopy. Chemistry: Organic and main group chemistry In the presence of strong acids and water, carbon monoxide reacts with alkenes to form carboxylic acids in a process known as the Koch–Haaf reaction. In the Gattermann–Koch reaction, arenes are converted to benzaldehyde derivatives in the presence of AlCl3 and HCl. Organolithium compounds (e.g. butyl lithium) react with carbon monoxide, but these reactions have little scientific use. Chemistry: Although CO reacts with carbocations and carbanions, it is relatively nonreactive toward organic compounds without the intervention of metal catalysts. With main group reagents, CO undergoes several noteworthy reactions. Chlorination of CO is the industrial route to the important compound phosgene. With borane, CO forms the adduct H3BCO, which is isoelectronic with the acetylium cation [H3CCO]+. CO reacts with sodium to give products resulting from C−C coupling, such as sodium acetylenediolate (Na2C2O2). It reacts with molten potassium to give a mixture of an organometallic compound, potassium acetylenediolate (K2C2O2), potassium benzenehexolate (K6C6O6), and potassium rhodizonate (K2C6O6). The compounds cyclohexanehexone or triquinoyl (C6O6) and cyclopentanepentone or leuconic acid (C5O5), which so far have been obtained only in trace amounts, can be regarded as polymers of carbon monoxide. At pressures exceeding 5 GPa, carbon monoxide converts to polycarbonyl, a solid polymer that is metastable at atmospheric pressure but is explosive. Chemistry: Laboratory preparation Carbon monoxide is conveniently produced in the laboratory by the dehydration of formic acid or oxalic acid, for example with concentrated sulfuric acid. Another method is heating an intimate mixture of powdered zinc metal and calcium carbonate, which releases CO and leaves behind zinc oxide and calcium oxide: Zn + CaCO3 → ZnO + CaO + CO. Silver nitrate and iodoform also afford carbon monoxide: CHI3 + 3 AgNO3 + H2O → 3 HNO3 + CO + 3 AgI. Finally, metal oxalate salts release CO upon heating, leaving a carbonate as byproduct: Na2C2O4 → Na2CO3 + CO. Production: Thermal combustion is the most common source of carbon monoxide. Carbon monoxide is produced from the partial oxidation of carbon-containing compounds; it forms when there is not enough oxygen to produce carbon dioxide (CO2), such as when operating a stove or an internal combustion engine in an enclosed space. For example, during World War II, a gas mixture including carbon monoxide was used to keep motor vehicles running in parts of the world where gasoline and diesel fuel were scarce.
External (with a few exceptions) charcoal or wood gas generators were fitted, and the mixture of atmospheric nitrogen, hydrogen, carbon monoxide, and small amounts of other gases produced by gasification was piped to a gas mixer. The gas mixture produced by this process is known as wood gas. Production: A large quantity of CO byproduct is formed during the oxidative processes for the production of chemicals. For this reason, the process off-gases have to be purified. Many methods have been developed for carbon monoxide production. Production: Industrial production A major industrial source of CO is producer gas, a mixture containing mostly carbon monoxide and nitrogen, formed by combustion of carbon in air at high temperature when there is an excess of carbon. In an oven, air is passed through a bed of coke. The initially produced CO2 equilibrates with the remaining hot carbon to give CO. The reaction of CO2 with carbon to give CO is described as the Boudouard reaction. Above 800 °C, CO is the predominant product: CO2 (g) + C (s) → 2 CO (g) (ΔHr = 170 kJ/mol). Another source is "water gas", a mixture of hydrogen and carbon monoxide produced via the endothermic reaction of steam and carbon: H2O (g) + C (s) → H2 (g) + CO (g) (ΔHr = 131 kJ/mol). Other similar "synthesis gases" can be obtained from natural gas and other fuels. Production: Carbon monoxide can also be produced by high-temperature electrolysis of carbon dioxide with solid oxide electrolyzer cells. One method developed at DTU Energy uses a cerium oxide catalyst and does not have any issues of fouling of the catalyst: 2 CO2 → 2 CO + O2. Carbon monoxide is also a byproduct of the reduction of metal oxide ores with carbon, shown in a simplified form as follows: MO + C → M + CO. Carbon monoxide is also produced by the direct oxidation of carbon in a limited supply of oxygen or air: 2 C + O2 → 2 CO. Since CO is a gas, the reduction process can be driven by heating, exploiting the positive (favorable) entropy of reaction. The Ellingham diagram shows that CO formation is favored over CO2 at high temperatures. Use: Chemical industry Carbon monoxide is an industrial gas that has many applications in bulk chemicals manufacturing. Large quantities of aldehydes are produced by the hydroformylation reaction of alkenes, carbon monoxide, and H2. Hydroformylation is coupled to the Shell higher olefin process to give precursors to detergents. Phosgene, useful for preparing isocyanates, polycarbonates, and polyurethanes, is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst: CO + Cl2 → COCl2. World production of this compound was estimated to be 2.74 million tonnes in 1989. Methanol is produced by the hydrogenation of carbon monoxide. In a related reaction, the hydrogenation of carbon monoxide is coupled to C−C bond formation, as in the Fischer–Tropsch process, where carbon monoxide is hydrogenated to liquid hydrocarbon fuels. This technology allows coal or biomass to be converted to diesel. In the Cativa process, carbon monoxide and methanol react in the presence of a homogeneous iridium catalyst and hydroiodic acid to give acetic acid. This process is responsible for most of the industrial production of acetic acid. Use: Metallurgy Carbon monoxide is a strong reductive agent and has been used in pyrometallurgy to reduce metals from ores since ancient times.
Carbon monoxide strips oxygen off metal oxides, reducing them to pure metal at high temperatures, forming carbon dioxide in the process. Carbon monoxide is not usually supplied as such, in the gaseous phase, to the reactor; rather, it is formed at high temperature in the presence of oxygen-carrying ore and a carbonaceous agent such as coke. The blast furnace process is a typical example of a process of reduction of metal from ore with carbon monoxide. Use: Likewise, blast furnace gas collected at the top of the blast furnace still contains some 10% to 30% carbon monoxide and is used as fuel in Cowper stoves and in Siemens-Martin furnaces for open-hearth steelmaking. Lasers Carbon monoxide has also been used as a lasing medium in high-powered infrared lasers. Use: Proposed use as fuel on Mars Carbon monoxide has been proposed for use as a fuel on Mars. Carbon monoxide/oxygen engines have been suggested for early surface transportation use, as both carbon monoxide and oxygen can be straightforwardly produced from the carbon dioxide atmosphere of Mars by zirconia electrolysis, without using any Martian water resources to obtain hydrogen, which would be needed to make methane or any hydrogen-based fuel. Biological and physiological properties: Physiology Carbon monoxide is a bioactive molecule which acts as a gaseous signaling molecule. It is naturally produced by many enzymatic and non-enzymatic pathways, the best understood of which is the catabolic action of heme oxygenase on the heme derived from hemoproteins such as hemoglobin. Following the first report, in 1993, that carbon monoxide is a normal neurotransmitter, it has received significant clinical attention as a biological regulator. Biological and physiological properties: Because of carbon monoxide's role in the body, abnormalities in its metabolism have been linked to a variety of diseases, including neurodegenerative diseases, hypertension, heart failure, and pathological inflammation. In many tissues, carbon monoxide acts as an anti-inflammatory and vasodilatory agent and encourages neovascular growth. In animal model studies, carbon monoxide reduced the severity of experimentally induced bacterial sepsis, pancreatitis, hepatic ischemia/reperfusion injury, colitis, osteoarthritis, lung injury, lung transplantation rejection, and neuropathic pain, while promoting skin wound healing. Therefore, there is significant interest in the therapeutic potential of carbon monoxide becoming a pharmaceutical agent and a clinical standard of care. Biological and physiological properties: Medicine Studies involving carbon monoxide have been conducted in many laboratories throughout the world for its anti-inflammatory and cytoprotective properties. These properties have the potential to be used to prevent the development of a series of pathological conditions including ischemia reperfusion injury, transplant rejection, atherosclerosis, severe sepsis, severe malaria, or autoimmunity. Many pharmaceutical drug delivery initiatives have developed methods to safely administer carbon monoxide, and subsequent controlled clinical trials have evaluated the therapeutic effect of carbon monoxide. Biological and physiological properties: Microbiology Microbiota may also utilize carbon monoxide as a gasotransmitter. Carbon monoxide sensing is a signaling pathway facilitated by proteins such as CooA. The scope of the biological roles for carbon monoxide sensing is still unknown.
Biological and physiological properties: The human microbiome produces, consumes, and responds to carbon monoxide. For example, in certain bacteria, carbon monoxide is produced via the reduction of carbon dioxide by the enzyme carbon monoxide dehydrogenase with favorable bioenergetics to power downstream cellular operations. In another example, carbon monoxide is a nutrient for methanogenic archaea, which reduce it to methane using hydrogen. Carbon monoxide has certain antimicrobial properties which have been studied for the treatment of infectious diseases. Biological and physiological properties: Food science Carbon monoxide is used in modified atmosphere packaging systems in the US, mainly with fresh meat products such as beef, pork, and fish, to keep them looking fresh. The benefit is two-fold: carbon monoxide protects against microbial spoilage and it enhances the meat color for consumer appeal. The carbon monoxide combines with myoglobin to form carboxymyoglobin, a bright-cherry-red pigment. Carboxymyoglobin is more stable than the oxygenated form of myoglobin, oxymyoglobin, which can become oxidized to the brown pigment metmyoglobin. This stable red color can persist much longer than in normally packaged meat. Typical levels of carbon monoxide used in the facilities that use this process are between 0.4% and 0.5%. The technology was first given "generally recognized as safe" (GRAS) status by the U.S. Food and Drug Administration (FDA) in 2002 for use as a secondary packaging system, and does not require labeling. In 2004, the FDA approved CO as a primary packaging method, declaring that CO does not mask spoilage odor. The process is currently unauthorized in many other countries, including Japan, Singapore, and the European Union. Biological and physiological properties: Toxicity Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries. The Centers for Disease Control and Prevention estimates that several thousand people go to hospital emergency rooms every year to be treated for carbon monoxide poisoning. According to the Florida Department of Health, "every year more than 500 Americans die from accidental exposure to carbon monoxide and thousands more across the U.S. require emergency medical care for non-fatal carbon monoxide poisoning." The American Association of Poison Control Centers (AAPCC) reported 15,769 cases of carbon monoxide poisoning resulting in 39 deaths in 2007. In 2005, the CPSC reported 94 generator-related carbon monoxide poisoning deaths. Carbon monoxide is colorless, odorless, and tasteless. As such, it is relatively undetectable. It readily combines with hemoglobin to produce carboxyhemoglobin, which impairs gas exchange; exposure can therefore be highly toxic. Concentrations as low as 667 ppm may cause up to 50% of the body's hemoglobin to convert to carboxyhemoglobin. A level of 50% carboxyhemoglobin may result in seizure, coma, and fatality. In the United States, OSHA limits long-term workplace exposure to 50 ppm. In addition to affecting oxygen delivery, carbon monoxide also binds to other hemoproteins such as myoglobin and mitochondrial cytochrome oxidase, as well as to metallic and non-metallic cellular targets, affecting many cellular operations.
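As a rough numerical check of the 667 ppm figure in the toxicity paragraph above, one can use the Haldane relation COHb/O2Hb ≈ M·(pCO/pO2). This sketch is illustrative only: the relation itself and the assumed affinity ratio M ≈ 230 for human hemoglobin are not taken from the source, and the estimate ignores binding to other hemoproteins and the long exposure needed to approach equilibrium.

```python
# Sketch: equilibrium carboxyhemoglobin fraction from the Haldane relation,
#   COHb / O2Hb = M * (pCO / pO2)   (assumed relation; M ≈ 230 is an assumed value).
P_O2 = 0.209   # atm, oxygen partial pressure in ambient air
M = 230.0      # assumed CO/O2 affinity ratio of hemoglobin

def cohb_fraction(co_ppm: float) -> float:
    """Fraction of hemoglobin bound as carboxyhemoglobin at equilibrium."""
    ratio = M * (co_ppm * 1e-6) / P_O2   # COHb / O2Hb
    return ratio / (1.0 + ratio)

print(f"667 ppm CO -> about {cohb_fraction(667):.0%} COHb")  # ≈ 42%, in line with "up to 50%"
```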
Biological and physiological properties: Weaponization In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War. Carbon monoxide was used for genocide during the Holocaust at some extermination camps, most notably by gas vans in Chełmno, and in the Action T4 "euthanasia" program.
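Returning to the Boudouard equilibrium CO2(g) + C(s) ⇌ 2 CO(g) discussed under Production above, the claim that CO predominates only at high temperature can be checked roughly with ΔG = ΔH − TΔS. The sketch below is illustrative only: the entropy change (≈ +176 J/(mol·K)) is an assumed textbook-style value, and both ΔH and ΔS are treated as temperature-independent.

```python
# Sketch: rough crossover temperature for the Boudouard reaction CO2 + C -> 2 CO,
# where dG = dH - T*dS changes sign (assumed values; see the note above).
DH = 172_000.0   # J/mol, endothermic (the text quotes ~170 kJ/mol)
DS = 176.0       # J/(mol*K), assumed entropy gain from making two gas molecules out of one

T_crossover = DH / DS   # temperature at which dG = 0
print(f"dG < 0 above ≈ {T_crossover:.0f} K (≈ {T_crossover - 273.15:.0f} °C)")
# ≈ 977 K (≈ 700 °C): consistent with CO becoming the predominant product
# at the furnace temperatures mentioned in the Production section.
```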
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alexandroff extension** Alexandroff extension: In the mathematical field of topology, the Alexandroff extension is a way to extend a noncompact topological space by adjoining a single point in such a way that the resulting space is compact. It is named after the Russian mathematician Pavel Alexandroff. Alexandroff extension: More precisely, let X be a topological space. Then the Alexandroff extension of X is a certain compact space X* together with an open embedding c : X → X* such that the complement of X in X* consists of a single point, typically denoted ∞. The map c is a Hausdorff compactification if and only if X is a locally compact, noncompact Hausdorff space. For such spaces the Alexandroff extension is called the one-point compactification or Alexandroff compactification. The advantages of the Alexandroff compactification lie in its simple, often geometrically meaningful structure and the fact that it is in a precise sense minimal among all compactifications; the disadvantage lies in the fact that it only gives a Hausdorff compactification on the class of locally compact, noncompact Hausdorff spaces, unlike the Stone–Čech compactification, which exists for any topological space (but provides an embedding exactly for Tychonoff spaces). Example: inverse stereographic projection: A geometrically appealing example of one-point compactification is given by the inverse stereographic projection. Recall that the stereographic projection S gives an explicit homeomorphism from the unit sphere minus the north pole (0,0,1) to the Euclidean plane. The inverse stereographic projection S−1: R2 ↪ S2 is an open, dense embedding into a compact Hausdorff space obtained by adjoining the additional point ∞ = (0,0,1). Under the stereographic projection, latitudinal circles z = c are mapped to planar circles of radius r = √((1+c)/(1−c)). It follows that the deleted neighborhood basis of (0,0,1) given by the punctured spherical caps c ≤ z < 1 corresponds to the complements of the open planar disks r < √((1+c)/(1−c)), that is, to the regions r ≥ √((1+c)/(1−c)). More qualitatively, a neighborhood basis at ∞ is furnished by the sets S−1(R2∖K) ∪ {∞} as K ranges through the compact subsets of R2. This example already contains the key concepts of the general case. Motivation: Let c: X ↪ Y be an embedding from a topological space X to a compact Hausdorff topological space Y, with dense image and one-point remainder {∞} = Y∖c(X). Then c(X) is open in a compact Hausdorff space, so it is locally compact Hausdorff; hence its homeomorphic preimage X is also locally compact Hausdorff. Moreover, if X were compact then c(X) would be closed in Y and hence not dense. Thus a space can only admit a Hausdorff one-point compactification if it is locally compact, noncompact and Hausdorff. Moreover, in such a one-point compactification the image of a neighborhood basis for x in X gives a neighborhood basis for c(x) in c(X), and, because a subset of a compact Hausdorff space is compact if and only if it is closed, the open neighborhoods of ∞ must be all sets obtained by adjoining ∞ to the image under c of a subset of X with compact complement. The Alexandroff extension: Let X be a topological space. Put X∗ = X ∪ {∞}, and topologize X∗ by taking as open sets all the open subsets U of X together with all sets of the form V = (X∖C) ∪ {∞} where C is closed and compact in X. Here, X∖C denotes the complement of C in X.
Note that V is an open neighborhood of ∞. Any open cover of X∗ must contain such a set V, which covers all of X∗ except a compact subset C of X; since C is covered by finitely many of the remaining open sets, X∗ is compact (Kelley 1975, p. 150). The space X∗ is called the Alexandroff extension of X (Willard, 19A). Sometimes the same name is used for the inclusion map c: X → X∗. The properties below follow from the above discussion: the map c is continuous and open, embedding X as an open subset of X∗; the space X∗ is compact; the image c(X) is dense in X∗ if X is noncompact; the space X∗ is Hausdorff if and only if X is Hausdorff and locally compact; and the space X∗ is T1 if and only if X is T1. The one-point compactification: In particular, the Alexandroff extension c: X → X∗ is a Hausdorff compactification of X if and only if X is Hausdorff, noncompact and locally compact. In this case it is called the one-point compactification or Alexandroff compactification of X. The one-point compactification: Recall from the above discussion that any Hausdorff compactification with one-point remainder is necessarily (isomorphic to) the Alexandroff compactification. In particular, if X is a compact Hausdorff space and p is a limit point of X (i.e. not an isolated point of X), X is the Alexandroff compactification of X∖{p}. Let X be any noncompact Tychonoff space. Under the natural partial ordering on the set C(X) of equivalence classes of compactifications, any minimal element is equivalent to the Alexandroff extension (Engelking, Theorem 3.5.12). It follows that a noncompact Tychonoff space admits a minimal compactification if and only if it is locally compact. Non-Hausdorff one-point compactifications: Let (X,τ) be an arbitrary noncompact topological space. One may want to determine all the compactifications (not necessarily Hausdorff) of X obtained by adding a single point, which could also be called one-point compactifications in this context. So one wants to determine all possible ways to give X∗ = X ∪ {∞} a compact topology such that X is dense in it and the subspace topology on X induced from X∗ is the same as the original topology. The last compatibility condition on the topology automatically implies that X is dense in X∗, because X is not compact, so it cannot be closed in a compact space. Non-Hausdorff one-point compactifications: Also, it is a fact that the inclusion map c: X → X∗ is necessarily an open embedding, that is, X must be open in X∗ and the topology on X∗ must contain every member of τ. So the topology on X∗ is determined by the neighbourhoods of ∞. Any neighborhood of ∞ is necessarily the complement in X∗ of a closed compact subset of X, as previously discussed. Non-Hausdorff one-point compactifications: The topologies on X∗ that make it a compactification of X are as follows: (1) the Alexandroff extension of X defined above, where we take the complements of all closed compact subsets of X as neighborhoods of ∞; this is the largest topology that makes X∗ a one-point compactification of X. (2) The open extension topology, where we add a single neighborhood of ∞, namely the whole space X∗; this is the smallest topology that makes X∗ a one-point compactification of X. (3) Any topology intermediate between the two topologies above. For neighborhoods of ∞ one has to pick a suitable subfamily of the complements of all closed compact subsets of X; for example, the complements of all finite closed compact subsets, or the complements of all countable closed compact subsets.
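To make the neighborhood description above concrete, consider the discrete space of natural numbers: its compact subsets are exactly the finite ones, so the open sets containing ∞ in the Alexandroff extension are precisely {∞} together with a cofinite subset of N. The following is a small illustrative sketch (not from the source; the helper names are hypothetical) that tests the resulting convergence criterion on finite prefixes of two sequences.

```python
# Sketch: in the Alexandroff extension of discrete N, a basic neighborhood of ∞
# excludes only a finite set C.  A sequence converges to ∞ iff its tail eventually
# avoids every such finite C; here we check that on finite prefixes only.
def tail_avoids(seq_prefix, C):
    """Does the second half of this finite prefix lie entirely outside the finite set C?"""
    tail = seq_prefix[len(seq_prefix) // 2:]
    return all(x not in C for x in tail)

prefix_escaping = [n for n in range(1000)]      # a_n = n, escapes every finite set
prefix_bounded = [n % 5 for n in range(1000)]   # a_n = n mod 5, keeps revisiting {0,...,4}

for k in (1, 10, 100):
    C = set(range(k))                           # a finite (hence compact) subset of N
    print(k, tail_avoids(prefix_escaping, C), tail_avoids(prefix_bounded, C))
# a_n = n passes the check for every C tried (it converges to ∞ in N*); n mod 5 fails.
```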
Further examples: Compactifications of discrete spaces The one-point compactification of the set of positive integers is homeomorphic to the space K = {0} ∪ {1/n | n is a positive integer} with the order topology. A sequence {an} in a topological space X converges to a point a in X if and only if the map f: N∗ → X given by f(n) = an for n in N and f(∞) = a is continuous. Here N has the discrete topology. Polyadic spaces are defined as topological spaces that are the continuous image of the power of a one-point compactification of a discrete, locally compact Hausdorff space. Compactifications of continuous spaces The one-point compactification of n-dimensional Euclidean space Rn is homeomorphic to the n-sphere Sn. As above, the map can be given explicitly as an n-dimensional inverse stereographic projection. Further examples: The one-point compactification of the product of κ copies of the half-closed interval [0,1), that is, of [0,1)κ, is (homeomorphic to) [0,1]κ. Since the closure of a connected subset is connected, the Alexandroff extension of a noncompact connected space is connected. However, a one-point compactification may "connect" a disconnected space: for instance, the one-point compactification of the disjoint union of a finite number n of copies of the interval (0,1) is a wedge of n circles. Further examples: The one-point compactification of the disjoint union of a countable number of copies of the interval (0,1) is the Hawaiian earring. This is different from the wedge of countably many circles, which is not compact. Given X compact Hausdorff and C any closed subset of X, the one-point compactification of X∖C is X/C, where the forward slash denotes the quotient space. If X and Y are locally compact Hausdorff, then (X×Y)∗ = X∗ ∧ Y∗, where ∧ is the smash product. Recall the definition of the smash product: A∧B = (A×B)/(A∨B), where A∨B is the wedge sum and, again, / denotes the quotient space. Further examples: As a functor The Alexandroff extension can be viewed as a functor from the category of topological spaces with proper continuous maps as morphisms to the category whose objects are continuous maps c: X → Y and for which the morphisms from c1: X1 → Y1 to c2: X2 → Y2 are pairs of continuous maps fX: X1 → X2, fY: Y1 → Y2 such that fY∘c1 = c2∘fX. In particular, homeomorphic spaces have isomorphic Alexandroff extensions.
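The radius formula from the inverse-stereographic-projection example earlier can be checked numerically. This is a small illustrative sketch (not from the source): it projects sample points of each latitude circle z = c from the north pole (0, 0, 1) onto the plane z = 0 and compares the resulting radius with √((1+c)/(1−c)).

```python
# Sketch: numerical check that latitude circles z = c on the unit sphere project,
# under stereographic projection from the north pole, to planar circles of
# radius sqrt((1+c)/(1-c)).
import math

def stereographic(x: float, y: float, z: float):
    """Stereographic projection of a point of the unit sphere (z < 1) to the plane z = 0."""
    return x / (1.0 - z), y / (1.0 - z)

for c in (-0.9, -0.5, 0.0, 0.5, 0.9):
    rho = math.sqrt(1.0 - c * c)                 # Euclidean radius of the circle z = c
    expected = math.sqrt((1.0 + c) / (1.0 - c))  # radius claimed in the example above
    for k in range(8):                           # sample eight points around the circle
        t = 2.0 * math.pi * k / 8.0
        u, v = stereographic(rho * math.cos(t), rho * math.sin(t), c)
        assert abs(math.hypot(u, v) - expected) < 1e-12
    print(f"z = {c:+.1f}: projected radius {expected:.4f} (verified)")
```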
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded