**Ring flip** Ring flip: In organic chemistry, a ring flip (also known as ring inversion or ring reversal) is the interconversion of cyclic conformers that have equivalent ring shapes (e.g., from one chair conformer to another chair conformer), resulting in the exchange of nonequivalent substituent positions. The overall process generally takes place over several steps, involving coupled rotations about several of the molecule's single bonds in conjunction with minor deformations of bond angles. Most commonly, the term refers to the interconversion of the two chair conformers of cyclohexane derivatives, which is specifically called a chair flip, although other cycloalkanes and inorganic rings undergo similar processes.

Chair flip: As stated above, a chair flip is a ring inversion specifically of cyclohexane (and its derivatives) from one chair conformer to another, often to reduce steric strain. The term "flip" is misleading, because the direction of each carbon remains the same; what changes is the orientation. A conformation is a unique structural arrangement of atoms, in particular one achieved through the rotation of single bonds. A conformer is a conformational isomer; the word is a blend of the two terms.

Chair flip: Cyclohexane There exist many different conformations for cyclohexane, such as chair, boat, and twist-boat, but the chair conformation is the most commonly observed state for cyclohexanes because it requires the least energy. The chair conformation minimizes both angle strain and torsional strain by having all carbon-carbon bond angles at 110.9° and all hydrogens staggered from one another. The molecular motions involved in a chair flip are detailed in the figure on the right: the half-chair conformation (D, 10.8 kcal/mol, C2 symmetry) is the energy maximum when proceeding from the chair conformer (A, 0 kcal/mol reference, D3d symmetry) to the higher-energy twist-boat conformer (B, 5.5 kcal/mol, D2 symmetry). The boat conformation (C, 6.9 kcal/mol, C2v symmetry) is a local energy maximum for the interconversion of the two mirror-image twist-boat conformers, the second of which is converted to the other chair conformation through another half-chair. At the end of the process, all axial positions have become equatorial and vice versa. The overall barrier of 10.8 kcal/mol corresponds to a rate constant of about 10^5 s^−1 at room temperature. Note that the twist-boat (D2) conformer and the half-chair (C2) transition state are in chiral point groups and are therefore chiral; in the figure, the two depictions of B and the two depictions of D are pairs of enantiomers.
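As a back-of-the-envelope check on the quoted rate (standard transition-state-theory arithmetic, not taken from the source), the Eyring equation with the 10.8 kcal/mol (≈ 45.2 kJ/mol) barrier gives, at $T = 298$ K,

$$k = \frac{k_B T}{h}\, e^{-\Delta G^{\ddagger}/RT} \approx \left(6.2 \times 10^{12}\ \mathrm{s^{-1}}\right) e^{-45.2/2.48} \approx 7 \times 10^{4}\ \mathrm{s^{-1}},$$

consistent with the ~10^5 s^−1 figure above (here $RT \approx 2.48$ kJ/mol).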
Chair flip: As a consequence of the chair flip, the axially-substituted and equatorially-substituted conformers of a molecule like chlorocyclohexane cannot be isolated at room temperature. However, in some cases, the isolation of individual conformers of substituted cyclohexane derivatives has been achieved at low temperatures (–150 °C).

Chair flip: Axial and equatorial positions As noted above, on transitioning from one chair conformer to another, all axial positions become equatorial and all equatorial positions become axial. Substituent groups in equatorial positions roughly follow the equator of the cyclohexane ring and are perpendicular to the axis, while substituents in axial positions roughly follow the imaginary axis of the carbon ring and are perpendicular to the equator.

Chair flip: Diaxial interactions, or axial-axial interactions, are the steric strain between an axial substituent and another axial group, typically a hydrogen, on the same side of a chair-conformation ring. The interaction is labeled by the numbers of the carbons the groups are attached to: a 1,3-diaxial interaction occurs between the atoms connected to the first and third carbons. The more such interactions, the more strain on the molecule, and the conformations with the most strain are less likely to be observed. An example is cyclopropane, which, because of its planar geometry, has six fully eclipsed carbon-hydrogen bonds, giving a strain of 116 kJ/mol (27.7 kcal/mol). Strain is also decreased when the carbon-carbon bond angles are close to or at the preferred angle of 109.5°, meaning the strain of a ring of six tetrahedral carbons is typically lower than that of most rings.

Examples: Cyclohexane is the prototype for low-energy degenerate ring flipping. Two 1H NMR signals should in principle be observed, corresponding to axial and equatorial protons. However, due to the cyclohexane chair flip, only one signal is seen for a solution of cyclohexane at room temperature, as the axial and equatorial protons interconvert rapidly on the NMR time scale. The coalescence temperature at 60 MHz is ca. –60 °C (at coalescence, the exchange rate constant equals $\pi\Delta\nu/\sqrt{2}$, where $\Delta\nu$ is the chemical-shift separation of the two signals in Hz).

Examples: Most compounds with nonplanar rings engage in degenerate ring flipping. One well-studied example is titanocene pentasulfide, whose inversion barrier is high relative to cyclohexane's. Hexamethylcyclotrisiloxane, on the other hand, has a very low barrier.

Examples: Bicycloalkanes are alkanes containing two rings that are connected by sharing two carbon atoms. Orientation within bicycloalkanes depends on the cis or trans orientation of the hydrogens shared by the different rings, rather than of the methyl groups present in the rings. Tetrodotoxin, one of the world's most potent toxins, is made up of multiple six-membered rings set in chair conformations, with every ring but one containing an atom other than carbon.
**Solid-state battery** Solid-state battery: A solid-state battery deploys solid-state technology, using solid electrodes and a solid electrolyte instead of the liquid or polymer gel electrolytes found in lithium-ion or lithium polymer batteries. While solid electrolytes were first discovered in the 19th century, several drawbacks prevented widespread application. Developments in the late 20th and early 21st century caused renewed interest in solid-state battery technologies, especially in the context of electric vehicles, starting in the 2010s.

Solid-state battery: Solid-state technology batteries can provide potential solutions for many problems of liquid Li-ion batteries, such as flammability, limited voltage, unstable solid-electrolyte interphase formation, poor cycling performance and strength. Materials proposed for use as solid electrolytes in solid-state batteries include ceramics (e.g., oxides, sulfides, phosphates) and solid polymers. Solid-state batteries have found use in pacemakers, RFID and wearable devices. The solid-state technology used in these batteries is potentially safer, with higher energy densities, but at a much higher cost. Challenges to widespread adoption include energy and power density, durability, material costs, sensitivity and stability.

History: Between 1831 and 1834, Michael Faraday discovered the solid electrolytes silver sulfide and lead(II) fluoride, which laid the foundation for solid-state ionics. By the late 1950s, several silver-conducting electrochemical systems employed solid electrolytes, but such systems possessed undesirable qualities, including low energy density and cell voltages, and high internal resistance. In 1967, the discovery of fast ionic conduction in β-alumina for a broad class of ions (Li+, Na+, K+, Ag+, and Rb+) kick-started excitement for, and the development of, new solid-state electrochemical devices with increased energy density. Most immediately, molten sodium / β-alumina / sulfur cells were developed at Ford Motor Company in the US and NGK in Japan. This excitement for solid-state electrolytes manifested in the discovery of new systems among both organics, i.e. poly(ethylene oxide) (PEO), and inorganics such as NASICON. However, many of these systems required operation at elevated temperatures and/or were expensive to produce, enabling only limited commercial deployment. A new class of solid-state electrolyte developed by Oak Ridge National Laboratory, lithium phosphorus oxynitride (LiPON), emerged in the 1990s. While LiPON was successfully used to make thin-film lithium-ion batteries, such applications were limited by the cost of depositing the thin-film electrolyte and the small capacities accessible in the thin-film format. In 2011, the landmark work of Kamaya et al. demonstrated the first solid electrolyte, Li10GeP2S12 (LGPS), capable of achieving a bulk ionic conductivity in excess of liquid electrolyte counterparts at room temperature. With this, bulk solid-ion conductors could finally compete technologically with their Li-ion counterparts, leading to the modern era of solid-state research.

Commercial research and development since 2010: As technology advanced into the new millennium, researchers and companies in the automotive and transportation industries experienced revitalized interest in solid-state battery technologies.
In 2011, Bolloré launched a fleet of its BlueCar model cars, first in cooperation with the carsharing service Autolib', and later released them to retail customers. The car was meant to showcase the company's diversity of electric-powered cells in application, and featured a 30 kWh lithium metal polymer (LMP) battery with a polymeric electrolyte, created by dissolving lithium salt in a co-polymer (polyoxyethylene).

Commercial research and development since 2010: In 2012, Toyota followed suit and began conducting experimental research into solid-state batteries for applications in the automotive industry in order to remain competitive in the EV market. At the same time, Volkswagen began partnering with small technology companies specializing in the technology.

Commercial research and development since 2010: A series of technological breakthroughs ensued. In 2013, researchers at the University of Colorado Boulder announced the development of a solid-state lithium battery, with a solid composite cathode based on an iron-sulfur chemistry, that promised higher energy capacity than existing SSBs. In 2017, John Goodenough, the co-inventor of Li-ion batteries, unveiled a solid-state glass battery, using a glass electrolyte and an alkali-metal anode consisting of lithium, sodium or potassium. Later that year, Toyota announced the deepening of its decades-long partnership with Panasonic, including a collaboration on solid-state batteries. Due to its early intensive research and coordinated collaborations with other industry leaders, Toyota holds the most SSB-related patents. However, other car makers independently developing solid-state battery technologies quickly joined a growing list that includes BMW, Honda, Hyundai Motor Company and Nissan. Other automotive-related companies, such as spark-plug maker NGK, have retrofitted their business expertise and models to cater to evolving demand for ceramic-based solid-state batteries, in the face of the perceived obsolescence of the conventional fossil-fuel paradigm. Major developments continued into 2018, when Solid Power, spun off from the University of Colorado Boulder research team, received $20 million in funding from Samsung and Hyundai to establish a small manufacturing line that could produce copies of its all-solid-state, rechargeable lithium-metal battery prototype, with a predicted 10 megawatt-hours of capacity per year. QuantumScape, another solid-state battery startup spun out of a collegiate research group (in this case, Stanford University), drew attention that same year when Volkswagen announced a $100 million investment into the team's research, becoming the largest stakeholder, joined by investor Bill Gates. With the goal of establishing a joint project for mass production of solid-state batteries, Volkswagen endowed QuantumScape with an additional $200 million in June 2020, and QuantumScape went public on the NYSE on November 29, 2020, through a merger with Kensington Capital Acquisition, to raise additional equity capital for the project.
QuantumScape has "scheduled mass production to begin in the second half of 2024". Qing Tao started the first Chinese production line of solid-state batteries in 2018 as well, with the initial intention of supplying SSBs for "special equipment and high-end digital products"; however, the company has spoken with several car manufacturers with the intent of potentially expanding into the automotive space. In July 2021, Murata Manufacturing announced that it would begin mass production of all-solid-state batteries in the coming months, aiming to supply them to manufacturers of earphones and other wearables.

Commercial research and development since 2010: The battery capacity is up to 25 mAh at 3.8 V, making it suitable for small mobile devices such as earbuds, but not for electric vehicles. Lithium-ion cells used in electric vehicles typically offer 2,000 to 5,000 mAh at similar voltage: an EV would need at least 100 times as many of the Murata cells to provide equivalent power.

Commercial research and development since 2010: Ford Motor Company and BMW funded the startup Solid Power with $130 million, and as of 2022 the company has raised a total of $540 million. In September 2021, Toyota announced its plan to use a solid-state battery in some future car models, starting with hybrid models in 2025, due to the cost and lower power requirements. In early 2022, Swiss Clean Battery (SCB) announced plans to open the world's first factory for sustainable solid-state batteries in Frauenfeld by 2024, with an initial production of 1.2 GWh planned to be scaled to 7.6 GWh. In January 2022, ProLogium signed a technical cooperation agreement with Mercedes-Benz, a subsidiary of the Daimler Group. The money invested by Mercedes-Benz will be used for solid-state battery development and production preparations. In July 2022, Svolt announced the production of a 20 Ah electric battery with an energy density of 350–400 Wh/kg.

Materials: Solid-state electrolyte (SSE) candidate materials include ceramics such as lithium orthosilicate, glass, sulfides and RbAg4I5. Mainstream oxide solid electrolytes include Li1.5Al0.5Ge1.5(PO4)3 (LAGP), Li1.4Al0.4Ti1.6(PO4)3 (LATP), perovskite-type Li3xLa2/3-xTiO3 (LLTO), and garnet-type Li6.4La3Zr1.4Ta0.6O12 (LLZO) with metallic Li. The thermal stability versus Li of the four SSEs follows the order LAGP < LATP < LLTO < LLZO. Chloride superionic conductors have been proposed as another promising solid electrolyte. They are ionically conductive and deformable like sulfides, but are not troubled by the sulfides' poor oxidation stability. Moreover, their cost is considered lower than that of oxide and sulfide SSEs. The present chloride solid electrolyte systems can be divided into two types, Li3MCl6 and Li2M2/3Cl4, where M includes Y, Tb–Lu, Sc, and In. The cathodes are lithium-based; variants include LiCoO2, LiNi1/3Co1/3Mn1/3O2, LiMn2O4, and LiNi0.8Co0.15Al0.05O2. The anodes vary more and are affected by the type of electrolyte. Examples include In, Si, GexSi1−x, SnO–B2O3, SnS–P2S5, Li2FeS2, FeS, NiP2, and Li2SiS3. One promising cathode material is Li-S, which (as part of a solid lithium anode/Li2S cell) has a theoretical specific capacity of 1670 mAh g−1, "ten times larger than the effective value of LiCoO2". Sulfur makes an unsuitable cathode in liquid electrolyte applications because it is soluble in most liquid electrolytes, dramatically decreasing the battery's lifetime. Sulfur is instead studied in solid-state applications.
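The theoretical Li-S figure can be cross-checked with standard electrochemical arithmetic (not from the source): assuming full conversion of sulfur to Li2S, i.e. two electrons per sulfur atom,

$$q = \frac{2F}{3.6\,M_\mathrm{S}} = \frac{2 \times 96485\ \mathrm{C\,mol^{-1}}}{3.6 \times 32.06\ \mathrm{g\,mol^{-1}}} \approx 1672\ \mathrm{mAh\,g^{-1}},$$

in agreement with the ~1670 mAh g−1 quoted above (the factor 3.6 converts coulombs to milliamp-hours).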
Recently, a ceramic textile was developed that showed promise in a Li-S solid-state battery. This textile facilitated ion transmission while also handling sulfur loading, although it did not reach the projected energy density: the result, "with a 500-μm-thick electrolyte support and 63% utilization of electrolyte area", was 71 Wh/kg, while the projected energy density was 500 Wh/kg. Li–O2 cells also have a high theoretical capacity; the main issue with these devices is that the anode must be sealed from the ambient atmosphere, while the cathode must be in contact with it. A Li/LiFePO4 battery shows promise as a solid-state application for electric vehicles. A 2010 study presented this material as a safe alternative to rechargeable batteries for EVs that "surpass the USABC-DOE targets". A μSi||SSE||NCM811 cell with a pure silicon anode was assembled by Darren H. S. Tan et al. using a μSi anode (purity of 99.9 wt%), a solid-state electrolyte (SSE), and a lithium nickel cobalt manganese oxide (NCM811) cathode. This kind of solid-state battery demonstrated a high current density of up to 5 mA cm−2, a wide working temperature range (−20 °C to 80 °C), and an areal capacity (for the anode) of up to 11 mAh cm−2 (2890 mAh/g). At the same time, after 500 cycles at 5 mA cm−2, the batteries still retained 80% of their capacity, the best performance of a μSi all-solid-state battery reported so far. Chloride solid electrolytes also show promise over conventional oxide solid electrolytes, owing to their theoretically higher ionic conductivity and better formability; their exceptionally high oxidation stability and high ductility further add to their performance. In particular, a lithium mixed-metal chloride family of solid electrolytes, Li2InxSc0.666-xCl4, developed by Zhou et al., shows high ionic conductivity (2.0 mS cm−1) over a wide range of compositions. Its advantages include compatibility with bare rather than coated cathode active materials, and low electronic conductivity. Alternative, cheaper chloride solid electrolyte compositions with lower, but still impressive, ionic conductivity are found in the Li2ZrCl6 solid electrolyte, which maintains a high room-temperature ionic conductivity (0.81 mS cm−1), deformability, and a high humidity tolerance.

Uses: Solid-state batteries are potentially useful in pacemakers, RFIDs, wearable devices, and electric vehicles.

Uses: Electric vehicles Hybrid and plug-in electric cars use a variety of battery technologies, including Li-ion, nickel–metal hydride (NiMH), lead–acid, and the electric double-layer capacitor (or ultracapacitor), with Li-ion dominating the market. Honda stated in 2022 that it planned to start operating a demonstration line for the production of all-solid-state batteries in spring 2024, and Nissan announced that, by FY2028, it aims to launch an electric vehicle with all-solid-state batteries developed in-house. In June 2023, Toyota updated its strategy for battery electric vehicles, announcing that it will not use commercial solid-state batteries until at least 2027.

Uses: Wearables High energy density and sustained performance even in harsh environments are expected to enable new wearable devices that are smaller and more reliable than ever.
Uses: Equipment in space In March 2021, industrial manufacturer Hitachi Zosen Corporation announced a solid-state battery that it claimed has one of the highest capacities in the industry and a wider operating temperature range, potentially suitable for harsh environments like space. A test mission was launched in February 2022, and in August the Japan Aerospace Exploration Agency (JAXA) announced that the solid-state batteries had operated properly in space, powering camera equipment in the Japanese Experiment Module Kibō on the International Space Station (ISS).

Uses: Drones Being lighter and more powerful than traditional lithium-ion batteries, solid-state batteries are a natural fit for drones. Vayu Aerospace, a drone manufacturer and designer, noted an increased flight time after incorporating them into its G1 long-flight drone.

Challenges: Cost Thin-film solid-state batteries are expensive to make and employ manufacturing processes thought to be difficult to scale, requiring expensive vacuum deposition equipment. As a result, costs for thin-film solid-state batteries are prohibitive in consumer applications. It was estimated in 2012 that, based on then-current technology, a 20 Ah solid-state battery cell would cost US$100,000, and a high-range electric car would require between 800 and 1,000 such cells. Likewise, cost has impeded the adoption of thin-film solid-state batteries in other areas, such as smartphones.

Challenges: Temperature and pressure sensitivity Low-temperature operation may be challenging; solid-state batteries have historically had poor performance there. Solid-state batteries with ceramic electrolytes require high pressure to maintain contact with the electrodes, and those with ceramic separators may break from mechanical stress. In November 2022, a Japanese research group consisting of Kyoto University, Tottori University and Sumitomo Chemical announced that it had managed to operate solid-state batteries with a capacity of 230 Wh/kg stably without applied pressure, by using newly copolymerized materials for the electrolyte. In June 2023, a Japanese research group at the Graduate School of Engineering at Osaka Metropolitan University announced that it had succeeded in stabilizing the high-temperature phase of Li3PS4 (α-Li3PS4) at room temperature, via rapid heating to crystallize the Li3PS4 glass.

Challenges: Interfacial resistance High interfacial resistance between a cathode and solid electrolyte has been a long-standing problem for all-solid-state batteries.

Challenges: Interfacial instability The interfacial instability of the electrode-electrolyte contact has always been a serious problem in solid-state batteries. After the solid-state electrolyte contacts the electrode, chemical and/or electrochemical side reactions at the interface usually produce a passivated interface, which impedes the diffusion of Li+ across the electrode-SSE interface. Upon high-voltage cycling, some SSEs may also undergo oxidative degradation.

Challenges: Dendrites Solid lithium (Li) metal anodes in solid-state batteries are replacement candidates for those in lithium-ion batteries, promising higher energy densities, safety, and faster recharging times. Such anodes tend to suffer from the formation and growth of Li dendrites, non-uniform metal growths which penetrate the electrolyte and lead to electrical short circuits. This shorting leads to energy discharge, overheating, and sometimes fires or explosions due to thermal runaway.
Li dendrites also reduce coulombic efficiency. The exact mechanisms of dendrite growth remain a subject of research. Studies of metal dendrite growth in solid electrolytes began with research on molten sodium / sodium-β-alumina / sulfur cells at elevated temperature. In these systems, dendrites sometimes grow as a result of micro-crack extension due to plating-induced pressure at the sodium / solid electrolyte interface. However, dendrite growth may also occur due to chemical degradation of the solid electrolyte. In Li-ion solid electrolytes stable against Li metal, dendrites propagate primarily due to pressure build-up at the electrode / solid electrolyte interface, leading to crack extension. Meanwhile, for solid electrolytes which are chemically unstable against their respective metal, interphase growth and eventual cracking often prevent dendrites from forming. Dendrite growth in solid-state Li-ion cells can be mitigated by operating the cells at elevated temperature, or by using residual stresses to fracture-toughen electrolytes, thereby deflecting dendrites and delaying dendrite-induced short-circuiting. Aluminum-containing electronic rectifying interphases between the solid-state electrolyte and the lithium metal anode have also been shown to be effective in preventing dendrite growth.

Challenges: Mechanical failure A common failure mechanism in solid-state batteries is mechanical failure through volume changes in the anode and cathode during charge and discharge, due to the addition and removal of Li-ions from the host structures.

Challenges: Cathode Cathodes typically consist of active cathode particles mixed with SSE particles to assist ion conduction. As the battery charges and discharges, the cathode particles change in volume, typically on the order of a few percent. This volume change leads to the formation of interparticle voids, which worsens contact between the cathode and SSE particles, resulting in a significant loss of capacity due to restricted ion transport. One proposed solution is to take advantage of the anisotropy of volume change in the cathode particles. As many cathode materials experience volume changes only along certain crystallographic directions, if the secondary cathode particles are grown along a crystallographic direction which does not expand greatly on charge/discharge, the change in volume of the particles can be minimized. Another proposed solution is to mix different cathode materials with opposite expansion trends in the proper ratio, such that the net volume change of the cathode is zero. For instance, LiCoO2 (LCO) and LiNi0.9Mn0.05Co0.05O2 (NMC) are two well-known cathode materials for Li-ion batteries. LCO has been shown to undergo volume expansion when discharged, while NMC has been shown to undergo volume contraction when discharged. Thus, a composite cathode of LCO and NMC at the correct ratio could undergo minimal volume change on discharge, as the contraction of the NMC is compensated by the expansion of the LCO.

Challenges: Anode Ideally, a solid-state battery would use a pure lithium metal anode, due to its high energy capacity. However, lithium undergoes a large increase in volume during charge, around 5 µm per 1 mAh/cm2 of plated Li. For electrolytes with a porous microstructure, this expansion increases the pressure, which can lead to creep of Li metal through the electrolyte pores and shorting of the cell.
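The ~5 µm per mAh/cm2 figure follows from lithium's density and theoretical specific capacity; the short sketch below (a back-of-the-envelope check, not from the source) reproduces it.

```python
# Back-of-the-envelope check of the ~5 um per mAh/cm^2 plating figure,
# using lithium's theoretical specific capacity and density.

F = 96485.0        # Faraday constant, C/mol
M_LI = 6.94        # molar mass of lithium, g/mol
RHO_LI = 0.534     # density of lithium, g/cm^3

# Theoretical specific capacity of Li metal (one electron per atom),
# converted from C/g to mAh/g (1 mAh = 3.6 C): ~3862 mAh/g.
SPEC_CAPACITY = F / M_LI / 3.6

def plated_thickness_um(areal_capacity_mAh_cm2: float) -> float:
    """Thickness of lithium (micrometers) plated per unit electrode area."""
    grams_per_cm2 = areal_capacity_mAh_cm2 / SPEC_CAPACITY
    thickness_cm = grams_per_cm2 / RHO_LI
    return thickness_cm * 1e4  # cm -> um

print(plated_thickness_um(1.0))  # ~4.85 um, matching the quoted ~5 um per mAh/cm^2
print(plated_thickness_um(3.0))  # ~14.5 um for a typical 3 mAh/cm^2 areal capacity
```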
Lithium metal has a relatively low melting point of 453 K and a low activation energy for self-diffusion of 50 kJ/mol, indicating a high propensity to creep significantly at room temperature. It has been shown that at room temperature lithium undergoes power-law creep, in which the temperature is high enough relative to the melting point that dislocations in the metal can climb out of their glide planes to avoid obstacles. The creep stress under power-law creep is given by
$$\sigma_{\text{creep}} = \left(\frac{\dot{\varepsilon}}{A}\right)^{1/n} \exp\!\left(\frac{Q_c}{nRT}\right),$$
where $R$ is the gas constant, $T$ is the temperature, $\dot{\varepsilon}$ is the uniaxial strain rate, and $\sigma_{\text{creep}}$ is the creep stress; for lithium metal $n \approx 6.6$, $Q_c \approx 37\ \mathrm{kJ\,mol^{-1}}$, and $A \approx 10^{5}\ \mathrm{Pa^{-n}\,s^{-1}}$. For lithium metal to be used as an anode, great care must be taken to keep the cell pressure at relatively low values, on the order of its yield stress of 0.8 MPa; the normal operating cell pressure for a lithium metal anode is anywhere from 1–7 MPa. Possible strategies to minimize stress on the lithium metal are to use cells with springs of a chosen spring constant, or controlled pressurization of the entire cell. Another strategy is to sacrifice some energy capacity and use a lithium metal alloy anode, which typically has a higher melting temperature than pure lithium metal and hence a lower propensity to creep. While these alloys expand considerably when lithiated, often to a greater degree than lithium metal, they also possess improved mechanical properties that allow them to operate at pressures around 50 MPa. This higher cell pressure also has the added benefit of possibly mitigating void formation in the cathode.

Advantages: Solid-state battery technology is believed to deliver higher energy densities (2.5×). It may avoid the use of dangerous or toxic materials found in commercial batteries, such as organic electrolytes. Because most liquid electrolytes are flammable and solid electrolytes are nonflammable, solid-state batteries are believed to have a lower risk of catching fire, so fewer safety systems are needed, further increasing energy density at the module or pack level. Recent studies show that internal heat generation under thermal runaway is only ~20–30% of that of conventional batteries with liquid electrolyte. Solid-state battery technology is also believed to allow faster charging; higher voltage and longer cycle life are possible as well.

Thin film solid state batteries: Background The earliest thin-film solid-state battery was made by Keiichi Kanehori in 1986 and was based on a Li electrolyte. However, at that time the technology was insufficient to power larger electronic devices, so it was not fully developed. In recent years, there has been much research in the field: Garbayo demonstrated in 2018 that "polyamorphism" exists besides crystalline states for thin-film Li-garnet solid-state batteries, and Moran demonstrated in 2021 that ceramic films can be manufactured in the desired thickness range of 1–20 μm.

Thin film solid state batteries: Structure Anode materials: Li is favored because of its storage properties; alloys of Al, Si and Sn are also suitable as anodes. Cathode materials: these must be lightweight and have good cycling capacity and high energy density; they usually include LiCoO2, LiFePO4, TiS2, V2O5 and LiMnO2. Preparation techniques Some methods are listed below. Physical methods: Magnetron sputtering (MS) is one of the most widely used processes for thin-film manufacturing and is based on physical vapor deposition.
Ion-beam deposition (IBD) is similar to magnetron sputtering, but no bias is applied and no plasma forms between the target and the substrate. Pulsed laser deposition (PLD) uses laser pulses with high power densities, up to about 10^8 W cm−2. Vacuum evaporation (VE) is a method for preparing α-Si thin films; during this process, Si evaporates and deposits on a metallic substrate. Chemical methods: Electrodeposition (ED), used for manufacturing Si films, is a convenient and economically viable technique. Chemical vapor deposition (CVD) is a deposition technique that produces thin films of high quality and purity. Glow discharge plasma deposition (GDPD) is a mixed physicochemical process in which the synthesis temperature is raised to reduce the residual hydrogen content of the films.

Development of thin film systems Lithium oxygen- and nitrogen-based polymer thin-film electrolytes have found full use in solid-state batteries. Non-Li-based thin-film solid-state batteries have also been studied, such as the Ag-doped germanium chalcogenide thin-film solid electrolyte system. A barium-doped thin-film system has also been studied, whose thickness can be as small as 2 μm; in addition, Ni can be a component in thin films. Other methods of fabricating electrolytes for thin-film solid-state batteries include (1) the electrostatic-spray deposition technique, (2) the DSM-Soulfill process, and (3) the use of MoO3 nanobelts to improve the performance of lithium-based thin-film solid-state batteries.

Advantages Compared with other batteries, thin-film batteries have both high gravimetric and high volumetric energy densities, which are important indicators of stored-energy performance. In addition to high energy density, thin-film solid-state batteries have a long lifetime, outstanding flexibility and low weight. These properties make them suitable for use in fields such as electric vehicles, military facilities and medical devices.

Thin film solid state batteries: Challenges Performance and efficiency are constrained by the cell geometry. The current drawn from a thin-film battery depends largely on the geometry and contact quality of the electrolyte/cathode and electrolyte/anode interfaces. The low thickness of the electrolyte and the interfacial resistance at the electrode-electrolyte interface affect the output and integration of thin-film systems.

Thin film solid state batteries: During the charge-discharge process, considerable volumetric change causes loss of material.
**Ispell** Ispell: Ispell is a spelling checker for Unix that supports most Western languages. It offers several interfaces, including a programmatic interface for use by editors such as Emacs. Unlike GNU Aspell, ispell will only suggest corrections that are based on a Damerau–Levenshtein distance of 1; it will not attempt to guess more distant corrections based on English pronunciation rules. Ispell: Ispell has a very long history that can be traced back to a program that was originally written in 1971 in PDP-10 Assembly language by R. E. Gorin, and later ported to the C programming language and expanded by many others. It is currently maintained by Geoff Kuenning. The generalized affix description system introduced by ispell has since been imitated by other spelling checkers such as MySpell. Ispell: Like most computerized spelling checkers, ispell works by reading an input file word by word, stopping when a word is not found in its dictionary. Ispell then attempts to generate a list of possible corrections and presents the incorrect word and any suggestions to the user, who can then choose a correction, replace the word with a new one, leave it unchanged, or add it to the dictionary. Ispell: Ispell pioneered the idea of a programming interface, which was originally intended for use by Emacs. Other applications have since used the feature to add spell-checking to their own interface, and GNU Aspell has adopted the same interface so that it can be used with the same set of applications. There are ispell dictionaries for most widely spoken Western languages. Ispell is available under a specific open-source license.
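To illustrate the distance-1 suggestion model (a hypothetical Python sketch of the idea, not ispell's actual C implementation): candidates are every string one deletion, transposition, replacement, or insertion away from the misspelled word, intersected with the dictionary.

```python
import string

def edits1(word: str) -> set[str]:
    """All strings within Damerau-Levenshtein distance 1 of `word`."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word: str, dictionary: set[str]) -> list[str]:
    """Dictionary words reachable by a single edit, ispell-style."""
    return sorted(edits1(word) & dictionary)

print(suggest("speling", {"spelling", "spieling", "splint"}))
# ['spelling', 'spieling']  -- "splint" is more than one edit away
```

Because the candidate set stops at a single edit, more distant misspellings get no suggestion, which is exactly the trade-off against GNU Aspell described above.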
**Truespeech** Truespeech: Truespeech is a proprietary audio codec produced by the DSP Group. It is designed for encoding voice data at low bitrates (8.5 kbit/s for 8 kHz samples) and for embedding into DSP chips. Truespeech had been integrated into Windows Media Player in older versions of Windows, but it has not been supported since Windows Vista. It was also the format used by the voice chat features of Yahoo! Messenger. It is implemented through Tsd32.dll. A Truespeech decoder was implemented in the 0.5 release of FFmpeg.
**1917 Philadelphia Phillies season** 1917 Philadelphia Phillies season: The following lists the events of the 1917 Philadelphia Phillies season.

Regular season: Season standings; Record vs. opponents; Roster.

Player stats: Batting (Starters by position; Other batters). Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in.

Player stats: Pitching (Starting pitchers; Other pitchers; Relief pitchers). Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts.
**Companion ad** Companion ad: In online advertising, a companion ad is a display ad shown alongside a video or audio ad, usually displayed on top of the player and/or at its side. It is displayed at the same time as the master ad and offers the user a spot to click, and it can continue to be displayed after the master ad has finished playing. They are called companion ads because they are thought of as companions to the main video or audio ad. Companion ads are seen as giving an advantage to brands because customers retain access to the brand after the video ad ends, in the event they gain interest. Services like YouTube (video ads) or Spotify (audio ads) allow advertisers to add companion ads to their master ads.

Companion ad: For video ads, as standardized by the Interactive Advertising Bureau, the companion ad is defined alongside its master ad inside the VAST response. It is characterized by the creative's resolution and type, the file URL, and a click-through URL. Companion ads are also used on connected TVs as a new way for brands and advertisers to engage more deeply with TV viewers. In that context, the companion ad is clicked with the remote control instead of the mouse (PC) or finger (mobile). Services like Roku also support inserting companion ads on TVs through set-top boxes.
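For illustration, a minimal sketch of how a companion might be declared in a VAST document (the element names follow the IAB VAST schema; the sizes and URLs are placeholder values):

```xml
<VAST version="3.0">
  <Ad id="example-ad">
    <InLine>
      <!-- ... master (linear) video ad omitted ... -->
      <Creatives>
        <Creative>
          <CompanionAds>
            <!-- the companion's resolution is given by width/height -->
            <Companion width="300" height="250">
              <!-- creative type and file URL of the companion -->
              <StaticResource creativeType="image/png">
                <![CDATA[https://example.com/companion-300x250.png]]>
              </StaticResource>
              <!-- landing page opened when the user clicks the companion -->
              <CompanionClickThrough>
                <![CDATA[https://example.com/landing]]>
              </CompanionClickThrough>
            </Companion>
          </CompanionAds>
        </Creative>
      </Creatives>
    </InLine>
  </Ad>
</VAST>
```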
**LG Cookie (KP500)** LG Cookie (KP500): The LG Cookie, model no. KP500, or Cyon Cooky (쿠키) in South Korea, is a discontinued touchscreen mobile phone announced on 30 September 2008. LG targeted the entry-level touchscreen market, keeping the cost of the Cookie as low as possible by omitting some of the features found on higher-end products, such as 3G. The LG Cookie was highly popular, and is credited with starting the "cheap touchscreen craze".

Features: Its main feature is a 3-inch, 240 x 400 pixel touchscreen, powered by an ARM9E CPU with a clock rate of 175 MHz. It has a 3.15 MP camera capable of capturing still images and MPEG-4 video at 12 frame/s, but has no flash module. The LG KP500 Cookie also has an FM radio receiver with RDS and an accelerometer motion sensor with support for auto-rotating the display. Software installed on the handset included a document viewer for DOC, XLS, and PDF formats, and a Java MIDP 2.0 games player. The battery is capable of a standby time of up to 350 hours and a talk time of up to 3 hours 30 minutes.

Features: The phone was originally released in four colors: Black, Vandyke Brown, Anodizing Silver, and Elegant Gold. This was later increased to ten colors, including white, pink and purple.

Model differences: The LG KP501 is a variant of the KP500 with slightly differently shaped front buttons and some minor software changes. The South Korean Cooky model has a slightly different weight and dimensions compared to the Cookie.

Sales and reception: With the Cookie, LG brought out a basic and affordable mobile phone, but one that included a touchscreen. The Register reviewed it and gave it a score of 70%. GSM Arena in its review wrote that the LG Cookie "simply makes sense", adding that "it doesn't seek to impress but is straightforward, credible and convincing." Softpedia in its review said its best features are its "cheap price and the exceptional look and finishes", with the biggest drawback being difficulty of use in sunlight. The LG Cookie recorded over two million unit sales worldwide in the first five months after its launch in December 2008. It sold 1.2 million units in Europe, 600,000 in Asia and emerging markets, and 100,000 in Korea, where LG claimed that it was the most popular handset as of March 2009. LG planned to expand the Cookie's availability from 40 to 60 countries as part of its push to hit 13 million in sales worldwide. In July 2009, LG reported sales of 5 million for the Cookie, making it the company's fastest-selling touchscreen phone yet. At the end of the year, LG reported that it had shipped over 10 million units, including over five million in Europe, two million in Latin America and two million in Asia. At launch, the Cookie was virtually the first basic touchscreen phone on the market. Its popularity led to a swathe of rivals in 2009 offering similar touch phones at low prices, such as Samsung's S5230 Star/Tocco Lite. The LG Cookie's successor, the LG Pop, was introduced in late 2009.

Later LG Cookie models: After the original LG KP500, the Cookie brand was extended by LG with many more budget phones released in the series, for various different markets.
**Psiphon** Psiphon: Psiphon is a free and open-source Internet censorship circumvention tool that uses a combination of secure communication and obfuscation technologies, such as a VPN, SSH, and a Web proxy. Psiphon is a centrally managed and geographically diverse network of thousands of proxy servers, using a performance-oriented, single- and multi-hop routing architecture. Psiphon is specifically designed to support users in countries considered to be "enemies of the Internet". The codebase is developed and maintained by Psiphon, Inc., which operates systems and technologies designed to assist Internet users to securely bypass the content-filtering systems used by governments to impose censorship of the Internet. The original concept for Psiphon (1.0) was developed by the Citizen Lab at the University of Toronto, building upon previous generations of web proxy software systems, such as the "Safe Web" and "Anonymizer" systems.

Psiphon: In 2007, Psiphon, Inc. was established as an independent Ontario corporation that develops advanced censorship circumvention systems and technologies. Psiphon, Inc. and the Citizen Lab at the Munk School of Global Affairs, University of Toronto occasionally collaborate on research projects through the Psi-Lab partnership. Psiphon currently consists of three separate but related open-source software projects: 3.0 – a cloud-based run-time tunneling system; 2.0 – a cloud-based secure proxy system; 1.0 – the original home-based server software (released by the Citizen Lab in 2004, rewritten and launched in 2006). Psiphon 1.X is no longer supported by Psiphon, Inc. or the Citizen Lab.

History: The original concept for Psiphon envisioned an easy-to-use and lightweight Internet proxy, designed to be installed and operated by individual computer users, who would then host private connections for friends and family in countries where the Internet is censored. According to Nart Villeneuve, "The idea is to get (users) to install this on their computer, and then deliver the location of that circumventor, to people in filtered countries by the means they know to be the most secure. What we're trying to build is a network of trust among people who know each other, rather than a large tech network that people can just tap into." Psiphon 1.0 was launched by the Citizen Lab on 1 December 2006 as open-source software. In early 2007, Psiphon, Inc. was established as a Canadian corporation independent of the Citizen Lab and the University of Toronto. The original code (1.6) was made available under the GNU General Public License. In 2008, Psiphon was awarded the Netexplorateur award by the French Senate. In 2009, Psiphon was recognized with The Economist Best New Media Award by Index on Censorship. In 2011, Psiphon 1.X was officially retired and is no longer actively supported by Psiphon, Inc. or the Citizen Lab. In 2008, Psiphon, Inc. was awarded two sub-grants by the Internews-operated SESAWE (Open Internet) projects. The funding came from the European Parliament and the US State Department Internet Freedom program, administered by the Bureau of Democracy, Human Rights, and Labor (DRL). The objective of these grants was to develop Psiphon into a scalable anti-censorship solution capable of supporting large numbers of users across different geographic regions. The core development team grew to include a group of experienced security and encryption software engineers who had previously developed Ciphershare, a secure document management system. In 2010, Psiphon, Inc.
began providing services to the Broadcasting Board of Governors (US), the US Department of State, and the British Broadcasting Corporation. As of 2015, Psiphon, Inc. operated on the basis of revenues generated from commercial operations.

History: Communication via Psiphon played a major role in media coverage of the 2020 Belarusian protests.

History: In 2012, Psiphon, Inc. began development of a mobile version of Psiphon 3 for use with phones running Android. In 2021, the monthly user base surged from 5,000 to over 14 million due to the Myanmar protests; state censorship of many other social media websites is thought to be the cause. During the 2021 Cuban protests, over one million protesters began using the tool after the government shut down many social media websites.
**Quasiperiodic motion** Quasiperiodic motion: In mathematics and theoretical physics, quasiperiodic motion is, in rough terms, the type of motion executed by a dynamical system containing a finite number (two or more) of incommensurable frequencies. That is, if we imagine that the phase space is modelled by a torus $T$ (that is, the variables are periodic like angles), the trajectory of the system is modelled by a curve on $T$ that wraps around the torus without ever exactly coming back on itself.

Quasiperiodic motion: A quasiperiodic function on the real line is the type of function (continuous, say) obtained from a function on $T$ by composition with a curve $\mathbb{R} \to T$ which is linear (when lifted from $T$ to its covering Euclidean space). It is therefore oscillating, with a finite number of underlying frequencies. (NB: the sense in which theta functions and the Weierstrass zeta function in complex analysis are said to have quasi-periods with respect to a period lattice is something distinct from this.) The theory of almost periodic functions is, roughly speaking, for the same situation but allowing $T$ to be a torus with an infinite number of dimensions.
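As a concrete sketch (the standard textbook example, not drawn from this article): on the two-torus $T^2 = \mathbb{R}^2/\mathbb{Z}^2$, consider the linear flow

$$\phi_t(\theta_1, \theta_2) = (\theta_1 + \omega_1 t,\; \theta_2 + \omega_2 t) \pmod 1.$$

If $\omega_1/\omega_2$ is irrational, no orbit ever closes up and every orbit is dense in $T^2$; composing with a continuous $F : T^2 \to \mathbb{R}$ yields a quasiperiodic function $f(t) = F(\omega_1 t, \omega_2 t)$ with the two underlying frequencies $\omega_1$ and $\omega_2$.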
**Chinese multiplication table** Chinese multiplication table: The Chinese multiplication table is the first requisite for using the rod calculus for carrying out multiplication, division, the extraction of square roots, and the solving of equations based on place-value decimal notation. It was known in China as early as the Spring and Autumn period, and survived through the age of the abacus; pupils in elementary school today still must memorise it. The Chinese multiplication table consists of eighty-one terms. It was often called the nine-nine table, or simply nine-nine, because in ancient times the nine-nine table started with 9×9: nine nines beget eighty-one, eight nines beget seventy-two ... seven nines beget sixty-three, etc., down to two ones beget two. In the opinion of Wang Guowei, a noted scholar, the nine-nine table probably started with nine because of the "worship of nine" in ancient China; the emperor was considered the "nine five supremacy" in the Book of Changes. See also Numbers in Chinese culture § Nine.

Chinese multiplication table: It is also known as the nine-nine song (or poem), as the table consists of eighty-one lines with four or five Chinese characters per line; this creates a constant metre and renders the multiplication table as a poem. For example, 9×9=81 would be rendered as "九九八十一", or "nine nine eighty-one", with the word for "begets" ("得") implied. This makes it easy to learn by heart. A shorter version of the table consists of only forty-five sentences, as terms such as "nine eights beget seventy-two" are identical to "eight nines beget seventy-two", so there is no need to learn them twice. When the abacus replaced the counting rods in the Ming dynasty, many authors on the abacus advocated the use of the full table instead of the shorter one. They claimed that memorising it, so that results come without a moment's thought, makes abacus calculation much faster. The existence of the Chinese multiplication table is evidence of an early positional decimal system: otherwise a much larger multiplication table would be needed, with terms beyond 9×9.

The Nine-nine song text in Chinese: It can be read in either row-major or column-major order.

The Nine-nine table in Chinese literature: Many Chinese classics make reference to the nine-nine table: Zhoubi Suanjing: "nine nine eighty one". Guan Zi has sentences of the form "three eights beget twenty-four, three sevens beget twenty-one". The Nine Chapters on the Mathematical Art: "Fu Xi invented the art of nine-nine". In Huainanzi, there were eight sentences: "nine nines beget eighty-one", "eight nines beget seventy-two", all the way to "two nines beget eighteen". A nine-nine table manuscript was discovered in Dunhuang. Xia Houyang's Computational Canons: "To learn the art of multiplication and division, one must understand nine-nine".

The Nine-nine table in Chinese literature: The Song dynasty author Hong Zhai's Notebooks said: "three threes as nine, three fours as twelve, two eights as sixteen, four fours as sixteen, three nines as twenty-seven, four nines as thirty-six, six sixes as thirty-six, five eights as forty, five nines as forty-five, seven nines as sixty-three, eight nines as seventy-two, nine nines as eighty-one". This suggests that the table has begun with the smallest term since the Song dynasty.
The Nine-nine table in Chinese literature: The Song dynasty mathematician Yang Hui's mathematics textbook Suan fa tong bian ben mo says: "You must learn the nine-nine song from one one equals one to nine nine eighty-one, in small-to-large order." The Yuan dynasty mathematician Zhu Shijie's Suanxue qimeng (Elementary Mathematics) reads: "one one equals one, two by two equals four, one by three equals three, two by three equals six, three by three equals nine, one by four equals four ... nine by nine equals eighty-one."

Archeological artifacts: At the end of the 19th century, archeologists unearthed pieces of written bamboo script from the Han dynasty in Xinjiang. One such Han dynasty bamboo script, from Liusha, is a remnant of the nine-nine table. It starts with nine: nine nine eighty-one, eight nine seventy-two, seven nine sixty-three, eight eight sixty-four, seven eight fifty-six, six eight forty-eight, ... two two gets four, altogether 1,100 Chinese words.

Archeological artifacts: In 2002, Chinese archeologists unearthed a written wood script from a two-thousand-year-old site from the Warring States period, on which was written: "four eight thirty-two, five eight forty, six eight forty-eight." This is the earliest artifact of the nine-nine table that has been unearthed, indicating that the nine-nine table, as well as a positional decimal system, had appeared by the Warring States period.

Archeological artifacts: The nine-nine table was transmitted to Japan, and appeared in a Japanese primary mathematics book in the 10th century, beginning with 9×9.
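A short sketch (illustrative, in modern notation rather than the classical wording) of why the full table has eighty-one terms and the abbreviated one forty-five:

```python
# Full nine-nine table: all products a x b for a, b in 1..9 -> 81 terms.
full = [(a, b, a * b) for a in range(1, 10) for b in range(1, 10)]

# Shorter version: "nine eights" duplicates "eight nines", so keep only
# terms with a <= b -> 9 + 8 + ... + 1 = 45 terms.
short = [(a, b, a * b) for a in range(1, 10) for b in range(a, 10)]

print(len(full), len(short))  # 81 45
print(short[-1])              # (9, 9, 81): "nine nines beget eighty-one"
```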
**Tetracene** Tetracene: Tetracene, also called naphthacene, is a polycyclic aromatic hydrocarbon. It has the appearance of a pale orange powder. Tetracene is the four-ringed member of the series of acenes.

Tetracene: Tetracene is a molecular organic semiconductor, used in organic field-effect transistors (OFETs) and organic light-emitting diodes (OLEDs). In May 2007, researchers from two Japanese universities, Tohoku University in Sendai and Osaka University, reported an ambipolar light-emitting transistor made of a single tetracene crystal. Ambipolar means that the electric charge is transported by both positively charged holes and negatively charged electrons. Tetracene can also be used as a gain medium in dye lasers and as a sensitiser in chemiluminescence.

Tetracene: Jan Hendrik Schön, during his time at Bell Labs (1997–2002), claimed to have developed an electrically pumped laser based on tetracene. However, his results could not be reproduced, and the claim is considered a case of scientific fraud. Naphthacene is the main backbone component of the tetracycline class of antibiotics.
**Torricelli's law** Torricelli's law: Torricelli's law, also known as Torricelli's theorem, is a theorem in fluid dynamics relating the speed of fluid flowing from an orifice to the height of fluid above the opening. The law states that the speed $v$ of efflux of a fluid through a sharp-edged hole at the bottom of a tank filled to a depth $h$ is the same as the speed that a body (in this case a drop of water) would acquire in falling freely from a height $h$, i.e. $v = \sqrt{2gh}$, where $g$ is the acceleration due to gravity. For example, a water depth of $h = 1\ \mathrm{m}$ gives $v = \sqrt{2 \times 9.81 \times 1} \approx 4.4\ \mathrm{m/s}$. This expression comes from equating the kinetic energy gained, $\tfrac{1}{2}mv^2$, with the potential energy lost, $mgh$, and solving for $v$. The law was discovered (though not in this form) by the Italian scientist Evangelista Torricelli in 1643. It was later shown to be a particular case of Bernoulli's principle.

Derivation: Under the assumptions of an incompressible fluid with negligible viscosity, Bernoulli's principle states that the hydraulic energy is constant at any two points in the flowing liquid:
$$\frac{v_1^2}{2} + g y_1 + \frac{p_1}{\rho_1} = \frac{v_2^2}{2} + g y_2 + \frac{p_2}{\rho_2}.$$
Here $v$ is the fluid speed, $g$ is the acceleration due to gravity, $y$ is the height above some reference point, $p$ is the pressure, and $\rho$ is the density.

Derivation: In order to derive Torricelli's formula, the first point is taken at the liquid's surface, and the second just outside the opening. Since the liquid is assumed to be incompressible, $\rho_1$ is equal to $\rho_2$; both can be represented by one symbol $\rho$. The pressures $p_1$ and $p_2$ are typically both atmospheric pressure, so $p_1 = p_2 \Rightarrow p_1 - p_2 = 0$. Furthermore, $y_1 - y_2$ is equal to the height $h$ of the liquid's surface over the opening:
$$\frac{v_1^2}{2} + gh = \frac{v_2^2}{2}.$$
The velocity of the surface $v_1$ can be related to the outflow velocity $v_2$ by the continuity equation $v_1 A = v_2 A_A$, where $A_A$ is the orifice's cross-section and $A$ is the (cylindrical) vessel's cross-section. Renaming $v_2$ to $v_A$ ($A$ as in aperture) gives
$$\frac{v_A^2}{2}\,\frac{A_A^2}{A^2} + gh = \frac{v_A^2}{2} \;\Rightarrow\; gh = \frac{v_A^2}{2}\left(1 - \frac{A_A^2}{A^2}\right).$$

Derivation:
$$\Rightarrow\; v_A = \sqrt{\frac{2gh}{1 - A_A^2/A^2}}.$$
Torricelli's law is obtained as the special case when the opening $A_A$ is very small relative to the horizontal cross-section of the container $A$:
$$v_A = \sqrt{2gh}.$$
Torricelli's law can only be applied when viscous effects can be neglected, which is the case for water flowing out through orifices in vessels.

Derivation: Experimental verification: Spouting can experiment Every physical theory must be verified by experiments. The spouting can experiment consists of a cylindrical vessel filled with water, with several holes at different heights. It is designed to show that, in a liquid with an open surface, pressure increases with depth: the lower a jet is on the can, the more powerful it is, and the fluid exit velocity is greater further down. The outflowing jet forms a downward parabola, which reaches farther out the larger the distance between the orifice and the surface. The shape of the parabola $y(x)$ depends only on the outflow velocity and can be determined from the fact that every molecule of the liquid follows a ballistic trajectory (see projectile motion) whose initial velocity is the outflow velocity $v_A$:
$$y(x) = -\frac{g}{2 v_A^2}\,x^2.$$

Derivation: The results confirm the correctness of Torricelli's law very well.

Discharge and time to empty a cylindrical vessel: Assuming that a vessel is cylindrical with fixed cross-sectional area $A$, with an orifice of area $A_A$ at the bottom, the rate of change $dh/dt$ of the water level height is not constant.
The water volume in the vessel changes due to the discharge $\dot V$ out of the vessel:
$$\frac{dV}{dt} = A\,\frac{dh}{dt} = -\dot V = -A_A v_A = -A_A\sqrt{2gh} \;\Rightarrow\; \frac{A}{\sqrt{h}}\,dh = -A_A\sqrt{2g}\,dt.$$
Integrating both sides and re-arranging, we obtain
$$T = \frac{A}{A_A}\sqrt{\frac{2H}{g}},$$
where $H$ is the initial height of the water level and $T$ is the total time taken to drain all the water and hence empty the vessel.

Discharge and time to empty a cylindrical vessel: This formula has several implications. If a tank with volume $V$, cross-section $A$ and height $H$, so that $V = AH$, is fully filled, then the time to drain all the water is
$$T = \frac{V}{A_A}\sqrt{\frac{2}{gH}}.$$
This implies that tall tanks with the same filling volume drain faster than wide ones. Lastly, we can re-arrange the above equation to determine the height of the water level $h(t)$ as a function of time $t$:
$$h(t) = H\left(1 - \frac{t}{T}\right)^2,$$
where $H$ is the height of the container and $T$ is the discharge time given above.

Discharge and time to empty a cylindrical vessel: Discharge experiment, coefficient of discharge The discharge theory can be tested by measuring the emptying time $T$ or a time series of the water level $h(t)$ within the cylindrical vessel. In many cases, such experiments do not confirm the presented discharge theory: when comparing the theoretical predictions of the discharge process with measurements, very large differences can be found, and in reality the tank usually drains much more slowly. Looking at the discharge formula $\dot V = A_A v_A = A_A\sqrt{2gh}$, two quantities could be responsible for this discrepancy: the outflow velocity or the effective outflow cross-section. In 1738, Daniel Bernoulli attributed the discrepancy between the theoretical and the observed outflow behavior to the formation of a vena contracta, which reduces the outflow cross-section from the orifice's cross-section $A_A$ to the contracted cross-section $A_C$, and stated that the discharge is
$$\dot V = A_C v_A = A_C\sqrt{2gh}.$$
Actually, this is confirmed by state-of-the-art experiments in which the discharge, the outflow velocity and the cross-section of the vena contracta were measured. Here it was also shown that the outflow velocity is predicted extremely well by Torricelli's law and that no velocity correction (like a "coefficient of velocity") is needed. The problem remains how to determine the cross-section of the vena contracta. This is normally done by introducing a discharge coefficient $\mu$ which relates the discharge to the orifice's cross-section and Torricelli's law:
$$\dot V_{\text{real}} = \mu\, A_A \sqrt{2gh}, \qquad \text{with}\quad \mu = \frac{A_C}{A_A}.$$
For low-viscosity liquids (such as water) flowing out of a round hole in a tank, the discharge coefficient is on the order of 0.65. By discharging through a round tube or hose, the coefficient of discharge can be increased to over 0.9. For rectangular openings, the discharge coefficient can be up to 0.67, depending on the height-width ratio.
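The emptying-time and level formulas above are easy to evaluate; the sketch below (illustrative numbers chosen for convenience, with the discharge coefficient applied as an optional correction) shows a typical use.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drain_time(A: float, A_orifice: float, H: float, mu: float = 1.0) -> float:
    """Time (s) to empty a cylindrical vessel: T = (A / (mu * A_A)) * sqrt(2H/g).
    mu is the discharge coefficient (1.0 = ideal Torricelli; ~0.65 for a
    round hole in a tank, per the discussion above)."""
    return (A / (mu * A_orifice)) * math.sqrt(2 * H / G)

def level(t: float, H: float, T: float) -> float:
    """Water level h(t) = H * (1 - t/T)^2 during draining."""
    return H * (1 - t / T) ** 2

# A tank of 1 m^2 cross-section filled to 1 m, draining through an ideal
# 1 cm^2 orifice:
T = drain_time(A=1.0, A_orifice=1e-4, H=1.0)
print(round(T), "s")          # ~4515 s, about 75 minutes
print(level(T / 2, 1.0, T))   # 0.25 m: three-quarters of the height gone at half time
```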
Applications: Horizontal distance covered by the jet of liquid If $h$ is the height of the orifice above the ground and $H$ is the height of the liquid column from the ground (the height of the liquid's surface), then the horizontal distance covered by the jet of liquid to reach the same level as the base of the liquid column can be easily derived. Since $h$ is the vertical height traveled by a particle of the jet stream, we have from the laws of falling bodies $h = \frac{1}{2}gt^2 \Rightarrow t = \sqrt{\frac{2h}{g}},$ where $t$ is the time taken by the jet particle to fall from the orifice to the ground. If the horizontal efflux velocity is $v$, then the horizontal distance traveled by the jet particle during the time duration $t$ is $D = vt = v\sqrt{\frac{2h}{g}}.$ Applications: Since the water level is $H - h$ above the orifice, the horizontal efflux velocity is $v = \sqrt{2g(H-h)},$ as given by Torricelli's law. Thus, we have from the two equations $D = 2\sqrt{h(H-h)}.$ The location of the orifice that yields the maximum horizontal range is obtained by differentiating the above equation for $D$ with respect to $h$ and solving $dD/dh = 0$. Here we have $\frac{dD}{dh} = \frac{H - 2h}{\sqrt{h(H-h)}}.$ Solving $dD/dh = 0$, we obtain $h^* = \frac{H}{2},$ and the maximum range $D_{\max} = H$ (a numerical check appears in the sketch at the end of this article). Applications: Clepsydra problem A clepsydra is a clock that measures time by the flow of water. It consists of a pot with a small hole at the bottom through which the water can escape. The amount of escaping water gives the measure of time. As given by Torricelli's law, the rate of efflux through the hole depends on the height of the water; and as the water level diminishes, the discharge is not uniform. A simple solution is to keep the height of the water constant. This can be attained by letting a constant stream of water flow into the vessel, the overflow of which is allowed to escape from the top through another hole. Thus, having a constant height, the discharging water from the bottom can be collected in another cylindrical vessel with uniform graduation to measure time. This is an inflow clepsydra. Applications: Alternatively, by carefully selecting the shape of the vessel, the water level in the vessel can be made to decrease at a constant rate. By measuring the level of water remaining in the vessel, the time can be measured with uniform graduation. This is an example of an outflow clepsydra. Since the water outflow rate is higher when the water level is higher (due to more pressure), the fluid's volume should be more than a simple cylinder when the water level is high. That is, the radius should be larger when the water level is higher. Let the radius $r$ increase with the height of the water level $h$ above the exit hole of area $A_A$. Applications: That is, $r = f(h)$. We want to find the radius such that the water level decreases at a constant rate, i.e. $dh/dt = c$. At a given water level $h$, the water surface area is $A = \pi r^2$. The instantaneous rate of change in water volume is $\frac{dV}{dt} = A\frac{dh}{dt} = \pi r^2 c.$ From Torricelli's law, the rate of outflow is $\frac{dV}{dt} = A_A v = A_A\sqrt{2gh}.$ From these two equations, $A_A\sqrt{2gh} = \pi r^2 c \;\Rightarrow\; h = \frac{\pi^2 c^2}{2g A_A^2} r^4.$ Thus, the radius of the container should change in proportion to the quartic root of its height: $r \propto \sqrt[4]{h}.$ Likewise, if the shape of the vessel of the outflow clepsydra cannot be modified according to the above specification, then we need to use a non-uniform graduation to measure time. The emptying-time formula above tells us the time should be calibrated as the square root of the discharged water height, $T \propto \sqrt{h}$. More precisely, $\Delta t = \frac{A}{A_A}\sqrt{\frac{2}{g}}\left(\sqrt{h_1} - \sqrt{h_2}\right),$ where $\Delta t$ is the time taken by the water level to fall from the height $h_1$ to the height $h_2$. Torricelli's original derivation: Evangelista Torricelli's original derivation can be found in the second book, 'De motu aquarum', of his 'Opera Geometrica': He starts with a tube AB (Figure (a)) filled up with water to the level A. Then a narrow opening is drilled at the level of B and connected to a second vertical tube BC. Due to the hydrostatic principle of communicating vessels the water lifts up to the same filling level AC in both tubes (Figure (b)). When finally the tube BC is removed (Figure (c)), the water should again lift up to this height, which is named AD in Figure (c).
The reason for this behavior is the fact that a droplet's velocity in falling from a height A to B equals the initial velocity needed to lift a droplet from B to A. Torricelli's original derivation: When performing such an experiment, only the height C (instead of D in Figure (c)) will be reached, which contradicts the proposed theory. Torricelli attributes this defect to the air resistance and to the fact that the descending drops collide with ascending drops. Torricelli's argumentation is, as a matter of fact, wrong, because the pressure in a free jet is the surrounding atmospheric pressure, while the pressure in a communicating vessel is the hydrostatic pressure. At that time the concept of pressure was unknown.
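As a quick numerical check of the maximum-range result derived above (the script is a hypothetical illustration, not from the source), evaluating $D = 2\sqrt{h(H-h)}$ over a grid of orifice heights recovers $h^* = H/2$ and $D_{\max} = H$:

```python
import numpy as np

H = 1.0                          # height of the liquid column (arbitrary units)
h = np.linspace(0.01, 0.99, 99)  # candidate orifice heights above the ground
D = 2 * np.sqrt(h * (H - h))     # horizontal range formula derived above
print(h[np.argmax(D)], D.max())  # -> 0.5 and 1.0, i.e. h* = H/2 and D_max = H
```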
**Pseudoboehmite** Pseudoboehmite: Pseudoboehmite is an aluminium compound with the chemical composition AlO(OH). It consists of finely crystalline boehmite, but its water content is higher than that of boehmite. History: Calvet et al. coined the name pseudoboehmite in 1952 when they synthesized a pure aluminium hydroxide gel. Its XRD pattern is similar to that of boehmite, but the relative intensities of the peaks differ. Morphology: Pseudoboehmite is essentially finely crystalline boehmite which consists of the same or similar octahedral layers in the xz plane but lacks three-dimensional order because of a restricted number of unit cells in the y direction. It contains a significant number of crystallites comprising a single unit cell along y, or single octahedral layers. It also contains more water, which is commonly intercalated between octahedral layers, normally randomly but sometimes regularly arranged. Morphology: The water content consists of adsorbed and chemically bound water. The higher water content compared to boehmite can be explained by a smaller crystallite size: while boehmite consists of relatively long AlOOH chains with terminal H2O groups, the chains in pseudoboehmite are significantly shorter, which translates into a significantly higher specific water content due to the terminal water groups. It is a "poorly crystallized" Al3+ compound with the composition Al2O3·xH2O (1.0 < x < 2.0), with interplanar spacings increased in the [020] direction to 0.67 nm in comparison with 0.617 nm for boehmite. At higher temperatures pseudoboehmite is transformed to γ-alumina, but the pore size distribution remains unchanged up to 1000 °C. At around 1100 °C, however, the specific surface area decreases significantly because of sintering related to a transformation to α-Al2O3. Synthesis: Pseudoboehmite can be synthesized by aging non-crystalline aluminium hydroxide gels at pH values between 5.0 and 7.4. Uses: Pseudoboehmite is used as a binder for FCC catalysts and adsorbents. It is also a raw material for activated alumina.
**Strolling** Strolling: Strolling is walking along or through an area at a leisurely pace, a pastime enjoyed worldwide as a leisure activity. The object of strolling is to walk at a slightly slower pace in an attempt to absorb the surroundings. Works featuring the flâneur, French for a "strolling urban observer", have appeared in European and American literature since the late 18th century. Etymology: The verb "stroll" may have originated from a c. 1600 cant word, possibly derived from the German strollen, itself a variant of strolchen, meaning "to roam, travel about aimlessly, drift, rove"; the German noun Strolch refers to any sort of vagabond or rogue. Before the American Revolution, "stroller" was the British word for a vagabond. The noun "stroll" came from the verb in 1814, and the sense of "stroller" as a child's push-chair was coined in the 1920s. The modern-day usage of the word "stroll" does not differ greatly from its older derivatives. Health outcomes: Strolling is not an aerobic exercise: the body's energy demands whilst strolling do not require extra oxygen. Physicians therefore do not recommend strolling, but rather more vigorous, aerobic forms of exercise. The American Medical Association's Committee on Exercise and Physical Fitness has stated that "walking briskly, not just strolling, is the simplest and also one of the best forms of exercise". Researchers investigating the cognitive benefits of exercise have also concluded that strolling produces no significant gains to cognitive health as people age, whereas brisk walking and other everyday activities, such as housework or gardening, have demonstrated significant benefits in preventing cognitive decline in an aging population. Other researchers, at the Mayo Clinic, posit that all activity that is not sleeping, eating, or sports activity still contributes to overall health. This has been named "non-exercise activity thermogenesis" (NEAT) and includes everything from strolling to fidgeting in the analysis of energy consumption. NEAT research has generated many ideas about the social design of offices, schools, and living spaces to promote any physical activity, such as removing places to sit to promote standing and pacing. The body also operates at a steadier level when strolling, with a more even heart rate and blood pressure. International traditions: In Spain, a stroll is called a paseo and is a popular after-dinner pastime. The participants, whose membership is egalitarian, wear their best clothing; activities include chatting with neighbors and acquaintances, flirting, and gossiping. Several streets in countries with a Spanish cultural history incorporate the word: Paseo de la Reforma in Mexico City, Paseo del Prado, Havana, Paseo de Roxas in the Philippines, and Buenos Aires's Paseo La Plaza. International traditions: The similar and widespread custom in Italy of an evening walk is called la passeggiata. Strolling or walking (Russian: гулять, gulyat') is very common in Russian society. In contrast to many western countries, strolling is very common among young people in Russia, who often arrange simply to go for a walk. Besides the verb, the experience itself, covering the time span of the walk, is called a progulka (Russian: прогулка). Walking is so important in Russian culture that gulyat' also serves as a synonym for "to party". The 19th-century Russian literary critic Vissarion Belinsky described St.
Petersburg as the center of urban strolling in that country, by contrast with Moscow. Rural strolls have long been a staple of Russian fiction and songs; Tchaikovsky composed a musical accompaniment to the Nikolay Grekov poem “We haven’t long to stroll”.
**Liver scratch test** Liver scratch test: The liver scratch test (also known as Lazar's test) is a technique used by medical professionals during a physical exam to locate the inferior border of the liver in order to approximate the size of a patient's liver. The technique was first credited to Burton-Opitz in 1925, when it was used to identify the cardiac silhouette, though there are references to similar techniques used before this. The liver scratch test can be used when other exam techniques for approximating liver size are ineffective or unavailable, and is thought to be most useful when the abdomen is distended or too tender for direct palpation, when the abdominal muscles are too rigid, or when the patient is obese. Technique: The liver scratch test is a type of auscultatory percussion that uses the difference in sound transmission between solid and hollow organs in the abdominal cavity to locate the inferior edge of the liver. The test is most commonly performed by placing the stethoscope below the xiphoid process and lightly scratching the skin parallel to the expected liver edge. The examiner begins scratching in the right lower quadrant of the abdomen along the midclavicular line and moves superiorly until the sound abruptly increases in volume. This location of suddenly increased auscultation volume is marked as the inferior edge of the liver and can then be used to determine the overall liver size. Multiple variations on the exam also exist, including different stethoscope placements (such as over the costal margin or the liver), percussing the abdomen instead of scratching, or scratching in different patterns, i.e. in circular or lateral directions. Controversy: Despite being commonly taught to medical trainees, the liver scratch test's value as part of the abdominal physical exam has been controversial, as it has historically performed poorly. While it has been proposed to abandon the test altogether, some studies have suggested that the scratch test is at least as accurate as percussion overall in identifying the liver edge, and even more accurate for young trainees.
**SCVP** SCVP: The Server-based Certificate Validation Protocol (SCVP) is an Internet protocol for determining the path between an X.509 digital certificate and a trusted root (Delegated Path Discovery) and the validation of that path (Delegated Path Validation) according to a particular validation policy. Overview: When a relying party receives a digital certificate and needs to decide whether to trust the certificate, it first needs to determine whether the certificate can be linked to a trusted certificate. This process may involve chaining the certificate back through several issuers, as in the following case: Equifax Secure eBusiness CA-1 → ACME Co Certificate Authority → Joe User. Currently, the creation of this chain of certificates is performed by the application receiving the signed message. The process is termed "path discovery" and the resulting chain is called a "certification path". Many Windows applications, such as Outlook, use the Cryptographic Application Programming Interface (CAPI) for path discovery. Overview: CAPI is capable of building certification paths using any certificates that are installed in Windows certificate stores or provided by the relying party application. The Equifax CA certificate, for example, comes installed in Windows as a trusted certificate. If CAPI knows about the ACME Co CA certificate, or if it is included in a signed email and made available to CAPI by Outlook, CAPI can create the certification path above. However, if CAPI cannot find the ACME Co CA certificate, it has no way to verify that Joe User is trusted. Overview: SCVP provides a standards-based client-server protocol for solving this problem using Delegated Path Discovery (DPD). When using DPD, a relying party asks a server for a certification path that meets its needs. The SCVP client's request contains the certificate that it is attempting to trust and a set of trusted certificates. The SCVP server's response contains a set of certificates making up a valid path between the certificate in question and one of the trusted certificates. The response may also contain proof of revocation status, such as OCSP responses, for the certificates in the path. Overview: Once a certification path has been constructed, it needs to be validated. An algorithm for validating certification paths is defined in RFC 5280 section 6 (signatures, expiration, name constraints, policy constraints, basic constraints, etc.). Again, this could be done locally by the client or by the SCVP server with Delegated Path Validation (DPV). SCVP facilitates federated PKIs, such as one with a Bridge Certificate Authority.
**James–Stein estimator** James–Stein estimator: The James–Stein estimator is a biased estimator of the mean, $\theta$, of (possibly) correlated Gaussian distributed random variables $Y = \{Y_1, Y_2, \dots, Y_m\}$ with unknown means $\{\theta_1, \theta_2, \dots, \theta_m\}$. It arose sequentially in two main published papers. The earlier version of the estimator was developed in 1956, when Charles Stein reached the relatively shocking conclusion that while the then-usual estimate of the mean, the sample mean, is admissible when $m \le 2$, it is inadmissible when $m \ge 3$. Stein proposed a possible improvement to the estimator that shrinks the sample means $\theta_i$ towards a more central mean vector $\nu$ (which can be chosen a priori or commonly as the "average of averages" of the sample means, given all samples share the same size). This observation is commonly referred to as Stein's example or paradox. In 1961, Willard James and Charles Stein simplified the original process. It can be shown that the James–Stein estimator dominates the "ordinary" least squares approach, meaning the James–Stein estimator has a lower or equal mean squared error than the "ordinary" least squares estimator. Setting: Let $Y \sim N_m(\theta, \sigma^2 I)$, where the vector $\theta$ is the unknown mean of $Y$, which is $m$-variate normally distributed with known covariance matrix $\sigma^2 I$. We are interested in obtaining an estimate, $\hat\theta$, of $\theta$, based on a single observation, $y$, of $Y$. In real-world applications, this is a common situation in which a set of parameters is sampled and the samples are corrupted by independent Gaussian noise. Since this noise has mean zero, it may be reasonable to use the samples themselves as an estimate of the parameters. This approach is the least squares estimator, $\hat\theta_{LS} = y$. Stein demonstrated that in terms of mean squared error $\operatorname{E}[\|\theta - \hat\theta\|^2]$, the least squares estimator $\hat\theta_{LS}$ is sub-optimal to shrinkage-based estimators such as the James–Stein estimator $\hat\theta_{JS}$. The paradoxical result, that there is a (possibly) better and never any worse estimate of $\theta$ in mean squared error as compared to the sample mean, became known as Stein's example. The James–Stein estimator: If $\sigma^2$ is known, the James–Stein estimator is given by $\hat\theta_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|y\|^2}\right) y.$ The James–Stein estimator: James and Stein showed that the above estimator dominates $\hat\theta_{LS}$ for any $m \ge 3$, meaning that the James–Stein estimator always achieves lower mean squared error (MSE) than the maximum likelihood estimator. By definition, this makes the least squares estimator inadmissible when $m \ge 3$. Notice that if $(m-2)\sigma^2 < \|y\|^2$, then this estimator simply takes the natural estimator $y$ and shrinks it towards the origin 0. In fact this is not the only direction of shrinkage that works. Let $\nu$ be an arbitrary fixed vector of dimension $m$. Then there exists an estimator of the James–Stein type that shrinks toward $\nu$, namely $\hat\theta_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|y - \nu\|^2}\right)(y - \nu) + \nu.$ The James–Stein estimator: The James–Stein estimator dominates the usual estimator for any $\nu$. A natural question to ask is whether the improvement over the usual estimator is independent of the choice of $\nu$. The answer is no. The improvement is small if $\|\theta - \nu\|$ is large. Thus to get a very great improvement some knowledge of the location of $\theta$ is necessary. Of course this is the quantity we are trying to estimate, so we don't have this knowledge a priori. But we may have some guess as to what the mean vector is. This can be considered a disadvantage of the estimator: the choice is not objective, as it may depend on the beliefs of the researcher.
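A minimal NumPy sketch of this shrinkage (the function name is my own; it assumes a known noise variance $\sigma^2$ and covers both the origin-shrinking form and the general target $\nu$):

```python
import numpy as np

def james_stein(y, sigma2, nu=None):
    """James-Stein-type estimate of the mean from one observation y,
    shrinking toward the fixed vector nu (the origin when nu is omitted)."""
    y = np.asarray(y, dtype=float)
    m = y.size
    nu = np.zeros(m) if nu is None else np.asarray(nu, dtype=float)
    d = y - nu
    # Shrinkage factor 1 - (m - 2) * sigma^2 / ||y - nu||^2. Note that it
    # can become negative when y lies close to nu (see "Improvements" below).
    factor = 1.0 - (m - 2) * sigma2 / d.dot(d)
    return nu + factor * d
```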
Nonetheless, James and Stein's result is that any finite guess $\nu$ improves the expected MSE over the maximum-likelihood estimator, which is tantamount to using an infinite $\nu$, surely a poor guess. Interpretation: Seeing the James–Stein estimator as an empirical Bayes method gives some intuition to this result: one assumes that $\theta$ itself is a random variable with prior distribution $\sim N(0, A)$, where $A$ is estimated from the data itself. Estimating $A$ only gives an advantage compared to the maximum-likelihood estimator when the dimension $m$ is large enough; hence it does not work for $m \le 2$. The James–Stein estimator is a member of a class of Bayesian estimators that dominate the maximum-likelihood estimator. A consequence of the above discussion is the following counterintuitive result: when three or more unrelated parameters are measured, their total MSE can be reduced by using a combined estimator such as the James–Stein estimator; whereas when each parameter is estimated separately, the least squares (LS) estimator is admissible. A quirky example would be estimating the speed of light, tea consumption in Taiwan, and hog weight in Montana, all together. The James–Stein estimator always improves upon the total MSE, i.e., the sum of the expected squared errors of each component. Therefore, the total MSE in measuring light speed, tea consumption, and hog weight would improve by using the James–Stein estimator. However, any particular component (such as the speed of light) would improve for some parameter values and deteriorate for others. Thus, although the James–Stein estimator dominates the LS estimator when three or more parameters are estimated, any single component does not dominate the respective component of the LS estimator. Interpretation: The conclusion from this hypothetical example is that measurements should be combined if one is interested in minimizing their total MSE. For example, in a telecommunication setting, it is reasonable to combine channel tap measurements in a channel-estimation scenario, as the goal is to minimize the total channel estimation error. Conversely, there could be objections to combining channel estimates of different users, since no user would want their channel estimate to deteriorate in order to improve the average network performance. The James–Stein estimator has also found use in fundamental quantum theory, where the estimator has been used to improve the theoretical bounds of the entropic uncertainty principle for more than three measurements. An intuitive derivation and interpretation is given by the Galtonian perspective. Under this interpretation, we aim to predict the population means using the imperfectly measured sample means. The equation of the OLS estimator in a hypothetical regression of the population means on the sample means gives an estimator of the form of either the James–Stein estimator (when we force the OLS intercept to equal 0) or of the Efron-Morris estimator (when we allow the intercept to vary). Improvements: Despite the intuition that the James–Stein estimator shrinks the maximum-likelihood estimate $y$ toward $\nu$, the estimate actually moves away from $\nu$ for small values of $\|y - \nu\|$, as the multiplier on $y - \nu$ is then negative. This can be easily remedied by replacing this multiplier by zero when it is negative. The resulting estimator is called the positive-part James–Stein estimator and is given by $\hat\theta_{JS+} = \left(1 - \frac{(m-2)\sigma^2}{\|y - \nu\|^2}\right)^{\!+}(y - \nu) + \nu.$ Improvements: This estimator has a smaller risk than the basic James–Stein estimator.
It follows that the basic James–Stein estimator is itself inadmissible. It turns out, however, that the positive-part estimator is also inadmissible. This follows from a general result which requires admissible estimators to be smooth. Extensions: The James–Stein estimator may seem at first sight to be a result of some peculiarity of the problem setting. In fact, the estimator exemplifies a very wide-ranging effect: namely, the fact that the "ordinary" or least squares estimator is often inadmissible for simultaneous estimation of several parameters. This effect has been called Stein's phenomenon, and has been demonstrated for several different problem settings, some of which are briefly outlined below. Extensions: James and Stein demonstrated that the estimator presented above can still be used when the variance $\sigma^2$ is unknown, by replacing it with the standard estimator of the variance, $\hat\sigma^2 = \frac{1}{m}\sum_i (y_i - \bar y)^2$. The dominance result still holds under the same condition, namely $m > 2$. The results in this article are for the case when only a single observation vector $y$ is available. For the more general case when $n$ vectors are available, the results are similar: $\hat\theta_{JS} = \left(1 - \frac{(m-2)\sigma^2/n}{\|\bar y\|^2}\right)\bar y,$ where $\bar y$ is the $m$-length average of the $n$ observations. The work of James and Stein has been extended to the case of a general measurement covariance matrix, i.e., where measurements may be statistically dependent and may have differing variances. A similar dominating estimator can be constructed, with a suitably generalized dominance condition. This can be used to construct a linear regression technique which outperforms the standard application of the LS estimator. Extensions: Stein's result has been extended to a wide class of distributions and loss functions. However, this theory provides only an existence result, in that explicit dominating estimators were not actually exhibited. It is quite difficult to obtain explicit estimators improving upon the usual estimator without specific restrictions on the underlying distributions.
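To illustrate the positive-part variant and the dominance claim numerically, here is a small Monte Carlo sketch (the setup, with $m = 10$, $\sigma^2 = 1$, and shrinkage toward the origin, is my own choice, not from the source):

```python
import numpy as np

def positive_part_js(y, sigma2):
    """Positive-part James-Stein estimate, shrinking toward the origin."""
    y = np.asarray(y, dtype=float)
    factor = max(0.0, 1.0 - (y.size - 2) * sigma2 / y.dot(y))
    return factor * y

rng = np.random.default_rng(1)
theta = 3 * rng.normal(size=10)                 # arbitrary "true" means
ys = theta + rng.normal(size=(20_000, 10))      # noisy observations, sigma^2 = 1
mse_ls = np.mean(((ys - theta) ** 2).sum(axis=1))   # least squares: close to m = 10
mse_js = np.mean([((positive_part_js(y, 1.0) - theta) ** 2).sum() for y in ys])
print(mse_ls, mse_js)   # the James-Stein total MSE should come out below the LS one
```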
**Automatic lubrication** Automatic lubrication: Automatic lubrication (also called autolube or auto-lube) refers to a lubrication system on a two-stroke engine, in which the oil is automatically mixed with fuel and manual oil-fuel pre-mixing is not necessary. The oil is contained in a reservoir that connects to a small oil pump in the engine and needs to be refilled periodically. Automatic lubrication: This system is commonly used on motorcycles as it eliminates the need to pre-mix fuel and two-stroke oil. The Vespa is an example where pre-mixing of two-stroke oil is still required. Automatic lubrication was introduced for motorcycles by Velocette in 1913. An example of the application of an automatic lubrication system is the Suzuki AX100 motorcycle, which has a separate oil reservoir on its right side that supplies the cylinder with two-stroke oil in proportion to engine speed. Advantages: Lubrication is consistent and oil consumption is greatly reduced; lubrication is more effective because the oil enters the engine in larger droplets; there is much less unwanted carbon deposited on the spark plugs, cylinder heads, pistons and exhaust system; there is much less exhaust smoke; and refueling is simplified. Disadvantages: The system is more complicated than manual pre-mixing, although it is easier for the end user. If for any reason the oil pump fails to operate properly, the chance of damaging the engine is very high. The two-stroke oil tank in scooters and motorcycles is usually hidden from the rider's direct view and needs filling up occasionally; without an oil-level indicator, it is possible for a novice rider to forget to fill up the oil tank, which can starve the engine of oil and cause damage.
**Textsound journal** Textsound journal: textsound journal (textsound) is an online audio literary magazine that publishes experimental poetry and sound. History: textsound began in 2008 as a biannual publication under the editorial direction of Anya Cobler, Adam Fagin, Anna Vitale, and Laura Wetherington. Selected contributors: Jaap Blonk, Anne-James Chaton, Paul DeMarinis, Linh Dinh, Kenneth Goldsmith, Rick Moody, Thylias Moss, Alice Notley, Alva Noto, Leslie Scalapino, Anne Tardos, Edwin Torres, and Anne Waldman. Events: On April 5, 2008, the textsound editorial collective organized a celebration in Ypsilanti, Michigan for the journal's launch, featuring Barrett Watten, Joel Levise, Christine Hume, James Marks, and Viki. In the fall of 2008, the textsound collective teamed up with Megan Levad and Adam Boehmer to curate the Work-In-Progress Reading Series at the Crazy Wisdom Bookstore in Ann Arbor, Michigan. Performers included Vievee Francis, Jill Darling, Onna Solomon, Sandy Tolbert, Aaron McCollough, Adam Boehmer, Michael Shilling, David Karczynski, T Hetzel, Katie Hartsock, Meghann Rotary, Anna Prushnikaya, and Stephanie Rowden.
**Drogon (software)** Drogon (software): Drogon is an HTTP application framework written in C++, supporting either C++17, or C++14 with Boost. The name comes from the dragon Drogon in the TV series Game of Thrones. In May 2020, Drogon took first place in the composite framework score of TechEmpower benchmark Round 19.
**Fight-or-flight response** Fight-or-flight response: The fight-or-flight or fight-flight-or-freeze response (also called hyperarousal or the acute stress response) is a physiological reaction that occurs in response to a perceived harmful event, attack, or threat to survival. It was first described by Walter Bradford Cannon. His theory states that animals react to threats with a general discharge of the sympathetic nervous system, preparing the animal for fighting or fleeing. More specifically, the adrenal medulla produces a hormonal cascade that results in the secretion of catecholamines, especially norepinephrine and epinephrine. The hormones estrogen, testosterone, and cortisol, as well as the neurotransmitters dopamine and serotonin, also affect how organisms react to stress. The hormone osteocalcin might also play a part. This response is recognised as the first stage of the general adaptation syndrome that regulates stress responses among vertebrates and other organisms. Name: Originally understood as the "fight-or-flight" response in Cannon's research, the state of hyperarousal results in several responses beyond fighting or fleeing. This has led people to call it the "fight, flight, freeze" response, "fight-flight-freeze-fawn" or "fight-flight-faint-or-freeze", among other variants. The wider array of responses, such as freezing, fainting, fleeing, or experiencing fright, has led researchers to use more neutral or accommodating terminology such as "hyperarousal" or the "acute stress response". Physiology: Autonomic nervous system The autonomic nervous system is a control system that acts largely unconsciously and regulates heart rate, digestion, respiratory rate, pupillary response, urination, and sexual arousal. This system is the primary mechanism in control of the fight-or-flight response and its role is mediated by two different components: the sympathetic nervous system and the parasympathetic nervous system. Sympathetic nervous system The sympathetic nervous system originates in the spinal cord and its main function is to activate the physiological changes that occur during the fight-or-flight response. This component of the autonomic nervous system utilises and activates the release of norepinephrine in the reaction. Physiology: Parasympathetic nervous system The parasympathetic nervous system originates in the sacral spinal cord and medulla, physically surrounding the sympathetic origin, and works in concert with the sympathetic nervous system. Its main function is to activate the "rest and digest" response and return the body to homeostasis after the fight-or-flight response. This system utilises and activates the release of the neurotransmitter acetylcholine. Physiology: Reaction The reaction begins in the amygdala, which triggers a neural response in the hypothalamus. The initial reaction is followed by activation of the pituitary gland and secretion of the hormone ACTH. The adrenal gland is activated almost simultaneously, via the sympathetic nervous system, and releases the hormone epinephrine. The release of chemical messengers results in the production of the hormone cortisol, which increases blood pressure and blood sugar, and suppresses the immune system. The initial response and subsequent reactions are triggered in an effort to create a boost of energy. This boost of energy is activated by epinephrine binding to liver cells and the subsequent production of glucose.
Additionally, the circulation of cortisol functions to turn fatty acids into available energy, which prepares muscles throughout the body for response. Catecholamine hormones, such as adrenaline (epinephrine) or noradrenaline (norepinephrine), facilitate immediate physical reactions associated with a preparation for violent muscular action, including:

- Acceleration of heart and lung action
- Paling or flushing, or alternating between both
- Inhibition of stomach and upper-intestinal action to the point where digestion slows down or stops
- General effect on the sphincters of the body
- Constriction of blood vessels in many parts of the body
- Liberation of metabolic energy sources (particularly fat and glycogen) for muscular action
- Dilation of blood vessels for muscles
- Inhibition of the lacrimal gland (responsible for tear production) and salivation
- Dilation of pupil (mydriasis)
- Relaxation of bladder
- Inhibition of erection
- Auditory exclusion (loss of hearing)
- Tunnel vision (loss of peripheral vision)
- Disinhibition of spinal reflexes
- Shaking

Function of physiological changes The physiological changes that occur during the fight or flight response are activated in order to give the body increased strength and speed in anticipation of fighting or running. Some of the specific physiological changes and their functions include increased blood flow to the muscles, achieved by diverting blood flow from other parts of the body. Physiology: Increased blood pressure, heart rate, blood sugars, and fats supply the body with extra energy. The blood clotting function of the body speeds up in order to prevent excessive blood loss in the event of an injury sustained during the response. Increased muscle tension provides the body with extra speed and strength. Emotional components: Emotion regulation In the context of the fight or flight response, emotional regulation is used proactively to avoid threats of stress or to control the level of emotional arousal. Emotional reactivity During the reaction, the intensity of emotion that is brought on by the stimulus will also determine the nature and intensity of the behavioral response. Individuals with higher levels of emotional reactivity may be prone to anxiety and aggression, which illustrates the implications of appropriate emotional reaction in the fight or flight response. Cognitive components: Content specificity The specific components of cognitions in the fight or flight response seem to be largely negative. These negative cognitions may be characterised by: attention to negative stimuli, the perception of ambiguous situations as negative, and the recurrence of recalling negative words. There also may be specific negative thoughts associated with emotions commonly seen in the reaction. Perception of control Perceived control relates to an individual's thoughts about control over situations and events. Perceived control should be differentiated from actual control because an individual's beliefs about their abilities may not reflect their actual abilities. Therefore, overestimation or underestimation of perceived control can lead to anxiety and aggression. Social information processing The social information processing model proposes a variety of factors that determine behavior in the context of social situations and preexisting thoughts.
The attribution of hostility, especially in ambiguous situations, seems to be one of the most important cognitive factors associated with the fight or flight response because of its implications towards aggression. Other animals: Evolutionary perspective An evolutionary psychology explanation is that early animals had to react to threatening stimuli quickly and did not have time to psychologically and physically prepare themselves. The fight or flight response provided them with the mechanisms to rapidly respond to threats against survival. Other animals: Examples A typical example of the stress response is a grazing zebra. If the zebra sees a lion closing in for the kill, the stress response is activated as a means to escape its predator. The escape requires intense muscular effort, supported by all of the body's systems. The sympathetic nervous system's activation provides for these needs. A similar example involving fight is a cat about to be attacked by a dog. The cat shows an accelerated heartbeat, piloerection (hair standing on end), and pupil dilation, all signs of sympathetic arousal. Note that the zebra and cat still maintain homeostasis in all states. Other animals: In July 1992, Behavioral Ecology published experimental research conducted by biologist Lee A. Dugatkin in which guppies were sorted into "bold", "ordinary", and "timid" groups based upon their reactions when confronted by a smallmouth bass (i.e. inspecting the predator, hiding, or swimming away), after which the guppies were left in a tank with the bass. After 60 hours, 40 percent of the timid guppies and 15 percent of the ordinary guppies survived, while none of the bold guppies did. Other animals: Varieties of responses Animals respond to threats in many complex ways. Rats, for instance, try to escape when threatened but will fight when cornered. Some animals stand perfectly still so that predators will not see them. Many animals freeze or play dead when touched, in the hope that the predator will lose interest. Other animals: Other animals have alternative self-protection methods. Some species of cold-blooded animals change color swiftly to camouflage themselves. These responses are triggered by the sympathetic nervous system, but, in order to fit the model of fight or flight, the idea of flight must be broadened to include escaping capture either in a physical or a sensory way. Thus, flight can be disappearing to another location or just disappearing in place, and fight and flight are often combined in a given situation. The fight or flight actions also have polarity: the individual can either fight against or flee from something that is threatening, such as a hungry lion, or fight for or fly towards something that is needed, such as the safety of the shore from a raging river. Other animals: A threat from another animal does not always result in immediate fight or flight. There may be a period of heightened awareness, during which each animal interprets behavioral signals from the other. Signs such as paling, piloerection, immobility, sounds, and body language communicate the status and intentions of each animal. There may be a sort of negotiation, after which fight or flight may ensue, but which might also result in playing, mating, or nothing at all. An example of this is kittens playing: each kitten shows the signs of sympathetic arousal, but they never inflict real damage.
**Buttonhook** Buttonhook: A buttonhook is a tool used to facilitate the closing of buttoned shoes, gloves or other clothing. It consists of a hook fixed to a handle which may be simple or decorative as part of a dresser set or chatelaine. Sometimes they were given away as promotions with product advertising on the handle. To use, the hook end is inserted through the buttonhole to capture the button by the shank and draw it through the opening. Buttonhooks have other uses as well. At Ellis Island, screeners known as "buttonhook men" used buttonhooks to turn immigrants' eyelids inside out to look for signs of trachoma.
**Gallium palladide** Gallium palladide: Gallium palladide (GaPd or PdGa) is an intermetallic compound of gallium and palladium that crystallizes in the iron monosilicide (FeSi) structure type. The compound has been suggested as an improved catalyst for hydrogenation reactions. In principle, gallium palladide can be a more selective catalyst because, unlike in substituted compounds, the palladium atoms occupy regular positions in the crystal structure rather than being distributed randomly.
**Cable gland** Cable gland: A cable gland (more often known in the U.S. as a cord grip, cable strain relief, cable connector or cable fitting) is a device designed to attach and secure the end of an electrical cable to the equipment. A cable gland provides strain relief and connects by a means suitable for the type and description of cable for which it is designed, including provision for making electrical connection to the armour or braid and lead or aluminium sheath of the cable, if any. Cable glands may also be used for sealing cables passing through bulkheads or gland plates. Cable glands are mostly used for cables with diameters between 1 mm and 75 mm. Cable glands are commonly defined as mechanical cable entry devices. They are used throughout a number of industries in conjunction with cable and wiring used in electrical instrumentation and automation systems. Cable glands may be used on all types of electrical power, control, instrumentation, data and telecommunications cables. They are used as a sealing and termination device to ensure that the characteristics of the enclosure which the cable enters can be maintained adequately. Cable glands are made of various plastics, or of steel, brass or aluminium for industrial use. Glands intended to resist dripping water or water pressure will include synthetic rubber or other types of elastomer seals. Certain types of cable glands may also serve to prevent entry of flammable gas into equipment enclosures, for electrical equipment in hazardous areas. Although cable glands are often called "connectors", a technical distinction can be made in the terminology, which differentiates them from quick-disconnect, conducting electrical connectors. For routing pre-terminated cables (cables with connectors), split cable glands can be used. These cable glands consist of three parts (two gland halves and a split sealing grommet) which are screwed together with a hexagonal locknut (like normal cable glands). Thus, pre-assembled cables can be routed without removing the plugs. Split cable glands can reach an ingress protection rating of up to IP66/IP68 and NEMA 4X. Cable gland: Alternatively, split cable entry systems can be used (normally consisting of a hard frame and several sealing grommets) to route a large number of pre-terminated cables through one wall cut-out. There are at least three thread standards in use: Panzergewinde (PG standard), metric thread, and National Pipe Thread (inch system).
**Linear programming relaxation** Linear programming relaxation: In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable. For example, in a 0–1 integer program, all constraints are of the form $x_i \in \{0,1\}$. The relaxation of the original integer program instead uses the weaker linear constraints $0 \le x_i \le 1$ for each variable. The resulting relaxation is a linear program, hence the name. This relaxation technique transforms an NP-hard optimization problem (integer programming) into a related problem that is solvable in polynomial time (linear programming); the solution to the relaxed linear program can be used to gain information about the solution to the original integer program. Example: Consider the set cover problem, the linear programming relaxation of which was first considered by Lovász (1975). In this problem, one is given as input a family of sets F = {S0, S1, ...}; the task is to find a subfamily, with as few sets as possible, having the same union as F. Example: To formulate this as a 0–1 integer program, form an indicator variable $x_i$ for each set $S_i$, that takes the value 1 when $S_i$ belongs to the chosen subfamily and 0 when it does not. Then a valid cover can be described by an assignment of values to the indicator variables satisfying the constraints $x_i \in \{0,1\}$ (that is, only the specified indicator variable values are allowed) and, for each element $e_j$ of the union of F, $\sum_{\{i \mid e_j \in S_i\}} x_i \ge 1$ (that is, each element is covered). The minimum set cover corresponds to the assignment of indicator variables satisfying these constraints and minimizing the linear objective function $\min \sum_i x_i$. Example: The linear programming relaxation of the set cover problem describes a fractional cover, in which the input sets are assigned weights such that the total weight of the sets containing each element is at least one and the total weight of all sets is minimized. Example: As a specific example of the set cover problem, consider the instance F = {{a, b}, {b, c}, {a, c}}. There are three optimal set covers, each of which includes two of the three given sets. Thus, the optimal value of the objective function of the corresponding 0–1 integer program is 2, the number of sets in the optimal covers. However, there is a fractional solution in which each set is assigned the weight 1/2, and for which the total value of the objective function is 3/2. Thus, in this example, the linear programming relaxation has a value differing from that of the unrelaxed 0–1 integer program.
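The fractional optimum of this example can be reproduced with an off-the-shelf LP solver. Here is a sketch using scipy.optimize.linprog (the encoding is mine; linprog takes constraints in the form A_ub @ x <= b_ub, so the covering constraints are negated):

```python
import numpy as np
from scipy.optimize import linprog

# Instance F = {{a,b}, {b,c}, {a,c}}: one row per element a, b, c and one
# column per set; each element needs total weight at least 1.
A = np.array([[1, 0, 1],    # a lies in S0 and S2
              [1, 1, 0],    # b lies in S0 and S1
              [0, 1, 1]])   # c lies in S1 and S2
res = linprog(c=[1, 1, 1], A_ub=-A, b_ub=-np.ones(3),
              bounds=[(0, 1)] * 3, method="highs")
print(res.x, res.fun)       # [0.5 0.5 0.5] and 1.5, versus the integer optimum 2
```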
Solution quality of relaxed and original programs: The linear programming relaxation of an integer program may be solved using any standard linear programming technique. If it happens that, in the optimal solution, all variables have integer values, then it will also be an optimal solution to the original integer program. However, this is generally not true, except for some special cases (e.g. problems with totally unimodular matrix specifications). In all cases, though, the solution quality of the linear program is at least as good as that of the integer program, because any integer program solution would also be a valid linear program solution. That is, in a maximization problem, the relaxed program has a value greater than or equal to that of the original program, while in a minimization problem such as the set cover problem the relaxed program has a value smaller than or equal to that of the original program. Thus, the relaxation provides an optimistic bound on the integer program's solution. Solution quality of relaxed and original programs: In the example instance of the set cover problem described above, in which the relaxation has an optimal solution value of 3/2, we can deduce that the optimal solution value of the unrelaxed integer program is at least as large. Since the set cover problem has solution values that are integers (the numbers of sets chosen in the subfamily), the optimal solution quality must be at least as large as the next larger integer, 2. Thus, in this instance, despite having a different value from the unrelaxed problem, the linear programming relaxation gives us a tight lower bound on the solution quality of the original problem. Approximation and integrality gap: Linear programming relaxation is a standard technique for designing approximation algorithms for hard optimization problems. In this application, an important concept is the integrality gap, the maximum ratio between the solution quality of the integer program and of its relaxation. In an instance of a minimization problem, if the real minimum (the minimum of the integer problem) is $\text{int}$, and the relaxed minimum (the minimum of the linear programming relaxation) is $\text{frac}$, then the integrality gap of that instance is $\frac{\text{int}}{\text{frac}}$. In a maximization problem the fraction is reversed. The integrality gap is always at least 1. In the example above, the instance F = {{a, b}, {b, c}, {a, c}} shows an integrality gap of 4/3. Approximation and integrality gap: Typically, the integrality gap translates into the approximation ratio of an approximation algorithm. This is because an approximation algorithm relies on some rounding strategy that finds, for every relaxed solution of size $\text{frac}$, an integer solution of size at most $RR \cdot \text{frac}$ (where $RR$ is the rounding ratio). If there is an instance with integrality gap $IG$, then every rounding strategy will return, on that instance, a rounded solution of size at least $\text{int} = IG \cdot \text{frac}$. Therefore necessarily $RR \ge IG$. The rounding ratio $RR$ is only an upper bound on the approximation ratio, so in theory the actual approximation ratio may be lower than $IG$, but this may be hard to prove. In practice, a large $IG$ usually implies that the approximation ratio in the linear programming relaxation might be bad, and it may be better to look for other approximation schemes for that problem. Approximation and integrality gap: For the set cover problem, Lovász proved that the integrality gap for an instance with n elements is $H_n$, the nth harmonic number. One can turn the linear programming relaxation for this problem into an approximate solution of the original unrelaxed set cover instance via the technique of randomized rounding (Raghavan & Tompson 1987). Given a fractional cover, in which each set $S_i$ has weight $w_i$, choose randomly the value of each 0–1 indicator variable $x_i$ to be 1 with probability $w_i \times (\ln n + 1)$, and 0 otherwise. Then any element $e_j$ has probability less than $1/(e \times n)$ of remaining uncovered, so with constant probability all elements are covered. The cover generated by this technique has total size, with high probability, $(1 + o(1))(\ln n)W$, where $W$ is the total weight of the fractional solution. Thus, this technique leads to a randomized approximation algorithm that finds a set cover within a logarithmic factor of the optimum (a sketch follows).
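A sketch of this rounding scheme (names are mine; it assumes the fractional weights are given, and simply retries until a cover is produced, which the analysis above shows happens with constant probability per attempt):

```python
import math
import random

def randomized_round(sets, universe, weights, seed=0):
    """Raghavan-Tompson rounding: include set i with probability
    min(1, w_i * (ln n + 1)); retry until every element is covered."""
    rng = random.Random(seed)
    n = len(universe)
    p = [min(1.0, w * (math.log(n) + 1)) for w in weights]
    while True:
        chosen = [i for i in range(len(sets)) if rng.random() < p[i]]
        covered = set().union(*(sets[i] for i in chosen))
        if covered >= universe:
            return chosen

# On the tiny running example the inflated probabilities select every set,
# but on large instances the expected size is O(log n) times the LP optimum.
sets = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
print(randomized_round(sets, {"a", "b", "c"}, [0.5, 0.5, 0.5]))   # [0, 1, 2]
```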
As Young (1995) showed, both the random part of this algorithm and the need to construct an explicit solution to the linear programming relaxation may be eliminated using the method of conditional probabilities, leading to a deterministic greedy algorithm for set cover, known already to Lovász, that repeatedly selects the set that covers the largest possible number of remaining uncovered elements. This greedy algorithm approximates the set cover to within the same $H_n$ factor that Lovász proved as the integrality gap for set cover. There are strong complexity-theoretic reasons for believing that no polynomial time approximation algorithm can achieve a significantly better approximation ratio (Feige 1998). Approximation and integrality gap: Similar randomized rounding techniques, and derandomized approximation algorithms, may be used in conjunction with linear programming relaxation to develop approximation algorithms for many other problems, as described by Raghavan, Tompson, and Young. Branch and bound for exact solutions: As well as its uses in approximation, linear programming plays an important role in branch and bound algorithms for computing the true optimum solution to hard optimization problems. Branch and bound for exact solutions: If some variables in the optimal solution have fractional values, we may start a branch and bound type process, in which we recursively solve subproblems in which some of the fractional variables have their values fixed to either zero or one. In each step of an algorithm of this type, we consider a subproblem of the original 0–1 integer program in which some of the variables have values assigned to them, either 0 or 1, and the remaining variables are still free to take on either value. In subproblem $i$, let $V_i$ denote the set of remaining variables. The process begins by considering a subproblem in which no variable values have been assigned, and in which $V_0$ is the whole set of variables of the original problem. Then, for each subproblem $i$, it performs the following steps. Branch and bound for exact solutions: Compute the optimal solution to the linear programming relaxation of the current subproblem. That is, for each variable $x_j$ in $V_i$, we replace the constraint that $x_j$ be 0 or 1 by the relaxed constraint that it be in the interval [0,1]; however, variables that have already been assigned values are not relaxed. If the current subproblem's relaxed solution is worse than the best integer solution found so far, backtrack from this branch of the recursive search. If the relaxed solution has all variables set to 0 or 1, test it against the best integer solution found so far and keep whichever of the two solutions is best. Branch and bound for exact solutions: Otherwise, let $x_j$ be any variable that is set to a fractional value in the relaxed solution. Form two subproblems, one in which $x_j$ is set to 0 and the other in which $x_j$ is set to 1; in both subproblems, the existing assignments of values to some of the variables are still used, so the set of remaining variables becomes $V_i \setminus \{x_j\}$. Recursively search both subproblems. Although it is difficult to prove theoretical bounds on the performance of algorithms of this type, they can be very effective in practice.
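The steps above translate into a compact recursive sketch (a minimal illustration under my own naming, using scipy's LP solver for each relaxation, with the set cover instance from earlier as a test case):

```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, n):
    """Minimize c @ x over x in {0,1}^n subject to A_ub @ x <= b_ub."""
    best = {"value": np.inf, "x": None}

    def solve(fixed):
        # Relax every free variable to [0, 1]; pin the fixed ones to 0 or 1.
        bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(n)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success or res.fun >= best["value"]:
            return  # infeasible, or the relaxation bound cannot beat the incumbent
        frac = [j for j in range(n) if min(res.x[j], 1 - res.x[j]) > 1e-6]
        if not frac:                     # all-integer relaxed solution: new incumbent
            best["value"], best["x"] = res.fun, np.rint(res.x)
            return
        for v in (0, 1):                 # branch on the first fractional variable
            solve({**fixed, frac[0]: v})

    solve({})
    return best["x"], best["value"]

# Set cover instance F = {{a,b},{b,c},{a,c}}: covering constraints A @ x >= 1.
A = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
print(branch_and_bound(np.ones(3), -A, -np.ones(3), 3))   # a cover of size 2
```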
Cutting plane method: Two 0–1 integer programs that are equivalent, in that they have the same objective function and the same set of feasible solutions, may have quite different linear programming relaxations: a linear programming relaxation can be viewed geometrically, as a convex polytope that includes all feasible solutions and excludes all other 0–1 vectors, and infinitely many different polytopes have this property. Ideally, one would like to use as a relaxation the convex hull of the feasible solutions; linear programming on this polytope would automatically yield the correct solution to the original integer program. However, in general, this polytope will have exponentially many facets and be difficult to construct. Typical relaxations, such as the relaxation of the set cover problem discussed earlier, form a polytope that strictly contains the convex hull and has vertices other than the 0–1 vectors that solve the unrelaxed problem. Cutting plane method: The cutting-plane method for solving 0–1 integer programs, first introduced for the traveling salesman problem by Dantzig, Fulkerson & Johnson (1954) and generalized to other integer programs by Gomory (1958), takes advantage of this multiplicity of possible relaxations by finding a sequence of relaxations that more tightly constrain the solution space until eventually an integer solution is obtained. This method starts from any relaxation of the given program, and finds an optimal solution using a linear programming solver. If the solution assigns integer values to all variables, it is also the optimal solution to the unrelaxed problem. Otherwise, an additional linear constraint (a cutting plane or cut) is found that separates the resulting fractional solution from the convex hull of the integer solutions, and the method repeats on this new more tightly constrained problem. Cutting plane method: Problem-specific methods are needed to find the cuts used by this method. It is especially desirable to find cutting planes that form facets of the convex hull of the integer solutions, as these planes are the ones that most tightly constrain the solution space; there always exists a cutting plane of this type that separates any fractional solution from the integer solutions. Much research has been performed on methods for finding these facets for different types of combinatorial optimization problems, under the framework of polyhedral combinatorics (Aardal & Weismantel 1997). Cutting plane method: The related branch and cut method combines the cutting plane and branch and bound methods. In any subproblem, it runs the cutting plane method until no more cutting planes can be found, and then branches on one of the remaining fractional variables.
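As a concrete, hand-derived illustration for the running set cover instance (my own example, not from the source): every integer cover of F = {{a, b}, {b, c}, {a, c}} uses at least two sets, so the inequality $x_0 + x_1 + x_2 \ge 2$ is a valid cut. It excludes the fractional point (1/2, 1/2, 1/2), and adding it makes the relaxation's optimal value match the integer optimum:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])   # covering constraints A @ x >= 1
ones = np.ones(3)
# Append the cut x0 + x1 + x2 >= 2; '>=' rows are negated for linprog's A_ub form.
A_ub = -np.vstack([A, ones])
b_ub = -np.append(ones, 2.0)
res = linprog(c=ones, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")
print(res.fun)   # 2.0: the tightened relaxation attains the integer optimum
```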
**Walk-on (sports)** Walk-on (sports): A walk-on, in American and Canadian college athletics, is an athlete who becomes part of a team without being recruited and awarded an athletic scholarship. A team's walk-on players are normally its weakest and are relegated to the scout team, and may not even be placed on the official depth chart or traveling team, while the scholarship players are the team's main players. However, a walk-on player occasionally becomes a noted member of the team. General parameters: Because of scholarship limits instituted by the NCAA, many football teams do not offer scholarships to their punters, long snappers and kickers until they have become established producers. Sometimes injury or outside issues can ravage the depth chart of a particular position, resulting in the elevation of a walk-on to a featured player. General parameters: In other situations, a walk-on may so impress the coaching staff with their play on the scout team and in practice that they are rewarded with a scholarship and made a part of the regular depth chart. Often, it is the players who achieve success in this manner that are the inspiration for future walk-ons. One significant college football national award, the Burlsworth Trophy (named for Brandon Burlsworth, the eventual All-American offensive lineman at the University of Arkansas who began as a walk-on), has been awarded since 2010 to the most outstanding player in the top-level Football Bowl Subdivision (FBS) who began his college career as a walk-on. The only two-time recipient of the Burlsworth Trophy, Baker Mayfield (2015 and 2016), won the 2017 Heisman Trophy. He began his college career as a walk-on at Texas Tech University before transferring to the University of Oklahoma, where he received all of the aforementioned awards. General parameters: Also, there are times when a walk-on will be a dependable member of the team's practice and scout teams for several years. If a team has an extra scholarship, it may award it to the player as a token of appreciation for their hard work and devotion to the team, although the player may never actually play in a game. General parameters: Finally, in rare cases, an established scholarship player may become a walk-on in order to open up their scholarship for another player. Three such cases in men's college basketball have received notoriety in recent years: In 2011–12, three Louisville scholarship players, most notably Kyle Kuric and Chris Smith, became walk-ons to bring the Cardinals' scholarship totals down to the NCAA limit of 13. General parameters: In 2013–14, Creighton's Doug McDermott (the son of Creighton's head coach) became a walk-on after a teammate was granted a rare sixth year of eligibility by the NCAA, putting the Bluejays over the 13-scholarship limit. General parameters: In 2014–15, Xavier starting center Matt Stainbrook enrolled in the school's MBA program and gave up his scholarship for his younger brother Tim, who had been a walk-on at Xavier the year before, in order to save their family a five-figure amount in school expenses. This led him to become a driver for the on-demand car service Uber, which gained him significant notoriety during that season. Purpose: The reasons athletes choose to pursue the path of a walk-on include: The athlete is already receiving praise, but the school they are particularly interested in does not share the level of interest.
The target school may be considered more athletically prestigious, it may already be saturated at that position, or the athlete may choose it for purely academic reasons over others. The walk-on will join the team to try to win the coaches over. Purpose: The athlete may be a family member of a notable former player, alumnus or coach of the school. Often these players do not strive to be placed in a starting position, but rather to carry on the tradition of being a part of a particular team. In the case of punters and kickers, there may not be a scholarship available, but the coaches may have encouraged or invited them to join the team without offering an athletic scholarship. Purpose: The athlete has not been noticed or taken seriously by recruiters. This can be the result of either not playing the respective sport while in high school or, more commonly, the prospective walk-on played the sport in high school, and perhaps even at an exceptional level, but the level of competition around the player was subpar and led scouts to dismiss the player's ability to adapt to the college game (this is often the case in rural districts where the local public school is the only option for high school other than homeschooling). In this case, the same drawbacks that prevent the athlete from receiving the athletic scholarship may also prevent the student from even gaining admission to higher-level colleges. Purpose: In some instances, a college coach or recruiter may designate an athlete as a "preferred walk-on" during the scouting process. In this situation, the athlete is assured a spot on the team, but the coach is unable or unwilling to offer a scholarship. In collegiate sports: Many schools that do not provide athletic scholarships still recruit student athletes, and these students can be admitted to a school with academic records that are below average for that school. The Ivy League, for example, does not permit athletic scholarships, but each school has a limited number of athletes it can recruit for each sport. Additionally, all prospective athletes are required to meet a minimum score on what the league calls the Academic Index (AI), a metric based largely on high school grade-point averages and SAT or ACT scores. The goal of the AI is to ensure that students who receive athletic admissions slots fall within one standard deviation of the credentials of the student body as a whole. Division III athletes cannot receive athletic scholarships, but frequently get an easier ride through admissions. Even though these students do not receive athletic scholarships and are not required to play to remain in school, they are not walk-ons, because they were recruited. Instead of being awarded an athletic scholarship, they were granted an athletic admissions slot to a school to which they would not ordinarily have been likely to gain admission.
**FMN reductase** FMN reductase: In enzymology, an FMN reductase (EC 1.5.1.29) is an enzyme that catalyzes the chemical reaction FMNH2 + NAD(P)+ ⇌ FMN + NAD(P)H + H+. The three substrates of this enzyme are FMNH2, NAD+, and NADP+, whereas its four products are FMN, NADH, NADPH, and H+. FMN reductase: This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-NH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is FMNH2:NAD(P)+ oxidoreductase. Other names in common use include NAD(P)H-FMN reductase, NAD(P)H-dependent FMN reductase, NAD(P)H:FMN oxidoreductase, NAD(P)H:flavin oxidoreductase, NAD(P)H2 dehydrogenase (FMN), NAD(P)H2:FMN oxidoreductase, SsuE, riboflavin mononucleotide reductase, flavine mononucleotide reductase, riboflavin mononucleotide (reduced nicotinamide adenine dinucleotide, (phosphate)) reductase, flavin mononucleotide reductase, and riboflavine mononucleotide reductase.
**Strain 121** Strain 121: Strain 121 (Geogemma barossii) is a single-celled microbe of the domain Archaea. First discovered 320 km (200 mi) off Puget Sound near a hydrothermal vent, it is a hyperthermophile, able to reproduce at 121 °C (250 °F), hence its name. It was (at the time of its discovery) the only known form of life that could tolerate such high temperatures. A temperature of 130 °C (266 °F) is biostatic for Strain 121, meaning that although growth is halted, the archaeon remains viable and can resume reproducing once it has been transferred to a cooler medium. The ability to grow at 121 °C (250 °F) is significant because medical equipment is exposed to this temperature for sterilization in an autoclave. Prior to the 2003 discovery of Strain 121, a fifteen-minute exposure to autoclave temperatures was believed to kill all living organisms. However, Strain 121 is not infectious in humans, because it cannot grow at temperatures near 37 °C (99 °F). Strain 121 metabolizes by reducing iron oxide. Strain 121: The maximum growth temperature of Strain 121 is 8 °C higher than that of the previous record holder, Pyrolobus fumarii (Tmax = 113 °C). However, it appears highly improbable that Strain 121 marks the upper limit of viable growth temperature; the true upper limit may well lie somewhere in the vicinity of 140 to 150 °C, the temperature range where molecular repair and resynthesis become unsustainable.
**Real-Time Multiprogramming Operating System** Real-Time Multiprogramming Operating System: Real-Time Multiprogramming Operating System (RTMOS) was a 24-bit process control operating system developed in the 1960s by General Electric that supported both real-time computing and multiprogramming. Programming was done in assembly language or Process FORTRAN. The two languages could be used in the same program, allowing programmers to alternate between the two as desired. Multiprogramming operating systems are now considered obsolete, having been replaced by multitasking.
**Dark Ages Radio Explorer** Dark Ages Radio Explorer: The Dark Ages Radio Explorer (DARE) mission is a proposed concept lunar orbiter intended to identify redshifted emanations from primaeval hydrogen atoms just as the first stars began to emit light. DARE will use the precisely redshifted 21-cm transition line from neutral hydrogen (1420.00 MHz emissions) to view and pinpoint the formation of the first illuminations of the universe and the period ending the Dark Ages of the universe. The orbiter will explore the universe as it was from around 80 million years to 420 million years after the Big Bang. The mission will deliver data pertaining to the formation of the first stars, the initial black hole accretions, and the reionization of the universe. Computer models of galaxy formation will also be tested. This mission might also add to research on dark matter decay. The DARE program will also provide insight for developing and deploying lunar surface telescopes that contribute to refined exoplanet exploration of nearby stars. It is expected to launch in 2023. Background: The period after recombination occurred but before stars and galaxies formed is known as the "dark ages". During this time, the majority of matter in the universe was neutral hydrogen. This hydrogen has yet to be observed, but there are experiments underway to detect the hydrogen line produced during this era. The hydrogen line is produced when an electron in a neutral hydrogen atom is excited to a state where the electron and proton have aligned spins, or de-excited as the electron and proton spins go from being aligned to anti-aligned. The energy difference between these two hyperfine states is 5.9 × 10⁻⁶ electron volts, corresponding to a wavelength of 21 centimetres. At times when neutral hydrogen is in thermodynamic equilibrium with the photons in the cosmic microwave background (CMB), the neutral hydrogen and CMB are said to be "coupled", and the hydrogen line is not observable. It is only when the two temperatures differ, that is, when the hydrogen and the CMB are decoupled, that the hydrogen line can be observed. Theoretical motivation: The Big Bang produced a hot, dense, nearly homogeneous universe. As the universe expanded and cooled, particles, then nuclei, and finally atoms formed. At a redshift of about 1100, equivalent to about 400,000 years after the Big Bang, the primordial plasma filling the universe cooled sufficiently for protons and electrons to combine into neutral hydrogen atoms, and the universe became optically thin: photons from this early era no longer interacted with matter. We detect these photons today as the cosmic microwave background (CMB). The CMB shows that the universe was still remarkably smooth and uniform. After the protons and electrons combined to produce the first hydrogen atoms, the universe consisted of a nearly uniform, almost completely neutral, intergalactic medium (IGM) whose dominant matter component was hydrogen gas. With no luminous sources present, this era is known as the Dark Ages. Theoretical models predict that, over the next few hundred million years, gravity slowly condensed the gas into denser and denser regions, within which the first stars eventually appeared, marking Cosmic Dawn. As more stars formed, and the first galaxies assembled, they flooded the universe with ultraviolet photons capable of ionizing hydrogen gas. A few hundred million years after Cosmic Dawn, the first stars produced enough ultraviolet photons to reionize essentially all the universe's hydrogen atoms.
This Reionization era is the hallmark event of this early generation of galaxies, marking the phase transition of the IGM back to a nearly completely ionized state. The beginning of structural complexity in the universe constituted a remarkable transformation, but one that we have not yet investigated observationally. By pushing even farther back than the Hubble Space Telescope can see, astronomers can study the truly first structures in the universe. Theoretical models suggest that existing measurements are beginning to probe the tail end of Reionization, but the first stars and galaxies, in the Dark Ages and the Cosmic Dawn, currently lie beyond our reach. DARE will make the first measurements of the birth of the first stars and black holes and will measure the properties of the otherwise invisible stellar populations. Such observations are essential for placing existing measurements in a proper context, and for understanding how the first galaxies grew from earlier generations of structures. Mission: DARE's approach is to measure the spectral shape of the sky-averaged, redshifted 21-cm signal over a radio bandpass of 40–120 MHz, observing the redshift range 11–35, which corresponds to 80–420 million years after the Big Bang. DARE orbits the Moon for 3 years and takes data above the lunar farside, the only location in the inner Solar System proven to be free of human-generated radio frequency interference and any significant ionosphere. Mission: The science instrument is mounted to an RF-quiet spacecraft bus and is composed of a three-element radiometer, including electrically short, tapered, biconical dipole antennas, a receiver, and a digital spectrometer. The smooth frequency response of the antennas and the differential spectral calibration approach used for DARE are effective in removing the intense cosmic foregrounds so that the weak cosmic 21-cm signal can be detected. Similar projects: Besides DARE, other similar projects have been proposed to study this area, such as the Precision Array for Probing the Epoch of Reionization (PAPER), the Low Frequency Array (LOFAR), the Murchison Widefield Array (MWA), the Giant Metrewave Radio Telescope (GMRT), and the Large Aperture Experiment to Detect the Dark Ages (LEDA).
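The quoted mapping between DARE's 40–120 MHz bandpass and its redshift window of 11–35 follows directly from the cosmological redshift of the 21-cm line, where an observed frequency f corresponds to z = f_rest/f − 1. A short Python check (the helper name and the rounding are illustrative, not from the mission papers):

```python
# Relate DARE's radio bandpass to the redshift window it observes.
# The rest-frame hydrogen line is at ~1420.4 MHz (quoted as 1420 MHz above).
F_REST_MHZ = 1420.4

def redshift(f_obs_mhz: float) -> float:
    """Redshift at which the 21-cm line appears at f_obs_mhz."""
    return F_REST_MHZ / f_obs_mhz - 1.0

for f in (120.0, 40.0):
    print(f"{f:5.1f} MHz  ->  z = {redshift(f):.1f}")
# 120 MHz -> z ~ 10.8 and 40 MHz -> z ~ 34.5, matching the quoted 11-35 range.
```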
**Detailed engineering** Detailed engineering: Detailed engineering comprises the studies that create a full definition of every aspect of a project development. It includes all the studies to be performed before project construction starts. Detailed engineering studies are a key component of project development across the mining, infrastructure, energy, pharmaceutical, chemical, and oil and gas sectors. Detailed engineering: Detailed engineering is a service delivered, for example, by global engineering companies such as Worley, Morimatsu Industry, Outotec, Hatch, Amec Foster Wheeler, M3 Engineering, Ausenco, SNC-Lavalin, Techint, and Jacobs Engineering. Detailed engineering follows Front End Engineering Design (FEED) and Basic Engineering, the previous steps in the engineering process for a project development; it contains detailed diagrams and drawings for construction, civil works, instrumentation, control systems, electrical facilities, management of suppliers, schedules of activities, costs, procurement of equipment, economic evaluation, and environmental impacts, all before construction of a project starts. Detailed engineering: Detailed engineering is used at different stages and for different purposes in project development worldwide; whether it is a water treatment plant at the OceanaGold Didipio gold-copper mine in the Philippines, a processing plant at the Hochschild Mining Inmaculada silver mine in Peru, or a molybdenum flotation plant at the KGHM Sierra Gorda copper project in Chile, detailed engineering is a key component of every project development.
**Golden-section search** Golden-section search: The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio φ:1:φ, where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search (also described below) as the number of function evaluations grows large. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)). Basic idea: The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum, three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated. Basic idea: The diagram above illustrates a single step in the technique for finding a minimum. The functional values of f(x) are on the vertical axis, and the horizontal axis is the x parameter. The value of f(x) has already been evaluated at the three points x1, x2, and x3. Since f2 is smaller than either f1 or f3, it is clear that a minimum lies inside the interval from x1 to x3. The next step in the minimization process is to "probe" the function by evaluating it at a new value of x, namely x4. It is most efficient to choose x4 somewhere inside the largest interval, i.e. between x2 and x3. From the diagram, it is clear that if the function yields f4a, then a minimum lies between x1 and x4, and the new triplet of points will be x1, x2, and x4. However, if the function yields the value f4b, then a minimum lies between x2 and x3, and the new triplet of points will be x2, x4, and x3. Thus, in either case, we can construct a new narrower search interval that is guaranteed to contain the function's minimum. Probe point selection: From the diagram above, it is seen that the new search interval will be either between x1 and x4 with a length of a + c, or between x2 and x3 with a length of b. The golden-section search requires that these intervals be equal. If they are not, a run of "bad luck" could lead to the wider interval being used many times, thus slowing down the rate of convergence.
To ensure that b = a + c, the algorithm should choose x4 = x1 + (x3 − x2). However, there still remains the question of where x2 should be placed in relation to x1 and x3. The golden-section search chooses the spacing between these points in such a way that these points have the same proportion of spacing as the subsequent triple x1, x2, x4 or x2, x4, x3. By maintaining the same proportion of spacing throughout the algorithm, we avoid a situation in which x2 is very close to x1 or x3 and guarantee that the interval width shrinks by the same constant proportion in each step. Probe point selection: Mathematically, to ensure that the spacing after evaluating f(x4) is proportional to the spacing prior to that evaluation, if f(x4) is f4a and our new triplet of points is x1, x2, and x4, then we want c/a = a/b. However, if f(x4) is f4b and our new triplet of points is x2, x4, and x3, then we want c/(b − c) = a/b. Eliminating c from these two simultaneous equations yields (b/a)² − b/a = 1, or b/a = φ, where φ is the golden ratio 1.618033988… The appearance of the golden ratio in the proportional spacing of the evaluation points is how this search algorithm gets its name. Termination condition: Any number of termination conditions may be applied, depending upon the application. The interval ΔX = X4 − X1 is a measure of the absolute error in the estimation of the minimum X and may be used to terminate the algorithm. The value of ΔX is reduced by a factor of r = φ − 1 for each iteration, so the number of iterations to reach an absolute error of ΔX is about ln(ΔX/ΔX0)/ln(r), where ΔX0 is the initial value of ΔX. Because smooth functions are flat (their first derivative is close to zero) near a minimum, attention must be paid not to expect too great an accuracy in locating the minimum. The termination condition provided in the book Numerical Recipes in C is based on testing the gaps among x1, x2, x3 and x4, terminating when the relative accuracy bound |x3 − x1| < τ(|x2| + |x4|) is met, where τ is a tolerance parameter of the algorithm and |x| is the absolute value of x. The check is based on the bracket size relative to its central value because, near a minimum, the error in f(x) is approximately proportional to the square of the error in x, so the error in x shrinks only like the square root of the attainable error in f(x). For that same reason, the Numerical Recipes text recommends τ = √ε, where ε is the required absolute precision of f(x). Algorithm: Note: the examples here describe an algorithm for finding the minimum of a function; for a maximum, the comparison operators need to be reversed. Iterative algorithm: (1) Specify the function to be minimized, f(x), the interval to be searched as {X1, X4}, and their functional values F1 and F4. (2) Calculate an interior point and its functional value F2. The two interval lengths are in the ratio c : r or r : c, where r = φ − 1 and c = 1 − r, with φ being the golden ratio. (3) Using the triplet, determine if convergence criteria are fulfilled. If they are, estimate the X at the minimum from that triplet and return. (4) From the triplet, calculate the other interior point and its functional value. The three intervals will be in the ratio c : cr : c. (5) The three points for the next iteration will be the one where F is a minimum, and the two points closest to it in X.
(6) Go to step 3. JavaScript version (iterative, following the steps above):

```javascript
// a and c define the range to search
// func(x) returns the value of the function to be minimized at x
function goldenSection(a, c, func) {
  // interior point at the golden section of [x1, x2]
  function split(x1, x2) { return x1 + 0.6180339887498949 * (x2 - x1); }
  var b = split(a, c);
  var bv = func(b);
  while (a != c) {                  // until the bracket collapses in floating point
    var x = split(a, b);            // probe inside the larger subinterval
    var xv = func(x);
    if (xv < bv) { bv = xv; c = b; b = x; }  // minimum now bracketed by [a, b]
    else { a = c; c = x; }          // minimum in [x, c], stored in reversed order
  }
  return b;
}

function test(x) { return -Math.sin(x); }
console.log(goldenSection(0, 3, test)); // prints PI/2
```

Fibonacci search: A very similar algorithm can also be used to find the extremum (minimum or maximum) of a sequence of values that has a single local minimum or local maximum. In order to approximate the probe positions of golden-section search while probing only integer sequence indices, the variant of the algorithm for this case typically maintains a bracketing of the solution in which the length of the bracketed interval is a Fibonacci number. For this reason, the sequence variant of golden-section search is often called Fibonacci search. Fibonacci search: Fibonacci search was first devised by Kiefer (1953) as a minimax search for the maximum (minimum) of a unimodal function in an interval.
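As a concrete illustration of the bracketing just described, here is a small Python sketch of Fibonacci search over integer indices. The function and variable names are my own; padding the bracket with +infinity past the right endpoint is one common way to handle interval lengths that are not exactly Fibonacci numbers.

```python
import math

def fib_search_min(f, lo, hi):
    """Return an index of the minimum of a unimodal sequence f(lo..hi).

    Sketch of the Fibonacci-search variant described above: the bracket
    length is padded up to a Fibonacci number (indices past hi count as
    +infinity), and each step reuses one of the two previous evaluations.
    """
    if hi - lo <= 3:
        return min(range(lo, hi + 1), key=f)
    fib = [1, 1]
    while fib[-1] < hi - lo:
        fib.append(fib[-1] + fib[-2])
    k = len(fib) - 1                  # bracket length fib[k] >= hi - lo

    def val(i):
        return f(i) if i <= hi else math.inf

    a = lo
    x1, x2 = a + fib[k - 2], a + fib[k - 1]
    f1, f2 = val(x1), val(x2)
    while k > 3:
        k -= 1
        if f1 < f2:                   # minimum lies in [a, x2]
            x2, f2 = x1, f1
            x1 = a + fib[k - 2]
            f1 = val(x1)
        else:                         # minimum lies in [x1, a + fib[k]]
            a = x1
            x1, f1 = x2, f2
            x2 = a + fib[k - 1]
            f2 = val(x2)
    # The bracket is now only fib[3] = 3 wide; scan it directly.
    return min(range(a, min(hi, a + fib[3]) + 1), key=f)

print(fib_search_min(lambda i: (i - 7) ** 2, 0, 100))   # prints 7
```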
**Carisbamate** Carisbamate: Carisbamate (YKP 509, proposed trade name Comfyde) is an experimental anticonvulsant drug that was under development by Johnson & Johnson Pharmaceutical Research and Development but never marketed. Clinical study: A phase II clinical trial in the treatment of partial seizures demonstrated that the compound has efficacy in the treatment of partial seizures and a good safety profile. Since late 2006, the compound has been undergoing a large multicenter phase III clinical trial for the treatment of partial seizures. Its mechanism of action is unknown.A double-blind, placebo-controlled trial of carisbamate in 323 patients with migraine determined that carisbamate was well tolerated at doses up to 600 mg/day, but it failed to demonstrate that the drug was sufficiently more effective than placebo in migraine prophylaxis. History: In 1998, the compound was in-licensed from SK Corp. (currently Life Science Business Division of SK Holdings), a South Korean company. On October 24, 2008, Johnson & Johnson announced that it had submitted a New Drug Application to the U.S. Food and Drug Administration (FDA) for carisbamate. Johnson & Johnson received provisional approval by the FDA to market carisbamate under the brand name of Comfyde. However, on August 21, 2009, Johnson & Johnson reported that the FDA had failed to give marketing approval.
**Rules of cribbage** Rules of cribbage: The rules here are based on those of the American Cribbage Congress and apply to two-, three- or four-player games, with details of variations being listed below. The deal: Cribbage uses a standard 52-card deck of cards. The jokers are removed; the suits are equal in status. The players cut for first deal, with the player cutting the lowest card (the ace counts as one, and is the lowest card) dealing first. If the cutters tie, the cards are re-shuffled and re-cut. The deal then alternates from hand to hand. Note that because the crib (explained below) belongs to the dealer, winning the first deal confers a scoring advantage: if the game ends in an odd number of deals, the first dealer will have received an extra crib, sometimes enough to decide the game. The deal: The dealer shuffles, offers the deck to the player on their right to cut (required in tournament play), and deals cards singly to each player, starting with the player on the dealer's left. For two players, each is dealt six cards (though some play with five cards dealt to each player and two to the crib). For three or four players, each is dealt five cards. In the case of three players, a single card is dealt face down in front of the dealer to start the crib. Cards must be dealt so that each player ends up with four cards after the crib is formed, and the crib should also have four cards. During the deal, if any card is exposed by the dealer or found face-up in the deck, cards must be redealt. The crib: Once the cards have been dealt, each player chooses four cards to retain, discarding the other one or two face-down to form the "crib" that will be used later by the dealer. At this point, each player's hand and the crib will contain exactly four cards. [Example cribs: two players; three players.] The starter: The player on the dealer's left cuts the undealt portion of the deck (leaving at least 4 cards), and the dealer reveals the top card of the bottom section, called the "starter" or the "cut", placing it on top of the deck face up. (It is illegal to peek at any other cards in the deck during this process.) If this card is a jack, the dealer scores two points for "his heels". The game can end on a cut of a jack for the dealer. The play: The play (often called pegging) starts with the player on the dealer's left and continues clockwise. Each player lays one card in turn onto the table so that it is visible, stating the cumulative value, or count, of the cards played so far. (For example, the first player lays a 4 and says "four", the next lays a 7 and says "eleven", and so on). Face cards are worth ten; aces are worth one. Each player's cards are retained face up on the table in front of that player, so that the hands can be analyzed during play and then later be gathered and scored (see "The show", below). The play: The count must not exceed 31, so a player who cannot lay a card without bringing the count above 31 passes by saying "Go". The other players continue to lay cards in turn until no cards can be played without exceeding 31. Players must lay a card if able to do so without exceeding 31. The last player to lay a card scores two points if 31 is reached exactly ("31 for two"); otherwise one point is scored, e.g., "29 for one", or "30 for one", etc. The one-point score is known as "One for go", or simply "Go". The count is then reset to zero and play resumes, starting with the player to the left of the player who laid the last card.
Players with cards remaining repeat this process until all cards have been played. The last card played is treated as a final "go" as described above: two points for making the final count 31, or one point otherwise. The play: In addition to scoring one or two points for the last card, players score points according to the following rules. Fifteen: two points for making the cumulative count exactly fifteen ("fifteen two"). Runs: three points for completing a run of three cards, regardless of the order in which they are laid (a 6, then a 4, then a 5 is a run of three even though they were not laid in order); four points for completing a run of four; five points for completing a run of five; six points for completing a run of six; seven points for completing a run of seven, e.g. playing 2, 4, 6, A, 3, 5 and 7. Pairs: two points for laying a card of the same rank as the previous card, thus completing a pair; six points for laying a third card of the same rank (a "pair royal" or "trips"); twelve points for laying a fourth card of the same rank (a "double pair royal" or "quad"). If a card completes more than one scoring combination, then all combinations are scored. For example, if the first three cards played are 5s, the second one scores two points ("ten and a pair") and the third scores eight ("fifteen-two and a pair royal for six, makes eight"). Card combinations cannot span a reset; once the total reaches 31 (or a Go has been scored) and counting has restarted at zero, cards already played are no longer available for runs or pairs. During this phase of play, run combinations cannot span a pair: in a play of 2, 3, 3, 4 the pair interrupts the run, so only the pair is counted for points. The play: Players choose when to lay each card in order to maximise their score according to the scheme shown below. The first player to reach 121 wins the game. [Example plays: two players; three players.] The show: Once the play is complete, each player in turn receives points based on the content of their hand. Starting with the player on the dealer's left, players spread out their cards on the playing surface and calculate their score. The starter card turned up at the beginning of play serves as a fifth card shared in common by all hands; thus each player's score is based on their own four cards along with the starter card. Scoring combinations are the following (a short code sketch of this scoring scheme appears at the end of this article). Fifteens: two points for each distinct combination of two or more cards totalling exactly fifteen (counting aces as one, face cards as ten). Runs: three points for a run of three consecutive cards (regardless of suit); four points for a run of four; five points for a run of five. Pairs: two points for a pair of cards of the same rank; six points for three cards of the same rank (known as a "pair royal", comprising three distinct pairs); twelve points for four cards of the same rank (a "double pair royal", comprising six distinct pairs). Flush: four points for a flush, where all four cards in the hand are of the same suit, with an additional point if the starter card is also of that suit. (Note that four suited cards including the starter, but missing one of the cards in the hand, do not score for flush.) His nob(s): one point for holding the jack of the same suit as the starter card ("one for his nob" or "... his nobs"). Common combinations are often recognized and scored as a unit.
For example, a run of three cards with an additional card matching one of the three in rank, e.g., 2–2–3–4, is termed a "double run of three" and scores eight according to the above rules (two distinct runs of three and two for the pair); 2–2–3–4–5 is a "double run of four" for ten points (two distinct runs of four and two for the pair). Even more valuable are "triple runs", e.g., 2–2–2–3–4, scoring fifteen (three distinct runs of three, plus three distinct pairs), and "double-double" or "quadruple runs", e.g., 2–3–3–4–4, scoring sixteen (four distinct runs of three, plus two pairs). Combined runs may also include fifteens: a 24 hand, the largest commonly seen, can comprise a double-double run and four fifteens: for example, 4–4–5–5–6 or 6–7–7–8–8. The show: The dealer scores their hand last and then turns the cards in the crib face up. These cards, in conjunction with the starter card, are scored by the dealer as an additional hand. The rules for scoring the crib are the same as scoring a hand, with the exception of the flush; a four-card flush in the crib is not scored unless it is also the same suit as the starter card (for a total of five points). The show: The highest possible score for a hand is 29 points: a starter card of a 5, and a hand of 5, 5, 5, J with the jack being the same suit as the starter card. The score might be announced thus: Fifteen two, fifteen four, fifteen six, fifteen eight [four J–5 combinations], fifteen ten, fifteen twelve, fifteen fourteen, fifteen sixteen [four 5–5–5 combinations], double pair royal [six pairs of 5s] for twelve makes twenty-eight, and his nobs makes twenty-nine. The show: Scores between 0 and 29 are all possible, with the exception of 19, 25, 26 and 27. Players may colloquially refer to a blank hand (one scoring no points) as a "nineteen hand". [Example scores: two players; three players.] The end: After the dealer has scored the crib, all cards are collected and the deal passes to the player on the dealer's left. The next round starts with the deal. The end: Although the rules of cribbage do not require it (except in tournament play), the traditional method of keeping score is to use a cribbage board. This is a flat board, usually made of wood, with separate series of holes that record each player's score. It is usually arranged in five-hole sections for easier scoring. Players each have two pegs that mark their current and previous scores, and all scoring is done by moving the back peg ahead of the front peg. The end: When a player reaches the target score for the game (usually 121), the game ends with that player the winner. Match: A match (much like tennis) consists of more than one game, often an odd number (3 games, 5 games, 7 games, etc.). The match points are scored on the cribbage board using the holes reserved for match points. On a spiral board, these are often at the bottom of the board in a line with 5 or 7 holes. On a traditional board, they are often placed in the middle of the board or at the top/bottom. Match: Two-player game: In a two-player game of cribbage, a player scores one match point for each game won. Their opponent will begin the next game as first dealer. If a player skunks their opponent (reaches 121 points before their opponent scores 91 points), then that player scores one extra match point for that game (two match points in total). If a player double skunks their opponent (reaches 121 points before their opponent reaches 61), then they score two extra match points for the game (four match points in total).
If a player triple skunks their opponent (reaches 121 points before their opponent reaches 31 points), they automatically win the match, regardless of how many match points are needed to win. Double and triple skunks are not included in the official rules of cribbage play and are optional. There are several different formats for scoring match points. Match: [Example match: a full match using free-play rules, played to 5 match points.] Match: Three-player game: Winner takes all: When playing a three-player match in a winner-takes-all format, the winner scores two match points (just as in two-way cribbage) for each game won. If they skunk only the third-place opponent, they score an additional match point (3 in total), with second place receiving one point. If they double skunk both opponents, they still score three match points, but second place does not receive any points at all. Match: Continued play: In the continued-play format, the winner earns two match points in three-player cribbage and four match points in five-player cribbage (plus applicable match points if the player has skunked or double-skunked their opponents). The remaining players play until there is a second winner, who scores one match point in three-player cribbage and two match points in five-player cribbage (with no extra points for skunking opponents). In five-player cribbage, the remaining three players play until there is a third winner, who scores one match point (with no extra points for skunking opponents). Variations: Three players: Five cards are dealt to each player and one card directly to the crib, and each player then discards one card to the crib, as shown in the examples above. Three players can score individually, with the winner the first to reach 121, or in a "two against one" team format, where the two-player team must score 121 to win before the lone player reaches 61. In another variation of the "two against one" team format, prior to the cut the lone player picks up the crib, examines all 8 cards, and then discards 4 cards to the crib. Both the team and the lone player need to reach 121 to win. Another three-player variation is to deal five cards to each player except the dealer, who gets six cards. The dealer deals the first and last cards to themselves and then discards two cards to the crib; the other players each discard one card. Four players: Five cards are dealt to each player, each of whom discards one to the crib. The players can play as individuals or as two sets of partners. Variations: Five-card cribbage (called the "old game"): The two players are dealt five cards each, two of which are discarded into the crib. The crib thus consists of four cards, but each hand only three. The first non-dealer gets a three-point start; the play (pegging) goes up to 31 only once and does not restart. The game is won by the first player to reach 61 points. Variations: Five players #1: Five cards are dealt to each player except the dealer, who has only four cards. The four non-dealers each discard one card to the crib. Variations: Five players #2: Five cards are dealt to each player. The players each discard one card to the crib. All hands are scored normally using the "starter" card. When the dealer counts the crib, the "starter" card is not used; only the five cards in the crib are used. (As usual for a crib, only a 5-card flush can score, so all 5 crib cards must be the same suit, and the dealer receives 5 points for this flush.)
Ten-Card: Usually played with two players, this variant consists of each player being dealt ten cards to start. Each player still throws two to the crib but then splits the remaining cards into two sets of four. Only one of these new hands is used during pegging, but each will be counted separately during the reveal. This faster-paced version results in higher-scoring hands that require more strategy in creating the best combination of cards. Variations: Muggins: This is a scoring variant in which a player who fails to count all the points to which he is entitled in the play or the show loses the unclaimed points to an opponent who calls "muggins" or "cut throat". Lowball (or "Loser's Crib"): This is a misère variant in which the normal rules apply but the aim is to avoid scoring. The loser is the first to 121. Variations: Jokers-Wild: In this variant, jokers are fully wild, with their rank and suit decided only at the moment of play. The choice of card may even replicate a card already in play, allowing for 5 of a kind (20 points), 6 of a kind (30 points), etc. When a joker is cut as the starter, the dealer scores 2 for heels, and each player may choose a different rank and suit for the joker when hands are scored. Variations: Jokers-Naught: In this variant, jokers have the numerical value of zero. This enables runs from below the ace, e.g., 0-1-2. Since each 0 adds a unique permutation for a combination of fifteen, one joker doubles the value of the combinations of 15 in the hand, e.g., 8 + 7 = 15 and 8 + 7 + 0 = 15. Two jokers quadruple the value of the combinations of 15 in the hand. Since jokers have no suit, they are excluded from flush counting. Thus, a hand of 4H-5H-6H-Joker-Joker counts as 3 for the hand of all (3) hearts, one combination of 15 quadrupled for the jokers, and 3 for the run 4-5-6, totaling 3 + 8 + 3 = 14. When pegging, a fifteen can be achieved up to two times by playing jokers on the fifteen, since 15 + 0 + 0 = 15. Also, when pegging, 31 is not automatically a go, as a joker may be subsequently played upon 31. The last joker played would get the go for two. Finally, flipping over a joker at the cut is worth one point. Variations: Toss Fives: This is a variant in which players must discard any 5s they may have into the crib (even an opponent's crib). Variations: Three Runs: In this variant, only runs of three are counted, but they are counted for each independent combination. Thus, a run of four will contain two independent runs of three for 6 points; a run of five, with three independent runs of three, will be worth 9. Double runs of four will contain either 3 or 4 independent runs of three, depending on whether the pair is at the end or in the middle, garnering 9 + 2 = 11 or 12 + 2 = 14 points respectively. During pegging, only runs of three are counted: a player playing a 5 after a 2, 3, and 4 will only get 3 points for the last three-card run. Variations: Auction Cribbage: In Auction Cribbage, any player may bid for the points in the crib after the cards are dealt. Bidding continues in turn until no further bids are offered; the winning bidder then immediately deducts that number of points from their score; the crib is scored at the usual time and its points awarded to the winning bidder for that round. If no bid is placed, the dealer retains the crib. Variations: Null point penalty: When a player scores zero points during "the show", their opponent scores one point. This applies to both players' hands as well as to the crib.
Back 10 (Backup Ten): The hand and the crib must contain points. If either hand does not, the owner of the hand must go back ten points. Variations: Canadian Doubles: A variation on doubles; the dealer and the player to the dealer's left are dealt 10 cards each. Both players keep 4 cards, give their partners 4 cards, and throw two to the crib. Play proceeds normally. This game is normally over in four deals, at most five. A number of variations have been devised for playing solitaire forms of cribbage. Variations: Cribbage Solitaire: This plays much like cribbage without pegging. Two cards are discarded to the crib from a hand of six cards, and after this is repeated, both hands and the crib are scored, using an additional random card as the starter card. Cribbage Squares: Cards are dealt one at a time into a 4x4 grid, with the player deciding in which of the 16 spaces each card is placed. Finally, a 17th card is turned up as starter. Each horizontal row and vertical column is considered as a hand, and is scored accordingly.
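To make the show-scoring rules above concrete, here is a minimal Python sketch that scores a four-card hand plus starter (pegging is not modelled). Cards are (rank, suit) tuples with ranks 1 (ace) through 13 (king); the function name and card encoding are choices made for illustration, not part of the official rules.

```python
from itertools import combinations

def score_show(hand, starter, is_crib=False):
    """Score a cribbage hand at the show, per the rules described above."""
    cards = list(hand) + [starter]
    value = lambda rank: min(rank, 10)           # face cards count as ten
    score = 0
    # Fifteens: two points per distinct combination totalling fifteen.
    for n in range(2, 6):
        for combo in combinations(cards, n):
            if sum(value(r) for r, _ in combo) == 15:
                score += 2
    # Pairs: pair royal and double pair royal fall out automatically,
    # since three equal ranks contain three pairs and four contain six.
    score += 2 * sum(r1 == r2 for (r1, _), (r2, _) in combinations(cards, 2))
    # Runs: only the longest runs count, once per distinct combination,
    # so 2-2-3-4 scores as two runs of three (a "double run").
    ranks = sorted(r for r, _ in cards)
    for n in (5, 4, 3):
        runs = [c for c in combinations(ranks, n)
                if all(c[i + 1] == c[i] + 1 for i in range(n - 1))]
        if runs:
            score += n * len(runs)
            break
    # Flush: four suited hand cards (not counted in the crib), five with starter.
    suits = {s for _, s in hand}
    if len(suits) == 1:
        if starter[1] in suits:
            score += 5
        elif not is_crib:
            score += 4
    # His nob: the jack of the starter's suit held in hand.
    if (11, starter[1]) in hand:
        score += 1
    return score

# The highest hand described above: 5-5-5-J with the jack matching the starter.
print(score_show([(5, 'H'), (5, 'D'), (5, 'C'), (11, 'S')], (5, 'S')))  # 29
```

The final line reproduces the 29-point hand announced in the text: sixteen for fifteens, twelve for the double pair royal, and one for his nobs.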
**Surface bundle over the circle** Surface bundle over the circle: In mathematics, a surface bundle over the circle is a fiber bundle with base space a circle, and with fiber space a surface. Therefore the total space has dimension 2 + 1 = 3. In general, fiber bundles over the circle are a special case of mapping tori. Surface bundle over the circle: Here is the construction: take the Cartesian product of a surface with the unit interval. Glue the two copies of the surface, on the boundary, by some homeomorphism. This homeomorphism is called the monodromy of the surface bundle. It is possible to show that the homeomorphism type of the bundle obtained depends only on the conjugacy class, in the mapping class group, of the gluing homeomorphism chosen. This construction is an important source of examples both in the field of low-dimensional topology as well as in geometric group theory. In the former we find that the geometry of the three-manifold is determined by the dynamics of the homeomorphism. This is the fibered part of William Thurston's geometrization theorem for Haken manifolds, whose proof requires the Nielsen–Thurston classification for surface homeomorphisms as well as deep results in the theory of Kleinian groups. In geometric group theory the fundamental groups of such bundles give an important class of HNN-extensions: that is, extensions of the fundamental group of the fiber (a surface) by the integers. Surface bundle over the circle: A simple special case of this construction (considered in Henri Poincaré's foundational paper) is that of a torus bundle.
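In symbols, the construction just described is the mapping torus; a standard formulation (with notation chosen here for illustration) is:

```latex
% Mapping torus of a surface homeomorphism f : S -> S
M_f \;=\; \bigl( S \times [0,1] \bigr) \,\big/\, \bigl( (x,1) \sim (f(x),0) \bigr)
% Projection onto the second coordinate identifies [0,1]/(0 \sim 1) with S^1,
% exhibiting M_f as a surface bundle over the circle with fiber S; its
% fundamental group is the extension of \pi_1(S) by the integers noted above:
\pi_1(M_f) \;\cong\; \pi_1(S) \rtimes_{f_*} \mathbb{Z}
```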
**Gradient-index optics** Gradient-index optics: Gradient-index (GRIN) optics is the branch of optics covering optical effects produced by a gradient of the refractive index of a material. Such gradual variation can be used to produce lenses with flat surfaces, or lenses that do not have the aberrations typical of traditional spherical lenses. Gradient-index lenses may have a refraction gradient that is spherical, axial, or radial. In nature: The lens of the eye is the most obvious example of gradient-index optics in nature. In the human eye, the refractive index of the lens varies from approximately 1.406 in the central layers down to 1.386 in less dense layers of the lens. This allows the eye to image with good resolution and low aberration at both short and long distances. Another example of gradient-index optics in nature is the common mirage of a pool of water appearing on a road on a hot day. The pool is actually an image of the sky, apparently located on the road since light rays are being refracted (bent) from their normal straight path. This is due to the variation of refractive index between the hot, less dense air at the surface of the road, and the denser cool air above it. The variation in temperature (and thus density) of the air causes a gradient in its refractive index, causing it to increase with height. This index gradient causes refraction of light rays (at a shallow angle to the road) from the sky, bending them into the eye of the viewer, with their apparent location being the road's surface. In nature: The Earth's atmosphere acts as a GRIN lens, allowing observers to see the sun for a few minutes after it is actually below the horizon, and observers can also view stars that are below the horizon. This effect also allows for observation of electromagnetic signals from satellites after they have descended below the horizon, as in radio occultation measurements. Applications: The ability of GRIN lenses to have flat surfaces simplifies the mounting of the lens, which makes them useful where many very small lenses need to be mounted together, such as in photocopiers and scanners. The flat surface also allows a GRIN lens to be easily optically aligned to a fiber, to produce collimated output, making it applicable for endoscopy as well as for in vivo calcium imaging and optogenetic stimulation in the brain. In imaging applications, GRIN lenses are mainly used to reduce aberrations. The design of such lenses involves detailed calculations of aberrations as well as efficient manufacture of the lenses. A number of different materials have been used for GRIN lenses, including optical glasses, plastics, germanium, zinc selenide, and sodium chloride. Certain optical fibres (graded-index fibres) are made with a radially varying refractive index profile; this design strongly reduces the modal dispersion of a multi-mode optical fiber. The radial variation in refractive index allows for a sinusoidal height distribution of rays within the fibre, preventing the rays from leaving the core. This differs from traditional optical fibres, which rely on total internal reflection, in that all modes of the GRIN fibres propagate at the same speed, allowing for a higher temporal bandwidth for the fibre. Antireflection coatings are typically effective for narrow ranges of frequency or angle of incidence; graded-index materials are less constrained. An axial gradient lens has been used to concentrate sunlight onto solar cells, capturing as much as 90% of incident light when the sun is not at an optimal angle.
Manufacture: GRIN lenses are made by several techniques: Neutron irradiation – Boron-rich glass is bombarded with neutrons to cause a change in the boron concentration, and thus the refractive index of the lens. Chemical vapour deposition – Glasses of varying refractive index are deposited onto a surface to produce a cumulative refractive change. Partial polymerisation – An organic monomer is partially polymerized using ultraviolet light at varying intensities to give a refractive gradient. Ion exchange – Glass is immersed in a liquid melt containing lithium ions. As a result of diffusion, sodium ions in the glass are partially exchanged with lithium ones, with a larger amount of exchange occurring at the edge. Thus the sample obtains a gradient material structure and a corresponding gradient of the refractive index. Ion stuffing – Phase separation of a specific glass causes pores to form, which can later be filled using a variety of salts, or concentrations of salts, to give a varying gradient. Direct laser writing – While the pre-designed structure is exposed point by point, the exposure dose is varied (scanning speed, laser power, etc.). This corresponds to a spatially tunable monomer-to-polymer degree of conversion, resulting in a varying refractive index. The method is applicable to free-form micro-optical elements and multi-component optics. History: In 1854, J C Maxwell suggested a lens whose refractive index distribution would allow for every region of space to be sharply imaged. Known as the Maxwell fisheye lens, it involves a spherical index function and would be expected to be spherical in shape as well. This lens, however, is impractical to make and has little usefulness since only points on the surface and within the lens are sharply imaged and extended objects suffer from extreme aberrations. In 1905, R. W. Wood used a dipping technique to create a gelatin cylinder with a refractive index gradient that varied symmetrically with the radial distance from the axis. Disk-shaped slices of the cylinder were later shown to have plane faces with radial index distribution. He showed that even though the faces of the lens were flat, they acted like converging or diverging lenses, depending on whether the index was decreasing or increasing with radial distance. In 1964, a posthumous book of R. K. Luneburg was published in which he described a lens that focuses incident parallel rays of light onto a point on the opposite surface of the lens. This also limited the applications of the lens because it was difficult to use it to focus visible light; however, it had some usefulness in microwave applications. Some years later, several new techniques were developed to fabricate lenses of the Wood type. Since then, at least the thinner GRIN lenses can possess surprisingly good imaging properties considering their very simple mechanical construction, while thicker GRIN lenses have found application, e.g., in Selfoc rods. Theory: An inhomogeneous gradient-index lens possesses a refractive index whose change follows the function n = f(x,y,z) of the coordinates of the region of interest in the medium. According to Fermat's principle, the light path integral (L), taken along a ray of light joining any two points of a medium, is stationary relative to its value for any nearby curve joining the two points. The light path integral is given by the equation L = ∫ n ds, taken along the curve from S0 to S, where n is the refractive index and S is the arc length of the curve.
If Cartesian coordinates are used, this equation is modified to incorporate the change in arc length, for a spherical gradient, in each physical dimension: L = ∫ n(x, y, z) √(x′² + y′² + z′²) ds, where the prime corresponds to d/ds. The light path integral is able to characterize the path of light through the lens in a qualitative manner, such that the lens may be easily reproduced in the future. Theory: The refractive index gradient of GRIN lenses can be mathematically modelled according to the method of production used. For example, GRIN lenses made from a radial gradient index material, such as SELFOC Microlens, have a refractive index that varies according to nr = no(1 − Ar²/2), where nr is the refractive index at a distance r from the optical axis, no is the design index on the optical axis, and A is a positive constant.
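A short numerical illustration of that radial profile follows. In the paraxial approximation, the profile n(r) = no(1 − Ar²/2) gives the ray equation d²r/dz² = −Ar, so meridional rays follow sinusoids of spatial period 2π/√A (the lens "pitch"); the numeric values below are assumptions for illustration, not data for any particular product.

```python
import math

n0 = 1.60      # design index on the optical axis (assumed value)
A = 0.25       # gradient constant, here in mm^-2 (assumed value)

def index(r_mm: float) -> float:
    """Refractive index at radial distance r from the axis, n0*(1 - A r^2 / 2)."""
    return n0 * (1 - A * r_mm**2 / 2)

# One full sinusoidal ray period in the paraxial approximation.
pitch_mm = 2 * math.pi / math.sqrt(A)

print(f"n(0)      = {index(0.0):.3f}")
print(f"n(0.5 mm) = {index(0.5):.3f}")
print(f"ray pitch = {pitch_mm:.2f} mm")
```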
**Camurati–Engelmann disease** Camurati–Engelmann disease: Camurati–Engelmann disease (CED) is a very rare autosomal dominant genetic disorder that causes characteristic anomalies in the skeleton. It is also known as progressive diaphyseal dysplasia. It is a form of dysplasia. Patients typically have heavily thickened bones, especially along the shafts of the long bones (called diaphyseal dysplasia). The skull bones may be thickened so that the passages through the skull that carry nerves and blood vessels become narrowed, possibly leading to sensory deficits, blindness, or deafness. Camurati–Engelmann disease: This disease often appears in childhood and is considered to be inherited; however, many patients have no previous history of CED within their family. The disease is slowly progressive and, while there is no cure, there is treatment. It is named for M. Camurati and G. Engelmann. Signs and symptoms: Patients with CED complain of chronic bone pain in the legs or arms and muscle weakness (myopathy), and they experience a waddling gait. Other clinical problems associated with the disease include increased fatigue, weakness, muscle spasms, headache, difficulty gaining weight, and delay in puberty. Some patients have an abnormal or absent tibia, flat feet, or scoliosis. This disease may also cause bones to become abnormally hardened, which is referred to as sclerosis. This hardening may affect the bones at the base of the skull or those in the hands, feet, or jaw, causing ongoing pain and aching within the body parts that are affected. The pain has been described as either a hot electric stabbing pain, an ever-increasing pressure sensation around the bones (especially before electrical storms), or a constant ache that radiates through several long bones at once. Pain may also occur in the hips, wrists, knees and other joints as they essentially just 'lock up' (often becoming very stiff, immobile and sore), mostly when walking up or down staircases, writing for extended periods of time, or during the colder months of the year. Those with the disease tend to have a very characteristic walk, medically diagnosed as a 'waddling gait': a broad-based gait with a duck-like waddle in the swing phase, the pelvis dropping toward the side of the leg being raised, notable forward curvature of the lumbar spine, and a marked body swing. The pain is especially severe during a 'flare-up'; these can be unpredictable, exhausting, and last anywhere from a few hours to several weeks. This is a common occurrence for several CED patients, often causing myopathy and extensive sleep deprivation from the chronic, severe and disabling pain. Patients may even require the use of a wheelchair (or additional carer's help with getting dressed, showering, mobility/shopping, preparing meals or lifting heavy items), especially when bedridden or housebound for days or weeks at a time. 'Flare-ups' may be attributed to, or exacerbated by, growth spurts, stress, exhaustion, exercise, standing or walking for too long, illness, infection, being accidentally knocked/hurt or injured, surgery/anaesthetics, cold weather, electrical storms, and sudden changes in barometric pressure. CED may also affect the internal organs; the liver and spleen may become enlarged. A loss of vision and/or hearing can occur if bones are adversely affected by the hardening in the skull.
Hence proactive specialist check-ups, X-rays, diagnostic tests/scans, and regular blood tests are recommended on an annual basis to monitor the CED bony growth and secondary medical issues that may arise from this condition. Cause: Camurati-Engelmann disease is caused by autosomal dominant mutations in the gene TGFB1, localized at chromosome 19q13. Diagnosis: Classification: There are two forms: Type 1 is associated with TGFB1; Type 2 is not. Type 1 Camurati-Engelmann disease is associated with an error occurring in the TGFB1 protein. Affected individuals shared a haplotype between D19S881 and D19S606. The TGFB1 protein is encoded by the TGF-B1 gene, which occurs on chromosome 19q13.1-13.3. This protein is responsible for a multitude of functions, one of which is regulating the function of osteoblasts and osteoclasts, which decreases bone resorption and increases bone formation. These functions can be affected by a series of mutations that occur on exon 4, near the carboxyl terminus of the latency-associated peptide, or LAP. TGFB1 is expressed as a latent form, a mature form and a B1-LAP. Mutations at R218H affect the association of the B1-LAP and the mature form of TGFB1 through conformational changes to B1-LAP. These mutations can lead to a buildup of mature TGFB1, which accumulates in the mutant R218H fibroblasts. Fibroblasts are a type of cell that creates collagen and the extracellular matrix. This suggests that the R218H mutation causes a disassociation between mature TGFB1 and B1-LAP. Mutations at the LLL12-13ins and Y81H regions decrease the secretion of TGFB1, which leads to intracellular buildup of TGFB1. Type 2 Camurati-Engelmann disease is still speculative, with no distinct evidence to credit its existence. There are many similarities between Type 2 CED and hyperostosis generalisata with striations of the bones (HGS), with some speculating that they are two phenotypic variations of the same disease. Treatment: Camurati–Engelmann disease is somewhat treatable. Glucocorticosteroids, which are anti-inflammatory and immunosuppressive agents, are used in some cases. This form of medication helps with bone strength; however, it can have multiple side effects. In several reports, successful treatment with glucocorticosteroids was described, as certain side effects can benefit a person with CED. This drug helps with pain and fatigue as well as some correction of radiographic abnormalities. Alternative treatments such as massage, relaxation techniques (meditation, essential oils, spa baths, music therapy, etc.), gentle stretching, and especially heat therapy have been used with some success, in conjunction with pain medications. A majority of CED patients require some form of analgesic, muscle relaxant, and/or sleep-inducing medication to manage the pain, specifically if experiencing frequent or severe 'flare-ups' (e.g. during winter). Notable persons: John Belluso, writer for the CBS television show Ghost Whisperer, used a wheelchair from the age of 13 because of Camurati–Engelmann syndrome. He died on February 10, 2006, at the age of 36 in New York City.
**Stability testing (pharmaceutical)** Stability testing (pharmaceutical): In the pharmaceutical industry, stability testing is a process used to determine the quality of a drug substance or drug product over a specified period of time under specific environmental conditions. With stability testing, the pharmaceutical industry inspects the quality of drug substances and drug products as per the guidelines outlined by the US Food and Drug Administration and the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, to make sure that they retain their quality over time. Stability depends upon environmental factors such as light, humidity and ambient temperature, as well as the physical and chemical properties of the active drug product.
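The paragraph above describes the goal of stability testing in general terms; a common quantitative step is fitting a degradation trend to assay data collected over the storage period. The sketch below is a minimal illustration of that idea only, using hypothetical assay values and a simple zero-order (linear) degradation model; the formal guideline approach (e.g. ICH Q1E) additionally applies confidence limits to the regression before claiming a shelf life.

```python
import numpy as np

# Hypothetical long-term stability data for a drug product: assay results
# (% of label claim) at pull points over 24 months of storage at the
# long-term condition (e.g. 25 degC / 60% RH). Values are illustrative only.
months = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 18.0, 24.0])
assay = np.array([100.0, 99.1, 98.3, 97.6, 96.8, 95.2, 93.9])

# Fit a zero-order (linear) degradation model: assay = intercept + slope * t.
slope, intercept = np.polyfit(months, assay, 1)

# A simple point estimate of shelf life: the time at which the fitted line
# crosses the lower specification limit (90% of label claim is a common spec).
spec_limit = 90.0
t_spec = (spec_limit - intercept) / slope

print(f"degradation rate : {slope:.3f} % of label claim per month")
print(f"time to {spec_limit:.0f}% spec : {t_spec:.1f} months (point estimate)")
```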
**Draize test** Draize test: The Draize test is an acute toxicity test devised in 1944 by Food and Drug Administration (FDA) toxicologists John H. Draize and Jacob M. Spines. Initially used for testing cosmetics, the procedure involves applying 0.5 mL or 0.5 g of a test substance to the eye or skin of a restrained, conscious animal, leaving it for a set amount of time, then rinsing it out and recording its effects. The animals are observed for up to 14 days for signs of erythema and edema in the skin test, and redness, swelling, discharge, ulceration, hemorrhaging, cloudiness, or blindness in the tested eye. The test subject is commonly an albino rabbit, though other species, including dogs, are used too. The animals are euthanized after testing if the test causes irreversible damage to the eye or skin. Animals may be re-used for testing purposes if the product tested causes no permanent damage, typically after a "wash out" period during which all traces of the tested product are allowed to disperse from the test site. The tests are controversial. Critics view them as cruel as well as unscientific because of the differences between rabbit and human eyes and the subjective nature of the visual evaluations. The FDA supports the test, stating that "to date, no single test, or battery of tests, has been accepted by the scientific community as a replacement [for] ... the Draize test". Because of its controversial nature, use of the Draize test in the U.S. and Europe has declined in recent years, and the test is sometimes modified so that anaesthetics are administered and lower doses of the test substances are used. Chemicals already shown to have adverse effects in vitro are not currently used in a Draize test, thereby reducing the number and severity of tests that are carried out. Background: John Henry Draize (1900–1992) obtained a BSc in chemistry and then a PhD in pharmacology, studying hyperthyroidism. He then joined the University of Wyoming and investigated plants poisonous to cattle, other livestock, and people. The U.S. Army recruited Draize in 1935 to investigate the effects of mustard gas and other chemical agents. Background: In 1938, after a number of reports of coal tar in mascara leading to blindness, the U.S. Congress passed the Federal Food, Drug, and Cosmetic Act, placing cosmetics under regulatory control. The following year Draize joined the FDA, and was soon promoted to head of the Dermal and Ocular Toxicity Branch, where he was charged with developing methods for testing the side effects of cosmetic products. This work culminated in a report by Draize, his laboratory assistant, Geoffrey Woodard, and division chief, Herbert Calvery, describing how to assess acute, intermediate, and chronic exposure to cosmetics by applying compounds to the skin, penis, and eyes of rabbits. Following this report, the techniques were used by the FDA to evaluate the safety of substances such as insecticides and sunscreens, and later adapted to screen many other compounds. By Draize's retirement in 1963, and despite his never having personally attached his name to any technique, irritancy procedures were commonly known as "the Draize test". To distinguish the target organ, the tests are now often referred to as "the Draize eye test" and "the Draize skin test".
Reliability: In 1971, before the implementation in 1981 of the modern Draize protocol, toxicologists Carrol Weil and Robert Scala of Carnegie Mellon University distributed three test substances to 24 different university and state laboratories for comparative analysis. The laboratories returned significantly different evaluations for the same substances, ranging from non-irritating to severely irritating. A 2004 study by the U.S. Scientific Advisory Committee on Alternative Toxicological Methods analyzed the modern Draize skin test and found that it would misidentify a serious irritant as safe in 0–0.01% of cases, a mild irritant as safe in 3.7–5.5% of cases, and a serious irritant as a mild irritant in 10.3–38.7% of cases. Descriptions of the test: Anti-testing According to the American National Anti-Vivisection Society, solutions of products are applied directly into the animals' eyes, which can cause "intense burning, itching and pain". Clips are placed on the rabbits' eyelids to hold them open during the test period, which can last several days, during which time the rabbits are placed in restraining stocks. The chemicals often leave the eyes "ulcerated and bleeding". In the Draize test for skin irritancy, the test substances are applied to skin that is shaved and abraded (several layers of skin are removed with sticky tape), then covered with plastic sheeting. Descriptions of the test: Pro-testing According to the British Research Defence Society, the Draize eye test is now a "very mild test", in which small amounts of substances are used and are washed out of the eye at the first sign of irritation. In a letter to Nature, written to refute an article saying that the Draize test had not changed much since the 1940s, Andrew Huxley wrote: "A substance expected from its chemical nature to be seriously painful must not be tested in this way; the test is permissible only if the substance has already been shown not to cause pain when applied to skin, and in vitro pre-screening tests are recommended, such as a test on an isolated and perfused eye. Permission to carry out the test on several animals is given only if the test has been performed on a single animal and a period of 24 hours has been allowed for injury to become evident." Differences between the rabbit eye and the human eye: Kirk Wilhelmus, professor in the Department of Ophthalmology at Baylor College of Medicine, conducted a comprehensive review of the Draize eye test in 2001. He reported that differences in anatomy and biochemistry between the rabbit and human eye indicate that testing substances on rabbits might not predict the effects on humans. However, he noted "that eyes of rabbits are generally more susceptible to irritating substances than the eyes of humans", making them a conservative model of the human eye. Wilhelmus concluded that "The Draize eye test ... has assuredly prevented harm" to humans, but predicted it would be "supplanted as in vitro and clinical alternatives emerge for assessing irritancy of the ocular surface". Alternatives: Industry and regulatory bodies responsible for public health are actively assessing animal-free tests to reduce the requirement for Draize testing. Before 2009 the Organisation for Economic Co-operation and Development (OECD) had not validated any alternative methods for testing eye or skin irritation potential. However, since 2000 the OECD has validated alternative tests for corrosivity, meaning acids, bases and other corrosive substances are no longer required to be Draize tested on animals.
The alternative tests include a human skin equivalent model and the transepicutaneous resistance test (TER). In addition, the use of a human corneal cell line (HCE-T cells) is another promising alternative method for testing the eye-irritation potential of chemicals. In September 2009 the OECD validated two alternatives to the Draize eye test: the bovine cornea opacity test (BCOP) and the isolated chicken eye test (ICE). A 1995 study funded by the European Commission and the British Home Office evaluated these among nine potential replacements, including the hens' egg chorioallantoic membrane (HET-CAM) assay and an epithelial model cultivated from human corneal cells, in comparison with Draize test data. The study found that none of the alternative tests, taken alone, proved to be a reliable replacement for the animal test; however, a post hoc analysis of the data found that certain combinations of tests showed an "excellent performance". Positive results from some of these tests have been accepted by regulatory bodies, such as the British Health and Safety Executive and the US Department of Health and Human Services, without testing on live animals, but negative results (no irritation) required further in vivo testing. Regulatory bodies have therefore begun to adopt a tiered testing strategy for skin and eye irritation, using alternatives to reduce Draize testing of substances with the most severe effects. Regulations: UK In Britain, the Home Office publishes guidelines for eye irritancy tests, with the aim of reducing suffering to the animals. In its 2006 guidelines, it "strongly encourages" in vitro screening of all compounds before testing on animals, and mandates the use of validated alternatives when available. It requires that the test solution's "physical and chemical properties are not such that a severe adverse reaction could be predicted"; therefore "known corrosive substances or those with a high oxidation or reduction potential must not be tested." The test design requires that the substance be tested on one rabbit initially, and the effect of the substance on the skin must be ascertained before it can be introduced into the eye. If a rabbit shows signs of "severe pain" or distress, it must be immediately killed, the study terminated, and the compound may not be tested on other animals. In tests where severe eye irritancy is considered likely, a washout should closely follow testing in the eye of the first rabbit. In the UK, any departure from these guidelines requires prior approval from the Secretary of State.
**Lazarus sign** Lazarus sign: The Lazarus sign or Lazarus reflex is a reflex movement in brain-dead or brainstem failure patients, which causes them to briefly raise their arms and drop them crossed on their chests (in a position similar to some Egyptian mummies). The phenomenon is named after the Biblical figure Lazarus of Bethany, whom Jesus raised from the dead according to the Gospel of John. Causes: Like the knee-jerk reflex, the Lazarus sign is an example of a reflex mediated by a reflex arc—a neural pathway which passes via the spinal column but not through the brain. As a consequence, the movement is possible in brain-dead patients whose organs have been kept functioning by life-support machines, precluding the use of complex involuntary motions as a test for brain activity. It has been suggested by neurologists studying the phenomenon that increased awareness of this and similar reflexes "may prevent delays in brain-dead diagnosis and misinterpretations." The reflex is often preceded by slight shivering motions of the patient's arms, or the appearance of goose bumps on the arms and torso. The arms then begin to flex at the elbows before lifting to be held above the sternum. They are often brought from here towards the neck or chin and touch or cross over. Short exhalations have also been observed coinciding with the action. Occurrences: The phenomenon has been observed to occur several minutes after the removal of medical ventilators used to pump air in and out of brain-dead patients. It also occurs during testing for apnea—that is, suspension of external breathing and motion of the lung muscles—which is one of the criteria for determining brain death used, for example, by the American Academy of Neurology. Occurrences of the Lazarus sign in intensive-care units have been mistaken for evidence of resuscitation of patients. They may frighten those who witness the movement, and have been viewed by some as miraculous events.
**Lists of 20th-century earthquakes** Lists of 20th-century earthquakes: This list of 20th-century earthquakes is a global list of notable earthquakes that occurred in the 20th century. After 1900, most earthquakes have some degree of instrumental record, which means that their locations and magnitudes are more reliable than those of earlier events. To prevent this list from becoming unmanageable, only earthquakes of magnitude 6 and above are included, unless they are notable for some other reason. 1991–2000: Key to magnitudes ML = local magnitude (Richter); MS = surface-wave magnitude; Mw = moment magnitude
**Coptic cross** Coptic cross: The Coptic cross is any of a number of Christian cross variants associated in some way with Coptic Christians. Typical form: The typical form of the "Coptic cross" used in the Coptic Church is made up of two bold lines of equal length that intersect at the middle at right angles. Each line terminates in three points, representing the Trinity of the Father, the Son, and the Holy Spirit. Altogether, the cross has 12 points, symbolizing the Apostles, whose mission was to spread the Gospel message throughout the world. This form of Coptic cross is widely used in the Coptic church and the Ethiopian and Eritrean churches, and so it may also be called the "Ethiopian cross" or "Axum cross". Bertran de la Farge dates it to the 4th century and cites it as a predecessor of the Occitan cross. Typical form: History and variation Old Coptic crosses often incorporate a circle, as in the form called a "Coptic cross" by Rudolf Koch in his The Book of Signs (1933). Sometimes the arms of the cross extend through the circle (dividing it into four quadrants), as in the "Celtic cross". In 1984, a modern variant of the Coptic cross, composed of three bars intersecting at right angles in three dimensions, was given as a gift by the Coptic Orthodox Church and mounted on top of the All Africa Conference of Churches building, since the Coptic Church is considered to be the mother church in Africa. Popular culture: Many Copts have the cross tattooed as a sign of faith on the inside of the right arm at the wrist. One of the forms of the Coptic cross, referred to as the Ethiopian Coptic cross, was worn by Stevie Ray Vaughan. Keith Richards also wears an Ethiopian Coptic cross.
**Regular measure** Regular measure: In mathematics, a regular measure on a topological space is a measure for which every measurable set can be approximated from above by open measurable sets and from below by compact measurable sets. Definition: Let (X, T) be a topological space and let Σ be a σ-algebra on X. Let μ be a measure on (X, Σ). A measurable subset A of X is said to be inner regular if μ(A) = sup{ μ(F) : F ⊆ A, F compact and measurable }, and said to be outer regular if μ(A) = inf{ μ(G) : G ⊇ A, G open and measurable }. A measure is called inner regular if every measurable set is inner regular. Some authors use a different definition: a measure is called inner regular if every open measurable set is inner regular. Definition: A measure is called outer regular if every measurable set is outer regular. A measure is called regular if it is outer regular and inner regular. Examples: Regular measures Lebesgue measure on the real line is a regular measure: see the regularity theorem for Lebesgue measure. Any Baire probability measure on any locally compact σ-compact Hausdorff space is a regular measure. Any Borel probability measure on a locally compact Hausdorff space with a countable base for its topology, or on a compact metric space, or on a Radon space, is regular. Examples: Inner regular measures that are not outer regular An example of a measure on the real line with its usual topology that is not outer regular is the measure μ where μ(∅) = 0, μ({1}) = 0, and μ(A) = ∞ for any other set A. The Borel measure on the plane that assigns to any Borel set the sum of the (1-dimensional) measures of its horizontal sections is inner regular but not outer regular, as every non-empty open set has infinite measure. A variation of this example is a disjoint union of an uncountable number of copies of the real line with Lebesgue measure. Examples: An example of a Borel measure μ on a locally compact Hausdorff space that is inner regular, σ-finite, and locally finite but not outer regular is given by Bourbaki (2004, Exercise 5 of section 1) as follows. The topological space X has as underlying set the subset of the real plane given by the y-axis of points (0, y) together with the points (1/n, m/n²) with m, n positive integers. The topology is given as follows. The single points (1/n, m/n²) are all open sets. A base of neighborhoods of the point (0, y) is given by wedges consisting of all points in X of the form (u, v) with |v − y| ≤ |u| ≤ 1/n for a positive integer n. This space X is locally compact. The measure μ is given by letting the y-axis have measure 0 and letting the point (1/n, m/n²) have measure 1/n³. This measure is inner regular and locally finite, but is not outer regular, as any open set containing the y-axis has infinite measure. Examples: Outer regular measures that are not inner regular If μ is the inner regular measure in the previous example, and M is the measure given by M(S) = inf{ μ(U) : U ⊇ S, U open }, where the inf is taken over all open sets containing the Borel set S, then M is an outer regular locally finite Borel measure on a locally compact Hausdorff space that is not inner regular in the strong sense, though all open sets are inner regular, so it is inner regular in the weak sense. The measures M and μ coincide on all open sets, all compact sets, and all sets on which M has finite measure. The y-axis has infinite M-measure, though all compact subsets of it have measure 0.
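To make the first counterexample above concrete, here is a short verification, written as a LaTeX fragment, that the measure taking the value 0 on ∅ and {1} and ∞ on every other set fails outer regularity at the singleton {1}:

```latex
% Outer regularity fails at A = \{1\}: a singleton is not open in the usual
% topology on \mathbb{R}, so every open measurable G with G \supseteq \{1\}
% is a set other than \varnothing and \{1\}, and therefore \mu(G) = \infty.
\[
  \inf\{\, \mu(G) : G \supseteq \{1\},\ G \text{ open and measurable} \,\}
    = \infty \neq 0 = \mu(\{1\}),
\]
% so \{1\} is not outer regular. It is, however, inner regular:
% the compact set F = \{1\} itself already attains the supremum.
```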
Examples: A measurable cardinal with the discrete topology has a Borel probability measure such that every compact subset has measure 0, so this measure is outer regular but not inner regular. The existence of measurable cardinals cannot be proved in ZF set theory but (as of 2013) is thought to be consistent with it. Examples: Measures that are neither inner nor outer regular The space of all ordinals at most equal to the first uncountable ordinal Ω, with the topology generated by open intervals, is a compact Hausdorff space. The measure that assigns measure 1 to Borel sets containing an unbounded closed subset of the countable ordinals and assigns 0 to other Borel sets is a Borel probability measure that is neither inner regular nor outer regular.
**Tert-Butyldiphenylsilyl** Tert-Butyldiphenylsilyl: tert-Butyldiphenylsilyl, also known as TBDPS, is a protecting group for alcohols. Its formula is C16H19Si-. Development: The tert-butyldiphenylsilyl group was first suggested as a protecting group by Hanessian and Lavallée in 1975. It was designed to supersede the use of Corey's tert-butyldimethylsilyl as a protecting group for alcohols: In addition to retaining all the known features that are associated with silyl ethers, such as their ease and selectivity of formation, their adaptability to various analytical techniques, and their compatibility with a variety of conditions or synthetic transformations in organic chemistry, the [TBDPS] group offers some unique and novel features that constitute a significant improvement over the existing related groups, and warrants their communication at this time. The novel features that they highlight are the increased resistance to acidic hydrolysis and increased selectivity towards protection of primary hydroxyl groups. The group is unaffected by treatment with 80% acetic acid, which catalyses the deprotection of O-tetrahydropyranyl, O-trityl and O-tert-butyldimethylsilyl ethers. It is also unaffected by 50% trifluoroacetic acid (TFA), and survives the harsh acidic conditions used to install and remove isopropylidene or benzylidene acetals. Applications in chemical synthesis: The TBDPS group is prized for its increased stability towards acidic conditions and nucleophilic species over the other silyl ether protecting groups. This can be thought of as arising from the extra steric bulk of the groups surrounding the silicon atom. The protecting group is easily introduced by using the latent nucleophilicity of the hydroxyl group and an electrophilic source of TBDPS. This might involve using the triflate or the less reactive chloride of TBDPS along with a mild base such as 2,6-lutidine or pyridine, and potentially a catalyst such as DMAP or imidazole. The ease of installation of the protecting group follows the order 1° > 2° > 3°, allowing the least hindered hydroxyl group to be protected in the presence of more hindered hydroxyls. Applications in chemical synthesis: Protection of equatorial hydroxyl groups can be achieved over axial hydroxyl groups by the use of a cationic silyl species generated from tert-butyldiphenylsilyl chloride and a halogen abstractor, silver nitrate. Applications in chemical synthesis: The increased stability towards acidic hydrolysis and nucleophilic species allows the TBDPS groups in a substrate to be retained while other silyl ethers are removed. The TMS group may easily be removed in the presence of a TBDPS group by reaction with TsOH. The group is even more resistant to acid hydrolysis than the bulky TIPS. However, in the presence of a fluoride source such as TBAF or TAS-F, TIPS groups are more stable than TBDPS groups. The TBDPS group is of similar stability to the TBDMS group and is more stable in the presence of fluoride than all other simple alkyl silyl ethers. It is possible to remove the TBDPS group selectively, leaving a TBDMS group intact, using NaH in HMPA at 0 °C for five minutes. Stability: The TBDPS group is stable under a wide variety of conditions:
**Pandering (politics)** Pandering (politics): Pandering is the act of expressing one's views in accordance with the likes of a group to which one is attempting to appeal. The term is most notably associated with politics. In pandering, the views one is expressing are merely for the purpose of drawing support up to and including votes and do not necessarily reflect one's personal values.
**Shark attack prevention** Shark attack prevention: There are a range of shark attack prevention techniques employed to reduce the risk of shark attack and keep people safe. They include removing sharks by various fishing methods, separating people and sharks, as well as observation, education and various technology-based solutions. Techniques that involve culling sharks are contentious. Environmental groups have voiced concern over the impact of reduced shark numbers on ocean ecosystems and the problem of by-catch of other marine life, particularly endangered species. Because sharks play an important role in ocean ecosystems, removing them can cause ecological harm. Nets: Shark net The majority of shark nets used are gillnets: walls of netting that hang in the water and capture the targeted sharks by entanglement. The nets may be as much as 186 metres (610 ft) long, set at a depth of 6 metres (20 ft), have a mesh size of 500 millimetres (20 in), and are designed to catch sharks longer than 2 metres (6.6 ft) in length. Shark nets do not offer complete protection but work on the principle of "fewer sharks, fewer attacks"; they reduce occurrence via shark mortality. Shark nets such as those in New South Wales are designed to entangle and capture sharks that pass near them. Historical shark attack figures suggest that the use of shark nets does markedly reduce the incidence of shark attack when implemented on a regular and consistent basis. A downside of shark nets is that they do result in bycatch, including threatened and endangered species. Between September 2017 and April 2018, 403 animals perished in the nets in New South Wales, including 10 critically endangered grey nurse sharks, 7 dolphins, 7 green turtles and 14 great white sharks. Bycatch from shark nets is, however, minor compared with bycatch from commercial fishing, with an estimated 50 million sharks caught unintentionally each year. The total cost of the shark netting program in New South Wales for the 2009/10 year was approximately A$1 million, which included the cost of the nets, contractors, observers and shark technicians, shark meshing equipment (dolphin pingers and whale alarms, etc.), and compliance audit activities. For the 51 beaches protected, this represents a financial cost of approximately A$20,000 per beach per year. Nets: Shark nets have been criticized by environmentalists and conservationists; they say shark nets damage the marine ecosystem. The current net program in New South Wales has been described as being "extremely destructive" to marine life; it has also been called "outdated and ineffective". The New South Wales government prohibits people from rescuing entangled animals — this prohibition has been called "heartless and cruel". Nets: Shark barrier A shark barrier (otherwise known as a "shark-proof enclosure" or "beach enclosure") is a seabed-to-surface protective barrier placed around a beach to separate people from sharks. Shark barriers form a fully enclosed swimming area that prevents sharks from entering. Shark barrier design has evolved from rudimentary fencing materials to netted structures held in place with buoys and anchors. Recent designs have used plastics to increase strength and versatility. Nets: When deployed in sheltered areas, shark barriers offer complete protection and are seen as a more environmentally friendly option, as they largely avoid bycatch.
However, barriers are not effective on surf beaches, because they usually disintegrate in the swell, and so they are normally constructed only around sheltered areas such as harbour beaches. A shark barrier installed at Middleton beach in Albany, Western Australia cost A$340,000 to install, with annual maintenance budgeted at A$30,000 per annum. On Réunion Island in 2015, two shark-proof enclosures cost €2 million to install and €1 million a year to maintain. Drum lines: A drum line is an unmanned aquatic trap used to lure and capture large sharks using baited hooks. They are typically deployed near popular swimming beaches with the intention of reducing the number of sharks in the vicinity and therefore the probability of shark attack. Drum lines were first deployed to protect users of the marine environment from sharks in Queensland, Australia in 1962. During this time, they were just as successful in reducing the frequency of shark attacks as the shark nets. More recently, drum lines have also been used with great success in Recife, Brazil, where the number of attacks has been shown to have been reduced by 97% when the drum lines are deployed. While shark nets and drum lines share the same purpose, drum lines are more effective at targeting the three sharks considered most dangerous to swimmers: the bull shark, tiger shark and great white shark. SMART drumlines can also be utilised to move sharks, which greatly reduces mortality of sharks and bycatch to less than 2%. In 2014 a three-month trial utilising up to 60 drum lines in Western Australia cost A$1.28 million. Drum line programmes that involve culling have been criticized for being environmentally destructive and speciesist, and have sparked public demonstrations and vocal opposition, particularly from environmentalists, animal welfare advocates and ocean activists. Conservationists say the death of sharks on drum lines harms the marine ecosystem. The current drum line program in Queensland has been called "outdated, cruel and ineffective". Environmental damage from drum lines is, however, minor compared with commercial fishing, with an estimated 50 million sharks caught unintentionally each year. Other protection methods: Moving sharks Moving sharks is a way of reducing shark attacks and reducing shark mortality by capturing, transporting and releasing the sharks further offshore. In Recife, Brazil, sharks that were near shore were captured and physically moved offshore, with 70% of potentially aggressive sharks and 78% of other animals caught released alive. Sharks that were moved did not return to the same location. This technique has also been successfully demonstrated in the NSW North Coast SMART drumline trial (Australia), where 99% of targeted sharks and 98% of other animals caught were released alive. Other protection methods: Electronic shark deterrents Electronic devices create an electromagnetic field to deter shark attacks and are used by surfers, scuba divers, snorkelers, spearfishers, ocean kayak fishers, swimming areas off boats and for ocean fishing. The Ocean Guardian devices are considered among the few electrical devices on the market that have undergone independent trials to determine their effectiveness at deterring shark attacks. While it is noted that the Shark Shield technology does not work in all situations and divers have been attacked whilst wearing Shark Shield, new modelling research from Flinders University states that the proper use of personal electronic deterrents is an effective way to prevent future deaths and injuries.
It is estimated that these devices could save up to 1063 Australian lives along the coastline over the next 50 years. The West Australian government in 2017 announced that it is supporting the Ocean Guardian FREEDOM7 and the Ocean Guardian FREEDOM+ Surf via a $200 subsidy. Other protection methods: Shark tagging and tracking Across the world, a sample of sharks has been tagged with electronic devices that transmit their location. Acoustic tags transmit pulses which are detected by underwater listening stations when the sharks swim close by, typically within 500 metres. Fin-mounted satellite tags are also commonly used. These tags allow shark movements and behaviour to be monitored and studied, and swimmers and surfers can be warned if a shark is detected close to shore. However, a limitation of the system is that tagging will only highlight a very small portion of the dangerous sharks present. It may also lull people into a false sense of security. Other protection methods: Shark spotting Shark-spotting programs using drones, fixed-wing aircraft, helicopters, patrol boats, beach patrols, observation towers and even blimps are being used and trialled in various locations across the globe. However, visibility issues with water clarity can be a problem, particularly with aerial patrols, which have been found to identify less than 20% of sharks present. There is also the financial cost of hiring aircraft and/or personnel to conduct the surveillance. Other protection methods: Personal shark repellents A shark repellent is any method of driving sharks away from an area and includes magnetic shark repellents, electropositive shark repellents, electrical repellents (including Shark Shield) and semiochemicals. One example is a product called Anti-Shark 100, an aerosol can that contains an extract of dead shark tissue. There is a range of evidence that supports the effectiveness of this product. Chemical repellents have been researched since before the 1940s, some of which have raised concerns as to their effectiveness. The semiochemical used in the Anti-Shark 100 product has been independently tested and verified on Caribbean reef sharks; however, there are concerns it may attract tiger and white sharks. Other examples of personal shark protection technologies include wearing interruption-patterned or camouflage wetsuits, magnetic repellents incorporating a small magnet in a band worn on the wrist or ankle, acoustic repellents that mimic the sound of orcas, and changing surfboard colours. However, either the products associated with these technologies have not been independently tested, or independent tests highlighted that they did not work. In 2018, independent tests were carried out on five shark repellent technologies using great white sharks. Only Shark Shield's Ocean Guardian Freedom+ Surf showed measurable results, with encounters reduced from 96% to 40%. Rpela (electrical repellent technology), the SharkBanz bracelet and SharkBanz surf leash (magnetic shark repellent technology) and Chillax Wax (semiochemical) showed no measurable effect on reducing shark attacks. Other protection methods: Protection by dolphins There are documented instances of bottlenose dolphins protecting humans from shark attacks, one off the coast of New Zealand in 2004 and one attack on a surfer in northern California in August 2007.
There is no accepted explanation for this behavior; as mentioned in the Journal of Zoology, "The importance of interactions between sharks and cetaceans has been a subject of much conjecture, but few studies have addressed these interactions". In some cases, sharks have been seen attacking, or trying to attack, dolphins. The presence of porpoises does not indicate the absence of sharks, as both eat the same food, and surfers have been attacked by sharks whilst in the presence of dolphins. Other protection methods: Shark sonar In September 2015, a shark sonar created by Ric Richardson was tested in Byron Bay, Australia. It is a passive sonar device that does not interfere with animals like dolphins. The final device will sit on the ocean floor and is claimed to detect sharks over 2.5 metres long when they are 100 metres away. An alarm will notify people that they have to leave the water. Other shark sonar systems have been tested; however, results have concluded that they are not effective. By country: Australia Queensland and New South Wales In Queensland and NSW, systematic long-term shark control programs using shark nets and drumlines are utilised to reduce the risk of shark attack. Since 1936, shark nets have been utilised off Sydney beaches. Nowadays they are employed on both NSW and Queensland beaches; 83 beaches are meshed in Queensland compared with NSW's current 51. The technique of setting drum lines is also used in Queensland and New South Wales, where they have been used in association with shark nets. Before 1962, there were 82 recorded attacks. Since the policy was implemented there has only been one recorded death, at Amity Point in January 2006, when 21-year-old Sarah Kate Whiley was attacked by as many as three sharks in Rainbow Channel. The attack occurred in an unpatrolled area. Queensland Fisheries Minister John McVeigh has described the longevity of the netting and drum line program as "a good indicator that it had the support of most Queenslanders". The baited drum lines attract sharks from within a 3–5 km radius, preventing sharks from reaching swimming areas. They also capture less bycatch than shark nets. There were a total of 97 fatalities attributed to shark attacks in Queensland between 1858 and 2014. In New South Wales, there were a total of 96 fatalities attributed to shark attack between 1771 and 2014. By country: The current shark mitigation programs in Queensland and New South Wales have been called culls, and have been criticized by environmentalists, who say removing sharks harms the marine ecosystem. Between 1950 and 2008, 577 great white sharks and 352 tiger sharks died in the nets in New South Wales — also during this period, 15,135 marine animals were caught and died in the nets, including whales, turtles, rays, dolphins, and dugongs. In Queensland, from 2001 to 2018, a total of 10,480 sharks died on drum lines. By country: NSW North Coast shark net and smart drumline trial Following 11 shark attacks along the NSW north coast between 2014 and 2016, including two fatalities, shark nets and SMART drumlines were deployed in December 2016 to cover five additional beaches along the NSW North Coast in a two-year trial. Five nets were deployed off Seven Mile Beach off Lennox Head; Sharpes, Shelly and Lighthouse beaches off Ballina; and Main Beach at Evans Head. Twenty-five drumlines were also deployed among nets at Ballina and Evans Head beaches (15 off Ballina; 10 off Evans Head).
The trial was successful, with no shark attacks occurring at the protected beaches. The SMART drumlines caught 230 targeted sharks, with 99% of targeted sharks and 98% of other animals caught released alive. The SMART drumlines are now being expanded to other NSW regions. The shark net trial caught 11 targeted sharks, had a 54% survival rate for all animals caught, and will not be continued. By country: Western Australia In August 2018, following continual shark attacks and public pressure, the West Australian state government announced a trial of "smart" drumlines along Western Australia's South West coast, near Gracetown. On 12 May 2021 the trial ended after only two white sharks were captured, and it was concluded the trial did not reduce the risk of shark attack. Western Australia also deploys shark enclosures in a range of locations, as well as aerial shark spotters, beach patrols, shark tagging efforts and associated tracking and notification systems. By country: There were a total of 114 unprovoked shark attacks in Western Australia between 1870 and 2016, including 16 fatalities since 2000. By country: South Australia In South Australia, spotter planes and a small number of patrolled swimming beaches are the only methods used to mitigate the risk of shark attack. On 6 February 2014, Port Lincoln tuna "baron" Hagen Stehr expressed his support for the Western Australian shark cull. He also stated that his business' spotter planes had observed increases in great white shark numbers off the west coast of Eyre Peninsula. He acknowledged that his tuna farming operations attract some sharks. He told The Advertiser that he believed "selected culling of sharks is a must. It is crazy stuff to put them under protection so it becomes a major offence to kill them." Critics of Stehr's stance note that a cull of sharks in SA would be beneficial to his business, as tuna is a major source of food for sharks. Shark attack survivor turned conservationist Rodney Fox has spoken out against the cull, saying "When a shark attacks someone, we go 'the shark needs to be punished'. They don't live under our laws. It's a different world down there and it should be treated differently." As of September 2014, there had been a total of 82 shark attacks in South Australia since 1836, including 20 fatalities. By country: South Africa In KwaZulu-Natal, South Africa, a long-term shark control program utilising a combination of shark nets and drum lines is used to mitigate the risk of shark attack. The region's shark attack statistics primarily reflect the effectiveness of netting, as drum lines were only introduced recently, following their successful use for over 40 years in Queensland, Australia. The KwaZulu-Natal Sharks Board (KZNSB) says "Both types of equipment function by reducing shark numbers in the vicinity of protected beaches, thereby lowering the probability of encounters between sharks and people at those beaches." The KZNSB says, "At Durban, from 1943 until the installation of shark nets in 1952, there were seven fatal attacks. Since the installation of nets there have been no fatalities at Durban and no incidents resulting in serious injury." The KZNSB also says, "At KwaZulu-Natal's other [netted] beaches, from 1940 until most of those beaches were first netted in the 1960s, there were 16 fatal attacks and 11 resulting in serious injury. In the three decades since nets were installed, there have been no fatal attacks at those beaches and only four resulting in serious injury."
The presence of nets has greatly reduced the number of shark attacks along Natal beaches. It is unclear whether more sharks caught on drum lines survive when compared to shark net captures in KwaZulu-Natal, but the lines have shown reduced non-target species bycatch. Drum lines set in the region are baited with 500 grams of meat per hook and are believed to attract sharks only from several hundred metres away. Seasonal and temporary bathing bans and "discretionary bathing" are additional strategies employed in the region. Bans often follow net displacement or damage due to storms or swell, or net removal due to whale stranding. Nets are also removed during the annual sardine run to limit the degree of bycatch during the event. Pressure from the tourism industry to reinstate nets during the sardine run has previously proven "disastrous", resulting in large numbers of shark and dolphin mortalities. The "shark control" program in KwaZulu-Natal has been called "archaic" and "disastrous to the ecosystem" by environmentalists. Over a 30-year period, more than 33,000 sharks died in KwaZulu-Natal's shark mitigation program — also during this period, 2,211 turtles, 8,448 rays, and 2,310 dolphins died. Environmental groups say KwaZulu-Natal's "shark control" program is unethical and harms the marine ecosystem. Activists in Durban say that Durban's shark nets serve no purpose. By country: United States In Hawaii, seven short-term shark culls utilising long-lines took place between 1959 and 1976. During this time, 4,668 sharks were caught, at a cost of US$300,000. Although the Hawaiian resident and tourist population increased dramatically, the number of shark attacks remained constant (in contrast to Florida, where the number of shark attacks has increased in line with human population increases), and the short-term programs were not considered a success by the authors of a shark culling study. The study concluded that the culls "do not appear to have had measurable effects on the rate of shark attacks in Hawaiian waters". By country: The publication came at a time of intense community debate over culling in Hawaii, documented by local journalist Jim Borg in his 1993 book, Tigers of the Sea, Hawaii's Deadly Tiger Sharks. Borg detailed the debates between the study's authors and other scientists who argued that the experiences of South Africa's KwaZulu-Natal Sharks Board demonstrated the effectiveness of culling. The debate began with the November 1991 shark attack which resulted in the death of Martha Morrell off Maui and embroiled the subsequently formed Hawaii Shark Task Force, Borg writes. By country: Statistics from 2013 showed the number of shark attacks in Hawaii was spiking. By country: Brazil Drumlines and long lines have been used successfully in Recife in a long-term program, where shark attacks have been reduced by around 97%. Sharks are initially caught on baited drum lines. Once captured, the sharks (if found alive) are humanely handled and tagged. They are then relocated offshore and their movements are tracked. The project is known as the Shark Monitoring Program of Recife (SMPR). A report assessing the program's performance was published in 2013. It stated: "Overall, the SMPR seems to be less detrimental than shark meshing strategies while clearly contributing for enhancing bather safety; thus, it may provide an effective, ecologically balanced tool for assisting in shark attack mitigation."
Réunion Island (France) The prevalence of shark attacks at Réunion Island—there were 19 attacks between 2011 and 2016, including seven that were fatal—prompted Réunion's government to carry out a range of systematic long-term shark protection activities, including a shark cull utilising "smart" drumlines and longlines. In the five years to August 2016, more than 170 sharks died as part of the cull. In 2015, two shark-proof fences were strung at beaches to the west of the island, at a cost of €2 million. Maintenance of the fences is projected to cost €1 million a year. The protective nets / shark enclosures at the two beaches have a total length of just under 1 mile and are subject to damage from heavy swell. On 27 August 2016, a surfer lost an arm and a foot in a shark attack while surfing within one of the enclosures. It was reported that at the time of the attack there was a two-metre hole in the nets, most probably caused by the swell.
**Simple harmonic motion** Simple harmonic motion: In mechanics and physics, simple harmonic motion (sometimes abbreviated SHM) is a special type of periodic motion that an object experiences due to a restoring force whose magnitude is directly proportional to the distance of the object from an equilibrium position and which acts towards the equilibrium position. It results in an oscillation described by a sinusoid which continues indefinitely (if uninhibited by friction or any other dissipation of energy). Simple harmonic motion: Simple harmonic motion can serve as a mathematical model for a variety of motions, but is typified by the oscillation of a mass on a spring when it is subject to the linear elastic restoring force given by Hooke's law. The motion is sinusoidal in time and demonstrates a single resonant frequency. Other phenomena can be modeled by simple harmonic motion, including the motion of a simple pendulum, although for it to be an accurate model, the net force on the object at the end of the pendulum must be proportional to the displacement (and even so, it is only a good approximation when the angle of the swing is small; see small-angle approximation). Simple harmonic motion can also be used to model molecular vibration. Simple harmonic motion: Simple harmonic motion provides a basis for the characterization of more complicated periodic motion through the techniques of Fourier analysis. Introduction: The motion of a particle moving along a straight line with an acceleration whose direction is always towards a fixed point on the line and whose magnitude is proportional to the distance from the fixed point is called simple harmonic motion. Introduction: In the diagram, a simple harmonic oscillator, consisting of a weight attached to one end of a spring, is shown. The other end of the spring is connected to a rigid support such as a wall. If the system is left at rest at the equilibrium position then there is no net force acting on the mass. However, if the mass is displaced from the equilibrium position, the spring exerts a restoring elastic force that obeys Hooke's law. Introduction: Mathematically, the restoring force F is given by F = −kx, where F is the restoring elastic force exerted by the spring (in SI units: N), k is the spring constant (N·m−1), and x is the displacement from the equilibrium position (m). Introduction: For any simple mechanical harmonic oscillator: When the system is displaced from its equilibrium position, a restoring force that obeys Hooke's law tends to restore the system to equilibrium. Once the mass is displaced from its equilibrium position, it experiences a net restoring force. As a result, it accelerates and starts going back to the equilibrium position. When the mass moves closer to the equilibrium position, the restoring force decreases. At the equilibrium position, the net restoring force vanishes. However, at x = 0, the mass has momentum because of the acceleration that the restoring force has imparted. Therefore, the mass continues past the equilibrium position, compressing the spring. A net restoring force then slows it down until its velocity reaches zero, whereupon it is accelerated back to the equilibrium position again. Introduction: As long as the system has no energy loss, the mass continues to oscillate. Thus simple harmonic motion is a type of periodic motion. If energy is lost in the system, then the mass exhibits damped oscillation. Note that if the real-space and phase-space plots are not co-linear, the phase-space motion becomes elliptical.
The area enclosed depends on the amplitude and the maximum momentum. Dynamics: In Newtonian mechanics, for one-dimensional simple harmonic motion, the equation of motion, which is a second-order linear ordinary differential equation with constant coefficients, can be obtained by means of Newton's 2nd law and Hooke's law for a mass on a spring: F = m(d²x/dt²) = −kx, where m is the inertial mass of the oscillating body, x is its displacement from the equilibrium (or mean) position, and k is a constant (the spring constant for a mass on a spring). Dynamics: Therefore, d²x/dt² = −(k/m)x. Solving the differential equation above produces a solution that is a sinusoidal function: x(t) = c₁cos(ωt) + c₂sin(ωt), where ω = √(k/m). The meaning of the constants c₁ and c₂ can be easily found: setting t = 0 in the equation above we see that x(0) = c₁, so that c₁ is the initial position of the particle, c₁ = x₀; taking the derivative of that equation and evaluating at zero we get that ẋ(0) = ωc₂, so that c₂ is the initial speed of the particle divided by the angular frequency, c₂ = v₀/ω. Thus we can write: x(t) = x₀cos(ωt) + (v₀/ω)sin(ωt). This equation can also be written in the form x(t) = A cos(ωt − φ), where A = √(c₁² + c₂²), tan φ = c₂/c₁, sin φ = c₂/A, cos φ = c₁/A, or equivalently A = |c₁ + c₂i| and φ = arg(c₁ + c₂i). In the solution, c₁ and c₂ are two constants determined by the initial conditions (specifically, the initial position at time t = 0 is c₁, while the initial velocity is c₂ω), and the origin is set to be the equilibrium position. Each of these constants carries a physical meaning of the motion: A is the amplitude (maximum displacement from the equilibrium position), ω = 2πf is the angular frequency, and φ is the initial phase. Using the techniques of calculus, the velocity and acceleration as a function of time can be found: Speed: v = ω√(A² − x²); Maximum speed: v = ωA (at the equilibrium point); Maximum acceleration: a = Aω² (at the extreme points). By definition, if a mass m is under SHM its acceleration is directly proportional to displacement: a(x) = −ω²x. Dynamics: Since ω = 2πf, f = (1/2π)√(k/m), and, since T = 1/f where T is the time period, T = 2π√(m/k). These equations demonstrate that simple harmonic motion is isochronous (the period and frequency are independent of the amplitude and the initial phase of the motion). Energy: Substituting ω² with k/m, the kinetic energy K of the system at time t is K = ½mv² = ½kA²sin²(ωt − φ), and the potential energy is U = ½kx² = ½kA²cos²(ωt − φ). In the absence of friction and other energy loss, the total mechanical energy has a constant value E = K + U = ½kA². Examples: The following physical systems are some examples of simple harmonic oscillators. Examples: Mass on a spring A mass m attached to a spring of spring constant k exhibits simple harmonic motion in closed space. The equation for the period, T = 2π√(m/k), shows that the period of oscillation is independent of the amplitude, though in practice the amplitude should be small. The above equation is also valid in the case when an additional constant force is being applied on the mass, i.e. the additional constant force cannot change the period of oscillation. Examples: Uniform circular motion Simple harmonic motion can be considered the one-dimensional projection of uniform circular motion. If an object moves with angular speed ω around a circle of radius r centered at the origin of the xy-plane, then its motion along each coordinate is simple harmonic motion with amplitude r and angular frequency ω.
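The closed-form solution derived in the dynamics section above can be checked numerically. The sketch below (with illustrative, arbitrarily chosen parameter values, not values from the text) integrates m·x″ = −kx with a simple semi-implicit Euler scheme and compares the result against x(t) = x₀cos(ωt) + (v₀/ω)sin(ωt):

```python
import numpy as np

# Illustrative parameters: mass, spring constant, and initial conditions
# for the equation of motion m * x'' = -k * x.
m, k = 0.5, 8.0            # kg, N/m
x0, v0 = 0.1, 0.0          # m, m/s
w = np.sqrt(k / m)         # angular frequency omega = sqrt(k/m), rad/s

# Semi-implicit (symplectic) Euler integration: update velocity first,
# then position, which keeps the oscillation from artificially growing.
dt, steps = 1.0e-4, 20000
x, v = x0, v0
for _ in range(steps):
    v += (-k / m) * x * dt
    x += v * dt

t = steps * dt
x_exact = x0 * np.cos(w * t) + (v0 / w) * np.sin(w * t)
print(f"t = {t:.2f} s: numerical x = {x:+.5f} m, analytic x = {x_exact:+.5f} m")
print(f"period T = 2*pi*sqrt(m/k) = {2 * np.pi * np.sqrt(m / k):.4f} s")
```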
Examples: Oscillatory motion It is the motion of a body when it moves to and fro about a definite point. This type of motion is also called oscillatory motion or vibratory motion. The time period can be calculated by T = 2π√(l/g), where l is the distance from the axis of rotation to the centre of mass of the object undergoing SHM and g is the gravitational field constant. This is analogous to the mass-spring system. Examples: Mass of a simple pendulum In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length l with gravitational acceleration g is given by T = 2π√(l/g). This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity, g; therefore a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of g varies slightly over the surface of the earth, the time period will vary slightly from place to place and will also vary with height above sea level. Examples: This approximation is accurate only for small angles because the expression for the angular acceleration α is proportional to the sine of the displacement angle: α = −(mgl/I) sin θ, where I is the moment of inertia. When θ is small, sin θ ≈ θ, and therefore the expression becomes α = −(mgl/I) θ, which makes the angular acceleration directly proportional and opposite to θ, satisfying the definition of simple harmonic motion (that net force is directly proportional to the displacement from the mean position and is directed towards the mean position). Examples: Scotch yoke A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion. The linear motion can take various forms depending on the shape of the slot, but the basic yoke with a constant rotation speed produces a linear motion that is simple harmonic in form.
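As a worked illustration of the pendulum period formula discussed above, this short sketch evaluates T = 2π√(l/g) for a one-metre pendulum on Earth and on the Moon, using the approximate surface gravities 9.81 m/s² and 1.62 m/s²:

```python
import math

def pendulum_period(length_m: float, g: float) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(l/g)."""
    return 2.0 * math.pi * math.sqrt(length_m / g)

L = 1.0          # pendulum length in metres
g_earth = 9.81   # m/s^2
g_moon = 1.62    # m/s^2 (approximate lunar surface gravity)

print(f"Earth: T = {pendulum_period(L, g_earth):.3f} s")
print(f"Moon:  T = {pendulum_period(L, g_moon):.3f} s (same pendulum swings more slowly)")
```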
**Army Battle Command System** Army Battle Command System: The Army Battle Command System (ABCS) is a digital Command, Control, Communications, Computers and Intelligence (C4I) system for the US Army. It includes a mix of fixed/semi-fixed and mobile networks, and is designed for interoperability with US and Coalition C4I systems. Army Battle Command System (ABCS) Version 6.4 is an integrated suite that allows troops to obtain an automated view of friendly activity and supply movement, plan fires, receive situation and intelligence reports, view the airspace, and receive automatically disseminated weather reports. Systems: ABCS is intended to function as a system of systems, with the ultimate goal of offering something similar to what the internet provides to civilians: just as internet users have no need to know the location of the network they connect to, ABCS is intended to provide the same transparency. In this way, ABCS allows commanders to see multiple systems on one screen and easily transfer data from one to the next. The system also provides up-to-date information on a map-based display. Systems: Despite these capabilities, the system does have limitations. In particular, it does not integrate well with the GCCS systems used for joint operations, creating a risk of bad data and database errors in such scenarios. Systems: ABCS combines seven packages into a single system: The Maneuver Control System (MCS) allows the operator to define routes and view overlays to provide situational awareness. MCS is being phased out and replaced with "Lightning", an ABCS-enabled Flash/Java program that uses a web browser interface. It allows users to publish products from CPOF without using the BCS (Battle Command Server) PASS (Publish and Subscribe Service) server, making Lightning more flexible, as it can be used on any Secret Internet Protocol Router Network (SIPRNet) system because no interface software is required besides the web browser (typically IE 8.0 or higher; not compatible with Opera or Firefox at this time). The system was developed and integrated by Ford Aerospace and Communications Corp. (FACC), Colorado Springs, Colorado. Systems: The Air and Missile Defense Workstations (AMDWS) provide soldiers with an air defense picture, and support the Surface Launched Advanced Medium Range Air-to-Air Missile (SLAM-RAAM) Air Defense Artillery (ADA) system by providing an automated defense planning capability for deployed units. The Battle Command Sustainment & Support System (BCS3) integrates multiple data sources into one program and provides commanders with a visual layout of battlefield logistics. The All Source Analysis System (ASAS) can analyze incidents and help determine the patterns of Improvised Explosive Device-related incidents; a commander can determine locations that are typical for IED attacks, so that they know to warn their soldiers of such a threat. The Advanced Field Artillery Tactical Data System (AFATDS) is used to plan and execute fires during each phase of action, whether a deliberate attack or a defensive operation. AFATDS is fielded to all Active Component Army and Marine Corps units, and to about 90% of the National Guard. AFATDS is installed on large-deck amphibious assault vessels of the United States Navy.
Systems: The Force XXI Battle Command, Brigade & Below/Blue Force Tracking (FBCB2/BFT) system uses satellite and terrestrial communications technology to track and display friendly vehicles and aircraft, which appear on a computer screen as blue icons over a topographical map or satellite image of the ground. Commanders and soldiers can add red icons that show up as enemy on the screen and are simultaneously broadcast to all the other FBCB2/BFT users on the battlefield. There are about 15,000 FBCB2/BFT systems in use today. Systems: The Tactical Airspace Integration System (TAIS) is an automated system for battlefield airspace management. Additional systems that are integrated with the ABCS suite include the Digital Topographic Support System (DTSS), which provides digital terrain analysis, terrain database(s), updated terrain products, and hard-copy reproduction, in support of terrain visualization, IPB, C2, and Battle Staff DMP (CORPS/DIV/BDE). Systems: The Global Command and Control System - Army (GCCS-A) provides a common picture of Army tactical operations to the Joint and Coalition community and facilitates interoperability of systems across Army/Joint theaters; however, no true synchronization occurs with PASS/DDS, which introduces many issues on the battlefield for Soldiers, Marines, Sailors and Air Force personnel and can potentially put their lives at risk. Systems: The Integrated Meteorological System (IMETS) provides commanders at all echelons with an automated weather system to receive, process, and disseminate weather observations, forecasts, and weather and environmental effects decision aids for ABCS. The Command Post of the Future (CPOF) application communicates with ABCS through GCCS-J, DDS/PASS and other means.
**Mechanically stabilized earth** Mechanically stabilized earth: Mechanically stabilized earth (MSE or reinforced soil) is soil constructed with artificial reinforcing. It can be used for retaining walls, bridge abutments, seawalls, and dikes. Although the basic principles of MSE have been used throughout history, MSE was developed in its current form in the 1960s. The reinforcing elements used can vary but include steel and geosynthetics. MSE is the term usually used in the US to distinguish it from the trade name "Reinforced Earth". Elsewhere "reinforced soil" is the generally accepted term. Description: MSE walls stabilize unstable slopes and retain the soil on steep slopes and under crest loads. The wall face is often of precast segmental blocks, panels, or geocells that can tolerate some differential movement. The walls are infilled with granular soil, with or without reinforcement, while retaining the backfill soil. Reinforced walls utilize horizontal layers, typically of geogrids. The reinforced soil mass, along with the facing, forms the wall. In many types of MSE wall, each vertical fascia row is inset, thereby providing individual cells that can be infilled with topsoil and planted with vegetation to create a green wall. Description: The main advantages of MSE walls compared to conventional reinforced concrete walls are their ease of installation and quick construction. They do not require formwork or curing, and each layer is structurally sound as it is laid, reducing the need for support, scaffolding, or cranes. They also do not require additional work on the facing. Description: In addition to the flexibility of MSE walls in design and construction, seismic testing conducted in a large-scale shaking-table laboratory at the Japan National Institute of Agricultural Engineering (Tsukuba City) showed that modular block reinforced walls, and even more so geocell retention walls, retain sufficient flexibility to withstand large deformations without loss of structural integrity, and have high seismic load resistance. Highway overpasses along interstates often employ the INTER-LOK system. History: Straw, sticks, and branches have been used to reinforce adobe bricks and mud dwellings since the earliest parts of human history. Parts of the Great Wall of China are formed as reinforced soil, as are the ziggurats of the Middle East. In the 1960s, French engineer Henri Vidal invented the modern form of MSE, termed Terre Armee (reinforced earth), using steel strip reinforcements. The first geosynthetic-reinforced soil walls were built in France in 1970 and 1971. Geosynthetic-reinforced walls have been in use in the United States since 1974. Bell and Steward (1977) describe some of these early applications, which were primarily geotextile wrapped-face walls supporting logging roads in the northwestern United States. Since the 1980s, the development of reinforced soil has been dramatic, using a range of construction forms and reinforcements including metallic and polymeric anchors, strips, and grids. The first modern forms of reinforced soil were constructed in Europe in the late 1960s. The first MSE wall in the United States was built in 1971 on State Route 39 near Los Angeles. Reinforcement: Reinforcement placed in horizontal layers throughout the height of the wall provides the tensile strength to hold the soil together. The reinforcement materials of MSE can vary. Originally, long steel strips 50 to 120 mm (2 to 5 in) wide were used as reinforcement.
These strips are usually, though not always, ribbed to provide added friction. There are also prefabricated pile sleeve options to reduce negative skin friction on piles embedded behind MSE bridge abutments. Sometimes steel grids or meshes are also used as reinforcement. Several types of geosynthetics can be used, including geogrids and geotextiles. The reinforcing geosynthetics can be made of high-density polyethylene, polyester, and polypropylene. These materials may be ribbed and are available in various sizes and strengths. For erosion control and load support, the upper layer can be reinforced by geocell materials.
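To give a sense of the calculation behind sizing these reinforcement layers, the sketch below applies the standard textbook Rankine active earth pressure approach sometimes used for preliminary MSE design. This method, and all the soil parameters and spacings in it, are illustrative assumptions, not values or procedures taken from this article.

```python
import math

# Illustrative only: a textbook Rankine sketch of the preliminary sizing
# calculation behind reinforcement layers in an MSE wall. The soil
# parameters and layer spacing below are hypothetical, not from the article.
phi = math.radians(34)   # friction angle of granular backfill (assumed)
gamma = 19.0             # unit weight of backfill, kN/m^3 (assumed)
spacing = 0.75           # vertical spacing between reinforcement layers, m
height = 6.0             # wall height, m

# Rankine active earth pressure coefficient: Ka = tan^2(45 deg - phi/2)
ka = math.tan(math.radians(45) - phi / 2) ** 2

# Tension per metre of wall width carried by each layer, approximated as
# the horizontal stress at the layer depth times one tributary spacing.
for i in range(int(height / spacing)):
    z = (i + 1) * spacing                 # depth of layer below wall crest
    sigma_h = ka * gamma * z              # lateral earth pressure, kPa
    tension = sigma_h * spacing           # kN per metre of wall width
    print(f"layer at z={z:.2f} m: sigma_h={sigma_h:.1f} kPa, T={tension:.1f} kN/m")
```

The tension grows linearly with depth, which is why deeper layers in such a design need stronger strips or closer spacing.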
**Dasolampanel** Dasolampanel: Dasolampanel (INN, USAN, code name NGX-426) is an orally bioavailable analog of tezampanel, and thereby a competitive antagonist of the AMPA and kainate receptors, which was under development by Raptor Pharmaceuticals/Torrey Pines Therapeutics for the treatment of chronic pain conditions including neuropathic pain and migraine. It was developed as a follow-on compound to tezampanel, since tezampanel is not orally bioavailable and must be administered by intravenous injection, but ultimately neither drug was ever marketed.
**Lab website** Lab website: A lab(s) website is a specific type of website most commonly dedicated to research and development programs. Named after the classic scientific research environment, the laboratory, existing lab websites predominantly fall into two categories: the real-world and the virtual. Real-world laboratory websites: Real-world lab sites relate to the activities and research conducted by laboratories existing outside the Internet. In general, these sites tend to offer users a chance to see results of past research, rather than detailed views of contemporary research. Examples of these types of labs from the aviation world include Boeing's Phantom Works, which covers the research arm of the Boeing Corporation, and Lockheed Martin's Advanced Development Program, aka Skunk Works. Virtual laboratory websites: A number of companies and institutions have created virtual lab websites specifically for research into Internet-based products. This research environment is seen as both a podium and a playpen for Internet-borne companies. In many cases, the labs offer visitors a chance to learn more about the company's products currently in development and to try the work in progress. One of the best-known examples is Google Labs. Since its inception, Google Labs has resulted in the trial and launch of live products such as Gmail, Google Calendar, and Google Videos. Virtual laboratory websites: Similar examples from large web-based companies include Yahoo! Next and Microsoft Live Labs. One recent notable addition is Digg Labs, illustrating the Digg social bookmarking community's activities in near real-time; the labs are composed of the swarm and the stack activity displays. Mozilla has also added a lab area to its product offering. Virtual laboratories are not the sole domain of companies and institutions: some are created by individuals and exist solely as websites. Media labs: Traditional print and broadcast media companies have also begun to experiment with dedicating specific areas on their websites to advanced projects. One of the first companies credited with creating its own lab area was Reuters. When founded, the Reuters lab offered a limited number of products for visitors to experiment with, including the news and quotes widget and their mobile service. Media labs: The BBC has created a derivation on the lab idea with their BBC Backstage site. Backstage's slogan, "Use our stuff to build your stuff", openly invites developers to use the BBC's various feeds and APIs to power a new range of non-commercial products and services. The Backstage site has allowed the BBC to create a developer network, a location for all those working with the BBC's content to come together and share their ideas and prototypes amongst their peers. The site also contains a blog. The Guardian newspaper in the UK has taken the idea of a lab further with its Comment is free product. Created by Ben Hammersley, Comment is free was made as a fully interactive extension to Guardian Unlimited's blogging system. The site contains the political and opinion material from both The Guardian and its sister paper The Observer, as well as work from over 600 separate subject-based experts, selected to write on their topics of knowledge. Users are encouraged to read and comment, and all posts are automatically linked to Technorati to return contextual blogosphere results.
Media labs: In November 2006, NEWS.com.au, the breaking news section of News Digital Media, launched News Lab, the first media-driven R&D website within News Corporation (N.B. News Corp also operates FIM Lab, but this is currently without a website). The site aims to collect users' feedback on new products and amend them accordingly. Monitoring experimentation: While some media companies choose to create their own experimental areas, others create dedicated areas to document the efforts of others. The Washington Post's blog section, referred to as the Mashington Post, records Internet users' experimentation with combinations of pre-existing data, referred to as mashups.
**Tiapride** Tiapride: Tiapride is a drug that selectively blocks D2 and D3 dopamine receptors in the brain. It is used to treat a variety of neurological and psychiatric disorders including dyskinesia, alcohol withdrawal syndrome, negative symptoms of psychosis, and agitation and aggression in the elderly. A derivative of benzamide, tiapride is chemically and functionally similar to other benzamide antipsychotics such as sulpiride and amisulpride, known for their dopamine antagonist effects. Medical uses: Alcoholism Research in animal models and clinical studies in alcoholic patients have found that tiapride has anxiolytic effects. Dopamine hyperactivity has been linked with alcohol withdrawal syndrome (AWS), suggesting that tiapride's antidopaminergic effects are the most likely mechanism for its clinical efficacy, although some researchers suspect other mechanisms may also be involved. Alcoholic patients treated with tiapride at a dosage of 300 mg/day reported reduced psychological distress and improved abstinence from alcohol. In another study, in which alcoholic patients were given titrated doses up to 800 mg/day, subjects showed significant improvements in ratings of withdrawal, craving, psychiatric symptoms, and quality of life. While tiapride does not affect positive symptoms of psychosis, such as the hallucinosis or delirium sometimes manifested in alcohol withdrawal syndrome, it is well suited to treating alcohol dependency when combined with a drug such as carbamazepine that addresses those symptoms, because its metabolism does not depend on liver function and it has low potential for abuse. This sets it apart from the benzodiazepines, which are contraindicated with alcohol and can be addictive. Moreover, tiapride's rapid onset makes intravenous or intramuscular injection prior to or during withdrawal episodes particularly effective. Medical uses: Agitation and aggression Agitation and aggression are also associated with hyperdopaminergic activity. Antipsychotic drugs are the most common treatment for these symptoms, but often come with a host of side effects including orthostatic hypotension and deficits in vigilance and attention. One clinical study in agitated elderly patients compared the effects of tiapride, haloperidol, and placebo and found that while the two drugs had comparable efficacy superior to the placebo effect, tiapride had fewer and less severe side effects than haloperidol. Tiapride's selectivity for the limbic system, which is associated with emotion, could underlie its particular efficacy in treating these affective disorders. Moreover, its selectivity for the dopaminergic system is thought to account for its avoidance of the side effects typically associated with other neuroleptic drugs, such as chlorpromazine, which act on a number of neurotransmitter systems. Medical uses: Movement disorders While tiapride preferentially targets the limbic system over the striatum, its moderate antagonistic effect on striatal dopamine receptors makes it effective in treating motor deficits that involve this area, such as tardive dyskinesia and chorea. Tiapride's moderate efficacy at D2 receptors may explain why it is able to treat motor symptoms without the extrapyramidal symptoms caused by excess dopamine blockade, which are sometimes seen with haloperidol or chlorpromazine. One clinical study of patients with tardive dyskinesia associated with Parkinson's disease found that tiapride significantly improved motor abilities without affecting other parkinsonian symptoms.
Side effects: Although it is considered a "safe" medicine, it is, like sulpiride, strictly contraindicated for patients under the age of 18 due to its effects during the process of puberty. This is likely related to its side effects on levels of the hormone prolactin, which is involved in sexual development. There are also insufficient clinical data on its other side effects in adolescents. Side effects: Tiapride has been found to cause excess prolactin levels in plasma, which can cause decreased libido, infertility, and increased risk of breast cancer. This is because dopamine plays a primary role in regulating prolactin release by binding to D2 receptors on prolactin-secreting cells in the anterior pituitary; when tiapride blocks these receptors, the cells are disinhibited and release more prolactin. Side effects: The side effect reported most commonly to the U.S. Food and Drug Administration (FDA) is rhabdomyolysis, a condition characterized by muscle tissue breakdown. Cardiac abnormalities such as prolongation of the QT interval and torsades de pointes have also been observed. Dosages above approximately 300 mg/day risk inducing tardive dyskinesia. However, given the drug's fairly wide window of tolerable doses, dosages can often be titrated to obtain the desired effect without bringing about motor deficits. In general, tiapride is considered an atypical antipsychotic because of its low risk for extrapyramidal symptoms, such as akinesia and akathisia. These effects are thought to be reduced in tiapride relative to typical antipsychotics because of its selectivity for the limbic system over the extrapyramidal areas that control movement. Pharmacodynamics: Tiapride is a dopamine D2 and D3 receptor antagonist. It is more selective than other neuroleptic drugs such as haloperidol and risperidone, which not only target four of the five known dopamine receptor subtypes (D1-4), but also block serotonin (5-HT2A, 2C), α1- and α2-adrenergic, and histamine H1 receptors. Compared to these drugs, tiapride has a relatively moderate affinity for its target receptors, displacing 50 percent of 3H-raclopride binding at a concentration of 320 nM at D2 receptors and 180 nM at D3 receptors. Pharmacodynamics: Tiapride displays a relatively high regional selectivity for limbic areas. One study found that, in contrast with haloperidol, which displays equal affinity for receptors in the rat limbic system and striatum, tiapride shows over three times as much affinity for limbic areas as for striatal areas. Another study in rats found tiapride's affinity for the septum, a limbic region, to be over thirty times as high as for the striatum. Efficacy at the D2 receptor is moderate, with 80 percent of receptors occupied even in the presence of excess tiapride concentrations. Pharmacokinetics: Tiapride is primarily taken orally in the form of a tablet, but can also be administered via intravenous or intramuscular injection. A liquid oral formulation is also available for elderly patients with difficulty chewing solids. For all three methods of administration, the bioavailability of tiapride is approximately 75 percent. Peak plasma concentrations are attained between 0.4 and 1.5 hours following administration, and steady-state concentrations are achieved 24 to 48 hours after beginning administration three times a day. It distributes rapidly and exhibits virtually no binding to plasma proteins, giving it a relatively high volume of distribution.
Benzamide and its derivatives are highly water-soluble, and because of their polarity are believed to cross the blood–brain barrier via carrier-mediated transport. Elimination of tiapride, mostly in its original form, occurs through renal excretion with a half-life of 3 to 4 hours. Recommended dosages of tiapride vary with clinical symptoms. In alcoholic patients, delirium or pre-delirium associated with alcohol withdrawal can be alleviated by administration of 400–1200 mg/day, or up to 1800 mg/day if necessary. Tremors and other dyskinesias can be treated with 300–800 mg/day. For reducing agitation and aggression in elderly patients, 200–300 mg/day is recommended. Availability: Tiapride is marketed under various trade names and is widely available outside of the United States. The most common trade name for tiapride is Tiapridal, which is used throughout Europe and Russia, as well as parts of South America, the Middle East, and North Africa. It is also sold under different names in Italy (Italprid, Sereprile), Japan (Tialaread, Tiaryl, Tiaprim, Tiaprizal), Chile (Sereprid), Germany (Tiaprid, Tiapridex), and China (Tiapride).
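The 3–4 hour elimination half-life quoted above translates directly into a first-order decay curve. The sketch below is a minimal worked example assuming simple one-compartment kinetics; the peak concentration is an arbitrary placeholder, not a measured tiapride level.

```python
import math

# Minimal sketch of first-order elimination using the 3-4 h half-life
# quoted above (one-compartment simplification; the peak concentration is
# an arbitrary illustrative value, not a measured tiapride level).
half_life_h = 3.5                      # midpoint of the 3-4 h range
k_el = math.log(2) / half_life_h       # first-order elimination rate constant

c_peak = 1.0                           # fraction of peak at t = 0
for t in range(0, 25, 4):
    c = c_peak * math.exp(-k_el * t)
    print(f"t = {t:2d} h: {c:.3f} of peak concentration remaining")
```

With a half-life this short, the fraction remaining after 24 hours is under 1 percent, consistent with the need for repeated daily dosing to reach the steady state described above.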
**BD+20 594b** BD+20 594b: BD+20 594b (also known as K2-56b) is a massive exoplanet discovered by the Kepler spacecraft in collaboration with the HARPS spectrometer at La Silla in Chile. Naming: BD+20 594b indicates that the planet circles a star found in the Bonner Durchmusterung catalogue, BD +20° 594, the 594th entry in the +20-degree zone (declinations from +19 to +20 degrees), and that it is the first planet discovered orbiting that star. K2-56b indicates that the planet circles a star catalogued in the Kepler 2 mission catalogue (part of the extended K2 Kepler mission), the 56th one in the catalogue, and that it is the first planet discovered orbiting that star. Planet: With a radius of 2.2 R🜨 and a mass of 16.31 M🜨, BD+20 594b is substantially smaller than Neptune. Taking the estimates of its radius and mass at face value, the composition of the planet would be rocky, making it classifiable as a mega-Earth. BD+20 594b's exact composition is still unknown. Planet: The planet was discovered on January 28, 2016 by astrophysicist Néstor Espinoza and his team from the Catholic University of Chile, using data from the two-wheeled Kepler mission (K2). It orbits a K-type star 496.08 light years away in the constellation Taurus. It is believed that planets with a radius greater than 1.6 times the Earth's are not usually rocky, making BD+20 594b an exception to this rule.
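The rocky classification can be sanity-checked from the quoted mass and radius alone, since bulk density scales as mass over radius cubed. The short calculation below uses only the figures given above plus Earth's mean density.

```python
# Quick check of the "rocky" claim from the mass and radius quoted above:
# bulk density scales as mass over radius cubed (Earth = 1).
EARTH_DENSITY_G_CM3 = 5.51   # mean density of Earth

mass_earths = 16.31          # mass in Earth masses, from the article
radius_earths = 2.2          # radius in Earth radii, from the article

relative_density = mass_earths / radius_earths**3
density_g_cm3 = relative_density * EARTH_DENSITY_G_CM3

print(f"{relative_density:.2f}x Earth's density = {density_g_cm3:.1f} g/cm^3")
# ~1.53x Earth, roughly 8.4 g/cm^3: denser than Earth, consistent with a
# predominantly rocky (mega-Earth) composition rather than a Neptune-like
# volatile envelope.
```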
**Structure and Interpretation of Computer Programs** Structure and Interpretation of Computer Programs: Structure and Interpretation of Computer Programs (SICP) is a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture. It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation. MIT Press published the first edition in 1984 and the second edition in 1996. It was formerly used as the textbook for MIT's introductory course in computer science. SICP focuses on discovering general patterns for solving specific problems, and building software systems that make use of those patterns. MIT Press published the JavaScript edition in 2022. Content: The book describes computer science concepts using Scheme, a dialect of Lisp. It also uses a virtual register machine and assembler to implement Lisp interpreters and compilers. Content: The topics of the book are: Chapter 1: Building Abstractions with Procedures (The Elements of Programming; Procedures and the Processes They Generate; Formulating Abstractions with Higher-Order Procedures). Chapter 2: Building Abstractions with Data (Introduction to Data Abstraction; Hierarchical Data and the Closure Property; Symbolic Data; Multiple Representations for Abstract Data; Systems with Generic Operations). Chapter 3: Modularity, Objects, and State (Assignment and Local State; The Environment Model of Evaluation; Modeling with Mutable Data; Concurrency: Time Is of the Essence; Streams). Chapter 4: Metalinguistic Abstraction (The Metacircular Evaluator; Variations on a Scheme – Lazy Evaluation; Variations on a Scheme – Nondeterministic Computing; Logic Programming). Chapter 5: Computing with Register Machines (Designing Register Machines; A Register-Machine Simulator; Storage Allocation and Garbage Collection; The Explicit-Control Evaluator; Compilation). Characters: Several fictional characters appear in the book: Alyssa P. Hacker, a Lisp hacker; Ben Bitdiddle; Cy D. Fect, a "reformed C programmer"; Eva Lu Ator; Lem E. Tweakit; and Louis Reasoner, a loose reasoner. License: The book is licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Coursework: The book was used as the textbook for MIT's former introductory programming course, 6.001, from fall 1984 through its last semester, in fall 2007. Other schools also made use of the book as a course textbook. Various versions of the JavaScript edition have been used by the National University of Singapore since 2012 in the course CS1101S. Reception: Byte recommended SICP in 1986 "for professional programmers who are really interested in their profession". The magazine said that the book was not easy to read, but that it would expose experienced programmers to both old and new topics. Influence: SICP has been influential in computer science education, and several later books have been inspired by its style. Influence: Structure and Interpretation of Classical Mechanics (SICM), another book that uses Scheme as an instructional element, by Gerald Jay Sussman and Jack Wisdom; Software Design for Flexibility, by Chris Hanson and Gerald Jay Sussman; How to Design Programs (HtDP), which intends to be a more accessible book for introductory Computer Science, and to address perceived incongruities in SICP; and Essentials of Programming Languages (EoPL), a book for Programming Languages courses.
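The Chapter 1 theme of higher-order procedures can be illustrated with a short sketch. SICP presents this material in Scheme; the loose Python rendering below is only meant to convey the idea of abstracting a shared pattern of computation, and is not taken from the book.

```python
# A loose Python rendering of SICP's Chapter 1 idea of higher-order
# procedures: one abstract `summation` captures the pattern shared by many
# concrete sums. (SICP itself develops this example in Scheme.)
def summation(term, a, nxt, b):
    """Sum term(x) for x = a, nxt(a), nxt(nxt(a)), ... while x <= b."""
    total = 0
    while a <= b:
        total += term(a)
        a = nxt(a)
    return total

def sum_of_cubes(a, b):
    return summation(lambda x: x**3, a, lambda x: x + 1, b)

def pi_approx(terms):
    # Series pi/8 = 1/(1*3) + 1/(5*7) + 1/(9*11) + ...
    return 8 * summation(lambda x: 1 / (x * (x + 2)), 1,
                         lambda x: x + 4, 4 * terms)

print(sum_of_cubes(1, 10))   # 3025
print(pi_approx(1000))       # approximately 3.141
```

The point of the abstraction is that `sum_of_cubes` and `pi_approx` differ only in the `term` and `nxt` procedures they pass in.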
**Feature-oriented domain analysis** Feature-oriented domain analysis: Feature-oriented domain analysis (FODA) is a domain analysis method which introduced feature modelling to domain engineering. FODA was developed in 1990 following several U.S. Government research projects. Its concepts have been regarded as critically advancing software engineering and software reuse. History: Feature-oriented domain analysis was first developed by the Software Engineering Institute in 1990. In the initial technical report, a study determined that feature-oriented domain analysis was not only beneficial, but a "necessary first step" for software reuse. The report introduced the concept of feature models to domain engineering in an effort to represent the standard features within the family of systems in the domain, as well as the relationships between those features. Since then, feature models have been characterized as "the greatest contribution of domain engineering to software engineering". Much of the work leading up to the development of FODA was sponsored by the U.S. Department of Defense through research programs related to software reuse during the late 1980s. FODA was developed as a comprehensive analysis and refinement of technology developed from 1983 to 1990. While some aspects of FODA have changed, and it has become integrated with model-driven engineering, FODA is still known as the method that initially introduced feature models to domain engineering. Purpose: The intent of feature-oriented domain analysis is to support functional and architectural reuse. The objective is to create a domain model which represents a family of systems and which can then be refined into the particular desired system within the domain. To do this, the scope of the domain must be analyzed (known as FODA context analysis) to identify not only the systems in the domain but also the external systems which interact with the domain. FODA feature analysis then analyzes the end-user's view of the configurable requirements and candidate systems within the domain. From the developed feature model, customers can select from configurable requirements to specify a final system. Through this process, feature-oriented domain analysis ensures that a business can meet customers' demands efficiently through reuse of technology.
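To make the idea of a feature model concrete, the sketch below encodes a toy feature tree with mandatory and optional features and checks a customer's configuration against it. FODA itself defines feature models as diagrams rather than code, and the "phone" domain and every feature name here are invented for illustration.

```python
# Toy illustration of a feature model: a tree of features marked mandatory
# or optional, plus a configuration check. The "phone" domain and all
# feature names are invented for this example; FODA defines feature models
# as diagrams, not code.
FEATURE_MODEL = {
    "phone":   {"mandatory": ["calls", "display"], "optional": ["camera"]},
    "display": {"mandatory": ["resolution"],       "optional": ["touch"]},
    "camera":  {"mandatory": ["sensor"],           "optional": ["flash"]},
}

def validate(selected, root="phone"):
    """Check that a selected feature set satisfies the model under `root`."""
    errors = []
    def visit(feature):
        node = FEATURE_MODEL.get(feature, {"mandatory": [], "optional": []})
        for child in node["mandatory"]:
            if child not in selected:
                errors.append(f"missing mandatory feature: {child}")
            visit(child)
        for child in node["optional"]:
            if child in selected:   # optional subtrees only apply if chosen
                visit(child)
    visit(root)
    return errors

# A customer picks a camera phone but forgets the camera's mandatory sensor:
config = {"phone", "calls", "display", "resolution", "camera"}
print(validate(config))  # ['missing mandatory feature: sensor']
```

This mirrors the selection step described above: the customer chooses among configurable (optional) requirements, while the model enforces what every member of the system family must include.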
**Equilibrium gel** Equilibrium gel: Equilibrium gel is made from a synthetic clay. Unlike other gels, it maintains the same consistency throughout its structure and is stable, meaning it does not separate into sections of solid mass and sections of more liquid mass. Equilibrium gel filtration liquid chromatography is a technique used for the quantitation of ligand binding. Synthesis: The gel is created by suspending synthetic clay in water. At concentrations of up to 1% clay by weight, the initial fluid transformed into gel after a few months, and after three years the substance separated into two phases, one clay-rich and the other clay-poor. However, at concentrations above 1%, no such phase separation occurred. Unlike at the lower concentrations, where the arrangement of clay particles was continually in flux, the particles above 1% concentration locked into a stable structure, which is known as equilibrium gel. Clay particles interact in an anisotropic way, differing from the typical isotropic way of colloidal particles, which normally interact with all of their nearest neighbors when forming a gel. The clay particles are disc-shaped, giving them an asymmetric charge distribution with a net positive charge on their edges and a net negative charge on their faces. This prevents them from interacting equally with all their neighbors, and they tend to form T-bonds instead. This lets clay particles connect in a chain and allows the gel to form at a low density. Properties: Equilibrium gel is similar to any gel in that it is a colloid in which the disperse phase has combined with the dispersion medium to produce a semisolid material. The difference with equilibrium gel is that it will not separate over time into two separate phases like all other gels. In a study taking place over seven years, scientists concluded that colloidal clays at slightly higher concentrations evolved reversibly and continuously from the empty liquid state to an arrested structure. From these observed properties the name equilibrium gel was derived. Equilibrium gel shares the traits of all soft matter. Soft matter is a conceptual term that can be used to categorize polymers, liquid crystals, colloids, amphiphilic molecules, glass, granular and biological materials. One of the main characteristics of equilibrium gel, as with soft matter, is that it displays various mesoscopic structures originating from the large number of internal degrees of freedom of each molecule. Applications: Scientists are already coming up with potential applications for equilibrium gel. One such application is batteries containing a gel electrolyte. Producing a relatively high power for a given weight, such a battery could be incorporated into microscopic devices if the gel could be made at a low enough density. Equilibrium gel could also be used in coatings to deliver drugs into the body. Using the gel for coatings instead of other substances would be beneficial because the gel would allow the coatings to be lighter, reducing the amount of material that enters the body. The coatings protect against the body's immune system and dissolve when the drug reaches its target.
**Ifosfamide** Ifosfamide: Ifosfamide (IFO), sold under the brand name Ifex among others, is a chemotherapy medication used to treat a number of types of cancer. This includes testicular cancer, soft tissue sarcoma, osteosarcoma, bladder cancer, small cell lung cancer, cervical cancer, and ovarian cancer. It is administered by injection into a vein. Common side effects include hair loss, vomiting, blood in the urine, infections, and kidney problems. Other severe side effects include bone marrow suppression and decreased level of consciousness. Use during pregnancy will likely result in harm to the baby. Ifosfamide is in the alkylating agent and nitrogen mustard family of medications. It works by disrupting the duplication of DNA and the creation of RNA. Ifosfamide was approved for medical use in the United States in 1987. It is on the World Health Organization's List of Essential Medicines. Medical uses: It is given as a treatment for a variety of cancers, including testicular cancer, breast cancer, lymphoma (Hodgkin and non-Hodgkin), soft tissue sarcoma, osteosarcoma or bone tumor, lung cancer, cervical cancer, and ovarian cancer. Administration It is a white powder which, when prepared for use in chemotherapy, becomes a clear, colorless fluid. The delivery is intravenous. Ifosfamide is often used in conjunction with mesna to avoid internal bleeding in the patient, in particular hemorrhagic cystitis. Ifosfamide is infused relatively quickly, in some cases over as little as an hour. Side effects: Hemorrhagic cystitis is rare when ifosfamide is given with mesna. A common and dose-limiting side effect is encephalopathy (brain dysfunction). It occurs in some form in up to 50% of people receiving the agent. The reaction is probably mediated by chloroacetaldehyde, one of the breakdown products of the ifosfamide molecule, which has chemical properties similar to acetaldehyde and chloral hydrate. The symptoms of ifosfamide encephalopathy can range from mild (difficulty concentrating, fatigue), to moderate (delirium, psychosis), to severe (nonconvulsive status epilepticus or coma). In children, this can interfere with neurological development. Apart from the brain, ifosfamide can also affect peripheral nerves. The severity of the reaction can be classified according to either the National Cancer Institute or the Meanwell criteria (grade I–IV). Previous brain problems and low levels of albumin in the blood increase the likelihood of ifosfamide encephalopathy. In most cases, the reaction resolves spontaneously within 72 hours. If it develops during an infusion of the drug, discontinuing the infusion is advised. The most effective treatment for severe (grade III–IV) encephalopathy is an intravenous solution of methylene blue, which appears to shorten the duration of encephalopathy; the exact mechanism of action of methylene blue is unclear. In some cases, methylene blue may be used as a prophylaxis before further doses of ifosfamide are administered. Other treatments include albumin and thiamine, and dialysis as a rescue modality. Ifosfamide may also cause a normal anion gap acidosis, specifically renal tubular acidosis type 2.
**Small for gestational age** Small for gestational age: Small for gestational age (SGA) newborns are those who are smaller in size than normal for the gestational age. SGA is most commonly defined as a weight below the 10th percentile for the gestational age. SGA predicts susceptibility to hypoglycemia, hypothermia, and polycythemia. By definition, at least 10% of all newborns will be labeled SGA. All SGA babies should be watched for signs of failure to thrive, hypoglycemia, and other health conditions. Causes: Being small for gestational age is broadly either being constitutionally small, caused by a genetic trait of the baby, or the result of intrauterine growth restriction, also called "pathological SGA". Diagnosis: The condition is defined by birth weight and/or length. Intrauterine growth restriction is generally diagnosed by measuring the mother's uterus, with the fundal height being less than it should be for that stage of the pregnancy. If it is suspected, the mother will usually be sent for an ultrasound to confirm. Management: Ninety percent of babies born SGA catch up in growth by the time they reach 2 years old. For the 10 percent that are SGA without catch-up growth by 2 years old, an endocrinologist should be consulted. Some cases warrant growth hormone therapy. Hypoglycemia is common in asymmetrical SGA babies because their relatively large brains burn calories faster than their usually limited fat stores can supply. Hypoglycemia is treated by frequent feedings and/or additions of cornstarch-based products (such as Duocal powder) to the feedings. There are some common conditions and disorders found in many that are SGA (and especially those that are SGA without catch-up growth by 2 years old). Management: Gastroenterologist - for gastrointestinal issues such as reflux and/or delayed gastric emptying. Dietitian - to address caloric deficits; dietitians are usually brought in for cases that include failure to thrive. According to the theory of thrifty phenotype, causes of growth restriction also trigger epigenetic responses in the fetus that are otherwise activated in times of chronic food shortage, and if the offspring develops in an environment rich in food, it may be more prone to metabolic disorders such as obesity and type II diabetes. Management: Speech-language pathologist or occupational therapist - occupational therapists may also treat sensory issues. Behaviorist - for feeding issues, a behavioral approach may also be used, but usually for older children (over 2). Allergist - to diagnose or rule out food allergies (not necessarily more common in those SGA than in the normal population). Ear, nose and throat doctor - to diagnose enlarged adenoids or tonsils (not necessarily more common in those SGA than in the normal population). For intrauterine growth restriction (during pregnancy), possible treatments include the early induction of labor, though this is only done if the condition has been diagnosed and seen as a risk to the health of the fetus. Terminology: If small for gestational age babies have been the subject of intrauterine growth restriction, formerly known as intrauterine growth retardation, the term "SGA associated with intrauterine growth restriction" is used. Terminology: Intrauterine growth restriction refers to a condition in which a fetus is unable to achieve its genetically determined potential size. This functional definition seeks to identify a population of fetuses at risk for modifiable but otherwise poor outcomes.
This definition intentionally excludes fetuses that are small for gestational age (SGA) but are not pathologically small. Infants born SGA with severe short stature (or severe SGA) are defined as having a length less than 2.5 standard deviation scores below the mean. A related term is low birth weight, defined as an infant with a birth weight (that is, mass at the time of birth) of less than 2500 g (5 lb 8 oz), regardless of gestational age at the time of birth. Terminology: Other related terms include "very low birth weight", which is less than 1500 g, and "extremely low birth weight", which is less than 1000 g. Normal weight at term delivery is 2500–4200 g. SGA is not a synonym of low birth weight, very low birth weight, or extremely low birth weight. For example, an infant delivered at 35 weeks' gestational age weighing 2250 g is appropriate for gestational age but is still low birth weight. One third of low-birth-weight neonates (infants weighing less than 2500 g) are small for gestational age. Terminology: There is an 8.1% incidence of low birth weight in developed countries, and 6–30% in developing countries. Much of this can be attributed to the health of the mother during pregnancy. One third of babies born with a low birth weight are also small for gestational age. Infants that are born at low birth weights are at risk of developing neonatal infection. Both low and high maternal serum Vitamin D (25-OH) are associated with a higher incidence of SGA in white women, although the correlation does not seem to hold for African American women.
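Since the SGA, severe-SGA, and low-birth-weight definitions above are all threshold tests, they can be expressed as a short calculation. The sketch below assumes an approximately normal reference distribution for the gestational age in question; the reference mean and standard deviation are hypothetical placeholders, not values from any published growth standard.

```python
# Sketch of the threshold tests defined above, assuming an approximately
# normal reference distribution for the gestational age. The reference
# mean/SD are hypothetical placeholders, not from a published growth chart.
REF_MEAN_G = 3400.0           # hypothetical mean birth weight at term, grams
REF_SD_G = 450.0              # hypothetical standard deviation, grams
Z_10TH_PERCENTILE = -1.2816   # z-score of the 10th percentile

def classify(birth_weight_g, length_z_score=None):
    z = (birth_weight_g - REF_MEAN_G) / REF_SD_G
    labels = []
    if z < Z_10TH_PERCENTILE:
        labels.append("SGA (below 10th percentile for gestational age)")
    if length_z_score is not None and length_z_score < -2.5:
        labels.append("severe SGA (length < -2.5 SDS)")
    if birth_weight_g < 2500:
        labels.append("low birth weight (< 2500 g)")
    return labels or ["appropriate for gestational age"]

print(classify(2800))                       # SGA by weight, but not LBW
print(classify(2250, length_z_score=-2.8))  # SGA, severe SGA, and LBW
```

The two example calls illustrate the point made above that SGA and low birth weight are distinct labels that only sometimes coincide.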
**Digital geologic mapping** Digital geologic mapping: Digital geologic mapping is the process by which geological features are observed, analyzed, and recorded in the field and displayed in real-time on a computer or personal digital assistant (PDA). The primary function of this emerging technology is to produce spatially referenced geologic maps that can be utilized and updated while conducting field work. Traditional geologic mapping: Geologic mapping is an interpretive process involving multiple types of information, from analytical data to personal observation, all synthesized and recorded by the geologist. Geologic observations have traditionally been recorded on paper, whether on standardized note cards, in a notebook, or on a map. Mapping in the digital era: In the 21st century, computer technology and software are becoming portable and powerful enough to take on some of the more mundane tasks a geologist must perform in the field, such as precisely locating oneself with a GPS unit, displaying multiple images (maps, satellite images, aerial photography, etc.), plotting strike and dip symbols, and color-coding different physical characteristics of a lithology or contact type (e.g., unconformity) between rock strata. Additionally, computers can now perform some tasks that were difficult to accomplish in the field, for example, handwriting or voice recognition and annotating photographs on the spot. Digital mapping has positive and negative effects on the mapping process; only an assessment of its impact on a geological mapping project as a whole shows whether it provides a net benefit. With the use of computers in the field, the recording of observations and basic data management changes dramatically. The use of digital mapping also affects when data analysis occurs in the mapping process, but does not greatly affect the process itself. Mapping in the digital era: Advantages Data entered by a geologist may have fewer errors than data transcribed by a data entry clerk. Data entry by geologists in the field may take less total time than subsequent data entry in the office, potentially reducing the overall time needed to complete a project. The spatial extent of real-world objects and their attributes can be entered directly into a database with geographic information system (GIS) capability. Features can be automatically color-coded and symbolized based on set criteria. Multiple maps and imagery (geophysical maps, satellite images, orthophotos, etc.) can easily be carried and displayed on-screen. Geologists may upload each other's data files for the next day's field work as reference. Data analysis may start immediately after returning from the field, since the database has already been populated. Mapping in the digital era: Data can be constrained by dictionaries and dropdown menus to ensure that data are recorded systematically and that mandatory data are not forgotten. Labour-saving tools and functionality can be provided in the field, e.g. structure contours on the fly and 3D visualisation. Systems can be wirelessly connected to other digital field equipment (such as digital cameras and sensor webs). Disadvantages Computers and related items (extra batteries, stylus, cameras, etc.) must be carried in the field. Mapping in the digital era: Field data entry into the computer may take longer than physically writing on paper, possibly resulting in longer field programs.
Data entered by multiple geologists may contain more inconsistencies than data entered by one person, making the database more difficult to query. Written descriptions convey to the reader detailed information through imagery that may not be communicated by the same data in parsed format. Geologists may be inclined to shorten text descriptions because they are difficult to enter (either by handwriting or voice recognition), resulting in loss of data. There are no original, hardcopy field maps or notes to archive. Paper is a more stable medium than digital format. Educational and scientific uses: Some universities and secondary educators are integrating digital geologic mapping into class work. For example, the GeoPad project [1] describes the combination of technology, teaching field geology, and geologic mapping in programs such as Bowling Green State University's geology field camp.[2] At the University of Urbino (Italy), field digital mapping techniques have been integrated into Earth and Environmental Sciences courses since 2006 [3][4]. Educational and scientific uses: The MapTeach program is designed to provide hands-on digital mapping for middle and high school students.[5] The SPLINT [6] project in the UK is using the BGS field mapping system as part of its teaching curriculum. Digital mapping technology can be applied to traditional geologic mapping, reconnaissance mapping, and surveying of geologic features. At international digital field data capture (DFDC) meetings, major geological surveys (e.g., the British Geological Survey and the Geological Survey of Canada) discuss how to harness and develop the technology.[7] Many other geological surveys and private companies are also designing systems to conduct scientific and applied geological mapping of, for example, geothermal springs and mine sites. Equipment: The initial cost of digital geologic computing and supporting equipment may be significant. In addition, equipment and software must be replaced occasionally due to damage, loss, and obsolescence. Products moving through the market are quickly discontinued as technology and consumer interests evolve. A product that works well for digital mapping may not be available for purchase the following year; however, testing multiple brands and generations of equipment and software is prohibitively expensive. Equipment: Common essential features Some features of digital mapping equipment are common to both survey or reconnaissance mapping and "traditional" comprehensive mapping. The capture of less data-intensive reconnaissance mapping or survey data in the field can be accomplished by less robust databases and GIS programs, and by hardware with a smaller screen size. Equipment: Devices and software are intuitive to learn and easy to use. Rugged, as typically defined by military standards (MIL-STD-810) and ingress protection ratings. Waterproof. Screen is easy to read in bright sunlight and on gray-sky days. Removable static memory cards can be used to back up data. Memory on board is recoverable. Real-time and post-processing differential correction for GPS locations. Portable battery with at least 9 hours of life at near-constant use. Can change batteries in the field. Batteries should have no "memory", such as with NiCd. Chargeable by unconventional power sources (generators, solar, etc.).
Wireless real-time link to GPS or built-in GPS. Wireless real-time link from computer to camera and other peripherals. USB port(s). Features essential to capture traditional geologic observations Hardware and software only recently (in 2000) became available that can satisfy most of the criteria necessary for digitally capturing "traditional" mapping data. Equipment: Screen about 5 in × 7 in (130 mm × 180 mm): compact, but large enough to see map features. In 2009, some traditional mapping is conducted on PDAs. Lightweight, ideally less than 3 lbs. Transcription to digital text from handwriting and voice recognition. Can store paragraphs of data (text fields). Can store a complex relational database with drop-down lists. Operating system and hardware are compatible with a robust GIS program. At least 512 MB memory. Equipment: Technology history. Software: Since every geologic mapping project covers an area with unique lithologies and complexities, and every geologist has a unique style of mapping, no software is perfect for digital geologic mapping out of the box. The geologist can choose either to modify their mapping style to the available software, or to modify the software to their mapping style, which may require extensive programming. As of 2009, available geologic mapping software requires some degree of customization for a given geologic mapping project. Some digital-mapping geologists/programmers have chosen to highly customize or extend ESRI's ArcGIS instead. At digital field data capture meetings, such as at the British Geological Survey in 2002 [17], some organisations agreed to share development experiences, and some software systems are now available to download for free.
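To make the GIS-backed data capture described in this article concrete, the sketch below records a single field observation as a GeoJSON feature, an interchange format most GIS software can ingest. The coordinates, station name, and attribute fields are all hypothetical, and a real system would constrain them with the controlled data dictionaries noted above.

```python
import json

# Sketch of one digitally captured field observation as a GeoJSON feature,
# an interchange format most GIS software can ingest. All values below
# (coordinates, station id, attributes) are hypothetical examples.
observation = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-112.1871, 36.0544],   # lon, lat from the GPS fix
    },
    "properties": {
        "station": "STN-042",            # field station identifier
        "lithology": "sandstone",        # ideally from a dropdown dictionary
        "contact_type": "unconformity",
        "strike_deg": 215,               # strike/dip of bedding
        "dip_deg": 12,
        "notes": "cross-bedded, weathered surface",
    },
}

# Written to disk, this file can be loaded directly by GIS programs and
# symbolized or color-coded on the lithology attribute.
with open("station_042.geojson", "w") as f:
    json.dump(observation, f, indent=2)
```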
**Löffler's syndrome** Löffler's syndrome: Löffler's syndrome is a disease in which eosinophils accumulate in the lung in response to a parasitic infection. The parasite can be Ascaris, Strongyloides stercoralis, or Dirofilaria immitis, which can enter the body through contact with the soil. The symptoms of Löffler's syndrome include those of a parasitic infection, such as abdominal pain and cramping, skin rashes, and fatigue. Löffler's syndrome itself will cause difficulty breathing, wheeze, and coughing, as well as a fever. Diagnosis: The diagnosis of Löffler's syndrome can be challenging, as the diagnostic criteria can be vague and consistent with a multitude of diseases or conditions. The disease's developmental trajectory is mostly unknown. Upon examination of symptoms, a doctor will likely request a chest x-ray, looking for migratory pulmonary infiltrates, and blood testing to confirm a diagnosis. Symptoms tend to be brief, but can range from mild to severe and include fever, vomiting, increased respirations or difficulty breathing, cough, wheeze, and rash. Symptoms typically follow an exposure to allergens or certain drugs, and last approximately two weeks. Eosinophilia is the main feature of the diagnostic criteria for Löffler's syndrome. Eosinophils are white blood cells that fight infection by destroying foreign substances in the body. The increase is determined through a blood test called a complete blood count, or CBC. A result of over 500 cells/mcL (cells per microliter of blood) is considered elevated; the normal range for eosinophils is less than 350 cells/mcL. Prevention: While this syndrome has not been reported to lead to death, the symptoms can last anywhere from 2 to 4 weeks after the parasite enters the body. Prevention of the syndrome is education-based, consisting of teaching individuals proper handwashing techniques as well as how to correctly dispose of feces. Epidemiology: This syndrome can be found anywhere, but it is notably prevalent in tropical areas and shows a higher prevalence in men than in women. It is especially common in the warm, damp parts of the world, where conditions are ideal for the parasites to grow and thrive, though it remains unclear why prevalence is reported to be higher in Indians. The syndrome is also more likely to be contracted by small children, since they spend an increased amount of time outside in the dirt. The epidemiological aspect of Löffler's syndrome is not well characterized, since there have been minimal statistics reported on the topic. History: In 1909, a man named H. French first described the condition. Then, in 1932, Wilhelm Löffler[1] drew attention to the disease in cases of eosinophilic pneumonia caused by the parasites Ascaris lumbricoides,[2] Strongyloides stercoralis and the hookworms Ancylostoma duodenale and Necator americanus. Finally, in 1943, the condition was called tropical eosinophilia by RJ Weingarten, and later officially named Löffler's syndrome. The most well-known case of Löffler's syndrome was in a young boy from Louisiana. He arrived at the hospital reporting a high fever after three days, as well as rapid breathing. "He was hospitalized and treated with supplemental oxygen, intravenous methylprednisolone, and nebulized albuterol." The boy's symptoms quickly subsided, and upon further investigation it was discovered that the boy worked caring for pigs.
A test was then performed on the pigs' fecal matter and surrounding soil; it contained the parasite that had caused the boy's ailment. Another incident again involved a young boy, who had been experiencing vomiting and a fever for a span of 3 months. When the doctors finally took an echocardiograph of the child, they discovered that the "patient's admission blood count showed leukocytosis with an abnormally elevated level of peripheral eosinophils." The child was then diagnosed with Löffler's endocarditis and immediately began immunosuppressive therapy to reduce the eosinophil count. History: Although Löffler only described eosinophilic pneumonia in the context of infection, many authors apply the term "Löffler's syndrome" to any form of acute-onset pulmonary eosinophilia, no matter what the underlying cause. If the cause is unknown, it is specified and called "simple pulmonary eosinophilia". Cardiac damage caused by the damaging effects of eosinophil granule proteins (e.g. major basic protein) is known as Loeffler endocarditis and can be caused by idiopathic eosinophilia or by eosinophilia in response to parasitic infection.
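The CBC thresholds quoted in the Diagnosis section amount to a simple classification rule. The sketch below encodes only those two stated cut-offs as an illustration; it is not a clinical tool, and the example counts are invented.

```python
# Illustration of the eosinophil count thresholds quoted above (cells per
# microliter of blood). Not a clinical tool; the example values are invented.
NORMAL_UPPER = 350    # normal range: below 350 cells/mcL
ELEVATED = 500        # over 500 cells/mcL is considered elevated

def interpret_eosinophils(cells_per_mcl):
    if cells_per_mcl > ELEVATED:
        return "elevated (consistent with eosinophilia)"
    if cells_per_mcl >= NORMAL_UPPER:
        return "above normal range, but not yet elevated"
    return "within normal range"

for count in (220, 410, 730):
    print(count, "->", interpret_eosinophils(count))
```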
**Peripheral nerve interface** Peripheral nerve interface: A peripheral nerve interface is the bridge between the peripheral nervous system and a computer interface, serving as a bi-directional information transducer that records and sends signals between the human body and a machine processor. Interfaces to the nervous system usually take the form of electrodes for stimulation and recording, though chemical stimulation and sensing are possible. Research in this area is focused on developing peripheral nerve interfaces for the restoration of function following disease or injury, to minimize associated losses. Peripheral nerve interfaces also enable electrical stimulation and recording of the peripheral nervous system to study its form and function. For example, recent animal studies have demonstrated high accuracy in tracking physiologically meaningful measures, such as joint angle. Many researchers also focus on the area of neuroprosthetics, linking the human nervous system to bionics in order to mimic natural sensorimotor control and function. Successful implantation of peripheral nerve interfaces depends on a number of factors, which include appropriate indication, perioperative testing, differentiated planning, and functional training. Typically, microelectrode devices are implanted adjacent to, around, or within the nerve trunk to establish contact with the peripheral nervous system. Different approaches may be used depending on the type of signal desired and attainable. Function: The primary purpose of a neural interface is to enable two-way exchange of information with the nervous system for a sustained period of time, to enable effective and high-density stimulation and recording. The peripheral nervous system (PNS) is responsible for relaying information from the brain and spinal cord to the extremities of the body and back. The function of a peripheral nerve interface is to assist the nervous system when peripheral nerve function is compromised. To supplement the roles of the nervous system, interfaces need to augment motor function as well as discern sensory information. The feasibility of peripheral nerve stimulation to achieve a desired motor output has been demonstrated and is one of the major driving forces for this area of research. Information throughout the nervous system is exchanged primarily through action potentials. These signals occur at varying numbers and intervals, dependent on both the neuroanatomical and neurochemical makeup of the individual and the localized region. Information may be either introduced or read out by inducing or recovering action potentials from the body. Successful development and implementation of a peripheral nerve interface would allow for both the introduction of information to the nervous system and the extraction of information from the nervous system. Problems and limitations: Problems and limitations in peripheral nerve interfacing are both biophysical and biological in nature.
These challenges include: fidelity of the interface in terms of functional resolution; relatively weak, noise-ridden electrical signals, which pose a challenging design constraint for the interface; injury to the nerve fibers of interest associated with implanting the interface; stability of the interface over time in the face of inflammation; and managing inadvertent consequences such as pain or false sensory/motor stimulation due to physical movement or inflammation-associated triggering of neural activity. Application: Peripheral nerve interfaces are used for pain modulation, restoration of motor function following spinal cord injury or stroke, treatment of epilepsy by electrical stimulation of the vagus nerve, nerve stimulation to control micturition, occipital nerve stimulation for chronic migraines, and to interface with neuroprosthetics. Types: A wide variety of electrode designs have been researched, tested, and manufactured. These electrodes lie on a spectrum varying in degree of invasiveness. Research in this area seeks to address issues centered around peripheral nerve/tissue damage, access to efferent and afferent signals, and selective recording/stimulation of nerve tissue. Ideally, peripheral nerve interfaces are designed to fit the biological constraints of peripheral nerve fibers; match the mechanical and electrical properties of the surrounding tissue; are biocompatible with minimal immune response; offer high sensor resolution; are minimally invasive; and are chronically stable with high signal-to-noise ratios. The strongest signals are recorded from nodes of Ranvier. Peripheral nerve interfaces may be divided into extraneural and intrafascicular categories. Types: Epineurial electrode interface Epineurial electrodes are fabricated as longitudinal strips holding two or more contact sites to interface with peripheral nerves. These electrodes are placed on the nerve and secured by suturing to the epineurium. The suturing process requires delicate surgery, and the electrode can be torn from the nerve if excessive motion creates tension. Since the electrode is sutured to the epineurium, it is unlikely to damage the nerve trunk. Types: Helicoidal electrode interface Helicoidal electrodes are placed circumjacent to the nerve and are made of flexible platinum ribbon in a helical design. This design allows the electrode to conform to the size and shape of the nerve in an attempt to minimize mechanical trauma. The structural design results in low selectivity. Helicoidal electrodes are currently used for FES stimulation of the vagus nerve to control intractable epilepsy and sleep apnea, and to treat depressive syndromes. Types: Book electrode interface The book electrode consists of a silicone rubber block with slots. Each slot contains three platinum foils which function as electrodes: two anodes and one cathode. The spinal roots of the nerve are placed into these slots, and the slots are then covered with a flap made of silicone and fixed with silicone glue. This electrode is mostly used to interrupt reflex circuits of the dorsal sacral roots and to control bladder function. Book electrodes are still considered very bulky.
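Because these interfaces ultimately read trains of action potentials out of weak, noisy voltage recordings, spike detection is a basic processing step. The sketch below shows a common amplitude-threshold approach on synthetic data; the signal shape, noise level, and threshold rule (a multiple of a robust noise estimate) are illustrative assumptions, not a method attributed to any particular device.

```python
import numpy as np

# Illustrative spike detection on a synthetic "nerve recording": action
# potentials are read out of a weak, noisy voltage trace by thresholding.
# Signal, noise level, and threshold rule are assumptions for this sketch.
rng = np.random.default_rng(0)
fs = 20_000                              # sampling rate, Hz
n = fs                                   # one second of data
trace = rng.normal(0.0, 5.0, n)          # ~5 uV background noise

spike_times = [0.12, 0.37, 0.52, 0.81]   # synthetic ground truth, seconds
for st in spike_times:
    i = int(st * fs)                     # negative-going ~40 uV spikes
    trace[i:i + 20] += -40.0 * np.exp(-np.arange(20) / 5.0)

# Robust noise estimate (median absolute deviation) and a 4.5x threshold.
noise_sigma = np.median(np.abs(trace)) / 0.6745
threshold = -4.5 * noise_sigma

# Falling-edge threshold crossings, merged with a 1 ms refractory period
# so a single spike is not counted twice.
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))
detected = []
for c in crossings:
    if not detected or c - detected[-1] > fs // 1000:
        detected.append(int(c))
print("detected spike times (s):", [round(c / fs, 3) for c in detected])
# Should recover the four synthetic spikes; at this threshold, rare
# noise-driven false detections are still possible.
```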
**Toaster Strudel** Toaster Strudel: Toaster Strudel is the brand name of a toaster pastry convenience food, prepared by heating the frozen pastries in a toaster and then spreading the included icing packet on top. The brand is historically notable for being stored frozen, made possible by innovations in 1980s food manufacturing processes. History: The Toaster Strudel is marketed under the Pillsbury brand, formerly of the Pillsbury Company. The product has found considerable success since being launched in 1985 as competition for Kellogg's Pop-Tarts brand of non-frozen toaster pastries. In 1994, the company launched the advertising slogan "Something better just popped up". As of August 2013, the company increased the foreign branding, launching a brand ambassador character named Hans Strudel and the new slogan "Get Zem Göing". In 2001, General Mills acquired the Toaster Strudel product line with its purchase of Pillsbury. In 2023, General Mills used the advertising slogan "Gooey. Flaky. Happy". Varieties: Toaster Strudels come in several flavors, with strawberry, blueberry, and apple being the most common varieties. They also come in flavors such as cinnamon roll, chocolate, and Boston cream pie. In 2020, the company released a limited-edition "Mean Girls" Toaster Strudel, which featured pink icing instead of the brand's traditional white icing. In popular culture: In the 2004 film Mean Girls, it was fictionally claimed that Gretchen Wieners' family fortune was due to her father's invention of the Toaster Strudel.
**Plumbylene** Plumbylene: Plumbylenes (or plumbylidenes) are divalent organolead(II) analogues of carbenes, with the general chemical formula R2Pb, where R denotes a substituent. Plumbylenes possess 6 electrons in their valence shell and thus have an incomplete octet, making them highly reactive species. The first plumbylene reported was the dialkylplumbylene [(Me3Si)2CH]2Pb, which was synthesized by Michael F. Lappert et al. in 1973. Plumbylenes may be further classified into carbon-substituted plumbylenes, plumbylenes stabilized by a group 15 or 16 element, and monohalogenated plumbylenes (RPbX). Synthesis: Plumbylenes can generally be synthesized via the transmetallation of PbX2 (where X denotes halogen) with an organolithium (RLi) or Grignard reagent (RMgX). The first reported plumbylene, [((CH3)3Si)2CH]2Pb, was synthesized by Michael F. Lappert et al. by transmetallation of PbCl2 with [((CH3)3Si)2CH]Li. The addition of equimolar RLi to PbX2 produces the monohalogenated plumbylene (RPbX); addition of 2 equivalents leads to the disubstituted plumbylene (R2Pb). Treating RPbX with an organolithium or Grignard reagent bearing a different organic substituent (i.e. R'Li/R'MgX) leads to heteroleptic plumbylenes (RR'Pb). Dialkyl-, diaryl-, diamido-, and dithioplumbylenes, as well as monohalogenated plumbylenes, have been successfully synthesized this way. Synthesis: Transmetallation with [((CH3)3Si)2N]2Pb as the Pb(II) precursor has also been used to synthesize diarylplumbylenes, disilylplumbylenes, and saturated N-heterocyclic plumbylenes. Alternatively, plumbylenes may be synthesized from the reductive dehalogenation of tetravalent organolead compounds (R2PbX2). Structure and bonding: The key aspects of bonding and reactivity in plumbylenes are dictated by the inert pair effect, whereby the combination of a widening s–p orbital energy gap down the group 14 elements and a strong relativistic contraction of the 6s orbital leads to a limited degree of sp hybridization, leaving the 6s orbital deep in energy and inert. Consequently, plumbylenes exclusively have a singlet spin state due to the large singlet–triplet energy gap, and tend to exist in an equilibrium between monomeric and dimeric forms in solution. This is in contrast to carbenes, which often have a triplet ground state and readily dimerize to form alkenes. Structure and bonding: In dimethyllead, (CH3)2Pb, the Pb–C bond length is 2.267 Å and the C–Pb–C bond angle is 93.02°; the singlet–triplet gap is 36.99 kcal mol−1. Structure and bonding: Diphenyllead, (C6H5)2Pb, was computed with GAMESS at the B3PW91 level of theory using the basis sets 6-311+G(2df,p) for C and H and def2-svp for Pb with the ECP60MDF pseudopotential, in an adapted procedure (which uses the cc-pVTZ basis set for Pb instead). The molecular orbitals (MOs, visualized using Chimera) and natural bond orbitals (NBOs, visualized using Multiwfn) are qualitatively identical to those in the literature. As expected, the HOMO is 6s-dominated, and the LUMO is 6p-dominated. The NBOs correspond to the 6s lone pair and the vacant 6p orbital, respectively. Structure and bonding: The Pb–C bond distance was found to be 2.303 Å and the C–Pb–C angle 105.7°. Notwithstanding the different levels of theory, the larger bond angle for (C6H5)2Pb compared to (CH3)2Pb can be rationalized by the greater repulsion between the sterically bulkier phenyl groups relative to methyl groups. Atoms in molecules (AIM) topology analysis revealed critical points in (C6H5)2Pb, consistent with the literature.
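As a quick consistency check on the computed geometries quoted above, the C···C separation implied by a reported Pb–C bond length and C–Pb–C angle follows from the law of cosines. The short Python sketch below is an illustration written for this article, not part of the cited computational study:

```python
import math

def c_to_c_distance(r_pb_c: float, angle_deg: float) -> float:
    """Distance between the two Pb-bound carbons, via the law of cosines."""
    theta = math.radians(angle_deg)
    return math.sqrt(2 * r_pb_c**2 * (1 - math.cos(theta)))

# Values quoted in the text (bond lengths in angstroms)
print(c_to_c_distance(2.267, 93.02))   # (CH3)2Pb  -> ~3.29 A
print(c_to_c_distance(2.303, 105.7))   # (C6H5)2Pb -> ~3.67 A
```

The wider C–Pb–C angle in the diphenyl compound translates into a noticeably longer C···C separation, consistent with the steric argument given above.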
Structure and bonding: Plumbylenes occur as reactive intermediates in the formation of tetravalent plumbanes (R4Pb). Although the inert pair effect suggests the divalent state should be thermodynamically more stable than the tetravalent state, in the absence of stabilizing substituents plumbylenes are sensitive to heat and light, and tend to undergo polymerization and disproportionation, forming elemental lead in the process. Plumbylenes can be stabilized as monomers by the use of sterically bulky ligands (kinetic stabilization) or heteroatom-containing substituents that can donate electron density into the vacant 6p orbital (thermodynamic stabilization). Structure and bonding: Dimerization Plumbylenes are able to undergo dimerization in two ways: either through the formation of a Pb=Pb double bond to form a formal diplumbene, or through bridging halide interactions. Unhalogenated plumbylenes tend to exist in an equilibrium between the monomeric and dimeric form in solution and, due to the low dimerization energy, as either monomers or dimers in the solid state, depending on the steric bulk of substituents. However, increasing the steric bulk of lead-bound substituents can prevent the close association of plumbylene molecules and allow the plumbylene to exist exclusively as monomers in solution or even in the solid state. Structure and bonding: The driving force for dimerization in general arises from the Lewis amphoteric nature of plumbylenes, which possess a Lewis acidic vacant 6p orbital and a weakly Lewis basic 6s lone pair, which can act as electron acceptor and donor orbitals, respectively. These diplumbenes possess a trans-bent structure similar to that in the lighter, non-carbon congeners (disilenes, digermenes, distannenes). The observed Pb–Pb bond lengths in diplumbenes (2.90–3.53 Å) have been found to typically be longer than those in tetravalent diplumbanes R3PbPbR3 (2.84–2.97 Å). This, together with the low computed dimerization energy (the energy released on formation of dimers from monomers) of 24 kJ mol−1 for Pb2H4, indicates weak multiple bonding. This counterintuitive result arises because the pair of 6s–6p donor–acceptor interactions representing the Pb=Pb double bond in diplumbenes is less energetically favourable than the overlap of spn orbitals (with a higher degree of hybridization than in diplumbenes) in the Pb–Pb single bond of diplumbanes. In monohalogenated plumbylenes, the halogen atom on one plumbylene is able to donate a lone pair into the vacant 6p orbital of the lead atom on a separate plumbylene in a bridging mode. Monohalogenated plumbylenes have been found to generally exist as monomers in solution and dimers in the solid state, but, again, sufficiently bulky substituents on lead can sterically block this dimerization mode. Due to decreasing dimerization energy down group 14, while monohalogenated stannylenes and plumbylenes dimerize via the halogen-bridging mode, monohalogenated silylenes and germylenes tend to dimerize via the abovementioned multiply-bonded mode instead. Structure and bonding: In a recent study, an N-heterocyclic plumbylene was shown to undergo dimerization leading to C–H activation, existing in solution in an equilibrium between the monomer and a dimer resulting from cleavage of an aryl C–H bond and formation of Pb–C and N–H bonds.
DFT studies proposed that the reaction occurs via electrophilic substitution at the arene of one plumbylene by the lead atom of another, and involves concerted Pb–C and N–H bond formation rather than insertion of Pb into the C–H bond. Structure and bonding: Stabilizing intramolecular interactions with substituents bearing lone pairs Plumbylenes may be stabilized by electron donation into the vacant orbital of the lead atom. The two common intramolecular modes are resonance from a lone pair on the atom directly attached to the lead, or coordination from a Lewis base elsewhere in the molecule. For example, group 15 or 16 elements directly adjacent to Pb donate a lone pair in a manner similar to their stabilizing effect on Fischer carbenes. Common examples of more remote electron donors include nitrogen atoms that form a six-membered ring by coordinating to the lead. Even a fluorine atom on a remote trifluoromethyl group has been observed coordinating to lead in [2,4,6-(CF3)3C6H2]2Pb. Structure and bonding: Agostic interactions Agostic interactions have also been shown to stabilize plumbylenes. DFT computations on the compounds [(R(CH3)2Si){(CH3)2P(BH3)}CH]2Pb (R = Me or Ph) found that agostic interactions between bonding B–H orbitals and the vacant 6p orbital lowered the energy of the molecule by ca. 38 kcal mol−1; this was supported by X-ray crystal structures showing the favourable positioning of said B–H bonds in proximity to Pb. Reactivity: As previously mentioned, unstabilized plumbylenes are prone to polymerization and disproportionation, and plumbylenes without bulky substituents tend to dimerize in one of two modes. Below, the reactions of plumbylenes that are stabilized (at least at the temperatures at which they were studied) are listed. Reactivity: Lewis acid-base adduct formation Plumbylenes are Lewis acidic via the vacant 6p orbital and tend to form adducts with Lewis bases, such as trimethylamine N-oxide (Me3NO), 1-azidoadamantane (AdN3), and mesityl azide (MesN3). In contrast, the reaction between stannylenes and Me3NO produces the corresponding distannoxane (from oxidation of Sn(II) to Sn(IV)) instead of the Lewis adduct, which can be attributed to tin lying one period above lead, experiencing the inert pair effect to a lesser degree and hence being more susceptible to oxidation. In the case of AdN3, the terminal N of the azidoadamantane binds to the plumbylene via a bridging mode between the Lewis acidic Pb and the Lewis basic P atom; in the case of MesN3, the azide evolves N2 to form a nitrene, which then inserts into a C–H bond of an arene substituent and coordinates to Pb as a Lewis base. Reactivity: Insertion Similar to carbenes and other group 14 congeners, plumbylenes have been shown to undergo insertion reactions, specifically into C–X (X = Br, I) and group 16 E–E (E = S, Se) bonds.
Reactivity: Insertions into lead-substituent bonds can also occur. In reported examples, insertion is accompanied by intramolecular rearrangement that places more electron-donating heteroatoms next to the electron-deficient lead. Transmetallation Plumbylenes are known to undergo nucleophilic substitution with organometallic reagents to form transmetallated products. In an unusual example, the use of TlPF6, bearing the weakly coordinating anion PF6−, led to the formation of crystals of an oligonuclear lead compound with a chain structure upon work-up, highlighting the interesting reactivity of plumbylenes. In addition, plumbylenes can also undergo metathesis with group 13 E(CH3)3 (E = Al, Ga) compounds. Reactivity: Plumbylenes bearing different substituents can also undergo transmetallation and exchange substituents, the driving force being the relief of steric strain and the low Pb–C bond dissociation energy. Applications: Plumbylenes can be used as concurrent σ-donor–σ-acceptor ligands in metal complexes, functioning as a σ-donor via the filled 6s orbital and a σ-acceptor via the empty 6p orbital. Room-temperature-stable plumbylenes have also been suggested as precursors in chemical vapour deposition (CVD) and atomic layer deposition (ALD) of lead-containing materials. Dithioplumbylenes and dialkoxyplumbylenes may be useful as precursors for preparing the semiconductor material lead sulphide and piezoelectric PZT, respectively.
**Earth observation satellites transmission frequencies** Earth observation satellites transmission frequencies: The Earth is constantly monitored by several satellites operating in the earth exploration-satellite service (EESS) or space research service (SRS). These artificial satellites carry onboard space radio stations with which they gather data. The data are transmitted back to Earth via feeder links. This article lists a number of currently active Earth observation satellites and their downlink transmission frequencies.
**Development (music)** Development (music): In music, development is a process by which a musical idea is communicated in the course of a composition. It refers to the transformation and restatement of initial material. Development is often contrasted with musical variation, which is a slightly different means to the same end. Development is carried out upon portions of material treated in many different presentations and combinations at a time, while variation depends upon one type of presentation at a time. In this process, certain central ideas are repeated in different contexts or in altered form so that the mind of the listener consciously or unconsciously compares the various incarnations of these ideas. Listeners may apprehend a "tension between expected and real results" (see irony), which is one "element of surprise" in music. This practice has its roots in counterpoint, where a theme or subject might create an impression of a pleasing or affective sort, but delight the mind further as its contrapuntal capabilities are gradually unveiled. Development (music): In sonata form, the middle section (between the exposition and the recapitulation) is called the development. Typically, in this section, material from the exposition section is developed. In some older texts, this section may be referred to as free fantasia. According to the Oxford Companion to Music, there are several ways of developing a theme. These include: Alteration of pitch intervals while retaining the original rhythm. Rhythmic displacement, so that the metrical stress occurs at a different point in the otherwise unchanged theme. Sequence, either diatonically within a key or through a succession of keys. The division of a theme into parts, each of which can be developed in any of the above ways or recombined in a new way. Similarly, two or more themes can be developed in combination; in some cases, themes are composed with this possibility in mind. Development (music): The Scherzo movement from Beethoven's Piano Sonata No. 15 in D major, Op. 28 (the "Pastoral" Sonata) shows a number of these processes at work on a small scale. Charles Rosen (2002) marvels at the simplicity of the musical material: "The opening theme consists of nothing but four F sharps in descending octaves, followed by a light and simple I/ii/V7/I cadence with a quirky motif repeated four times." These opening eight bars provide all the material Beethoven needs to furnish his development, which takes place in bars 33-48: The division of a theme into parts: The falling octave in the first two bars and the repeated staccato chord in the left hand in bars 5-8 are the two fragments that Beethoven later develops: Alteration of pitch intervals: The somewhat bald falling octave idea in the first four bars is transformed in bars 33-36 into an elegant shape ending with an upward-curving semitone: Rhythmic displacement: In this movement, the repeated left hand chords in bar 5 are displaced so that from bar 33 onwards they fall on the 2nd and 3rd beats: Sequence and the development of two or more themes in combination: In bars 33-48, the two fragments combine and the development goes through a modulating sequence that touches on a succession of keys. The following outline demonstrates Beethoven's strategic planning, which he applied on a larger scale in the development sections of some of his major works.
The bass line traces a decisive progression through a rising chromatic scale. To quote Rosen again, writing à propos of this movement: "As Beethoven's contemporary, the painter John Constable, said, making something out of nothing is the true work of the artist." Development on a larger scale: Not all development takes place in what is commonly known as the "development section" of a work. It can take place at any point in the musical argument. For instance, the "immensely energetic sonata movement" that forms the main body of the overture to Mozart's opera Don Giovanni announces the following theme during the initial exposition. It consists of two contrasting phrases: "first determined, then soft and conspiratorial." William Mann says "the first, insistent phrase [of the above] is very important. At once it is taken up imitatively by various departments of the orchestra, and [starting in] A major, jumps through several related keys." Each repetition of the descending phrase is subtly altered one note at a time, causing the music to pass from the key of A major, through A minor and thence via a chord of G7 to the remote key of C major, and thence back to A major. Development on a larger scale: The central section of the Overture (the part commonly known as the "development section") utilizes both phrases of the theme "in new juxtapositions and new tonalities," developing it through repetition in a modulating sequence. The steady plod of the bass line against the sequential repetitions of the "soft and conspiratorial" phrase outlines a circle of fifths chord progression. Simultaneously, Mozart adds to the mix and continues to develop the imitative counterpoint that grew out of the first phrase. In the words of William Mann, this development "unites both halves" of the theme. This is how this tightly woven texture pans out.
**Trabecula** Trabecula: A trabecula (PL: trabeculae, from Latin for 'small beam') is a small, often microscopic, tissue element in the form of a small beam, strut or rod that supports or anchors a framework of parts within a body or organ. A trabecula generally has a mechanical function, and is usually composed of dense collagenous tissue (such as the trabecula of the spleen). It can be composed of other material such as muscle and bone. In the heart, muscles form trabeculae carneae and septomarginal trabeculae. Cancellous bone is formed from groupings of trabeculated bone tissue. Trabecula: In cross section, trabeculae of a cancellous bone can look like septa, but in three dimensions they are topologically distinct, with trabeculae being roughly rod or pillar-shaped and septa being sheet-like. When crossing fluid-filled spaces, trabeculae may offer the function of resisting tension (as in the penis, see for example trabeculae of corpora cavernosa and trabeculae of corpus spongiosum) or providing a cell filter (as in the trabecular meshwork of the eye). Bone trabecula: Structure Trabecular bone, also called cancellous bone, is porous bone composed of trabeculated bone tissue. It can be found at the ends of long bones like the femur, where the bone is actually not solid but is full of holes connected by thin rods and plates of bone tissue. The holes (the volume not directly occupied by bone trabeculae) form the intertrabecular space, which is occupied by red bone marrow, where all the blood cells are made, as well as by fibrous tissue. Even though trabecular bone contains a lot of intertrabecular space, its spatial complexity provides maximal strength with minimal mass. The form and structure of trabecular bone are organized to optimally resist loads imposed by functional activities, such as jumping, running and squatting. According to Wolff's law, proposed in 1892, the external shape and internal architecture of bone are determined by the external stresses acting on it. The internal structure of the trabecular bone first undergoes adaptive changes along the stress direction, and then the external shape of the cortical bone undergoes secondary changes. Finally, the bone structure becomes thicker and denser to resist external loading. Bone trabecula: Because of the increased occurrence of total joint replacement and its impact on bone remodeling, understanding the stress-related and adaptive processes of trabecular bone has become a central concern for bone physiologists. To understand the role of trabecular bone in age-related bone structure and in the design of bone-implant systems, it is important to study the mechanical properties of trabecular bone as a function of variables such as anatomic site, bone density, and age. Mechanical factors including modulus, uniaxial strength, and fatigue properties must be taken into account. Bone trabecula: Typically, the porosity of trabecular bone is in the range of 75–95% and the density ranges from 0.2 to 0.8 g/cm3. Porosity reduces the strength of the bone, but it also reduces its weight. Both the amount of porosity and the manner in which it is structured affect the strength of the material. Thus, the microstructure of trabecular bone is typically oriented, with the 'grain' of the porosity aligned in the direction in which mechanical stiffness and strength are greatest. Because of this microstructural directionality, the mechanical properties of trabecular bone are highly anisotropic.
Young's modulus for trabecular bone ranges from 800 to 14,000 MPa, and the failure strength from 1 to 100 MPa. Bone trabecula: As mentioned above, the mechanical properties of trabecular bone are very sensitive to apparent density. The relationship between the modulus of trabecular bone and its apparent density was demonstrated by Carter and Hayes in 1976. The resulting equation is E = a + b·ρ^c, where E represents the modulus of trabecular bone in any loading direction, ρ represents the apparent density, and a, b, and c are constants depending on the architecture of the tissue. Bone trabecula: Using scanning electron microscopy, it was found that the variation in trabecular architecture between different anatomic sites leads to different moduli. To understand structure-anisotropy and material property relations, one must correlate the measured mechanical properties of anisotropic trabecular specimens with stereological descriptions of their architecture. The compressive strength of trabecular bone is also very important, because internal failure of trabecular bone is believed to arise from compressive stress. The stress-strain curves for both trabecular bone and cortical bone at different apparent densities show three stages. The first is the linear region, where individual trabeculae bend and compress as the bulk tissue is compressed. The second stage occurs after yielding, where individual trabeculae start to fracture, and the final stage is the stiffening stage. Typically, lower-density trabecular specimens undergo more deformation before stiffening than higher-density specimens. In summary, trabecular bone is very compliant and heterogeneous. The heterogeneous character makes it difficult to summarize the general mechanical properties of trabecular bone. High porosity makes trabecular bone compliant, and large variations in architecture lead to high heterogeneity. The modulus and strength vary inversely with porosity and are highly dependent on the porosity structure. The effects of aging and of small cracks in trabecular bone on its mechanical properties are a subject of further study.
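The Carter–Hayes power law above is easy to evaluate numerically. The sketch below uses placeholder constants (a = 0, b = 3790 MPa, c = 3; illustrative values only, not the fitted constants from the 1976 study) over the density range quoted above:

```python
def trabecular_modulus(rho: float, a: float = 0.0, b: float = 3790.0, c: float = 3.0) -> float:
    """Carter-Hayes power law E = a + b * rho**c.

    rho: apparent density in g/cm^3; returns modulus in MPa.
    a, b, c are architecture-dependent constants; the defaults here are
    illustrative placeholders (a cubic power law is often quoted), not
    the fitted values from Carter and Hayes (1976).
    """
    return a + b * rho**c

# Densities spanning the 0.2-0.8 g/cm^3 range quoted in the text
for rho in (0.2, 0.5, 0.8):
    print(f"rho = {rho:.1f} g/cm^3 -> E ~ {trabecular_modulus(rho):.0f} MPa")
```

The cubic exponent makes the strong density sensitivity discussed above explicit: a fourfold increase in apparent density raises the predicted modulus by roughly a factor of 64.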
Trabecular degradation over time causes a decrease in bone strength that is disproportionately large in comparison to the volume of trabecular bone lost, leaving the remaining bone vulnerable to fracture. With osteoporosis there are often also symptoms of osteoarthritis, which occurs when cartilage in joints is under excessive stress and degrades over time, causing stiffness, pain, and loss of movement. With osteoarthritis, the underlying bone plays a significant role in cartilage degradation. Thus, any trabecular degradation can significantly affect stress distribution and adversely affect the cartilage in question. Due to its strong effect on overall bone strength, there is currently strong speculation that analysis of patterns of trabecular degradation may be useful in the near future in tracking the progression of osteoporosis. Bone trabecula: Birds The hollow design of bird bones is multifunctional. It establishes high specific strength and supplements open airways to accommodate the skeletal pneumaticity common to many birds. The allometry of the trabeculae allows the skeleton to tolerate loads without significantly increasing the bone mass. The red-tailed hawk optimizes its weight with a repeating pattern of V-shaped struts that give the bones the necessary lightweight and stiff characteristics. The inner network of trabeculae shifts mass away from the neutral axis, which ultimately increases the resistance to buckling. As in humans, the distribution of trabeculae in bird species is uneven and is dependent on load conditions. The bird with the highest density of trabeculae is the kiwi, a flightless bird. There is also uneven distribution of trabeculae within similar species such as the great spotted woodpecker or grey-headed woodpecker. After examining microCT scans of the woodpecker's forehead, temporomandibulum, and occiput, it was determined that there are significantly more trabeculae in the forehead and occiput. Besides the difference in distribution, the aspect ratio of the individual struts was higher in woodpeckers than in other birds of similar size such as the Eurasian hoopoe or the lark. The woodpeckers' trabeculae are more plate-like, while the hawk and lark have rod-like structures networked through their bones. The decrease in strain on the woodpecker's brain has been attributed to the higher quantity of thicker plate-like struts packed more closely together than in the hawk, the hoopoe, or the lark. Conversely, the thinner rod-like structures would lead to greater deformation. A destructive mechanical test with 12 samples showed that the woodpecker's trabecular design has an average ultimate strength of 6.38 MPa, compared to the lark's 0.55 MPa. Woodpeckers' beaks have tiny struts supporting the shell of the beak, but to a lesser extent compared to the skull. As a result of fewer trabeculae in the beak, the beak has a higher stiffness (1.0 GPa) compared to the skull (0.31 GPa). While the beak absorbs some of the impact from pecking, most of the impact is transferred to the skull, where more trabeculae are actively available to absorb the shock. The ultimate strengths of woodpeckers' and larks' beaks are similar, suggesting that the beak has a lesser role in impact absorption.
One measured advantage of the woodpecker's beak is the slight overbite (the upper beak is 1.6 mm longer than the lower beak), which offers a bimodal distribution of force due to the asymmetric surface contact. The staggered timing of impact induces a lower strain on the trabeculae in the forehead, occiput, and beak. Bone trabecula: Trabeculae in other organisms The larger the animal, the higher the load forces on its bones. Trabecular bone increases stiffness by increasing the amount of bone per unit volume or by altering the geometry and arrangement of individual trabeculae as body size and bone loading increase. Trabecular bone scales allometrically, reorganizing the bones' internal structure to increase the ability of the skeleton to sustain the loads experienced by the trabeculae. Furthermore, scaling of trabecular geometry can moderate trabecular strain. Load acts as a stimulus to the trabeculae, changing their geometry so as to sustain or mitigate strain loads. By using finite element modelling, a study tested four different species under an equal apparent stress (σapp) to show that trabecular scaling in animals alters the strain within the trabeculae. It was observed that the strain within trabeculae from each species varied with the geometry of the trabeculae. At a scale of tens of micrometers, which is approximately the size of osteocytes, thicker trabeculae exhibited less strain. The relative frequency distributions of element strain experienced by each species show higher elastic moduli of the trabeculae as species size increases. Bone trabecula: Additionally, trabeculae in larger animals are thicker, further apart, and less densely connected than those in smaller animals. Intra-trabecular osteons can commonly be found in the thick trabeculae of larger animals, as well as in the thinner trabeculae of smaller animals such as cheetahs and lemurs. The osteons play a role in the diffusion of nutrients and waste products of osteocytes by regulating the distance between osteocytes and the bone surface to approximately 230 μm. Bone trabecula: Due to an increased reduction of blood oxygen saturation, animals with high metabolic demands tend to have a lower trabecular thickness (Tb.Th) because they require increased vascular perfusion of trabeculae. The vascularization by tunneling osteons changes the trabecular geometry from solid to tube-like, increasing bending stiffness for individual trabeculae and sustaining blood supply to deep-tissue osteocytes. Bone trabecula: Bone volume fraction (BV/TV) was found to be relatively constant for the variety of animal sizes tested. Larger animals did not show a significantly larger mass per unit volume of trabecular bone. This may be due to an adaptation which reduces the physiological cost of producing, maintaining, and moving tissue. However, BV/TV showed significant positive scaling in avian femoral condyles. Larger birds present decreased flight habits due to avian BV/TV allometry. The flightless kiwi, weighing only 1–2 kg, had the greatest BV/TV of the birds tested in the study. This shows that trabecular bone geometry is related to 'prevailing mechanical conditions', so the differences in trabecular geometry in the femoral head and condyle could be attributed to the different loading environments of the coxofemoral and femorotibial joints. Bone trabecula: The woodpecker's ability to resist repetitive head impact is correlated with its unique micro/nano-hierarchical composite structures.
The microstructure and nanostructure of the woodpecker's skull consist of an uneven distribution of spongy bone and the organizational shape of individual trabeculae. This affects the woodpecker's mechanical properties, allowing the cranial bone to withstand a high ultimate strength (σu). Compared to the cranial bone of the lark, the woodpecker's cranial bone is denser and less spongy, having a more plate-like structure rather than the more rod-like structure observed in larks. Furthermore, the woodpecker's cranial bone is thicker and contains more individual trabeculae. Relative to the trabeculae in the lark, the woodpecker's trabeculae are more closely spaced and more plate-like. These properties result in a higher ultimate strength in the cranial bone of the woodpecker. History: Trabecula is the diminutive form of Latin trabs, meaning a beam or bar. In the 19th century, the neologism trabeculum (with an assumed plural of trabecula) became popular, but is less etymologically correct. Trabeculum persists in some countries as a synonym for the trabecular meshwork of the eye, but this can be considered poor usage on the grounds of both etymology and descriptive accuracy. Other uses: For the skull development component, see trabecular cartilage.
**3dmiX** 3dmiX: 3dmiX is a computer program for BeOS that displays each track of an audio mix as an object on a virtual 3D sound stage and allows the user to modify its panning and volume by dragging the object around. The program was previously named 3dsound and also Benoit's Mix after its creator, Benoit Schillings, now CTO at Google X. The program is often cited as an example of a cool application for BeOS.
**MatrixSSL** MatrixSSL: MatrixSSL is an open-source TLS/SSL implementation designed for custom applications in embedded hardware environments. The MatrixSSL library contains a full cryptographic software module that includes industry-standard public key and symmetric key algorithms. It is now called the Inside Secure TLS Toolkit. Features: Protocol versions: SSL 3.0, TLS 1.0, TLS 1.1, TLS 1.2, TLS 1.3, DTLS 1.0, DTLS 1.2. Public key algorithms: RSA, elliptic curve cryptography, Diffie–Hellman. Symmetric key algorithms: AES, AES-GCM, Triple DES, ChaCha, ARC4, SEED. Supported cipher suites: TLS_AES_128_GCM_SHA256 (TLS 1.3), TLS_AES_256_GCM_SHA384 (TLS 1.3), TLS_CHACHA20_POLY1305_SHA256 (TLS 1.3), TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_SEED_CBC_SHA, TLS_DHE_PSK_WITH_AES_128_CBC_SHA, TLS_DHE_PSK_WITH_AES_256_CBC_SHA, TLS_PSK_WITH_AES_128_CBC_SHA, TLS_PSK_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_GCM_SHA384, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_RC4_128_SHA, SSL_RSA_WITH_RC4_128_MD5, TLS_DH_anon_WITH_AES_128_CBC_SHA, TLS_DH_anon_WITH_AES_256_CBC_SHA, SSL_DH_anon_WITH_3DES_EDE_CBC_SHA, SSL_DH_anon_WITH_RC4_128_MD5. Other features: client authentication, secure renegotiation, standard session resumption, stateless session resumption, transport independence, PKCS#1 and PKCS#8 key parsing, False Start, Max Fragment Length extension, and an optional PKCS#11 crypto interface.
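The classic (pre-TLS 1.3) suite names above follow the standard TLS naming convention: key exchange and authentication, then the bulk cipher, then the hash. The small Python helper below, written purely for illustration and not part of the MatrixSSL API, shows how such a name decomposes:

```python
def parse_suite(name: str) -> dict:
    """Split a classic TLS cipher suite name into its components.

    Handles pre-TLS 1.3 names of the form PROTO_KEX_WITH_CIPHER_HASH,
    e.g. TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256. (TLS 1.3 names such as
    TLS_AES_128_GCM_SHA256 omit the key-exchange part entirely.)
    Illustrative only; real implementations match registered suite IDs.
    """
    proto, rest = name.split("_", 1)
    kex_auth, cipher_and_hash = rest.split("_WITH_")
    cipher, hash_alg = cipher_and_hash.rsplit("_", 1)
    return {
        "protocol": proto,              # TLS or SSL
        "key_exchange/auth": kex_auth,  # e.g. ECDHE_RSA
        "bulk_cipher": cipher,          # e.g. AES_128_GCM
        "hash": hash_alg,               # e.g. SHA256
    }

print(parse_suite("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"))
# {'protocol': 'TLS', 'key_exchange/auth': 'ECDHE_RSA',
#  'bulk_cipher': 'AES_128_GCM', 'hash': 'SHA256'}
```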
**Algebraic graph theory** Algebraic graph theory: Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. Branches of algebraic graph theory: Using linear algebra The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. In particular, it studies the spectrum of the adjacency matrix, or the Laplacian matrix, of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum. Aspects of graph spectra have been used in analysing the synchronizability of networks. Branches of algebraic graph theory: Using group theory The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Some of these categories of graphs are sparse enough that lists of graphs can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph). Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group. Branches of algebraic graph theory: This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum. In particular, the spectrum of a highly symmetrical graph, such as the Petersen graph, has few distinct values (the Petersen graph has 3, which is the minimum possible, given its diameter). For Cayley graphs, the spectrum can be related directly to the structure of the group, in particular to its irreducible characters. Branches of algebraic graph theory: Studying graph invariants Finally, the third branch of algebraic graph theory concerns algebraic properties of invariants of graphs, and especially the chromatic polynomial, the Tutte polynomial and knot invariants. The chromatic polynomial of a graph, for example, counts the number of its proper vertex colorings. For the Petersen graph, this polynomial is k(k − 1)(k − 2)(k^7 − 12k^6 + 67k^5 − 230k^4 + 529k^3 − 814k^2 + 775k − 352). In particular, this means that the Petersen graph cannot be properly colored with one or two colors, but can be colored in 120 different ways with 3 colors. Much work in this area of algebraic graph theory was motivated by attempts to prove the four color theorem. However, there are still many open problems, such as characterizing graphs which have the same chromatic polynomial, and determining which polynomials are chromatic.
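Both claims about the Petersen graph are easy to check numerically. The sketch below, assuming the standard networkx and numpy packages, computes the adjacency spectrum and evaluates the chromatic polynomial at small k:

```python
import networkx as nx
import numpy as np

G = nx.petersen_graph()
A = nx.to_numpy_array(G)               # adjacency matrix
eigs = np.sort(np.linalg.eigvalsh(A))
print(np.round(eigs, 6))               # -2 (x4), 1 (x5), 3: three distinct values

def petersen_chromatic(k: int) -> int:
    """Chromatic polynomial of the Petersen graph, evaluated at k."""
    return (k * (k - 1) * (k - 2)
            * (k**7 - 12*k**6 + 67*k**5 - 230*k**4
               + 529*k**3 - 814*k**2 + 775*k - 352))

print(petersen_chromatic(1), petersen_chromatic(2), petersen_chromatic(3))
# 0 0 120 -- no proper coloring with 1 or 2 colors, 120 colorings with 3
```

The leading factors k(k − 1)(k − 2) make the vanishing at k = 1 and k = 2 immediate, matching the statement above.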
**Radiodetermination-satellite service** Radiodetermination-satellite service: Radiodetermination-satellite service is – according to Article 1.41 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A radiocommunication service for the purpose of radiodetermination involving the use of one or more space stations. This service may also include feeder links necessary for its own operation.» Classification: This radiocommunication service is classified in accordance with the ITU Radio Regulations (article 1) as follows: Radiodetermination service (article 1.40), Radiodetermination-satellite service (article 1.41), Radionavigation service (article 1.42), Radiolocation service (article 1.48). Several satellites carry space radio stations dedicated to the radiodetermination-satellite service. Frequency allocation: The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. An allocation might be primary, secondary, exclusive, or shared. Frequency allocation: A primary allocation is indicated by writing in capital letters; a secondary allocation is indicated by small letters; exclusive or shared utilization is within the responsibility of administrations.
**Test compression** Test compression: Test compression is a technique used to reduce the time and cost of testing integrated circuits. The first ICs were tested with test vectors created by hand. It proved very difficult to get good coverage of potential faults, so design for testability (DFT) based on scan and automatic test pattern generation (ATPG) was developed to explicitly test each gate and path in a design. These techniques were very successful at creating high-quality vectors for manufacturing test, with excellent test coverage. However, as chips got bigger and more complex, the ratio of logic to be tested per pin increased dramatically, and the volume of scan test data started causing a significant increase in test time and required tester memory. This raised the cost of testing. Test compression: Test compression was developed to help address this problem. When an ATPG tool generates a test for a fault, or a set of faults, only a small percentage of scan cells need to take specific values. The rest of the scan chain consists of don't-care bits, which are usually filled with random values. Loading and unloading these vectors is not a very efficient use of tester time. Test compression takes advantage of the small number of significant values to reduce test data and test time. In general, the idea is to modify the design to increase the number of internal scan chains, each of shorter length. These chains are then driven by an on-chip decompressor, usually designed to allow continuous-flow decompression, where the internal scan chains are loaded as the data is delivered to the decompressor. Many different decompression methods can be used. One common choice is a linear finite state machine, where the compressed stimuli are computed by solving the linear equations corresponding to internal scan cells with specified positions in partially specified test patterns. Experimental results show that for industrial circuits with test vectors and responses with very low fill rates, ranging from 3% to 0.2%, test compression based on this method often results in compression ratios of 30 to 500 times. With a large number of test chains, not all the outputs can be sent to the output pins. Therefore, a test response compactor is also required, which must be inserted between the internal scan chain outputs and the tester scan channel outputs. The compactor must be synchronized with the data decompressor, and must be capable of handling unknown (X) states. (Even if the input is fully specified by the decompressor, these can result from false and multi-cycle paths, for example.) Another design criterion for the test response compactor is that it should give good diagnostic capabilities, not just a yes/no answer.
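As a toy illustration of the linear-decompressor idea described above, the sketch below models the decompressor as a binary matrix over GF(2) and solves for a compressed seed that reproduces only the specified (care) scan cells. Everything here, including the random matrix and the sizes, is invented for demonstration and does not correspond to any real EDA tool or hardware:

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination; return x or None."""
    A = A.copy() % 2
    b = b.copy() % 2
    rows, cols = A.shape
    x = np.zeros(cols, dtype=np.uint8)
    pivots = []
    r = 0
    for c in range(cols):
        candidates = [i for i in range(r, rows) if A[i, c]]
        if not candidates:
            continue
        p = candidates[0]
        A[[r, p]] = A[[p, r]]          # swap pivot row into place
        b[[r, p]] = b[[p, r]]
        for i in range(rows):          # eliminate column c everywhere else
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    for i in range(r, rows):           # zero rows with b = 1 -> inconsistent
        if b[i]:
            return None
    for i, c in enumerate(pivots):     # free variables stay 0
        x[c] = b[i]
    return x

rng = np.random.default_rng(0)
n_cells, n_seed = 32, 12               # 32 scan cells driven from a 12-bit seed
D = rng.integers(0, 2, (n_cells, n_seed), dtype=np.uint8)  # decompressor map

care_cells = [3, 7, 21, 30]            # ATPG specified only these positions
care_vals = np.array([1, 0, 1, 1], dtype=np.uint8)

# Solve only the equations for the care bits; don't-cares fall where they may.
seed = solve_gf2(D[care_cells, :], care_vals)
assert seed is not None                # a random map is solvable w.h.p.
pattern = D @ seed % 2                 # expanded scan load
assert all(pattern[c] == v for c, v in zip(care_cells, care_vals))
print("seed:", seed, "-> pattern:", pattern)
```

The compression win is visible in the sizes: a 12-bit seed drives 32 scan cells, and only the 4 care bits constrain the system, mirroring the low fill rates quoted above.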
**Tripwire force** Tripwire force: A tripwire force (sometimes called a glass plate) is a strategic approach in deterrence theory. The tripwire force is a military force smaller than that of a potential adversary, which is designed to signal the defending side's commitment to an armed response to future aggression without triggering a security spiral. Concept: A tripwire force is a military force significantly smaller than the forces of a potential adversary. The tripwire force helps deter aggression by demonstrating the defending side's commitment to militarily counter an armed attack, even if the tripwire force cannot mount a sustained resistance itself. In the event an attack occurs, it helps defend against the aggressor by slowing the advance of the aggressor's forces to allow the defender time to marshal additional resources. The tripwire force can, in some instances, also be useful in deterring salami attacks. Because the tripwire force is too small, by itself, to present an offensive threat, it can be deployed without triggering the security dilemma. The term "glass plate" has been used as a synonym for tripwire force; an attack against the force metaphorically shatters the "glass" between peace and war. The credibility of a tripwire force is tied to the "force having relevant combat capabilities and being of sufficient size that an adversary could neither sidestep nor capture the force", as well as to the potential of the defender to actually mobilize reserves robust enough to launch a counter-attack in a timely manner. Examples: Examples in practice United States Army Berlin, a U.S. Army formation posted to West Berlin during the Cold War, has been referred to as a tripwire force. Because a limited Soviet incursion into West Berlin which resulted in no American casualties might cause the sitting United States President to hesitate in mounting a counter-offensive, the Soviet Union – it was felt by western military planners – would have a strategic incentive to take such an action. By stationing American forces in West Berlin, U.S. casualties would be guaranteed during any future Soviet attack. In this way the United States would deny itself the political ability to abandon the conflict, which would, in turn, guarantee a U.S. response up to – and including – the battlefield deployment of nuclear weapons. Realizing this, the Soviet Union would not take offensive action against West Berlin even though it might be militarily capable of doing so. Examples: NATO's stance in the larger European theatre was also seen largely as a tripwire, whose primary purpose was to trigger the release of nuclear attacks on the Warsaw Pact. The British 1957 Defence White Paper was based on a detailed look at the British Army of the Rhine's part as a tripwire and concluded it was larger than it needed to be to serve this function. If the force's primary purpose was simply to delay an advance until it became overwhelming and thus indicated a "real war", presumably being destroyed in the process, then a smaller force would work just as well.
Accordingly, the BAOR was reduced from 77,000 to 64,000 over the next year. The deployment, in the mid-1970s, of a Soviet brigade to Cuba was at the time perceived by some to represent the introduction of a tripwire force onto the island – a method of deterring aggression against Cuba from "potential attackers who would not want to engage" the full Soviet Army. British military forces in the Falkland Islands (Islas Malvinas) prior to the Falklands War were intended to serve as a tripwire force, though they were ultimately an ineffective one, as they were so small and lightly armed that they did not represent a credible signal to Argentina of UK military commitment to the islands. The Argentine invasion force had been given orders to overcome resistance without inflicting British casualties, and during the initial invasion successfully managed to bypass or capture all British units without resorting to deadly force. United States Forces Korea have also been referred to as a tripwire force due to the perception that they are too small to repel an attack by the Korean People's Army on their own. Rather, they serve to convey "the certainty of American involvement should the North Koreans be tempted to invade". Since 2014, several members of NATO have deployed forces to the Baltic states as a stated tripwire against possible Russian actions. Examples: Examples in theory Paul K. Davis and John Arquilla have argued that the United States should have placed a tripwire force in Kuwait prior to the Iraqi invasion of Kuwait as a method of signaling to Iraq the commitment of the U.S. to an armed response. In this way, they state, the Gulf War might have been avoided. In 2014, Saudi Arabia reportedly requested the deployment of Pakistani military units to Yemen to act as a tripwire force in the event of an attack against the kingdom by Iran via Yemen. In 2015, Michael E. O'Hanlon theorized that a United States tripwire force could continue to be deployed in a hypothetically reunified Korea to meet American security guarantees to the region while avoiding provocation of China. According to O'Hanlon, a small enough U.S. military deployment in Korea, posted at a sufficient distance from the Chinese border, would not present an offensive threat to the PRC but would ensure the likelihood of American casualties in the event of a land invasion of the Korean Peninsula, thereby guaranteeing future American military commitment to any realized conflict.
**Lithium superoxide** Lithium superoxide: Lithium superoxide is an unstable inorganic salt with formula LiO2. A radical compound, it can be produced at low temperature in matrix isolation experiments, or in certain nonpolar, non-protic solvents. Lithium superoxide is also a transient species during the reduction of oxygen in a lithium–air galvanic cell, and serves as a main constraint on possible solvents for such a battery. For this reason, it has been investigated thoroughly using a variety of methods, both theoretical and spectroscopic. Structure: Describing LiO2 as a molecule is something of a misnomer: the bonds between lithium and oxygen are highly ionic, with almost complete electron transfer. The force constant between the two oxygen atoms matches the constants measured for the superoxide anion (O2−) in other contexts. The bond length for the O–O bond was determined to be 1.34 Å. Using a simple crystal structure optimization, the Li–O bond was calculated to be approximately 2.10 Å. There have been quite a few studies regarding the clusters formed by LiO2 molecules. The most common dimer has been found to be the cage isomer. Second to it is the singlet bipyramidal structure. Studies have also been done on the chair complex and the planar ring, but these two are less favorable, though not necessarily impossible. Production and reactions: Lithium superoxide is extremely reactive because of the odd number of electrons present in the π* molecular orbital of the superoxide anion. Matrix isolation techniques can produce pure samples of the compound, but they are only stable at 15-40 K. At higher (but still cryogenic) temperatures, lithium superoxide can be produced by ozonating lithium peroxide (Li2O2) in Freon 12; the resulting product is only stable up to −35 °C. Alternatively, lithium electride dissolved in anhydrous ammonia will reduce oxygen gas to yield the same product. Lithium superoxide is, however, only metastable in ammonia, gradually oxidizing the solvent to water and nitrogen gas; unlike other known decompositions of LiO2, this reaction bypasses lithium peroxide. Occurrence: Like other superoxides, lithium superoxide is the product of a one-electron reduction of an oxygen molecule. It thus appears whenever oxygen is mixed with single-electron redox catalysts, such as p-benzoquinone. Occurrence: In batteries Lithium superoxide also appears at the cathode of a lithium-air galvanic cell during discharge, as in the following reaction: Li+ + e− + O2 → LiO2. This product typically then reacts further to form lithium peroxide, Li2O2: 2LiO2 → Li2O2 + O2. The mechanism for this last reaction has not been confirmed, and developing a complete theory of the oxygen reduction process remains a theoretical challenge as of 2022. Indeed, recent work suggests that LiO2 can be stabilized via a suitable cathode made of graphene with iridium nanoparticles. A significant challenge when investigating these batteries is finding an ideal solvent in which to perform these reactions; current candidates are ether- and amide-based, but these compounds readily react with the superoxide and decompose. Nevertheless, lithium-air cells remain the focus of intense research because of their large energy density, comparable to that of the internal combustion engine. Occurrence: In the atmosphere Lithium superoxide can also form for extended periods of time in low-density, high-energy environments, such as the upper atmosphere. The mesosphere contains a persistent layer of alkali metal cations ablated from meteors.
For sodium and potassium, many of the ions bond to form particles of the corresponding superoxide. It is currently unclear whether lithium should react analogously.
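The production routes described above were presumably accompanied by displayed equations in the original article. A plausible balanced reconstruction, with the stoichiometry assumed for illustration rather than taken from the cited work, is:

```latex
% Assumed stoichiometries -- illustrative reconstruction, not from the source
\begin{align}
  \mathrm{Li_2O_2} + 2\,\mathrm{O_3} &\longrightarrow 2\,\mathrm{LiO_2} + 2\,\mathrm{O_2}
    && \text{(ozonation in Freon 12)} \\
  \mathrm{Li^+} + e^- + \mathrm{O_2} &\longrightarrow \mathrm{LiO_2}
    && \text{(electride reduction in ammonia)}
\end{align}
```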
**BU-48** BU-48: BU-48 is a drug that is used in scientific research. It is from the oripavine family, related to better-known drugs such as etorphine and buprenorphine. The parent compound from which BU-48 was derived (with N-methyl rather than cyclopropylmethyl on the nitrogen, and lacking the aliphatic hydroxyl group) is a powerful μ-opioid agonist 1000 times more potent than morphine; in contrast, BU-48 has only weak analgesic effects and instead acts primarily as a δ-opioid agonist. Its main effect is to produce convulsions, but it may also have antidepressant effects.
**Calcium modulating ligand** Calcium modulating ligand: Calcium modulating ligand (CAMLG or CAML), also known as calcium-modulating cyclophilin ligand, is a signalling protein recognized by the TNF receptor TACI. Function: The immunosuppressant drug cyclosporin A blocks a calcium-dependent signal from the T-cell receptor (TCR) that normally leads to T-cell activation. When bound to cyclophilin B, cyclosporin A binds and inactivates the key signaling intermediate calcineurin. The protein encoded by this gene functions similarly to cyclosporin A, binding to cyclophilin B and acting downstream of the TCR and upstream of calcineurin by causing an influx of calcium. This integral membrane protein appears to be a new participant in the calcium signal transduction pathway, implicating cyclophilin B in calcium signaling, even in the absence of cyclosporin. Interactions: CAMLG has been shown to interact with TNFRSF13B.
**Senotherapy** Senotherapy: Senotherapy is an early-stage basic research field for the development of possible therapeutic agents and strategies to specifically target cellular senescence, an altered cell state associated with ageing and age-related diseases. The name derives from the intent of the proposed anti-aging drugs to halt "senescence". As of 2019, much of the research remains preliminary and there are no drugs approved for this purpose. Types: Senotherapeutics include: Geroprotectors – agents/strategies which prevent or reverse the senescent state by preventing triggers of cellular senescence, such as DNA damage, oxidative stress, proteotoxic stress, and telomere shortening (e.g., telomerase activators). SASP inhibitors – agents interfering with pro-inflammatory senescence-associated secretory phenotype (SASP) production, including: glucocorticoids, as potent suppressors of selected components of the SASP; statins such as simvastatin, which can reduce the expression of pro-inflammatory cytokines (IL-6, IL-8, and MCP-1); JAK1/2 inhibitors such as ruxolitinib; NF-κB and p38 inhibitors; IL-1α blockers; and mitochondrial depleters in the case of impaired mitophagy. Senolytics – agents that specifically induce cell death in senescent cells by targeting survival pathways and anti-apoptotic mechanisms; these include small molecules, antibodies, and antibody-mediated drug delivery medications. Unlike SASP inhibitors, senolytics can be effective by intermittent rather than continuous application. Senomorphics – small molecules that suppress senescent phenotypes without cell killing. Gene therapy strategies – edit the genes of the cells of an organism in order to increase their resistance to aging and senile diseases and to prolong the life of the organism.
**Diethylene glycol diglycidyl ether** Diethylene glycol diglycidyl ether: Diethylene glycol diglycidyl ether (DEGDGE) is an organic chemical in the glycidyl ether family with the formula C10H18O5. The oxirane functionality makes it useful as a reactive diluent for epoxy resin viscosity reduction. Manufacture: The product is manufactured by adding diethylene glycol and a Lewis acid catalyst into a reactor and streaming in epichlorohydrin slowly to control the exothermic reaction. This forms the halohydrin, which is dehydrochlorinated with sodium hydroxide, giving the diglycidyl ether. The waste products are sodium chloride, water and excess sodium hydroxide (alkaline brine). One of the quality control tests involves measuring the epoxy value by determination of the epoxy equivalent weight. Uses: A key use is as a modifier for epoxy resins, serving as a reactive diluent and flexibilizer. The molecule has two oxirane functionalities and thus does not act as a chain terminator, but it modifies and reduces the viscosity of epoxy resins. These reactive-diluent-modified epoxy resins may then be further formulated into CASE applications: coatings (including antimicrobial versions), adhesives, sealants, and elastomers. The use of the diluent does affect the mechanical properties and microstructure of epoxy resins. The species has also been used to synthesize other chemical compounds. The toxicology has been studied.
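The epoxy equivalent weight (EEW) mentioned above is the mass of material containing one equivalent of epoxide. For pure DEGDGE, which carries two oxirane groups per molecule, a back-of-the-envelope ideal value (ignoring the oligomeric byproducts typical of the epichlorohydrin route, so real measured values would be somewhat higher) is:

```latex
% Ideal EEW for pure DEGDGE (C10H18O5, two epoxide groups per molecule)
\[
  \mathrm{EEW} = \frac{M}{n_{\text{epoxide}}}
              = \frac{218.25\ \mathrm{g\,mol^{-1}}}{2}
              \approx 109\ \mathrm{g\,eq^{-1}}
\]
```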
**Rap metal** Rap metal: Rap metal is a fusion genre which combines hip hop with heavy metal. It usually consists of heavy metal guitar riffs, funk metal elements, rapped vocals and sometimes turntables. History: Origins and early development (1980s–early 1990s) Rap metal's roots are based both in hip hop acts who sampled heavy metal music, such as Beastie Boys, MC Strecker, Cypress Hill, Esham and Run-DMC, and rock bands who fused heavy metal and hip hop influences, such as 24-7 Spyz and Faith No More. Scott Ian of Anthrax (who helped pioneer the genre) believes Rage Against the Machine invented the genre. In 1987, the heavy metal band Anthrax fused hip hop with heavy metal for their extended play I'm the Man. The next year rapper Sir Mix-a-Lot teamed up with Metal Church for his 1988 single "Iron Man", from his debut album Swass, loosely based upon the Black Sabbath song of the same name. Rap metal can also be heard on "Test", a track from the industrial metal band Ministry's 1989 album The Mind Is a Terrible Thing to Taste, for which they hired rappers The Grand Wizard (K. Lite) and The Slogan God (Tommie Boyskee) to perform vocals. In 1990, the rapper Ice-T formed a heavy metal band called Body Count, and while performing at the 1991 Lollapalooza tour performed a set that was half rap songs and half metal songs. Stuck Mojo and Clawfinger, both formed in 1989, are considered to be two more pioneers of the genre. In 1991, Anthrax teamed up with Public Enemy for a remake of the latter's "Bring the Noise" that fused hip hop with thrash metal. Also in 1991, the thrash metal band Tourniquet featured the hip hop group P.I.D. on the song "Spineless" from their album Psycho Surgery. History: Rise in popularity (1990s–early 2000s) In the 1990s, rap metal became a popular style of music. For instance, the band Faith No More's song "Epic" was a major success and peaked at number 9 on the Billboard Hot 100. 1993 saw the release of the Judgment Night soundtrack, which featured numerous collaborations between rappers and rock and metal bands. Rage Against the Machine's 1996 album Evil Empire entered the Billboard 200 at number one, and in 1999, their third studio album, The Battle of Los Angeles, also debuted at the top spot on the Billboard 200, selling 430,000 copies in its first week. Each of the band's albums went at least platinum. Biohazard played on the Ozzfest mainstage alongside Ozzy Osbourne, Slayer, Danzig, Fear Factory, and Sepultura. In support of the album, Biohazard embarked on a short co-headlining tour of Europe with Suicidal Tendencies. History: On August 18, 1998, Atlantic released rap metal musician Kid Rock's Devil Without a Cause behind the single "Welcome 2 the Party (Ode 2 the Old School)" and Kid Rock went on the Vans Warped Tour to support the album. Sales of "Welcome 2 The Party" and Devil Without a Cause were slow, though the 1998 Warped Tour in Northampton, Massachusetts stimulated regional interest in Massachusetts and New England. This led to substantial airplay of the single "I Am The Bullgod" during the summer and fall of 1998 on Massachusetts rock staples WZLX and WAAF. In early December 1998, while DJing at a club, he met and became friends with MTV host Carson Daly. He talked Daly into getting him a performance on MTV and on December 28, 1998, he performed on MTV Fashionably Loud in Miami, Florida, creating a buzz from his performance, even upstaging Jay-Z.
His sales began taking off with the third single "Bawitdaba", and by April 1999 Devil Without a Cause had achieved a gold disc. The following month, Devil, as he predicted, went platinum. Kid Rock's first major tour was Limptropolis, where he opened for Limp Bizkit with Staind. He solidified his superstardom with a Woodstock 1999 performance and on July 24 of that year, he was double platinum. The following single "Cowboy", a mix of southern rock, country, and rap, was an even bigger hit, making the Top 40. It even became the theme song of WCW's Jeff Jarrett. Rock's next single, the slow back porch blues ballad "Only God Knows Why", was the biggest hit off the album, charting at No. 19 on the Billboard Hot 100. It was one of the first songs to use the autotune effect. By the time the final single, "Wasting Time", was released, the album had sold 7 million copies. Devil Without a Cause was certified 11 times platinum by the RIAA on April 17, 2003. According to Nielsen SoundScan, as of 2013, actual sales are 9.3 million. Kid Rock was nominated as Best New Artist at the 2000 Grammy Awards, but lost to Christina Aguilera. "Bawitdaba" was nominated for Best Hard Rock Performance, but lost to Metallica's "Whiskey in the Jar". In 1998, Ice Cube released his long-awaited album War & Peace Vol. 1 (The War Disc), which featured elements of nu metal and rap metal on some tracks. The album debuted at No. 7 on the Billboard 200 chart, selling 180,000 copies in the first week. History: Rap metal reached the height of its popularity during 1999, with the Port Huron Times-Herald describing the summer of that year as a "bipolar menu of harsh rap-metal and gooey teen pop." Around this time, the style started to attract criticism in the mainstream, particularly after the troubled Woodstock 1999 festival, which featured many artists associated with rap metal and nu/alternative metal, such as Kid Rock, Limp Bizkit, Rage Against the Machine and Reveille. Pop punk musician Jeff Brogowski told The Morning Call newspaper in 1999 that "these macho rap-metal bands are just so mean-spirited. Look what happened at Woodstock (last summer). All the violence, looting and the fires. Something strange is going on. Maybe it has something to do with all the economic prosperity. It's getting ugly like it was during the '80s, when so many people and bands were so cocky." The nu/rap metal band Limp Bizkit's 1999 album Significant Other climbed to No. 1 on the Billboard 200, selling 643,874 copies in its first week of release. In its second week of release, the album sold an additional 335,000 copies. The band's follow-up album, Chocolate Starfish and the Hot Dog Flavored Water, set a record for highest week-one sales of a rock album, with over one million copies sold in the U.S. in its first week of release, 400,000 of those sales coming on its first day, making it the fastest-selling rock album ever and breaking the record held for seven years by Pearl Jam's Vs. That same year, Papa Roach's major label debut Infest became a platinum hit. Cypress Hill incorporated direct heavy metal influences into their 2000 album Skull & Bones, which featured six tracks in which rappers B-Real and Sen Dog were backed by a band including Fear Factory members Christian Olde Wolbers and Dino Cazares and Rage Against the Machine drummer Brad Wilk. B-Real also formed a rap metal group, Kush, with Wolbers, Fear Factory drummer Raymond Herrera and Deftones guitarist Stephen Carpenter.
According to B-Real, Kush is more aggressive than other bands in the genre. SX-10, formed in 1996 by Sen Dog, also performs rap rock and rap metal. In 2000, the rap metal band P.O.D.'s 1999 album The Fundamental Elements of Southtown went platinum and was the 143rd best-selling album of 2000. Late in 2000, Linkin Park released their debut album Hybrid Theory, which remains both the best-selling debut album by any artist in the 21st century and the best-selling nu metal album of all time. The album was also the best-selling album across all genres in 2001, outpacing sales by prominent pop acts like Backstreet Boys and N'Sync, and earned the band a Grammy Award for their second single "Crawling"; the fourth single, "In the End", released late in 2001, became one of the most recognized songs of the first decade of the 21st century. The rap rock band Crazy Town also broke into the mainstream during nu metal's commercial peak with their 1999 album The Gift of Game, especially their hit single "Butterfly", which peaked at number 1 on many charts, including the Billboard Hot 100 during March 2001, remaining on the Hot 100 for 23 weeks. It also peaked at number 1 on the Modern Rock Tracks chart and the Hot Dance Singles chart, as well as peaking at number 6 on the Rhythmic Top 40, number 2 on the Top 40 Mainstream chart and number 4 on the Top 40 Tracks chart. The Gift of Game peaked at number 9 on the Billboard 200. Worldwide the album sold more than 2.5 million units, with more than 1.5 million in the US alone. Saliva's Every Six Seconds was also a commercial success that year, debuting at No. 6 on the Billboard 200. In 2001, the band P.O.D.'s Satellite album went triple platinum and peaked at No. 6 on the Billboard 200 chart. History: Decline (2010s) Proyecto Eskhata, a Spanish band which debuted in 2012, has received much press coverage in Spain for its fusion of progressive rock and rap metal, which journalists have described as "progressive rap metal". Influence on other genres: Nu metal Nu metal (also known as nü-metal and aggro-metal) is a genre that combines elements of heavy metal music with elements of other music genres such as hip hop, alternative metal, funk, industrial and groove metal. Nu metal bands have drawn elements and influences from a variety of musical styles, including rap metal and other heavy metal subgenres. Influence on other genres: Trap metal Trap metal (also known as ragecore, death rap, hardcore trap, industrial trap and scream trap or screamo trap) is a fusion genre that combines elements of trap music and heavy metal, as well as elements of other genres, like industrial and nu metal. It is characterized by distorted beats, hip hop flows, harsh vocals, and down-tuned heavy metal guitars. Bones has been considered by Kerrang! to be one of the earliest practitioners of the genre, performing trap metal tracks beginning around 2014. British rapper Scarlxrd is often associated with the genre and is considered a pioneer of trap metal. WQHT described OG Maco's 2014 eponymous EP as being a part of the genre's early development.
Other artists associated with trap metal include Dropout Kings, Bone Crew, Ghostemane, ZillaKami, Fever 333, Ho99o9, City Morgue, Kid Bookie, Kim Dracula, Backxwash, Banshee, Denzel Curry, and $uicideboy$, as well as the early careers of XXXTentacion, 6ix9ine and Ski Mask the Slump God.The stylistic influences of trap metal vary widely, with some artists such as City Morgue and Ho99o9 drawing influence from hardcore punk, while other artists such as Ghostemane have pioneered their own sounds with influences from genres including black metal, gothic rock, industrial metal, and emo.
**Soluble NSF attachment protein** Soluble NSF attachment protein: Soluble N-ethylmaleimide-Sensitive Factor Attachment Proteins (SNAP, or Sec17p in yeast) are a family of cytosolic adaptor proteins involved in vesicular fusion at membranes during intracellular transport and exocytosis. SNAPs interact with proteins of the SNARE complex and NSF to play a key role in recycling the components of the fusion complex. SNAPs are involved in the priming of the vesicle fusion complex during assembly, as well as in the disassembly following a vesicle fusion event. Following membrane fusion, the tethering SNARE protein complex disassembles in response to steric changes originating from the ATPase NSF. The energy provided by NSF is transferred throughout the SNARE complex and SNAP, allowing the proteins to untangle and be recycled for future fusion events. Mammals have three SNAP genes: α-SNAP, β-SNAP, and γ-SNAP. α- and γ-SNAP are expressed throughout the body, while β-SNAP is specific to the brain. The yeast homolog of the human SNAP is Sec17, the structural diagram of which is included on this page. Function: The function of SNAP proteins has been primarily related to the role they play in the assembly and disassembly of the SNARE complexes required for vesicle fusion events. According to the SNARE hypothesis developed in the early 1990s, SNAP proteins are localized to the membranes and are central in mediating Ca2+-dependent vesicle fusion at these sites. SNAPs associate with the proteins of the SNARE (SNAP REceptor) complex, a class of type II integral membrane proteins, as well as the ATPase NSF, largely based on electrostatic interactions. The interaction of the SNAPs with SNAREs takes place before interaction of the complex with NSF (Sec18 in yeast), suggesting that a sequence for the priming assembly may be necessary. The assembled complex which includes SNAP, SNARE, and NSF is known as the 20S complex. Some of the first proteins identified as the receptors of SNAPs were syntaxin 1, SNAP-25 (synaptosome associated protein, 25 kDa), and VAMP (synaptobrevin). These proteins contain transmembrane regions and can be found both in intracellular vesicles and as part of extracellular trafficking machinery. Figure 1 shows interactions of the vesicular and membrane SNARE proteins with NSF and SNAP in the assembly, fusion, and disassembly process that accompanies vesicle fusion events. Function: Initial binding of NSF to SNAP is likely related to interactions of the 63 N-terminal and 37 C-terminal amino acid residues of SNAP with the NSF protein. The interaction with SNAP stimulates the ATPase activity of NSF when assembled into the 20S complex, and ultimately leads to ATP hydrolysis that results in the disruption of the heterooligomeric complex. This has the potential to reduce or block synaptic transmission, ultimately leading to the loss of signaling downstream. Further information on this is included in the toxicology section below. Function: While assembly of the complex can take place only under conditions where all components and a membrane are present, disassembly requires that NSF can hydrolyze ATP. Use of chelating agents, non-hydrolysable analogues of ATP, or application of the alkylating agent N-ethylmaleimide (NEM), therefore, has been used to demonstrate prevention of vesicle fusion in vitro. Blocking the assembly of the 20S complex also prevents the ATP-hydrolysis reaction from taking place at NSF.
Function: Limitations of the Original SNARE Theory of Vesicle Fusion The SNARE theory of vesicle fusion describes the action mechanism of SNAREs, SNAP, and NSF, but does not completely explain all known vesicle-fusion-related kinetics. The theory was first put forth by James Rothman and co-workers starting in the early 1990s and predicted that SNAPs and NSF recognized paired vesicle-SNARE (v-SNARE)/target-SNARE (t-SNARE) complexes at membranes and bound to them, thus creating the 20S complex. These complexes form similar structures in both synaptic and vacuolar systems, including Golgi transport. Function: Data generated experimentally in recent years has led some to question the completeness of the model. Although it had been known since the 1960s that Ca2+ influx was responsible for synaptic signaling, a collaboration in 1992 between Thomas Südhof and Reinhardt Jahn established the link between calcium, SNARE complexes and synaptic signaling, suggesting that vesicle fusion events were not rate-limited by SNARE complex formation as previously thought. At the time, the SNARE complex model could not account for the rapid release of neurotransmitters into synaptic clefts, as complex disassociation and recycling was thought to be rate-limiting for further vesicle fusion. Function: Further studies demonstrated that the ATP hydrolysis step occurs prior to a calcium-ion-mediated fusion event, revealing that SNAP and NSF proteins initiate disassembly of the 20S complex before the docking event takes place directly at the membrane. The existence of these ATP-primed vesicles for fusion at the pre-synaptic membrane is facilitated by the interactions of SNAP and NSF. Function: It is now understood that the 20S complex does not disassociate immediately following ATP hydrolysis, but rather remains tethered until intracellular Ca2+ reaches sufficiently high levels to facilitate docking. A depolarizing current that leads to the opening of voltage-dependent ion channels permits the influx of Ca2+ into the cell, where the molecular clamp protein synaptotagmin acts in a Ca2+-sensitive manner to facilitate fusion of the vesicle to the membrane at a rate of up to one vesicle per 100 μs. The Ca2+-regulated exocytosis of neurotransmitters therefore has faster kinetics than would be possible under the SNARE-recycling model alone. Figure 1 summarizes the updated model of the SNARE hypothesis. Function: Significance in Toxicology The 20S complex is a known target for Clostridium neurotoxins, including botulinum toxin serotypes A, C, and E, which block synaptic transmission by disrupting the complex and preventing neurotransmitter release into the synaptic space. Disruption of synaptic transmission is also caused by serotype B toxins cleaving VAMP-2/synaptobrevin-2, but not type 1 SNARE proteins. Botulinum toxins do not directly interact with SNAP, but they indirectly impact its ability to assemble into the 20S complex, leading to impaired synaptic transmission at the neuromuscular junction. The blocking of acetylcholine release onto the endplate leads to muscle paralysis and, if left untreated, death. Poisoning by botulinum toxin generally occurs through ingestion of material contaminated with the toxin-producing bacteria or absorbance of the toxin through the skin. Function: SNARE complexes containing SNAP are also targets for tetanus toxin, which likewise inhibits vesicle fusion and neurotransmitter release, following retrograde transport of the toxin into the CNS.
Prevention of 20S SNARE complex assembly due to cleavage of constituent proteins prevents SNAPs from interacting with the receptor proteins in a non-competitive manner. Genetics: Expression of the three SNAP proteins in mammals is tissue dependent, with α-SNAP (33 kD) and γ-SNAP (36 kD) expressed throughout the body, and β-SNAP (34 kD) primarily found in brain tissues. α-SNAP and β-SNAP share approximately 83% sequence homology; in humans, α-SNAP and γ-SNAP are encoded by NAPA and NAPG on chromosomes 19 and 18 respectively, while β-SNAP is encoded by NAPB on chromosome 20. Changes in temporal expression have been observed in rodent models during embryonic development, but similar changes in humans have yet to be verified. Expression data in the early years after discovery of the protein group in the 1990s were primarily confirmed through Western blotting and analysis of mRNA and later cDNA expression. Immunofluorescent localization showed strong association of the proteins with intracellular membranes, including the ER and Golgi bodies, as well as vesicles. Deletions in the α-SNAP gene have been found to be lethal in utero in rodent models, while the hyh (hydrocephalus with hop gait) missense mutation leads to 40% lower levels of expression. The effects of the mutations develop in utero and become more severe over time, ultimately leading to worsening hydrocephalus and death. Reduced expression of α-SNAP in hyh/hyh mice is also associated with CD4 T-cell effector cytokine deficiency. The yeast (S. cerevisiae) homolog of the SNAP gene, known as Sec17p, has 67% similarity to mammalian α-SNAP, with approximately 34% homology to the alpha and 33% to the beta isoform. It has been studied based on its function in yeast vacuolar fusion. The lethality of the double null mutation in this organism highlights the importance of this class of proteins in intra- and intercellular communication and survival. Structure: TEM and FRET imaging techniques were widely applied at the beginning of the century to resolve the SNARE complex, and were expanded to include SNAP proteins as well. The 20S complex ultimately forms a rod 2.5 nm wide by 15 nm long that assembles along the axis of two coiled coils of interacting SNARE proteins. The binding of SNAP to the lateral side of the SNARE complex rod takes place at the membrane during the priming step. This interaction requires intact N- and C-terminal residues (the 63 N-terminal and 37 C-terminal residues noted above), which may directly interact with one or more alpha-helical domains of the SNAP. NSF binding to α-SNAP has also been shown to be negatively impacted by phosphorylation of NSF or by the Y83E mutant, which displays phosphomimetic properties. The unwinding of the coiled-coil structures following ATP hydrolysis by NSF is also accompanied by a conformational change in syntaxin (a SNARE) prior to vesicle fusion. Structure: These structural findings have been confirmed by use of Quick-Freeze/Deep-Etch EM, which likewise describes the ternary SNARE complex as an elongated rod-like assembly around the SNARE proteins with N-terminal binding of SNAP. The yeast homolog Sec17, pictured above, contains fourteen α-helices and has the approximate dimensions of 85 Å × 35 Å × 35 Å, with multiple conserved residues along the packing face of the protein. Blocking of the Sec17/SNAP interaction with SNAREs and Sec18/NSF has recently been reported in the literature using small molecules that bind PA (phosphatidic acid) to prevent priming activity and limit vesicle fusion.
Role in disease: Blocking of SNARE complex assembly, and therefore indirectly interfering with SNAP function, has a wide variety of applications, as evidenced by the diverse treatment utility of Botox, which can be used to block vesicle fusion and neurotransmitter release. Disruption of SNAP protein receptors has been found to cause disease, while targeting them with therapeutics has broad application. Role in disease: Outlined below are recent publications indicating more direct associations of SNAPs in disease course and development. Notably, the role of SNAPs in disease states is still primarily related to their interaction as part of the SNARE complexes. Abnormal levels of multiple vesicular trafficking proteins are often observed in conjunction, and a compound effect may lead to a disruption in signaling. Role in disease: Colorectal cancer In studies of neuroendocrine markers in colorectal cancer, the expression of α-SNAP and β-SNAP was found to be higher in undifferentiated cells when compared to controls, and was associated with more aggressive disease. Similarly, expression of other vesicle trafficking proteins, including synaptophysin, SNAP-25 (SNARE), VAMP2 and syntaxin-1, was also found to show various levels of increase in small cell undifferentiated carcinomas. Aberrant signaling and trafficking of proteins in cancer cells has previously been reported based on SNARE complex interactions for α-SNAP, with implication of its role as a negative regulator of autophagy and the MAPK pathway through dephosphorylation. Depletion of α-SNAP has been reported to impair Golgi body integrity and assembly of vesicle fusion proteins at signaling junctions, while overexpression delays apoptosis in HeLa cells. Role in disease: Epilepsy Association of α-SNAP with v-SNARE (vesicle) and t-SNARE (target) proteins, including syntaxin-1, forms the 7S SNARE complex used in vesicle transport in central neurons. Downregulation of α-SNAP has been documented to increase susceptibility to seizures in rodent models. In the same study, a decrease in α-SNAP expression was observed in patients with temporal lobe epilepsy as well as in the epileptic rat model. An accumulation of the 7S complexes was also observed in synapses of the hippocampus in chronic rodent models of epilepsy. The suspected mechanism may involve priming of the SNARE-SNAP-NSF complex to increase vesicle fusion at the membranes; however, the exact mechanism by which the upregulation of the 7S complex occurs is not well understood. Role in disease: Down syndrome In a study of fetal brain development, β-SNAP levels were found to be comparable between samples taken from Down syndrome (DS) affected and non-affected individuals. In comparison, α-SNAP was only observed in half of the DS-affected samples. Reduction in α-SNAP, along with other observed changes to protein expression, may indicate impaired synaptogenesis from very early on in development. Role in disease: Huntington's disease A study of vesicle fusion proteins in a rodent Huntington's disease (HD) model found higher levels of α-SNAP in the hippocampus and lower expression in the striatum of HD mice compared to controls. Notably, multiple other proteins involved in vesicle fusion also showed decreased expression in the striatum along with increased expression in the hippocampus, and the contributing effects have not yet been deconvoluted.
The interaction of the mutant huntingtin gene product with vesicle fusion proteins may also be responsible for the deranged synaptic development or degeneration observed in the condition. Role in disease: Prion disease Upregulation of α-SNAP was observed in mice with the 14-3-3 gamma protein knocked out, suggesting a relationship with the progression, but not the pathogenesis, of Creutzfeldt-Jakob Disease (CJD). Increased levels of 14-3-3 proteins are used diagnostically to confirm CJD, but based on the literature they may not play a causal role in the disease. Intervention strategies: The interaction of α-SNAP with AMPA receptors for glutamate may be a potential target to improve synaptic plasticity through a mechanism of stabilization at membranes where SNAPs are present. Additionally, α-SNAP has been implicated in surfactant and acrosomal exocytosis in alveolar cells and sperm cells respectively, although the exact mechanisms are yet to be identified. SNAP protein isoforms are not currently a druggable target and may prove difficult to target as they serve primarily a scaffolding role. Insufficiency in expression is indicated in a number of neurodegenerative and immune-related conditions, where the primary treatment strategy may focus on gene therapy as a replacement option. Intervention strategies: The potential for application to clinical therapy includes the development of targeted regulators of β-SNAP for treatment of CNS pathologies including epilepsy. Use of inositol polyphosphates to inhibit β-SNAP and synaptotagmin interactions can also block neurotransmitter release and may be potentially useful in broader regulation of synaptic networks. Intervention strategies: Small molecule agents that block SNARE complex activity through interaction with SNAPs have been used in vitro, but their practical use may extend to in vivo systems as well. In colorectal cancers where elevated α-SNAP levels were observed, siRNA technology may be employed to deplete the overexpression, but the utility of this technology may be limited until further experience with the platform is gathered and safety is well demonstrated.
**Fluoroelastomer** Fluoroelastomer: A fluoroelastomer is a fluorocarbon-based synthetic rubber. Fluoroelastomers generally have wide chemical resistance. Composition: Several compositions of fluoroelastomers exist, including FKM (by the ASTM D1418 standard, equivalent to FPM by the ISO/DIN 1629 standard); perfluoroelastomers (FFKM); and tetrafluoroethylene/propylene rubbers (FEPM). Performance: The performance of fluoroelastomers in aggressive chemicals depends on the nature of the base polymer and the compounding ingredients used for moulding the final products (e.g. O-rings, shaft seals). This performance can vary significantly when end-users purchase polymer-containing rubber goods from different sources. Fluoroelastomers are generally compatible with hydrocarbons, but incompatible with ketones such as acetone and organic acids such as acetic acid. External links: Properties of Elastomers - Chemical Resistance list (PDF; 0.6 MB) Designing with Fluoroelastomers (PDF; 0.8 MB)
**266P/Christensen** 266P/Christensen: 266P/Christensen is a periodic comet in the Solar System. It will next come to perihelion in December 2026. It has been suggested as the source of the 1977 "Wow! Signal".
**Direct shear test** Direct shear test: A direct shear test is a laboratory or field test used by geotechnical engineers to measure the shear strength properties of soil or rock material, or of discontinuities in soil or rock masses. The U.S. standards defining how the test should be performed are ASTM D3080 and AASHTO T236; the corresponding U.K. standard is BS 1377-7:1990. For rock, the test is generally restricted to rock with (very) low shear strength. The test is, however, standard practice for establishing the shear strength properties of discontinuities in rock. Direct shear test: The test is performed on three or four specimens from a relatively undisturbed soil sample. A specimen is placed in a shear box which has two stacked rings to hold the sample; the contact between the two rings is at approximately the mid-height of the sample. A confining stress is applied vertically to the specimen, and the upper ring is pulled laterally until the sample fails, or through a specified strain. The load applied and the strain induced are recorded at frequent intervals to determine a stress–strain curve for each confining stress. Several specimens are tested at varying confining stresses to determine the shear strength parameters: the soil cohesion (c) and the angle of internal friction, commonly known as the friction angle (φ). The results of the tests on each specimen are plotted on a graph with the peak (or residual) stress on the y-axis and the confining stress on the x-axis. The y-intercept of the curve which fits the test results is the cohesion, and the slope of the line or curve is the friction angle. Direct shear test: Direct shear tests can be performed under several conditions. The sample is normally saturated before the test is run, but can be run at the in-situ moisture content. The rate of strain can be varied to create a test of undrained or drained conditions, depending on whether the strain is applied slowly enough for water in the sample to prevent pore-water pressure buildup. A direct shear test machine is required to perform the test. The test using the direct shear machine determines the consolidated drained shear strength of a soil material in direct shear. The advantages of the direct shear test over other shear tests are the simplicity of setup and equipment used, and the ability to test under differing saturation, drainage, and consolidation conditions. These advantages have to be weighed against the difficulty of measuring pore-water pressure when testing in undrained conditions, and possible spuriously high results from forcing the failure plane to occur in a specific location. Direct shear test: The test equipment and procedures are slightly different for tests on discontinuities.
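To make the parameter determination described above concrete, the sketch below fits the Mohr–Coulomb failure envelope τ = c + σ·tan(φ) to peak shear stresses measured at several confining (normal) stresses; the y-intercept gives the cohesion and the slope gives the friction angle. A minimal Python sketch with invented illustrative data (real values come from the three or four tested specimens):

```python
import math

# Hypothetical peak shear stress (kPa) measured at each normal stress (kPa).
normal_stress = [50.0, 100.0, 200.0]
peak_shear = [45.0, 78.0, 145.0]

# Least-squares fit of tau = c + sigma * tan(phi).
n = len(normal_stress)
mean_x = sum(normal_stress) / n
mean_y = sum(peak_shear) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(normal_stress, peak_shear)) \
        / sum((x - mean_x) ** 2 for x in normal_stress)
cohesion = mean_y - slope * mean_x                # y-intercept, c (kPa)
friction_angle = math.degrees(math.atan(slope))   # phi (degrees)

print(f"c   = {cohesion:.1f} kPa")     # ~11.5 kPa for this data
print(f"phi = {friction_angle:.1f} deg")  # ~33.7 degrees for this data
```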
**VANOS** VANOS: VANOS is a variable valve timing system used by BMW on various automotive petrol engines since 1992. The name is an abbreviation of the German words for variable camshaft timing (German: variable Nockenwellensteuerung). The initial version (retrospectively renamed "single VANOS") was solely used on the intake camshaft, while the later "double VANOS" systems are used on intake and exhaust camshafts. Since 2001, VANOS has often been used in conjunction with the Valvetronic variable valve lift system. Operation: VANOS is a variator system that varies the timing of the valves by moving the position of the camshafts in relation to the drive gear. The relative timing between inlet and exhaust valves is changed. At lower engine speeds, the position of the camshaft is moved so the valves are opened later, as this improves idling quality and smooth power development. As the engine speed increases, the valves are opened earlier: this enhances torque, reduces fuel consumption and lowers emissions. At high engine speeds, the valves are opened later again, because this allows full power delivery. Single VANOS: The first-generation single VANOS system adjusts the timing of the intake camshaft to one of two positions — e.g. the camshaft is advanced at certain engine speeds. VANOS was first introduced in 1992 on the BMW M50 engine used in the 3 and 5 Series. In 1998, a single infinitely variable VANOS was introduced on the BMW M62 V8 engine. Double VANOS: The second-generation double VANOS system adjusts the timing of the intake and exhaust camshafts with continuously variable adjustment, based on engine speed and throttle opening. The first double VANOS system appeared on the S50B32 engine in 1996.
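To illustrate the speed-dependent phasing strategy described under "Operation", here is a purely illustrative Python sketch of how a controller might map engine speed to a target intake cam advance; the breakpoints and angles are invented for the example and are not BMW calibration data:

```python
def target_intake_advance(rpm: float) -> float:
    """Return a target intake cam advance in crank degrees (illustrative only).

    Mirrors the qualitative VANOS strategy: valves open later at idle
    (smooth running), earlier in the mid-range (torque, economy),
    and later again at high speed (peak power).
    """
    # (rpm breakpoint, advance in degrees) -- hypothetical calibration table
    table = [(800, 0.0), (2000, 10.0), (4500, 25.0), (6500, 5.0)]
    if rpm <= table[0][0]:
        return table[0][1]
    if rpm >= table[-1][0]:
        return table[-1][1]
    for (r0, a0), (r1, a1) in zip(table, table[1:]):
        if r0 <= rpm <= r1:
            # Linear interpolation between calibration breakpoints.
            return a0 + (a1 - a0) * (rpm - r0) / (r1 - r0)

print(target_intake_advance(3000))  # mid-range: advanced timing
```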
**Bioenergetics** Bioenergetics: Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics. Overview: Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network: by eating autotrophs (plants), heterotrophs harness energy that was initially transformed by the plants during photosynthesis. In a living organism, chemical bonds are broken and made as part of the exchange and transformation of energy. Energy is available for work (such as mechanical work) or for other processes (such as chemical synthesis and anabolic processes in growth) when weak bonds are broken and stronger bonds are made. The production of stronger bonds allows release of usable energy. Overview: Adenosine triphosphate (ATP) is the main "energy currency" for organisms; the goal of metabolic and catabolic processes is to synthesize ATP from available starting materials (from the environment), and to break down ATP (into adenosine diphosphate (ADP) and inorganic phosphate) by utilizing it in biological processes. In a cell, the ratio of ATP to ADP concentrations is known as the "energy charge" of the cell. A cell can use this energy charge to relay information about cellular needs; if there is more ATP than ADP available, the cell can use ATP to do work, but if there is more ADP than ATP available, the cell must synthesize ATP via oxidative phosphorylation. Living organisms produce ATP from energy sources via oxidative phosphorylation. The terminal phosphate bonds of ATP are relatively weak compared with the stronger bonds formed when ATP is hydrolyzed (broken down by water) to adenosine diphosphate and inorganic phosphate.
Here it is the thermodynamically favorable free energy of hydrolysis that results in energy release; the phosphoanhydride bond between the terminal phosphate group and the rest of the ATP molecule does not itself contain this energy. An organism's stockpile of ATP is used as a battery to store energy in cells. Utilization of chemical energy from such molecular bond rearrangement powers biological processes in every biological organism. Overview: Living organisms obtain energy from organic and inorganic materials; i.e. ATP can be synthesized from a variety of biochemical precursors. For example, lithotrophs can oxidize minerals such as nitrates or forms of sulfur, such as elemental sulfur, sulfites, and hydrogen sulfide, to produce ATP. In photosynthesis, autotrophs produce ATP using light energy, whereas heterotrophs must consume organic compounds, mostly including carbohydrates, fats, and proteins. The amount of energy actually obtained by the organism is lower than the amount present in the food; there are losses in digestion, metabolism, and thermogenesis. Environmental materials that an organism intakes are generally combined with oxygen to release energy, although some nutrients can also be oxidized anaerobically by various organisms. The utilization of these materials is a form of slow combustion because the nutrients are reacted with oxygen (the materials are oxidized slowly enough that the organisms do not produce fire). The oxidation releases energy, which may evolve as heat or be used by the organism for other purposes, such as breaking chemical bonds. Types of reactions: An exergonic reaction is a spontaneous chemical reaction that releases energy. It is thermodynamically favored, indexed by a negative value of ΔG (Gibbs free energy). Over the course of a reaction, energy needs to be put in, and this activation energy drives the reactants from a stable state to a highly energetically unstable transition state and on to a more stable state that is lower in energy (see: reaction coordinate). The reactants are usually complex molecules that are broken into simpler products. The entire reaction is usually catabolic. The release of energy (called Gibbs free energy) is negative (i.e. −ΔG) because energy is released from the reactants to the products. Types of reactions: An endergonic reaction is an anabolic chemical reaction that consumes energy. It is the opposite of an exergonic reaction. It has a positive ΔG because it takes more energy to break the bonds of the reactants than the energy of the products offers, i.e. the products have weaker bonds than the reactants. Thus, endergonic reactions are thermodynamically unfavorable. The free energy (ΔG) gained or lost in a reaction can be calculated as follows: ΔG = ΔH − TΔS, where ∆G = Gibbs free energy, ∆H = enthalpy, T = temperature (in kelvins), and ∆S = entropy. Examples of major bioenergetic processes: Glycolysis is the process of breaking down glucose into pyruvate, producing two molecules of ATP (per molecule of glucose) in the process. When a cell has a higher concentration of ATP than ADP (i.e. has a high energy charge), the cell does not need to undergo glycolysis, which releases energy from available glucose to perform biological work. Pyruvate is one product of glycolysis, and can be shuttled into other metabolic pathways (gluconeogenesis, etc.) as needed by the cell.
Additionally, glycolysis produces reducing equivalents in the form of NADH (nicotinamide adenine dinucleotide), which will ultimately be used to donate electrons to the electron transport chain. Gluconeogenesis is the opposite of glycolysis; when the cell's energy charge is low (the concentration of ADP is higher than that of ATP), the cell must synthesize glucose from carbon-containing biomolecules such as proteins, amino acids, fats, pyruvate, etc. For example, proteins can be broken down into amino acids, and these simpler carbon skeletons are used to build/synthesize glucose. The citric acid cycle is a process of cellular respiration in which acetyl coenzyme A, synthesized from pyruvate by pyruvate dehydrogenase, is first reacted with oxaloacetate to yield citrate. The remaining eight reactions produce other carbon-containing metabolites. These metabolites are successively oxidized, and the free energy of oxidation is conserved in the form of the reduced coenzymes FADH2 and NADH. These reduced electron carriers can then be re-oxidized when they transfer electrons to the electron transport chain. Ketosis is a metabolic process whereby ketone bodies are used by the cell for energy (instead of using glucose). Cells often turn to ketosis as a source of energy when glucose levels are low, e.g. during starvation. Oxidative phosphorylation and the electron transport chain form the process where reducing equivalents such as NADPH, FADH2 and NADH can be used to donate electrons to a series of redox reactions that take place in electron transport chain complexes. These redox reactions take place in enzyme complexes situated within the mitochondrial membrane. These redox reactions transfer electrons "down" the electron transport chain, which is coupled to the generation of the proton motive force. This difference in proton concentration between the mitochondrial matrix and the intermembrane space is used to drive ATP synthesis via ATP synthase. Photosynthesis, another major bioenergetic process, is the metabolic pathway used by plants in which solar energy is used to synthesize glucose from carbon dioxide and water. This reaction takes place in the chloroplast. After glucose is synthesized, the plant cell can undergo photophosphorylation to produce ATP. Cotransport: In August 1960, Robert K. Crane presented for the first time his discovery of the sodium-glucose cotransport as the mechanism for intestinal glucose absorption. Crane's discovery of cotransport was the first-ever proposal of flux coupling in biology and was the most important event concerning carbohydrate absorption in the 20th century. Chemiosmotic theory: One of the major triumphs of bioenergetics is Peter D. Mitchell's chemiosmotic theory of how protons in aqueous solution function in the production of ATP in cell organelles such as mitochondria. This work earned Mitchell the 1978 Nobel Prize for Chemistry. Other cellular sources of ATP such as glycolysis were understood first, but such processes for direct coupling of enzyme activity to ATP production are not the major source of useful chemical energy in most cells. Chemiosmotic coupling is the major energy-producing process in most cells, being utilized in chloroplasts and several single-celled organisms in addition to mitochondria. Energy balance: Energy homeostasis is the homeostatic control of energy balance – the difference between energy obtained through food consumption and energy expenditure – in living systems.
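As a small worked example of the free-energy relation ΔG = ΔH − TΔS given in the "Types of reactions" section above, the Python sketch below uses illustrative textbook-style numbers for ATP hydrolysis (the exact values vary with cellular conditions and are assumptions here) to show how the sign of ΔG identifies an exergonic reaction:

```python
def gibbs_free_energy(delta_h_kj: float, temp_k: float,
                      delta_s_kj_per_k: float) -> float:
    """Compute dG = dH - T * dS (energies in kJ/mol, temperature in K)."""
    return delta_h_kj - temp_k * delta_s_kj_per_k

# Illustrative values for ATP hydrolysis (assumed, not exact cellular data):
delta_h = -20.0   # kJ/mol, enthalpy change
delta_s = 0.034   # kJ/(mol*K), entropy change
temp = 310.0      # K, roughly body temperature

dg = gibbs_free_energy(delta_h, temp, delta_s)
print(f"dG = {dg:.1f} kJ/mol -> {'exergonic' if dg < 0 else 'endergonic'}")
# dG = -30.5 kJ/mol, consistent with the commonly quoted ~-30 kJ/mol for ATP.
```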
**Second-level domain** Second-level domain: In the Domain Name System (DNS) hierarchy, a second-level domain (SLD or 2LD) is a domain that is directly below a top-level domain (TLD). For example, in example.com, example is the second-level domain of the .com TLD. Second-level domain: Second-level domains commonly refer to the organization that registered the domain name with a domain name registrar. Some domain name registries introduce a second-level hierarchy to a TLD that indicates the type of entity intended to register an SLD under it. For example, in the .uk namespace a college or other academic institution would register under the .ac.uk ccSLD, while companies would register under .co.uk. Strictly speaking, domains like .ac.uk and .co.uk are second-level domains themselves, since they sit directly below a TLD. A list of the official TLDs can be found at icann.org and iana.org. An ordinal-free term to denote domains under which people can register their own domain name is public suffix domain (PSD). Country-code second-level domains: Examples by country include:
- Austria – two second-level domains are available to the public: .co.at, intended for commercial enterprises, and .or.at, intended for organizations. The second-level domain .priv.at is restricted to Austrian citizens only, while .ac.at and .gv.at are reserved for educational institutions and governmental bodies respectively.
- France – various second-level domains are available for certain sectors, including .avocat.fr for attorneys, .aeroport.fr for airports and .veterinaire.fr for vets.
- Nigeria – .com.ng and .gov.ng.
- Trinidad and Tobago – co.tt, com.tt, org.tt, net.tt, travel.tt, museum.tt, aero.tt, tel.tt, name.tt, charity.tt, mil.tt, edu.tt and gov.tt.
- Turkey – domain registrations, including the registration of second-level domains, are administered by nic.tr. There are 17 active second-level domains under the .tr TLD. The registration of domains is restricted to Turkish individuals and businesses, or foreign companies with a business activity in Turkey. Second-level domains include .com.tr for commercial ventures, .edu.tr for academic institutions and .name.tr for personal use.
- Ukraine – second-level domains include .gov.ua (government agencies), .com.ua and .in.ua (commercial use), .org.ua (non-profit organizations), .net.ua (Internet providers) and .edu.ua (academic institutions). There are also numerous geographic names.
- United States – a two-letter second-level domain is formally reserved for each U.S. state, federal territory, and the District of Columbia.
- Country-code second-level domains also exist for Algeria, Australia, Brazil, Hungary, New Zealand, Pakistan, India, Israel, Japan, Russia, South Africa, South Korea, Spain, Sri Lanka, Thailand, the United Kingdom and Zambia.
Historic second-level domains: There are several second-level domains which are no longer available. Australia – second-level domains under .au which are no longer available include .conf.au, originally intended for conferences; .gw.au for the Australian Academic and Research networks; .info.au for general information; and .otc.au and .telememo.au for the X.400 mail systems.
Historic second-level domains: Canada – prior to 12 October 2010 there were second-level domains based on province: .ab.ca — Alberta, .bc.ca — British Columbia, .mb.ca — Manitoba, .nb.ca — New Brunswick, .nf.ca — Newfoundland, .nl.ca — Newfoundland and Labrador, .ns.ca — Nova Scotia, .nt.ca — Northwest Territories, .nu.ca — Nunavut, .on.ca — Ontario, .pe.ca — Prince Edward Island, .qc.ca — Quebec, .sk.ca — Saskatchewan, .yk.ca — Yukon. Since 2010, some have been replaced (for example, alberta.ca), while others have remained under the provincial two-letter SLD (e.g., the Calgary Board of Education at www.cbe.ab.ca) and others were moved to more traditional subdomains (www.transportation.alberta.ca). Historic second-level domains: France – historic second-level domains for France included .tm.fr (for brands), .com.fr (for commercial use) and .asso.fr. The Netherlands – historic second-level domains for the Netherlands included .co.nl (for commercial use). Yugoslavia – in 2006 the .yu ccTLD was replaced by .rs (for Serbia) and .me (for Montenegro). Second-level domains under .yu included .ac.yu for academic institutions, .co.yu for commercial enterprises, .org.yu for organizations and .cg.yu for residents of Montenegro. Only legal entities were allowed to register names under .yu and its second-level domains. Tuvalu – historic second-level domains for Tuvalu included co.tv. Legal issues: As a result of ICANN's generic top-level domain (gTLD) expansion, the risk of domain squatting has increased significantly. For example, based on current regulations, the registration of the gTLDs .olympics or .redcross is not allowed; however, the registration of sites such as olympics.example or redcross.example is not controlled. Experts say that further restrictions are needed for second-level domains under the new gTLD .health as well. For example, second-level domains such as tobacco.health or diet.health could easily be misused by companies and are therefore a potential threat to Internet users.
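Because registrations can sit under public suffix domains like .co.uk as well as directly under a TLD, extracting the "registrable" second-level label takes more than splitting on dots. A minimal Python sketch follows; it uses a tiny hardcoded suffix set for illustration, whereas production code would consult the full Public Suffix List (publicsuffix.org):

```python
# Tiny illustrative subset of the Public Suffix List; the real list
# contains thousands of entries.
PUBLIC_SUFFIXES = {"com", "org", "net", "uk", "co.uk", "ac.uk",
                   "com.ng", "gov.ng"}

def registrable_domain(hostname: str) -> str:
    """Return the label registered just below the longest matching public suffix."""
    labels = hostname.lower().rstrip(".").split(".")
    # Scanning from the longest candidate suffix down finds the longest match.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in PUBLIC_SUFFIXES and i > 0:
            return ".".join(labels[i - 1:])
    raise ValueError(f"no known public suffix in {hostname!r}")

assert registrable_domain("example.com") == "example.com"
assert registrable_domain("www.example.co.uk") == "example.co.uk"
```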
**Advanced Telecommunications Computing Architecture** Advanced Telecommunications Computing Architecture: Advanced Telecommunications Computing Architecture (ATCA or AdvancedTCA) is the largest specification effort in the history of the PCI Industrial Computer Manufacturers Group (PICMG), with more than 100 companies participating. Known as AdvancedTCA, the official specification designation PICMG 3.x (see below) was ratified by the PICMG organization in December 2002. AdvancedTCA is targeted primarily to requirements for "carrier grade" communications equipment, but has recently expanded its reach into more ruggedized applications geared toward the military/aerospace industries as well. This series of specifications incorporates the latest trends in high speed interconnect technologies, next-generation processors, and improved Reliability, Availability and Serviceability (RAS). Mechanical specifications: An AdvancedTCA board (blade) is 280 mm deep and 322 mm high. The boards have a metal front panel and a metal cover on the bottom of the printed circuit board to limit electromagnetic interference and to limit the spread of fire. The locking injector-ejector handle (lever) actuates a microswitch to let the Intelligent Platform Management Controller (IPMC) know that an operator wants to remove a board, or that the board has just been installed, thus activating the hot-swap procedure. AdvancedTCA boards support the use of PCI Mezzanine Card (PMC) or Advanced Mezzanine Card (AMC) expansion mezzanines. Mechanical specifications: The shelf supports RTMs (Rear Transition Modules). RTMs plug into the back of the shelf in slot locations that match the front boards. The RTM and the front board are interconnected through a Zone-3 connector. The Zone-3 connector is not defined by the AdvancedTCA specification. Mechanical specifications: Each shelf slot is 30.48 mm wide. This allows for 14-board chassis to be installed in a 19-inch rack-mountable system and 16 boards in an ETSI rack-mountable system. A typical 14-slot system is 12 or 13 rack units high. The large AdvancedTCA shelves are targeted to the telecommunication market so the airflow goes in the front of the shelf, across the boards from bottom to top, and out the rear of the shelf. Smaller shelves that are used in enterprise applications typically have horizontal air flow. Mechanical specifications: The small-medium AdvancedTCA shelves are targeted to the telecommunication market; for lab research operation, some shelves have an open cover in order to make testing easier. Backplane architecture: The AdvancedTCA backplane provides point-to-point connections between the boards and does not use a data bus. The backplane definition is divided into three sections: Zone-1, Zone-2, and Zone-3. The connectors in Zone-1 provide redundant −48 VDC power and Shelf Management signals to the boards. The connectors in Zone-2 provide the connections to the Base Interface and Fabric Interface. All Fabric connections use point-to-point 100 Ω differential signals. Zone-2 is called "Fabric Agnostic", which means that any Fabric that can use 100 Ω differential signals can be used with an AdvancedTCA backplane. The connectors in Zone-3 are user defined and are usually used to connect a front board to a Rear Transition Module. The Zone-3 area can also hold a special backplane to interconnect boards with signals that are not defined in the AdvancedTCA specification.
Backplane architecture: The AdvancedTCA Fabric specification uses Logical Slots to describe the interconnections. The Fabric Switch Boards go in Logical Slots 1 and 2. The chassis manufacturer is free to decide the relationship between Logical and Physical Slots in a chassis. The chassis Field Replaceable Unit (FRU) data includes an Address Table that describes the relationship between the Logical and Physical slots. Backplane architecture: The Shelf Managers communicate with each board and FRU in the chassis with IPMI (Intelligent Platform Management Interface) protocols running on redundant I²C buses on the Zone-1 connectors. The Base Interface is the primary Fabric on the Zone-2 connectors and allocates 4 differential pairs per Base Channel. It is wired as a Dual-Star with redundant fabric hub slots at the core. It is commonly used for out-of-band management, firmware uploading, OS boot, etc. The Fabric Interface on the backplane supports many different Fabrics and can be wired as a Dual-Star, Dual-Dual-Star, Mesh, Replicated-Mesh or other architectures. It allocates 8 differential pairs per Fabric Channel and each Channel can be divided into four 2-pair Ports. The Fabric Interface is typically used to move data between the boards and the outside network. The Synchronization Clock Interface routes MLVDS (Multipoint Low-voltage differential signaling) clock signals over multiple 130 Ω buses. The clocks are typically used to synchronize telecom interfaces. The Update Channel Interface is a set of 10 differential signal pairs that interconnect two slots. Which slots are interconnected depends on the particular backplane design. These signals are commonly used to interconnect two hub boards, or redundant processor boards. Fabrics: The Base Interface can only be 10BASE-T, 100BASE-TX, or 1000BASE-T Ethernet. Since all boards and hubs are required to support one of these interfaces, there is always a network connection to the boards. The Fabric is commonly SerDes Gigabit Ethernet, but can also be Fibre Channel, XAUI 10-Gigabit Ethernet, InfiniBand, PCI Express, or Serial RapidIO. Any Fabric that can use the point-to-point 100 Ω differential signals can be used with an AdvancedTCA backplane. The PICMG 3.1 Ethernet/Fibre Channel specification has been revised to add IEEE 100GBASE-KR4 signaling alongside the existing IEEE 40GBASE-KR4, 10GBASE-KX4, 10GBASE-KR, and XAUI signaling. Blades (boards): AdvancedTCA blades can be Processors, Switches, AMC carriers, etc. A typical shelf will contain one or more switch blades and several processor blades. Blades (boards): When they are first inserted into the shelf, the onboard IPMC is powered from the redundant −48 V on the backplane. The IPMC sends an IPMI event message to the Shelf Manager to let it know that it has been installed. The Shelf Manager reads information from the blade and determines if there is enough power available. If there is, the Shelf Manager sends a command to the IPMC to power up the payload part of the blade. The Shelf Manager also determines what fabric ports are supported by the blade. It then looks at the fabric interconnect information for the backplane to find out what fabric ports are on the other end of the fabric connections. If the fabric ports on both ends of the backplane wires match, then it sends an IPMI command to both blades to enable the matching ports. Blades (boards): Once the blade is powered up and connected to the fabrics, the Shelf Manager listens for event messages from the sensors on the blade.
If a temperature sensor reports that it is too warm, then the Shelf Manager will increase the speed of the fans. The FRU data in the board contains descriptive information like the manufacturer, model number, serial number, manufacturing date, revision, etc. This information can be read remotely to perform an inventory of the blades in a shelf. Shelf Management: The Shelf Manager monitors and controls the boards (blades) and FRUs in the shelf. If any sensor reports a problem, the Shelf Manager can take action or report the problem to a System Manager. This action could be something simple like making the fans go faster, or more drastic such as powering off a board. Each board and FRU contains inventory information (FRU Data) that can be retrieved by the Shelf Manager. The FRU data is used by the Shelf Manager to determine if there is enough power available for a board or FRU and if the Fabric ports that interconnect boards are compatible. The FRU data can also reveal the manufacturer, manufacturing date, model number, serial number, and asset tag. Shelf Management: Each blade, intelligent FRU, and Shelf Manager contains an Intelligent Platform Management Controller (IPMC). The Shelf Manager communicates with the boards and intelligent FRUs with IPMI protocols running on redundant I²C buses. IPMI protocols include packet checksums to ensure that data transmission is reliable. It is also possible to have non-intelligent FRUs managed by an intelligent FRU. These are called Managed FRUs and have the same capabilities as an intelligent FRU. Shelf Management: The interconnection between the Shelf Manager and the boards is a redundant pair of Intelligent Platform Management Buses (IPMBs). The IPMB architecture can be a pair of buses (Bused IPMB) or a pair of radial connections (Radial IPMB). Radial IPMB implementations usually include the capability to isolate individual IPMB connections to improve reliability in the event of an IPMC failure. Shelf Management: The Shelf Manager communicates with outside entities using RMCP (IPMI over TCP/IP), HTTP, or SNMP over an Ethernet network. Some Shelf Managers support the Hardware Platform Interface, a technical specification defined by the Service Availability Forum. New specification activity: Two new working groups have been started to adapt ATCA to the specific requirements of physics research. New specification activity: WG1: Physics xTCA I/O, Timing and Synchronization Working Group. WG1 will define rear I/O for AMC modules and a new component called the μRTM. Additions will be made to the μTCA Shelf specification to accommodate the μRTM and to the ATCA specification to accommodate AMC Rear I/O for an ATCA carrier RTM. Signal lines will be identified for use as clocks, gates, and triggers that are commonly used in Physics data acquisition systems. WG2: Physics xTCA Software Architectures and Protocols Working Group. WG2 will define a common set of software architectures and supporting infrastructure to facilitate inter-operability and portability of both hardware and software modules among the various applications developed for the Physics xTCA platform, and that will minimize the development effort and time required to construct experiments and systems using that platform. New specification activity: A working group was formed to extend ATCA to non-telecom markets.
New specification activity: PICMG 3.7 ATCA Extensions for Applications Outside the Telecom Central Office. The goals of this new working group are to define enhanced features to support double-wide boards; add enhancements to support 600 W single-slot boards and 800 W double-slot boards; add support for double-sided shelves with full-sized boards plugged into both the front and rear of the shelf; and add support for 10 Gb/s signaling on the Base Interface. PICMG specifications: 3.0 is the "base" or "core" specification. The AdvancedTCA definition alone defines a Fabric-agnostic chassis backplane that can be used with any of the Fabrics defined in the following specifications: 3.1 Ethernet (and Fibre Channel); 3.2 InfiniBand; 3.3 StarFabric; 3.4 PCI Express (and PCI Express Advanced Switching); and 3.5 RapidIO.
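The blade power-up negotiation described under Blades (boards) is essentially a power-budget check followed by a fabric-port compatibility check. The following is a minimal sketch of that control flow, not an implementation of the PICMG specifications; all names (Blade, ShelfManager, the enable logic) are hypothetical, and a real Shelf Manager performs these steps via IPMI messages.

```python
# Illustrative sketch of the Shelf Manager's power-up decision described above.
from dataclasses import dataclass, field

@dataclass
class Blade:
    slot: int
    payload_power_w: float  # power requested in the blade's FRU data
    fabric_ports: set[str] = field(default_factory=set)  # e.g. {"1000BASE-T"}
    enabled_ports: set[str] = field(default_factory=set)

class ShelfManager:
    def __init__(self, power_budget_w: float):
        self.power_budget_w = power_budget_w
        self.blades: dict[int, Blade] = {}

    def on_blade_inserted(self, blade: Blade, peer_slot: int | None = None):
        # 1. Read FRU data and check the remaining power budget.
        if blade.payload_power_w > self.power_budget_w:
            print(f"slot {blade.slot}: insufficient power, payload stays off")
            return
        self.power_budget_w -= blade.payload_power_w
        self.blades[blade.slot] = blade
        print(f"slot {blade.slot}: payload powered up")
        # 2. Enable only fabric ports that match on both ends of the wiring.
        peer = self.blades.get(peer_slot)
        if peer:
            for port in blade.fabric_ports & peer.fabric_ports:
                blade.enabled_ports.add(port)
                peer.enabled_ports.add(port)
                print(f"slots {peer.slot}<->{blade.slot}: {port} enabled")

sm = ShelfManager(power_budget_w=200.0)
sm.on_blade_inserted(Blade(1, 120.0, {"1000BASE-T", "XAUI"}))
sm.on_blade_inserted(Blade(3, 60.0, {"1000BASE-T"}), peer_slot=1)
```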
**Lafutidine** Lafutidine: Lafutidine (INN) is a second-generation histamine H2 receptor antagonist with a multimodal mechanism of action, used to treat gastrointestinal disorders. It is marketed in South Korea, Japan, and India. Medical use: Lafutidine is used to treat gastric ulcers, duodenal ulcers, and wounds in the lining of the stomach associated with acute gastritis and acute exacerbation of chronic gastritis. Adverse effects: Adverse events observed during clinical trials included constipation, diarrhea, drug rash, nausea, vomiting, and dizziness. Mechanism of action: Like other H2 receptor antagonists, lafutidine acts by preventing the secretion of gastric acid. It also activates calcitonin gene-related peptide, stimulating nitric oxide (NO) production and regulating gastric mucosal blood flow; increases somatostatin levels, which also reduces gastric acid secretion; causes the stomach lining to generate more mucin; inhibits neutrophil activation, thus preventing injury from inflammation; and blocks the attachment of Helicobacter pylori to gastric cells. Trade names: It is marketed in Japan as Stogar by UCB and in India as Lafaxid by Zuventus Healthcare. It is also marketed in South Korea as Ildong Lafutidine by Ildong Pharmaceutical Co Ltd.
**Optical disc recording modes** Optical disc recording modes: In optical disc authoring, there are multiple modes for recording, including Disc-At-Once, Track-At-Once, and Session-At-Once. CD Disc-At-Once: Disc-At-Once (DAO) for CD-R media is a mode that masters the disc contents in one pass, rather than a track at a time as in Track-At-Once. DAO mode, unlike TAO mode, allows any amount of audio data (or no data at all) to be written in the "pre-gaps" between tracks. CD Disc-At-Once: One use of this technique, for example, is to burn track introductions to be played before each track starts. A CD player will generally display a negative time offset counting up to the next track when such pre-gap introductions play. Pre-gap audio before the first track of the CD makes it possible to burn an unnumbered, "hidden" audio track. This track can only be accessed by "rewinding" from the start of the first track, backwards into the pre-gap audio. CD Disc-At-Once: DAO recording is also the only way to write data to the unused R-W sub-channels. This allows for extended graphic and text features on an audio CD such as CD+G and CD-Text. It is also the only way to write audio files that link together seamlessly with no gaps, a technique often used in progressive rock, trance, and other music genres. CD Track-At-Once: Track-At-Once (TAO) is a recording mode in which the recording laser stops after each track is finished and two run-out blocks are written. One link block and four run-in blocks are written when the next track is recorded. TAO discs can have both data and audio at the same time. There are two TAO writing modes: Mode 1 and Mode 2 XA. DVD-R Disc At Once: Disc-At-Once (DAO) recording for DVD-R media is a mode in which all data is written sequentially to the disc in one uninterrupted recording session. The on-disc contents consist of a lead-in area, followed by the data, and closed by a lead-out area. The data is addressable in sectors of 2048 bytes each, with the first sector address being zero. There are no run-out blocks as in CD-R Disc-At-Once. Session At Once: Session-At-Once (SAO) recording allows multiple sessions to be recorded and finalized on a single disc. The resulting disc can be read by computer drives, but sessions after the first are generally not readable by CD audio equipment. Audio Master Quality Recording: Audio Master Quality Recording was introduced by Yamaha in 2002. The feature is only available on some models, notably Yamaha's CRW3200 and CRW-F1 series, and Plextor's Premium 2. Audio Master Quality Recording: CD recorders with this feature are no longer manufactured. It uses the Disc-At-Once method, usually at 1x, but some recorders allow for 4x and 8x speed modes. Since the pits and lands are longer, the quantity of information that can fit on a disc is less than with the normal method: 63 minutes instead of 74 minutes on a 650 MB CD, or 68 minutes instead of 80 minutes on a 700 MB CD.
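As a back-of-the-envelope check on the figures above: CD audio is laid out at 75 sectors per second with 2352 bytes of raw audio per sector, while DVD data is addressed in 2048-byte sectors starting at zero. The snippet below is a small sketch of that arithmetic; it is illustrative only and not part of any recording specification.

```python
# Rough capacity/addressing arithmetic for the formats described above.
CD_SECTORS_PER_SECOND = 75
CD_AUDIO_BYTES_PER_SECTOR = 2352   # raw 16-bit stereo PCM at 44.1 kHz
DVD_BYTES_PER_SECTOR = 2048

def cd_audio_bytes(minutes: float) -> int:
    """Raw audio bytes on a CD of the given playing time."""
    return int(minutes * 60 * CD_SECTORS_PER_SECOND * CD_AUDIO_BYTES_PER_SECTOR)

def dvd_byte_offset(sector: int) -> int:
    """Byte offset of a DVD data sector (first sector address is zero)."""
    return sector * DVD_BYTES_PER_SECTOR

print(cd_audio_bytes(74) / 1e6)   # ~783.2 MB of raw audio on a 74-minute disc
print(68 / 80)                    # ~0.85: Audio Master's capacity ratio on a 700 MB CD
print(dvd_byte_offset(16))        # 32768, e.g. where an ISO 9660 volume descriptor sits
```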
**PACT (compiler)** PACT (compiler): PACT was a series of compilers for the IBM 701 and IBM 704 scientific computers. Their development was conducted jointly by IBM and a committee of customers starting in 1954. PACT I was developed for the 701, and PACT IA for the 704. The emphasis in that early generation of compilers was minimization of the memory footprint, because memory was a very expensive resource at the time. The word "compiler" was not in widespread use at the time, so most of the 1956 papers described it as an "(automatic) coding system", although the word compiler was also used in some papers.
**Townsend (unit)** Townsend (unit): The townsend (symbol Td) is a physical unit of the reduced electric field (the ratio E/N), where E is the electric field and N is the concentration of neutral particles. It is named after John Sealy Townsend, who conducted early research into gas ionisation. Definition: It is defined by the relation 1 Td = 10⁻²¹ V⋅m² = 10⁻¹⁷ V⋅cm². For example, an electric field of E = 2.5 × 10⁴ V/m in a medium with the number density of an ideal gas at 1 atm (the Loschmidt constant, N = 2.6867811 × 10²⁵ m⁻³) gives E/N ≈ 10⁻²¹ V⋅m², which corresponds to 1 Td. Uses: This unit is important in gas discharge physics, where it serves as a scaling parameter because the mean energy of electrons (and therefore many other properties of the discharge) is typically a function of E/N over a broad range of E and N. The concentration N, which in an ideal gas is simply related to pressure and temperature, controls the mean free path and collision frequency. The electric field E governs the energy gained between two successive collisions. Uses: Because the reduced electric field is a scaling parameter, increasing the electric field intensity E by some factor q has the same consequences as lowering the gas density N by the factor q.
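Given the definition above, converting a field/density pair into townsends is a one-line calculation. Below is a minimal sketch assuming only the quoted constants; the helper name is illustrative, not an established API.

```python
# Convert a reduced electric field E/N to townsends, per the definition above.
TD_IN_V_M2 = 1e-21          # 1 townsend expressed in V·m^2
LOSCHMIDT = 2.6867811e25    # m^-3, ideal gas number density at 1 atm

def reduced_field_td(E_v_per_m: float, N_per_m3: float = LOSCHMIDT) -> float:
    """Return E/N expressed in townsends."""
    return (E_v_per_m / N_per_m3) / TD_IN_V_M2

print(reduced_field_td(2.5e4))  # ~0.93, reproducing the article's ~1 Td example
```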
**Berbamunine synthase** Berbamunine synthase: In enzymology, a berbamunine synthase (EC 1.14.19.66, formerly EC 1.1.3.34 and EC 1.14.21.3) is an enzyme that catalyzes the chemical reaction (S)-N-methylcoclaurine + (R)-N-methylcoclaurine + NADPH + H+ + O2 ⇌ berbamunine + NADP+ + 2 H2O. The five substrates of this enzyme are (S)-N-methylcoclaurine, (R)-N-methylcoclaurine, NADPH, H+, and O2, and its three products are berbamunine, NADP+, and H2O. Berbamunine synthase: This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and the other donor dehydrogenated; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (S)-N-methylcoclaurine,NADPH:oxygen oxidoreductase (C-O phenol-coupling). This enzyme is also called (S)-N-methylcoclaurine oxidase (C-O phenol-coupling). This enzyme participates in alkaloid biosynthesis.
**RANGAP1** RANGAP1: Ran GTPase-activating protein 1 is an enzyme that in humans is encoded by the RANGAP1 gene. Function: RanGAP1 is a homodimeric 65-kD polypeptide that specifically induces the GTPase activity of RAN by over 1,000-fold, but not that of RAS. RanGAP1 is the immediate antagonist of RCC1, a regulator molecule that keeps RAN in the active, GTP-bound state. The RANGAP1 gene encodes a 587-amino acid polypeptide. The sequence is unrelated to that of GTPase activators for other RAS-related proteins, but is 88% identical to Rangap1 (Fug1), the murine homolog of yeast Rna1p. RanGAP1 and RCC1 control RAN-dependent transport between the nucleus and cytoplasm. RanGAP1 is a key regulator of the RAN GTP/GDP cycle. Interactions: RanGAP1 is a trafficking protein which helps transport other proteins from the cytoplasm to the nucleus. Small ubiquitin-related modifier needs to be associated with it before it can be localized at the nuclear pore. RANGAP1 has been shown to interact with Ran and UBE2I.
**Multicollinearity** Multicollinearity: In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors. That is, a multivariable regression model with collinear predictors can indicate how well the entire bundle of predictors predicts the outcome variable, but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others. Multicollinearity: Note that in statements of the assumptions underlying regression analyses such as ordinary least squares, the phrase "no multicollinearity" usually refers to the absence of perfect multicollinearity, which is an exact (non-stochastic) linear relation among the predictors. In such a case, the design matrix $X$ has less than full rank, and therefore the moment matrix $X^{\mathsf{T}}X$ cannot be inverted. Under these circumstances, for a general linear model $y = X\beta + \epsilon$, the ordinary least squares estimator $\hat{\beta}_{\mathrm{OLS}} = (X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}y$ does not exist. Multicollinearity: In any case, multicollinearity is a characteristic of the design matrix, not the underlying statistical model. Multicollinearity leads to non-identifiable parameters. Definition: Collinearity is a linear association between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between them. For example, $X_1$ and $X_2$ are perfectly collinear if there exist parameters $\lambda_0$ and $\lambda_1$ such that, for all observations $i$, $X_{2i} = \lambda_0 + \lambda_1 X_{1i}$. Multicollinearity refers to a situation in which more than two explanatory variables in a multiple regression model are highly linearly related. There is perfect multicollinearity if, for example as in the equation above, the correlation between two independent variables equals 1 or −1. In practice, perfect multicollinearity in a data set is rare. More commonly, the issue of multicollinearity arises when there is an approximate linear relationship among two or more independent variables. Definition: Mathematically, a set of variables is perfectly multicollinear if there exist one or more exact linear relationships among some of the variables. That is, for all observations $i$,

$$\lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \cdots + \lambda_k X_{ki} = 0, \qquad (1)$$

where the $\lambda_k$ are constants and $X_{ki}$ is the $i$th observation on the $k$th explanatory variable. Definition: To explore one issue caused by multicollinearity, consider the process of attempting to obtain estimates for the parameters of the multiple regression equation $Y_i = \beta_0 + \beta_1 X_{1i} + \cdots + \beta_k X_{ki} + \varepsilon_i$. The ordinary least squares estimates involve inverting the matrix $X^{\mathsf{T}}X$, where

$$X = \begin{bmatrix} 1 & X_{11} & \cdots & X_{k1} \\ \vdots & \vdots & & \vdots \\ 1 & X_{1N} & \cdots & X_{kN} \end{bmatrix}$$

is an $N \times (k+1)$ matrix, where $N$ is the number of observations, $k$ is the number of explanatory variables, and $N \geq k+1$. If there is an exact linear relationship (perfect multicollinearity) among the independent variables, then at least one of the columns of $X$ is a linear combination of the others, and so the rank of $X$ (and therefore of $X^{\mathsf{T}}X$) is less than $k+1$, and the matrix $X^{\mathsf{T}}X$ will not be invertible. Definition: Perfect multicollinearity is fairly common when working with raw datasets, which frequently contain redundant information.
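The rank argument above is easy to check numerically. The following is a minimal sketch (not from the article) using NumPy: a perfectly collinear column drops the rank of the design matrix below full, leaving $X^{\mathsf{T}}X$ singular and the OLS estimator undefined.

```python
# Demonstrates that perfect collinearity destroys the rank of X, as argued above.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = 3.0 * x1 + 2.0                         # exactly collinear: x2 = 2 + 3*x1
X = np.column_stack([np.ones(n), x1, x2])   # N x (k+1) design matrix

print(np.linalg.matrix_rank(X))             # 2, not 3: less than full column rank
print(np.linalg.cond(X.T @ X))              # astronomically large condition number
# Any attempt to invert X'X here either raises LinAlgError or returns
# numerically meaningless values, so the OLS estimator is not identified.
```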
Once redundancies are identified and removed, however, nearly multicollinear variables often remain due to correlations inherent in the system being studied. In such a case, Equation (1) may be modified to include an error term $v_i$: $\lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \cdots + \lambda_k X_{ki} + v_i = 0$. In this case, there is no exact linear relationship among the variables, but the $X_j$ variables are nearly perfectly multicollinear if the variance of $v_i$ is small for some set of values for the $\lambda$'s. In this case, the matrix $X^{\mathsf{T}}X$ has an inverse, but is ill-conditioned, so that a given computer algorithm may or may not be able to compute an approximate inverse; if it can, the resulting computed inverse may be highly sensitive to slight variations in the data (due to magnified effects of either rounding error or slight variations in the sampled data points) and so may be inaccurate or sample-dependent. Detection: The following are indicators that multicollinearity may be present in a model: Large changes in the estimated regression coefficients occur when a predictor variable is added or deleted. Insignificant regression coefficients for the affected variables occur in the multiple regression, despite a rejection of the joint hypothesis that those coefficients are all zero (using an F-test). If a multivariable regression finds an insignificant coefficient of a particular explanator, yet a simple linear regression of the explained variable on this explanatory variable shows its coefficient to be significantly different from zero, this situation indicates multicollinearity in the multivariable regression. Some authors have suggested a formal detection tolerance or the variance inflation factor (VIF) for multicollinearity: $\mathrm{tolerance} = 1 - R_j^2$ and $\mathrm{VIF} = 1/\mathrm{tolerance}$, where $R_j^2$ is the coefficient of determination of a regression of explanator $j$ on all the other explanators. A tolerance of less than 0.20 or 0.10, a VIF of 5 or 10 and above, or both, indicates a multicollinearity problem. Detection: Farrar–Glauber test: If the variables are found to be orthogonal, there is no multicollinearity; if the variables are not orthogonal, then at least some degree of multicollinearity is present. C. Robert Wichers has argued that the Farrar–Glauber partial correlation test is ineffective in that a given partial correlation may be compatible with different multicollinearity patterns. The Farrar–Glauber test has also been criticized by other researchers. Detection: Condition number test: The standard measure of ill-conditioning in a matrix is the condition index. This determines if the inversion of the matrix is numerically unstable with finite-precision numbers (standard computer floats and doubles), indicating the potential sensitivity of the computed inverse to small changes in the original matrix. The condition number is computed by finding the square root of the maximum eigenvalue divided by the minimum eigenvalue of the design matrix. If the condition number is above 30, the regression may have severe multicollinearity; multicollinearity exists if, in addition, two or more of the variables related to the high condition number have high proportions of variance explained. One advantage of this method is that it also shows which variables are causing the problem. Detection: Perturbing the data: Multicollinearity can be detected by adding random noise to the data, re-running the regression many times, and seeing how much the coefficients change.
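The tolerance/VIF diagnostic described earlier in this section is straightforward to compute by regressing each predictor on the others. Below is a minimal sketch using only NumPy; the function name and example data are ours and purely illustrative (libraries such as statsmodels provide equivalent helpers).

```python
# Tolerance/VIF computation: VIF_j = 1 / (1 - R_j^2), as defined above.
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of the predictor matrix X."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        # Regress predictor j on all other predictors (plus an intercept).
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)   # VIF = 1 / tolerance
    return out

# Example: x2 is nearly collinear with x1, so both VIFs come out very large.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)
print(vif(np.column_stack([x1, x2])))   # both far above the usual 5-10 threshold
```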
Detection: Construction of a correlation matrix among the explanatory variables yields indications as to the likelihood that any given pair of right-hand-side variables is creating multicollinearity problems. Correlation values (off-diagonal elements) of at least 0.4 are sometimes interpreted as indicating a multicollinearity problem. This procedure is, however, highly problematic and cannot be recommended. Intuitively, correlation describes a bivariate relationship, whereas collinearity is a multivariate phenomenon. Consequences: One consequence of a high degree of multicollinearity is that, even if the matrix $X^{\mathsf{T}}X$ is invertible, a computer algorithm may be unsuccessful in obtaining an approximate inverse, and if it does obtain one, the inverse may be numerically inaccurate. But even in the presence of an accurate $X^{\mathsf{T}}X$ matrix, the following consequences arise. Consequences: The usual interpretation of a regression coefficient is that it estimates the effect of a one-unit change in an independent variable, $X_1$, holding the other variables constant. In the presence of multicollinearity, this tends to be less precise than if predictors were uncorrelated with one another. If $X_1$ is highly correlated with another independent variable $X_2$ in the given data set, then $X_1$ and $X_2$ have a particular linear stochastic relationship in the set. In other words, changes in $X_1$ are not independent of changes in $X_2$. This correlation creates an imprecise estimate of the effect of independent changes in $X_1$. In some sense, the collinear variables contain the same information about the dependent variable. If nominally "different" measures quantify the same phenomenon, then they are redundant. Alternatively, if the variables are accorded different names and perhaps employ different numeric measurement scales but are highly correlated with each other, then they suffer from redundancy. Consequences: One of the features of multicollinearity is that the standard errors of the affected coefficients tend to be large. In this case, the test of the hypothesis that the coefficient is equal to zero may lead to a failure to reject a false null hypothesis of no effect of the explanator, a type II error. Consequences: Another issue with multicollinearity is that small changes to the input data can lead to large changes in the model, even resulting in changes in the sign of parameter estimates. A principal danger of such data redundancy is overfitting in regression analysis models. The best regression models are those in which the predictor variables each correlate highly with the dependent variable (outcome) but correlate only minimally with each other. Such a model is often called "low noise" and will be statistically robust (that is, it will predict reliably across numerous samples of variable sets drawn from the same statistical population). Consequences: So long as the underlying specification is correct, multicollinearity does not bias results; it just produces large standard errors in the related independent variables. More importantly, the usual use of regression is to take coefficients from the model and then apply them to other data. Since multicollinearity causes imprecise estimates of coefficient values, the resulting out-of-sample predictions will also be imprecise.
And if the pattern of multicollinearity in the new data differs from that in the data that was fitted, such extrapolation may introduce large errors in the predictions. However, if the underlying specification is anything less than complete and correct, multicollinearity amplifies misspecification biases. Even though not often recognized in methods texts, this is a common problem in the social sciences, where a complete, correct specification of an OLS regression model is rarely known and at least some relevant variables will be unobservable. As a result, the estimated coefficients of correlated independent variables in an OLS regression will be biased by multicollinearity. As the correlation approaches one, the coefficient estimates will misleadingly tend toward infinite magnitudes in opposite directions, even if the variables' true effects are small and of the same sign. Remedies: Avoid the dummy variable trap; including a dummy variable for every category (e.g., summer, autumn, winter, and spring) and including a constant term in the regression together guarantee perfect multicollinearity. Use independent subsets of data for estimation, and then apply those estimates to the whole data set. This may result in a slightly higher variance than that of the subsets, but the expectation of the coefficient values should be the same. Observe how much the coefficient values vary. Leave the model as is, despite multicollinearity. The presence of multicollinearity doesn't affect the efficiency of extrapolating the fitted model to new data, provided that the predictor variables follow the same pattern of multicollinearity in the new data as in the data on which the regression model is based. Drop one of the variables. An explanatory variable may be dropped to produce a model with significant coefficients. However, this loses information. Omission of a relevant variable results in biased coefficient estimates for the remaining explanatory variables that are correlated with the dropped variable. Obtain more data, if possible. This is the preferred solution. More data can produce more precise parameter estimates (with lower standard errors), as seen from the formula in variance inflation factor for the variance of the estimate of a regression coefficient in terms of the sample size and the degree of multicollinearity. Remedies: Mean-center the predictor variables. Generating polynomial terms (i.e., $x_1$, $x_1^2$, $x_1^3$, etc.) or interaction terms (i.e., $x_1 \times x_2$, etc.) can cause some multicollinearity if the variable in question has a limited range (e.g., $[2, 4]$). Mean-centering will eliminate this special kind of multicollinearity. However, in general, this has no effect. It can be useful in overcoming problems arising from rounding and other computational steps if a carefully designed computer program is not used. Remedies: Standardize the independent variables. This may help reduce a false flagging of a condition index above 30. It has also been suggested that by using the Shapley value, a game theory tool, the model could account for the effects of multicollinearity. The Shapley value assigns a value for each predictor and assesses all possible combinations of importance. Use Tikhonov regularization (also known as ridge regression); a brief numerical sketch of this remedy appears at the end of this article. Use principal component regression. Use partial least squares regression.
If the correlated explanators are different lagged values of the same underlying explanator, then a distributed lag technique can be used, imposing a general structure on the relative values of the coefficients to be estimated. Treat highly linearly related variables as a group and study their group effects (see discussion below) instead of their individual effects. At the group level, multicollinearity is not a problem, so no remedies are needed. Multicollinearity and group effects: Strongly correlated predictor variables appear naturally as a group. Their collective impact on the response variable can be measured by group effects. For a group of predictor variables $\{X_1, X_2, \dots, X_q\}$, a group effect is defined as a linear combination of their parameters: $\xi(w) = w_1\beta_1 + w_2\beta_2 + \cdots + w_q\beta_q$, where $w = (w_1, w_2, \dots, w_q)^{\mathsf{T}}$ is a weight vector satisfying $\sum_{j=1}^{q} |w_j| = 1$. It has an interpretation as the expected change in the response variable $Y$ when the variables in the group $X_1, X_2, \dots, X_q$ change by the amounts $w_1, w_2, \dots, w_q$, respectively, at the same time, with variables not in the group held constant. Group effects generalize the individual effects in that (1) if $q = 1$, the group effect reduces to an individual effect, and (2) if $w_i = 1$ and $w_j = 0$ for $j \neq i$, the group effect also reduces to an individual effect. A group effect is said to be meaningful if the underlying simultaneous changes of the $q$ variables represented by the weight vector $(w_1, w_2, \dots, w_q)^{\mathsf{T}}$ are probable. When $\{X_1, X_2, \dots, X_q\}$ is a group of strongly correlated variables, $\beta_1 = \xi(w_1)$ is not meaningful as a group effect, since its underlying simultaneous changes, represented by $w_1 = (1, 0, \dots, 0)^{\mathsf{T}} \in \mathbb{R}^q$, are not probable. This is because, due to their strong correlations, it is unlikely that other variables in the group will remain unchanged when $X_1$ increases by one unit. This observation also applies to the parameters of the other variables in the group. For strongly correlated predictor variables, group effects that are not meaningful, such as the $\beta_i$'s, cannot be accurately estimated by least squares regression. On the other hand, meaningful group effects can be accurately estimated by least squares regression. This shows that strongly correlated predictor variables should be handled as a group, and multicollinearity is not a problem at the group level. For a discussion on how to identify meaningful group effects, see linear regression. Occurrence: Survival analysis. Multicollinearity may represent a serious issue in survival analysis. The problem is that time-varying covariates may change their value over the timeline of the study. A special procedure is recommended to assess the impact of multicollinearity on the results. Occurrence: Interest rates for different terms to maturity. In various situations, it might be hypothesized that multiple interest rates of various terms to maturity all influence some economic decision, such as the amount of money or some other financial asset to hold, or the amount of fixed investment spending to engage in. In this case, including these various interest rates will in general create a substantial multicollinearity problem, because interest rates tend to move together. If each of the interest rates has its separate effect on the dependent variable, it can be extremely difficult to separate out their effects.
Occurrence: Common factors. The bias-amplifying combination of multicollinearity and misspecification may occur when studies attempt to tease out the effects of two independent variables that (1) are linked by a substantive common factor, and (2) contain unobservable but substantive components (not mere error terms) that are orthogonal to the common factor and that affect the dependent variable separately from any effect of the common factor. For example, studies sometimes include the same variable twice in a regression, measured at two different points in time. A time-invariant factor common to both variables causes the multicollinearity, while the unobservable nature of the common factor or the time-specific orthogonal components causes the misspecification. The same structure may apply to other substantive variable pairs with a common factor, such as two types of knowledge, intelligence, conflict, or financial measures (such as the interest rates mentioned above). The two main implications of the presence of such common factors among independent variables of a regression analysis are that, as the correlation of independent variables approaches one due to a sizeable common factor, (1) their coefficient estimates will misleadingly tend toward infinite magnitudes in opposite directions, even if the variables' true effects are small and of the same sign, and (2) the magnitudes of the biased coefficients will be amplified at a similar pace to the standard errors, and therefore t-statistics may remain artificially large. Counter-intuitive type I errors are a likely result, rather than the type II errors typically associated with multicollinearity. To convince readers that this form of multicollinearity is not biasing results, studies should not merely "drop" one of the collinear variables. Rather, they should present separate regression results with each of the collinear variables in isolation, followed by a regression that contains both variables. Consistent coefficient signs and magnitudes across these specifications represent strong evidence that common-factor multicollinearity is not biasing results. Extension: The concept of lateral collinearity expands on the traditional view of multicollinearity, comprising also collinearity between explanatory and criterion (i.e., explained) variables, in the sense that they may be measuring almost the same thing as each other.
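As promised under Remedies above, here is a minimal numerical sketch (ours, not the article's) of the ridge-regression remedy. It also illustrates the group-effect point: under near-collinearity the individual OLS estimates are unstable, while their sum, a meaningful group effect, is estimated precisely.

```python
# Ridge (Tikhonov) regression vs. OLS on two nearly collinear predictors.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)        # nearly collinear with x1
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)    # true coefficients: (1, 1)
X = np.column_stack([x1, x2])

def ridge(X, y, lam):
    """beta = (X'X + lam*I)^(-1) X'y; lam = 0 reduces to plain OLS."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b_ols = ridge(X, y, 0.0)
b_ridge = ridge(X, y, 1.0)
print(b_ols, b_ols.sum())      # individual OLS estimates swing widely; their sum stays near 2
print(b_ridge, b_ridge.sum())  # penalized estimates are both roughly 1
```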
**Mesosome** Mesosome: Mesosomes or chondrioids are folded invaginations in the plasma membrane of bacteria that are produced by the chemical fixation techniques used to prepare samples for electron microscopy. Although several functions were proposed for these structures in the 1960s, they were recognized as artifacts by the late 1970s and are no longer considered to be part of the normal structure of bacterial cells. These extensions are in the form of vesicles, tubules, and lamellae. Initial observations: These structures are invaginations of the plasma membrane observed in gram-positive bacteria that have been chemically fixed to prepare them for electron microscopy. They were first observed in 1953 by George B. Chapman and James Hillier, who referred to them as "peripheral bodies." They were termed "mesosomes" by Fitz-James in 1960. Initially, it was thought that mesosomes might play a role in several cellular processes, such as cell wall formation during cell division, chromosome replication, or as a site for oxidative phosphorylation. The mesosome was thought to increase the surface area of the cell, aiding the cell in cellular respiration. This is analogous to cristae in the mitochondrion in eukaryotic cells, which are finger-like projections that help eukaryotic cells undergo cellular respiration. Mesosomes were also hypothesized to aid in photosynthesis, cell division, DNA replication, and cell compartmentalisation. Disproof of hypothesis: These models were called into question during the late 1970s when data accumulated suggesting that mesosomes are artifacts formed through damage to the membrane during the process of chemical fixation, and do not occur in cells that have not been chemically fixed. By the mid to late 1980s, with advances in cryofixation and freeze substitution methods for electron microscopy, it was generally concluded that mesosomes do not exist in living cells. However, a few researchers continue to argue that the evidence remains inconclusive, and that mesosomes might not be artifacts in all cases. Recently, similar folds in the membrane have been observed in bacteria that have been exposed to some classes of antibiotics and antibacterial peptides (defensins). The appearance of these mesosome-like structures may be the result of these chemicals damaging the plasma membrane and/or cell wall. The case of the proposal and then disproof of the mesosome hypothesis has been discussed from the viewpoint of the philosophy of science as an example of how a scientific idea can be falsified and the hypothesis then rejected, and analyzed to explore how the scientific community carries out this testing process.
**Depside** Depside: A depside is a type of polyphenolic compound composed of two or more monocyclic aromatic units linked by an ester group. Depsides are most often found in lichens, but have also been isolated from higher plants, including species of the Ericaceae, Lamiaceae, Papaveraceae, and Myrtaceae. Certain depsides have antibiotic, anti-HIV, antioxidant, and anti-proliferative activity in vitro. As inhibitors of prostaglandin synthesis and leukotriene B4 biosynthesis, some depsides have in vitro anti-inflammatory activity. A depsidase is a type of enzyme that cuts depside bonds. One such enzyme is tannase. Examples: Gyrophoric acid, found in the lichen Cryptothecia rubrocincta, is a depside. Merochlorophaeic acid, isolated from lichens of the genus Cladonia, is an inhibitor of prostaglandin synthesis. Some depsides are described as anti-HIV.
**Homeoptoton** Homeoptoton: The homeoptoton (from the Greek homoióptoton, "similar in the cases") is a figure of speech consisting of ending the last words of distinct parts of a speech with the same syllable or letter. Example: "In necessariis unitas, in dubiis libertas, in omnibus caritas" ("In necessary things unity, in doubtful things liberty, in all things charity"). "Hominem laudem egentem virtutis, abundantem felicitates" ("Am I to praise a man abounding in good luck, but lacking in virtue?").
**Pelvic inflammatory disease** Pelvic inflammatory disease: Pelvic inflammatory disease, also known as pelvic inflammatory disorder (PID), is an infection of the upper part of the female reproductive system, namely the uterus, fallopian tubes, and ovaries, and the inside of the pelvis. Often, there may be no symptoms. Signs and symptoms, when present, may include lower abdominal pain, vaginal discharge, fever, burning with urination, pain with sex, bleeding after sex, or irregular menstruation. Untreated PID can result in long-term complications including infertility, ectopic pregnancy, chronic pelvic pain, and cancer. The disease is caused by bacteria that spread from the vagina and cervix. While it has been reported that infections by Neisseria gonorrhoeae or Chlamydia trachomatis are present in 75 to 90 percent of cases, the strong association of PID with these infections is often a misconception. In the UK, the NHS reports that infections by Neisseria gonorrhoeae and Chlamydia trachomatis are responsible for only a quarter of PID cases. Often, multiple different bacteria are involved. Without treatment, about 10 percent of those with a chlamydial infection and 40 percent of those with a gonorrhea infection will develop PID. Risk factors are generally similar to those of sexually transmitted infections and include a high number of sexual partners and drug use. Vaginal douching may also increase the risk. The diagnosis is typically based on the presenting signs and symptoms. It is recommended that the disease be considered in all women of childbearing age who have lower abdominal pain. A definitive diagnosis of PID is made by finding pus involving the fallopian tubes during surgery. Ultrasound may also be useful in diagnosis. Efforts to prevent the disease include not having sex, having few sexual partners, and using condoms. Screening women at risk for chlamydial infection followed by treatment decreases the risk of PID. If the diagnosis is suspected, treatment is typically advised. Sexual partners should also be treated. In those with mild or moderate symptoms, a single injection of the antibiotic ceftriaxone along with two weeks of doxycycline and possibly metronidazole by mouth is recommended. For those who do not improve after three days or who have severe disease, intravenous antibiotics should be used. Globally, about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. The number of cases of PID, however, is not clear. It is estimated to affect about 1.5 percent of young women yearly. In the United States, PID is estimated to affect about one million people each year. A type of intrauterine device (IUD) known as the Dalkon shield led to increased rates of PID in the 1970s. Current IUDs are not associated with this problem after the first month. Signs and symptoms: Symptoms in PID range from none to severe. If there are symptoms, fever, cervical motion tenderness, lower abdominal pain, new or different discharge, painful intercourse, uterine tenderness, adnexal tenderness, or irregular menstruation may be noted. Other complications include endometritis, salpingitis, tubo-ovarian abscess, pelvic peritonitis, periappendicitis, and perihepatitis.
Signs and symptoms: Complications. PID can cause scarring inside the reproductive system, which can later cause serious complications, including chronic pelvic pain, infertility, ectopic pregnancy (the leading cause of pregnancy-related deaths in adult females), and other complications of pregnancy. Occasionally, the infection can spread to the peritoneum, causing inflammation and the formation of scar tissue on the external surface of the liver (Fitz-Hugh–Curtis syndrome). Cause: Chlamydia trachomatis and Neisseria gonorrhoeae are common causes of PID. However, PID can also be caused by other untreated infections, like bacterial vaginosis. Data suggest that PID is often polymicrobial. Isolated anaerobes and facultative microorganisms have been obtained from the upper genital tract. N. gonorrhoeae has been isolated from fallopian tubes, and facultative and anaerobic organisms have been recovered from endometrial tissues. The anatomical structure of the internal organs and tissues of the female reproductive tract provides a pathway for pathogens to ascend from the vagina to the pelvic cavity through the infundibulum. The disturbance of the naturally occurring vaginal microbiota associated with bacterial vaginosis increases the risk of PID. N. gonorrhoeae and C. trachomatis are the most common organisms. The least common were infections caused exclusively by anaerobes and facultative organisms. Anaerobes and facultative bacteria were also isolated from 50 percent of the patients from whom Chlamydia and Neisseria were recovered; thus, anaerobes and facultative bacteria were present in the upper genital tract of nearly two-thirds of the PID patients. PCR and serological tests have associated extremely fastidious organisms with endometritis, PID, and tubal factor infertility. Cases of PID have developed in people who have stated they have never had sex. Diagnosis: Upon a pelvic examination, cervical motion, uterine, or adnexal tenderness will be experienced. Mucopurulent cervicitis and/or urethritis may be observed. In severe cases more testing may be required, such as laparoscopy, intra-abdominal bacteria sampling and culturing, or tissue biopsy. Laparoscopy can visualize "violin-string" adhesions, characteristic of Fitz-Hugh–Curtis perihepatitis, and other abscesses that may be present. Other imaging methods, such as ultrasonography, computed tomography (CT), and magnetic resonance imaging (MRI), can aid in diagnosis. Blood tests can also help identify the presence of infection: the erythrocyte sedimentation rate (ESR), the C-reactive protein (CRP) level, and chlamydial and gonococcal DNA probes. Nucleic acid amplification tests (NAATs), direct fluorescein tests (DFA), and enzyme-linked immunosorbent assays (ELISA) are highly sensitive tests that can identify specific pathogens present. Serology testing for antibodies is not as useful, since the presence of the microorganisms in healthy people can confound interpretation of antibody titer levels, although antibody levels can indicate whether an infection is recent or long-term. Definitive criteria include histopathologic evidence of endometritis, thickened, fluid-filled Fallopian tubes, or laparoscopic findings. Gram stain/smear becomes definitive in the identification of rare, atypical, and possibly more serious organisms. Two-thirds of patients with laparoscopic evidence of previous PID were not aware they had PID, but even asymptomatic PID can cause serious harm.
Diagnosis: Laparoscopic identification is helpful in diagnosing tubal disease; a 65 percent to 90 percent positive predictive value exists in patients with presumed PID. Upon gynecologic ultrasound, a potential finding is the tubo-ovarian complex: edematous, dilated pelvic structures, as evidenced by vague margins, but without abscess formation. Diagnosis: Differential diagnosis. A number of other causes may produce similar symptoms, including appendicitis, ectopic pregnancy, hemorrhagic or ruptured ovarian cysts, ovarian torsion, endometriosis, gastroenteritis, peritonitis, and bacterial vaginosis, among others. Pelvic inflammatory disease is more likely to recur when there is a prior history of the infection, recent sexual contact, recent onset of menses, or an IUD (intrauterine device) in place, or if the partner has a sexually transmitted infection. Acute pelvic inflammatory disease is highly unlikely when recent intercourse has not taken place or an IUD is not being used. A sensitive serum pregnancy test is typically obtained to rule out ectopic pregnancy. Culdocentesis will differentiate hemoperitoneum (ruptured ectopic pregnancy or hemorrhagic cyst) from pelvic sepsis (salpingitis, ruptured pelvic abscess, or ruptured appendix). Pelvic and vaginal ultrasounds are helpful in the diagnosis of PID. In the early stages of infection, the ultrasound may appear normal. As the disease progresses, nonspecific findings can include free pelvic fluid, endometrial thickening, and uterine cavity distension by fluid or gas. In some instances the borders of the uterus and ovaries appear indistinct. Enlarged ovaries accompanied by increased numbers of small cysts correlate with PID. Laparoscopy is infrequently used to diagnose pelvic inflammatory disease since it is not readily available. Moreover, it might not detect subtle inflammation of the fallopian tubes, and it fails to detect endometritis. Nevertheless, laparoscopy is conducted if the diagnosis is not certain or if the person has not responded to antibiotic therapy after 48 hours. No single test has adequate sensitivity and specificity to diagnose pelvic inflammatory disease. A large multisite U.S. study found that cervical motion tenderness as a minimum clinical criterion increases the sensitivity of the CDC diagnostic criteria from 83 percent to 95 percent. However, even the modified 2002 CDC criteria do not identify women with subclinical disease. Prevention: Regular testing for sexually transmitted infections is encouraged for prevention. The risk of contracting pelvic inflammatory disease can be reduced by the following: Using barrier methods such as condoms; see human sexual behaviour for other listings. Seeking medical attention if you are experiencing symptoms of PID. Using hormonal combined contraceptive pills, which also help reduce the chances of PID by thickening the cervical mucosal plug and hence preventing the ascent of causative organisms from the lower genital tract. Seeking medical attention after learning that a current or former sex partner has, or might have had, a sexually transmitted infection. Getting an STI history from your current partner and strongly encouraging they be tested and treated before intercourse. Diligence in avoiding vaginal activity, particularly intercourse, after the end of a pregnancy (delivery, miscarriage, or abortion) or certain gynecological procedures, to ensure that the cervix closes. Reducing the number of sexual partners. Sexual monogamy.
Abstinence. Treatment: Treatment is often started without confirmation of infection because of the serious complications that may result from delayed treatment. Treatment depends on the infectious agent and generally involves the use of antibiotic therapy, although there is no clear evidence of which antibiotic regimen is more effective and safe in the management of PID. If there is no improvement within two to three days, the patient is typically advised to seek further medical attention. Hospitalization sometimes becomes necessary if there are other complications. Treating sexual partners for possible STIs can help in treatment and prevention. For women with PID of mild to moderate severity, parenteral and oral therapies appear to be effective. It does not matter to their short- or long-term outcome whether antibiotics are administered to them as inpatients or outpatients. Typical regimens include cefoxitin or cefotetan plus doxycycline, and clindamycin plus gentamicin. An alternative parenteral regimen is ampicillin/sulbactam plus doxycycline. Erythromycin-based medications can also be used. A single study suggests superiority of azithromycin over doxycycline. Another alternative is to use a parenteral regimen with ceftriaxone or cefoxitin plus doxycycline. Clinical experience guides decisions regarding transition from parenteral to oral therapy, which usually can be initiated within 24–48 hours of clinical improvement. Prognosis: Even when the PID infection is cured, effects of the infection may be permanent. This makes early identification essential. Treatment resulting in cure is very important in the prevention of damage to the reproductive system. Formation of scar tissue due to one or more episodes of PID can lead to tubal blockage, increasing the risk of infertility and long-term pelvic/abdominal pain. Certain occurrences, such as a pelvic operation, the period immediately after childbirth (postpartum), miscarriage, or abortion, increase the risk of acquiring another infection leading to PID. Epidemiology: Globally, about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. The number of cases of PID, however, is not clear. It is estimated to affect about 1.5 percent of young women yearly. In the United States, PID is estimated to affect about one million people yearly. Rates are highest among teenagers and first-time mothers. PID causes over 100,000 women to become infertile in the US each year.
**Ground support equipment** Ground support equipment: Ground support equipment (GSE) is the support equipment found at an airport, usually on the apron, the servicing area by the terminal. This equipment is used to service the aircraft between flights. As the name suggests, ground support equipment is there to support the operations of aircraft whilst on the ground. The role of this equipment generally involves ground power operations, aircraft mobility, and cargo/passenger loading operations. Ground support equipment: Many airlines subcontract ground handling to an airport or a handling agent, or even to another airline. Ground handling addresses the many service requirements of a passenger aircraft between the time it arrives at a terminal gate and the time it departs for its next flight. Speed, efficiency, and accuracy are important in ground handling services in order to minimize the turnaround time (the time during which the aircraft remains parked at the gate). Ground support equipment: Small airlines sometimes subcontract maintenance to a larger carrier, as it may be a better alternative to setting up an independent maintenance base. Some airlines may enter into a Maintenance and Ground Support Agreement (MAGSA) with each other, which is used by airlines to assess costs for maintenance and support to aircraft. Most ground services are not directly related to the actual flying of the aircraft, and instead involve other service tasks. Cabin services ensure passenger comfort and safety. They include such tasks as cleaning the passenger cabin and replenishment of on-board consumables or washable items such as soap, pillows, tissues, blankets, and magazines. Security checks are also made to make sure no threats have been left on the aircraft. Ground support equipment: Airport GSE comprises a diverse range of vehicles and equipment necessary to service aircraft during passenger and cargo loading and unloading, maintenance, and other ground-based operations. The wide range of activities associated with aircraft ground operations leads to an equally wide-ranging fleet of GSE. For example, activities undertaken during a typical aircraft gate period include: cargo loading and unloading, passenger loading and unloading, potable water storage, lavatory waste tank drainage, aircraft refueling, engine and fuselage examination and maintenance, and food and beverage catering. Airlines employ specially designed GSE to support all these operations. Moreover, electrical power and conditioned air are generally required throughout gate operational periods for both passenger and crew comfort and safety, and many times these services are also provided by GSE. Non-powered equipment: Dollies. Dollies are used for the transportation of loose baggage, oversized bags, mail bags, loose cargo carton boxes, etc. between the aircraft and the terminal or sorting facility. Dollies for loose baggage are fitted with a brake system which blocks the wheels from moving when the connecting rod is not attached to a tug. Most dollies for loose baggage are completely enclosed except for the sides, which use plastic curtains to protect items from weather. In the US, these dollies are called baggage carts, but in Europe "baggage cart" refers to a passenger baggage trolley. Non-powered equipment: Chocks. Chocks are used to prevent an aircraft from moving while parked at the gate or in a hangar. Chocks are placed in the front ('fore') and back ('aft') of the wheels of the landing gear.
They are made out of hard wood or hard rubber. Corporate safety guidelines in the US almost always specify that chocks must be used in a pair on the same wheel and that they must be placed in physical contact with the wheel. Therefore, "chocks" are typically found in pairs connected by a segment of rope or cable. The word "chock" is also used as a verb, meaning the act of placing chocks in front of and behind the wheel. Non-powered equipment: Aircraft tripod jack. Tripod jacks are used to support a parked aircraft to prevent its tail from drooping or even falling to the ground. When the passengers in the front get off an aircraft, the aircraft becomes tail-heavy and the tail will droop. Use of the jack is optional, and not all aircraft need it. When needed, the jacks are towed to the tail and set up by hand. Once set up, no supervision of the jack is needed until the aircraft is ready to leave. Non-powered equipment: Aircraft service stairs. Aircraft service stairs help maintenance technicians reach the underside of an aircraft. Powered equipment: Refuelers. Aircraft refuelers can be either a self-contained fuel truck or a hydrant truck or cart. Fuel trucks are self-contained, typically containing up to 15,000 US gallons (12,000 imp gal; 57,000 L) of fuel, and have their own pumps, filters, hoses, and other equipment. A hydrant cart or truck hooks into a central pipeline network and provides fuel to the aircraft. There is a significant advantage with hydrant systems when compared to fuel trucks, as fuel trucks must be periodically replenished. Powered equipment: Tugs and tractors. The tugs and tractors at an airport have several purposes and represent the essential part of ground support services. They are used to move all equipment that cannot move itself. This includes bag carts, mobile air conditioning units, air starters, and lavatory carts. Powered equipment: Ground power units. A ground power unit is a vehicle capable of supplying power to aircraft parked on the ground. Ground power units may also be built into the jetway, making it even easier to supply electrical power to aircraft. Many aircraft require 28 V of direct current and 115 V, 400 Hz alternating current. The electric energy is carried from a generator to a connection on the aircraft via a three-phase, four-wire insulated cable capable of handling 261 A (90 kVA). These connectors are standard for all aircraft, as defined in ISO 6858. Powered equipment: A so-called "solid state unit" converts power from AC to DC, along with current separation for aircraft power requirements. Solid state units can be supplied stationary, bridge-mounted, or as a mobile unit. Powered equipment: Buses. Buses at airports are used to move people from the terminal to either an aircraft or another terminal. The specific term for airport buses that drive on the apron only is "apron bus". Apron buses may have a low profile, like the Guangtai or Neoplan aircraft buses, because people disembark directly onto the apron. Some airports use buses that are raised to the level of a passenger terminal and can only be accessed from a door on the second level of the terminal. These odd-looking buses are usually referred to as "people movers" or "mobile lounges". Airport buses are usually normal city buses or specialized terminal buses. Specialized airport buses have a very low floor and wide doors on both sides of the bus for the most efficient passenger movement and flexibility in depot parking.
The biggest producers of airport buses are in China (Weihai, Shenyang, Beijing, Jinhua), Portugal, and Slovenia. Powered equipment: Container loaders. Container loaders, also known as cargo loaders or "K loaders", are used for the loading and unloading of containers and pallets into and out of aircraft. The loader has two platforms which raise and descend independently. The containers or pallets on the loader are moved with the help of built-in rollers or wheels. There are different container and pallet loaders. Powered equipment: Loaders come in 3.5 T, 7 T (standard version, wide-body, universal, high), 14 T, and 30 T capacities. For military transport planes, special container and pallet loaders are used. Some military applications use airborne loaders, which are transportable within the transport plane itself. Container and pallet loaders are mainly produced in France, Germany, Latvia, Spain, Canada, Brazil, Japan, China, and the United States. Powered equipment: Transporters. Transporters are cargo platforms constructed so that, besides loading and unloading containers, they can also transport the cargo. These transporters are not typically used in the United States. Powered equipment: Air start unit. An air start unit (also known as a "start cart") is a device used to start an aircraft's engines when it is not equipped with an on-board APU or the APU is not operational. There are three primary types of these devices that exist currently: a stored air cart, a gas turbine based unit, and a diesel engine driven screw compressor unit. All three devices create a source of low-pressure, high-volume air to start the aircraft engines. Typically one or two hoses are connected to these units, with the largest aircraft engines requiring three. Powered equipment: Non-potable water trucks. Non-potable water trucks are special vehicles that provide water to an aircraft. The water is filtered and protected from the elements while being stored on the vehicle. A pump in the vehicle assists in moving the water from the truck to the aircraft. The water is designated as non-potable. Powered equipment: Lavatory service vehicles. Lavatory service vehicles empty and refill lavatories onboard aircraft. Waste is stored in tanks on the aircraft until these vehicles can empty them and remove the waste. After the tank is emptied, it is refilled with a mixture of water and a disinfecting concentrate, commonly called 'blue juice'. Instead of a self-powered vehicle, some airports have lavatory carts, which are smaller and must be pulled by a tug. Powered equipment: Catering vehicle. The catering vehicle resembles a typical box truck but consists of a rear body, lifting system, platform, and an electro-hydraulic control mechanism. The rear body can be raised and lowered, and the platform can be moved into place in front of the aircraft. Powered equipment: Catering services include the unloading of unused food and drink from the aircraft, and the loading of fresh food and drinks for passengers and crew. The meals are typically delivered on standardized carts which are wheeled into the catering vehicle. Meals are prepared mostly on the ground in order to minimize the amount of preparation (apart from chilling or reheating) required during flight. The vehicle then drives to the airport and is parked in front of the plane. The stabilizers are deployed and the van body is lifted. The platform can be finely controlled to move left-right as well as in-out so that it is aligned with the door correctly.
The body is made of insulated panels and is capable of maintaining temperatures of 0 °C (32 °F) by means of a refrigeration unit. Powered equipment: In-flight food is prepared in a flight kitchen, a fully HACCP-certified facility where food is reheated in sterile and controlled environments. The prepared food is then placed in trolleys and wheeled into the cabin. A predecessor to the catering truck was in use by the U.S. Army Air Forces during World War II. A special, taller type of catering truck has been designed to accommodate the Airbus A380. Powered equipment: Belt loaders. Belt loaders are vehicles with conveyor belts for unloading and loading of baggage and cargo onto aircraft. A belt loader is positioned at the door sill of an aircraft hold (baggage compartment) during operation. Belt loaders are used for narrowbody aircraft and the bulk hold of wide-body aircraft. Stowing baggage without containers is known as bulk loading. Powered equipment: Passenger boarding steps/stairs. Passenger boarding stairs, sometimes referred to as boarding ramps, stair cars, or aircraft steps, provide a mobile means to traverse between the aircraft doors and the ground. Because larger aircraft have door sills 5 to 20 feet (1.5 to 6.1 m) high, stairs facilitate safe boarding and deplaning. Smaller units are generally moved by being towed or pushed, while larger units are self-powered. Most models have adjustable height to accommodate various aircraft. Optional features may include canopies, heating, supplementary lighting, and a red carpet for VIP passengers. Larger aircraft may use one or more jet bridges connected to the terminal building for passenger boarding, but ground-based stairs are used when this is unavailable or impractical. Powered equipment: Pushback tugs and tractors. Pushback tugs are mostly used to push an aircraft away from the gate when it is ready to leave. These tugs are very powerful and, because of their large engines, are sometimes referred to as an engine with wheels. Pushback tugs can also be used to pull aircraft in various situations, such as to a hangar. Different size tugs are required for different size aircraft. Some tugs use a tow-bar as a connection between the tug and the aircraft, while other tugs lift the nose gear off the ground to make it easier to tow or push. Recently there has been a push for towbarless tractors as larger airplanes are designed. Powered equipment: De/anti-icing vehicles. The procedure of de/anti-icing, which protects aircraft from fluids freezing on their surfaces, is done from special vehicles. These vehicles have booms, like a cherry picker, to allow easy access to the entire aircraft. A hose sprays a special mixture that melts existing ice on the aircraft and also prevents some ice from building up while the aircraft waits on the ground. Powered equipment: Aircraft rescue and firefighting. Aircraft rescue and firefighting is a special category of firefighting that involves the response, hazard mitigation, evacuation, and possible rescue of passengers and crew of an aircraft involved in (typically) an airport ground emergency.
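For context on the 90 kVA ground power rating quoted earlier, that figure follows directly from a three-phase supply at 115 V per phase carrying 261 A. The short sketch below reproduces the arithmetic; the helper is illustrative, not from any GSE standard.

```python
# Back-of-the-envelope check of the 400 Hz ground power rating quoted above.
def three_phase_kva(volts_per_phase: float, amps: float, phases: int = 3) -> float:
    """Apparent power in kVA for a balanced multi-phase supply."""
    return phases * volts_per_phase * amps / 1000.0

print(three_phase_kva(115.0, 261.0))   # ~90.0 kVA, matching the article's figure
```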
**Cottage pudding** Cottage pudding: Cottage pudding is a traditional American dessert consisting of a plain, dense cake served with a sweet glaze or custard. The glaze is generally cornstarch-based and flavored with sugar, vanilla, chocolate, butterscotch, or one of a variety of fruit flavors such as lemon or strawberry. History: One typical recipe is from Recipes Tried and True, a collection of recipes compiled in 1894 by the Ladies' Aid Society of the First Presbyterian Church in Marion, Ohio. Cottage pudding can be baked over a fruit base, with a recipe from Fannie Farmer resulting in a dessert similar to a fruit cobbler, as in the recipe for Apple Pan Dowdy in The Fannie Farmer Cookbook. Description: Cottage pudding is a simple single-layer butter cake served with some kind of sauce poured over it. The sauce could be custard, hard sauce, butterscotch, chocolate sauce, whipped cream, or crushed fruits. There are many variations on the simple recipe. The traditional preparation is served as a one-layer cake topped with fruit or custard, but the same batter can also be used for layer cakes, like banana layer cakes, which are filled with a layer of custard and sliced bananas, or as a substitute for sponge cake in traditional layer cake "pies" like the Washington pie or Boston cream pie, and other desserts like peach melba and baked Alaska. It could also be used to make ice cream sandwiches.
**Principal series (spectroscopy)** Principal series (spectroscopy): In atomic emission spectroscopy, the principal series is a series of spectral lines caused when electrons move between p orbitals of an atom and the lowest available s orbital. These lines are usually found in the visible and ultraviolet portions of the electromagnetic spectrum. The principal series has given the letter p to the p atomic orbital and subshell. Principal series (spectroscopy): The lines appear as absorption lines when an electron gains energy, moving from the s subshell to a p subshell. When electrons descend in energy, they produce an emission spectrum. The term "principal" came about because this series of lines is observed in both absorption and emission for alkali metal vapours. Other series of lines appear in the emission spectrum only and not in the absorption spectrum, and were named the sharp series and the diffuse series based on the appearance of their lines.