**Dialogue editor** The dialogue editor is a type of sound editor who assembles, synchronizes, and edits all the dialogue in a film or television production. Usually, they work with the production tracks: the sound recorded on set. If any production tracks are unusable, they can be replaced either by alternate production tracks recorded on set or by ADR (automated dialogue replacement), which is recorded after the shoot: the actors watch their performances in a sound studio and re-record the lines. Large productions may have an ADR editor working under the dialogue editor, but the positions are often combined. In films that require it, the ADR editor or dialogue editor also works with the walla group, which provides the background chatter in scenes with large crowds, such as parties or restaurants. Once the dialogue editor has completed the dialogue track, the re-recording mixer mixes it with the music and sound effects tracks to produce the final soundtrack.
**DART radiative transfer model** DART (Discrete Anisotropic Radiative Transfer) is a 3D radiative transfer model designed for scientific research, in particular remote sensing. Developed at CESBIO since 1992, the DART model was patented in 2003. It is freeware for scientific activities. General description: DART simulates, simultaneously at several wavelengths of the optical domain (e.g., visible and thermal infrared), the radiative budget and remotely sensed images of any Earth scene (natural or urban, with or without relief), for any sun direction, any atmosphere, any view direction and any sensor FTM (modulation transfer function). It was designed to be precise, easy to use and suited to operational use. For that, it simulates the terrestrial landscape and, optionally, the space or airborne radiometric sensor. It simulates any landscape as a 3D matrix of cells that contain turbid material and triangles. Turbid material is used for simulating vegetation (e.g., tree crowns, grass, agricultural crops) and the atmosphere; a minimal numerical illustration of this representation appears at the end of this entry. Triangles are used for simulating the translucent and opaque surfaces that make up topography, urban elements and 3D vegetation. DART can use structural and spectral databases (atmosphere, vegetation, soil, etc.), and it includes a LIDAR simulation mode. General information on radiative transfer: Approaches to simulating radiative transfer differ on two levels: the mathematical method of resolution and the mode of representation of the propagation medium. These two levels are in general dependent. Radiative transfer models are often divided into two categories associated with the two principal modes of representing the landscape: homogeneous or heterogeneous. In models known as homogeneous (Idso and de Wit, 1970; Ross, 1981; Verhoef, 1984; Myneni et al., 1989), the landscape is represented by a horizontally uniform distribution of absorbing and scattering elements (leaves, branches, etc.). In models known as heterogeneous, on the other hand, the landscape is represented by a non-uniform spatial distribution of unspecified landscape elements (North, 1996; Govaerts, 1998). Simulation of the "Earth – Atmosphere" scene: DART simulates radiative transfer in the "Earth – Atmosphere" system for any wavelength in the optical domain (short waves: visible, thermal infrared, etc.). Its approach combines ray tracing and the discrete ordinates method. It works with natural and urban landscapes (forests with different types of trees, buildings, rivers, etc.), with topography and with atmosphere above and within the landscape. It simulates light propagation from solar irradiance (top of atmosphere) and/or thermal emission within the scene. Context: The study of the functioning of continental surfaces requires understanding the various energetic and physiological mechanisms that influence these surfaces. For example, the radiation absorbed in the visible spectral domain is the major energy source for vegetation photosynthesis. Moreover, energy and mass fluxes at the "Earth – Atmosphere" interface affect surface functioning, and consequently climatology. In this context, Earth observation from space (i.e., space remote sensing) is an indispensable tool, due to its unique potential to provide synoptic and continuous surveys of the Earth at different time and space scales.
Context: The difficulty in studying continental surfaces arises from the complexity of the energetic and physiological processes involved, and also from the different time and space scales concerned. It arises as well from the complexity of satellite remote sensing measurements and from their links to the quantities that characterize Earth functioning. These remarks underline the need for models, because only models can couple all the processes concerned within a single scheme. Major references:
- Modelling radiative transfer in heterogeneous 3-D vegetation canopies, 1996, Gastellu-Etchegorry J.P., Demarez V., Pinel V., Zagolski F., Remote Sensing of Environment, 58: 131–156.
- Radiative transfer model for simulating high-resolution satellite images, 2001, Gascon F., Gastellu-Etchegorry J.P. and Lefèvre M.J., IEEE, 39(9): 1922–1926.
- The radiation transfer model intercomparison (RAMI) exercise, 2001, Pinty B., Gascon F., Gastellu-Etchegorry J.P. et al., Journal of Geophysical Research, 106(D11).
- Building a Forward-Mode 3-D Reflectance Model for Topographic Normalization of High-Resolution (1–5 m) Imagery: Validation Phase in a Forested Environment, 2012, Couturier S., Gastellu-Etchegorry J.P., Martin E., Patiño P., IEEE, 51(7): 3910–3921.
- Retrieval of spruce leaf chlorophyll content from airborne image data using continuum removal and radiative transfer, 2013, Malenovský Z., Homolová L., Zurita-Milla R., Lukeš P., Kaplan V., Hanuš J., Gastellu-Etchegorry J.P., Schaepman M., Remote Sensing of Environment, 131: 85–102.
- A new approach of direction discretization and oversampling for 3D anisotropic radiative transfer modeling, 2013, Yin T., Gastellu-Etchegorry J.P., Lauret N., Grau E., Rubio J., Remote Sensing of Environment, 135: 213–223.
- A canopy radiative transfer scheme with explicit FAPAR for the interactive vegetation model ISBA-A-gs: impact on carbon fluxes, 2013, Carrer D., Roujean J.L., Lafont S., Calvet J.C., Boone A., Decharme B., Delire C., Gastellu-Etchegorry J.P., Journal of Geophysical Research – Biogeosciences, 118: 1–16.
- Investigating the Utility of Wavelet Transforms for Inverting a 3-D Radiative Transfer Model Using Hyperspectral Data to Retrieve Forest LAI, 2013, Banskota A., Wynne R., Thomas V., Serbin S., Kayastha N., Gastellu-Etchegorry J.P., Townsend P., Remote Sensing, 5: 2639–2659.
- Directional viewing effects on satellite Land Surface Temperature products over sparse vegetation canopies – a multi-sensor analysis, 2013, Guillevic P.C., Bork-Unkelbach A., Göttsche F.M., Hulley G., Gastellu-Etchegorry J.P., Olesen F.S. and Privette J.L., IEEE Geoscience and Remote Sensing, 10: 1464–1468.
- Radiative transfer modeling in the "Earth – Atmosphere" system with DART model, 2013, Grau E. and Gastellu-Etchegorry J.P., Remote Sensing of Environment, 139: 149–170.
- The 4th radiation transfer model intercomparison (RAMI-IV): proficiency testing of canopy reflectance models with ISO-13528, 2013, Widlowski J.-L., Pinty B., Lopatka M., Atzberger C., Buzica D., Chelle M., Disney M., Gastellu-Etchegorry J.-P., Gerboles M., Gobron N., Grau E., Huang H., Kallel A., Kobayashi H., Lewis P.E., Qin W., Schlerf M., Stuckens J., Xie D., Journal of Geophysical Research, 1–22, doi:10.1002/jgrd.50497.
- 3D Modeling of Imaging Spectrometer Data: 3D forest modeling based on LiDAR and in situ data, 2014, Schneider F.D., Leiterer R., Morsdorf F., Gastellu-Etchegorry J.P., Lauret N., Pfeifer N., Schaepman M.E., Remote Sensing of Environment, 152: 235–250.
- Discrete anisotropic radiative transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes, 2015, Gastellu-Etchegorry J.P., Yin T., Lauret N., Remote Sensing, 7: 1667–1701, doi:10.3390/rs70201667.
- A LUT-Based Inversion of DART Model to Estimate Forest LAI from Hyperspectral Data, 2015, Banskota A., Serbin S.P., Wynne R.H., Thomas V.A., Falkowski M.J., Kayastha N., Gastellu-Etchegorry J.P., Townsend P.A., IEEE Geoscience and Remote Sensing, JSTARS-2014-00702.R1, in press.
- Simulating images of passive sensors with finite field of view by coupling 3-D radiative transfer model and sensor perspective projection, 2015, Yin T., Lauret N. and Gastellu-Etchegorry J.P., Remote Sensing of Environment, accepted.
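To make the voxel-based turbid-medium idea above concrete, here is a minimal Python sketch (an illustration under simplified assumptions, not DART code): a ray is marched through a 3D matrix of cells, each carrying an assumed extinction coefficient, and transmittance is accumulated with the Beer-Lambert law. The scene, coefficients and step size are all hypothetical.

```python
import numpy as np

# Minimal illustration of turbid-medium attenuation through a voxel grid,
# in the spirit of DART's cell-based scene representation (not DART code).
# Each cell holds an extinction coefficient (1/m); transmittance along a
# path follows the Beer-Lambert law T = exp(-sum(k_i * ds)).

def transmittance_along_ray(extinction, origin, direction, step=0.1):
    """March a ray through the voxel grid and accumulate optical depth."""
    extinction = np.asarray(extinction, dtype=float)
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    optical_depth = 0.0
    while np.all(pos >= 0) and np.all(pos < extinction.shape):
        i, j, k = pos.astype(int)
        optical_depth += extinction[i, j, k] * step
        pos = pos + d * step
    return np.exp(-optical_depth)

# A 10 m cube of cells: clear air (k = 0) with a denser "tree crown" block.
scene = np.zeros((10, 10, 10))
scene[3:7, 3:7, 5:9] = 0.5  # turbid vegetation cells
print(transmittance_along_ray(scene, origin=(5, 5, 9.5), direction=(0, 0, -1)))
```

A full model such as DART additionally tracks scattering, thermal emission and the discrete ordinate directions; this sketch captures only the attenuation of a single ray.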
**Dixmier mapping** In mathematics, the Dixmier mapping describes the space Prim(U(g)) of primitive ideals of the universal enveloping algebra U(g) of a finite-dimensional solvable Lie algebra g over an algebraically closed field of characteristic 0 in terms of coadjoint orbits. More precisely, it is a homeomorphism from the space of orbits g*/G of the dual g* of g (with the Zariski topology) under the action of the adjoint group G to Prim(U(g)) (with the Jacobson topology). The Dixmier map is closely related to the orbit method, which relates the irreducible representations of a nilpotent Lie group to its coadjoint orbits. Dixmier (1963) introduced the Dixmier map for nilpotent Lie algebras and then in (Dixmier 1966) extended it to solvable ones. Dixmier (1996, chapter 6) describes the Dixmier mapping in detail. Construction: Suppose that g is a completely solvable Lie algebra and f is an element of the dual g*. A polarization of g at f is a subspace h of maximal dimension, subject to the condition that f vanishes on [h,h], that is also a subalgebra. The Dixmier map I is defined by letting I(f) be the kernel of the twisted induced representation Ind~(f|h,g) for a polarization h.
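Restating the construction above in symbols (a LaTeX transcription of the definitions just given, assuming amsmath):

```latex
% Polarization of g at f, and the Dixmier map, as defined above.
% \widetilde{\operatorname{Ind}} denotes the twisted induced representation.
\[
  f \in \mathfrak{g}^{*}, \qquad
  \mathfrak{h} \subseteq \mathfrak{g} \ \text{a subalgebra with }
  f([\mathfrak{h},\mathfrak{h}]) = 0 \ \text{and} \ \dim\mathfrak{h} \ \text{maximal},
\]
\[
  I(f) = \ker \widetilde{\operatorname{Ind}}\bigl(f|_{\mathfrak{h}},\,\mathfrak{g}\bigr),
  \qquad
  I \colon \mathfrak{g}^{*}/G \xrightarrow{\ \sim\ } \operatorname{Prim}\bigl(U(\mathfrak{g})\bigr)
  \ \text{(a homeomorphism).}
\]
```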
**Out-flow radial turbine** Radial means that the fluid flows in the radial direction, either from inward to outward or from outward to inward. If the fluid flows from inward to outward, the machine is called an outflow radial turbine. In such turbines, the water enters at the centre of the wheel and then flows outwards (i.e., towards the outer periphery of the wheel). Here the guide mechanism is surrounded by the runner. In this turbine, the inner diameter of the runner is the inlet and the outer diameter is the outlet. In a radial outflow turbine the fluid expands in the turbine, that is, the fluid expands in the rotor. It follows the organic Rankine cycle. The flow is radially outwards and is discharged at the periphery of the runner, but the speed is difficult to control. Components of out-flow turbine: The main components of this reaction turbine are: Spiral casing: the entire system or assembly is attached to the spiral casing. Guide vanes: convert pressure energy into momentum (kinetic) energy; these sit inside the spiral casing. The first pressure drop occurs here, and the equivalent rise in kinetic energy is imparted to the runner, which then starts to rotate. Runner: the radial flow acts on the runner vanes, causing the runner to spin. The runner is connected to a shaft that rotates along with it, which can be used for power production. Draft tube: connected to the outlet of the turbine, it helps the water exit the spiral casing. It is needed because the exit pressure in the turbine falls below atmospheric pressure, making it difficult for the fluid to leave the spiral casing. To deliver the fluid to the tail race, a tube of diverging cross-section is used: as the cross-section increases, the velocity decreases and the pressure rises, allowing the fluid to exit the turbine (see the sketch at the end of this entry). Advantages: Some of the advantages of the radial outflow turbine are: Its configuration is very simple; it actually looks very similar to a centrifugal compressor. Radial flow turbines are very robust machines and easy to configure; as a result, they were considered for this application before axial turbines. Like centrifugal compressors, radial flow turbines have a very high energy extraction capability in a single stage, so they are often the preferred form of energy extraction compared even to axial flow turbines, especially in small engines. The radial flow turbine rotor does not use aerofoil sections; as a result, it has a shape very similar to a centrifugal compressor and uses a 3D shape for energy extraction. This 3D shape has become of great interest in modern research.
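The draft-tube pressure recovery described above follows from continuity and Bernoulli's equation for incompressible flow. Below is a minimal sketch with assumed, illustrative numbers (not a design calculation; friction losses are neglected):

```python
# Illustrative draft-tube pressure recovery for an incompressible flow,
# using continuity (A1*v1 = A2*v2) and Bernoulli's equation. The numbers
# below are assumed for illustration, not taken from any real turbine.

RHO = 1000.0  # water density, kg/m^3

def pressure_recovery(p_inlet, v_inlet, area_inlet, area_outlet):
    """Outlet pressure and velocity of a diverging tube, losses neglected."""
    v_outlet = v_inlet * area_inlet / area_outlet                 # continuity
    p_outlet = p_inlet + 0.5 * RHO * (v_inlet**2 - v_outlet**2)   # Bernoulli
    return p_outlet, v_outlet

# Sub-atmospheric turbine exit (90 kPa) recovering toward atmospheric:
p2, v2 = pressure_recovery(p_inlet=90e3, v_inlet=6.0,
                           area_inlet=0.5, area_outlet=1.0)
print(f"outlet velocity {v2:.1f} m/s, outlet pressure {p2/1e3:.1f} kPa")
```

Doubling the cross-section halves the velocity, and the recovered dynamic head raises the static pressure back toward atmospheric, which is exactly why the tube diverges.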
**Prehydrated electrons** Prehydrated electrons: Prehydrated electrons are free electrons that occur in water under irradiation. Usually they form complexes with water molecules and become hydrated electrons. They can also react with the bases of the nucleotides dGMP and dTMP in aqueous solution. This suggests they may also react with the bases of the DNA double helix, ultimately breaking molecular bonds and causing DNA damage. This mechanism is hypothesized to be a cause of radiation damage to DNA.
**Europa Thermal Emission Imaging System** The Europa Thermal Emission Imaging System (E-THEMIS) is an instrument designed to scan the surface of Europa and identify areas of geologically recent resurfacing through the detection of subtle thermal anomalies. This 'heat detector' will provide high-spatial-resolution, multi-spectral thermal imaging of Europa to help detect active sites such as outflows and plumes. E-THEMIS will be launched on board the planned Europa Clipper astrobiology mission to Jupiter's moon Europa in 2025. E-THEMIS uses technology inherited from the THEMIS camera flown on board the 2001 Mars Odyssey orbiter and from the OSIRIS-REx OTES instrument. Overview: E-THEMIS will identify areas of geologically recent resurfacing through the detection of subtle thermal anomalies. It will be fabricated by Arizona State University with hardware contributions from Ball Aerospace Corporation and Raytheon Vision Systems. The Principal Investigator is Philip Christensen at Arizona State University. One of the primary science objectives of E-THEMIS is to determine the regolith particle size, block abundance, and sub-surface layering for landing site assessment and surface process studies. The E-THEMIS investigation is designed to characterize Europa's thermal behavior and identify any thermal anomalies due to recent or ongoing activity by measuring multi-spectral infrared emission, both day and night. To accomplish this, E-THEMIS will obtain thermal infrared images in three spectral bands from 7 to 70 μm at multiple times of day. Thermal anomalies on Europa may be manifestations of subsurface melting due to hot spots, shear heating on faults, and eruptions of liquid water, which can be imaged in the infrared spectrum. Europa's water is suspected to lie 70 km (43 mi) below the moon's ice crust. Objectives: The specific objectives of the E-THEMIS investigation are to: detect and characterize thermal anomalies on the surface that may be indicative of recent resurfacing or active venting; identify active plumes; and determine the regolith particle size, block abundance, and sub-surface layering for landing site assessment and surface process studies. To achieve this, E-THEMIS will image the surface at a resolution of 5 × 22 m from 25 km altitude; it will have a precision of 0.2 K for 90 K surfaces and 0.1 K at 220 K, with an accuracy of 1–2.2 K over 220–90 K; and it will obtain images with up to 360 cross-track pixels and a 10.1 km wide image swath from 100 km. The instrument can identify active vents, if they exist, at the 1–10 meter scale. A radiation-hardened integrated circuit will be incorporated to meet the radiation requirements.
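For a rough sense of the signal E-THEMIS measures, the sketch below evaluates standard blackbody (Planck) spectral radiance at the 90 K and 220 K reference temperatures and a few wavelengths inside the instrument's 7–70 μm range. This is textbook physics for illustration only, not the instrument's actual calibration or processing:

```python
import math

# Planck spectral radiance B(lambda, T) in W / (m^2 sr m); standard physics,
# used here only to illustrate the 7-70 micron band E-THEMIS observes.
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

for temp in (90.0, 220.0):     # reference surface temperatures from the text
    for lam_um in (7.0, 30.0, 70.0):
        lam = lam_um * 1e-6
        print(f"T={temp:5.1f} K, lambda={lam_um:4.1f} um: "
              f"{planck_radiance(lam, temp):.3e} W m^-2 sr^-1 m^-1")
```

At 90 K the emission peak lies far into the long-wavelength end of the band, which is why coverage out to 70 μm matters for Europa's cold background surface.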
**Keyboard protector** A keyboard protector or keyboard cover is a device placed on top of a computer keyboard to reduce its contact with the environment. Keyboards are susceptible to corrosion damage from liquid spills and to build-up of dust and debris, requiring frequent cleaning and maintenance. The protector serves as a barrier against ingress of these materials. Composition: A keyboard protector is usually made from plastic, polyurethane or silicone. It takes the form of a flexible sheet, moulded to fit the key profiles and arrangement of the keyboard. Working principle: A keyboard protector is placed on top of a keyboard, acting as a physical barrier to the environment. When a key is depressed, the protector material deforms with the key, allowing full key travel and tactile feedback. Some models have the sides of the protector extend to the underside of the keyboard, where they are secured with adhesive tape. When dirty, the protector can be removed and cleaned. Advantages and inconveniences: Computer users who are unfamiliar with keyboard protectors may take some time to become accustomed to them, since the keystrokes are dampened and the force needed to depress the keys is different. These factors may also affect typing speed and accuracy. In some applications, such as laptops and luggables, protectors can be a disadvantage: the computer may not close properly with the protector fitted, and the protector can transfer dirt and debris to the display. Compatibility: Since there are several major types of keyboards on the market, some with different layouts, the compatibility of keyboard protectors is also important in order for the keyboard to be fully and well protected. Different keyboards often feature slightly different key spacing or arrangement, leading to ill-fitting protectors.
**Dentures** Dentures (also known as false teeth) are prosthetic devices constructed to replace missing teeth, supported by the surrounding soft and hard tissues of the oral cavity. Conventional dentures are removable (removable partial dentures or complete dentures). However, there are many denture designs, some of which rely on bonding or clasping onto teeth or dental implants (fixed prosthodontics). There are two main categories of dentures, the distinction being whether they fit onto the mandibular arch or the maxillary arch. Medical uses: Dentures can help people via: Mastication: chewing ability is improved by the replacement of edentulous (lacking teeth) areas with denture teeth. Aesthetics: the presence of teeth gives a natural appearance to the face, and wearing a denture to replace missing teeth provides support for the lips and cheeks and corrects the collapsed appearance that results from the loss of teeth. Pronunciation: replacing missing teeth, especially the anterior ones, enables patients to speak better, enunciating sibilants and fricatives in particular more easily. Self-esteem: improved looks and speech boost confidence in patients' ability to interact socially. Complications: Stomatitis. Denture stomatitis is an inflammatory condition of the mucosa under the dentures. It can affect both partial and complete denture wearers, and is most commonly seen on the palatal mucosa. Clinically, it appears as simple localized inflammation (Type I), generalized erythema covering the denture-bearing area (Type II) or inflammatory papillary hyperplasia (Type III). People with denture stomatitis are more likely to have angular cheilitis. Denture stomatitis is caused by a mixed infection of Candida albicans (90%) and a number of bacteria such as Staphylococcus, Streptococcus, Fusobacterium and Bacteroides species. Acrylic resin is more susceptible to fungal colonization, adherence and proliferation. Denture trauma, poor denture hygiene and nocturnal denture wear are local risk factors for denture stomatitis. Systemic risk factors include nutritional deficiencies, immunosuppression, smoking, diabetes, use of steroid inhalers and xerostomia. A person should be investigated for any underlying systemic disease. Treatment involves improving the fit of ill-fitting dentures to eliminate dental trauma. Stress on the importance of good denture hygiene, including cleaning the denture, soaking it in disinfectant solution and not wearing it during sleep at night, is the key to treating all types of denture stomatitis. Topical application and systemic use of antifungal agents can be used to treat cases of denture stomatitis that fail to respond to local conservative measures. Complications: Ulceration. Mouth ulceration is the most common lesion in people with dentures. It can be caused by repetitive minor trauma such as poorly fitting dentures, including over-extension of a denture. Pressure-indicating paste can be used to check the fit of dentures: it allows areas of premature contact to be distinguished from areas of physiologic tissue contact, so that the particular area can be polished with an acrylic bur. Leaching of residual methyl methacrylate monomer from inadequately cured denture acrylic resin can also cause mucosal irritation and hence oral ulceration. Patients are advised to use warm salt-water mouth rinses, and a betamethasone rinse can help ulcers heal. Review of oral ulcerations persisting for more than 3 weeks is recommended.
Tooth loss: People can become entirely edentulous for many reasons, the most prevalent being removal due to dental disease, which typically relates to oral flora control, i.e., periodontal disease and tooth decay. Other reasons include pregnancy, tooth developmental defects caused by severe malnutrition, genetic defects such as dentinogenesis imperfecta, trauma, or drug use. Periodontitis is defined as an inflammatory lesion, mediated by host-pathogen interaction, that results in the loss of connective tissue fiber attachment to the root surface and ultimately to the alveolar bone. It is this loss of connective tissue attachment to the root surface that leads to teeth falling out. The hormones associated with pregnancy increase the risk of gingivitis and vomiting. Hormones released during pregnancy soften the cardia, the muscle ring that keeps food within the stomach. Hydrochloric acid is the acid involved in the gastric reflux of morning sickness. This acid, at a pH of 1.5–3.5, coats the enamel of the teeth, mainly affecting the palatal surfaces of the maxillary teeth. Eventually the enamel is softened and easily wears away. Dental trauma refers to trauma (injury) to the teeth and/or periodontium (gums, periodontal ligament, alveolar bone). Strong force may cause the root of a tooth to dislocate completely from its socket; mild trauma may cause the tooth to chip. Types: Removable partial dentures. Removable partial dentures are for patients who are missing some of their teeth on a particular arch. Fixed partial dentures, also known as "crown and bridge" dentures, are made from crowns fitted on the remaining teeth, which act as abutments for pontics made from materials resembling the missing teeth. Fixed bridges are more expensive than removable appliances but are more stable. Another option in this category is the flexible partial, which takes advantage of innovations in digital technology; flexible partial fabrication involves only non-invasive procedures. Dentures can be difficult to clean and can affect oral hygiene. Types: Complete dentures. Complete dentures are worn by patients who are missing all of the teeth in a single arch, i.e. the maxillary (upper) or mandibular (lower) arch, or, more commonly, in both arches. The full denture is removable and is held in place by suction. Full dentures are painful at first and can take some time to get used to. There are two types of full dentures: immediate dentures and conventional dentures. Types: Copy dentures. Copy dentures can be made for partial but mainly for complete denture patients. These dentures require fewer visits to make and are usually made for older patients, patients who would have difficulty adjusting to new dentures, patients who would like a spare pair, or those who like the aesthetics of their existing dentures. The process involves taking an impression of the patient's current denture and remaking it. Types: Materials. Dentures are mainly made from acrylic due to the ease of material manipulation and its likeness to intra-oral tissues, i.e. gums. Most dentures are composed of heat-cured polymethyl methacrylate acrylic and rubber-reinforced polymethyl methacrylate. Coloring agents and synthetic fibers are added to obtain the tissue-like shade and to mimic the small capillaries of the oral mucosa, respectively. However, dentures made from acrylic can be fragile and fracture easily if the patient has trouble adapting neuromuscular control.
This can be overcome by reinforcing the denture base with cobalt chromium (Co-Cr). Such bases are often thinner (and therefore more comfortable) and stronger (to prevent repeated fractures). History: As early as the 7th century BC, Etruscans in northern Italy made partial dentures out of human or other animal teeth fastened together with gold bands. The Romans had likely borrowed this technique by the 5th century BC. Wooden full dentures were invented in Japan around the early 16th century. Softened beeswax was inserted into the patient's mouth to create an impression, which was then filled with harder beeswax. Wooden dentures were then meticulously carved based on that model. The earliest of these dentures were entirely wooden, but later versions used natural human teeth or sculpted pagodite, ivory, or animal horn for the teeth. These dentures were built with a broad base, exploiting the principles of adhesion to stay in place. This was an advanced technique for the era; it would not be replicated in the West until the late 18th century. Wooden dentures continued to be used in Japan until the opening of Japan to the West in the 19th century. In 1728, Pierre Fauchard described the construction of dentures using a metal frame and teeth sculpted from animal bone. The first porcelain dentures were made around 1770 by Alexis Duchâteau. In 1791, the first British patent was granted to Nicholas Dubois De Chemant, previously assistant to Duchâteau, for "De Chemant's Specification": ... a composition for the purpose of making of artificial teeth either single double or in rows or in complete sets, and also springs for fastening or affixing the same in a more easy and effectual manner than any hitherto discovered which said teeth may be made of any shade or colour, which they will retain for any length of time and will consequently more perfectly resemble the natural teeth. He began selling his wares in 1792, with most of his porcelain paste supplied by Wedgwood. Peter de la Roche of 17th-century London is believed to be one of the first 'operators for the teeth', men who advertised themselves as specialists in dental work. They were often professional goldsmiths, ivory turners or students of barber-surgeons. In 1820, Samuel Stockton, a goldsmith by trade, began manufacturing high-quality porcelain dentures mounted on 18-carat gold plates. Later dentures, from the 1850s onwards, were made of Vulcanite, a form of hardened rubber into which porcelain teeth were set. In the 20th century, acrylic resin and other plastics were used. In Britain, sequential Adult Dental Health Surveys revealed that in 1968, 79% of those aged 65–74 had no natural teeth; by 1998, this proportion had fallen to 36%. History: George Washington. George Washington (1732–1799) suffered from problems with his teeth throughout his life, and historians have tracked his experiences in great detail. He lost his first adult tooth when he was twenty-two and had only one left by the time he became president. He had several sets of false teeth made, four of them by a dentist named John Greenwood. None of the sets, contrary to popular belief, were made from wood or contained any wood. The set made when he became president was carved from hippopotamus and elephant ivory, held together with gold springs. Prior to these, he had a set made with real human teeth, likely ones he purchased from "several unnamed Negroes, presumably Mount Vernon slaves" in 1784.
Manufacturing: Modern dentures are most often fabricated in a commercial dental laboratory or by a denturist using tissue-shaded powders of polymethyl methacrylate (PMMA) acrylic. These acrylics are available as heat-cured or cold-cured types. Commercially produced acrylic teeth are widely available in hundreds of shapes and tooth colors. The process of fabricating a denture usually begins with an initial dental impression of the maxillary and mandibular ridges. Standard impression materials are used during the process. The initial impression is used to create a simple stone model that represents the maxillary and mandibular arches of the patient's mouth. This is not a detailed impression at this stage. Once the initial impression is taken, the stone model is used to create a custom impression tray, which is then used to take a second, much more detailed and accurate impression of the patient's maxillary and mandibular ridges. Polyvinyl siloxane impression material is one of several very accurate impression materials used when the final impression is taken of the maxillary and mandibular ridges. A wax rim is fabricated to assist the dentist or denturist in establishing the vertical dimension of occlusion. After this, a bite registration is created to marry the position of one arch to the other. Once the relative position of each arch to the other is known, the wax rim can be used as a base to place the selected denture teeth in the correct position. This arrangement of teeth is tested in the mouth so that adjustments can be made to the occlusion. After the occlusion has been verified by the dentist or denturist and the patient, and all phonetic requirements are met, the denture is processed. Processing a denture is usually performed using a lost-wax technique, whereby the form of the final denture, including the acrylic denture teeth, is invested in stone. This investment is then heated, and when the wax melts it is removed through a spruing channel. The remaining cavity is then filled by forced injection or by pouring in the uncured denture acrylic, which is either a heat-cured or a cold-cured type. During the processing period, heat-cured acrylics, also called permanent denture acrylics, go through a process called polymerization, which causes the acrylic materials to bond very tightly and takes several hours to complete. After the curing period, the stone investment is removed, the acrylic is polished, and the denture is complete. The end result is a denture that looks much more natural, is much stronger and more durable than a cold-cured temporary denture, resists stains and odors, and will last for many years. Cold-cured or cold-pour dentures, also known as temporary dentures, do not look as natural, are less durable, tend to be highly porous and are only used as a temporary expedient until a more permanent solution is found. These types of dentures tend to cost much less due to their quick production time (usually minutes) and their composition of low-cost materials. It is not suggested that a patient wear a cold-cured denture for a long period of time, as they are prone to cracks and can break rather easily. Prosthodontic principles: Support. Support is the principle that describes how well the underlying mucosa (oral tissues, including gums) keeps the denture from moving vertically towards the arch in question during chewing, and thus from being excessively depressed and moving deeper into the arch.
For the mandibular arch, this function is provided primarily by the buccal shelf, a region extending laterally from the back or posterior ridges, and by the pear-shaped pad (the most posterior area of keratinized gingiva, formed by the scaling down of the retromolar papilla after the extraction of the last molar tooth). Secondary support for the complete mandibular denture is provided by the alveolar ridge crest. The maxillary arch receives primary support from the horizontal hard palate and the posterior alveolar ridge crest. The larger the denture flanges (the part of the denture that extends into the vestibule), the better the stability (another parameter used to assess the fit of a complete denture). Long flanges beyond the functional depth of the sulcus are a common error in denture construction, often (but not always) leading to movement in function and to ulcerations (denture sore spots). Prosthodontic principles: Stability. Stability is the principle that describes how well the denture base is prevented from moving in the horizontal plane, and thus from sliding side to side or front to back. The more the denture base (pink material) is in smooth and continuous contact with the edentulous ridge (the hill upon which the teeth used to reside, now only residual alveolar bone with overlying mucosa), the better the stability. Of course, the higher and broader the ridge, the better the stability will be, but this is usually a result of patient anatomy, barring surgical intervention (bone grafts, etc.). Prosthodontic principles: Retention. Retention is the principle that describes how well the denture is prevented from moving vertically in the direction opposite to that of insertion. The better the topographical mimicry of the intaglio (interior) surface of the denture base to the surface of the underlying mucosa, the better the retention will be (in removable partial dentures, the clasps are a major provider of retention), as surface tension, suction and friction will aid in keeping the denture base from breaking intimate contact with the mucosal surface. It is important to note that the most critical element in the retentive design of a maxillary complete denture is a complete and total border seal (complete peripheral seal) in order to achieve 'suction'. The border seal is composed of the edges of the anterior and lateral aspects and the posterior palatal seal. The posterior palatal seal design is accomplished by covering the entire hard palate and extending not beyond the soft palate, ending 1–2 mm from the vibrating line. Prosthodontists use a scale called the Kapur index to quantify denture stability and retention. Implant technology can vastly improve the patient's denture-wearing experience by increasing stability and preventing bone from wearing away; implants can also aid retention. Instead of merely placing the implants to serve as a blocking mechanism against the denture pushing on the alveolar bone, small retentive appliances can be attached to the implants that can then snap into a modified denture base to allow for tremendously increased retention. Available options include a metal "Hader bar" or precision ball attachments. Prosthodontic principles: Fit, maintenance and relining. Generally speaking, partial dentures tend to be held in place by the presence of the remaining natural teeth, while complete dentures tend to rely on muscular coordination and limited suction to stay in place.
The maxilla very commonly has more favorable denture-bearing anatomy, as the ridge tends to be well formed and there is a larger area on the palate for suction to retain the denture. Conversely, the mandible tends to make lower dentures much less retentive, due to the displacing presence of the tongue and the higher rate of resorption, frequently leading to significantly resorbed lower ridges. Disto-lingual regions tend to offer retention even in highly resorbed mandibles, and extension of the flange into these regions tends to produce a more retentive lower denture. An implant-supported lower denture is another option for improving retention. Dentures that fit well during the first few years after creation will not necessarily fit well for the rest of the wearer's lifetime. This is because the bone and mucosa of the mouth are living tissues, which are dynamic over decades. Bone remodeling never stops in living bone. Edentulous jaw ridges tend to resorb progressively over the years, especially the alveolar ridge of the lower jaw. Mucosa reacts to being chronically rubbed by the dentures. Poorly fitting dentures hasten both of these processes compared to the rates with well-fitting dentures, and may also lead to the development of conditions such as epulis fissuratum. In addition, the occlusion (the chewing surfaces of the teeth) tends to wear away over time, which reduces chewing efficacy and decreases the vertical dimension of occlusion (the "openness" of the jaws and mouth). Costs: In countries where denturism is legally performed by denturists, it is typically a denturist association that publishes the fee guide; in countries where it is performed by dentists, it is typically a dental association. Some governments also provide additional coverage for the purchase of dentures by seniors. Typically, only standard low-cost dentures are covered by insurance, and because many individuals would prefer a premium cosmetic denture or a premium precision denture, they rely on consumer dental patient financing options. Costs: A low-cost denture starts at about $300–$500 per denture, or $600–$1,000 for a complete set of upper and lower dentures. These tend to be cold-cured dentures, which are considered temporary because of the lower-quality materials and streamlined processing methods used in their manufacture. In many cases, there is no opportunity to try them on for fit before they are finished. They also tend to look artificial, not as natural as higher-quality, higher-priced dentures. Costs: A mid-priced (and better-quality) heat-cured denture typically costs $500–$1,500 per denture, or $1,000–$3,000 for a complete set. The teeth look much more natural and are much longer-lasting than cold-cured or temporary dentures. In many cases, they may be tried out before they are finished to ensure that all the teeth occlude (meet) properly and look esthetically pleasing. These usually come with a 90-day to two-year warranty and, in some cases, a money-back guarantee if the customer is not satisfied. In some cases, the cost of subsequent adjustments to the dentures is included. Costs: Premium heat-cured dentures can cost $2,000–$4,000 per denture, or $4,000–$8,000 or more for a set.
Dentures in this price range are usually completely customized and personalized, use high-end materials to simulate the lifelike look of gums and teeth as closely as possible, last a long time, and are warrantied against chipping and cracking for 5–10 years or longer. Often the price includes several follow-up visits to fine-tune the fit. Costs: In the United Kingdom, as of 13 March 2018, an NHS patient must pay £244.30 for a denture to be made. This is a flat rate, and no additional charges may be made regarding the material used or the appointments needed. Privately, the cost can run upwards of £300. Care: Daily cleaning of dentures is recommended. Plaque and tartar can build up on false teeth, just as they do on natural teeth. Cleaning can be done using chemical or mechanical denture cleaners. Dentures should not be worn continuously, but rather taken out of the mouth during sleep. This gives the tissues a chance to recover; wearing dentures at night has been likened to sleeping in shoes. The main risk is the development of fungal infections, especially denture-related stomatitis. Dentures should also be removed while smoking, as the heat can damage the denture acrylic, and overheated acrylic can burn the soft tissues. Care: Deposits such as microbial plaque, calculus and food debris can accumulate on dentures, which may lead to issues such as angular stomatitis, denture stomatitis, undesirable odors and tastes, and staining. These deposits can also quicken the degradation of some denture materials. Due to the presence of these deposits, there is an increased risk of the denture wearer and other people around them developing a systemic disease caused by organisms such as methicillin-resistant Staphylococcus aureus (MRSA), but research shows that denture cleaners are effective against MRSA. Therefore, denture cleaning is imperative for the overall health of denture wearers as well as for the health of the people they come into contact with. Care: Brushing. After receiving dentures, the patient should brush them often with soap, water and a soft nylon toothbrush with a small head, as this will enable the brush to reach all areas of the denture surface. The bristles must be soft in order to conform easily to the contours of the dentures for adequate cleaning; stiff bristles will not conform well and are likely to cause abrasion of the denture acrylic resin. If a patient finds it difficult to use a toothbrush, e.g. a patient with arthritis, a brush with easy-grip modifications may be used. Disclosing solutions can be used at home to make less obvious plaque deposits visible and so ensure thorough cleaning of plaque. Food dyes can be utilized as a disclosing solution when used correctly. Instead of brushing their dentures with soap and water, patients can use pastes designed for dentures or conventional toothpaste; however, the American Dental Association advises against using toothpaste, as it can be too harsh for cleaning dentures. Care: Immersion. Patients should combine the brushing of their dentures with soaking them in an immersion cleaner from time to time, as this combined cleaning strategy has been shown to control denture plaque. Due to microbial invasion, the lack of use of immersion cleaners and inadequate denture plaque control will cause rapid deterioration of the soft linings of the denture. Cleansers and methods: Liquid cleansers that dentures can be immersed in include bleaches, e.g. sodium hypochlorite; effervescent solutions, e.g.
alkaline peroxides, perborates and persulfates; and acid cleansers. Care: Sodium hypochlorite cleansers. Sodium hypochlorite (NaOCl) cleansers have a disinfectant action and remove non-viable organisms and other deposits from the surface, but they are weak at eliminating calculus from the denture surface. Immersing dentures in a hypochlorite solution for more than 6 hours occasionally will eliminate plaque and staining. Furthermore, as microbial invasion is prevented, deterioration of the soft lining material does not occur. Corrosion of cobalt chromium has occurred when hypochlorite cleansers have been used, and they may also result in fading of the acrylic and silicone lining, but the softness and elasticity of the linings are not greatly changed. Care: Effervescent cleansers. Effervescent cleansers are the most popular immersion cleansers and include alkaline peroxides, perborates and persulfates. Their cleansing action occurs through the formation of small bubbles, which displace loosely attached material from the surface of the denture. They are not very effective as cleansers and have a restricted ability to eliminate microbial plaque. However, they are safe to use and do not cause deterioration of the acrylic resin or the metals used in denture construction, although they can cause rapid damage to some short-term soft linings. Discoloration of the acrylic resin towards white sometimes occurs; however, this can be due to the use of very hot water with cleaning agents, against manufacturer instructions. Care: Acid cleansers. Sulfamic acid is a type of acid cleanser that is used to prevent the formation of calculus on dentures. Sulfamic acid has very good compatibility with many denture materials, including the metals used in denture construction. 5% hydrochloric acid is another type of acid cleanser: the denture is immersed in the hydrochloric cleanser to soften the calculus so that it can be brushed away. The acid can damage clothes if accidentally spilt, and can cause corrosion of cobalt-chromium or stainless steel if the denture is immersed in the acid often and for long periods of time. Care: Other denture cleaning methods. Other denture cleaning methods include enzymes, ultrasonic cleansers and microwave exposure. A Cochrane review found weak evidence to support soaking dentures in effervescent or enzymatic solutions, and while the most effective method for eliminating plaque is not clear, the review showed that brushing with paste eliminates microbial plaque better than inactive methods. There is a need for studies reporting the cost of materials and the negative effects that may be associated with their use, as these factors could affect the acceptability of such materials to patients, which will in turn affect their effectiveness in a daily setting over the long term. Further studies comparing the different methods of cleaning dentures are also needed. Care: Broken dentures. Dentures sometimes break, often during eating or when dropped during cleaning. A repair or replacement should be sought as soon as possible to restore function and aesthetics; the continued wearing of a broken denture results in unnecessary intra-oral tissue irritation, which may increase the risk of infection and other pathologies, including malignancies.
**Bohr–Mollerup theorem** In mathematical analysis, the Bohr–Mollerup theorem is a theorem proved by the Danish mathematicians Harald Bohr and Johannes Mollerup. The theorem characterizes the gamma function, defined for $x > 0$ by
$$\Gamma(x)=\int_0^\infty t^{x-1}e^{-t}\,dt,$$
as the only positive function $f$, with domain the interval $x > 0$, that simultaneously has the following three properties: $f(1) = 1$; $f(x+1) = x f(x)$ for $x > 0$; and $f$ is logarithmically convex. A treatment of this theorem is in Artin's book The Gamma Function, which has been reprinted by the AMS in a collection of Artin's writings. The theorem was first published in a textbook on complex analysis, as Bohr and Mollerup thought it had already been proved. The theorem admits a far-reaching generalization to a wide variety of functions (that have convexity or concavity properties of any order). Statement: Bohr–Mollerup theorem. $\Gamma(x)$ is the only function that satisfies $f(x+1) = x f(x)$ with $\log f(x)$ convex and also with $f(1) = 1$. Proof: Let $\Gamma(x)$ be a function with the assumed properties established above: $\Gamma(x+1) = x\Gamma(x)$, $\log \Gamma(x)$ is convex, and $\Gamma(1) = 1$. From $\Gamma(x+1) = x\Gamma(x)$ we can establish
$$\Gamma(x+n)=(x+n-1)(x+n-2)(x+n-3)\cdots(x+1)\,x\,\Gamma(x).$$
The stipulation that $\Gamma(1) = 1$ forces the property $\Gamma(x+1) = x\Gamma(x)$ to duplicate the factorials of the integers, so we can conclude that $\Gamma(n) = (n-1)!$ if $n \in \mathbb{N}$ and if $\Gamma(x)$ exists at all. Because of our relation for $\Gamma(x+n)$, if we can fully understand $\Gamma(x)$ for $0 < x \le 1$, then we understand $\Gamma(x)$ for all values of $x$. Proof: For $x_1 < x_2$, the slope $S(x_1, x_2)$ of the line segment connecting the points $(x_1, \log \Gamma(x_1))$ and $(x_2, \log \Gamma(x_2))$ is monotonically increasing in each argument, since we have stipulated that $\log \Gamma(x)$ is convex. Thus, we know that
$$S(n-1,\,n)\;\le\; S(n+x,\,n)\;\le\; S(n+1,\,n) \qquad \text{for all } x\in(0,1].$$
After simplifying using the various properties of the logarithm, and then exponentiating (which preserves the inequalities, since the exponential function is monotonically increasing), we obtain
$$(n-1)^x(n-1)!\;\le\;\Gamma(n+x)\;\le\;n^x(n-1)!.$$
From previous work this expands to
$$(n-1)^x(n-1)!\;\le\;(x+n-1)(x+n-2)\cdots(x+1)\,x\,\Gamma(x)\;\le\;n^x(n-1)!,$$
and so
$$\frac{(n-1)^x(n-1)!}{(x+n-1)(x+n-2)\cdots(x+1)x}\;\le\;\Gamma(x)\;\le\;\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right).$$
Proof: The last line is a strong statement. In particular, it is true for all values of $n$: $\Gamma(x)$ is not greater than the right-hand side for any choice of $n$, and likewise $\Gamma(x)$ is not less than the left-hand side for any other choice of $n$. Each single inequality stands alone and may be interpreted as an independent statement. Because of this fact, we are free to choose different values of $n$ for the RHS and the LHS. In particular, if we keep $n$ for the RHS and choose $n+1$ for the LHS, we get
$$\frac{((n+1)-1)^x\,((n+1)-1)!}{(x+(n+1)-1)(x+(n+1)-2)\cdots(x+1)x}\;\le\;\Gamma(x)\;\le\;\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right),$$
that is,
$$\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\;\le\;\Gamma(x)\;\le\;\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right).$$
It is evident from this last line that a function is being sandwiched between two expressions, a common analysis technique used to prove various things such as the existence of a limit, or convergence. Let $n \to \infty$:
$$\lim_{n\to\infty}\frac{n+x}{n}=1,$$
so the left side of the last inequality is driven to equal the right side in the limit, and
$$\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}$$
is sandwiched in between. This can only mean that
$$\lim_{n\to\infty}\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}=\Gamma(x).$$
Proof: In the context of this proof this means that this limit has the three specified properties belonging to $\Gamma(x)$. Also, the proof provides a specific expression for $\Gamma(x)$. And the final critical part of the proof is to remember that the limit of a sequence is unique. This means that for any choice of $0 < x \le 1$ only one possible number $\Gamma(x)$ can exist. Therefore, there is no other function with all the properties assigned to $\Gamma(x)$. Proof: The remaining loose end is the question of proving that $\Gamma(x)$ makes sense for all $x$ where the limit exists. The problem is that our first double inequality
$$S(n-1,\,n)\;\le\;S(n+x,\,n)\;\le\;S(n+1,\,n)$$
was constructed with the constraint $0 < x \le 1$. If, say, $x > 1$, then the fact that $S$ is monotonically increasing would make $S(n+1, n) < S(n+x, n)$, contradicting the inequality upon which the entire proof is constructed. However,
$$\lim_{n\to\infty}\,x\cdot\left(\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\right)\frac{n}{n+x+1}=\Gamma(x+1),$$
so that $\Gamma(x)=\left(\tfrac{1}{x}\right)\Gamma(x+1)$, which demonstrates how to bootstrap $\Gamma(x)$ to all values of $x$ where the limit is defined.
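The limit derived in the proof can be checked numerically; here is a small sketch using only Python's standard library (working in log space to avoid overflow of $n!$ for large $n$):

```python
import math

# Numerically check the limit derived in the proof:
#   Gamma(x) = lim_{n->inf} n^x * n! / ((x+n)(x+n-1)...(x+1)x)
def gamma_by_limit(x, n=2000):
    # log of n^x * n!, via lgamma(n+1) = log(n!)
    log_value = x * math.log(n) + math.lgamma(n + 1)
    for k in range(n + 1):          # divide by (x+n)(x+n-1)...(x+1)x
        log_value -= math.log(x + k)
    return math.exp(log_value)

for x in (0.5, 1.0, 2.5):
    print(f"x={x}: limit formula {gamma_by_limit(x):.6f}, "
          f"math.gamma {math.gamma(x):.6f}")
```

The agreement improves as n grows, consistent with the sandwich argument: both bounds converge to the same value.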
**Virtual graffiti** Virtual graffiti consists of virtual or digital media applied to public locations, landmarks or surfaces. Virtual graffiti applications use augmented reality and ubiquitous computing to anchor virtual graffiti to physical landmarks or objects in the real world; the virtual content can then be viewed through digital devices. Virtual graffiti is aimed at delivering messaging and social multimedia content to mobile applications and devices based on the identity, location, and community of the user. Mediums: This overall effort focuses on creating new mobile experiences based on merging virtual reality, telepresence, and global positioning systems. These experiences evolve over time based on the needs and capabilities of the users. Location-based messaging: This medium concerns a mobile user receiving or sending a message based on their location; the content quality is directly related to the accuracy of the user's location. Examples include: searching for the nearest restaurant and receiving a set of messages and coupons from nearby restaurant proprietors; receiving a message on a mobile device that there is traffic ahead, with a suggested alternative route; coworkers receiving a notification that an upcoming meeting has been moved to a new location. Mediums: Location-anchored virtual reality: This involves anchoring a virtual reality experience at a physical location, so that the experiences in the virtual world can only be had at a specific real location. Use cases include: a virtual command post set up at the scene of an incident, involving the sharing of information in the virtual world but accessible only to those at the scene; and a set of blogs and media files left at famous outdoor sculptures, where groups of friends can contribute, copy, and share files only while they are viewing the sculpture. Background: The phrase "virtual graffiti" has existed for a long time and has been applied to various applications over the years. Originally, it referred to posting messages on electronic bulletin board systems. From there, it has developed in academia into contextual messaging applications. Contextual messaging: Contextual messaging refers to leaving some type of context-specific annotation, e.g., a virtual Post-it note on a computer monitor, a time-sensitive message attached to a wall, or location-based graffiti on a physical object. Researchers at the University of Salford experimented with a CAVE system in which a user could mark up a scene using six-degrees-of-freedom sensors. This is obviously not suitable for immediate use or mass-market applications, but it serves as a starting point from which other work could be derived. Background: During a research fellowship at the University of Georgia in 2003, Kit Hughes developed a system in which users with WiFi-enabled mobile devices could mark up buildings in downtown Athens, Georgia, with their own virtual graffiti via a process known as tagging. In this system, the buildings are selected on a map, and the graffiti is stored in a database that can be accessed from other mobile devices and the project's website. Background: A location-based messaging system for leaving virtual post-it notes on physical objects was developed at the National University of Singapore. The system uses mobile devices as AR interfaces to view virtual messages associated with fiducial markers on physical objects.
Background: In a project from Lancaster University, mobile phones are used as digital mediums, using RFID tags to identify markable objects. The RFID tags can hold the identities of the last five people to leave graffiti. The graffiti itself is stored on a server. When another user comes within range of an RFID-tagged object, the associated graffiti is downloaded onto their mobile device.
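The systems surveyed above share one pattern: graffiti is stored server-side, keyed to a physical anchor, and delivered when a device comes within range. A minimal sketch of that pattern follows (all names, coordinates and the 50 m radius are illustrative; this is not the code of any cited project):

```python
import math

# Minimal location-anchored message store, illustrating the pattern shared
# by the systems above: graffiti lives on a server keyed to coordinates and
# is delivered when a device comes within range. Names are illustrative.

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

class GraffitiServer:
    def __init__(self):
        self.tags = []  # list of (lat, lon, author, message)

    def leave(self, lat, lon, author, message):
        self.tags.append((lat, lon, author, message))

    def nearby(self, lat, lon, radius_m=50.0):
        """Messages anchored within radius_m of the device's position."""
        return [(a, m) for (tlat, tlon, a, m) in self.tags
                if haversine_m(lat, lon, tlat, tlon) <= radius_m]

server = GraffitiServer()
server.leave(33.9573, -83.3761, "kit", "tagged in downtown Athens")
print(server.nearby(33.9574, -83.3760))  # within ~15 m: message delivered
```

Real systems replace the coordinate key with whatever anchor their sensing supports: a map-selected building, a fiducial marker, or an RFID tag identity.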
**Google Takeout** Google Takeout, also known as Download Your Data, is a project by the Google Data Liberation Front that allows users of Google products, such as YouTube and Gmail, to export their data to a downloadable archive file. Usage: Users can select different services from the list of options provided (as documented as of 24 March 2016) and may export all of the available services or choose a subset. Takeout then processes the request and puts all the files into a zip file. Takeout optionally sends an email notification when the export is completed, at which point the user can download the archive from the downloads section of the website. The zip file contains a separate folder for each service that was selected for export (see the sketch below). History: Google Takeout was created by the Google Data Liberation Front on June 28, 2011 to allow users to export their data from most of Google's services. Since its creation, Google has added several more services to Takeout due to popular demand from users. Takeout started with exports of only Google Buzz, Google Contacts, Google Profile, Google Streams, and Picasa Albums. The next month, on July 15, 2011, Google added the export of Google +1's to the list after it was frequently requested by Takeout's users. Later in 2011, on September 6, Google added Google Voice to the export service. A big milestone was the addition of YouTube video exports to Takeout the next year, on September 26, 2012. Google took another big step with the addition of Blogger posts and Google+ pages on February 17, 2013. On December 5, 2013, Google Takeout was further expanded to include Gmail and Google Calendar data. Criticism: Earlier criticisms were raised that Google Takeout did not allow users to export from some core Google services, most notably Google Search history and Google Wallet details. Google has since expanded the service to include search history and Wallet details (September 2016), and has also added Google Hangouts to the Takeout service. Google does not delete user data automatically after exporting; a separate service is provided to perform deletion. Google Takeout has also been criticized for keeping the exported data available for too short a time: many users with large files cannot easily download everything before the batch expires, in essence "trapping" users with large data and slow bandwidth in Google's services.
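Since the export is a plain zip archive with a folder per exported service, its contents can be inspected with standard tools. Here is a minimal sketch in Python (the file name is a placeholder, and the exact folder layout may differ between archives):

```python
import zipfile

# Inspect a downloaded Takeout archive: the export is a zip containing a
# separate folder for each exported service. "takeout.zip" is a placeholder
# name; some archives nest the service folders under a single top-level
# directory, so we print the first two path levels to cover both layouts.
with zipfile.ZipFile("takeout.zip") as archive:
    folders = sorted({"/".join(name.split("/")[:2])
                      for name in archive.namelist() if "/" in name})
    for folder in folders:
        print(folder)
```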
**Pyrazinoic acid** Pyrazinoic acid: Pyrazinoic acid is a pyrazinamide metabolite. Possible role in tuberculosis treatment: Pyrazinamide is currently used as a treatment for tuberculosis. Mycobacterium tuberculosis converts pyrazinamide into pyrazinoic acid. The use of pyrazinoic acid has been investigated as a possible treatment for pyrazinamide-resistant strains of Mycobacterium tuberculosis. Esters of pyrazinoic acid have been shown to have lower MICs, making them more potent antibiotics; moreover, they cross the bacterial membrane more easily due to their higher lipophilicity. Derivatives/uses: It is a part of the bortezomib molecule.
**Aequationes Mathematicae** Aequationes Mathematicae: Aequationes Mathematicae is a mathematical journal. It is primarily devoted to functional equations, but also publishes papers in dynamical systems, combinatorics, and geometry. As well as publishing regular journal submissions on these topics, it also regularly reports on international symposia on functional equations and produces bibliographies on the subject. János Aczél founded the journal in 1968 at the University of Waterloo, in part because of the long publication delays of up to four years in other journals at the time of its founding. Aequationes Mathematicae: It is currently published by Springer Science+Business Media, with Zsolt Páles of the University of Debrecen as its editor in chief. János Aczél remains its honorary editor in chief. As of 2016 it was listed as a second-quartile mathematics journal by SCImago Journal Rank.
**Corncob** Corncob: A corncob, also called corn cob, cob of corn, or corn on the cob, is the central core of an ear of corn (also known as maize). It is the part of the ear on which the kernels grow. The ear is also considered a "cob" or "pole", but it is not fully a "pole" until the ear is shucked, or removed from the plant material around the ear. (The green husk, by contrast, is the leafy covering outside the ear, and is distinct from the cob.) Corncob: Young ears, also called baby corn, can be consumed raw, but as the plant matures the cob becomes tougher until only the kernels are truly edible. However, during several instances of famine, especially in European countries throughout history, people have been known to eat the corncobs, especially the foamy middle part. The whole cob, or just the middle, used to be ground and mixed with whatever type of flour was at hand (usually wheat or corn flour). It served as a sort of filler, extending the quantity of the original flour, and as such it was used even in the production of bread. Consisting mainly of cellulose, hemicellulose and lignin, the corncob is not toxic to humans and can be digested, but the outside is rough and practically inedible in its original form, while the foamy part, when mature, is completely bland and has a texture similar to foam plastic, which most people find unappealing. Corncob: Corncobs are a particularly good source of heat when burned, so through the centuries they were traditionally used for roasting meat on the spit, for barbecuing, and for heating bread ovens. They were especially appreciated for their long and steady-burning embers, which were also used for ember irons. When harvesting corn, the corncob may be collected as part of the ear (necessary for corn on the cob), or instead may be left as part of the corn stover in the field. Uses: Corncobs find use in the following applications:
- Industrial source of the chemical furfural
- Fiber in fodder for ruminant livestock (despite low nutritional value)
- Bedding for animals — cobs absorb moisture and provide a compliant surface
- Ground up and washed (then re-dried) to make cat litter
- A mild abrasive for cleaning building surfaces, when coarsely ground
- Raw material for the bowls of corncob pipes
- As a biofuel
- Charcoal production
- Environmentally friendly rodenticide (powdered corncob)
- Soil conditioner and water retainer in horticulture
- Absorbent media for safe disposal of liquid and solid effluents
- Diluent/carrier/filler material in animal health products, agro-chemicals, veterinary formulations, vitamin premixes, pharmaceuticals, etc.
- Source of xylose, a sweetener
- Anal hygiene
- The body of a doll
**Media circus** Media circus: Media circus is a colloquial metaphor, or idiom, describing a news event for which the level of media coverage—measured by such factors as the number of reporters at the scene and the amount of material broadcast or published—is perceived to be excessive or out of proportion to the event being covered. Coverage that is sensationalistic can add to the perception that the event is the subject of a media circus. The term is meant to critique the coverage of the event by comparing it to the spectacle and pageantry of a circus. Usage of the term in this sense became common in the 1970s. It can also be called a media feeding frenzy or just media frenzy, especially when the media coverage is itself covered as news. History: Although the idea is older, the term media circus began to appear around the mid-1970s. An early example is from the 1976 book by author Lynn Haney, in which she writes about a romance in which the athlete Chris Evert was involved: "Their courtship, after all, had been a 'media circus.'" A few years later The Washington Post used the term for a similar courtship, reporting, "Princess Grace herself is still traumatized by the memory of her own media-circus wedding to Prince Rainier in 1956." A media circus is the central plot device in the 1951 movie Ace in the Hole, about a self-interested reporter who, covering a mine disaster, allows a man to die trapped underground. It cynically examines the relationship between the media and the news they report. The movie was subsequently re-issued as The Big Carnival, with "carnival" referring to what we now call a "circus". History: In the film, the disaster attracts campers, including a real circus. The movie was based on the real-life case of Floyd Collins, who in 1925 was trapped in a Kentucky cave, drawing so much media attention that it became the third-largest media event between the two World Wars (the other two being Lindbergh's solo flight and the Lindbergh kidnapping). Examples: Events described as a media circus include:
Aruba: The disappearance, and assumed death, of Natalee Holloway (2005–).
Australia: The Azaria Chamberlain disappearance of a 2-month-old baby in outback Australia (1980); the Beaconsfield Mine collapse (2006); the 2009 violence against Indians in Australia controversy; drug smuggler Schapelle Corby (2014).
Brazil: The murder of Isabella Nardoni (2008).
Canada: Conrad Black, business magnate of newspapers, convicted of fraud, embezzlement and corporate destruction, imprisoned in Florida (2007); Toronto mayor Rob Ford's life, including his usage of drugs and alcohol and involvement with organized crime (2013); Paul Bernardo and Karla Homolka, serial killers (1987–1990); Omar Khadr, detained as a minor at Guantanamo Bay in 2001, transferred to Canada in 2012, released in May 2015; Luka Rocco Magnotta, a gay Quebec porn star charged with murdering his Chinese roommate in 2012, who then fled to Germany, where he was arrested;
the fatal traffic accident of the Neville-Lake children (2015).
Chile: The 2010 Copiapó mining accident (2010).
Colombia: The death of Luis Andres Colmenares (2010).
Indonesia: The Miss Universe Indonesia 2023 sexual abuse scandal (2023).
Italy: Amanda Knox (convicted of the murder of Meredith Kercher; her conviction was subsequently overturned) (2015).
Malaysia: The disappearance of Malaysia Airlines Flight 370 (2014).
Peru: Joran van der Sloot and the death of Stephany Flores Ramirez (2010).
Philippines: Pepsi Number Fever 349 incident (1992); Murders of Eileen Sarmenta and Allan Gomez (1993); Manila Film Festival scandal (1994); PhilSports Stadium stampede (2006); Manila hostage crisis (2010); Vhong Navarro assault incident (2014); Good conduct time allowance controversy (2019); PNP Ninja cops controversy (2019); Barretto sisters controversy (2019); ABS-CBN franchise renewal controversy (2020); PhilHealth corruption scandal (2020); Marichu Mauro maltreatment case (2020–21); Philippine Government–Sinovac Biotech purchase controversy (2020–2021); 2020 Tarlac shooting (2020–2021); Death of Christine Dacera (2021–2022); Tim Yap birthday party controversy (2021); 2021 PNP–PDEA shootout (2021); 2021 PDP–Laban dispute (2021–2022); Kylie Padilla–Aljur Abrenica breakup (2022); Pharmally pandemic deals scandal (2021–2022); LJ Reyes–Paolo Contis breakup (2021); Ernest John Obiena–PATAFA dispute (2021–2022); Ana Jalandoni–Kit Thompson altercation incident (2022); Moira Dela Torre–Jason Hernandez breakup (2022); Ateneo de Manila University shooting (2022); Binibining Pilipinas 2022 coronation event controversy (2022); Disappearance of Jovelyn Galleno (2022); DepEd laptop deals scandal (2022); 2022 Philippine sugar crisis (2022); Grand Lotto 6/55 controversial draw (2022); Killing of Percy Lapid (2022); Camp Crame hostage-taking incident (2022); Juanito Remulla III drug case (2022–2023); Benilde Blazers–JRU Heavy Bombers altercation incident (2022); 2023 Philippine airspace closure (2023); Oriental Mindoro oil spill (2023); Pamplona massacre (2023); Miss Universe Philippines 2023 coronation event controversy (2023); TVJ–TAPE Inc. dispute (2023); Love the Philippines controversy (2023); Awra Briguela altercation incident (2023); PAGCOR logo rebranding issue (2023); Maharlika Investment Fund issues (2023); Vice Ganda–Ion Perez indecent acts issue (2023); Wally Bayola profanity incident (2023); Removal of Representative Arnolfo Teves Jr. (2023).
Romania: The disappearance and alleged murder of Elodia Ghinescu, especially on OTV, which aired a couple hundred episodes on the matter (2007).
South Africa: Oscar Pistorius on trial for the death of his girlfriend Reeva Steenkamp (2013–14).
South Korea: The suicide and funeral of K-pop star and Shinee member Kim Jong-hyun (2017).
Thailand: The Tham Luang cave rescue (2018).
Ukraine: Mykola Melnychenko's involvement in the Cassette Scandal (1999–2000).
United Kingdom: The Charlie Gard case (2017); the life, career, death and funeral of Jade Goody (2009); the disappearance of Madeleine McCann (2008); the "Megxit" feud between Meghan Markle/Prince Harry and the royal family (2020–2023).
United States: The 1924 murder trials of Beulah Annan, Belva Gaertner, and several other female suspects in Chicago, adapted into the Chicago franchise by a newspaper reporter; the 1932 kidnapping of toddler Charles Lindbergh Jr., which journalist H. L. Mencken described as "the biggest story since the Resurrection";
the early 1930s string of public enemies, ranging from mafia leaders such as Al Capone to smaller-time gangsters, the most enduringly famous being Bonnie and Clyde; the 1954 trial of Sam Sheppard, in which the U.S. Supreme Court held that "massive, pervasive, and prejudicial publicity" had prevented him from receiving a fair trial; the 1965 littering trial against singer Arlo Guthrie and Richard Robbins, deliberately turned into a local media circus by arresting officer William Obanhein to deter others from repeating their actions; coverage of the investigation and trial of the 1969 murders of Sharon Tate and four others by the Manson family; the execution of Gary Gilmore in Utah (1977), of which David Gelman, Peter Greenberg, et al. wrote in Newsweek on January 31, 1977: "Brooklyn born photographer and film producer Lawrence Schiller managed to make himself the sole journalist to witness the execution of Gary Gilmore in Utah....In the Gilmore affair, he was like a ringmaster in what became a media circus, with sophisticated newsmen scrambling for what he had to offer"; the rescue of baby Jessica McClure (1987); the Central Park jogger case (1989); the O. J. Simpson murder case (1994–1995); the Blizzard of '96 (1996), "...this storm ...so hyped by the media in the same way that the O. J. Simpson murder case became hyped as the 'Trial of the century'"; the Clinton–Lewinsky scandal (1998); the Elián González custody conflict (2000); the Summer of the Shark (2001); the trial of Scott Peterson for the murder of his wife Laci Peterson (2004): "The circus became even more raucous when Peterson went on trial for murder in 2004"; the trial of Martha Stewart (2004): "The stone-faced Stewart never broke stride as she cut a path through the media circus"; the disappearance of Stacy Peterson (2007); the alleged teenage "pregnancy pact" at Gloucester High School (2008); the Casey Anthony murder trial (2011): "Once again, it was relentless media coverage that in large part fed the fascination with the case", Ford observed; the killing of Trayvon Martin (2012): "Here is where the media circus takes a decidedly ugly turn", Eric Deggans wrote; the murder of Travis Alexander (2013), in which Jodi Arias was found guilty of first-degree murder; the killing of Cecil the lion (2015); opposition to and protests against the nomination of Brett Kavanaugh and the proceedings of his confirmation hearings (2018); the murder of George Floyd and the protests that followed (2020); the killing of Gabby Petito (2021); Will Smith slapping Chris Rock (2022); the Johnny Depp v. Amber Heard trial (2022); the Miss USA 2022 coronation event controversy (2022); the 2022 University of Idaho killings (2022); the various scandals surrounding George Santos (2022–2023); the trial of Alex Murdaugh (2023); the indictment of Donald Trump (2023).
**Cobalt poisoning** Cobalt poisoning: Cobalt poisoning is intoxication caused by excessive levels of cobalt in the body. Cobalt is an essential element for health in animals in minute amounts as a component of vitamin B12. A deficiency of cobalt, which is very rare, is also potentially lethal, leading to pernicious anemia. Exposure to cobalt metal dust is most common in the fabrication of tungsten carbide. Another source is wear and tear of certain metal-on-metal hip prostheses. Per the International Agency for Research on Cancer (IARC), cobalt metal with tungsten carbide is "probably carcinogenic to humans" (IARC Group 2A Agent), whereas cobalt metal without tungsten carbide is "possibly carcinogenic to humans" (IARC Group 2B Agent). Cobalt salts: The LD50 value for soluble cobalt salts has been estimated to be between 150 and 500 mg/kg; for a 100 kg person, this corresponds to a lethal dose of roughly 15 to 50 grams. Soluble cobalt(II) salts are "possibly carcinogenic to humans" (IARC Group 2B Agents). Beer drinker's cardiomyopathy: In August 1965, a person presented to a hospital in Quebec City with symptoms suggestive of alcoholic cardiomyopathy. Over the next eight months, fifty more cases with similar findings appeared in the same area, twenty of them fatal. It was noted that all were heavy drinkers who mostly drank beer and preferred the Dow brand; thirty of them drank more than 6 litres (12 pints) of beer per day. Epidemiological studies found that Dow had been adding cobalt sulfate to the beer for foam stability since July 1965 and that the concentration added in the Quebec City brewery was ten times that of the same beer brewed in Montreal, where there were no reported cases. A 1972 paper noted that several dozen cases were also identified over a similar time period in Omaha, Nebraska; Minneapolis, Minnesota; and Belgium. Cobalt in the environment: Plants, animals, and humans can all be affected by high cobalt concentrations in the environment. For plants, the uptake and distribution of cobalt is entirely species-specific. In some species of plants, the overaccumulation of cobalt can lead to an iron deficiency. This in turn leads to poor growth of the plant as well as leaf loss, which decreases the amount of oxygen produced by the plant during photosynthesis. Eventually the deficiency can lead to plant death. One such example was seen in an experiment involving the effects of increased cobalt concentration on tomato plants. As the dosage of cobalt in the soil surrounding the plants increased, so too did the rate of necrosis of the leaves of the tomato plant. Over time this led to an inability of the plant to produce fruit, and eventually the plant died.
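As an arithmetic check on the whole-body dose given under "Cobalt salts" above (a restatement of the quoted range, not an additional source):

```latex
150\ \mathrm{mg/kg} \times 100\ \mathrm{kg} = 15\,000\ \mathrm{mg} = 15\ \mathrm{g},
\qquad
500\ \mathrm{mg/kg} \times 100\ \mathrm{kg} = 50\,000\ \mathrm{mg} = 50\ \mathrm{g}.
```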
**Germinal epithelium (male)** Germinal epithelium (male): The germinal epithelium is the epithelial layer of the seminiferous tubules of the testicles. It is also known as the wall of the seminiferous tubules. The cells in the epithelium are connected via tight junctions. Germinal epithelium (male): There are two types of cells in the germinal epithelium. The large Sertoli cells, which do not divide, function as supportive cells to the developing sperm. The second type comprises the cells belonging to the spermatogenic cell lineage, which develop to eventually become sperm cells (spermatozoa). Typically, the spermatogenic cells form four to eight layers in the germinal epithelium.
**Fighter brand** Fighter brand: In marketing, a fighter brand (sometimes called a fighting brand or a flanker brand) is a lower-priced offering launched by a company to take on, and ideally take out, specific competitors that are attempting to under-price them. Unlike traditional brands that are designed with target consumers in mind, fighter brands are created specifically to combat a competitor that is threatening to take market share away from a company's main brand.A related concept is the flanker brand, a term often found in the mobile phone industry. In the case of flankers, or multibranding, the products may be identical to the main offerings and the new brand is used to expand product placement. Concept: Use of a fighter brand is one of the oldest strategies in branding, tracing its history to cigarette marketing in the 19th century. The strategy is most often used in difficult economic times. As customers trade down to lower-priced offers because of economic constraints, many managers at mid-tier and premium brands are faced with a classic strategic conundrum: should they tackle the threat head-on and reduce existing prices, knowing it will reduce profits and potentially commodify the brand, or should they maintain prices, hope for better times to return, and in the meantime lose customers who might never come back. With both alternatives often equally unpalatable, many companies choose the third option of launching a fighter brand. Concept: When the strategy works, a fighter brand not only defeats a low-priced competitor, but also opens up a new market. The Celeron microprocessor is a case study of a successful fighter brand. Despite the success of its Pentium processors, Intel faced a major threat from less costly processors that were better placed to serve the emerging market for low-cost personal computers, such as the AMD K6. Intel wanted to protect the brand equity and price premium of its Pentium chips, but also wanted to avoid AMD gaining a foothold into the lower end of the market. This led to Intel's creation of the Celeron brand, a cheaper, less powerful version of Intel's Pentium chips, as a fighter brand to serve this market. Examples: Australia: Qantas launching Jetstar to take on Virgin Blue. Telstra, Optus and Vodafone Australia respectively launching Belong, Gomo AU and Felix Mobile to take on MVNOs such as Aldi Mobile AU, Amaysim, Boost Mobile and TPG Mobile Australia. Canada: Rogers Communications and Telus Mobility respectively launching Chatr and Koodo Mobile to take on Mobilicity and Wind Mobile (now Freedom Mobile) Shaw Communications and Vidéotron respectively launching Shaw Mobile and Fizz Mobile to take on Bell Mobility's Lucky Mobile. France: Orange S.A., SFR and Bouygues Telecom launching Sosh, Red by SFR and B&You to take on Free Mobile. Ireland: Eir launching GoMo Ireland to take on An Post Mobile (formerly Postfone, MVNO using Vodafone Ireland towers), 48 Mobile and Tesco Mobile Ireland (MVNOs using Three Ireland towers). Italy: TIM, Vodafone Italia and WindTre respectively launching Kena Mobile, ho. mobile and Very Mobile to take on Iliad Italia. Germany: Merck Sharp & Dohme launching Zocor MSD to take on generic brands and protect Zocor in Europe. Deutsche Telekom, Vodafone Germany and O2 Deutschland respectively launching Congstar, Otelo and Blau Mobilfunk initially as budget-focused counterparts of their regular mobile phone offerings, and later repositioned somewhat to take on 1&1 Drillisch. Philippines: Globe Telecom launching GOMO! 
to take on DITO Telecommunity with comparable data offerings. Russia: Philip Morris launching Bond Street to take on local brands and protect Marlboro. Singapore: Singapore Airlines launching Scoot as an eventual successor to Tigerair to take on AirAsia and Jetstar Asia. Singtel and Starhub launching GOMO and Giga to take on Circles.Life and TPG Telecom. Sweden: Telia, Tele2, Telenor and 3 respectively launching Halebop, Comviq, Vimla and Hallon as lower-cost, prepaid and no binding contract counterparts of their regular mobile phone offerings. Switzerland: Swisscom, Sunrise Communications and Salt Mobile SA respectively launching Wingo, Yallo/Yallo Swype and GoMo Switzerland as hybrid-prepaid and lower-cost counterparts of their regular mobile phone offerings. UK: British Airways launching Go to take on Ryanair and EasyJet. Tesco creating Jack's to counter growing competition from low-cost German supermarket chains Aldi and LIDL. Vodafone UK launching VOXI to take on O2 UK's Giffgaff, Three UK's SMARTY and EE Limited's BT Mobile and Plusnet. USA: General Motors launching Saturn to take on Japanese imports into America. Whole Foods launching 365 to take on lower-priced grocery stores such as Trader Joe's and Sprouts.
**Elastin-like polypeptides** Elastin-like polypeptides: Elastin-like polypeptides (ELPs) are synthetic biopolymers with potential applications in the fields of cancer therapy, tissue scaffolding, metal recovery, and protein purification. For cancer therapy, the addition of functional groups to ELPs can enable them to conjugate with cytotoxic drugs. Also, ELPs may be able to function as polymeric scaffolds, which promote tissue regeneration. This capacity of ELPs has been studied particularly in the context of bone growth. ELPs can also be engineered to recognize specific proteins in solution. The ability of ELPs to undergo morphological changes at certain temperatures enables specific proteins that are bound to the ELPs to be separated out from the rest of the solution via experimental techniques such as centrifugation. The general structure of polymeric ELPs is (VPGXG)n, where the monomeric unit is Val-Pro-Gly-X-Gly, and the "X" denotes a variable guest amino acid that affects the general properties of the ELP, such as the transition temperature (Tt). Specifically, the hydrophilicity or hydrophobicity and the presence or absence of a charge on the guest residue play a great role in determining the Tt; the solvation of the guest residue can also affect the Tt. The "n" denotes the number of monomeric units that make up the polymer. In general, these polymers are linear below the Tt but aggregate into spherical clumps above the Tt. Structure: Although engineered and modified in a laboratory setting, ELPs share structural characteristics with intrinsically disordered proteins (IDPs) naturally found in the body, such as tropoelastin, from which ELPs were given their name. The repeat sequences found in the biopolymer give each ELP a distinct structure and influence the lower critical solution temperature (LCST), also commonly referred to as the Tt. It is at this temperature that the ELPs move from a linear, relatively disordered state to a more densely aggregated, partially ordered state. Although given as a single temperature, Tt, the ELP phase change process generally begins and ends within a temperature range of approximately 2 °C. Also, the Tt is altered by the fusion of other proteins to the free ELPs. Structure: Tropoelastin Tropoelastin is a 72 kDa protein that comes together via cross-links to form elastin in the extracellular matrix of the cell. The cross-link formation process is mediated by lysyl oxidase. One of the major reasons that elastin can withstand high levels of stress in the body without experiencing any physical deformation is that the underlying tropoelastin contains domains that are highly hydrophobic. These hydrophobic domains, consisting overwhelmingly of alanine, proline, glycine, and valine, tend towards instability and disorder, ensuring that the elastin does not lock into any specific conformation. Thus, ELPs consisting of the Val-Pro-Gly-X-Gly monomeric units, which resemble the repetitive tropoelastin hydrophobic domains, are highly disordered below their Tt. Even above their Tt, in their aggregated state, ELPs are only partially ordered, because proline and glycine are present in high amounts in the ELP. Glycine, lacking a bulky side chain, keeps the biopolymer flexible, and proline prevents the formation of stable hydrogen bonds in the ELP backbone.
It is important to note, however, that certain segments of the ELP may form transient type II β-turns, but these turns are not long-lasting and, when the NMR chemical shifts are compared, do not resemble true β-sheets. Structure: Amyloid formation Although ELPs generally form reversible spherical aggregates due to their proline and glycine content, there is a possibility that, under certain conditions such as exceedingly high temperatures, ELPs will form amyloids, or irreversible aggregates of insoluble protein. It is also believed that changes in the ELP backbone leading to a reduction in the proline and glycine content may lead to ELPs with a greater propensity for the amyloid state. As amyloids are implicated in the progression of Alzheimer's disease as well as in prion-based diseases, such as Creutzfeldt-Jakob disease (CJD), modeling of ELP amyloid formation may be useful from a biomedical standpoint. Structure: Tt dependence on ELP structure The transition temperature of an ELP depends to a certain extent on the identity of the "X" residue found at the fourth position of the pentapeptide monomeric unit. Residues that are highly hydrophobic, such as leucine and phenylalanine, tend to decrease the transition temperature. On the other hand, residues that are highly hydrophilic, such as serine and glutamine, tend to increase the transition temperature. The presence of a potentially charged residue at the "X" position determines how the ELP responds to varying pH, with glutamic acid and aspartic acid raising the Tt at pH values at which the residues are deprotonated, and lysine and arginine raising the Tt at pH values at which the residues are protonated; the pH must be compatible with the charged states of these amino acids in order to raise the Tt. Also, higher-molecular-mass ELPs and higher concentrations of ELPs in solution make it much easier for the polymer to form aggregates, in effect lowering the experimental Tt. Structure: Tt theoretical model Oftentimes, ELPs are not used in isolation, but are rather fused with other proteins to become functionally active. The structure of these other proteins will have a certain effect on the transition temperature. It is important to be able to predict the transition temperature that these fusion proteins will have relative to the free ELPs, as this temperature will determine the fused protein's applicability and phase transition. A theoretical model is available that relates the change in Tt of the fused protein, ΔTt,fusion, to the ratios of the individual amino acids found in the fused protein. The model involves calculating a surface index (SI) for the protein:

SI = Σ_XAA (ASA_XAA / ASA_p) × Ttc

where ASA_p is the solvent-accessible surface area of the entire fused protein, ASA_XAA is the solvent-accessible surface area of guest residue XAA, and Ttc is the characteristic transition temperature unique to that amino acid. Summing the contribution of each potential guest residue (XAA) yields an SI value that is directly proportional to ΔTt,fusion. It was found that the amino acids that are charged under a physiological pH of 7.4 have the greatest impact on the overall SI of a fused protein. This is because they are more accessible to water-containing solvents, which increases ASA_XAA, and because they have high Ttc values. Hence, prediction of the transition temperature of a fused protein depends strongly on the presence of these charged residues. Synthesis: Because ELPs are protein-based biopolymers, synthesis involves manipulation of genes to continually express the monomeric repeat unit. Various techniques have been employed in the production of ELPs of various sizes, including unidirectional ligation or concatemerization, overlap extension polymerase chain reaction (OEPCR), and recursive directional ligation (RDL). Also, ELPs can be experimentally modified through conjugation with other polymers or through the SpyTag/SpyCatcher reaction, allowing for the synthesis of copolymers with unique morphology. Synthesis: Concatemerization The concatemerization process generates libraries of concatemers for the ELPs. Concatemers are oligomeric products of ligating a single gene with itself. This results in repeat segments of a gene, all of which can be transcribed and translated immediately to produce the ELP of interest. A major problem with this synthetic route is that the number of gene repeat segments ligated together to form the concatemer cannot be controlled, leading to ELPs of different sizes, from which the ELP of a desired size must be isolated. Synthesis: Overlap extension polymerase chain reaction (OEPCR) The OEPCR method uses a small amount of the gene encoding the monomeric ELP unit and amplifies this segment to a great extent. The initial segment added to the reaction functions as a template, from which identical gene segments can be synthesized. The process results in the production of double-stranded DNA encoding the ELP of interest. One major bottleneck of this method is the potentially low fidelity of the Taq polymerase used, which can lead to replication in which the wrong nucleotides are incorporated into the growing DNA strand. Synthesis: Recursive directional ligation (RDL) In recursive directional ligation, the gene encoding the monomer is inserted into a plasmid with restriction sites that are recognized by at least two endonucleases. The endonucleases cut the plasmid, releasing the gene of interest. This single gene is then inserted into a recipient plasmid vector already containing one copy of the ELP monomer gene, via digestion of the recipient plasmid with the same restriction endonucleases used on the donor plasmid and a subsequent ligation step. From this process, a sequence of two ELP monomer genes is retrieved. RDL allows for the controlled synthesis of ELP gene oligomers, in which single gene segments are sequentially added. However, the restriction endonucleases used are limited to those that do not cut within the ELP monomer gene itself, as this would lead to the loss of crucial nucleotides and a potential frameshift in the coding sequence. Synthesis: Synthetic conjugation ELPs can be synthetically conjugated to poly(ethylene glycol) by adding a cyclooctyne functional motif to the poly(ethylene glycol) and an azide group to the ELP. Through a cycloaddition reaction involving both functional groups and manipulation of the solvent pH, diblock and star polymers can be formed.
Rather than forming the canonical spherical clumps above the transition temperature, this conjugated ELP forms a micelle with amphiphilic properties, in which the polar head groups face outward and the hydrophobic domains face inward. Such micelles may be helpful in delivering nonpolar drugs in the body. Applications: Due to the unique temperature-dependent phase transition experienced by ELPs, in which they move from a linear state to a spherical aggregate state above their Tt, as well as the ability of ELPs to be easily conjugated with other compounds, these biopolymers hold numerous applications. Some of these applications involve ELP use in protein purification, cancer therapy, and tissue scaffolding. Applications: Protein purification The ELP can be conjugated to a functional group that can bind to a protein of interest. At temperatures below the Tt, the ELP binds the ligand in its linear form. In this linear state, the ELP-protein complex cannot easily be distinguished from the extraneous proteins in the solution. However, once the solution is heated above the Tt, the ELPs form spherical clumps. These clumps settle to the bottom of the tube during centrifugation, carrying the protein of interest, while the unwanted proteins remain in the supernatant, which can be physically separated from the spherical aggregates. To reduce impurities in the isolated ELP-protein complex, the solution can be cooled below the Tt, enabling the ELPs to once again assume their linear structure; hot and cold centrifugation cycles can then be repeated, and the protein of interest can finally be eluted from the ELPs via the addition of a salt. Applications: Tissue scaffolding The temperature-based phase behavior of ELPs can be utilized to produce stiff networks that may be compatible with cellular regeneration applications. At high concentrations (weight percent exceeding 15%), the ELP transition from a linear state to a spherical aggregate state above the transition temperature is arrested, leading to the formation of brittle gels. These otherwise brittle networks can then be modified chemically, via oxidative coupling, to yield hydrogels which can sustain high levels of mechanical stress and strain. The modified gel networks also contain pores, through which important cell-sustaining compounds can easily be delivered. Such strong hydrogels, when bathed in minimal cell media, have been found to promote the growth of human mesenchymal stem cell populations. The ability of these arrested ELP networks to promote cell growth may prove indispensable in the production of tissue scaffolds that promote cartilage production, for example; such an intervention may prove useful in the treatment of bone disease and rheumatoid arthritis. Applications: Drug delivery ELPs modified with certain functional groups can be conjugated with drugs, including chemotherapeutic agents. The ELP-drug complex can be taken up by tumor cells to a greater extent, promoting the cytotoxic activity of the drug. The complexes preferentially target tumor cells because these cells tend to be associated with more permeable blood vessels and a weaker lymphatic presence. This means that the drugs can cross over from the vessels into the tumor more frequently and remain there for a longer period of time without being filtered out.
The phase transition associated with ELPs can also be used to promote tumor cell uptake of the drug. By locally heating tumor cell regions, the ELP-drug complex can be made to aggregate into spherical clumps. If the ELP-drug complex is engineered so that, in its spherical clump form, it exposes functional domains recognized by tumor cell surfaces, this cell-surface interaction will promote uptake of the drug, as the tumor cell mistakes the ELP-drug complex for a harmless substance. Applications: Metal recovery A recent study reported the first thermo-responsive, rare-earth-element (REE)-selective protein. An ELP and an REE-binding domain are genetically fused to form an REE-selective, thermo-responsive, genetically encoded ELP called RELP, for the selective extraction and recovery of total REEs. RELP provides a selective and repeatable biosorption platform for REE recovery, and the authors highlighted that the technology can be adapted to recover other precious metals and commodities.
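To make the surface-index model from the "Tt theoretical model" section concrete, here is a minimal sketch of the calculation SI = Σ (ASA_XAA / ASA_p) × Ttc. Every numeric value below (the Ttc table and the accessible surface areas) is an illustrative placeholder, not data from the studies this article summarizes:

```python
# Hypothetical per-residue characteristic transition temperatures (Ttc).
# Charged residues are given large values to reflect the text's point
# that they dominate the surface index at physiological pH.
TTC = {"E": 250.0, "K": 230.0, "S": 60.0, "V": -30.0, "F": -70.0}

def surface_index(guest_residues, asa_protein):
    """SI = sum over guest residues of (ASA_XAA / ASA_p) * Ttc.

    guest_residues : list of (one-letter code, solvent-accessible area)
    asa_protein    : solvent-accessible area of the whole fusion protein
    The result is taken as directly proportional to dTt_fusion.
    """
    return sum((asa / asa_protein) * TTC[aa] for aa, asa in guest_residues)

# Example with made-up areas (arbitrary units): exposed charged residues
# contribute far more to SI than buried hydrophobic ones.
guests = [("E", 140.0), ("K", 120.0), ("V", 15.0)]
si = surface_index(guests, asa_protein=12000.0)
print(f"SI = {si:.2f} (proportional to the shift in Tt of the fusion)")
```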
**Toyota IMV platform** Toyota IMV platform: The Toyota IMV platform is an automobile platform for SUVs, pickups/light trucks and passenger cars from Toyota. The name "IMV" stands for "Innovative International Multi-purpose Vehicle". It uses a ladder frame chassis construction. IMV platform-based vehicles are either rear-wheel drive or four-wheel drive (either full-time or rear-based part-time). The front suspension is independent double-wishbone, while the rear suspension is half-dependent. Engines are mounted longitudinally. History: The IMV Project was first announced by Toyota in 2002. The project aimed to develop and produce pickup trucks, a minivan and an SUV outside Japan to reduce costs. The vehicles were released in 2004 as the seventh-generation Hilux, first-generation Innova and first-generation Fortuner respectively. Initial production of IMV vehicles was centered in Thailand, Indonesia, Argentina and South Africa, which would supply vehicles to countries in Asia, Europe, Africa, Oceania, Latin America and the Middle East, either in complete form or as knock-down kits. The production of major components was divided: for example, diesel engine production was centered in Thailand, petrol engines in Indonesia, and manual transmissions in the Philippines and India. Cumulative sales reached 1 million vehicles in 2006, 2 million in 2008, 3 million in 2009, 4 million in 2010, and 5 million in March 2012. Applications: Toyota Hilux AN10/AN20/AN30 (2004–2015, also referred to as "IMV1"/"IMV2"/"IMV3"); AN120/AN130 (2015–present). Toyota Fortuner/SW4/Hilux SW4 AN50/AN60 (2005–2015, also referred to as "IMV4"); AN150/AN160 (2015–present). Toyota Innova/Kijang Innova AN40 (2004–2015, also referred to as "IMV5"); AN140 (2015–present). IMV 0 The IMV 0 is a single-cab pickup concept, based on the IMV platform, that was presented on 14 December 2022 in Thailand. It was designed and engineered by Toyota Daihatsu Engineering & Manufacturing (TDEM) in collaboration with Japanese and Australian engineering teams. It is expected to be powered by a 2.4-litre 2GD-FTV diesel engine and to enter production in 2023.
**Theatrical makeup** Theatrical makeup: Theatrical makeup is makeup that is used to assist in creating the appearance of the characters that actors portray during a theater production. Background: In Greek and Roman theatre, makeup was unnecessary. Actors wore various masks, allowing them to portray another gender, age, or entirely different likeness. Thespis, considered to be the first actor, used white lead and wine to paint his face. In medieval Europe, actors altered their appearances by painting their faces a different color. Performers who portrayed God painted their faces white or gold; actors playing angels painted their faces red. During the Renaissance, actors were creative and resourceful when making over their faces. They used lamb's wool for false beards and flour as face paint. Advancements in stage lighting technology required stage makeup to evolve beyond one overall face color into a multidimensional craft. Originally, theatres used candles and oil lamps; these two sources of light were dim and allowed for crude, unrealistic makeup applications. Once gas lighting, limelight and electric light were introduced to theatres, a need emerged for new makeup materials and more skillful application techniques. In 1873, Ludwig Leichner, a Wagnerian opera singer, began commercially producing a non-toxic greasepaint stick, easing the application of makeup. Highlight and shadow: Through the use of makeup, specifically highlighting and shading, the apparent shape of an actor's face can be changed. By highlighting the face's protruding bones, the features become pronounced; shadowing cavities can add depth. Sagging jowls, forehead wrinkles, eye pouches, and prominent veins can be created by manipulating highlights and shadows. A highlight is a base makeup that is at least two shades lighter than the base. It is applied on the bridge of the nose, cheekbones, and areas under the eyes and below the brows. Using a color two shades deeper than the base provides depth and definition. This depth is commonly used on the eye sockets, to thin the sides of the nose, to hollow the cheeks, and to minimize heaviness under the chin. Makeup and lighting: Lighting controls makeup to a high degree. Makeup can lose its effectiveness due to incorrect stage lighting. Conversely, skillful lighting can greatly aid the art of makeup. Close communication between the lighting director and the makeup artist is crucial for the best possible effect. Understanding light's effect on makeup and various shades and pigments is important when designing a performer's makeup. The following are among the basic rules of light: nothing has color until light is reflected from it; an object appears black when all of the light is absorbed; an object appears white when all of the light is reflected. If certain rays are absorbed and others are reflected, the reflected rays determine the color. Makeup and lighting: Light's effect on makeup
Pink tends to gray the cool colors and intensify the warm ones. Yellow becomes more orange. Flesh pink flatters most makeup.
Fire red ruins makeup. All but the darker flesh tones virtually disappear. Light and medium rouge fade into the foundation, whereas the dark red rouges turn a reddish brown. Yellow becomes orange, and the cool shading colors become shades of gray and black.
Bastard amber is flattering because it picks up the warm pinks and flesh tones in the makeup.
Amber and orange intensify most flesh colors and turn them more yellow. They turn rouges more orange. Cool colors are grayed.
Green grays all flesh tones and rouges in proportion to its intensity. Green will be intensified. Yellow and blue will become greener.
Light blue-green lowers the intensity of the base colors. One should generally use very little rouge under this type of light.
Green-blue washes out pale flesh tones, and will gray medium and deep flesh tones, as well as all reds.
Blues gray most flesh tones and cause them to appear more red or purple.
Violet causes orange, flame, and scarlet to become redder. Rouge appears more intense.
Purple affects makeup like violet lighting, except reds and oranges will be even more intense, and most blues will look violet.
Straight makeup: Straight makeup is a style of makeup that provides a natural, clean and healthy glow. Straight makeup: Skin If a performer's skin is perfectly toned, makeup spreads smoothly and adheres easily. Dry skin or oily skin is dealt with prior to makeup application; otherwise, the makeup appears blotchy or smeared due to variations in absorption. Performers with dry skin use a moisturizer daily and after their faces have been cleansed following a performance. Performers with oily complexions use a facial toner wipe or astringent to remove the oil and allow a smooth application. Skin has four basic tones: brown, fair, pink and olive. Individuals with fair, pink, and olive skin tones use olive, beige, or suntan bases. Makeup artists and performers select shades compatible with the natural skin tone, but the base is one to several shades deeper. Performers with predominantly pink or ruddy complexions use base colors with cool undertones. The character, the size of the theatre, and the light intensity will determine the tone depth of the foundation. A thin layer of base makeup is applied to the neck, ears, and face using a white rubber sponge or the fingers; a heavy application of base looks aged and crepey. Straight makeup: Rouge Fair complexions are enhanced by soft shades of peach and pink, while brown complexions are best accented with coral shades. Moist rouge is applied before powder; dry rouge is used to accent the already powdered makeup. Eyes Eyes and eyebrows are the greatest communicative tools in an actor's arsenal. They are the most expressive features on the face. Straight makeup: Eye shadow Grease or stick shadow is applied to the eyelids and blended out toward the eyebrow bone before powder is applied; dry eye shadow is used alone or to intensify and touch up the color underneath. Dark eye shadow or grease deepens the eye sockets, creating a skull-like effect. Shades of brown and gray are best for individuals with fair complexions. Individuals with brown complexions use lighter shadows such as toast, mushroom or soft yellows. Straight makeup: Eye liner Liquid eyeliner, cake eyeliner, or the eyebrow pencil is used to accent and frame the eyes. There are two ways to line the upper lid of the eye: the owl eye or the almond eye. The owl eye is used to widen the eye and involves using a heavier line in the middle of the lid. The almond-shaped eye is created by extending the line out beyond the outer corner of the eye. The lower line is created with the same tool used on the upper lid; it begins a quarter-inch from the inner corner of the eye, and this extra space is needed to open the eye. Straight makeup: Eyelashes Mascara is used to add extra attention to the eyes. Black lash mascara is the most popular and is commonly used by women with fair and brown complexions. Very fair individuals and men use brown mascara.
The bottom lashes are coated with mascara, and, to avoid using false lashes, a process of layering powder and mascara is used to provide greater thickness. Straight makeup: Powder A generous amount of powder is needed to reduce unwanted shine. If a performer's makeup is under-powdered, skin oils will break through quickly, producing shine and possibly running. After powder is applied to the entire face, starting under and around the eyes, it is gently pressed for thirty seconds. The excess is brushed off with a large soft brush or a piece of cotton. A wet natural sponge or cotton is wiped lightly across the face to set the makeup, to remove any visible powder, and to eliminate the masky feeling. Translucent powders are used for fair complexions because they do not alter the original color of the base, the under-rouge, or the moist eye shadow. Brown complexions are set with a tinted powder that is compatible with the base color; it is used sparingly over the under-rouge and moist eye shadow. After the powder is applied, dry eye shadow and dry rouge are added. Straight makeup: Lips Though the eyes are the most expressive feature of the face, the audience's eyes and ears follow mouth movements to understand a play's progression. If a performer's lips are underdone or overplayed, they will detract from the performer and the performance. A general rule is: the larger the mouth, the deeper the lipstick tone. However, the actor should not appear "all mouth". Fair complexions use shades of lipstick like pink and coral. Brown complexions are enhanced by coral and orange shades. Red lipsticks are reserved for large theatres and character portrayals. An auburn or brown pencil is used to provide definition to the lips. Lipstick on men can look doll-like, so men use natural-colored lipsticks, lightly applied. Training/Education: Because stage actors are seen from farther away than actors on screen, it is crucial that their makeup is more dramatic and professionally done. Many higher-learning institutions have drama departments where all aspects of theater are taught, including the art of theatrical makeup. Some independent agencies also provide classes in theatrical makeup, and online courses are available as well. Through training, makeup artists learn important skills such as hand-eye coordination, the ability to draw straight lines and consistent shapes, creativity, and good grooming and personal hygiene habits. Many makeup artists who specialize in theatrical makeup build portfolios to show clients and employers; many work as freelance makeup artists or for cosmetics brands in department stores.
**Avalanche breakdown** Avalanche breakdown: Avalanche breakdown (or avalanche effect) is a phenomenon that can occur in both insulating and semiconducting materials. It is a form of electric current multiplication that can allow very large currents within materials which are otherwise good insulators. It is a type of electron avalanche. The avalanche process occurs when carriers in the transition region are accelerated by the electric field to energies sufficient to create mobile or free electron-hole pairs via collisions with bound electrons. Explanation: Materials conduct electricity if they contain mobile charge carriers. There are two types of charge carriers in a semiconductor: free electrons (mobile electrons) and electron holes (mobile holes which are missing electrons from the normally occupied electron states). A normally bound electron (e.g., in a bond) in a reverse-biased diode may break loose due to a thermal fluctuation or excitation, creating a mobile electron-hole pair. If there is a voltage gradient (electric field) in the semiconductor, the electron will move towards the positive voltage while the hole will move towards the negative voltage. Usually, the electron and hole will simply move to opposite ends of the crystal and enter the appropriate electrodes. When the electric field is strong enough, the mobile electron or hole may be accelerated to high enough speeds to knock other bound electrons free, creating more free charge carriers, increasing the current and leading to further "knocking out" processes and creating an avalanche. In this way, large portions of a normally insulating crystal can begin to conduct. Explanation: The large voltage drop and possibly large current during breakdown necessarily leads to the generation of heat. Therefore, a diode placed into a reverse blocking power application will usually be destroyed by breakdown if the external circuit allows a large current. In principle, avalanche breakdown only involves the passage of electrons and need not cause damage to the crystal. Avalanche diodes (commonly encountered as high voltage Zener diodes) are constructed to break down at a uniform voltage and to avoid current crowding during breakdown. These diodes can indefinitely sustain a moderate level of current during breakdown. The voltage at which the breakdown occurs is called the breakdown voltage. There is a hysteresis effect; once avalanche breakdown has occurred, the material will continue to conduct even if the voltage across it drops below the breakdown voltage. This is different from a Zener diode, which will stop conducting once the reverse voltage drops below the breakdown voltage.
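The run-up to breakdown is often summarized by Miller's empirical formula for the multiplication factor, M = 1 / (1 - (V/VBR)^n). This is a standard textbook approximation rather than something specific to this article, and the exponent n below is an assumed fitting value (it varies with the semiconductor and junction type):

```python
def avalanche_multiplication(v, v_br, n=4.0):
    """Miller's empirical multiplication factor M = 1 / (1 - (V/V_BR)^n).

    v    : reverse bias voltage (V), 0 <= v < v_br
    v_br : breakdown voltage (V)
    n    : empirical fitting exponent (assumed here; typically ~2-6)
    """
    ratio = v / v_br
    if not 0.0 <= ratio < 1.0:
        raise ValueError("model is only valid below the breakdown voltage")
    return 1.0 / (1.0 - ratio ** n)

# M stays near 1 at low bias and diverges as V approaches V_BR = 100 V:
for v in (10.0, 50.0, 90.0, 99.0):
    print(f"V = {v:5.1f} V -> M = {avalanche_multiplication(v, v_br=100.0):9.2f}")
```

The divergence of M as V approaches VBR mirrors the carrier multiplication running away into an avalanche, as described above.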
**Vipul Patel** Vipul Patel: Vipul R. Patel, FACS, is the founder and Medical Director of the Florida Hospital Global Robotics Institute, founder and Vice President of the Society of Robotic Surgery, and founder and Editor Emeritus of The Journal of Robotic Surgery. He is board certified by the American Urological Association and specializes in robotic surgery for prostate cancer. As of February 2018, he had performed his 11,000th robotic-assisted prostatectomy. The large volume of prostatectomies he has performed has enabled him to amass a large amount of statistical evidence regarding the efficacy of robotic techniques, which has been used in developing and refining techniques. Patel credits the use of robotic-assisted surgery with helping surgeons achieve better surgical outcomes with the "trifecta" of cancer control, continence and sexual function. In the course of his career Patel has led and participated in studies that have resulted in improved outcomes for robotic surgery and urologic treatment. Biography: In January 2008, Patel left Ohio and moved to Florida, where he became Medical Director of the Global Robotics Institute at Florida Hospital Celebration Health and Director of Urologic Oncology at the Florida Hospital Cancer Institute. In 2012 Patel founded the International Prostate Cancer Foundation, a charitable institution with the goals of promoting research in testing for genetic predisposition to prostate cancer, patient and physician education, and global screening for early detection of prostate cancer. Patel serves as chairman of the foundation. Academic career: Patel has played a major role in training and mentoring students in robotic technologies and in establishing robotic surgery programs in many countries. He was instrumental in developing guidelines for training and credentialing the next generation of robotic surgeons. He leads the robotic training team at the Nicholson Center, is a Professor of Urology at the University of Central Florida, and a Clinical Associate Professor of Urology at Nova Southeastern University. Academic career: He wrote the first textbook on robotic urologic surgery, now in a second edition. He has helped to establish and train robotic surgery units around the world. For example, in 2004 he established a robotic surgery department at the Hospital Kuala Lumpur in Malaysia. In 2008 he trained the first Russian robotic surgery team and performed the first robotic prostatectomy in Russia, for which he was inducted into the Russian Academy of Sciences. Works: Textbooks as editor Patel, Vipul. (2015). Robotic Urologic Surgery. (2nd ed., Chinese translation). Beijing, China: World Publishing Xi'an Corporation Ltd. Patel, Vipul. (2012). Robotic Urologic Surgery. (2nd ed.). London, England: Springer. Patel, V., Ramalingam, M. (2009). Operative Atlas of Laparoscopic Reconstructive Urology. London, England: Springer. Patel, Vipul. (2007). Robotic Urologic Surgery. London, England: Springer. Works: Articles Patel has published more than one hundred articles in scientific and medical journals. The most cited include: Ficarra, Vincenzo; Novara, Giacomo; Artibani, Walter; Cestari, Andrea; Galfano, Antonio; Graefen, Markus; Guazzoni, Giorgio; Guillonneau, Bertrand; Menon, Mani; Montorsi, Francesco; Patel, Vipul; Rassweiler, Jens; Van Poppel, Hendrik (2009). "Retropubic, Laparoscopic, and Robot-Assisted Radical Prostatectomy: A Systematic Review and Cumulative Analysis of Comparative Studies". European Urology. 55 (5): 1037–63.
doi:10.1016/j.eururo.2009.01.036. PMID 19185977. Works: Patel, V; Tully, A; Holmes, R; Lindsay, J (2005). "Robotic Radical Prostatectomy in the Community Setting—The Learning Curve and Beyond: Initial 200 Cases". The Journal of Urology. 174 (1): 269–72. doi:10.1097/01.ju.0000162082.12962.40. PMID 15947662. Ficarra, Vincenzo; Novara, Giacomo; Rosen, Raymond C.; Artibani, Walter; Carroll, Peter R.; Costello, Anthony; Menon, Mani; Montorsi, Francesco; Patel, Vipul R.; Stolzenburg, Jens-Uwe; Van Der Poel, Henk; Wilson, Timothy G.; Zattoni, Filiberto; Mottrie, Alexandre (2012). "Systematic Review and Meta-analysis of Studies Reporting Urinary Continence Recovery After Robot-assisted Radical Prostatectomy". European Urology. 62 (3): 405–17. doi:10.1016/j.eururo.2012.05.045. PMID 22749852. Coelho, Rafael F.; Rocco, Bernardo; Patel, Manoj B.; Orvieto, Marcelo A.; Chauhan, Sanket; Ficarra, Vincenzo; Melegari, Sara; Palmer, Kenneth J.; Patel, Vipul R. (2010). "Retropubic, Laparoscopic, and Robot-Assisted Radical Prostatectomy: A Critical Review of Outcomes Reported by High-Volume Centers". Journal of Endourology. 24 (12): 2003–15. doi:10.1089/end.2010.0295. PMC 3122926. PMID 20942686. Patel, Vipul R.; Palmer, Kenneth J.; Coughlin, Geoff; Samavedi, Srinivas (2008). "Robot-Assisted Laparoscopic Radical Prostatectomy: Perioperative Outcomes of 1500 Cases". Journal of Endourology. 22 (10): 2299–305. doi:10.1089/end.2008.9711. PMID 18837657. Patel, Vipul R.; Coelho, Rafael F.; Palmer, Kenneth J.; Rocco, Bernardo (2009). "Periurethral Suspension Stitch During Robot-Assisted Laparoscopic Radical Prostatectomy: Description of the Technique and Continence Outcomes". European Urology. 56 (3): 472–8. doi:10.1016/j.eururo.2009.06.007. PMID 19560260. Patel, Vipul R.; Sivaraman, Ananthakrishnan; Coelho, Rafael F.; Chauhan, Sanket; Palmer, Kenneth J.; Orvieto, Marcelo A.; Camacho, Ignacio; Coughlin, Geoff; Rocco, Bernardo (2011). "Pentafecta: A New Concept for Reporting Outcomes of Robot-Assisted Laparoscopic Radical Prostatectomy". European Urology. 59 (5): 702–7. doi:10.1016/j.eururo.2011.01.032. PMID 21296482.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**JULES** JULES: JULES (Joint UK Land Environment Simulator) is a land-surface parameterisation scheme describing soil-vegetation-atmosphere interactions. JULES is a community-led project which evolved from MOSES, the United Kingdom Meteorological Office (Met Office) Surface Exchange Scheme. It can be used as a stand-alone model or as the land-surface component of the Met Office Unified Model. JULES has been used to help decide which tactics would be effective in meeting the goals of the Paris Agreement. As well as being used by the Met Office climate modelling group, a number of studies have cited JULES and used it as a tool to assess the effects of climate change and to simulate environmental factors from groundwater to carbon in the atmosphere. JULES has been described as the most accurate global carbon budget model of net ecosystem productivity, because it has more years of data behind it than other models.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acoustic transmission line** Acoustic transmission line: An acoustic transmission line is a long duct that acts as an acoustic waveguide, used to produce or transmit sound in an undistorted manner. Technically it is the acoustic analog of the electrical transmission line, typically conceived as a rigid-walled duct or tube that is long and thin relative to the wavelength of sound present in it. Acoustic transmission line: Examples of transmission line (TL) related technologies include the (mostly obsolete) speaking tube, which transmitted sound to a different location with minimal loss and distortion; wind instruments such as the pipe organ, woodwind and brass, which can be modeled in part as transmission lines (although their design also involves generating sound, controlling its timbre, and coupling it efficiently to the open air); and transmission line based loudspeakers, which use the same principle to produce accurate extended low bass frequencies and avoid distortion. The comparison between an acoustic duct and an electrical transmission line is useful in "lumped-element" modeling of acoustical systems, in which acoustic elements like volumes, tubes, pistons, and screens can be modeled as single elements in a circuit. With the substitution of pressure for voltage, and volume velocity for current, the equations are essentially the same. Electrical transmission lines can be used to describe acoustic tubes and ducts, provided the frequency of the waves in the tube is below the critical frequency, such that they are purely planar. Design principles: Phase inversion is achieved by selecting a length of line equal to a quarter of the wavelength of the target lowest frequency. The effect is illustrated in Fig. 1, which shows a hard boundary at one end (the speaker) and the open-ended line vent at the other. The outputs of the bass driver and vent are in phase throughout the pass band until the frequency approaches the quarter wavelength, at which point their phase relationship reaches 90 degrees as shown. However, by this time the vent is producing most of the output (Fig. 2). Because the line operates over several octaves alongside the drive unit, cone excursion is reduced, providing higher SPLs and lower distortion levels compared with reflex and infinite baffle designs. Design principles: The calculation of the length of line required for a certain bass extension appears straightforward, based on a simple formula: ℓ = 344 / (4 × f), where f is the sound frequency in hertz (Hz), 344 is the speed of sound in air at 20 °C in meters/second, and ℓ is the length of the transmission line in meters. Design principles: The complex loading of the bass drive unit demands specific Thiele-Small driver parameters to realise the full benefits of a TL design. However, most drive units in the marketplace are developed for the more common reflex and infinite baffle designs and are usually not suitable for TL loading. High-efficiency bass drivers with extended low-frequency ability are usually designed to be extremely light and flexible, having very compliant suspensions. Whilst performing well in a reflex design, these characteristics do not match the demands of a TL design. The drive unit is effectively coupled to a long column of air which has mass. This lowers the resonant frequency of the drive unit, negating the need for a highly compliant device. 
Furthermore, the column of air provides greater force on the driver itself than a driver opening onto a large volume of air (in simple terms, it provides more resistance to the driver's attempt to move it), so controlling the movement of air requires an extremely rigid cone, to avoid deformation and consequent distortion. Design principles: The introduction of absorption materials reduces the velocity of sound through the line, as discovered by Bailey in his original work. Bradbury published his extensive tests to determine this effect in a paper in the Journal of the Audio Engineering Society (JAES) in 1976, and his results agreed that heavily damped lines could reduce the velocity of sound by as much as 50%, although 35% is typical in medium-damped lines. Bradbury's tests were carried out using fibrous materials, typically long-haired wool and glass fibre. These kinds of materials, however, produce highly variable effects that are not consistently repeatable for production purposes. They are also liable to produce inconsistencies due to movement, climatic factors and effects over time. High-specification acoustic foams, developed by loudspeaker manufacturers such as PMC, with similar characteristics to long-haired wool, provide repeatable results for consistent production. The density of the polymer, the diameter of the pores and the sculptured profiling are all specified to provide the correct absorption for each speaker model. The quantity and position of the foam are critical to engineering a low-pass acoustic filter that provides adequate attenuation of the upper bass frequencies, whilst allowing an unimpeded path for the low bass frequencies. Discovery and development: The concept was termed the "acoustical labyrinth" by the Stromberg-Carlson Co., which used it in its console radios beginning in 1936. This type of loudspeaker enclosure was proposed in October 1965 by Dr A. R. Bailey and A. H. Radford in Wireless World magazine (pp. 483–486). The article postulated that energy from the rear of a drive unit could be essentially absorbed, without damping the cone's motion or superimposing internal reflections and resonance, so Bailey and Radford reasoned that the rear wave could be channeled down a long pipe. If the acoustic energy was absorbed, it would not be available to excite resonances. A pipe of sufficient length could be tapered and stuffed so that the energy loss was almost complete, minimizing output from the open end. No broad consensus on the ideal taper (expanding, uniform cross-section, or contracting) has been established. Uses: Loudspeaker design Acoustic transmission lines gained attention through their use in loudspeakers in the 1960s and 1970s. In 1965, A. R. Bailey's article in Wireless World, "A Non-resonant Loudspeaker Enclosure Design", detailed a working transmission line, which was commercialized by John Wright and partners under the brand names IMF and later TDL, and sold by audiophile Irving M. "Bud" Fried in the United States. Uses: A transmission line is used in loudspeaker design to reduce time, phase and resonance related distortions, and in many designs to gain exceptional bass extension to the lower end of human hearing, and in some cases the near-infrasonic (below 20 Hz). TDL's 1980s reference speaker range (now discontinued) contained models with frequency ranges extending from 20 Hz upwards, and in some models from as low as 7 Hz, without needing a separate subwoofer. Irving M. 
Fried, an advocate of TL design, stated: "I believe that speakers should preserve the integrity of the signal waveform and the Audio Perfectionist Journal has presented a great deal of information about the importance of time domain performance in loudspeakers. I’m not the only one who appreciates time- and phase-accurate speakers but I have been virtually the only advocate to speak out in print in recent years. There’s a reason for that." In practice, the duct is folded inside a conventionally shaped cabinet, so that the open end of the duct appears as a vent on the speaker cabinet. There are many ways in which the duct can be folded, and the line is often tapered in cross-section to avoid parallel internal surfaces that encourage standing waves. Depending upon the drive unit and the quantity and various physical properties of absorbent material, the amount of taper will be adjusted during the design process to tune the duct and remove irregularities in its response. The internal partitioning provides substantial bracing for the entire structure, reducing cabinet flexing and colouration. The inside faces of the duct or line are treated with an absorbent material to provide the correct termination with frequency to load the drive unit as a TL. A theoretically perfect TL would absorb all frequencies entering the line from the rear of the drive unit, but such a line remains theoretical, as it would have to be infinitely long. The physical constraints of the real world demand that the length of the line must often be less than 4 meters before the cabinet becomes too large for practical applications, so not all the rear energy can be absorbed by the line. In a realized TL, only the upper bass is TL loaded in the true sense of the term (i.e. fully absorbed); the low bass is allowed to radiate freely from the vent in the cabinet. The line therefore effectively works as a low-pass filter, another crossover point in effect, achieved acoustically by the line and its absorbent filling. Below this "crossover point" the low bass is loaded by the column of air formed by the length of the line. The length is specified to reverse the phase of the rear output of the drive unit as it exits the vent. This energy combines with the output of the bass unit, extending its response and effectively creating a second driver. Sound ducts as transmission lines: A duct for sound propagation also behaves like a transmission line (e.g. an air conditioning duct or a car muffler). Its length may be similar to the wavelength of the sound passing through it, but the dimensions of its cross-section are normally smaller than a quarter of the wavelength. Sound ducts as transmission lines: Sound is introduced at one end of the tube by forcing the pressure across the whole cross-section to vary with time. An almost planar wavefront travels down the line at the speed of sound. When the wave reaches the end of the transmission line, its behaviour depends on what is present at the end of the line. There are three possible scenarios: The frequency of the pulse generated at the transducer produces a pressure peak at the terminus exit (odd-ordered harmonic open pipe resonance), resulting in effectively low acoustic impedance of the duct and a high level of energy transfer. Sound ducts as transmission lines: The frequency of the pulse generated at the transducer produces a pressure null at the terminus exit (even-ordered harmonic open pipe anti-resonance), resulting in effectively high acoustic impedance of the duct and a low level of energy transfer. 
The frequency of the pulse generated at the transducer produces neither a peak nor a null, in which case energy transfer is nominal, in keeping with typical energy dissipation with distance from the source.
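The quarter-wave relationship above, together with the reduction in effective sound velocity caused by damping material, lends itself to a quick calculation. The following Python sketch is a minimal illustration of that arithmetic only, not a substitute for proper TL modelling; the 35% velocity reduction figure is the medium-damping value quoted from Bradbury's tests above.

```python
def tl_line_length(f_low_hz, c=344.0, velocity_reduction=0.0):
    """Quarter-wave transmission line length for a target low frequency.

    f_low_hz:           target lowest frequency in Hz
    c:                  speed of sound in air (m/s, ~344 at 20 degrees C)
    velocity_reduction: fractional slowdown from damping material
                        (e.g. 0.35 for a typical medium-damped line)
    """
    effective_c = c * (1.0 - velocity_reduction)
    return effective_c / (4.0 * f_low_hz)

# Undamped line tuned to 20 Hz: 344 / (4 * 20) = 4.3 m
print(round(tl_line_length(20), 2))                           # 4.3
# Medium damping (~35% slower sound) shortens the required line
print(round(tl_line_length(20, velocity_reduction=0.35), 2))  # 2.8
```

This is one reason heavily damped real-world cabinets can tune lower than their folded path length alone would suggest.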
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GNU Privacy Guard** GNU Privacy Guard: GNU Privacy Guard (GnuPG or GPG) is a free-software replacement for Symantec's PGP cryptographic software suite. The software is compliant with RFC 4880, the IETF standards-track specification of OpenPGP. Modern versions of PGP are interoperable with GnuPG and other OpenPGP-compliant systems. GnuPG is part of the GNU Project and received major funding from the German government in 1999. Overview: GnuPG is a hybrid-encryption software program because it uses a combination of conventional symmetric-key cryptography for speed and public-key cryptography for ease of secure key exchange, typically by using the recipient's public key to encrypt a session key which is used only once. This mode of operation is part of the OpenPGP standard and has been part of PGP from its first version. Overview: The GnuPG 1.x series uses an integrated cryptographic library, while the GnuPG 2.x series replaces this with Libgcrypt. Overview: GnuPG encrypts messages using asymmetric key pairs individually generated by GnuPG users. The resulting public keys may be exchanged with other users in a variety of ways, such as Internet key servers. They must always be exchanged carefully to prevent identity spoofing by corrupting public key ↔ "owner" identity correspondences. It is also possible to add a cryptographic digital signature to a message, so the message integrity and sender can be verified, if a particular correspondence relied upon has not been corrupted. Overview: GnuPG also supports symmetric encryption algorithms. By default, GnuPG has used the AES symmetric algorithm since version 2.1; CAST5 was used in earlier versions. GnuPG does not use patented or otherwise restricted software or algorithms. Instead, GnuPG uses a variety of other, non-patented algorithms. For a long time, it did not support the IDEA encryption algorithm used in PGP. It was in fact possible to use IDEA in GnuPG by downloading a plugin for it; however, this might have required a license for some uses in countries in which IDEA was patented. Starting with versions 1.4.13 and 2.0.20, GnuPG supports IDEA because the last patent on IDEA expired in 2012. Support for IDEA is intended "to get rid of all the questions from folks either trying to decrypt old data or migrating keys from PGP to GnuPG", and hence is not recommended for regular use. Overview: More recent releases of GnuPG 2.x ("modern" and the now deprecated "stable" series) expose most cryptographic functions and algorithms that Libgcrypt (its cryptography library) provides, including support for elliptic curve cryptography (ECDH, ECDSA and EdDSA) in the "modern" series (i.e. since GnuPG 2.1). Overview: Algorithms As of versions 2.3 and 2.2, GnuPG supports the following algorithms: Public key: RSA, ElGamal, DSA, ECDH (cv25519, cv448, nistp256, nistp384, nistp521, brainpoolP256r1, brainpoolP384r1, brainpoolP512r1, secp256k1), ECDSA (nistp256, nistp384, nistp521, brainpoolP256r1, brainpoolP384r1, brainpoolP512r1, secp256k1), EdDSA (ed25519, ed448); Cipher: 3DES, IDEA (for backward compatibility), CAST5, Blowfish, Twofish, AES-128, AES-192, AES-256, Camellia-128, -192 and -256; Hash: MD5, SHA-1, RIPEMD-160, SHA-256, SHA-384, SHA-512, SHA-224; Compression: Uncompressed, ZIP, ZLIB, BZIP2. History: GnuPG was initially developed by Werner Koch. The first production version, version 1.0.0, was released on September 7, 1999, almost two years after the first GnuPG release (version 0.0.0). 
The German Federal Ministry of Economics and Technology funded the documentation and the port to Microsoft Windows in 2000. GnuPG is a system compliant with the OpenPGP standard, so the history of OpenPGP is of importance; it was designed to interoperate with PGP, an email encryption program initially designed and developed by Phil Zimmermann. On February 7, 2014, a GnuPG crowdfunding effort closed, raising €36,732 for a new Web site and infrastructure improvements. History: Branches Since the release of a stable GnuPG 2.3, starting with version 2.3.3 in October 2021, three stable branches of GnuPG are actively maintained: A "stable branch", which currently is (as of 2021) the 2.3 branch. A "LTS (long-term support) branch", which currently is (as of 2021) the 2.2 branch (which was formerly called the "modern branch", in comparison to the 2.0 branch). History: The old "legacy branch" (formerly called the "classic branch"), which is and will stay the 1.4 branch. Before GnuPG 2.3, two stable branches of GnuPG were actively maintained: "Modern" (2.2), with numerous new features, such as elliptic curve cryptography, compared to the former "stable" (2.0) branch, which it replaced with the release of GnuPG 2.2.0 on August 28, 2017. It was initially released on November 6, 2014. History: "Classic" (1.4), the very old, but still maintained, stand-alone version, most suitable for outdated or embedded platforms. Initially released on December 16, 2004. Different GnuPG 2.x versions (e.g. from the 2.2 and 2.0 branches) cannot be installed at the same time. However, it is possible to install a "classic" GnuPG version (i.e. from the 1.4 branch) along with any GnuPG 2.x version. Before the release of GnuPG 2.2 ("modern"), the now deprecated "stable" branch (2.0) was recommended for general use; it was initially released on November 13, 2006. This branch reached its end-of-life on December 31, 2017; its last version is 2.0.31, released on December 29, 2017. Before the release of GnuPG 2.0, all stable releases originated from a single branch; i.e., before November 13, 2006, no multiple release branches were maintained in parallel. These former, sequentially succeeding (up to 1.4) release branches were: the 1.2 branch, initially released on September 22, 2002, with 1.2.6 as the last version, released on October 26, 2004. History: the 1.0 branch, initially released on September 7, 1999, with 1.0.7 as the last version, released on April 30, 2002. (Note that before the release of GnuPG 2.3.0, branches with an odd minor release number (e.g. 2.1, 1.9, 1.3) were development branches leading to a stable release branch with a "+ 0.1" higher version number (e.g. 2.2, 2.0, 1.4); hence branches 2.2 and 2.1 both belong to the "modern" series, 2.0 and 1.9 both to the "stable" series, while the branches 1.4 and 1.3 both belong to the "classic" series. History: With the release of GnuPG 2.3.0, this nomenclature was altered to be composed of a "stable" and an "LTS" branch from the "modern" series, plus 1.4 as the last maintained "classic" branch. Also note that even or odd minor release numbers no longer indicate a stable or development release branch.) Platforms: Although the basic GnuPG program has a command-line interface, there exist various front-ends that provide it with a graphical user interface. For example, GnuPG encryption support has been integrated into KMail and Evolution, the graphical email clients found in KDE and GNOME, the most popular Linux desktops. 
There are also graphical GnuPG front-ends, for example Seahorse for GNOME and KGPG and Kleopatra for KDE. Platforms: GPGTools provides a number of front-ends for OS integration of encryption and key management, as well as GnuPG installations via installer packages for macOS. GPG Suite installs all related OpenPGP applications (GPG Keychain), plugins (GPG Mail) and dependencies (MacGPG), along with GPG Services (integration into the macOS Services menu) to use GnuPG based encryption. Platforms: Instant messaging applications such as Psi and Fire can automatically secure messages when GnuPG is installed and configured. Web-based software such as Horde also makes use of it. The cross-platform extension Enigmail provides GnuPG support for Mozilla Thunderbird and SeaMonkey. Similarly, Enigform provides GnuPG support for Mozilla Firefox. FireGPG was discontinued June 7, 2010. In 2005, g10 Code GmbH and Intevation GmbH released Gpg4win, a software suite that includes GnuPG for Windows, GNU Privacy Assistant, and GnuPG plug-ins for Windows Explorer and Outlook. These tools are wrapped in a standard Windows installer, making it easier for GnuPG to be installed and used on Windows systems. Vulnerabilities: The OpenPGP standard specifies several methods of digitally signing messages. In 2003, due to an error in a change to GnuPG intended to make one of those methods more efficient, a security vulnerability was introduced. It affected only one method of digitally signing messages, only for some releases of GnuPG (1.0.2 through 1.2.3), and there were fewer than 1000 such keys listed on the key servers. Most people did not use this method, and were in any case discouraged from doing so, so the damage caused (if any, since none has been publicly reported) would appear to have been minimal. Support for this method has been removed from GnuPG versions released after this discovery (1.2.4 and later). Vulnerabilities: Two further vulnerabilities were discovered in early 2006; the first being that scripted uses of GnuPG for signature verification may result in false positives, the second that non-MIME messages were vulnerable to the injection of data which, while not covered by the digital signature, would be reported as being part of the signed message. In both cases updated versions of GnuPG were made available at the time of the announcement. Vulnerabilities: In June 2017, a vulnerability (CVE-2017-7526) was discovered within Libgcrypt, a library used by GnuPG, by Bernstein, Breitner and others; it enabled full key recovery for RSA-1024 and for more than an eighth of RSA-2048 keys. This side-channel attack exploits the fact that Libgcrypt used a sliding-window method for exponentiation, which leads to the leakage of exponent bits and to full key recovery. Again, an updated version of GnuPG was made available at the time of the announcement. Vulnerabilities: In October 2017, the ROCA vulnerability was announced, which affects RSA keys generated by YubiKey 4 tokens, which are often used with PGP/GPG. Many published PGP keys were found to be susceptible. Around June 2018, the SigSpoof attacks were announced. These allowed an attacker to convincingly spoof digital signatures. In January 2021, Libgcrypt 1.9.0 was released, which was found to contain a severe bug that was simple to exploit. A fix was released 10 days later in Libgcrypt 1.9.1.
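The hybrid public-key workflow described in the Overview can be exercised directly from the gpg command line; the sketch below drives it from Python via subprocess. It is a minimal illustration only: the recipient ID alice@example.org is a hypothetical placeholder, and it assumes a gpg binary on the PATH with that key already imported.

```python
import subprocess

RECIPIENT = "alice@example.org"  # hypothetical user ID; replace with a real key

# Public-key (hybrid) encryption: gpg generates a one-time session key,
# encrypts the file with it, and encrypts that session key to the recipient.
subprocess.run(
    ["gpg", "--batch", "--yes", "--armor",
     "--recipient", RECIPIENT,
     "--output", "report.txt.asc",
     "--encrypt", "report.txt"],
    check=True,
)

# Purely symmetric (passphrase-based) encryption, forcing AES-256.
# Run interactively: gpg prompts for the passphrase via pinentry;
# unattended use would additionally need e.g. --passphrase-file.
subprocess.run(
    ["gpg", "--yes",
     "--cipher-algo", "AES256",
     "--output", "report.txt.gpg",
     "--symmetric", "report.txt"],
    check=True,
)
```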
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Allyl methyl sulfide** Allyl methyl sulfide: Allyl methyl sulfide is an organosulfur compound with the chemical formula CH2=CHCH2SCH3. The molecule features two functional groups, an allyl (CH2=CHCH2) and a sulfide. It is a colourless liquid with a strong odor characteristic of alkyl sulfides. It is a metabolite of garlic, and "garlic breath" is attributed to its presence. It is prepared by the reaction of allyl chloride with sodium hydroxide and methanethiol. Allyl methyl sulfide: CH2=CHCH2Cl + NaOH (aq) + CH3SH → CH2=CHCH2SCH3 + NaCl + H2O
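As a quick check on the formula CH2=CHCH2SCH3 (molecular formula C4H8S), the following Python sketch sums standard atomic weights to obtain the compound's molar mass; the atomic weights are rounded IUPAC values, so the result is approximate.

```python
# Approximate standard atomic weights (g/mol)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "S": 32.06}

# Allyl methyl sulfide, CH2=CHCH2SCH3, has the molecular formula C4H8S
composition = {"C": 4, "H": 8, "S": 1}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())
print(f"C4H8S: {molar_mass:.2f} g/mol")  # ~88.17 g/mol
```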
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quasiconformal mapping** Quasiconformal mapping: In mathematical complex analysis, a quasiconformal mapping, introduced by Grötzsch (1928) and named by Ahlfors (1935), is a homeomorphism between plane domains which to first order takes small circles to small ellipses of bounded eccentricity. Intuitively, let f : D → D′ be an orientation-preserving homeomorphism between open sets in the plane. If f is continuously differentiable, then it is K-quasiconformal if the derivative of f at every point maps circles to ellipses with eccentricity bounded by K. Definition: Suppose f : D → D′ where D and D′ are two domains in C. There are a variety of equivalent definitions, depending on the required smoothness of f. If f is assumed to have continuous partial derivatives, then f is quasiconformal provided it satisfies the Beltrami equation

$$\frac{\partial f}{\partial \bar z} = \mu(z) \frac{\partial f}{\partial z} \qquad (1)$$

for some complex-valued Lebesgue measurable μ satisfying sup |μ| < 1 (Bers 1977). This equation admits a geometrical interpretation. Equip D with the metric tensor

$$ds^2 = \Omega(z)^2 \left| dz + \mu(z)\, d\bar z \right|^2,$$

where Ω(z) > 0. Then f satisfies (1) precisely when it is a conformal transformation from D equipped with this metric to the domain D′ equipped with the standard Euclidean metric. The function f is then called μ-conformal. More generally, the continuous differentiability of f can be replaced by the weaker condition that f be in the Sobolev space $W^{1,2}(D)$ of functions whose first-order distributional derivatives are in $L^2(D)$. In this case, f is required to be a weak solution of (1). When μ is zero almost everywhere, any homeomorphism in $W^{1,2}(D)$ that is a weak solution of (1) is conformal. Definition: Without appeal to an auxiliary metric, consider the effect of the pullback under f of the usual Euclidean metric. The resulting metric is then given by

$$\left| \frac{\partial f}{\partial z} \right|^2 \left| dz + \mu(z)\, d\bar z \right|^2,$$

which, relative to the background Euclidean metric $dz\, d\bar z$, has eigenvalues

$$(1 + |\mu|)^2 \left| \frac{\partial f}{\partial z} \right|^2, \qquad (1 - |\mu|)^2 \left| \frac{\partial f}{\partial z} \right|^2.$$

The eigenvalues represent, respectively, the squared length of the major and minor axes of the ellipse obtained by pulling back along f the unit circle in the tangent plane. Accordingly, the dilatation of f at a point z is defined by

$$K(z) = \frac{1 + |\mu(z)|}{1 - |\mu(z)|}.$$

The (essential) supremum of K(z) is given by

$$\sup_{z \in D} K(z) = \frac{1 + \|\mu\|_\infty}{1 - \|\mu\|_\infty}$$

and is called the dilatation of f. A definition based on the notion of extremal length is as follows: if there is a finite K such that for every collection Γ of curves in D the extremal length of Γ is at most K times the extremal length of {f ∘ γ : γ ∈ Γ}, then f is K-quasiconformal. If f is K-quasiconformal for some finite K, then f is quasiconformal. A few facts about quasiconformal mappings: If K > 1 then the maps x + iy ↦ Kx + iy and x + iy ↦ x + iKy are both quasiconformal and have constant dilatation K. If s > −1 then the map $z \mapsto z|z|^s$ is quasiconformal (here z is a complex number) and has constant dilatation $\max(1+s, \tfrac{1}{1+s})$. When s ≠ 0, this is an example of a quasiconformal homeomorphism that is not smooth. If s = 0, this is simply the identity map. A homeomorphism is 1-quasiconformal if and only if it is conformal. Hence the identity map is always 1-quasiconformal. If f : D → D′ is K-quasiconformal and g : D′ → D′′ is K′-quasiconformal, then g ∘ f is KK′-quasiconformal. The inverse of a K-quasiconformal homeomorphism is K-quasiconformal. The set of 1-quasiconformal maps forms a group under composition. The space of K-quasiconformal mappings from the complex plane to itself mapping three distinct points to three given points is compact. 
Measurable Riemann mapping theorem: Of central importance in the theory of quasiconformal mappings in two dimensions is the measurable Riemann mapping theorem, proved by Lars Ahlfors and Lipman Bers. The theorem generalizes the Riemann mapping theorem from conformal to quasiconformal homeomorphisms, and is stated as follows. Suppose that D is a simply connected domain in C that is not equal to C, and suppose that μ : D → C is Lebesgue measurable and satisfies $\|\mu\|_\infty < 1$. Then there is a quasiconformal homeomorphism f from D to the unit disk which is in the Sobolev space $W^{1,2}(D)$ and satisfies the corresponding Beltrami equation (1) in the distributional sense. As with Riemann's mapping theorem, this f is unique up to 3 real parameters. Computational quasi-conformal geometry: Recently, quasi-conformal geometry has attracted attention from different fields, such as applied mathematics, computer vision and medical imaging. Computational quasi-conformal geometry has been developed, which extends the quasi-conformal theory into a discrete setting. It has found various important applications in medical image analysis, computer vision and graphics.
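As a sanity check on the dilatation formula, one can work through the horizontal stretch map from the list of facts above; this short derivation is an illustrative aside, not part of the original article.

```latex
% Worked example: the stretch map f(x + iy) = Kx + iy, K > 1.
% Writing f in terms of z and \bar z, using x = (z + \bar z)/2:
\[
  f(z) = \frac{K+1}{2}\, z + \frac{K-1}{2}\, \bar z ,
  \qquad
  \frac{\partial f}{\partial z} = \frac{K+1}{2},
  \quad
  \frac{\partial f}{\partial \bar z} = \frac{K-1}{2}.
\]
% Hence the Beltrami coefficient and the dilatation are
\[
  \mu = \frac{\partial f / \partial \bar z}{\partial f / \partial z}
      = \frac{K-1}{K+1},
  \qquad
  K(z) = \frac{1 + |\mu|}{1 - |\mu|}
       = \frac{(K+1) + (K-1)}{(K+1) - (K-1)} = K,
\]
% agreeing with the stated constant dilatation K.
```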
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Maschke's theorem** Maschke's theorem: In mathematics, Maschke's theorem, named after Heinrich Maschke, is a theorem in group representation theory that concerns the decomposition of representations of a finite group into irreducible pieces. Maschke's theorem allows one to make general conclusions about representations of a finite group G without actually computing them. It reduces the task of classifying all representations to the more manageable task of classifying irreducible representations, since when the theorem applies, any representation is a direct sum of irreducible pieces (constituents). Moreover, it follows from the Jordan–Hölder theorem that, while the decomposition into a direct sum of irreducible subrepresentations may not be unique, the irreducible pieces have well-defined multiplicities. In particular, a representation of a finite group over a field of characteristic zero is determined up to isomorphism by its character. Formulations: Maschke's theorem addresses the question: when is a general (finite-dimensional) representation built from irreducible subrepresentations using the direct sum operation? This question (and its answer) are formulated differently for different perspectives on group representation theory. Formulations: Group-theoretic Maschke's theorem is commonly formulated as a corollary to the following result: if V is a representation of a finite group G over a field K whose characteristic does not divide the order of G, and W is a G-invariant subspace of V, then there is a G-invariant subspace U of V such that V = W ⊕ U. Then the corollary is that every such representation is a direct sum of irreducible subrepresentations. The vector space of complex-valued class functions of a group G has a natural G-invariant inner product structure, described in the article Schur orthogonality relations. Maschke's theorem was originally proved for the case of representations over C by constructing U as the orthogonal complement of W under this inner product. Formulations: Module-theoretic One of the approaches to representations of finite groups is through module theory. Representations of a group G are replaced by modules over its group algebra K[G] (to be precise, there is an isomorphism of categories between K[G]-Mod and Rep G, the category of representations of G). Irreducible representations correspond to simple modules. In the module-theoretic language, Maschke's theorem asks: is an arbitrary module semisimple? In this context, the theorem can be reformulated as follows: if G is a finite group and K is a field whose characteristic does not divide the order of G, then the group algebra K[G] is semisimple. The importance of this result stems from the well-developed theory of semisimple rings, in particular the Artin–Wedderburn theorem (sometimes referred to as Wedderburn's structure theorem). When K is the field of complex numbers, this shows that the algebra K[G] is a product of several copies of complex matrix algebras, one for each irreducible representation. If the field K has characteristic zero but is not algebraically closed, for example if K is the field of real or rational numbers, then a somewhat more complicated statement holds: the group algebra K[G] is a product of matrix algebras over division rings over K. The summands correspond to irreducible representations of G over K. Category-theoretic Reformulated in the language of semisimple categories, Maschke's theorem states: if G is a finite group and K is a field whose characteristic does not divide the order of G, then the category of representations of G over K is semisimple. Proofs: Group-theoretic Let U be a subspace of V complementary to W. Let $p_0 : V \to W$ be the projection function, i.e., $p_0(w + u) = w$ for any $u \in U$, $w \in W$. Define

$$p(x) = \frac{1}{\#G} \sum_{g \in G} g \cdot p_0 \cdot g^{-1}(x),$$

where $g \cdot p_0 \cdot g^{-1}$ is an abbreviation of $\rho_W(g) \cdot p_0 \cdot \rho_V(g^{-1})$, with $\rho_W(g)$, $\rho_V(g^{-1})$ being the representations of G on W and V. Then ker p is preserved by G under the representation $\rho_V$: for any $x \in \ker p$ and $h \in G$, $p(h \cdot x) = h \cdot p(x) = 0$, so $x \in \ker p$ implies $h \cdot x \in \ker p$. So the restriction of $\rho_V$ to ker p is also a representation. By the definition of p, for any $w \in W$, $p(w) = w$, so $W \cap \ker p = \{0\}$, and for any $v \in V$, $p(p(v)) = p(v)$. Thus $p(v - p(v)) = 0$, so $v - p(v) \in \ker p$. Therefore $V = W \oplus \ker p$. Module-theoretic Let V be a K[G]-submodule of K[G]. We will prove that V is a direct summand. Let π be any K-linear projection of K[G] onto V. Consider the map

$$\varphi(x) = \frac{1}{\#G} \sum_{g \in G} g \cdot \pi(g^{-1} \cdot x).$$

Then φ is again a projection: it is clearly K-linear, maps K[G] to V, and induces the identity on V (therefore, it maps K[G] onto V). Moreover, for any $h \in G$ we have

$$\varphi(h \cdot x) = \frac{1}{\#G} \sum_{g \in G} g \cdot \pi(g^{-1} h \cdot x) = \frac{1}{\#G} \sum_{g' \in G} h g' \cdot \pi(g'^{-1} \cdot x) = h \cdot \varphi(x)$$

(substituting $g = h g'$), so φ is in fact K[G]-linear. By the splitting lemma, $K[G] = V \oplus \ker \varphi$. This proves that every submodule is a direct summand, that is, K[G] is semisimple. Converse statement: The above proof depends on the fact that #G is invertible in K. This might lead one to ask if the converse of Maschke's theorem also holds: if the characteristic of K divides the order of G, does it follow that K[G] is not semisimple? The answer is yes. Proof. For $x = \sum \lambda_g g \in K[G]$ define $\epsilon(x) = \sum \lambda_g$. Let $I = \ker \epsilon$. Then I is a K[G]-submodule. We will prove that for every nontrivial submodule V of K[G], $I \cap V \neq 0$. Let V be given, and let $v = \sum \mu_g g$ be any nonzero element of V. If $\epsilon(v) = 0$, the claim is immediate. Otherwise, let $s = \sum_g 1g$, the sum of all group elements. Then $\epsilon(s) = \#G \cdot 1 = 0$, so $s \in I$, and $sv = \epsilon(v)s$, so that sv is a nonzero element of both I and V. This proves V is not a direct complement of I for all V, so K[G] is not semisimple. Non-examples: The theorem cannot apply when G is infinite, or when the field K has characteristic dividing #G. For example: Consider the infinite group Z and the representation $\rho : \mathbb{Z} \to \mathrm{GL}_2(\mathbb{C})$ defined by

$$\rho(n) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^n = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}.$$

Let $W = \mathbb{C} \cdot \binom{1}{0}$, the 1-dimensional subspace of $\mathbb{C}^2$ spanned by $\binom{1}{0}$. Then the restriction of ρ to W is a trivial subrepresentation of Z. However, there is no U such that both W and U are subrepresentations of Z with $\mathbb{C}^2 = W \oplus U$: any such U would need to be 1-dimensional, but any 1-dimensional subspace preserved by ρ has to be spanned by an eigenvector of $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, and the only eigenvectors of that matrix are the multiples of $\binom{1}{0}$. Consider a prime p, the group $\mathbb{Z}/p\mathbb{Z}$, the field $K = \mathbb{F}_p$, and the representation $\rho : \mathbb{Z}/p\mathbb{Z} \to \mathrm{GL}_2(\mathbb{F}_p)$ defined by

$$\rho(n) = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}.$$

Simple calculations show that there is only one eigenvector (up to scaling) of $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ here, so by the same argument, the 1-dimensional subrepresentation of $\mathbb{Z}/p\mathbb{Z}$ is unique, and $\mathbb{Z}/p\mathbb{Z}$ cannot be decomposed into the direct sum of two 1-dimensional subrepresentations.
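The averaging trick in the group-theoretic proof is easy to check numerically. The sketch below (an illustrative aside; all the concrete choices are invented here) takes G = Z/2Z acting on C² by swapping coordinates, starts from a non-equivariant projection onto the invariant line spanned by (1, 1), and averages it over the group; the result commutes with the action and is still a projection onto that line.

```python
import numpy as np

# G = Z/2Z acting on C^2: identity and coordinate swap
I2 = np.eye(2)
S = np.array([[0.0, 1.0], [1.0, 0.0]])
group = [I2, S]

# W = span{(1, 1)} is G-invariant. p0 projects onto W along span{(1, 0)};
# it is a projection onto W but does NOT commute with S.
p0 = np.array([[0.0, 1.0], [0.0, 1.0]])

# Maschke averaging: p = (1/#G) * sum_g  g p0 g^{-1}
p = sum(g @ p0 @ np.linalg.inv(g) for g in group) / len(group)

print(p)                                              # [[0.5, 0.5], [0.5, 0.5]]
assert np.allclose(p @ p, p)                          # still a projection
assert all(np.allclose(g @ p, p @ g) for g in group)  # now G-equivariant
```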
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cool'n'Quiet** Cool'n'Quiet: AMD Cool'n'Quiet is a CPU dynamic frequency scaling and power saving technology introduced by AMD with its Athlon XP processor line. It works by reducing the processor's clock rate and voltage when the processor is idle. The aim of this technology is to reduce overall power consumption and lower heat generation, allowing for slower (and thus quieter) cooling fan operation. The objectives of cooler and quieter operation give the technology its name, Cool'n'Quiet. The technology is similar to Intel's SpeedStep and AMD's own PowerNow!, which were developed with the aim of increasing laptop battery life by reducing power consumption. Cool'n'Quiet: Due to their different usage, Cool'n'Quiet refers to desktop and server chips, while PowerNow! is used for mobile chips; the technologies are similar but not identical. This technology was also introduced on "e-stepping" Opterons, where it is called Optimized Power Management, essentially a re-tooled Cool'n'Quiet scheme designed to work with registered memory. Cool'n'Quiet is fully supported in the Linux kernel from version 2.6.18 onward (using the powernow-k8 driver) and in FreeBSD from 6.0-CURRENT onward. Implementation: In order to take advantage of Cool'n'Quiet technology in Microsoft's operating systems: Cool'n'Quiet should be enabled in the system BIOS. In Windows XP and 2000: the "Minimal Power Management" profile must be active in "Power Schemes". A PPM driver was also released by AMD that facilitates this. In Windows Vista and 7: "Minimum processor state", found under "Processor Power Management" in "Advanced Power Settings", should be set lower than "100%". Also, in Windows Vista and 7 the "Power Saver" power profile allows a much lower power state (frequency and voltage) than the "High Performance" profile. Unlike Windows XP, Windows Vista only supports Cool'n'Quiet on motherboards that support ACPI 2.0 or later. With earlier versions of Windows, processor drivers along with Cool'n'Quiet software also need to be installed. The latest version is 1.3.2.0. Third party utilities: In addition to the CPU drivers offered by AMD, several motherboard manufacturers have released software to give the end user more control over the Cool'n'Quiet feature, as well as the other new features of AMD processors and chipsets. Using these applications, one can even control the CPU voltage explicitly. PhenomMsrTweaker; RMClock. Processors supporting Cool'n'Quiet: Athlon XP; Athlon 64 and X2 – all models; Athlon 64 FX – FX-53 (Socket 939 only) and higher FX (Socket 942); Athlon II – all models; Sempron – Socket 754: 3000+ and higher, Socket AM2: 3200+ and higher; Opteron – E-stepping and higher, branded as Optimized Power Management; Phenom – all versions support Cool'n'Quiet 2.0; Phenom II – supports Cool'n'Quiet 3.0; some of the APUs; Ryzen – 3, 5, 7, and 9, all models; EPYC
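On Linux, the effect of Cool'n'Quiet (via the powernow-k8 driver or a successor cpufreq driver) can be observed through the kernel's cpufreq sysfs interface. The following Python sketch is a minimal, Linux-only illustration; it assumes a cpufreq-capable kernel and simply reads whatever scaling files are present.

```python
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

# Frequency values reported by the kernel are in kHz.
for name in ("scaling_driver", "scaling_governor",
             "scaling_min_freq", "scaling_cur_freq", "scaling_max_freq"):
    f = CPUFREQ / name
    if f.exists():
        print(f"{name}: {f.read_text().strip()}")
    else:
        print(f"{name}: not available on this system")
```

Watching scaling_cur_freq while the machine idles and while it is loaded shows the clock dropping and rising as the driver steps the processor between its power states.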
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Conspirituality** Conspirituality: Conspirituality is a portmanteau neologism describing the overlap of conspiracy theories with spirituality, typically of New Age varieties. Contemporary conspirituality became common in the 1990s. Characterization: The term was coined for the 2011 study "The Emergence of Conspirituality" by sociologists Charlotte Ward and David Voas, published in the Journal of Contemporary Religion. They characterized the movement as follows: "It offers a broad politico-spiritual philosophy based on two core convictions, the first traditional to conspiracy theory, the second rooted in the New Age: 1) a secret group covertly controls, or is trying to control, the political and social order, and 2) humanity is undergoing a 'paradigm shift' in consciousness. Proponents believe that the best strategy for dealing with the threat of a totalitarian 'new world order' is to act in accordance with an awakened 'new paradigm' worldview." A 2020 opinion piece in ABC Australia said that, as with other extremist movements, the conspirituality narrative portrayed its followers as more enlightened than mainstream society and subject to persecution because of their awareness of the "real truth". Ward and Voas considered the combination of optimistic, holistic New Age culture and pessimistic, conservative conspiracy culture to be paradoxical. Conspirituality includes the "dark occulture" of conspiracy culture. The uniting philosophy of conspirituality movements is a belief that society is under the covert control of a group of elites, and that it can be emancipated from that control by a "paradigm shift in consciousness that harnesses cosmic forces". The appeal of conspirituality is the narcissistic idea of being the one to unravel the true explanations for all that is wrong in the world. Characterization: Alex McKeen, writing in The Toronto Star, says: Conspiritualists share a conviction that enlightenment exists in a dimension that is separate and above politics, science and everything as banal as “three dimensional” human concerns (a common spirituality trope is reaching five-dimensional consciousness). Once you experience it — and it’s a subjective, private experience — you can’t relate anymore in “3D.” Asbjørn Dyrendal counters that combining conspiracy theory with New Age spirituality is not new, and that Western esotericism is inherently suspicious. Both conspiracy culture and esotericism emphasize secrecy and the revelation of higher knowledge. He identifies Marta Steinsvik, Alf Larsen, Bertram Dybwad Brochmann, and neo-paganism as early examples of the promotion of alternative spirituality and conspiracy theory. Jules Evans, an honorary research fellow at the Center for the History of Emotions at Queen Mary University of London, identifies an overlap between alternative spirituality and far-right populism among traditionalists. Ward and Voas said that those with New Age beliefs are sometimes more prone to thinking like conspiracy theorists. The study describes The Zeitgeist Movement, an activist group, as part of the conspirituality movement. Conspirituality has been linked to the far-right conspiracy theory QAnon and COVID-19 conspiracy theories, as well as the Movement for Spiritual Integration into the Absolute (MISA) and the New Age religious movement Love Has Won. Online yoga and wellness communities have seen members posting conspiracies about Covid-19, masks, and QAnon-related child exploitation claims. 
9/11 conspiracy theories spread through New Age communities such as "lightworkers" and "indigo children". Anthropologist of religion Dr. Adam Klin-Oron says that in Israel, "we are seeing people who used to talk about 'love' and 'light' standing shoulder to shoulder with those who believe there is a ring of pedophiles that drink the blood of babies". In Norway, the online magazine Nyhetsspeilet (The News Mirror) has been described as the "flagship of conspirituality". Its goal of "triple awakening" focuses on consciousness and spirituality, extraterrestrial visitors, and New World Order conspiracy theories. The Conspirituality podcast updates listeners on the intersection between the "wellness" industry and conspiracy theories, referring to it as "disaster spirituality". People described as members of the movement: Jake Angeli, an American conspiracy theorist also known as the "QAnon Shaman"; Pete Evans, an Australian chef and conspiracy theorist; David Icke, an English conspiracy theorist; Russell Brand, an English comedian and activist; Christiane Northrup, an obstetrician and gynecologist who promotes pseudoscience; JP Sears, an American YouTuber and comedian
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fire HD** Fire HD: The Fire HD, also known as the Kindle Fire HD in the generations prior to 2014, is a member of the Amazon Fire family of tablet computers. Fire HD refers to Amazon Fire family tablets with HD resolution. Across its many generations, the Fire HD subfamily includes: 7" and 8.9" (2012 models), 7" (2013 model), 6" and 7" (2014 models), 8" and 10.1" (2015 models), 8" (2016 model), 8" and 10.1" (2017 models), 8" (2018 model), 10.1" (2019 model), 8" (2020 model), 10.1" (2021 model), 8" (2022 model), and 11" (2023 model). These devices run the Fire OS operating system. History: The first Fire HD model was announced on September 6, 2012, and was available in two versions: 7" and 8.9". The 7" model was released in the United States on September 14; in France, Germany, Italy, Spain and the United Kingdom on October 25; and in Japan on December 18. The 8.9" model was released on November 20 in the United States, in Japan on March 12, 2013, in Germany on March 13, and in India on June 27. On September 25, 2013, an updated Fire HD 7 was quietly announced alongside the newly debuted flagship Kindle Fire HDX line. It was available as a pre-order until the official ship date of October 2, 2013. Changes included: a price reduction to $139, a processor speed upgrade to 1.5 GHz, a firmware upgrade from the unnamed "Android based" OS to a compatible proprietary fork of Android named Fire OS 3, removal of the front camera, a new shell form factor, and decreased available storage options. History: On October 2, 2014, the next revision of Fire HD models was released as part of the Fire tablet's fourth generation, with 6-inch and 7-inch touchscreen sizes. In addition, the Fire HD Kids Edition was released, which is the same device as the Fire HD 6 except that it comes with a case and a one-year subscription to Kindle Freetime apps. The "Kindle" branding was officially removed from the tablets' name. History: In September 2015, Amazon released a new range of Fire tablets in 7, 8, and 10.1 inch sizes. The 7 inch was simply called the Fire 7, while the 8" and 10.1" were called Fire HD 8 and Fire HD 10 respectively. Amazon had ended the HDX line after two generations, and the new model range shifted the entire Fire tablet line down-market, with the Fire 7 as the lowest priced Fire tablet at $50. History: In September 2016, Amazon announced the release of the updated Fire HD 8, which includes the virtual assistant Alexa, priced at US$89.99. Fortune reported that, "As with most of Amazon's devices, the aim isn't to make money off of the hardware but instead to sell digital content such as books, movies, and TV shows to users". In 2017, the seventh generation Fire HD 8 was released. Some differences between the 6th and 7th generation HD 8 models were the price, the removal of the gyroscope, an increase in maximum SD card expansion, and a better graphics chip. History: In September 2018, Amazon refreshed its Fire tablet line with the release of the eighth generation Fire HD 8/Kids Edition and Fire HD 10. The price remained the same as the previous year's model, with a minor hardware upgrade: external storage became expandable to 400 GB. On the software side, the 2018 model comes preinstalled with Fire OS 6, which allows hands-free Alexa control. History: On October 7, 2019, Amazon announced an update to the Fire HD 10 that was released on October 30, 2019. 
The major hardware differences compared to the previous version were the replacement of microUSB with USB-C, a faster processor (upgraded from a quad-core at up to 1.8 GHz to an octa-core at 2.0 GHz, which Amazon claims is 30% faster than the previous one) and battery life that is 2 hours longer than the previous generation. The screen size and price remained unchanged. A new color option, white, was also added. History: In 2020, the Fire HD 8 was updated with a faster 64-bit quad-core SoC, more storage (from 16/32 GB to 32/64 GB), USB-C, a brighter display, improved Wi-Fi connectivity, a new front camera location (for landscape video chats instead of portrait), and Fire OS 7 (based on Android 9). It came in two versions: 2 GB RAM (HD 8) and 3 GB RAM (HD 8 Plus). The 3 GB RAM version, which enabled more memory-intensive apps, was available only in the US, UK, DE, and JP marketplaces. History: In 2021, the Fire HD 10 was refreshed with a new front camera location (for landscape video chats instead of portrait) and slimmer side bezels, leading to a more symmetrical design. Also, wireless charging was introduced for the "HD 10 Plus" model, which also has 4 GB of RAM. History: In 2023, Amazon released an all-new size of Fire tablet called the Fire Max 11. The device features a larger display at 11" with official USI 2.0 stylus support, pogo pins for an external keyboard, and a fingerprint sensor. All Max 11 tablets have a partially recycled aluminum rear housing instead of plastic. All of these are new technologies for Fire tablets. A short-lived option for an aluminum rear housing was available for the 5th generation HD 10 and was not continued in later generations. The Max 11 also features newer hardware such as a faster and more efficient octa-core SoC, 4 GB RAM, 8 MP cameras, Wi-Fi 6, and Bluetooth 5.3 BLE. Despite the change in naming convention, Amazon considers the Max 11 to be part of the Fire HD series. Design: Hardware The Fire tablets feature multi-touch touchscreen LCD screens. The first generation 7" model contains a Texas Instruments OMAP 4460 processor, while the 8.9" model uses an OMAP 4470 processor. All three models feature Dolby audio and stereo speakers. The 7" model's speakers are dual-driver, while the 8.9" model's are single-driver. The device has two Wi-Fi antennas on the 2.4 GHz and 5 GHz bands which utilize MIMO to improve reception. The Fire HD also added Bluetooth connectivity, allowing users to connect an array of wireless accessories, including keyboards. The first generation models have an HDMI port, but this is missing from later generations. Design: In June 2016, Amazon released a version of the Fire HD 10 that has an aluminum exterior instead of plastic like the other Fire tablets, available at the same price as the plastic version. The first Fire HD model to get an octa-core processor was the HD 10 tablet released in 2019. USB-C replaced microUSB for charging in the HD 10 in 2019 and the HD 8 in 2020. The position of the front-facing camera was redesigned to permit landscape video chats in the HD 8 in 2020 and the HD 10 in 2021. Higher-RAM versions were also released: 3 GB RAM in the "HD 8 Plus" in 2020 and 4 GB RAM in the "HD 10 Plus" in 2021. Design: In 2023, the introduction of the Fire Max 11 added many new features and technologies to the Fire tablet line. Most notably it included active stylus support. All previous models were limited to simple capacitive styluses that behave like a finger, without any of the enhanced accuracy or features that active styluses are capable of. 
The Max 11 also included a fingerprint sensor on the power button, as well as a special connector for an external keyboard. It was the first Fire model to ship without a 3.5mm headphone socket, instead using USB audio over the USB-C connector. Design: Software The 2012 models use software that introduced user profiles for sharing among family members and the ability to place absolute limits on total usage or usage of individual features, called FreeTime, which also tracks the user's reading speed to predict when the user will finish a chapter or book. The OS is based on a version of Android 4.0.3 "Ice Cream Sandwich". This does not allow use of Google Play, limiting the number of apps accessible for the Fire HD. Fire HD software updates can be received OTA or from the support websites. The Fire HD 7" second generation used Fire OS 3. Note that although this version is called the Fire HD 7", it is not the successor to the original Fire HD; this model is the successor to the Fire second generation. The Fire HD models of the second generation were updated to Fire OS 4.1.1, based on Android 4.4.4, in Q3 2014. Design: The Fire HD 6" and 7" third generation use Fire OS 4 "Sangria", which features profiles so that each user on the tablet can have their own settings and apps. The Fire HD 8 and 10 fifth generation use Fire OS 5 "Bellini" and were released in late 2015. In September 2016, Amazon released the virtual assistant Alexa for the sixth generation Fire tablets. The 2018 model of the Fire HD 8 has Fire OS 6 preinstalled, which is based on Android 7.1 "Nougat". It also includes Alexa Hands-Free and the new "Show Mode", in which the tablet acts like an Amazon Echo Show. The 2019 model of the Fire HD 10 (and the 2020 model of the Fire HD 8) has Fire OS 7 preinstalled, which is based on Android 9 "Pie". The 2021 model of the Fire HD 10 / Fire HD 10 Plus introduced 64-bit app support (the arm64-v8a Android ABI) for the first time in the Fire HD series. Models: An overview of generations and models for all Fire (including Amazon Fire) tablet devices, and detailed specifications for all Fire HD tagged tablet devices, were given in tables not reproduced here; the specifications covered only tablets Amazon lists in the HD family. The model number consists of three parts: first the KF prefix for 'Kindle Fire', second one or two letters derived from the code name, third WI for Wi-Fi or WA for cellular interface.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Semantic externalism** Semantic externalism: In the philosophy of language, semantic externalism (the opposite of semantic internalism) is the view that the meaning of a term is determined, in whole or in part, by factors external to the speaker. According to an externalist position, one can claim without contradiction that two speakers could be in exactly the same brain state at the time of an utterance, and yet mean different things by that utterance -- that is, at the least, that their terms could pick out different referents. Overview: The philosopher Hilary Putnam (1975/1985) proposed this position and summarized it with the statement "meanings just ain't in the head!" Although he did not use the term "externalism" at the time, Putnam is thought to have pioneered semantic externalism in his 1975 paper "The Meaning of 'Meaning'". His Twin Earth thought experiment, from the aforementioned paper, is widely cited to illustrate his argument for externalism to this day. Alongside Putnam, credit also goes to Saul Kripke and Tyler Burge, both of whom attacked internalism for independent reasons, providing a foundation on which Putnam's attacks rested. Overview: Externalism is generally thought to be a necessary consequence of any causal theory of reference; since the causal history of a term is not internal, the involvement of that history in determining the term's referent is enough to satisfy the externalist thesis. However, Putnam and many subsequent externalists have maintained that not only reference, but sense as well is determined, at least in part, by external factors (see sense and reference). Overview: While it is common to shorten "semantic externalism" to "externalism" within the context of the debate, one must be careful in doing so, as there are several distinct debates in philosophy that employ the terms "externalism" and "internalism". Arguments for externalism: Putnam presented a variety of arguments for the externalist position, the most famous being those that concerned Twin Earth. Subsequent philosophers have produced other, related thought experiments, most notably Donald Davidson's swamp man experiment. However, there have been numerous arguments for externalism that do not involve science-fiction scenarios. Arguments for externalism: Putnam pointed out, for instance, that he has no knowledge that could distinguish elm trees from beech trees. He has precisely the same concept of one as of the other: "a deciduous tree growing in North America". Yet when Putnam makes a statement containing the word "elm", we take him to be referring specifically to elms. If he makes a claim about a property of elm trees, it will be considered true or false, depending upon whether that property applies to those trees which are in fact elms. There is nothing "in the head" that could fix his reference thus; rather, he concluded, his linguistic community, containing some speakers who did know the difference between the two trees, ensured that when he said "elm", he referred to elms. Putnam refers to this feature of language as "the division of linguistic labor".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Endurance (aeronautics)** Endurance (aeronautics): In aviation, endurance is the maximum length of time that an aircraft can spend in cruising flight. In other words, it is the amount of time an aircraft can stay in the air with one load of fuel. Endurance is different from range, which is a measure of distance flown. For example, a typical sailplane exhibits high endurance characteristics but poor range characteristics. Endurance (aeronautics): Endurance can be defined as:

$$E = \int_{t_1}^{t_2} dt = -\int_{W_1}^{W_2} \frac{dW}{F} = \int_{W_2}^{W_1} \frac{dW}{F},$$

where W stands for fuel weight, F for fuel flow (the weight of fuel consumed per unit time), and t for time. Endurance can factor into aviation design in a number of ways. Some aircraft, such as the P-3 Orion or the U-2 spy plane, require high endurance characteristics as part of their mission profile (often referred to as loiter time on target). Endurance is a prime factor in determining the fuel fraction of an aircraft. Endurance, like range, is also related to fuel efficiency; fuel-efficient aircraft will tend to exhibit good endurance characteristics.
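For the special case of constant fuel flow, the integral collapses to a simple quotient; the figures in the worked example below are illustrative placeholders, not taken from the article.

```latex
% With constant fuel flow F, the endurance integral reduces to
\[
  E = \int_{W_2}^{W_1} \frac{dW}{F} = \frac{W_1 - W_2}{F}.
\]
% Example (hypothetical figures): an aircraft that can burn
% 2000 kg of fuel at a steady 600 kg/h has
\[
  E = \frac{2000\ \text{kg}}{600\ \text{kg/h}} \approx 3.3\ \text{h}.
\]
```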
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Omniview technology** Omniview technology: Omniview technology (also known as surround view or bird view) is a vehicle parking assistant technology that was first introduced in 2007 as the "Around View Monitor" option for the Nissan Elgrand and Infiniti EX. It is designed to assist drivers in monitoring their surroundings, for example while parking a vehicle in a small space. Principle: Early vehicle parking assistant products used ultrasonic parking sensors and/or a single rear-view camera to view and obtain distances to objects surrounding the vehicle, providing drivers with an audible alarm or rear-view video through a fisheye lens. There are some drawbacks to these early products: the alarm only provides a proximity warning but not the position of the object(s) relative to the vehicle, and the rear-view camera has a limited field of view. Omniview technology overcomes these problems and has seen increasing availability. Principle: Omniview system In most omniview systems, there are four wide-angle cameras: one at the front of the vehicle, one at the back of the vehicle, and one in each of the side-mounted rear-view mirrors. The four cameras have overlapping fields of view that collectively cover the whole area around the vehicle and serve as an omnidirectional (360-degree) camera. Video from the cameras is sent to the processor, which synthesizes a bird's-eye view from above the vehicle by stitching the video feeds together, correcting distortion, and transforming the perspective. In some cases, ultrasonic sensors are used in combination with the omniview system to provide distance information and highlight the relevant view that may be affected by potential obstacles. Because the bird's-eye view is a simulated perspective built from camera inputs much closer to the ground, objects at ground level will appear relatively undistorted while those above the ground will appear to "lean away" from the vehicle. In addition, if the same object is captured by the overlapping fields of two cameras, it can appear to lean away in two different directions. History: The first vehicle equipped with Nissan's "Around View Monitor" was the Japanese-market Elgrand, introduced in November 2007. In America, the system was introduced one month later, as an option for the EX35 from Nissan's luxury marque Infiniti. At about the same time, Mitsubishi Motors and Honda implemented similar functionality as the "Multi-around monitor system" for the Delica and the "Multi-View Camera System" for the Odyssey, respectively. Third-party automotive component suppliers such as Freescale Semiconductor and Continental AG have developed and marketed modular omniview systems, the latter through the acquisition of Application Solutions Ltd. (ASL Vision). Nissan has since added moving object detection using the cameras, billing the system as the "Intelligent Around View Monitor" (I-AVM). In 2016, stuntman Paul Swift used the I-AVM system to match the world record for the tightest J-turn in a specially prepared Nissan Juke, using a space just 18 cm (7.1 in) wider than the vehicle's length to turn it around with the windows completely blacked out. An omniview system that uses four cameras and displays a three-dimensional rendering of the vehicle and its surroundings has been proposed as a logical next step to increase the driver's awareness.
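The core perspective transformation in such a system can be sketched with OpenCV. The snippet below is a simplified, single-camera illustration: the four pixel/ground correspondences are invented placeholder values that in a real system would come from camera calibration, and a full pipeline would additionally undistort each fisheye feed and stitch four warped views together.

```python
import cv2
import numpy as np

frame = cv2.imread("rear_camera.jpg")  # placeholder input image

# Four points on the ground plane as seen by the camera (pixels) ...
src = np.float32([[420, 560], [860, 560], [1180, 720], [100, 720]])
# ... and where they should land in the top-down composite (pixels).
dst = np.float32([[300, 100], [500, 100], [500, 400], [300, 400]])

# 3x3 homography mapping the camera view onto the bird's-eye plane
M = cv2.getPerspectiveTransform(src, dst)

# Warp the frame into the top-down view (800x600 output canvas)
birds_eye = cv2.warpPerspective(frame, M, (800, 600))
cv2.imwrite("birds_eye.jpg", birds_eye)
```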
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Code page 851** Code page 851: Code page 851 (CCSID 851) (CP 851, IBM 851, OEM 851) is a code page used under DOS to write the Greek language, although it lacks the letters Ϊ and Ϋ. It covers the German language as well. It also covers some accented letters of the French language, but it lacks most of the accented capital letters required for French. It is also called MS-DOS Greek 1. It has been superseded by Code page 869. Character set: The following table shows code page 851. Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as code page 437.
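To illustrate what a DOS code page mapping means in practice: Python's standard library does not ship a cp851 codec, but it does include cp869, the successor named above, which is enough to show that the same high bytes mean different characters under different legacy encodings.

```python
# Round-trip Greek text through the DOS Greek 2 code page (cp869), then
# misinterpret the same bytes as code page 437 to show the mapping matters.
data = "αβγ".encode("cp869")  # Greek text as single high bytes
print(data.decode("cp869"))   # -> αβγ (correct code page)
print(data.decode("cp437"))   # same bytes, entirely different characters
```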
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electronic identification** Electronic identification: An electronic identification ("eID") is a digital solution for proof of identity of citizens or organizations. They can be used to access benefits or services provided by government authorities, banks or other companies, for mobile payments, etc. Apart from online authentication and login, many electronic identity services also give users the option to sign electronic documents with a digital signature. Electronic identification: One form of eID is an electronic identification card (eIC), which is a physical identity card that can be used for online and offline personal identification or authentication. The eIC is a smart card in the ID-1 format of a regular bank card, with identity information printed on the surface (such as personal details and a photograph) and stored in an embedded RFID microchip, similar to that in biometric passports. The chip stores the information printed on the card (such as the holder's name and date of birth) and the holder's photo(s). Several photos may be taken from different angles along with different facial expressions, allowing biometric facial recognition systems to measure and analyze the overall structure, shape and proportions of the face. It may also store the holder's fingerprints. The card may be used for online authentication, such as for age verification or for e-government applications. An electronic signature, provided by a private company, may also be stored on the chip. Electronic identification: Countries which currently issue government-issued eIDs include Afghanistan, Bangladesh, Belgium, Bulgaria, Chile, Estonia, Finland, Guatemala, Germany, Iceland, India, Indonesia, Israel, Italy, Latvia, Lithuania, Luxembourg, the Netherlands, Nigeria, Morocco, Pakistan, Peru, Portugal, Poland, Romania, Saudi Arabia, Spain, Slovakia, Malta, and Mauritius. Germany, Uruguay and previously Finland have accepted government-issued physical eICs. Norway, Sweden and Finland accept bank-issued eIDs (also known as BankID) for identification by government authorities. There is also an increasing number of countries applying electronic identification for voting (enrollment, issuing voter ID cards, voter identification and authentication, etc.), including countries using biometric voter registration. eID in Europe: European Union According to the EU electronic identification and trust services (eIDAS) Regulation, described as a pan-European login system, all organizations delivering public digital services in an EU member state must accept electronic identification from all EU member states from 29 September 2018. Belgium Belgium has been issuing eIDs since 2003, and all identity cards issued since 2004 have been electronic, replacing the previous plastic card. eID in Europe: Chip contents The eID card contains a chip storing: the same information as is legible on the card, the address of the card holder, the identity and signature keys and certificates, fingerprints, and place of birth. Using the eID At home, users can use their electronic IDs to log into specific websites (such as Tax-on-web, allowing them to fill in their tax form online). To do this the user needs an eID card, a smartcard reader, and the eID middleware software. When other software (such as an Internet browser) attempts to read the eID, the user is asked to confirm this action, and potentially to enter their PIN. Other applications include signing emails with the user's eID certificate private key.
Giving the public key to your recipients allows them to verify your identity. eID in Europe: Kids ID Although legally Belgian citizens only have to carry an ID from the age of 12, as of March 2009, a "Kids ID" has been introduced for children below this age, on a strictly voluntary basis. This ID, besides containing the usual information, also holds a contact number that people, or the child themselves, can call when they, for example, are in danger or have had an accident. The card can be used for electronic identification after the age of six, and it does not contain a signing certificate, as minors cannot sign a legally binding document. An important goal of the Kids-ID card is to allow children to join "youth-only" chat sites, using their eID to gain entrance. These sites would essentially block any users above a certain age from gaining access to the chat sessions, effectively blocking out potential pedophiles. eID in Europe: Bulgaria Bulgaria introduced a limited-scale proof of concept of electronic identity cards, called ЕИК (Eлектронна карта за идентичност), in 2013. Croatia Croatia introduced its electronic identity cards, called e-osobna iskaznica, on 8 June 2015. Denmark Electronic identities in Denmark issued by banks are called NemID. NemID authentication allows larger payments in MobilePay, a service used by more than half of the population as of 2017. eID in Europe: Estonia The Estonian ID card is also used for authentication in Estonia's Internet-based voting system. In February 2007, Estonia was the first country to allow electronic voting for parliamentary elections. Over 30,000 voters participated in the country's e-election. At the end of 2014, Estonia extended the Estonian ID card to non-residents. The target of the project is to reach 10 million e-residents by 2025, about 8 times the Estonian population of 1.3 million. eID in Europe: Finland The Finnish electronic ID was first issued to citizens on 1 December 1999. Electronic identities in Finland are also issued by banks. They make it possible to log in to the services of Finnish authorities, universities and banks, and to make larger payments using the MobilePay mobile payment service. The mobiilivarmenne utilizes the mobile phone SIM card for authentication and is financed by a fee paid to the mobile network operator for each authentication. eID in Europe: Germany Germany introduced its electronic identity cards, called Personalausweis, in 2010. eID in Europe: Iceland In Iceland, electronic IDs (Icelandic: Rafræn skilríki) are extensively used by the public and private sector today and were first introduced in 2008. The most widely used version today is on a mobile phone, with the authentication key held on a SIM card. In Iceland 95% of the eligible population (13 years or older) has an active eID, including 75% of over-75s. Icelandic eID holders used their eID more than 20 times a month in 2021. During enrollment, users create a PIN. Each time they need to identify, verify or sign something online, a prompt via flash SMS is initiated and the PIN code is validated. Today this system is used by all banks, government services (the island.is portal), healthcare, education, document signing, and over 300 private companies for customer page logins (linked to the Icelandic ID number). Since the only things to remember are one's PIN code and phone, it is very prevalent, and works as a sort of single-sign-on service.
They are administered by Auðkenni hf., which was initially created by a consortium of banks but is now owned by the government. The first form of the system in 2008 was a special smartcard with an EMV chip, paired with a smartcard reader on the client's computer. The smartcard was first introduced in late 2008 for employees of government departments, large companies and the healthcare system. It was rolled out to all departments and companies handling sensitive data. It was also possible to store one's eID on a debit card. In November 2013 the SIM card implementation for mobile phones was introduced, which led to a much quicker take-up of eIDs due to its ease of use. By 2014, 40% of Icelanders were using eIDs. eID in Europe: Italy Italy introduced its electronic identity cards, called Carta d'Identità Elettronica (identified in Italy with the acronym CIE), to replace the paper-based ID card. Since 4 July 2016, Italy has been in the process of renewing all ID cards as electronic ID cards. Latvia The eID and eSignature service provider in Latvia is called eParaksts. Malta Since 12 February 2014, Malta has been in the process of renewing all ID cards as electronic ID cards. Netherlands Electronic identities in the Netherlands are called DigiD, and the Netherlands is currently developing an eID scheme. eID in Europe: Norway Electronic identities in Norway issued by banks are called BankID (different from Sweden's BankID). They make it possible to log in to Norwegian authorities, universities and banks, and to make larger payments using the Vipps mobile payment service, used by more than half of the population as of 2017. The Norwegian BankID på mobil service utilizes the mobile phone SIM card for authentication and is financed by a fee paid to the mobile network operator for each authentication. eID in Europe: Romania Since 25 May 2023, Romanians have been able to use their national ID to sign up to the RoEID application, which allows them to access public services. Spain Electronic identity cards in Spain are called DNIe and have been issued since 2006. Switzerland SwissID, developed by SwissSign, is a certified digital ID in Switzerland offered since 2017 (2010–17 as SuisseID). As a basis for a new Federal Act on Electronic Identification Services (e-ID Act), an eID concept had been developed by the authorities, though experts criticized its technical aspects. eID in Europe: The law was accepted by the Swiss parliament on 29 September 2019. It would have updated current legislation and would have continued to allow private companies or public organizations to issue eIDs if certified by a new federal authority. However, an optional referendum called for a public vote on this issue, held on Sunday, 7 March 2021. The vote resulted in 35.6% Yes and 64.4% No, rejecting the proposed new law. SwissSign might develop the SwissID further, to make it compatible with future e-ID regulations. eID in Europe: Sweden The most widespread electronic identification in Sweden is issued by banks and called BankID. The BankID may be in the form of a certificate file on disk, on a card, or on a smartphone. The latter (the Swedish mobile BankID service) was used by 84 percent of the Swedish population in 2019. A mobile BankID login does not require a fee, since the service is provided by banks rather than mobile operators. It can be used both for authentication within various apps and web services on the same smartphone, and also for web pages on other devices.
It also supports fingerprint and face recognition authentication on compatible iOS and Android devices. eID in Europe: Electronic IDs are used for secure web login to Swedish authorities, banks, health centers (allowing people to see their medical records and prescriptions and book doctor's visits), and companies such as pharmacies. Mobile BankID also enables the Swish mobile payment service, utilized by 78 percent of the Swedish population in 2019, at first mainly for payments between individuals. BankID was previously used for university applications and admissions, but this was prohibited by Swedbank since universities utilized the system for distribution of their own student logins. Increasingly, BankID is used as added security for signing contracts. eID in other countries: Afghanistan Afghanistan issued its first electronic ID (e-ID) card on 3 May 2018. Afghan President Ashraf Ghani was the first to receive the card. The President was accompanied by First Lady Rula Ghani, his vice president, the head of the Afghan Senate, the head of the Afghan Parliament, the Chief Justice and other senior government officials, who also received their cards. As of January 2021, approximately 1.7 million Afghan citizens have obtained their e-ID cards. eID in other countries: Costa Rica Costa Rica plans to introduce facial recognition data into its national identification card. Guatemala Guatemala introduced its electronic identity card, called DPI (Documento Personal de Identificación), in August 2010. India Indonesia The Indonesian electronic ID was trialed in six areas in 2009 and launched nationwide in 2011. Israel Electronic identity cards in Israel have been issued since July 2013. Kazakhstan Kazakhstan introduced its electronic identity cards in 2009. Mauritius Mauritius has had electronic identity cards since 2013. eID in other countries: Mexico Mexico intended to develop an official electronic biometric ID card for all minors under the age of 18, called the Personal Identity Card (Record of Minors), which included the data verified on the birth certificate, including the names of the legal ascendant(s), a unique key of the Population Registry (CURP), a biometric facial recognition photograph, a scan of all 10 fingerprints, and an iris scan registration. eID in other countries: Nigeria General multi-purpose electronic identity cards are issued by the National Identity Management Commission (NIMC), a Federal Government agency under the Presidency. The NeID card complies with ICAO standard 9303 and ISO standard 7816-4, as well as GVCP for the MasterCard-supported payment applet. NIMC plans to issue 50 million multilayer-polycarbonate cards, the first set being contact-only, with dual-interface cards featuring DESFire emulation to follow in the near future. eID in other countries: Pakistan Pakistan officially began its nationwide Computerized National Identity Card (CNIC) distribution in 2002, with over 89.5 million CNICs issued by 2012. In October 2012, the National Database and Registration Authority (NADRA) introduced the smart national identity card (SNIC), which contains a data chip and 36 security features. The SNIC complies with ICAO standard 9303 and ISO standard 7816-4. The SNIC can be used for both offline and online identification, voting, pension disbursement, social and financial inclusion programmes and other services. NADRA aims to replace all 89.5 million CNICs with SNICs by 2020. eID in other countries: Serbia Serbia has had its first trusted and reliable electronic identity since June 2019.
The first reliable service provider is The Office for IT and eGovernment, through which citizens and residents of Serbia can access services on the eGovernment Portal and the eHealth portal. The electronic identification offers two levels of security: a basic level with authentication by user name and password only, and a medium level with two-factor authentication. eID in other countries: Sri Lanka Since 1 January 2016, Sri Lanka has been in the process of developing a smart-card-based RFID e-National Identity Card, which will replace the obsolete laminated-type cards by storing the holder's information on a chip that can be read by banks, offices, etc., reducing the need to hold physical documentation by storing the information in the cloud. eID in other countries: Turkey In Turkey, the e-Government (e-Devlet) Gateway is a large-scale Internet site that provides access to all public services from a single point. The purpose of the Gateway is to present public services to citizens, enterprises and public institutions effectively and efficiently through information and communication technologies. Uruguay Uruguay has had electronic identity cards since 2015. The Uruguayan eID contains a private key that allows the holder to digitally sign documents, and stores the user's fingerprint in order to allow identity verification. It is also a valid travel document in some South American countries. As of 2017 the old laminated ID coexists with the new eID. Manufacturing: The term electronic identification is also applied in the manufacturing sector, where electronic identification technology is attached to individual parts or components within a manufacturing facility in order to track and identify these parts and so enhance manufacturing efficiency. This is also referred to as location detection technology within the Fourth Industrial Revolution.
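The sign-and-verify pattern that eID chips support can be sketched with a generic ECDSA key pair. This is a minimal illustration assuming the Python cryptography package; on a real eID the private key is generated on, and never leaves, the card, whereas here it is generated in software purely for demonstration.

```python
# Generic digital-signature sketch: sign a document, verify with the public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the card's key
public_key = private_key.public_key()                  # shared with recipients

document = b"I agree to the terms of this contract."
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature if the document or signature was altered.
public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```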
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CeCILL** CeCILL: CeCILL (from CEA CNRS INRIA Logiciel Libre) is a free software license adapted to both international and French legal matters, in the spirit of and retaining compatibility with the GNU General Public License (GPL). CeCILL: It was jointly developed by a number of French agencies: the Commissariat à l'Énergie Atomique (Atomic Energy Commission), the Centre national de la recherche scientifique (National Centre for Scientific Research) and the Institut national de recherche en informatique et en automatique (National Institute for Research in Computer Science and Control). It was announced on 5 July 2004 in a joint press communication of the CEA, CNRS and INRIA. CeCILL: It has gained the support of the main French Linux User Group and of the Minister of Public Function, and was considered for adoption at the European level before the European Union Public Licence was created. Terms: The CeCILL grants users the right to copy, modify, and distribute the licensed software freely. It defines the rights as passing from the copyright holder to a "Licensor", which may be the copyright holder or a further distributor, to the user or "Licensee". Like the GPL, it requires that modifications to the software be distributed under the CeCILL, but it makes no claim to work that executes in "separate address spaces", which may be licensed under terms of the licensee's choice. It does not grant a patent license (as some other common open-source licenses do), but rather includes a promise by the licensor not to enforce any patents it owns. In Article 9.4, the licensor agrees to provide "technical and legal assistance" if litigation regarding the software is brought against the licensee, though the extent of the assistance "shall be decided on a case-by-case basis...pursuant to a memorandum of understanding". Terms: The disclaimers of warranty and liability are written in a manner different from other common open-source licenses in order to comply with French law. The CeCILL does not preclude the licensor from offering a warranty or technical support for its software, but requires that such services be negotiated in a separate agreement. Terms: The license is compatible with the GPL through an explicit relicensing clause. Article 13's explicit reference to French law and a French court does not limit users, who can still choose a jurisdiction of their choice by mutual agreement to solve any litigation they may experience. The explicit reference to a French court will be used only if mutual agreement is not possible; this immediately solves the problem of competence of laws (something that the GPL does not solve cleanly, except when all parties in a litigation are in the USA). Versions: Version 2 was developed after consultations with the French-speaking Linux and Free Software Users' Association, the Association pour la Promotion et la Recherche en Informatique Libre, and the Free Software Foundation; it was released on 21 May 2005. According to the CeCILL FAQ there are no major differences in spirit, though there are in terms. Versions: The most notable difference in CeCILL v2 is the fact that the English text was approved not as a draft translation (as in CeCILL v1) but as an authentic text, in addition to the equally authentic French version. This makes the CeCILL license much easier to enforce internationally, as the cost of producing an authentic translation in any international court will be lower with the help of a second authentic reference text.
The second difference is that the reference to the GNU General Public License, with which CeCILL v2 is now fully compatible, is explicitly defined precisely using its exact title and the exact name of the Free Software Foundation, to avoid all possible variations of the terms of the GPL v2. Some additional definitions were added to more precisely define the terms with less ambiguity. With these changes, the CeCILL is now fully enforceable according to WIPO rules, and according to French law in courts, without the legal problems remaining in GPL version 2 outside the United States. Versions: Version 2.1 was released in June 2013. It allows relicensing to the GNU Affero General Public License and the European Union Public License as well as the GPL, and clarifies the language that requires licensees to give access to the source code (which had previously caused rejection of version 2.0 by the Open Source Initiative). International protection and approbation of the CeCILL licenses: Note that CeCILL v1 already allowed replacing a CeCILL v1 license by CeCILL v2, so all software previously licensed with CeCILL v1 in 2004 can be licensed with CeCILL v2, with legal terms enforceable as authentic not only in French but in English too. International protection and approbation of the CeCILL licenses: The fact that it is protected by reputed public research centers (in France the INRIA, a founding member of the international W3 consortium, and the CEA working on atomic energy) which use them to publish their own open-source and free software, and by critical governmental organizations (which are also working in domains like military and defense systems) also gives much more security than using the GPL alone, as the license is supported officially by a government which is a full member of WIPO, and by an enforceable law. This also means that all international treaties related to the protection of intellectual rights do apply to CeCILL-licensed products, and so they are enforceable by law in all countries that signed any of the international treaties protected by WIPO. However, this also leaves open the possibility that the French government will make a future version of the CeCILL unfree and restricted. International protection and approbation of the CeCILL licenses: The CeCILL license is approved as a "Free Software" license by the FSF with which the CeCILL project founders have worked. Since version 2.1, CeCILL is also approved by the Open Source Initiative as an "Open Source" license. Other CeCILL licenses: The CeCILL project also adds two other licenses: CeCILL-B, which is fully compatible with BSD-like licenses (BSD, X11, MIT) which have a strong attribution requirement (which goes much further than a simple copyright notice), a requirement normally not allowed by the GPL itself (which describes it as an advertising requirement), and so this license may be incompatible with the original CeCILL license, if BSD-like components are integrated, unless the software uses a dual-licensing scheme and conforms to the licensing terms of all embedded components. Other CeCILL licenses: CeCILL-C, for "component" software, which is fully compatible with the FSF's LGPL license.These two licenses are also defined to make BSD-like and FSF's LGPL licenses enforceable internationally under WIPO rules. 
Notable users: Brian (software), G'MIC, MedinTux, Paradiseo, PhoX, Scilab, ScientificPython, SYNTAX, and Yass (software). Origins and general applicability: Although the three CeCILL licenses were developed and used for strategic French research systems (in the domains of defense, space launch systems, medical research, meteorology/climatology, and various domains of fundamental and applied physics), they are made to be usable also by the general public or any other commercial or non-profit organization, including from other governments, simply because these software components need and use (or are integrated with) component software or systems that were initially released under an open-source or free license, and they are operated by organizations that also have a commercial status. Origins and general applicability: Without these licenses, such systems could not have been built, used, and protected legally against various international patent claims. Due to the huge cost of these French strategic systems, a very strong licensing scheme was absolutely necessary to help protect these investments against illegitimate claims by other commercial third parties, and one of the first needs was to make the well-known open-source and free licenses fully compatible and protected under French law and the many international treaties ratified by France.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UTP—xylose-1-phosphate uridylyltransferase** UTP—xylose-1-phosphate uridylyltransferase: In enzymology, a UTP—xylose-1-phosphate uridylyltransferase (EC 2.7.7.11) is an enzyme that catalyzes the chemical reaction UTP + alpha-D-xylose 1-phosphate ⇌ diphosphate + UDP-xylose. Thus, the two substrates of this enzyme are UTP and alpha-D-xylose 1-phosphate, whereas its two products are diphosphate and UDP-xylose. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is UTP:alpha-D-xylose-1-phosphate uridylyltransferase. Other names in common use include xylose-1-phosphate uridylyltransferase; uridylyltransferase, xylose 1-phosphate; UDP-xylose pyrophosphorylase; uridine diphosphoxylose pyrophosphorylase; and xylose 1-phosphate uridylyltransferase. This enzyme participates in nucleotide sugar metabolism.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cheeger bound** Cheeger bound: In mathematics, the Cheeger bound is a bound on the second largest eigenvalue of the transition matrix of a finite-state, discrete-time, reversible stationary Markov chain. It can be seen as a special case of Cheeger inequalities in expander graphs. Let $X$ be a finite set and let $K(x, y)$ be the transition probability for a reversible Markov chain on $X$. Assume this chain has stationary distribution $\pi$. Define $Q(x, y) = \pi(x) K(x, y)$ and, for $A, B \subset X$, define
$$Q(A \times B) = \sum_{x \in A,\, y \in B} Q(x, y).$$
Define the constant $\Phi$ as
$$\Phi = \min_{S \subset X,\, \pi(S) \le \frac{1}{2}} \frac{Q(S \times S^c)}{\pi(S)}.$$
The operator $K$, acting on the space of functions on $X$ and defined by
$$(K\phi)(x) = \sum_{y} K(x, y)\, \phi(y),$$
has eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. It is known that $\lambda_1 = 1$. The Cheeger bound is a bound on the second largest eigenvalue $\lambda_2$. Theorem (Cheeger bound):
$$1 - 2\Phi \le \lambda_2 \le 1 - \frac{\Phi^2}{2}.$$
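The theorem is easy to check numerically. The sketch below, written in Python with NumPy, builds a small reversible chain (a lazy random walk on a four-node path, chosen only as a convenient example), brute-forces Φ over all subsets with π(S) ≤ 1/2, and confirms that λ₂ falls between the two bounds.

```python
# Numerical check of the Cheeger bound on a lazy random walk over a 4-node path.
import itertools
import numpy as np

n = 4
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 0.5  # laziness: stay put with probability 1/2
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    for j in nbrs:
        K[i, j] = 0.5 / len(nbrs)

# Stationary distribution: left eigenvector of K for eigenvalue 1.
w, v = np.linalg.eig(K.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()                      # -> [1/6, 1/3, 1/3, 1/6]

Q = pi[:, None] * K                     # Q(x, y) = pi(x) K(x, y)

# Brute-force Phi over all S with pi(S) <= 1/2.
phi = np.inf
for r in range(1, n):
    for S in itertools.combinations(range(n), r):
        S = list(S)
        if pi[S].sum() <= 0.5:
            Sc = [x for x in range(n) if x not in S]
            phi = min(phi, Q[np.ix_(S, Sc)].sum() / pi[S].sum())

lam2 = np.sort(np.real(np.linalg.eigvals(K)))[::-1][1]
print(f"Phi = {phi:.4f}, lambda2 = {lam2:.4f}")
print(f"{1 - 2 * phi:.4f} <= {lam2:.4f} <= {1 - phi**2 / 2:.4f}")
# Prints Phi = 0.1667, lambda2 = 0.7500, and 0.6667 <= 0.7500 <= 0.9861.
```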
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**McGuire Programme** McGuire Programme: The McGuire Programme is a course for people who stammer or stutter, run by people who stammer. It was founded in 1994 by American Dave McGuire. Scottish international rugby union captain Kelly Brown is a graduate of the course. Singer Gareth Gates attended the programme's workshops and subsequently qualified as a speech instructor himself. Stammering awareness activist Adam Black, also a graduate of the course, received a British Empire Medal in the 2019 New Year Honours list, where his work raising awareness of stammering was recognised.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Moses (machine translation)** Moses (machine translation): Moses is a free-software statistical machine translation engine that can be used to train statistical models of text translation from a source language to a target language, developed by the University of Edinburgh. Moses then allows new source-language text to be decoded using these models to produce automatic translations in the target language. Training requires a parallel corpus of passages in the two languages, typically manually translated sentence pairs. Moses is released under the LGPL licence and available both as source code and binaries for Windows and Linux. Its development is primarily supported by the EuroMatrix project, with funding by the European Commission. Moses (machine translation): Among its features are:
- A beam search algorithm that quickly finds the highest-probability translation among a number of choices (a generic sketch of beam search follows below)
- Phrase-based translation of short text chunks
- Handling of words with multiple factored representations, to enable the integration of linguistic and other information (e.g., surface form, lemma and morphology, part-of-speech, word class)
- Decoding of ambiguous forms of a source sentence, represented as a confusion network, to support integration with upstream tools such as speech recognizers
- Support for large language models (LMs) such as IRSTLM (an exact LM using memory-mapping) and RandLM (an inexact LM based on Bloom filters)
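The first feature above refers to beam search in its generic form. The sketch below is an illustrative toy, not Moses's actual phrase-based decoder (which scores translation options, reordering, and language-model context): it keeps only the `beam_width` highest-scoring partial hypotheses at each step.

```python
# Generic beam search over partial hypotheses scored by log-probability.
import math
from heapq import nlargest

def beam_search(expand, steps, beam_width=3):
    """expand(seq) -> list of (token, prob) continuations of hypothesis seq."""
    beam = [(0.0, [])]  # (cumulative log-prob, token sequence)
    for _ in range(steps):
        candidates = [
            (logp + math.log(p), seq + [tok])
            for logp, seq in beam
            for tok, p in expand(seq)
        ]
        beam = nlargest(beam_width, candidates)  # prune to the best hypotheses
    return max(beam)

# Toy "model": the same three weighted continuations regardless of history.
toy = lambda seq: [("the", 0.4), ("blue", 0.35), ("house", 0.25)]
print(beam_search(toy, steps=3))
```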
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Video banking** Video banking: Video banking is a term used for performing banking transactions or professional banking consultations via a remote video connection. Video banking can be performed via purpose-built banking transaction machines (similar to an automated teller machine) or via a videoconference-enabled bank branch. Types of video banking: In-branch Video banking can be conducted in a traditional banking branch. This form of video banking relocates the traditional banking tellers to a location outside of the main banking branch area. Via the video and audio link, the tellers are able to serve the banking customer. The customer in the branch uses a purpose-built machine to process viable media such as cheques, cash, or coins. Types of video banking: Time Video banking can provide professional banking services to bank customers during nontraditional banking hours, at convenient locations such as after-hours banking branch vestibules that can be open up to 24 hours a day. This gives banking customers the benefit of personal teller service during hours when bank branches are not typically open. Types of video banking: In addition, following Check 21, cutoff times are typically later for personal teller machines, as physical checks do not need to be gathered and collected for delivery to a separate check-processing location. Substitute checks, or digital images of the original check, can be utilized to process the transaction. This can result in the typical 3 p.m. business-day cutoff for a branch being extended well into the evening. Types of video banking: Location Video banking can provide professional banking services in nontraditional banking locations such as after-hours banking branch vestibules, grocery stores, office buildings, factories, or educational campuses. Technology branches Video banking can enable banks to expand real-time availability of high-value banking consultative services in branches that might not otherwise have access to the banking expertise. Types of video banking: Video banking from anywhere Video banking has now been taken to another level of convenience. IndusInd Bank of India has launched a functionality called Video Branch, which enables customers to conduct virtual yet face-to-face banking with their bank branch manager or a central video branch. This concept has brought banking virtually into the hands of the customer. The customer can engage with the bank for services and financial transactions by meeting a bank representative anytime, anywhere. Customers can simply connect to the bank through an app on Android and Apple devices, as well as through an application on a laptop or desktop. So whether customers are at home, at the office, or even travelling, they will be able to set up a video conference and experience instant banking services. Technology of video banking: Video connection Although termed "video banking," the video connection is always accompanied by an audio link, which ensures the customer and bank representative can communicate clearly with one another. The communication link for that video and audio typically requires a high-speed data connection for applications where the tellers are not in the same physical location. Various technologies are employed by the vendors of video banking, but recent advances in audio and video compression make the use of these technologies much more affordable.
For an in-depth discussion of videoconferencing technologies, see the videoconferencing article. Technology of video banking: Transaction equipment Other than the deployment location, one of the major differences between video banking and videoconferencing is the ability to conduct banking transactions and exchange viable media such as checks, cash, and coins. Purpose-built machines, such as personal teller machines, enable both the video/audio link to the customer and the ability to accept and dispense viable media. The system typically allows the bank teller to operate the machine to accept or dispense the cash and checks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Yuan Cao** Yuan Cao: Yuan Cao (Chinese: 曹原; pinyin: Cáo Yuán) is a Chinese physicist. His research is focused on the properties of two-dimensional materials. He discovered that a stack of two sheets of graphene, cooled to 1.7 K, could act as a superconductor or as an insulator when exposed to an electric field. In 2018, Nature chose him as one of 10 people who mattered that year in science, calling him a "graphene wrangler". Cao was born in Chengdu in 1996 and attended Shenzhen Yaohua Experimental School starting in 2007. In 2010 he was admitted to the Special Class for the Gifted Young at the University of Science and Technology of China. In 2014 he started graduate school at the Massachusetts Institute of Technology, where he did research on graphene in Pablo Jarillo-Herrero's group, and he obtained his doctorate in 2020. Since graduating, he has conducted postdoctoral research at MIT.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chat fiction** Chat fiction: Chat fiction is a format of web fiction written solely in the form of text-message or instant-messaging conversations. Works are read primarily through dedicated mobile phone applications, the earliest being Hooked, which launched in 2015. The format became popular among teenagers and young adults, and other competing platforms followed, including Yarn and Tap, among others. History: The first chat fiction platform, Hooked, was created by Prerna Gupta and Parag Chordia, who were writing a novel and decided to do A/B testing to gauge reader preferences. They found that most of their target audience of teenagers failed to finish 1,000-word excerpts of best-selling young-adult novels, but read through stories of the same length written as text-message conversations. They accordingly developed and launched Hooked in 2015. The app gained popularity from late 2016, and reached the Apple App Store's top position among free apps in 2017. Competing apps began launching the same year, including Yarn, which also has a focus on interactive fiction, and Tap, developed by the online publishing platform Wattpad. Format: Chat fiction stories are presented as digital text conversations between two or more characters, without any narration. The format limits possible storytelling options, and presents a challenge to authors in conveying narrative only through dialogue. Most popular stories are of the horror and thriller genres. The format has been popular among teenagers and young adults, though it has been criticized as not providing a meaningful reading experience. Applications usually present the story incrementally, with the user tapping to advance the story message by message. Some platforms feature content by paid writers, while others allow or rely on user contributions. Revenue is usually based on a freemium model, with basic access being free while subscribing offers removal of limits and other benefits.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**High-voltage cable** High-voltage cable: A high-voltage cable (HV cable) is a cable used for electric power transmission at high voltage. A cable includes a conductor and insulation. Cables are considered to be fully insulated, meaning that they have a fully rated insulation system consisting of insulation, semi-conducting layers, and a metallic shield. This is in contrast to an overhead line, which may include insulation that is not fully rated for the operating voltage (e.g., tree wire). High-voltage cables of differing types have a variety of applications in instruments, ignition systems, and alternating current (AC) and direct current (DC) power transmission. In all applications, the insulation of the cable must not deteriorate due to the high-voltage stress, ozone produced by electric discharges in air, or tracking. The cable system must prevent contact of the high-voltage conductor with other objects or persons, and must contain and control leakage current. Cable joints and terminals must be designed to control the high-voltage stress to prevent the breakdown of the insulation. High-voltage cable: The cut lengths of high-voltage cables may vary from several feet to thousands of feet, with relatively short cables used in apparatus and longer cables run within buildings or as buried cables in an industrial plant or for power distribution. The longest cut lengths of cable will often be submarine cables under the ocean for power transmission. Cable insulation technologies: Like other power cables, high-voltage cables have the structural elements of one or more conductors, an insulation system, and a protective jacket. High-voltage cables differ from lower-voltage cables in that they have additional internal layers in the insulation system to control the electric field around the conductor. These additional layers are required at 2,000 volts and above between conductors. Without these semi-conducting layers, the cable will fail due to electrical stress within minutes. This technique was patented by Martin Hochstadter in 1916; the shield is sometimes called a Hochstadter shield, and shielded cable used to be called H-type cable. Depending on the grounding scheme, the shields of a cable can be connected to the ground at one end or at both ends of the cable. Splices in the middle of the cable can also be grounded, depending on the length of the circuit and whether a semiconducting jacket is employed on direct-buried circuits. Cable insulation technologies: Since 1960, solid dielectric extruded cables have dominated the distribution market. These medium-voltage cables are generally insulated with EPR or XLPE polymeric insulation. EPR insulation is common on cables from 4 to 34 kV. EPR is not commonly used over 35 kV due to losses; however, it can be found in 69 kV cables. XLPE is used at all voltage levels from the 600 V class and up. Sometimes EAM insulation is marketed; however, market penetration remains fairly low. Solid, extruded insulation cables such as EPR and XLPE account for the majority of distribution and transmission cables produced today. However, the relative unreliability of early XLPE resulted in a slow adoption at transmission voltages. Cables of 330, 400, and 500 kV are commonly constructed using XLPE today, but this has occurred only in recent decades. Cable insulation technologies: An increasingly uncommon insulation type is PILC, or paper-insulated lead-covered cable. Some utilities still install this for distribution circuits as new construction or replacement.
Sebastian Ziani de Ferranti was the first to demonstrate, in 1887, that carefully dried and prepared kraft paper could form satisfactory cable insulation at 11,000 volts. Previously, paper-insulated cable had only been applied to low-voltage telegraph and telephone circuits. An extruded lead sheath over the paper cable was required to ensure that the paper remained moisture-free. Mass-impregnated paper-insulated medium-voltage cables were commercially practical by 1895. During World War II several varieties of synthetic rubber and polyethylene insulation were applied to cables. Modern high-voltage cables use polymers, especially polyethylene, including cross-linked polyethylene (XLPE), for insulation. Cable insulation technologies: The demise of PILC came in the 1980s and 1990s, as urban utilities started to install more EPR and XLPE insulated cables. The factors behind the decreased use of PILC are the high level of craftsmanship needed to splice lead, longer splicing times, reduced domestic availability of the product, and pressure to stop using lead for environmental and safety reasons. Rubber-insulated lead-covered cable also enjoyed a short period of popularity before 1960 in the low- and medium-voltage markets but was not widely used by most utilities. Existing PILC feeders are often considered to be near the end of life by most utilities and subject to replacement programs. Cable insulation technologies: Vulcanized rubber was patented by Charles Goodyear in 1844, but it was not applied to cable insulation until the 1880s, when it was used for lighting circuits. Rubber-insulated cable was used for 11,000-volt circuits in 1897, installed for the Niagara Falls Power Generation project. Cable insulation technologies: Oil-filled, gas-filled, and pipe-type cables have been largely considered obsolete since the 1960s. Such cables are designed to have significant oil flow through the cable. Standard PILC cables are impregnated with oil, but the oil is not designed to flow or cool the cable. Oil-filled cables are typically lead-sheathed and can be purchased on reels. Pipe-type cables differ from oil-filled cables in that they are installed in a rigid pipe, usually made of steel. With pipe-type cables, the pipes are constructed first, and the cable is pulled through at a later date. The cable may feature skid wires to prevent damage during the pulling process. The cross-sectional volume of oil in a pipe-type cable is significantly higher than in an oil-filled cable. These pipe-type cables are oil-filled at nominally low, medium, and high pressures. Higher voltages require higher oil pressures to prevent the formation of voids that would allow partial discharges within the cable insulation. Pipe-type cables will typically have a cathodic protection system driven off voltage, where an oil-filled cable circuit would not. Pipe-type cable systems are often protected from forming holidays (coating voids) by an asphaltic coating. There are still many of these pipe-type circuits in operation today. However, they have fallen out of favor due to the high front-end cost and the large O&M budget needed to maintain the fleet of pumping plants. Cable insulation components: High voltage is defined as any voltage over 1,000 volts. Cables of 2 to 33 kV are usually called medium-voltage cables, and those over 50 kV high-voltage cables.
Cable insulation components: Modern HV cables have a simple design consisting of a few parts: the conductor, the conductor shield, the insulation, the insulation shield, the metallic shield, and the jacket. Other layers can include water-blocking tapes, ripcords, and armor wires. Copper or aluminum wires transport the current; see (1) in figure 1. (For a detailed discussion on copper cables, see the main article: Copper conductor.) The insulation, insulation shield, and conductor shield are generally polymer-based, with a few rare exceptions. Cable insulation components: Single-conductor designs under 2000 KCM are generally concentric. The individual strands are often deformed during the stranding process to provide a smoother overall circumference. These are known as compact and compressed conductors. A compact conductor offers a 10% reduction in conductor outer diameter, while the compressed version offers only a 3% decrease. The selection of a compressed or compact conductor will often require a different connector during splicing. Transmission cables of 2000 KCM and larger often use a sectored design to reduce skin-effect losses. Utility power cables are often designed to run at conductor temperatures of up to 75 °C, 90 °C, or 105 °C. This temperature is limited by the construction standard and the jacket selection. Cable insulation components: The conductor shield is always permanently bonded to the EPR or XLPE cable insulation in solid dielectric cable. The semi-conductive insulation shield can be bonded or removable, depending on the desires of the purchaser. For voltages of 69 kV and up, the insulation shield is generally bonded. A strippable insulation shield is purchased to reduce splicing time and skill. It can be argued that a strippable semicon can lead to fewer workmanship issues at medium voltage. With paper-insulated cables, the semiconducting layers consist of carbon-bearing or metalized tapes applied over the conductor and paper insulation. The function of these layers is to prevent air-filled cavities and suppress voltage stress between the metal conductors and the dielectric, so that small electric discharges cannot arise and endanger the insulation material. The insulation shield is covered by a copper, aluminum, or lead "screen." The metallic shield or sheath serves as an earthed layer and will drain leakage currents. The shield's function is not to conduct faults, but that functionality can be designed in if desired. Some designs that could be used are copper tape, concentric copper wires, longitudinally corrugated shields, copper flat straps, or an extruded lead sheath. Cable insulation components: The cable jacket is often polymeric. The function of the jacket is to provide mechanical protection as well as to prevent moisture and chemical intrusion. Jackets can be semiconducting or non-conducting, depending on soil conditions and the desired grounding configuration. Semiconducting jackets can also be employed on cables to help with a jacket integrity test. Some types of jackets are LLDPE, HDPE, polypropylene, PVC (bottom end of the market), LSZH, etc. Quality: During the development of high-voltage insulation, which has taken about half a century, two characteristics proved to be paramount. Quality: First, the introduction of the semiconducting layers. These layers must be absolutely smooth, without protrusions even as small as a few µm.
Further, the fusion between the insulation and these layers must be absolute; any fissure, air pocket or other defect, again even of a few µm, is detrimental to the cable. Second, the insulation must be free of inclusions, cavities, or other defects of the same sort of size. Any defect of these types shortens the voltage life of the cable, which is supposed to be on the order of 30 years or more. Cooperation between cable makers and manufacturers of materials has resulted in grades of XLPE with tight specifications. Most producers of XLPE compound specify an "extra clean" grade where the number and size of foreign particles are guaranteed. Packing the raw material, and unloading it into the cable-making machines within a cleanroom environment, is required. The development of extruders for plastics extrusion and cross-linking has resulted in cable-making installations for making defect-free and pure insulations. The final quality-control test is an elevated-voltage 50 or 60 Hz partial discharge test with very high sensitivity (in the range of 5 to 10 picocoulombs). This test is performed on every reel of cable before it is shipped. HVDC cable: A high-voltage cable for high-voltage direct current (HVDC) transmission has the same construction as the AC cable shown in figure 1. The physics and the test requirements are different. In this case the smoothness of the semiconducting layers (2) and (4) is of utmost importance. Cleanliness of the insulation remains imperative. Many HVDC cables are used for DC submarine connections, because at distances over approximately 100 km AC can no longer be used. As of 2021 the longest submarine cable is the North Sea Link cable between Norway and the UK, which is 720 km (450 mi) long. Cable terminals: Terminals of high-voltage cables must manage the electric fields at the ends. Without such a construction the electric field will concentrate at the end of the earth conductor, as shown in figure 2. Cable terminals: Equipotential lines are shown here, which can be compared with the contour lines on a map of a mountainous region: the nearer these lines are to each other, the steeper the slope and the greater the danger, in this case the danger of an electrical breakdown. The equipotential lines can also be compared with the isobars on a weather map: the denser the lines, the more wind and the greater the danger of damage. Cable terminals: In order to control the equipotential lines (that is, to control the electric field), a device is used that is called a stress cone; see figure 3. The crux of stress relief is to flare the shield end along a logarithmic curve. Before 1960, the stress cones were handmade using tape, after the cable was installed. These were protected by potheads, so named because a potting compound/dielectric was poured around the tape inside a metal or porcelain body insulator. About 1960, preformed terminations were developed, consisting of a rubber or elastomer body that is stretched over the cable end. On this rubber-like body a shield electrode is applied that spreads the equipotential lines to guarantee a low electric field. Cable terminals: The crux of this device, invented by NKF in Delft in 1964, is that the bore of the elastic body is narrower than the diameter of the cable. In this way the (blue) interface between cable and stress cone is brought under mechanical pressure so that no cavities or air pockets can be formed between cable and cone. Electric breakdown in this region is prevented in this way.
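The reason smoothness matters can be made quantitative with the textbook formula for the radial field in ideal cylindrical insulation, E(r) = U / (r ln(R2/R1)): stress is highest at the conductor shield, and any protrusion sharpens the local field further. The numbers in the Python sketch below are hypothetical, chosen only to show the gradient across the insulation.

```python
# Radial field stress across cylindrical cable insulation (ideal geometry).
import math

U = 64_000.0   # conductor-to-screen voltage in volts (hypothetical)
R1 = 0.010     # radius over the conductor shield, m (hypothetical)
R2 = 0.025     # radius at the insulation shield, m (hypothetical)

def stress(r: float) -> float:
    """Electric field in V/m at radius r, for R1 <= r <= R2."""
    return U / (r * math.log(R2 / R1))

print(f"at the conductor shield:  {stress(R1) / 1e6:.2f} MV/m")  # highest
print(f"at the insulation shield: {stress(R2) / 1e6:.2f} MV/m")  # lowest
```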
Cable terminals: This construction can further be surrounded by a porcelain or silicone insulator for outdoor use, or by fittings to enter the cable into a power transformer under oil, or switchgear under gas pressure. Cable joints: Connecting two high-voltage cables with one another poses two main problems. First, the outer conducting layers in both cables must be terminated without causing a field concentration, as in the making of a cable terminal. Second, a field-free space must be created where the cut-back cable insulation and the connector of the two conductors can safely be accommodated. These problems were solved by NKF in Delft in 1965 by introducing a device called a bi-manchet cuff. Cable joints: Figure 10 shows a photograph of the cross-section of such a device. At one side of this photograph, the contours of a high-voltage cable are drawn. Here red represents the conductor of that cable and blue the insulation of the cable. The black parts in this picture are semiconducting rubber parts. The outer one is at earth potential and spreads the electric field in a similar way as in a cable terminal. The inner one is at high voltage and shields the connector of the conductors from the electric field. Cable joints: The field itself is diverted as shown in figure 8, where the equipotential lines are smoothly directed from the inside of the cable to the outer part of the bi-manchet (and vice versa at the other side of the device). The crux of the matter is here, as in the cable terminal, that the inner bore of this bi-manchet is chosen smaller than the diameter over the cable insulation. In this way a permanent pressure is created between the bi-manchet and the cable surface, and cavities or electrically weak points are avoided. Cable joints: Installing a terminal or bi-manchet cuff is skilled work. The technical steps of removing the outer semiconducting layer at the end of the cables, placing the field-controlling bodies, connecting the conductors, etc., require skill, cleanliness, and precision. Cable joints: Hand-taped joints Hand-taped joints are the traditional method of splicing and terminating cable. The construction of these joints involves taking several types of tape and manually building up the appropriate stress relief. Some of the tapes involved could be rubber tapes, semiconducting tapes, friction tapes, varnished cambric tapes, etc. This splicing method is extremely labor- and time-intensive. It requires measuring the diameter and length of the layers being built up. Often the tapes must be half-lapped and pulled tight to prevent the formation of windows or voids in the resulting splice. Waterproofing hand-taped splices is very difficult. Cable joints: Pre-molded joints Pre-molded joints are injection-molded bodies created in two or more stages. Due to automation, the Faraday cage will have a precise geometry and placement not achievable in taped joints. Pre-molded joints come in many different body sizes that must be matched to the outside diameter of the cable's semicon. A tight joint interface is required to ensure waterproofing. These joints are often pushed on and can cause soft-tissue injuries among craftsmen. Cable joints: Heat-shrink joints Heat-shrink joints consist of many different heat-shrink tubes, insulating and conducting. These kits are less labor-intensive than taping but more so than pre-molded joints. There can be concerns about having an open flame in a manhole or building vault.
There can also be workmanship concerns with using a torch, as the tubes must be fully recovered without scorching and any mastics used must flow into the voids and eliminate any air. Sufficient time and heat must be given. There is also a high number of components that must be placed in the correct order and position relative to the center of the joint. Cable joints: Cold-shrink joints Cold shrink is the newest family of joints. The idea is that a polymer tube is formed at the correct diameter for the cable, then expanded over a form and placed onto a hold-out tube at the factory. When ready for installation, the joint is easily slipped over the cable end. After the connector is installed, the splicer simply needs to center the joint body and then release the hold-out. The tube will automatically recover to its original size. The only complication is that cold shrink has a shelf life of approximately 2–3 years. After that time period, the rubber will take a set and not recover down to the intended size. This can lead to joint failure if the joint is not installed before the recommended date. From a utility perspective, this makes it difficult to keep track of stock or retain emergency spares for critical customers. Cold shrink is the more rapidly growing area of distribution splices and is thought to have the fewest workmanship issues and the quickest install times. X-ray cable: X-ray cables are used in lengths of several meters to connect the HV source with an X-ray tube or any other HV device in scientific equipment. They transmit small currents, on the order of milliamperes, at DC voltages of 30 to 200 kV, or sometimes higher. The cables are flexible, with rubber or other elastomer insulation, stranded conductors, and an outer sheath of braided copper wire. The construction has the same elements as other HV power cables. Testing of high-voltage cables: There are different causes of faulty cable insulation when considering solid dielectric or paper insulation. Hence, there are various test and measurement methods to prove fully functional cables or to detect faulty ones. While paper cables are primarily tested with DC insulation resistance tests, the most common test for solid dielectric cable systems is the partial discharge test. One needs to distinguish between cable testing and cable diagnosis. While cable testing methods result in a go/no-go statement, cable diagnosis methods allow judgment of the cable's current condition. With some tests, it is even possible to locate the position of the defect in the insulation before failure. Testing of high-voltage cables: In some cases, electrical treeing (water trees) can be detected by tan delta measurement. Interpretation of the measurement results can in some cases make it possible to distinguish new cable from strongly water-treed cable. Unfortunately, there are many other issues that can erroneously present themselves as high tangent delta, and the vast majority of solid dielectric defects cannot be detected with this method. Damage to the insulation and electrical treeing may be detected and located by partial discharge measurement. Data collected during the measurement procedure are compared to measurement values of the same cable gathered during the acceptance test. This allows a simple and quick classification of the dielectric condition of the tested cable. Just as with tangent delta, this method has many caveats, but with good adherence to factory test standards, field results can be very reliable.
**Methylphosphine** Methylphosphine: Methylphosphine is the simplest organophosphorus compound, with the formula CH3PH2, often written MePH2. It is a malodorous gas that condenses to a colorless liquid. It can be produced by methylation of phosphide salts: KPH2 + MeI → MePH2 + KI Reactions: The compound exhibits the properties characteristic of a primary phosphine, i.e., a compound of the type RPH2. It can be oxidized to methylphosphonous acid: MePH2 + O2 → MeP(H)O2H It protonates to give the phosphonium ion: MePH2 + H+ → MePH3+ With strong bases, it can be deprotonated to give methylphosphide derivatives: MePH2 + KOH → K[MePH] + H2O
**Uridashi bonds** Uridashi bonds: An Uridashi bond is a secondary offering of bonds outside Japan. They can be denominated in yen or issued in a foreign currency. These bonds are sold to Japanese household investors. An Uridashi bond is normally issued in a high-yielding currency such as the New Zealand or Australian dollar, in order to give the investor a higher return than the historically low domestic interest rate in Japan. Provided that the interest rate differential between the foreign and local currency is maintained, the investor receives higher interest payments than if they had invested in a Japanese Yen-denominated bond. In addition to the credit risk on the bond issuer, the investor also takes on currency risk, since the foreign-currency coupon payments must be exchanged into Japanese Yen for the retail investor, as must the proceeds if the investor wishes to sell the bond. Where bonds are issued in Japanese Yen, they are typically linked to a foreign currency or to an equity index such as the Nikkei. Uridashi bonds: Uridashi bonds became very popular in the 2000s and are often associated with the carry trade, in which a loan is made in a low-interest currency to buy instruments in a higher-yielding currency. During the 2008 financial crisis, the carry trade and foreign-currency bonds in general came under criticism in Japan for contributing to the crisis. As of 1 November 2015, the size of the Uridashi bond market was US$33.2bn equivalent, across 15 different currencies.
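The interplay of yield pickup and currency risk described above can be made concrete with a small calculation. The sketch below is illustrative only; the yields and exchange rates are hypothetical.

```python
# Minimal sketch of the yen-realized return on a foreign-currency
# Uridashi-style bond held for one year. All numbers are hypothetical.

def yen_return(foreign_yield: float, fx_start: float, fx_end: float) -> float:
    """Realized one-year return in yen terms.

    foreign_yield: annual coupon yield of the foreign-currency bond (e.g. 0.05)
    fx_start/fx_end: JPY per unit of foreign currency at purchase and maturity
    """
    return (1.0 + foreign_yield) * (fx_end / fx_start) - 1.0

# A 5% foreign-currency bond beats a low yen rate if the FX rate holds...
print(yen_return(0.05, fx_start=90.0, fx_end=90.0))  # 0.05 (5%)
# ...but a 10% depreciation of the foreign currency wipes out the pickup.
print(yen_return(0.05, fx_start=90.0, fx_end=81.0))  # -0.055 (-5.5%)
```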
**Medvedev–Sponheuer–Karnik scale** Medvedev–Sponheuer–Karnik scale: The Medvedev–Sponheuer–Karnik scale, also known as the MSK or MSK-64, is a macroseismic intensity scale used to evaluate the severity of ground shaking on the basis of observed effects in the area where an earthquake occurs. Medvedev–Sponheuer–Karnik scale: The scale was first proposed by Sergei Medvedev (USSR), Wilhelm Sponheuer (East Germany), and Vít Kárník (Czechoslovakia) in 1964. It was based on the experience available in the early 1960s from the application of the Modified Mercalli intensity scale and of the 1953 version of the Medvedev scale, known also as the GEOFIAN scale. With minor modifications in the mid-1970s and early 1980s, the MSK scale became widely used in Europe and the USSR. In the early 1990s, the European Seismological Commission (ESC) used many of the principles formulated in the MSK in the development of the European Macroseismic Scale, which is now a de facto standard for the evaluation of seismic intensity in European countries. MSK-64 is still used in India, Israel, Russia, and throughout the Commonwealth of Independent States. Medvedev–Sponheuer–Karnik scale: The Medvedev–Sponheuer–Karnik scale is somewhat similar to the Modified Mercalli (MM) scale used in the United States. The MSK scale has 12 intensity degrees, expressed in Roman numerals to prevent the use of decimals.
**FlexBook** FlexBook: FlexBook is a textbook authoring platform developed by the CK-12 Foundation and launched in 2008, focused on textbooks for the K-12 market. Derived from the words "flexibility" and "textbook," a FlexBook allows users to produce and customize content by re-purposing educational content using different modules. FlexBooks can be designed to suit a learner's learning style, region, language, or level of skill, while adhering to local education standards. Features: FlexBooks are designed to overcome some of the limitations of traditional textbooks. Anyone – including teachers, students, and parents – can adapt, create, and configure a FlexBook. Some FlexBook features include: a web-based collaborative model, where the user can create and edit content to produce a custom textbook; Open Educational Resource (OER) licensing, which allows for remixing of content; and availability in PDF, HTML, ePub (for iPad) and AZW (for Kindle) formats. Licensing: Each CK-12 FlexBook is created under the Creative Commons Attribution-Non-Commercial 3.0 Unported (CC BY-NC 3.0) License, giving its author/user a right to share (i.e., to copy, distribute and transmit the work) and a right to remix (i.e., to adapt the work). However, the conditions of Attribution and Non-commercial apply. Examples of use and collaboration: In March 2009, FlexBook was acknowledged as “an adaptive, web-based set of instructional materials” by Virginia officials when members of Virginia's K-12 physics community, along with university and industry volunteers, developed an eleven-chapter FlexBook titled “21st Century Physics FlexBook: A Compilation of Contemporary and Modern Technologies” in just four months. In September 2010, NASA teamed up with CK-12 to add a chapter on “modeling and simulation” to the existing Physics FlexBook created earlier. In November 2011, teachers from the Anoka-Hennepin school district in Minnesota reportedly saved the district $175,000 by writing their own online textbook instead of buying $65 textbooks, which had previously cost the district around $200,000. Wolfram has teamed up with CK-12 to produce interactive FlexBooks with Wolfram Demonstrations embedded in them.
**String group** String group: In topology, a branch of mathematics, a string group is an infinite-dimensional group String(n) introduced by Stolz (1996) as a 3-connected cover of a spin group. A string manifold is a manifold with a lifting of its frame bundle to a string group bundle. This means that in addition to being able to define holonomy along paths, one can also define holonomies for surfaces going between strings. There is a short exact sequence of topological groups 0 → K(Z,2) → String(n) → Spin(n) → 0, where K(Z,2) is an Eilenberg–MacLane space and Spin(n) is a spin group. The string group is an entry in the Whitehead tower (dual to the notion of Postnikov tower) for the orthogonal group: ⋯ → Fivebrane(n) → String(n) → Spin(n) → SO(n) → O(n). It is obtained by killing the π3 homotopy group of Spin(n), in the same way that Spin(n) is obtained from SO(n) by killing π1. The resulting group cannot be any finite-dimensional Lie group, since all finite-dimensional compact Lie groups have a non-vanishing π3. The fivebrane group follows by killing π7. More generally, the construction of the Postnikov tower via short exact sequences starting with Eilenberg–MacLane spaces can be applied to any Lie group G, giving the string group String(G). Intuition for the string group: The relevance of the Eilenberg–MacLane space K(Z,2) lies in the homotopy equivalences K(Z,1) ≃ U(1) ≃ BZ for the classifying space BZ, and the fact that K(Z,2) ≃ BU(1). Notice that because the complex spin group is a group extension 0 → K(Z,1) → Spin^c(n) → Spin(n) → 0, the string group can be thought of as a "higher" complex spin group extension, in the sense of higher group theory, since the space K(Z,2) is an example of a higher group. It can be thought of as the topological realization of the groupoid BU(1), whose object is a single point and whose morphisms are the group U(1). Note that the homotopical degree of K(Z,2) is 2, meaning its homotopy is concentrated in degree 2, because it comes from the homotopy fiber of the map String(n) → Spin(n) from the Whitehead tower, whose homotopy cokernel is K(Z,3); the homotopy fiber lowers the degree by 1. Understanding the geometry: The geometry of string bundles requires the understanding of multiple constructions in homotopy theory, but they essentially boil down to understanding what K(Z,2)-bundles are, and how these higher group extensions behave. Namely, K(Z,2)-bundles on a space M are represented geometrically as bundle gerbes, since any K(Z,2)-bundle can be realized as the homotopy fiber P of a map M → K(Z,3), giving a homotopy pullback square with P → ∗ and P → M over M → K(Z,3), where K(Z,3) = B(K(Z,2)). Then, a string bundle S → M must map to a spin bundle which is K(Z,2)-equivariant, analogously to how spin bundles map equivariantly to the frame bundle. Fivebrane group and higher groups: The fivebrane group can similarly be understood by killing the π7(String(n)) ≅ π7(O(n)) homotopy group of the string group String(n) using the Whitehead tower. It can then be understood again using an exact sequence of higher groups 0 → K(Z,6) → Fivebrane(n) → String(n) → 0, giving a presentation of Fivebrane(n) in terms of an iterated extension, i.e. an extension of String(n) by K(Z,6). Note that the map on the right is from the Whitehead tower, and the map on the left is the homotopy fiber.
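To see concretely how the extension kills π3, one can run the long exact sequence in homotopy for the fibration K(Z,2) → String(n) → Spin(n). The following is a sketch, assuming n ≥ 5 so that π3(Spin(n)) ≅ Z; recall that π2 of any Lie group vanishes and that K(Z,2) has homotopy concentrated in degree 2.

```latex
% Long exact sequence of the fibration K(Z,2) -> String(n) -> Spin(n),
% sketched for n >= 5. Requires amsmath.
\[
\cdots \to \underbrace{\pi_3 K(\mathbb{Z},2)}_{=\,0}
      \to \pi_3 \operatorname{String}(n)
      \to \underbrace{\pi_3 \operatorname{Spin}(n)}_{\cong\,\mathbb{Z}}
      \xrightarrow{\ \partial\ }
      \underbrace{\pi_2 K(\mathbb{Z},2)}_{\cong\,\mathbb{Z}}
      \to \pi_2 \operatorname{String}(n)
      \to \underbrace{\pi_2 \operatorname{Spin}(n)}_{=\,0}
\]
% The extension is chosen so that the connecting map \partial is an
% isomorphism; exactness then forces \pi_3(String(n)) = 0 and
% \pi_2(String(n)) = 0, while the higher homotopy of Spin(n) is untouched.
```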
**Proning** Proning: Proning or prone positioning is the placement of patients into a prone position so that they are lying on their front. This is used in the treatment of patients in intensive care with acute respiratory distress syndrome (ARDS). It has been especially tried and studied for patients on ventilators but, during the COVID-19 pandemic, it has also been used for patients with oxygen masks and CPAP as an alternative to ventilation. Intensive care: Prone positioning may be used for people suffering from acute respiratory distress syndrome (ARDS) to improve their breathing. If the patient is undergoing intensive care and is sedated, this is a difficult procedure, because lifting and turning the unconscious patient requires many staff or special equipment. If they are intubated, care has to be taken to manage the tangle of associated lines and tubes. A 2011 meta-analysis of 48 studies found that there were no negative effects on mortality for patients in intensive care, but a significant reduction in mortality was only found among those patients who were severely ill with ARDS. A 2012 systematic review (updated in 2022) of proning in infants with acute respiratory distress under mechanical ventilation found low-certainty evidence that it was effective in improving oxygenation. No adverse effects were found, but the risk of sudden infant death syndrome, which is greater in the prone position, necessitates continuous monitoring. A 2014 systematic review of 11 trials found that reduction of the tidal volume of ventilation, in combination with prone positioning, was effective, saving the life of about one additional patient in eleven. Intensive care: The Large Observational Study to UNderstand the Global Impact of Severe Acute Respiratory FailurE (LUNG-SAFE), conducted by the European Society of Intensive Care Medicine (ESICM), looked at the use of proning during the study period of 2014. At that time, proning was used for 7% of all ARDS patients and 14% of the most severe cases. The ESICM and Surviving Sepsis Campaign published Guidelines on the Management of Critically Ill Adults with Coronavirus Disease 2019 (COVID-19) in 2020. These recommended the use of proning: "For mechanically ventilated adults with COVID-19 and moderate to severe ARDS, we suggest prone ventilation for 12 to 16 hours, over no prone ventilation (weak recommendation, low quality evidence)." Intensive care: In the COVID-19 pandemic, there is anecdotal evidence in areas such as New York that a prone or reclining posture can be used with oxygen supplied by a mask or continuous positive airway pressure (CPAP) to improve oxygenation and so avoid the need for intubation and ventilation. This is especially effective with heavy, obese patients, who suffer more on their back in a supine position. In April 2020, the Intensive Care Society issued guidelines for the use of prone positioning with conscious COVID sufferers, recommending that it be tried for all suitable patients. Mechanisms: There are several factors which have been suggested to explain the benefits of this position for ARDS patients.
These include: better oxygenation due to the physical effects of the position, which reduces the weight of the body on the diaphragm and lungs; a reduction in ventilator-associated lung injury (VILI), as the stress and strain on the lungs is reduced; improved effectiveness of the right ventricle of the heart, which pumps blood through the lungs, reducing the incidence of fatal cor pulmonale; and better drainage of lung fluids, causing a reduction in ventilator-associated pneumonia.
**Erdős–Turán inequality** Erdős–Turán inequality: In mathematics, the Erdős–Turán inequality bounds the distance between a probability measure on the circle and the Lebesgue measure, in terms of Fourier coefficients. It was proved by Paul Erdős and Pál Turán in 1948. Let μ be a probability measure on the unit circle R/Z. The Erdős–Turán inequality states that, for any natural number n,

sup_A |μ(A) − mes A| ≤ C (1/n + ∑_{k=1}^{n} |μ̂(k)|/k),

where the supremum is over all arcs A ⊂ R/Z of the unit circle, mes stands for the Lebesgue measure, μ̂(k) = ∫ exp(2πikθ) dμ(θ) are the Fourier coefficients of μ, and C > 0 is a numerical constant. Application to discrepancy: Let s1, s2, s3, ... ∈ R be a sequence. The Erdős–Turán inequality applied to the measure

μ_m(S) = (1/m) #{1 ≤ j ≤ m : s_j mod 1 ∈ S}, S ⊂ [0,1),

yields the following bound for the discrepancy:

sup_{0≤a≤b≤1} |(1/m) #{1 ≤ j ≤ m : a ≤ s_j mod 1 ≤ b} − (b − a)| ≤ C (1/n + (1/m) ∑_{k=1}^{n} (1/k) |∑_{j=1}^{m} e^{2πi s_j k}|).   (1)

This inequality holds for arbitrary natural numbers m, n, and gives a quantitative form of Weyl's criterion for equidistribution. A multi-dimensional variant of (1) is known as the Erdős–Turán–Koksma inequality. Additional references: Harman, Glyn (1998). Metric Number Theory. London Mathematical Society Monographs, New Series, Vol. 18. Clarendon Press. ISBN 0-19-850083-1. Zbl 1081.11057.
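As a quick illustration, the right-hand side of bound (1) can be evaluated numerically for a concrete sequence. The sketch below uses multiples of the golden ratio, which are equidistributed mod 1; since the text leaves the constant unspecified, C = 1 is an arbitrary placeholder.

```python
import cmath

# Illustrative evaluation of the right-hand side of bound (1) for the
# sequence s_j = j * phi (phi the golden ratio). C = 1 is a placeholder.

def erdos_turan_rhs(s, n, C=1.0):
    m = len(s)
    total = 0.0
    for k in range(1, n + 1):
        # Exponential sum sum_j exp(2*pi*i*s_j*k), as in (1)
        exp_sum = sum(cmath.exp(2j * cmath.pi * sj * k) for sj in s)
        total += abs(exp_sum) / k
    return C * (1.0 / n + total / m)

phi = (1 + 5 ** 0.5) / 2
s = [j * phi for j in range(1, 1001)]  # m = 1000 terms
print(erdos_turan_rhs(s, n=50))  # small value, reflecting equidistribution
```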
**Kernfs (BSD)** Kernfs (BSD): In the Berkeley Software Distribution (BSD) and its descendants, kernfs is a pseudo file system that provides access to information about the currently running kernel. The file system itself and its content are dynamically generated when the operating system is booted, and kernfs is often mounted at the /kern directory. As a result of its nature, kernfs does not consist of actual files on a storage device; instead, it allows processes to retrieve system information by accessing virtual files. kernfs first appeared in 4.4BSD, and NetBSD 6.0 continues to use kernfs by default, mounting it at the canonical /kern mount point.
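On a system with kernfs mounted, kernel information can therefore be read like ordinary files. The sketch below assumes a NetBSD-style kernfs at /kern; the entry names used (version, hostname, time) are typical kernfs entries, but the exact set varies by system and release.

```python
from pathlib import Path

# Minimal sketch: read kernel information from a mounted kernfs.
# Assumes a NetBSD-style kernfs at /kern; entry names may differ by system.
KERN = Path("/kern")

for entry in ("version", "hostname", "time"):
    f = KERN / entry
    if f.exists():
        # Each virtual file's content is generated by the kernel on read,
        # not stored on any disk.
        print(f"{entry}: {f.read_text().strip()}")
```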
**IBM 4300** IBM 4300: The IBM 4300 series are mid-range systems compatible with System/370 that were sold from 1979 through 1992. They featured modest electrical and cooling requirements, and thus did not require a data center environment. They had a disruptive effect on the market, allowing customers to provide internal IBM computing services at a cost point lower than commercial time-sharing services. All 4300 processors used a 3278-2A, 3279-C or 3205 display console rather than a 3210 or 3215 keyboard/printer console. Models: Each model - 4331, 4341, 4361, and 4381 - had various sub-models, such as the 4341 model 1 (or 4341-1) and 4341 model 2 (4341-2). The 4381-13 through 4381-24 (announced in 1987) were entry-level machines for the 370-XA architecture. They were positioned between the IBM 9370 and IBM 3090 in performance at the time of announcement. The 4381-3, 4381-14, 4381-24 and 4381-92 are dual-CPU models. Other models included 1, 2, 11, 12, 13, 21, 22, 23, 90 and 91. IBM 4321: The IBM 4321 was announced on 18 November 1981. IBM 4331: The IBM 4331 (and the 4341) were announced on 30 January 1979. It came with an integrated adapter that permitted attaching up to 16 of two newly introduced direct-access storage devices (DASD): The IBM 3310, described as having "a storage capacity of 64.5 million characters", was to be used with "Storage disks .. sealed to reduce the possibility of damage, loss, misuse or contamination". IBM 4331: The IBM 3370, with up to 571 million characters, could also be used with an IBM 4341. The 4331 was withdrawn on 18 November 1981. IBM 4341: The IBM 4341 (and the 4331) were announced on 30 January 1979. Like the 4331, it came with an integrated adapter that permitted attaching up to 16 of the newly introduced IBM 3370 DASD. The 4341 did not support the much lower capacity IBM 3310. The 4341 introduced the Extended Control Program Support:VM (ECPS:VM), Extended Control Program Support:VS1 (ECPS:VS1) and Extended Control Program Support:Virtual Storage Extended (ECPS:VSE) features. The 4341-2 introduced the Extended Control Program Support (ECPS:MVS) option, a subset of the System/370 extended facility. IBM 4341: On 20 October 1982, IBM announced a new entry-level 4341 model, Model Group 9, and a new top-of-the-line 4341, Model Group 12. Model Group 12 included the Dual Address Space (DAS) facility. The 4341 was withdrawn on 11 February 1986. IBM 4361: The IBM 4361 Model Groups 4 & 5 were announced on 15 September 1983. Model Group 3 was announced the following year, on 12 September 1984. New features Among the new/optional features for the 4361 were: Auto-Start, which automatically turns on the processor by telephone via the Remote Operator Control Facility or at a predetermined time and day of the week; the processor powers on and proceeds with initial microcode load, sets the clock and loads the system. APL keyboard: a Workstation Adapter that includes support for terminals with APL keyboards, supporting the APL syntax and symbols. High-Accuracy Arithmetic Facility: while floating-point arithmetic capability had long been part of computing history and was present in System/360, this feature, whose underlying concept (Karlsruhe Accurate Arithmetic) had been under development for decades, was implemented as an optional feature on the 4361. The 4361 was withdrawn on 17 February 1987. IBM 4381: The IBM 4381 had a greater longevity than any of the above systems. Model Groups 1 & 2 were announced on 15 September 1983 and withdrawn on 11 February 1986.
Model Group 3 was announced on 25 October 1984 and withdrawn on 11 February 1986. Model Groups 11, 12, 13 & 14 were announced on 11 February 1986. Model Groups 21, 22, 23 & 24 were announced on 19 May 1987 and withdrawn on 19 August 1992. Operating systems: New releases of Disk Operating System/Virtual Storage Extended (DOS/VSE), Virtual Machine Facility/370 (VM/370) Release 6, and Operating System/Virtual Storage 1 (OS/VS1) Release 7 supported the 4300 series as well as other System/370-compatible processors. For the 4321 and 4331 there was Small Systems Executive/Virtual Storage Extended (SSX/VSE), a simplified version of the DOS/VSE operating system for the IBM 4321 and IBM 4331 processors. Other: Hughes Aircraft Company was the first IBM customer to install Endicott's initial IBM 4341 processor. The IBM 4331 Model 2 was developed by the Boeblingen lab and manufactured in Endicott. The IBM 4341 Model 2 was developed by the intermediate systems group, and manufactured by SPD, in Endicott. Subsequent processors had development and manufacturing activities in Endicott, Havant, Boeblingen, Valencia, and Sumare.
**Kevoree** Kevoree: Kevoree is an open source project that aims to enable the development of reconfigurable distributed systems. It is built around a component model and takes advantage of the Models@Runtime approach to provide efficient tools for the development, live adaptation, and synchronization of distributed software systems. History: The Kevoree project was initiated by the University of Rennes / IRISA and INRIA Bretagne Atlantique. Started in 2010, Kevoree is now a mature solution for developing distributed software systems.
**Stickle Bricks** Stickle Bricks: Stickle Bricks are a construction toy, primarily intended for toddlers, invented by Denys Fisher in 1969. The brand is owned by Hasbro and, as of 2016, is sub-licensed to Flair Leisure Products plc. Description: An individual stickle brick is a colourful plastic shape, a few centimetres long, which has a "brush" of small plastic "fingers" on one or more surfaces. The fingers of adjacent stickle bricks can interlock, allowing them to be joined in various ways. Standard sets of stickle bricks contain triangular, square and rectangular pieces. Many recent sets also include other types of pieces such as heads, wheels and teddy bear shapes. History: Stickle Bricks were invented in 1969 by Denys Fisher. From 2001 to 2008, GP Flair was the British distributor of the bricks. In October 2015, Flair licensed the bricks along with Mr. Frosty from Hasbro, starting in 2016. Similar toys: Several companies manufacture similar toys, not all of them compatible. Names for these toys include "Nopper", "Bristle Blocks", "Fun Bricks", "Clipo", "Krinkles", "Multi-Fit", and "Thistle Blocks".
**Bismuth(III) iodide** Bismuth(III) iodide: Bismuth(III) iodide is the inorganic compound with the formula BiI3. This gray-black salt is the product of the reaction of bismuth and iodine, which once was of interest in qualitative inorganic analysis. Bismuth(III) iodide adopts a distinctive crystal structure, with iodide centres occupying a hexagonally closest-packed lattice and bismuth centres occupying either none or two-thirds of the octahedral holes (alternating by layer); it is therefore said to occupy one third of the total octahedral holes. Synthesis: Bismuth(III) iodide forms upon heating an intimate mixture of iodine and bismuth powder: 2 Bi + 3 I2 → 2 BiI3. BiI3 can also be made by the reaction of bismuth oxide with aqueous hydroiodic acid: Bi2O3(s) + 6 HI(aq) → 2 BiI3(s) + 3 H2O(l) Reactions: Since bismuth(III) iodide is insoluble in water, an aqueous solution can be tested for the presence of Bi3+ ions by adding a source of iodide such as potassium iodide. A black precipitate of bismuth(III) iodide indicates a positive test. Bismuth(III) iodide forms iodobismuth(III) anions when heated with halide donors: 2 NaI + BiI3 → Na2[BiI5]. Bismuth(III) iodide catalyzes the Mukaiyama aldol reaction. Bi(III) is also used in a Barbier-type allylation of carbonyl compounds in combination with a reducing agent such as zinc or magnesium.
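For the direct synthesis above, the iodine-to-bismuth mass ratio follows from simple stoichiometry; the short sketch below works it out using rounded standard atomic masses.

```python
# Stoichiometry of 2 Bi + 3 I2 -> 2 BiI3, with rounded standard atomic masses.
M_BI = 208.98       # g/mol, bismuth
M_I2 = 2 * 126.90   # g/mol, molecular iodine

def iodine_needed(mass_bi_g: float) -> float:
    """Grams of I2 needed to convert a given mass of Bi to BiI3
    (3 mol I2 per 2 mol Bi)."""
    mol_bi = mass_bi_g / M_BI
    return (3 / 2) * mol_bi * M_I2

print(round(iodine_needed(10.0), 2))  # ~18.22 g of I2 per 10 g of Bi
```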
**NTIME** NTIME: In computational complexity theory, the complexity class NTIME(f(n)) is the set of decision problems that can be solved by a non-deterministic Turing machine which runs in time O(f(n)). Here O is the big O notation, f is some function, and n is the size of the input (for which the problem is to be decided). Meaning: This means that there is a non-deterministic machine which, for a given input of size n, will run in time O(f(n)) (i.e. within a constant multiple of f(n), for n greater than some value), and will always "reject" the input if the answer to the decision problem is "no" for that input, while if the answer is "yes" the machine will "accept" that input for at least one computation path. Equivalently, there is a deterministic Turing machine M that runs in time O(f(n)) and is able to check an O(f(n))-length certificate for an input; if the input is a "yes" instance, then at least one certificate is accepted, while if the input is a "no" instance, no certificate can make the machine accept. Space constraints: The space available to the machine is not limited explicitly, although it cannot exceed O(f(n)), because the time available limits how much of the tape is reachable. Relation to other complexity classes: The well-known complexity class NP can be defined in terms of NTIME as follows: NP = ⋃_{k∈N} NTIME(n^k). Similarly, the class NEXP is defined in terms of NTIME: NEXP = ⋃_{k∈N} NTIME(2^(n^k)). The non-deterministic time hierarchy theorem says that nondeterministic machines can solve more problems in asymptotically more time. NTIME is also related to DSPACE in the following way: for any time-constructible function t(n), we have NTIME(t(n)) ⊆ DSPACE(t(n)). A generalization of NTIME is ATIME, defined with alternating Turing machines. It turns out that NTIME(t(n)) ⊆ ATIME(t(n)) ⊆ DSPACE(t(n)).
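The certificate characterization above can be illustrated with a small deterministic verifier. The sketch below uses Boolean satisfiability, the standard example for the verifier view of NP; the encoding chosen (DIMACS-style signed integers) is just one convenient convention.

```python
# Illustrative deterministic certificate checker for SAT. A formula is a
# list of clauses; each clause is a list of nonzero ints (DIMACS-style:
# 3 means x3, -3 means NOT x3). The certificate is a truth assignment.
# The check runs in time linear in the formula size.

def verify(formula: list[list[int]], certificate: dict[int, bool]) -> bool:
    """Accept iff the certificate satisfies every clause."""
    return all(
        any(certificate.get(abs(lit), False) == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(verify(formula, {1: True, 2: False, 3: True}))   # True: good certificate
print(verify(formula, {1: False, 2: False, 3: False})) # False: bad certificate
```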
**6-Acetyl-2,3,4,5-tetrahydropyridine** 6-Acetyl-2,3,4,5-tetrahydropyridine: 6-Acetyl-2,3,4,5-tetrahydropyridine is an aroma compound and flavor that gives baked goods such as white bread, popcorn, and tortillas their typical smell, together with its structural homolog 2-acetyl-1-pyrroline. 6-Acetyl-2,3,4,5-tetrahydropyridine and 2-acetyl-1-pyrroline are usually formed by Maillard reactions during the heating of food. Both compounds have odor thresholds below 0.06 ng/L. Structure and properties: 6-Acetyl-2,3,4,5-tetrahydropyridine is a substituted tetrahydropyridine and a cyclic imine as well as a ketone. The compound exists in a chemical equilibrium with its tautomer, 6-acetyl-1,2,3,4-tetrahydropyridine, which differs only by the position of the double bond in the tetrahydropyridine ring.
**Nonoxynol-9** Nonoxynol-9: Nonoxynol-9, sometimes abbreviated as N-9, is an organic compound that is used as a surfactant. It is a member of the nonoxynol family of nonionic surfactants. N-9 and related compounds are ingredients in various cleaning and cosmetic products. It is widely used in contraceptives for its spermicidal properties. Uses: Spermicide As a spermicide, it attacks the acrosomal membranes of the sperm, causing the sperm to be immobilized. Nonoxynol-9 is the active ingredient in most spermicidal creams, jellies, foams, gels, films, and suppositories. Lubricant Nonoxynol-9 is a common ingredient in many vaginal and anal lubricants due to its spermicidal properties. A 2004 study found that over a six-month period, the typical-use failure rates for five nonoxynol-9 vaginal contraceptives (film, suppository, and gels at three different concentrations) ranged from 10% to 20%. Uses: Condoms Many models of condoms are lubricated with solutions containing nonoxynol-9. In this role, it has been promoted as a backup method for avoiding pregnancy and as a microbicide for sexually transmitted diseases in the event of condom failure. However, the 2001 WHO / CONRAD Technical Consultation on Nonoxynol-9 concluded that: There is no published scientific evidence that N-9-lubricated condoms provide any additional protection against pregnancy or STDs compared with condoms lubricated with other products. Since adverse effects due to the addition of N-9 to condoms cannot be excluded, such condoms should no longer be promoted. However, it is better to use N-9-lubricated condoms than no condoms. Compared to regular lubricated condoms, condoms containing nonoxynol-9 present another disadvantage: they are limited by the shelf-life of the spermicide. Uses: Cervical barriers Almost all brands of diaphragm jelly contain nonoxynol-9 as the active ingredient. This jelly may also be used for a cervical cap. Most contraceptive sponges contain nonoxynol-9 as an active ingredient. Shaving cream Nonoxynol-9 is sometimes included in shaving creams for its properties as a nonionic surfactant; it helps break down the skin oils that normally protect hair from moisture, so that the hairs become wet and, hence, softer and easier to shave. Gillette formerly used nonoxynol-9 for this purpose in its Foamy products, but has discontinued the practice. Sports cream Nonoxynol-9 is also found in Bengay Vanishing Scent as an inactive ingredient. Poison ivy creams Nonoxynol-9 is also found in Zanfel poison ivy cream. It effectively helps to break up urushiol, the oil that causes the rash. Side effects: From 1996 to 2000, a UN-sponsored study conducted in several locations in Africa followed nearly 1,000 sex workers who used nonoxynol-9 gels or a placebo. The HIV infection rate among those using nonoxynol-9 was about 50% higher than among those who used the placebo; those using nonoxynol-9 also had a higher incidence of vaginal lesions, which may have contributed to this increased risk. While these results may not be directly applicable to lower-frequency use, these findings, combined with the lack of any demonstrated HIV-prevention benefit from nonoxynol-9 use, led the World Health Organization to recommend that it no longer be used by those at high risk of HIV infection. The WHO further notes that "Nonoxynol-9 offers no protection against sexually transmitted infections such as gonorrhoea, chlamydia."
A 2006 study of a nonoxynol-9 vaginal gel in female sex workers in Africa concluded that it did not prevent genital human papillomavirus (HPV) infection and could increase the virus's ability to infect or persist.
**Oneirogen** Oneirogen: An oneirogen, from the Greek ὄνειρος óneiros meaning "dream" and gen "to create", is a substance or other stimulus which produces or enhances dreamlike states of consciousness. This is characterized by an immersive dream state similar to REM sleep, which can range from realistic to alien or abstract. Many dream-enhancing plants such as dream herb (Calea zacatechichi) and African dream herb (Entada rheedii), as well as the hallucinogenic diviner's sage (Salvia divinorum), have been used for thousands of years in a form of divination through dreams, called oneiromancy, in which practitioners seek to receive psychic or prophetic information during dream states. The term oneirogen commonly describes a wide array of psychoactive plants and chemicals, ranging from normal dream enhancers to intense dissociative or deliriant drugs. Oneirogen: Effects experienced with the use of oneirogens may include microsleep, hypnagogia, fugue states, rapid eye movement sleep (REM), hypnic jerks, lucid dreams, and out-of-body experiences. Some oneirogenic substances are said to have little to no effect on waking consciousness, and will not exhibit their effects until the user falls into a natural sleep state. List of oneirogens: Calea zacatechichi has been traditionally used in Central America as a believed way to potentiate lucid dreams and perform dream divination. It can promote dreams vivid to the senses: sight, scent, hearing, touch, and taste. It may be taken as a tea or smoked. Entada rheedii ("African dream bean"). Mugwort, see Artemisia douglasiana. List of possible oneirogens: Amanita muscaria (contains muscimol). Amphetamines and other stimulants can create psychotic episodes (called stimulant psychosis) which may be defined as bursts of dream activity erupting spontaneously into waking states; this is not due to the substance itself but rather a result of the prolonged suppression of cholinergic activity and REM sleep due to amphetamine or stimulant abuse. Artemisia douglasiana, or California mugwort, Douglas's sagewort or dream plant, is a western North American species of aromatic herb in the sunflower family that can be used as a scent, tea, or smoke to trigger vivid and lucid dreams. Artemisia vulgaris. Wild red asparagus root may promote dreams that involve flying. Atropa belladonna (contains atropine, hyoscyamine, and scopolamine). Atropine (via blockade of acetylcholine receptors). Benzatropine. Datura (contains atropine, hyoscyamine, and scopolamine). Dextromethorphan (the main ingredient in many cough syrups). Dimethyltryptamine can trigger intensely vivid and surreal, spiritually charged dream states. Diphenhydramine ("Benadryl") can invoke an intense hypnagogic REM-like microsleep often indistinguishable from reality; it accomplishes this by blocking various acetylcholine receptors in the brain. Galantamine was shown to increase lucid dreaming by 27% at 4 mg and 42% at 8 mg in a 2018 double-blind study lasting three nights. Galanthus (genus) – an alkaloid in the plant is believed to increase the concentration of acetylcholine, a neurotransmitter that plays a very active role in dreaming. Harmaline. Hyoscyamine. Ibogaine, ibogamine, and Tabernanthe iboga. Ilex guayusa can promote vivid dreams and aids in dream recollection. Melatonin and ramelteon may cause vivid dreams as a side effect. Mirtazapine, paroxetine, and varenicline often cause vivid dreams.
MMDA. Muscimol and other GABA receptor agonists, such as zolpidem. Nutmeg, whose commonly used amounts contain myristicin and elemicin, can increase the vividness of dreams. Dried water lily flowers may be smoked, or the rhizomes eaten, to promote vivid dreams. Many opioids may produce a euphoric dream-like state with microsleep, known colloquially as "nodding". Peganum harmala (contains harmaline). Scopolamine. Silene undulata ("African dream root") is used by the Xhosa people of South Africa to induce lucid dreams. Hallucinogenic oneirogens: Tabernanthe iboga (iboga) is a perennial rainforest shrub native to West Africa. An evergreen bush indigenous to Gabon, the Democratic Republic of Congo, and the Republic of Congo, it is cultivated across West Africa. In African traditional medicine and rituals, the yellowish root or bark is used to produce hallucinations and near-death outcomes, with some fatalities occurring. Psilocybe mushrooms and their active ingredients psilocin and psilocybin. Salvia divinorum and other kappa receptor agonists. Ketamine. Disputed oneirogens: Valerian (herb) – a study conducted in the UK in 2001 showed that valerian root significantly improved stress-induced insomnia but, as a side effect, greatly increased the vividness of dreams. This study concluded that valerian root affects REM due to natural chemicals and essential oils that stimulate serotonin and opioid receptors. Another study found no encephalographic changes in subjects under its influence. Non-chemical oneirogens: Binaural beats can be used to stimulate or trigger dream states, like hypnagogia or rapid eye movement sleep. Mindfulness practices could be useful in achieving lucid dreams. Sleep deprivation can make dreams more intense, an effect caused by REM rebound. Sources: Schultes, Richard Evans; Albert Hofmann (1979), Plants of the Gods: Origins of Hallucinogenic Use, New York: McGraw-Hill, ISBN 0-07-056089-7. Gianluca Toro; Benjamin Thomas (2007), Drugs of the Dreaming: Oneirogens: Salvia divinorum and Other Dream-Enhancing Plants, Park Street Press, ISBN 978-1594771743.
**Pavement light** Pavement light: Pavement lights (UK), vault lights (US), floor lights, or sidewalk prisms are flat-topped walk-on skylights, usually set into pavement (sidewalks) or floors to let sunlight into the space below. They often use anidolic lighting prisms to throw the light sideways under the building. They were developed in the 19th century, but declined in popularity with the advent of cheap electric lighting in the early 20th. Older cities and smaller centers around the world have, or once had, pavement lights. In the early 21st century, such lights are approximately a century old, although lights are being installed in some new construction. Uses: Sidewalk prisms are a method of daylighting basements, and are able to serve as a sole source of illumination during the day. At night, lighting in the basements beneath produces a glowing sidewalk. Vault lights may be used to make subterranean space useful. They are more common in city centers: dense, high-rent areas where space is valuable. Historically, landlords took an interest in improving not only the floor area ratio, but the amount of space that was naturally lit, on the grounds that this was profitable. Occupiers valued daylight not only as a way of saving on artificial lighting costs (which were higher historically), but also as a way to let premises remain cooler in summer, and a way to save on ventilation costs (if using gas lighting rather than arc lamps or early incandescent lights). Pavement lights and related products were historically marketed as a way of saving on artificial lighting costs and making space more usable and pleasant. Modern studies of similar daylighting technology provide evidence for those claims. Vault lights also are used in floors under glass roofs, for example in Budapest's historic Párizsi udvar and New York's mostly-demolished old Pennsylvania Station (see § Current state and trends). Vault lights also could be set into the basement floor, underneath other vault lights, creating a double-deck arrangement, which would light the subbasement. Manhole covers and coalhole covers with lighting elements were also made. Some steps have vault lights set into the vertical stair risers. History: A basement that extends below a sidewalk or pavement is called an areaway, a vaulted sidewalk, or a hollow sidewalk. In some cities, these areaways were created by the raising of the street level to combat floods, and in some cases they form an (often now abandoned) tunnel network. To light these spaces, sidewalks incorporated gratings, which were a trip hazard and let water and street dirt as well as light into the basement. Replacing the open gratings with glass was an obvious improvement. History: Frames Sidewalk prisms developed from deck prisms, which were used to let light through the decks of ships. The earliest pavement light (Rockwell, 1834) used a single large round glass lens set in an iron frame. The large lens was directly exposed to traffic, and if the lens broke, a large hole was left in the pavement, which was potentially unsafe for pedestrians. Thaddeus Hyatt corrected these faults with his "Hyatt light" of 1854. Many small lenses ("bull's-eyes") were set in a wrought-iron frame (later cast iron), and the frame included raised nubs around each lens to improve traction in wet weather and to protect the lenses from damage and wear.
Even if all the lenses were broken out, the panel would still be safe to walk on. In the 1930s, London authorities ruled that glass sections could not be larger than 100 mm by 100 mm. Modern glass floors are made of laminated and toughened glass pavers, which can be substantially larger. They have an upper protective layer that can be replaced if it becomes chipped or cracked. The top surface of the pavers may also be chosen and treated to improve traction. History: Wrought iron, cast iron, and stainless steel frames have all been used. Reinforced concrete slabs began to replace iron frames in the 1890s in New York. Benefits claimed included less condensation (due to the lower thermal conductivity) and a less slippery surface when wet. Concrete panels may be pre-cast or cast in-situ. (For process details, see § External links, below.) Late concrete panels often were made with metal-framed "armored prisms", which were intended to prevent breakage and make replacing individual prisms easier. The glass is not cast into the concrete but caulked into the frame. Rather than chiselling out the old glass, the glass can be popped out of the frame. Translucent concrete has also been proposed as a floor material. This would essentially make it a vault light with very small (fiberoptic) lighting elements. It also innately redirects the light from the angle of incidence to an angle roughly parallel to the optical fibers (usually, perpendicular to the surface of the concrete). History: Transparent elements The transparent elements may be referred to as prisms or lenses (depending on shape), or as jewels. History: Glass color The glass in many old pavement lights is now either purple or straw-colored. This is a side-effect of the manufacturing process. Pure silica glass is transparent, but older glass manufacture often used silica from sand, which contains iron and other impurities. Iron produces a greenish tint in the finished glass. To remove this effect, a "decolorizer" such as manganese dioxide ("glassmakers' soap") was added during the manufacture of the glass. History: When exposed to ultraviolet light, the manganese slowly "solarizes", turning purple, which is why many existing sidewalk prisms are now purple. WWI increased demand for manganese in the US and cut off the supply of high-grade ore from Germany, so selenium dioxide was used as a decolorizer instead. Selenium also solarizes, but to a straw color. Replacement glass that has been tinted purple deliberately, in order to match the current colour, has been used in some historic restoration projects. History: Glass shape In London in 1871, Hayward Brothers patented their "semi-prism": changing the shape of the glass by adding pendant prisms to the underside reflects the light sideways, allowing it to light the area under the main building. The pendant shapes were right-angle ("half") prisms, which reflected all incoming light sideways. The horizontal ridges protruding from the top of the prism let it be set into an opening in an iron or cement grating. History: Some cast glass pendant prisms have flat portions to shed light directly below, as well as throwing it sideways under the main body of the building (see image). Some prisms were made with multiple pendant prisms, either as a Fresnel-lens-like sheet of identical prisms ("multi") or a sheet of dissimilar prisms that could distribute the light ("three-way" etc.). The precise angles at which the prisms refracted or reflected light were important.
An installation would generally consist of multiple different prescriptions of prism, chosen either by an on-site expert contractor or by a layman using standard algorithms. This also would diffuse the light somewhat, as would the rough glass surfaces (the lenses are translucent, not transparent). History: Larger castings are more expensive, not only because they use more glass, but because they take longer to cool. Modern glass floors use laminate sheet glass some centimeters (more than an inch) thick; it often is transparent. History: Non-glass translucent materials Synthetic resin composites (such as fiberglass), as well as plastics such as Lexan, have been proposed to replace missing prism lights. Translucent decking panels made of fiberglass are often used for balconies which would otherwise shade the windows below them. Peel-and-stick prism films have recently come on the market, with acrylic micro-prisms that internally reflect light somewhat like glass pendant prisms. History: Structure In some cases, a second vertical curtain of prisms was installed under the building sill. These were analogous to the prism transoms used over above-ground windows and doors. The light could be bent in two stages and used to daylight the whole basement. The areaway under a sidewalk light usually has a masonry wall separating it from the soil under the street, although it may extend partly under the street. Support for the vault light frames varies. Steel cross-beams supported by columns are common in older buildings; metal decks are common in newer ones. Current state and trends: Manufacture, maintenance, and repair Some modern pavement lights are quite different from historic ones, so restoration and replacement may use different techniques and parts. Current state and trends: A few companies now manufacture and sell vault lights, either as glass only, as prefab panels, or as complete installations. Construction methods and prices vary widely. Historically, glass lenses were standardized by each manufacturer; some modern manufacturers produce standardized prisms. Some firms also supply replacement glass castings to order. Cost varies greatly; shapes needing complicated articulated moulds are more expensive. Modern caulking materials are used to caulk in replacement glass. Broken and damaged frames can be patched, re-welded, or re-cast. Generally speaking, restoration requires only simple tools and technology. Promptly repairing sidewalk cracks, and avoiding de-icers that will corrode metal, helps keep the supporting structure dry and in good repair. Keeping a sidewalk light watertight does not cost much in time or materials. Vaults generally last many decades, and many extant vaults are more than a century old. Current state and trends: Reuse and preservation Despite their reusability and repairability, old panels often are landfilled. However, the city of Victoria, Canada is stockpiling removed pavement light panels for future restoration projects. Often, individual broken sidewalk prisms are not replaced; instead, the opening is filled with concrete or other opaque materials, such as metal, wood, and asphalt. When a building is renovated, vault lights may be removed or concreted over. For instance, the floor of New York's mostly-demolished old Pennsylvania Station was made of vault lights, to let light through the concourse floor onto the platforms.
The undersides of the lights can still be seen, but the tops have been concreted over (see images). While some cities have preservation measures for vault lights, others actively remove them and fill areaways. Sometimes the outside appearance of the lights is retained while filling the areaway and setting the lights in a concrete pad, removing their daylighting function. Some areaways are "mothballed"; that is, filled with gravel that could later be removed. Areaways are used in some cities as a convenient place to run utilities, which may make the cities reluctant to give areaways legal protection. In some cases, utility construction leads to areaways being filled. Current state and trends: Load-bearing strength The load-bearing strength of vault lights varies widely with span, construction, and state of repair. Some damaged vaults may not be able to support a fire engine, which a sidewalk vault in sound condition should be able to do. Many jurisdictions do not have regulations on the load-bearing capacity of pavement lights, and manufacturers may develop their own loading standards, in compliance with local fire department regulations. The load-bearing capacity of pavement lights can be tested, and lights can be designed and built to specific load-bearing capacities. Damp areaways may corrode the steel load-bearing elements supporting the pavement roof. Moisture may come from leakage from above or from groundwater from below. Current state and trends: Current installations Amsterdam, The Netherlands, has vault lights, some of which have been documented by the Netherlands Department for Conservation. Astoria, Oregon, has a community program for restoring vault lights, funded by the Astoria Downtown Historic District Association. A volunteer plan to replace broken glass with squares of Lexan, topped with resin embedded with glass teardrops, was prevented by legislation. Current state and trends: Budapest, Hungary, has vault lights in one of its tourist sites, the Art Deco-period Párizsi udvar mall on Ferenciek tere (Square of the Franciscans). The mall has unusual, decorative pavement lights let into its polychrome tile floor, to allow light from the glass dome skylights into the basement level. There also are vault lights in other locations, such as in the old post office building. Current state and trends: Chicago, Illinois, has extremely extensive sidewalk vaults, but many of them do not have vault lights. There is no inventory of them. The city is filling in all vaults, as some are structurally unsound. See also the raising of Chicago. Deadwood, South Dakota, funded a major restoration and maintenance project for vault lights in approximately 2000. Dublin, Ireland has many vault lights. Dunedin, New Zealand has well-preserved Luxfer and Hayward Brothers vault lights. London, England has many vault lights, many made by the Hayward Brothers. Historic preservation legislation encourages a market in new pavement lights. New York City has large numbers of vault lights, mostly in the SoHo district. More than half of the subway stations originally had vault lights, but these had mostly been blocked off. Installing and restoring vault lights has become part of modern construction practices. The city government has no policies or records about vault lights. Philadelphia, Pennsylvania has numerous vault lights, some of them locally manufactured. Portland, Oregon has prisms at several locations. It has no preservation project for its prisms, however, and fills those that break with concrete.
There is some local opposition to the policy. See also Portland Underground. Pretoria, South Africa has Hayward vault lights. Sacramento, California has "hollow sidewalks", which originated when the city raised its street level to combat floods; some of these spaces are lit by vault lights. There are many stories told about these areas. Current state and trends: Salem, Oregon has an extensive tunnel network with vault lights. Historians have found a mural-painted grocery drop, a disco, a swimming pool, a firing range, opium dens, and bordellos in the tunnels. Guided tours are sometimes conducted in the tunnels. The Go Downtown Salem! Board welcomed the idea of regular underground tours. Many of the tunnels have been filled during sewer construction. Current state and trends: San Diego, California has sixteen-sided pavement jewels of the "Searchlight" brand. San Francisco evaluates the lights as having little historic value, and as a safety hazard for pedestrians. Most of the lights have been removed. The City Lights Bookstore has vault lights. Current state and trends: Saskatoon, Saskatchewan has had sidewalk prisms. They have been used in music videos, and a Facebook group fought to save them. They were scheduled to be infilled in 2015. Seattle, Washington raised its street level, by up to 22 feet in some places, in the aftermath of the Great Seattle Fire of 1889. Previously, the Pioneer Square area had flooded tidally. Seattle replaced some of its sidewalk vault lights in Pioneer Square with new pre-purpled ones in 2002. Seattle runs tourist trips through its underground. Current state and trends: Tijuana, Mexico has armoured unsolarized vault lights in the 1919 Casa de la Cultura. Toronto, Ontario once had many vault lights, but the last known remaining examples were in front of the shops at 2869 Dundas Street West (near Keele) until 2011. Current state and trends: Vancouver, British Columbia has an unofficial policy of requiring applicants for development permits to fill in areaways, although some have been paved over or made sufficiently load-bearing to support a fire engine. Some of the remaining areaways have restaurants built into them. A walking map of the sidewalk prisms has been produced. There are ~130 remaining areaways, the records of which are not digitised, and no measures exist to promote their preservation. Current state and trends: Victoria, British Columbia has more than eleven thousand sidewalk prisms in seven locations (as of 2006), including an underground gallery running around an entire block outside the Yarrow Building. More than 670 of the prisms are missing or filled with concrete. Sidewalk prisms have been heritage-registered since 1990. Originally, there were hundreds of thousands of prisms. The city has some panels in storage for restoration, but is having difficulty finding a glass supplier. There are city plans to light the galleries below at night, creating glowing purple sidewalks in the downtown core. While the prisms are protected, there is no funding for their preservation.
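The sideways redirection by the right-angle pendant prisms described under "Glass shape" relies on total internal reflection. The sketch below checks the geometry for a typical soda-lime glass; the refractive index n = 1.5 is an assumed representative value, not a figure from the text.

```python
import math

# Why a right-angle ("half") pendant prism throws light sideways:
# light entering vertically strikes the 45-degree underside face at a
# 45-degree angle of incidence, which exceeds the critical angle for
# total internal reflection in glass (n = 1.5 assumed).
n_glass = 1.5
critical_angle = math.degrees(math.asin(1.0 / n_glass))
print(f"critical angle: {critical_angle:.1f} deg")  # ~41.8 deg

incidence_on_hypotenuse = 45.0
if incidence_on_hypotenuse > critical_angle:
    # All the incoming light is reflected sideways, under the building,
    # rather than refracting straight down into the areaway floor.
    print("45 deg > critical angle: total internal reflection")
```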
**Splitting band knife** Splitting band knife: The splitting band knife (or band knife or bandknife) is a kind of knife used in several fields, including tannery, EVA/rubber, foam, cork, shoe and leather goods, paper, carpet, and other soft sheet materials. It is a power tool very similar in operation to a band saw, with an endless-loop blade; the material to be cut is supported by a flat table. Technical characteristics: A splitting band knife can be produced in different sizes (length x width x thickness) according to the splitting machine on which it is to be fitted. Several technical characteristics define the quality of the blade: it can be welded and bevelled, toothed and not rectified, or rectified on both edge and surfaces, with pre-sharpening done by tools or grinding stones. Technical characteristics: A splitting band knife can be produced in several dimensions, usually with a length from 1,000 to 15,000 mm, a width between 10 and 110 mm, and a thickness from 0.40 to 1.50 mm. Sectors and use: Tannery sector In the tannery sector, the splitting band knife is used to split leather and textiles through their thickness. The final products of this operation are the split and the grain (the internal and external parts) of the leather. Blades can be used to split any material that has to be divided through its thickness: leather, fur, non-woven material, velvet. In the tannery sector, splitting band knives can be used in the following processes: wet blue, lime, dry, wet white, and other tannings. Sectors and use: The blades most used in this sector are rectified on both edge and surfaces, in order to guarantee the best splitting, that is, a constant thickness of the split leather (rectification of the surfaces), and to guarantee maximum linearity during the splitting process (back edge); the blade must run as stably as possible, without any oscillation, which could create defects on the leather. Moreover, blades are often supplied pre-bevelled in order to save time in running up the blade once it is fitted on the splitting machine. Sectors and use: Rubber, cork and foam sectors In the fields of rubber and cork, splitting band knives can be used on any material that needs to be split through its thickness, such as rubber (except vulcanized rubbers), cork, and foam. In this sector, blades are chosen according to their application, the splitting machine, the material, and the cut/split precision required by the final product. Sectors and use: Shoes and leather goods sectors In the production of shoes and leather goods, splitting band knives are used to split and equalize or "reduce" the leather in thickness, in order to improve the quality of the finished product. The final product of this equalization or "reduction" is a leather ready to become a shoe or a leather good (for example, bags, wallets, belts, etc.). The hides used in these sectors are always finished leathers, in a dry state.
Sectors and use: In this field, splitting band knives can be used on any material that needs to be split through its thickness, such as leather, textiles and linings, rubber and insoles, and cardboard components. As in the tannery sector, the blades most used here are rectified on both edge and surfaces, to guarantee a constant thickness of the split material and maximum linearity during the splitting process; the blade must run as stably as possible, without oscillation, which could create defects on the leather, and blades are often supplied pre-bevelled to save set-up time on the splitting machine. Sectors and use: Paper sector The splitting band knife can also be used in the paper sector, where it splits the material through its thickness, for example paper reels (from toilet paper to reels for industrial use, paper towel rolls for domestic use, etc.). In this production, the final products obtained by splitting are, for the industrial sector, big rolls, reels, etc., and, for hygienic uses, handkerchiefs, toilet paper, and kitchen rolls. In this sector, blades are chosen according to their application, the splitting machine, the material, and the cut/split precision required by the final product. Band knife machines: Band knife blades are used on two types of machine (vertical and horizontal), depending on the material being cut/processed. Band knife machines: Vertical On a vertical band knife machine, usually a narrow band knife blade is used, the most common width being 10 mm. The length of the band knife blade depends on the supplier of the band knife machine. The dimensions are indicated on a small metal tag pasted or riveted on the machine. The vertical-machine band knife blade is most commonly a "double bevel, double edge" (DBDE) execution, which enables cutting both while advancing and while retracting the work table, whereas the "double bevel, single edge" (DBSE) execution cuts only in one direction. Productivity is enhanced when the operator cuts both while advancing and while retracting the work table, adjusting the foam block after each pass. The DBDE blade can have a parallel welding or a welding twisted 180 degrees; the twisted-welding execution saves a grinding unit, as both edges pass the same grinding unit after two turns. It has been observed that a narrow blade on a vertical band knife machine gives better dimensional accuracy on the foam block: the wider the vertical-machine band knife, the greater the deflection and size variation from one extreme end to the other. Band knife machines: Horizontal Horizontal band knife blades are wider: 30–60 mm is popular for foam converting, 40–50 mm for leather goods, and 85–110 mm for the tannery splitting band knife. There are other widths, depending on the machine manufacturer. The horizontal-machine band knife blade is supported by a guide to give dimensional accuracy while cutting/splitting. Therefore, only blades whose main manufacturing steps include a surface grinding process reach the necessary thickness tolerances of less than 0.02 mm; a larger tolerance would leave marks on the surface of the split material, such as leather or rubber. Blades are available in different grades of exactness, depending on the required exactness of the material to be cut/split.
On modern machines, in combination with a high-grade blade, a splitting thickness of 0.2 mm over a 1500 mm material width is possible. Blade sharpening: For both the vertical and horizontal band knife machines there is a grinding attachment which continuously sharpens the band knife while it is cutting. Non-powered grinding attachments exist for vertical machines, but on horizontal band knife machines the grinding attachment that continuously sharpens the blade is driven by electric motors. History: 1808: W. Newberry is granted patent No. 3105 in London, including "machinery for ... splitting skins". 1854: J.F. Flanders and J.A. Marden obtain a patent for a bandknife machine. 1912: Foundation of the blade manufacturer Rudolf Alber. Before WW II: several machinery brands on the market: Turner, Clasen, USM, BMD. 2011: The Polish pneumatic lifting table manufacturer REXEL starts producing vertical band knife machines; current models are the R1250, R1150, R1000, R750, and R500 (the number, e.g. 1000, indicates the arm length).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lipid polymorphism** Lipid polymorphism: Polymorphism in biophysics is the ability of lipids to aggregate in a variety of ways, giving rise to structures of different shapes, known as "phases". This can be in the form of spheres of lipid molecules (micelles), pairs of layers that face one another (lamellar phase, observed in biological systems as a lipid bilayer), a tubular arrangement (hexagonal), or various cubic phases (Fd3m, Im3m, Ia3d, Pn3m, and Pm3m being those discovered so far). More complicated aggregations have also been observed, such as rhombohedral, tetragonal and orthorhombic phases. Lipid polymorphism: It forms an important part of current academic research in the fields of membrane biophysics (polymorphism), biochemistry (biological impact) and organic chemistry (synthesis). Lipid polymorphism: Determination of the topology of a lipid system is possible by a number of methods, the most reliable of which is X-ray diffraction. This uses a beam of X-rays that are scattered by the sample, giving a diffraction pattern as a set of rings. The ratio of the distances of these rings from the central point indicates which phase(s) are present. Lipid polymorphism: The structural phase of the aggregation is influenced by the ratio of lipids present, temperature, hydration, pressure and ionic strength (and type). Hexagonal phases: In lipid polymorphism, if the packing ratio of lipids is greater than or less than one, lipid membranes can form two separate hexagonal, or nonlamellar, phases, in which long, tubular aggregates form according to the environment in which the lipid is introduced. Hexagonal phases: Hexagonal I phase (HI) This phase is favored in detergent-in-water solutions and has a packing ratio of less than one. The micellar population in a detergent/water mixture cannot increase without limit as the detergent-to-water ratio increases. In the presence of low amounts of water, lipids that would normally form micelles will form larger aggregates in the form of micellar tubules in order to satisfy the requirements of the hydrophobic effect. These aggregates can be thought of as micelles that are fused together. These tubes have the polar head groups facing out and the hydrophobic hydrocarbon chains facing the interior. This phase is only seen under unique, specialized conditions and is most likely not relevant for biological membranes. Hexagonal phases: Hexagonal II phase (HII) Lipid molecules in the HII phase pack inversely to the packing observed in the hexagonal I phase described above. This phase has the polar head groups on the inside and the hydrophobic hydrocarbon tails on the outside in solution. The packing ratio for this phase is larger than one, which corresponds to inverted-cone packing. Hexagonal phases: Extended arrays of long tubes will form (as in the hexagonal I phase), but because of the way the polar head groups pack, the tubes take the shape of aqueous channels. These arrays can stack together like pipes. This way of packing may leave a finite hydrophobic surface in contact with water on the outside of the array. However, the otherwise energetically favorable packing apparently stabilizes this phase as a whole. It is also possible that an outer monolayer of lipid coats the surface of the collection of tubes to protect the hydrophobic surface from interaction with the aqueous phase. Hexagonal phases: It is suggested that this phase is formed by lipids in solution in order to compensate for the hydrophobic effect.
The tight packing of the lipid head groups reduces their contact with the aqueous phase. This, in turn, reduces the amount of ordered but unbound water molecules. The most common lipids that form this phase include phosphatidylethanolamine (PE) when it has unsaturated hydrocarbon chains. Diphosphatidylglycerol (DPG, otherwise known as cardiolipin) in the presence of calcium is also capable of forming this phase. Hexagonal phases: Techniques for detection There are several techniques used to map out which phase is present during perturbations applied to the lipid. These perturbations include pH changes, temperature changes, pressure changes, volume changes, etc. Hexagonal phases: The most common technique used to study phospholipid phase behavior is phosphorus nuclear magnetic resonance (³¹P NMR). In this technique, different and characteristic powder patterns are observed for the lamellar, hexagonal, and isotropic phases. Other techniques that offer definitive evidence of the existence of lamellar and hexagonal phases include freeze-fracture electron microscopy, X-ray diffraction, differential scanning calorimetry (DSC), and deuterium nuclear magnetic resonance (²H NMR). Hexagonal phases: Additionally, negative-staining transmission electron microscopy has been shown to be a useful tool to study lipid bilayer phase behavior and polymorphism into lamellar, micellar, unilamellar liposome, and hexagonal aqueous-lipid structures in aqueous dispersions of membrane lipids. As the water-soluble negative stain is excluded from the hydrophobic part (fatty acyl chains) of lipid aggregates, the hydrophilic head-group portions of the aggregates stain dark and clearly mark their outlines.
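The "packing ratio" used above to separate the hexagonal I and II phases is usually formalized as the critical packing parameter P = v/(a0·lc). The following minimal sketch is not from the article: the function names, the phase thresholds (the commonly quoted textbook boundaries), and the illustrative molecular values are all assumptions.

```python
def packing_parameter(v_nm3: float, a0_nm2: float, lc_nm: float) -> float:
    """Critical packing parameter P = v / (a0 * lc): hydrocarbon chain
    volume v, optimal head-group area a0, critical chain length lc."""
    return v_nm3 / (a0_nm2 * lc_nm)

def likely_phase(p: float) -> str:
    """Aggregate geometry a given packing parameter tends to favor."""
    if p < 1 / 3:
        return "spherical micelles"
    if p < 1 / 2:
        return "cylindrical micelles (hexagonal I)"
    if p <= 1:
        return "bilayers / lamellar phase"
    return "inverted structures (hexagonal II)"

# Illustrative (not measured) values for a double-chained lipid
# with a small head group:
p = packing_parameter(v_nm3=1.0, a0_nm2=0.6, lc_nm=1.75)
print(f"P = {p:.2f}: {likely_phase(p)}")  # P = 0.95: bilayers / lamellar phase
```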
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Knot invariant** Knot invariant: In the mathematical field of knot theory, a knot invariant is a quantity (in a broad sense) defined for each knot which is the same for equivalent knots. The equivalence is often given by ambient isotopy but can be given by homeomorphism. Some invariants are indeed numbers (algebraic), but invariants can range from the simple, such as a yes/no answer, to those as complex as a homology theory (for example, "a knot invariant is a rule that assigns to any knot K a quantity φ(K) such that if K and K' are equivalent then φ(K) = φ(K')."). Research on invariants is motivated not only by the basic problem of distinguishing one knot from another but also by the desire to understand fundamental properties of knots and their relations to other branches of mathematics. Knot invariants are thus used in knot classification, both in "enumeration" and "duplication removal". Knot invariant: A knot invariant is a quantity defined on the set of all knots, which takes the same value for any two equivalent knots. For example, a knot group is a knot invariant. Knot invariant: Typically a knot invariant is a combinatorial quantity defined on knot diagrams. Thus if two knot diagrams differ with respect to some knot invariant, they must represent different knots. However, as is generally the case with topological invariants, if two knot diagrams share the same values with respect to a single knot invariant, we still cannot conclude that the knots are the same. Knot invariant: From the modern perspective, it is natural to define a knot invariant from a knot diagram. Of course, it must be unchanged (that is to say, invariant) under the Reidemeister moves ("triangular moves"). Tricolorability (and n-colorability) is a particularly simple and common example (a brute-force check is sketched at the end of this entry). Other examples are knot polynomials, such as the Jones polynomial, which are currently among the most useful invariants for distinguishing knots from one another, though it is not yet known whether there exists a knot polynomial which distinguishes all knots from each other. However, there are invariants which distinguish the unknot from all other knots, such as Khovanov homology and knot Floer homology. Knot invariant: Other invariants can be defined by considering some integer-valued function of knot diagrams and taking its minimum value over all possible diagrams of a given knot. This category includes the crossing number, which is the minimum number of crossings for any diagram of the knot, and the bridge number, which is the minimum number of bridges for any diagram of the knot. Knot invariant: Historically, many of the early knot invariants were not defined by first selecting a diagram but were defined intrinsically, which can make computing some of these invariants a challenge. For example, knot genus is particularly tricky to compute, but can be effective (for instance, in distinguishing mutants). Knot invariant: The complement of a knot itself (as a topological space) is known to be a "complete invariant" of the knot by the Gordon–Luecke theorem, in the sense that it distinguishes the given knot from all other knots up to ambient isotopy and mirror image. Some invariants associated with the knot complement include the knot group, which is just the fundamental group of the complement. The knot quandle is also a complete invariant in this sense, but it is difficult to determine whether two quandles are isomorphic.
The peripheral subgroup can also work as a complete invariant. By Mostow–Prasad rigidity, the hyperbolic structure on the complement of a hyperbolic link is unique, which means the hyperbolic volume is an invariant for these knots and links. Volume, and other hyperbolic invariants, have proven very effective, utilized in some of the extensive efforts at knot tabulation. Knot invariant: In recent years, there has been much interest in homological invariants of knots which categorify well-known invariants. Heegaard Floer homology is a homology theory whose Euler characteristic is the Alexander polynomial of the knot. It has proven effective in deducing new results about the classical invariants. Along a different line of study, there is a combinatorially defined cohomology theory of knots called Khovanov homology whose Euler characteristic is the Jones polynomial. This has recently been shown to be useful in obtaining bounds on slice genus whose earlier proofs required gauge theory. Mikhail Khovanov and Lev Rozansky have since defined several other related cohomology theories whose Euler characteristics recover other classical invariants. Catharina Stroppel gave a representation-theoretic interpretation of Khovanov homology by categorifying quantum group invariants. Knot invariant: There is also growing interest from both knot theorists and scientists in understanding "physical" or geometric properties of knots and relating them to topological invariants and knot type. An old result in this direction is the Fáry–Milnor theorem, which states that if the total curvature of a knot K in ℝ³ satisfies ∮_K κ ds ≤ 4π, where κ(p) is the curvature at p, then K is the unknot. Therefore, for knotted curves, ∮_K κ ds > 4π. Knot invariant: An example of a "physical" invariant is ropelength, which is the length of unit-diameter rope needed to realize a particular knot type. Other invariants: Linking number – Numerical invariant that describes the linking of two closed curves in three-dimensional space Finite type invariant (or Vassiliev or Vassiliev–Goussarov invariant) Stick number – Smallest number of edges of an equivalent polygonal path for a knot
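Tricolorability, mentioned above as a particularly simple diagrammatic invariant, lends itself to a brute-force check. The sketch below is illustrative rather than taken from any source cited here; the crossing encoding and the diagram data are our own conventions.

```python
from itertools import product

def is_tricolorable(num_arcs, crossings):
    """Brute-force tricolorability check on a knot diagram.

    Each crossing is a tuple (over, under_a, under_b) naming the arc
    that passes over and the two under-arc segments meeting there.
    A coloring c: arcs -> {0, 1, 2} is valid when at every crossing
    2*c[over] == c[under_a] + c[under_b] (mod 3), i.e. the three arcs
    are all the same color or all different.  The diagram is
    tricolorable if some valid coloring uses at least two colors.
    """
    for colors in product(range(3), repeat=num_arcs):
        if len(set(colors)) < 2:
            continue  # skip the three trivial one-color colorings
        if all((2 * colors[o] - colors[a] - colors[b]) % 3 == 0
               for o, a, b in crossings):
            return True
    return False

# Standard trefoil diagram: three arcs, each passing over once.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(is_tricolorable(3, trefoil))       # True, so the trefoil is knotted

# Standard four-crossing figure-eight diagram: not tricolorable.
figure_eight = [(0, 2, 3), (2, 0, 1), (1, 3, 0), (3, 1, 2)]
print(is_tricolorable(4, figure_eight))  # False
```

Because the coloring condition is preserved by all three Reidemeister moves, the answer depends only on the knot, not on the chosen diagram, which is exactly what makes it an invariant.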
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LivK RNA motif** LivK RNA motif: The livK RNA motif describes a conserved RNA structure that was discovered using bioinformatics. The livK motif is detected only in the species Pseudomonas syringae. It is found in the potential 5' untranslated regions (5' UTRs) of livK genes and of the downstream livM and livH genes, as well as in the 5' UTRs of amidase genes. The liv genes are predicted to encode transporters of branched-chain amino acids, i.e., leucine, isoleucine or valine. The specific reaction catalyzed by the amidase gene products is not predicted.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acadesine** Acadesine: Acadesine (INN), also known as 5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside, AICA-riboside, and AICAR, is an AMP-activated protein kinase activator which is used for the treatment of acute lymphoblastic leukemia and may have applications in treating other disorders such as diabetes. AICAR has been used clinically to treat and protect against cardiac ischemic injury. The drug was first used in the 1980s as a method to preserve blood flow to the heart during surgery. Acadesine is an adenosine-regulating agent developed by PeriCor Therapeutics and licensed to Schering-Plough in 2007 for phase III studies. The drug is a potential first-in-class agent for prevention of reperfusion injury in CABG surgery. Schering began patient enrollment in phase III studies in May 2009. The trial was terminated in late 2010 based on an interim futility analysis. Chemistry: Reaction of 2-bromo tribenzoyl ribose with diaminomaleonitrile results in the displacement of the anomeric halogen by one of the amino groups and the formation of the aminosugar, largely as the β-anomer. Treatment of this product with methyl orthoformate in the presence of a base leads to the replacement of the alkoxy groups in the orthoformate by the adjacent amines, resulting in the formation of the imidazole ring. Reaction with alkoxide then converts the nitrile nearest the sugar to an iminoester; the benzoyl groups are cleaved in the process. Hofmann rearrangement in the presence of bromine and a base converts the iminoester to the corresponding primary amine. Basic hydrolysis then converts the remaining nitrile to an amide, affording acadesine. Medical use: A brief period of coronary arterial occlusion followed by reperfusion prior to prolonged ischemia is known as preconditioning, and it has been shown to be protective. Preconditioning preceding myocardial infarction may delay cell death and allow for greater salvage of myocardium through reperfusion therapy. AICAR has been shown to precondition the heart shortly before or during ischemia. AICAR triggers a preconditioned anti-inflammatory state by increasing NO production from endothelial nitric oxide synthase. When AICAR is given 24 hours prior to reperfusion, it prevents post-ischemic leukocyte-endothelial cell adhesive interactions through increased NO production. AICAR-dependent preconditioning is also mediated by an ATP-sensitive potassium channel and a heme oxygenase-dependent mechanism. It increases AMPK-dependent recruitment of ATP-sensitive K+ channels to the sarcolemma, causing the action potential duration to shorten and preventing calcium overload during reperfusion. The decrease in calcium overload prevents inflammation activation by ROS. AICAR also increases AMPK-dependent glucose uptake through translocation of GLUT-4, which is beneficial for the heart during post-ischemic reperfusion. The increase in glucose during AICAR preconditioning lengthens the period for preconditioning up to 2 hours in rabbits and 40 minutes in humans undergoing coronary ligation. As a result, AICAR reduces the frequency and size of myocardial infarcts by up to 25% in humans, allowing improved blood flow to the heart. The treatment has also been shown to decrease the risk of an early death and to improve recovery after surgery from an ischemic injury. Pharmacology and use in doping: Acadesine acts as an AMP-activated protein kinase agonist.
It stimulates glucose uptake and increases the activity of p38 mitogen-activated protein kinases α and β in skeletal muscle tissue, as well as suppressing apoptosis by reducing production of reactive oxygen compounds inside the cell. In 2008, researchers at the Salk Institute discovered that acadesine injected in mice significantly improved their performance in endurance-type exercise, apparently by converting fast-twitch muscle fibers to the more energy-efficient, fat-burning, slow-twitch type. They also looked at the administration of GW 501516 (also called GW1516) in combination with acadesine. Given to mice that did not exercise, this combination activated 40% of the genes that were turned on when mice were given GW1516 and made to exercise. This result drew attention to the compound as a possible athletic endurance aid. One of the lead researchers from this study has developed a urine test to detect it and has made the test available to the International Olympic Committee, and the World Anti-Doping Agency (WADA) added acadesine to its prohibited list from 2009 onwards. The British Medical Journal reported in 2009 that WADA had found evidence that acadesine was used by cyclists in the 2009 Tour de France.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Atmospheric Chemistry Suite** Atmospheric Chemistry Suite: The Atmospheric Chemistry Suite (ACS) is a science payload consisting of three infrared spectrometer channels aboard the ExoMars Trace Gas Orbiter (TGO), orbiting Mars since October 2016. The three channels are the near-infrared channel (NIR), the mid-infrared channel (MIR), and the far-infrared channel (FIR, also called TIRVIM). Atmospheric Chemistry Suite: The ACS was proposed in 2011 by the Russian Academy of Sciences and eventually accepted by the European Space Agency (ESA) and Roscosmos as one of two Russian instruments onboard TGO. The instrument was funded by Roscosmos and the Centre national d'études spatiales (CNES) of France, and includes components from both Russia and France. Its development and fabrication were under Russian leadership. The functionality of all three channels was confirmed during the cruise to Mars. Objectives: The main objective of the ACS suite is to make an inventory of, and map, minor atmospheric species or trace gases in the atmosphere of Mars. This will allow scientists to establish upper limits on the methane content, and possibly to detect sulfur dioxide (SO2), a gas of volcanic origin. Channels: The near-infrared channel (NIR) is a compact spectrometer operating in the range of 0.7–1.7 μm with a resolving power of λ/Δλ ~ 20,000 and an instantaneous spectral coverage of 10–20 nm. It is designed to operate in nadir and in solar occultation modes. The mid-infrared channel (MIR) is an echelle spectrometer with crossed dispersion, designed exclusively for solar occultation measurements in the 2.2–4.4 μm spectral range with a resolving power of approximately 50,000. The far-infrared channel covers thermal infrared spectroscopy; it is a Fourier spectrometer called TIRVIM. It has an aperture of ~5 cm and measures the spectrum over 1.7–17 μm. Its main task will be temperature sounding of the Martian atmosphere in the 15-μm CO2 band. TIRVIM has roughly ten times the performance of the PFS spectrometer on the Mars Express orbiter. Methane: Of particular interest to this astrobiology mission is the detection and characterization of atmospheric methane (CH4), as it may be of geological or biological origin. Large differences in abundance were measured between observations taken in 2003 and 2006, and in 2014 NASA reported that the Curiosity rover had detected a tenfold increase ('spike') in methane in the atmosphere in late 2013 and early 2014. This suggests that the methane was locally concentrated and is probably seasonal. Because methane on Mars would quickly break down due to ultraviolet radiation from the Sun and chemical reactions with other gases, its persistent presence in the atmosphere also implies the existence of an unknown source to continually replenish the gas. Measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active."
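To make the quoted resolving powers concrete: a spectrometer with resolving power R = λ/Δλ separates spectral features about Δλ = λ/R apart. A minimal sketch (the helper function and the sample wavelengths are illustrative, not part of the ACS documentation):

```python
def resolution_nm(wavelength_um: float, resolving_power: float) -> float:
    """Smallest resolvable wavelength difference, in nanometres,
    for a spectrometer with resolving power R = lambda / dlambda."""
    return wavelength_um * 1e3 / resolving_power

# MIR channel (R ~ 50,000) near the 3.3 um methane absorption band:
print(f"MIR: {resolution_nm(3.3, 50_000):.3f} nm")  # ~0.066 nm

# NIR channel (R ~ 20,000) at 1.3 um:
print(f"NIR: {resolution_nm(1.3, 20_000):.3f} nm")  # ~0.065 nm
```

Resolution at this level is what allows individual absorption lines of a trace gas such as methane to be separated from neighboring lines of more abundant species.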
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Car dealerships in the United States** Car dealerships in the United States: In the United States, a car dealership is a business that sells cars. A car dealership can either be a franchised dealership selling new and used cars, or a used car dealership, selling only used cars. In most cases, dealerships provide car maintenance and repair services as well as trade-in, leasing, and financing options for customers. Used car dealers can carry cars from various manufacturers, while nearly all new car dealerships are franchises associated with one or more manufacturers. Some new car dealerships may carry multiple brands from the same manufacturer. In some locales, dealerships have been consolidated and a corporation may control a chain of dealerships representing several different manufacturers. Car dealerships in the United States: In the United States, direct manufacturer auto sales are prohibited in almost every state by franchise laws requiring that new cars be sold only by dealers. Economists have characterized these regulations as a form of rent-seeking that extracts rents from manufacturers of cars, increases costs for consumers, and limits entry of new car dealerships while raising profits for incumbent car dealers. Research shows that as a result of these laws, retail prices for cars are higher than they otherwise would be. Selling cars: Most car dealerships display their inventory in a showroom and on a car lot. Under U.S. federal law, all new cars must carry a sticker showing the offering price and summarizing the vehicle's features. Salespersons, predominantly those who work only on commission, negotiate with buyers to determine a final sales price. In many cases, this includes negotiating the price of a trade-in: the dealer's purchase of the buyer's current automobile. Negotiation, from the dealership's perspective, is the course of dealing that begins when a salesperson starts working a deal and continues to the point where the customer makes an offer on the new vehicle, often including the customer's current vehicle as part of the deal. Selling cars: The salesperson then brings the offer, plus a sign of good faith from the customer, which can be a check with a deposit or a credit card, to the sales manager. This is also known as the booking amount, which is usually refundable. The sales manager returns options for the monthly payment, financing, and pricing available to the customer in a process referred to as "desking" the deal. If the customer and sales manager agree on the terms, they sign off on the option chosen. The next step is a purchase and sales agreement or a sales agreement, and the actual monetary down payment is generated. The manager and customer sign this paperwork, and then the customer is handed off to the "box," or the finance and insurance office, where various add-ons are often sold, including special waxing, wheel protection, or, often, extended warranty services. The final paperwork is also printed out at this phase. Although some believe that desking is part of the negotiation process, it only occurs once the salesperson has a legitimate offer on the vehicle from the customer and can hand the sales manager a token of good faith, as noted. Selling cars: A car dealer orders vehicles from the manufacturer for inventory and pays interest (called flooring or floor planning). Dealer holdbacks are a system of payments made by manufacturers to their dealers.
The holdback payments assist the dealer's ability to stock its inventory of vehicles and improve dealer profitability. Typically the holdback amount is around 1% to 3% of the vehicle's manufacturer's suggested retail price (MSRP). The holdback is usually not a negotiable part of the price a consumer would pay for the vehicle, but dealers will give up the holdback to get rid of a car that has been sitting in inventory for a long time, or if the additional sale will bring them up to the manufacturer's additional incentive payments for reaching unit bonus targets. The holdback was originally designed to help offset the cost the new car dealer incurs in paying interest on the money borrowed to keep the car in inventory, but it in effect lowers the dealer's gross profit, and thus the sales commissions paid to employees. The holdback allows dealerships to promote at- or near-invoice price sales and still achieve comfortable profits on such transactions. With the advent of the Internet, the process of selling cars has undergone a considerable change. More than 70% of car purchases in the United States start with research on the Internet. It empowers buyers with knowledge of the features of comparable cars and the prices and discounts offered by different dealers within the same geographic area. This helps the buyers during price negotiations and puts further pressure on the profit margin of the dealer. Trading for cars: To an average dealer, the actual cash value of a trade is an opinion of what the vehicle could reasonably be sold for at auction in six weeks to three months, less any reconditioning costs, should the dealer be unable or unwilling to re-sell the trade to the public. Since most states have requirements for a dealer to warranty or even guarantee a used vehicle for a certain amount of time and/or mileage if sold to the public at a certain price, a dealer must make a profit selling the previously traded car (now a used car). Trading for cars: Trade-in value is an important facet of the car deal. Many websites offer trade-in value estimates. However, most of these values are estimated from a theoretical chart that may or may not be based on recent average sales prices of a particular make and model. If a particular make and model has less accurate data available from recent auction prices, the dealer will be more cautious in the appraisal of the car. Trading for cars: A dealer may have a manager who appraises each vehicle offered for trade. This person will often be the person who also attends used car auctions, often buying and selling on behalf of the dealer. This person will have a realistic idea of the actual cash value of the trade. A dealer will look at a trade for body damage, windshield damage, engine noise, and known problems with a particular model, and price it to re-sell at a profit. Additional services: Most car dealers offer a variety of financing options for the purchase of cars, including loans and leases. Financing can be highly profitable for dealerships. There have been some scandals involving discriminatory or predatory lending practices, and as a result, vehicle financing is heavily regulated in many states. For example, in California, there must be several signs prominently posted on the premises, and the contract must contain several prominent warnings, such as the words "THERE IS NO COOLING-OFF PERIOD". Although the terms of installment contracts are negotiated by the dealer with the buyer, few dealers make loans directly to consumers.
In the business, the dealers that do make loans directly are called "Buy Here Pay Here" dealerships. These stores can make loans directly to customers because they have some means of recovering the vehicle if the customer defaults on the loan. The means by which "Buy Here Pay Here" dealers can recover a vehicle vary by state. Additional services: Most dealers utilize indirect lenders. This means that the installment loan contracts are immediately "assigned" or "resold" to third-party finance companies, often an offshoot of the car's manufacturer such as GM Financial or Ally Financial, or banks, which pay the dealer and then recover the balance by collecting the monthly installment payments promised by the buyer. To facilitate such assignments, dealers generally use one of several standard form contracts pre-approved by lenders. The most popular family of contracts for the retail installment sale of vehicles in the U.S. is sold by business process vendor Reynolds and Reynolds; their contracts have been the subject of extensive (and frequently hostile) judicial interpretation in lawsuits between dealers and customers. Additional services: The dealer has the option of marking up the interest rate of the contract and retaining a portion of that markup. For example, a bank may give a wholesale money rate of 6.75% and the dealer may give the consumer an interest rate of 7.75%. The bank would then pay the dealer the difference or a portion thereof. This is a regular practice because the dealership is selling the contract to a bank just as it sold a car to the customer. Most banks or states strictly limit the amount a contract rate may be marked up (by giving a range of rates at which they will buy the contract). In many cases, this amounts to little difference in the customer's payment, as the amount borrowed is small by comparison to a mortgage and the term shorter. Additional services: Customers may also find that a dealer can get them better rates than they can with their local bank or credit union. However, manufacturers often offer a low interest rate OR a cash rebate if the vehicle is not financed through the dealer. Depending upon the amount of the rebate, it is prudent for the consumer to check whether applying a larger rebate results in a lower payment, because they are financing less of the purchase. For example, if a dealer has an interest rate offer of 7.9% financing OR a $2,000.00 rebate and a consumer's lending source offers 8.25%, the consumer should compare at the credit union what the payments and total interest paid would be if $2,000.00 less were financed at the credit union (a worked comparison is sketched at the end of this entry). The dealer can have its lending institution check a consumer's credit. A consumer can also allow his or her lending source to do the same and compare the results. Most financing available at new car dealerships is offered by the financing arm of the vehicle manufacturer or a local bank. Additional services: Dealers may also offer other services, typically through the Finance and Insurance office. These additional services can include: Service contracts: While any vehicle sold in the United States now comes standard with some degree of manufacturer's warranty coverage, customers have a wide range of choices to cover their vehicle from mechanical failure beyond that point. Service contracts may have the same terms of coverage as the vehicle's original manufacturer's warranty, but often they do not. Often service contracts carry a deductible, as might any insurance contract.
Because of the vast number of choices, it is important for consumers to be aware of the coverages before entering into an agreement. Usually, these service contracts do not cover regular maintenance items such as brakes, fluids, or filters. In some states, particularly Florida, the cost of such agreements is heavily regulated. There are three main types of service contracts offered. The first is offered by the manufacturer through the dealership and is usually good at any dealership in the US that holds the same franchise. When warranty repair work is required, the dealer submits a claim to the manufacturer and is reimbursed for the repair, less the deductible paid by the consumer. Under this type of service agreement, there is usually no incentive for the dealer to do anything but repair the car, as reimbursement from the manufacturer is usually profitable. Additional services: The second type of service contract is usually a simple insurance policy that the dealer purchases wholesale, administered through a third party working for the dealer. This "third party" can often be a major insurance company. The money collected by the dealer from the consumer is put in a "reserve" fund for the length and/or term of the service contract. When a repair is required, the dealer authorizes the repair with the third-party administrator, usually before the repair is done. The third party deducts the repair expense from the dealer's reserve fund. The fewer payments or deductions made on the service contract, the greater the profit to the dealer, as any unused portion of the "reserve" is given back to the original selling dealer, less an administration fee, when the service contract expires. Additional services: The third type of service contract can be purchased directly from a few automobile insurance companies. Additional services: GAP insurance: GAP insurance is protection for the loan in the event that the vehicle is lost as the result of an accident or theft. A GAP policy ensures that in the event of a total loss, the remaining payments are made on the loan so that a customer does not have to pay for a vehicle he or she no longer possesses. Many states regulate GAP insurance (New York, for example, does not allow dealerships to profit from the sale of GAP insurance). Additional services: Credit/Life/Disability insurance: This kind of insurance is a profit center for the dealership, working similarly to the second type of service contract described above, and cannot be required as a condition of the loan. Customers/borrowers often have the option of purchasing protection for their loan should the borrower become disabled and unable to work during the time the borrower is required to make payments. Often the coverage begins on the 31st or 32nd day of disability, meaning the borrower has to be unable to work for a period greater than 30 days before a claim can be filed. Often the borrower is required to submit paperwork to validate a disability claim. Credit life insurance will usually cover the entire remaining balance of a loan if the borrower dies within the term of the contract. Customers can often obtain this coverage from their own insurance companies; consumers should compare rates and policies, and see Consumer Reports for their opinion. Aftermarket accessories: Many dealerships offer accessories that are not offered by the manufacturer directly.
These can be dangerous for consumers, as some dealerships engage in illegal "payment packing", that is, quoting an inflated monthly payment for the car in order to entice customers to agree to purchase aftermarket products offered at inaccurately low costs. One salesman for an accessories distributor was fired after he started asking questions about the legality of this practice; the resulting jury verdict of $480,003 against his former employer for wrongful termination in violation of public policy was upheld in full by a California appellate court in 2007. As with Credit/Life/Disability insurance, there are many ways a consumer can purchase these options outside the dealership. Additional services: Maintenance agreements: Many dealerships that have their own service shops will offer pre-paid maintenance agreements. These are sometimes offered directly through the manufacturer (such as Saturn's Basic Care or Car Care programs) or by the dealership alone. Because of the vast differences in programs that can exist from dealership to dealership, it is important to know what is covered under the plan and what the recommended service intervals are. Additional services: Lease Here Pay Here Contracts: With lease-to-purchase programs, customers are given a vehicle to lease for a period that can range from 12 months up to 36 months. These vehicles are used, as opposed to the new ones typically obtained from a new car dealer. The dealership that leases these cars is often scrutinized because it normally serves customers who do not have the strong finances needed to maintain the obligation. The dealers are not held to the lending standards that most banks are, and in bankruptcy proceedings they are at times completely exempt in cases of default, leaving the dealership the opportunity to repossess at any time regardless of breach of contract. Other advocates say the monthly obligation on leases is cheaper because there are no sales taxes on the vehicle, as opposed to the amount a buyer may pay in loan payments on a new or used car purchase. Additional services: Car dealers also provide maintenance and, in some cases, repair services for cars. New car dealerships are more likely to provide these services, since they usually stock and sell parts and process warranty claims for the manufacturers they represent. Maintenance is typically a high-margin service and represents a significant profit center for automotive dealers. Regulation: In the United States, most aspects of operating a car dealership are regulated at the state level. Car titles are issued and transferred by the individual states through their respective Departments of Motor Vehicles. The purchase price of a vehicle usually includes various fees which the dealer forwards to the state DMV to transfer the vehicle's title to the buyer. In many states, the DMVs also license and regulate car dealerships. In many states, car dealerships can submit all necessary forms to the DMV on behalf of the customer and are authorized to issue temporary paperwork to the customer to prove that the transaction is in process, allowing the customer to avoid a trip to the nearest DMV office. Regulation: Consumer complaints against car dealerships are usually investigated by the Attorney General's office in the state where the dealership is located.
In states where the DMV licenses and regulates car dealerships, the DMV may have responsibility for initially handling consumer complaints, and the state AG's office becomes involved only when there is evidence that a dealer may have committed a crime. Perceptions: Customer experience According to one survey, more than half of dealership customers would prefer to buy directly from the manufacturer, without any monetary incentives to do so. One analyst report estimates that a direct sales model would cut the cost of a vehicle by 8.6%, implying that an even greater demand currently exists for a direct manufacturer sales model. However, laws in many U.S. states prohibit manufacturers from selling directly, requiring customers to buy new cars through a dealer. Perceptions: Discrimination Studies have found that some auto dealerships charge higher interest rates or otherwise raise their prices for women and ethnic minorities, including Asians and African Americans. These issues have sometimes resulted in lawsuits, including class action lawsuits, against the dealers on the basis of discrimination based on nationality. Largest dealerships: New cars 2020 stats AutoNation, 249,654 units Penske Automotive Group, 178,437 units Lithia Motors, 171,168 units Group 1 Automotive, 140,221 units Hendrick Automotive, 102,761 units Asbury Automotive Group, 95,165 units Sonic Automotive, 93,281 units Larry H. Miller Dealerships, 61,097 units Ken Garff Automotive Group, 53,687 units David Wilson Automotive Group, 43,943 units Used cars 2020 stats CarMax, 832,640 units Carvana, 244,111 units AutoNation, 241,182 units Penske Automotive Group, 233,469 units Lithia Motors, 183,230 units Sonic Automotive, 159,025 units Group 1 Automotive, 140,118 units Hendrick Automotive, 94,356 units Asbury Automotive Group, 80,537 units Larry H. Miller Dealerships, 50,751 units
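The rate-versus-rebate comparison described under "Additional services" can be made concrete. In the sketch below, the 7.9% and 8.25% rates and the $2,000 rebate come from the example in the text, while the $25,000 price and 60-month term are illustrative assumptions.

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortized loan payment."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def summarize(label: str, principal: float, annual_rate: float, months: int) -> None:
    pay = monthly_payment(principal, annual_rate, months)
    print(f"{label}: ${pay:,.2f}/month, "
          f"total interest ${pay * months - principal:,.2f}")

price, rebate, months = 25_000, 2_000, 60
summarize("Dealer financing at 7.9%, no rebate", price, 0.079, months)
summarize("Credit union at 8.25% with $2,000 rebate", price - rebate, 0.0825, months)
```

Under these particular assumptions the rebate route comes out cheaper overall, but the outcome depends on the price, rates, and term, which is why the text recommends running the comparison rather than assuming either answer.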
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Larazotide** Larazotide: Larazotide (INN; also known as AT-1001; formulated as the salt with acetic acid, larazotide acetate) is a synthetic eight-amino-acid peptide that functions as a tight junction regulator and reverses leaky junctions to their normally closed state. It is being studied in people with celiac disease. Structure: Larazotide is an octapeptide whose structure is derived from a protein (zonula occludens toxin) secreted by Vibrio cholerae. It has the amino acid sequence GGVLVQPG, the IUPAC condensed descriptor H-Gly-Gly-Val-Leu-Val-Gln-Pro-Gly-OH, and the systematic name glycylglycyl-L-valyl-L-leucyl-L-valyl-L-glutaminyl-L-prolyl-glycine. Mechanism of action: Larazotide is an inhibitor of paracellular permeability. In celiac disease, one pathway that allows fragments of gliadin protein to get past the intestinal epithelium and subsequently trigger an immune response begins with binding of indigestible gliadin fragments to the chemokine CXC motif receptor 3 (CXCR3) on the luminal side of the intestinal epithelium. This leads to the induction of myeloid differentiation factor 88 (MYD88) and the release of zonulin into the lumen. Zonulin then binds to the epidermal growth factor receptor (EGFR) and protease-activated receptor 2 (PAR2) in the intestinal epithelium. This complex then initiates a signalling pathway that eventually results in tight junction disassembly and increased intestinal permeability. Larazotide acetate intervenes in the middle of this pathway by blocking zonulin receptors, thereby preventing tight junction disassembly and the associated increase in intestinal permeability. Origin: Larazotide acetate, which decreases intestinal permeability, is a synthetic peptide based on a Vibrio cholerae enterotoxin called zonula occludens toxin. An investigation was carried out to discover which specific part of this toxin was responsible for its activity on tight junctions. Several mutants were constructed and tested for their biological activity and their ability to bind to intestinal epithelial cells in culture. The responsible region was located near the carboxyl terminus of the toxin protein. This region coincided with a peptide product generated by Vibrio cholerae. The eight-amino-acid sequence in this region was shared with zonulin, an endogenous protein involved in tight junction modulation. This sequence was later designated larazotide acetate. Research: It has been used in experiments related to arthritis.
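As a quick plausibility check on the structure section, the peptide's approximate molar mass can be computed from the GGVLVQPG sequence. The residue masses below are standard rounded average values, not figures from the article.

```python
# Average residue masses (g/mol) for the amino acids in larazotide.
RESIDUE_MASS = {
    "G": 57.052,   # glycine
    "V": 99.133,   # valine
    "L": 113.159,  # leucine
    "Q": 128.131,  # glutamine
    "P": 97.117,   # proline
}
WATER = 18.015  # one water for the free N- and C-termini

def peptide_mass(sequence: str) -> float:
    """Approximate average molar mass of a linear peptide."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

print(f"{peptide_mass('GGVLVQPG'):.1f} g/mol")  # ~725.8 g/mol
```

The result, roughly 726 g/mol, is consistent with the reported molar mass of the free larazotide peptide (the acetate salt weighs correspondingly more).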
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Formal wear** Formal wear: Formal wear or full dress is the Western dress code category applicable for the most formal occasions, such as weddings, christenings, confirmations, funerals, Easter and Christmas traditions, in addition to certain state dinners, audiences, balls, and horse racing events. Formal wear is traditionally divided into formal day and evening wear, implying morning dress (morning coat) before 6 p.m. and white tie (dress coat) after 6 p.m. Other generally permitted alternatives, though, are the most formal versions of ceremonial dresses (including court dresses, diplomatic uniforms and academic dresses), full dress uniforms, religious clothing, national costumes, and, most rarely, frock coats (which preceded the morning coat as default formal day wear in the 1820s-1920s). In addition, formal wear is often instructed to be worn with official full-size orders and medals. Formal wear: The protocol indicating particularly men's traditional formal wear has remained virtually unchanged since the early 20th century. Despite a decline following the counterculture of the 1960s, it remains observed in formal settings influenced by Western culture: notably around Europe, the Americas, South Africa, and Australia, as well as Japan. For women, although the fundamental customs for formal ball gowns (and wedding gowns) likewise apply, changes in fashion have been more dynamic. Traditional formal headgear for men is the top hat, and for women picture hats and the like, in a range of interpretations. Shoes for men are dress shoes, dress boots or pumps, and for women heeled dress pumps. In Western countries, a "formal" or white tie dress code typically means tailcoats for men and evening dresses for women. The most formal dress for women is a full-length ball or evening gown with evening gloves. Some white tie functions also request that the women wear long gloves past the elbow. Formal wear being the most formal dress code, it is followed by semi-formal wear, equivalently based around the daytime black lounge suit and the evening black tie (dinner suit/tuxedo), with evening gown for women. The male lounge suit and female cocktail dress in turn come only after this level, traditionally associated with informal attire. Notably, if a level of flexibility is indicated (for example "uniform, morning coat or lounge suit", as seen at the royal wedding of Prince Harry and Meghan Markle in 2018), the hosts tend to wear the most formal interpretation of that dress code in order to save guests the inconvenience of out-dressing them. Formal wear: Since the most formal versions of national costumes are typically permitted as supplementary alternatives to the uniformity of Western formal dress codes, and since, conversely, most cultures have at least intuitively applied some equivalent level of formality, the versatile framework of Western formal dress codes, open to amalgamation of international and local customs, has influenced its competitiveness as an international standard. From these social conventions derive in turn also the variants worn on related occasions of varying solemnity, such as formal political, diplomatic, and academic events, in addition to certain parties including award ceremonies, balls, fraternal orders, high school proms, etc. History: Clothing norms and fashions fluctuated regionally in the Middle Ages. More widespread conventions emerged around royal courts in Europe in the more interconnected Early Modern era.
The justacorps with cravat, breeches and tricorne hat was established as the first suit (in an archaic sense) in the 1660s-1790s. Day and evening versions were sometimes distinguished. By the Age of Revolution in the Late Modern era, it was replaced around the 1790s-1810s by the front-cutaway dress coat, previously casual wear associated with country leisure. At the same time, breeches were gradually replaced by pantaloons, as were tricorne hats by bicorne hats and ultimately by the top hat by the 19th century and thenceforth. By the 1820s, the dress coat was replaced as formal day wear by the dark, closed-front, knee-length frock coat. However, the dress coat from the transition period was maintained as formal evening wear in the form of white tie, remaining so to this day. By the 1840s, the first cutaway morning coats of contemporary style emerged, which would eventually replace the frock coat as formal day wear by the 1920s. Likewise, starting from the 1860s, fashion evolved to gradually introduce the more sportive, shorter suit jacket, likewise originating in country leisure wear. This evolved into the semi-formal evening wear black tie from the 1880s and the informal wear suit accepted by polite society from the 1920s. Dress codes: The dress codes counted as formal wear are the formal dress codes of morning dress for daytime and white tie for evenings. Although some consider strollers for daytime and black tie for the evening as formal, they are traditionally considered semi-formal attire, sartorially speaking a level below in formality. The clothes dictated by these dress codes for women are ball gowns. For many uniforms, the official clothing is unisex. Examples of this are court dress, academic dress, and military full dress uniform. Dress codes: Morning dress Morning dress is the daytime formal dress code, consisting chiefly, for men, of a morning coat, waistcoat, and striped trousers, and an appropriate dress for women. Dress codes: White tie The required clothing for men, in the evening, is roughly the following: Formal trousers, uncuffed, with stripes on the leg seams White piqué front or plain stiff-fronted shirt with a detachable wing collar, cuff links and shirt studs White piqué bow tie White piqué vest (waistcoat) An evening tailcoat (dress coat) Black patent leather court shoes Accessories Women wear a variety of dresses. See ball gowns, evening gowns, and wedding dresses. Business attire for women has a developmental history of its own and generally looks different from formal dress for social occasions. Supplementary alternatives: Many invitations to white tie events, like the last published edition of the British Lord Chamberlain's Guide to Dress at Court, explicitly state that national costume or national dress may be substituted for white tie. In general, each of the supplementary alternatives applies equally for both day attire and evening attire. Ceremonial dress Including court dresses, diplomatic uniforms, and academic dresses. Supplementary alternatives: Full dress uniform Prior to World War II, the formal style of military dress, often referred to as full dress uniform, was generally restricted to the British, British Empire and United States armed forces, although the French, Imperial German, Swedish and other navies had adopted their own versions of mess dress during the late nineteenth century, influenced by the Royal Navy. In the U.S. Army, the evening mess uniform, in either blue or white, is considered the appropriate military uniform for white-tie occasions.
The blue mess and white mess uniforms are black tie equivalents, although the Army Service Uniform with bow tie is accepted, especially for non-commissioned officers and newly commissioned officers. For white-tie occasions, of which there are almost none for the U.S. Army in the United States outside the national capital region, an officer must wear a wing-collar shirt with white tie and white vest. For black tie occasions, officers must wear a turndown collar with black tie and black cummerbund. The only outer coat prescribed for both black- and white-tie events is the army blue cape with branch-colour lining. Supplementary alternatives: Religious clothing Certain clergy wear, in place of white tie outfits, a cassock with ferraiolone, which is a light-weight ankle-length cape intended to be worn indoors. The colour and fabric of the ferraiolone are determined by the rank of the cleric and can be scarlet watered silk, purple silk, black silk or black wool. For outerwear, the black cape (cappa nigra), also known as a choir cape (cappa choralis), is most traditional. It is a long black woolen cloak fastened with a clasp at the neck and often has a hood. Cardinals and bishops may also wear a black plush hat or, less formally, a biretta. In practice, the cassock and especially the ferraiolone have become much less common, and no particular formal attire has appeared to replace them. The most formal alternative is a clerical waistcoat incorporating a Roman collar (a rabat) worn with a collarless French cuff shirt and a black suit, although this is closer to black tie than white tie. Supplementary alternatives: Historically, clerics in the Church of England would wear a knee-length cassock called an apron, accompanied by a tailcoat with silk facings but no lapels, for a white tie occasion. In modern times this is rarely seen. However, if worn, the knee-length cassock is now replaced with normal dress trousers. Cultural dress In Western formal state ceremonies and social functions, diplomats, foreign dignitaries, and guests of honour wear Western formal dress if not wearing their own national dress. Supplementary alternatives: Many cultures have formal day and evening dress, for example: Av Pak — both traditional and modern embroidered blouses worn by women in Cambodia for special occasions and traditional festivals Bandhgala — also called the Jodhpuri suit, a traditional dress worn by men in India Barong Tagalog — worn by men in the Philippines Bisht — worn by men with thawb and shmagh or ghutrah and agal on formal and religious occasions, e.g. Eid, in some Eastern Arab countries such as Saudi Arabia, Iraq, Kuwait, UAE, Qatar and Bahrain Batik shirt — worn by men and women in Indonesia; besides counting as formal wear, batik shirts are worn well into the informal level. Supplementary alternatives: Bunad — worn as formal dress by women and men in Norway Changshan — a long male version of the qipao, which originated during the Qing dynasty. It can be of cotton for ordinary wear, or of silk for those within aristocratic families. Beneath the changshan, the man generally wears a white mandarin-collar long-sleeved shirt and a pair of dark-colored long pants. Like the qipao, this changshan male gown has slits on both sides (at least knee level) as well. Worn either by Chinese men in the martial arts world or as attire for weddings to match the qipao the bride wears.
The qipao and changshan originated as Manchu dresses which government officials, but not ordinary civilians, were required to wear under the Qing dynasty's laws. Gradually, the general Han Chinese civilian population shifted from wearing traditional Chinese hanfu clothing to the qipao and changshan. Supplementary alternatives: Cheongsam — a modern female variation of the Qing dynasty silk dress, characterized by a high mandarin collar and side slits of varying lengths. It can be sleeveless, short-sleeved, elbow-length or long-sleeved, and has been adopted by most Chinese women as Chinese wear, depending on materials and occasions. Supplementary alternatives: Daura-Suruwal — worn as formal dress by men in Nepal Dashiki — worn by men in West African countries Dhoti — worn by men in Pakistan, India, Bangladesh, the Maldives, and by Tamil men in Sri Lanka Folkdräkt — worn as formal dress by women and men in Sweden Hátíðarbúningur — worn by men in Iceland to formal events such as state dinners and weddings Hanbok — worn by both men and women in Korea Highland dress with Scottish kilt — worn as formal dress by men in Scotland or of Scottish descent Kebaya — worn by women in Malaysia and Indonesia Mao suit — worn as diplomatic uniform and evening dress by officials of the People's Republic of China Sari — worn by women in India, Nepal, Bangladesh, Pakistan and Sri Lanka Shalwar kameez — worn by both men and women in Pakistan, India and Bangladesh Sherwani — worn by men in India and Pakistan Frock coat Although it ceased to be protocol-regulated required formal attire at the British royal court in 1936 at the order of the short-reigning King Edward VIII, the frock coat, which embodies the background of all contemporary civil formal wear, has not altogether vanished. Yet it is a rarity, mostly confined to infrequent appearances at certain weddings. Supplementary alternatives: The state funeral of Winston Churchill in 1965 included bearers in frock coats. To this day, King Tupou VI of Tonga (born 1959) has been a frequent wearer of frock coats at formal occasions. More recent fashion has also been inspired by frock coats: Prada's autumn editions of 2012, Alexander McQueen's menswear in the autumn of 2017, and Paul Smith's autumn 2018 collection.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Couvreface** Couvreface: A couvreface, in fortification architecture, is a small outwork built beyond the actual fortress ditch, in front of bastions or ravelins. It usually consisted of just a low rampart with a breastwork that protected its defending infantry. Another ditch in front of the work guarded it from immediate frontal assault. The function of couvrefaces was to protect the faces of the higher ravelin or bastion behind them from direct artillery fire. So that the couvreface and the works behind it could not come under simultaneous fire from an enemy battery along the line of the ramparts, they were not allowed to run parallel to one another. Similar to the couvreface is the larger counterguard which, by contrast, was designed to enable the positioning of guns. Couvrefaces are found particularly in Dutch and French fortifications from the 17th to the early 19th centuries.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pyridotriazolodiazepine** Pyridotriazolodiazepine: A pyridotriazolodiazepine is a heterocyclic compound containing pyridine and triazole rings fused to a diazepine ring. The pyridotriazolodiazepine system forms the central structure of zapizolam. Zapizolam is poorly researched, but it is probably a sedative and/or anxiolytic, like other benzodiazepine derivatives, especially the triazolobenzodiazepines (such as alprazolam).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anode ray** Anode ray: An anode ray (also positive ray or canal ray) is a beam of positive ions that is created by certain types of gas-discharge tubes. They were first observed in Crookes tubes in experiments by the German scientist Eugen Goldstein in 1886. Later work on anode rays by Wilhelm Wien and J. J. Thomson led to the development of mass spectrometry. Anode ray tube: Goldstein used a gas-discharge tube which had a perforated cathode. When an electrical potential of several thousand volts is applied between the cathode and anode, faint luminous "rays" are seen extending from the holes in the back of the cathode. These rays are beams of particles moving in the direction opposite to the "cathode rays", the streams of electrons that move toward the anode. Goldstein called these positive rays Kanalstrahlen, "channel rays" or "canal rays", because they passed through the holes or channels in the cathode. Anode ray tube: The process by which anode rays are formed in a gas-discharge anode ray tube is as follows. When the high voltage is applied to the tube, its electric field accelerates the small number of ions (electrically charged atoms) always present in the gas, created by natural processes such as radioactivity. These collide with atoms of the gas, knocking electrons off them and creating more positive ions. These ions and electrons in turn strike more atoms, creating still more positive ions in a chain reaction. The positive ions are all attracted to the negative cathode, and some pass through the holes in the cathode; these are the anode rays. Anode ray tube: By the time they reach the cathode, the ions have been accelerated to a sufficient speed that when they collide with other atoms or molecules in the gas, they excite the species to a higher energy level. In returning to their former energy levels, these atoms or molecules release the energy they had gained, and that energy is emitted as light. This light-producing process, called fluorescence, causes a glow in the region behind the cathode. Anode ray ion source: An anode ray ion source is typically an anode coated with the halide salt of an alkali or alkaline earth metal. Application of a sufficiently high electrical potential creates alkali or alkaline earth ions, and their emission is most brightly visible at the anode.
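The speed the ions reach can be estimated from energy conservation: an ion of charge q falling through a potential difference V gains kinetic energy qV, so v = √(2qV/m), ignoring the collisions described above. A minimal sketch, where the 5 kV figure is an illustrative value within the "several thousand volts" quoted:

```python
from math import sqrt

E_CHARGE = 1.602e-19      # elementary charge, C
PROTON_MASS = 1.673e-27   # kg

def ion_speed(charge_in_e: float, mass_kg: float, volts: float) -> float:
    """Speed (m/s) of an ion accelerated from rest through `volts`,
    from (1/2) m v^2 = q V."""
    return sqrt(2 * charge_in_e * E_CHARGE * volts / mass_kg)

# A singly charged hydrogen ion (proton) accelerated through 5 kV:
v = ion_speed(1, PROTON_MASS, 5_000)
print(f"{v:.2e} m/s")  # ~9.8e5 m/s, about 0.3% of the speed of light
```

Heavier ions reach proportionally lower speeds at the same voltage (v scales as 1/√m), which is the mass dependence that Wien and Thomson later exploited in early mass spectrometry.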
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electrogram** Electrogram: An electrogram (EGM) is a recording of electrical activity of organs such as the brain and heart, measured by monitoring changes in electric potential. Brain: Electroencephalography (EEG) An electroencephalogram (EEG) is an electrical recording of the activity of the brain taken from the scalp. An EEG can be used to diagnose seizures and sleep disorders, and to monitor the level of anesthesia during surgery. Electrocorticography (ECoG or iEEG) An electrocorticogram is an electrical recording of the brain measured intracranially, that is, from within the brain. Eye: Electrooculography (EOG) An electrooculogram (EOG) is an electrical recording of the potential between the cornea and the retina, which does not change with visual stimuli. An EOG can measure movements of the eyes and can help in the diagnosis of nystagmus. Electroretinography (ERG) An electroretinogram (ERG) is a recording of the electrical activity of the retina. Heart: Electrocardiogram (ECG) An electrocardiogram (ECG or EKG) is an electrical recording of the activity of the heart. The typical meaning of an "ECG" is the 12-lead ECG, which uses 10 wires or electrodes to record the signal across the chest. Interpretation of an ECG is the basis for diagnosing a number of cardiac diseases, including myocardial infarction (heart attack) and arrhythmias such as atrial fibrillation. Cardiac electrogram When electrical recordings are made from the skin, the recording is considered to be an ECG, as described above. However, when electrical recordings are made from within the heart, such as with an artificial cardiac pacemaker or during an electrophysiology study, the recorded signals are considered an "electrogram" instead of an ECG. These signals are not interpreted in the same manner as an ECG. Other muscles: An electromyogram (EMG) is an electrical recording of the activity of a muscle or muscle group. An EMG study can be combined with a nerve conduction study to diagnose neuromuscular diseases such as peripheral neuropathy and amyotrophic lateral sclerosis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sea (astronomy)** Sea (astronomy): The Sea or the Water is an area of the sky in which many water-related, and few land-related, constellations occur. This may be because the Sun passed through this part of the sky during the rainy season. Most of these constellations were named by Ptolemy: Aquarius the Water-bearer, Capricornus the Sea-goat, Cetus the Whale, Delphinus the Dolphin, Eridanus the Great River, Hydra the Water serpent, Pisces the Fishes, and Piscis Austrinus the Southern Fish (not named by Ptolemy). Sometimes included are the ship Argo and Crater the Water Cup. Sea (astronomy): Some water-themed constellations are newer, so are not in this region. They include Hydrus, the lesser water snake; Volans, the flying fish; and Dorado, the swordfish.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Umbral moonshine** Umbral moonshine: In mathematics, umbral moonshine is a mysterious connection between Niemeier lattices and Ramanujan's mock theta functions. It is a generalization of the Mathieu moonshine phenomenon connecting representations of the Mathieu group M24 with K3 surfaces. Mathieu moonshine: The prehistory of Mathieu moonshine starts with a theorem of Mukai, asserting that any group of symplectic automorphisms of a K3 surface embeds in the Mathieu group M23. The moonshine observation arose from physical considerations: any K3 sigma-model conformal field theory has an action of the N=(4,4) superconformal algebra, arising from a hyperkähler structure. When Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa (2011) computed the first few terms of the decomposition of the elliptic genus of a K3 CFT into characters of the N=(4,4) superconformal algebra, they found that the multiplicities matched well with simple combinations of representations of M24. However, by the Mukai–Kondo classification, there is no faithful action of this group on any K3 surface by symplectic automorphisms, and by work of Gaberdiel–Hohenegger–Volpato, there is no faithful action on any K3 CFT, so the appearance of an action on the underlying Hilbert space is still a mystery. Mathieu moonshine: Eguchi and Hikami showed that the N=(4,4) multiplicities are mock modular forms, and Miranda Cheng suggested that characters of elements of M24 should also be mock modular forms. This suggestion became the Mathieu Moonshine conjecture, asserting that the virtual representation of N=(4,4) given by the K3 elliptic genus is an infinite dimensional graded representation of M24 with non-negative multiplicities in the massive sector, and that the characters are mock modular forms. In 2012, Terry Gannon proved that the representation of M24 exists. Umbral moonshine: Cheng, Duncan & Harvey (2012) amassed numerical evidence of an extension of Mathieu moonshine, where families of mock modular forms were attached to divisors of 24. After some group-theoretic discussion with Glauberman, Cheng, Duncan & Harvey (2013) found that this earlier extension was a special case (the A-series) of a more natural encoding by Niemeier lattices. For each Niemeier root system X, with corresponding lattice LX, they defined an umbral group GX, given by the quotient of the automorphism group of LX by the subgroup of reflections; these are also known as the stabilizers of deep holes in the Leech lattice. They conjectured that for each X, there is an infinite dimensional graded representation KX of GX, such that the characters of elements are given by a list of vector-valued mock modular forms that they computed. The candidate forms satisfy minimality properties quite similar to the genus-zero condition for Monstrous moonshine. These minimality properties imply the mock modular forms are uniquely determined by their shadows, which are vector-valued theta series constructed from the root system. The special case where X is the A1^24 root system (24 copies of A1) yields precisely Mathieu Moonshine. The umbral moonshine conjecture was proved in Duncan, Griffin & Ono (2015). Umbral moonshine: The name of umbral moonshine derives from the use of shadows in the theory of mock modular forms. Other moonlight-related words like 'lambency' were given technical meanings (in this case, the genus zero group attached to a shadow SX, whose level is the dual Coxeter number of the root system X) by Cheng, Duncan, and Harvey to continue the theme.
Umbral moonshine: Although the umbral moonshine conjecture has been settled, there are still many questions that remain. For example, connections to geometry and physics are still not very solid, although there is work by Cheng and Harrison relating umbral functions to du Val singularities on K3 surfaces. As another example, the current proof of the umbral moonshine conjecture is ineffective, in the sense that it does not give natural constructions of the representations. This is similar to the situation with monstrous moonshine during the 1980s: Atkin, Fong, and Smith showed by computation in 1980 that a moonshine module exists, but did not give a construction. The effective proof of the Conway-Norton conjecture was given by Borcherds in 1992, using the monster representation constructed by Frenkel, Lepowsky, and Meurman. There is a vertex algebra construction for the E8^3 case (three copies of E8) by Duncan and Harvey, where GX is the symmetric group S3. However, the algebraic structure is given by an asymmetric cone-gluing construction, suggesting that it is not the last word.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Corneous** Corneous: Corneous is a biological and medical term meaning horny, in other words made out of a substance similar to that of horns and hooves in some mammals. The word is generally used to describe natural or pathological anatomical structures made out of a hard layer of protein. In mammals this protein is usually keratin. Corneous: The word corneous is also often used to describe the operculum of a snail, a gastropod mollusc. Not all gastropods have opercula, but in the great majority of those that do have one, the operculum is corneous. (However, in several genera within a few families, including the marine Naticidae and the terrestrial Pomatiidae, the operculum is primarily calcareous, in other words mostly made of calcium carbonate.) Corneous opercula are made out of the protein conchiolin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Métamorphose (renamer)** Métamorphose (renamer): Métamorphose or Métamorphose file -n- folder renamer is an open source batch renamer. The focus is on legibility, usability, and power: there are no codes or formats to remember and all controls are shown, yet rather complicated operations can be performed. Because it is written in wxPython, it is very portable, and can run on all major operating systems. Features: Renames files and folders simultaneously. Recursive selection - loads files in a directory and its subdirectories. Undo an operation. Wide use of regular expressions: when selecting items, for search/replace, etc. Reading of metadata such as ID3 and Exif tags, or creation/modification/last access time. Change length of names. Change case in various ways. Add counting sequences: numerical, alphabetical, and Roman numeral. Extensive multilingual and platform support (see below). Language and OS support: From the beginning, Métamorphose was conceived to be as widely usable as possible. As a result, there has been extensive testing and adjustment to ensure all portions of the application are displayed and function properly across different platforms. Here are the fully tested and supported operating systems: Microsoft Windows, versions: 2000, XP, Vista, 7, 2003, 2008 and 2008R2 servers. Language and OS support: Linux and FreeBSD: using GNOME, KDE, Blackbox, and Fluxbox. Mac OS X. Likewise, language choice has been important since inception. The GNU gettext system is used, allowing for easy translation of the application, and a custom help section loader will also show localised help files if they are available. There is support for properly displaying right-to-left languages. Here are the currently available languages: Interface and all help files: US English, French, Italian. Interface, some help files: Brazilian Portuguese, German, Hungarian, Japanese, Polish, Spanish, Turkish. Interface only: Arabic, Chinese (Simplified), Dutch, Greek, Russian, Swedish. Métamorphose 2: With the first version now completed, work on the next stage of the project has begun. More specifically, the focus is on fixing the following shortcomings of the current version: User has no control over the order of operations. Only one operation type per rename. Main interface can be confusing to a new user. No way to make 3rd-party add-on modules. Adding more user-requested features.
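As a rough illustration of what such a batch renamer does under the hood, here is a minimal Python sketch (hypothetical code, not Métamorphose's own implementation) combining a regular-expression search/replace with a numerical counting sequence:

```python
# Minimal batch-rename sketch: regex search/replace plus a padded
# numeric counter. Hypothetical illustration only; this is not code
# from Métamorphose itself.

import os
import re

def batch_rename(directory, pattern, replacement, start=1, padding=3):
    """Rename files whose names match `pattern`; the placeholder
    '{n}' in `replacement` is filled with a zero-padded counter."""
    count = start
    for name in sorted(os.listdir(directory)):
        new_name = re.sub(pattern, replacement, name)
        if new_name == name:
            continue  # pattern did not match: leave the file untouched
        new_name = new_name.replace("{n}", str(count).zfill(padding))
        os.rename(os.path.join(directory, name),
                  os.path.join(directory, new_name))
        count += 1

# Example: "IMG_0042.jpg" -> "holiday_001.jpg", and so on.
# batch_rename("photos", r"^IMG_\d+", "holiday_{n}")
```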
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mir-210 microRNA** Mir-210 microRNA: In molecular biology, mir-210 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. mir-210 has been strongly linked with the hypoxia pathway, and is upregulated in response to hypoxia-inducible factors. It is also overexpressed in cells affected by cardiac disease and tumours. MiRNA-210, in particular, has been studied for its effects in rescuing cardiac function after myocardial infarcts via the up-regulation of angiogenesis and inhibition of cardiomyocyte apoptosis. Myocardial infarction therapy: Myocardial infarction is cardiac tissue necrosis that results from occlusion of blood supply via the coronary arteries, thereby starving cells of oxygen and nutrients (termed ischemia). Prolonged ischemia will eventually kill the cells, and the destruction of cardiac cells leads to tissue death, which can lead to heart failure. Myocardial infarction therapy: Delivery of miRNA-210 to an ischemic heart improves heart function, possibly by promoting the release of angiogenic factors like interleukin-1α (IL-1α), tumor necrosis factor-α (TNF-α) and leptin, as seen in HL-1 cardiomyocytes injected with miRNA-210. However, miRNA-210 also targets the Efna3 and Ptp1b genes, which endogenously regulate angiogenesis and apoptosis, respectively. Ephrin-A3 (Efna3) is a gene that is involved in the inhibition of angiogenesis. Although it is known that Efna3 inhibits the formation of new blood vessels, its specific role is still unknown. MiRNA-210 suppresses Efna3 at the mRNA level, thereby allowing angiogenesis to occur in cardiac tissue post-infarct. The second target gene, protein tyrosine phosphatase-1B (Ptp1b), is involved in the induction of apoptosis. The Ptp1b protein is known to regulate apoptosis by regulating the phosphorylation status of apoptotic proteins such as caspase-3 and caspase-8. MiRNA-210 inhibits the effects of the Ptp1b protein, which suppresses its pro-apoptotic functions. Therefore, suppression of these two particular genes may contribute to the improvement of cardiac tissue and function by up-regulating angiogenesis and inhibiting apoptosis of cardiomyocytes after myocardial infarct. Biomarker: Adrenocortical carcinoma Mir-210 has been suggested as a useful biomarker to distinguish adrenocortical carcinoma from adrenocortical adenoma. Breast cancer mir-210 expression is associated with survival in breast cancer. Higher expression indicates a lower probability of survival in patients with breast cancer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Generalized-strain mesh-free formulation** Generalized-strain mesh-free formulation: The generalized-strain mesh-free (GSMF) formulation is a local meshfree method in the field of numerical analysis, completely integration-free, working as a weighted-residual weak-form collocation. This method was first presented by Oliveira and Portela (2016), in order to further improve the computational efficiency of meshfree methods in numerical analysis. Local meshfree methods are derived through a weighted-residual formulation which leads to a local weak form that is the well-known work theorem of the theory of structures. In an arbitrary local region, the work theorem establishes an energy relationship between a statically-admissible stress field and an independent kinematically-admissible strain field. Based on the independence of these two fields, this formulation results in a local form of the work theorem that is reduced to regular boundary terms only, integration-free and free of volumetric locking. Generalized-strain mesh-free formulation: Advantages over finite element methods are that GSMF does not rely on a grid, and is more precise and faster when solving bi-dimensional problems. When compared to other meshless methods, such as the rigid-body displacement mesh-free (RBDMF) formulation, the element-free Galerkin (EFG) and the meshless local Petrov-Galerkin finite volume method (MLPG FVM), GSMF proved to be superior not only regarding computational efficiency, but also regarding accuracy. The moving least squares (MLS) approximation of the elastic field is used in this local meshless formulation. Formulation: The local form of the work theorem is

$$\int_{\Gamma_Q} \mathbf{t}^T \mathbf{u}^* \, d\Gamma + \int_{\Omega_Q} \mathbf{b}^T \mathbf{u}^* \, d\Omega = \int_{\Omega_Q} \boldsymbol{\sigma}^T \boldsymbol{\varepsilon}^* \, d\Omega.$$

Formulation: The displacement field $\mathbf{u}^*$ was assumed to be a continuous function, leading to a regular integrable function that is the kinematically-admissible strain field $\boldsymbol{\varepsilon}^*$. However, this continuity assumption on $\mathbf{u}^*$, enforced in the local form of the work theorem, is not absolutely required, but can be relaxed by convenience, provided $\boldsymbol{\varepsilon}^*$ can be useful as a generalized function, in the sense of the theory of distributions, see Gelfand and Shilov. Hence, this formulation considers that the displacement field $\mathbf{u}^*$ is a piecewise continuous function, defined in terms of the Heaviside step function, and therefore the corresponding strain field $\boldsymbol{\varepsilon}^*$ is a generalized function defined in terms of the Dirac delta function. Formulation: For the sake of simplicity, in dealing with Heaviside and Dirac delta functions in a two-dimensional coordinate space, consider a scalar function $d$, defined as

$$d = \lVert \mathbf{x} - \mathbf{x}_Q \rVert,$$

which represents the absolute value of the distance between a field point $\mathbf{x}$ and a particular reference point $\mathbf{x}_Q$, in the local domain $\Omega_Q \cup \Gamma_Q$ assigned to the field node $Q$. Therefore, this definition always assumes $d = d(\mathbf{x}, \mathbf{x}_Q) \ge 0$, a positive or null value, the latter whenever $\mathbf{x}$ and $\mathbf{x}_Q$ are coincident points. Formulation: For a scalar coordinate $d = d(\mathbf{x}, \mathbf{x}_Q)$, the Heaviside step function can be defined as

$$H(d) = \begin{cases} 1 & \text{if } d \le 0 \quad (d = 0 \text{ for } \mathbf{x} \equiv \mathbf{x}_Q) \\ 0 & \text{if } d > 0 \quad (\mathbf{x} \neq \mathbf{x}_Q) \end{cases}$$

in which the discontinuity is assumed at $\mathbf{x}_Q$; consequently, the Dirac delta function is defined with the following properties:

$$\delta(d) = H'(d) = \begin{cases} \infty & \text{if } d = 0 \ (\text{that is, } \mathbf{x} \equiv \mathbf{x}_Q) \\ 0 & \text{if } d \neq 0 \ (d > 0 \text{ for } \mathbf{x} \neq \mathbf{x}_Q) \end{cases} \qquad \text{and} \qquad \int_{-\infty}^{+\infty} \delta(d) \, dd = 1,$$

in which $H'(d)$ represents the distributional derivative of $H(d)$. Note that the derivative of $H(d)$ with respect to the coordinate $x_i$ can be defined as

$$H(d)_{,i} = H'(d) \, d_{,i} = \delta(d) \, d_{,i} = \delta(d) \, n_i.$$

Since the result of this equation is not affected by any particular value of the constant $n_i$, this constant will be conveniently redefined later on. Formulation: Consider that $d_l$, $d_j$ and $d_k$ represent the distance function $d$ for the corresponding collocation points $\mathbf{x}_l$, $\mathbf{x}_j$ and $\mathbf{x}_k$. The displacement field $\mathbf{u}^*(\mathbf{x})$ can be conveniently defined as

$$\mathbf{u}^*(\mathbf{x}) = \left[ \frac{L_i}{n_i} \sum_{l=1}^{n_i} H(d_l) + \frac{L_t}{n_t} \sum_{j=1}^{n_t} H(d_j) + \frac{S}{n_\Omega} \sum_{k=1}^{n_\Omega} H(d_k) \right] \mathbf{e},$$

in which $\mathbf{e} = [1 \ 1]^T$ represents the metric of the orthogonal directions, and $n_i$, $n_t$ and $n_\Omega$ represent the number of collocation points, respectively on the local interior boundary $\Gamma_{Qi} = \Gamma_Q - \Gamma_{Qt} - \Gamma_{Qu}$ with length $L_i$, on the local static boundary $\Gamma_{Qt}$ with length $L_t$, and in the local domain $\Omega_Q$ with area $S$. This assumed displacement field $\mathbf{u}^*(\mathbf{x})$ is a discrete rigid-body unit displacement defined at the collocation points. The strain field $\boldsymbol{\varepsilon}^*(\mathbf{x})$ is given by

$$\boldsymbol{\varepsilon}^*(\mathbf{x}) = \mathbf{L} \mathbf{u}^*(\mathbf{x}) = \left[ \frac{L_i}{n_i} \sum_{l=1}^{n_i} \mathbf{L} H(d_l) + \frac{L_t}{n_t} \sum_{j=1}^{n_t} \mathbf{L} H(d_j) + \frac{S}{n_\Omega} \sum_{k=1}^{n_\Omega} \mathbf{L} H(d_k) \right] \mathbf{e} = \left[ \frac{L_i}{n_i} \sum_{l=1}^{n_i} \delta(d_l) \, \mathbf{n}^T + \frac{L_t}{n_t} \sum_{j=1}^{n_t} \delta(d_j) \, \mathbf{n}^T + \frac{S}{n_\Omega} \sum_{k=1}^{n_\Omega} \delta(d_k) \, \mathbf{n}^T \right] \mathbf{e}.$$

Having defined the displacement and strain components of the kinematically-admissible field, the local work theorem can be written as

$$\frac{L_i}{n_i} \sum_{l=1}^{n_i} \int_{\Gamma_Q - \Gamma_{Qt}} \mathbf{t}^T H(d_l) \, \mathbf{e} \, d\Gamma + \frac{L_t}{n_t} \sum_{j=1}^{n_t} \int_{\Gamma_{Qt}} \bar{\mathbf{t}}^T H(d_j) \, \mathbf{e} \, d\Gamma + \frac{S}{n_\Omega} \sum_{k=1}^{n_\Omega} \int_{\Omega_Q} \mathbf{b}^T H(d_k) \, \mathbf{e} \, d\Omega = \frac{S}{n_\Omega} \sum_{k=1}^{n_\Omega} \int_{\Omega_Q} \boldsymbol{\sigma}^T \delta(d_k) \, \mathbf{n}^T \mathbf{e} \, d\Omega.$$

Formulation: Taking into account the properties of the Heaviside step function and the Dirac delta function, this equation simply leads to

$$\frac{L_i}{n_i} \sum_{l=1}^{n_i} \mathbf{t}_{x_l} = - \frac{L_t}{n_t} \sum_{j=1}^{n_t} \bar{\mathbf{t}}_{x_j} - \frac{S}{n_\Omega} \sum_{k=1}^{n_\Omega} \mathbf{b}_{x_k}.$$

Discretization of this equation can be carried out with the MLS approximation, for the local domain $\Omega_Q$, in terms of the nodal unknowns $\hat{\mathbf{u}}$, thus leading to a system of linear algebraic equations that can be written as

$$\frac{L_i}{n_i} \sum_{l=1}^{n_i} \mathbf{n}_{x_l} \mathbf{D} \mathbf{B}_{x_l} \hat{\mathbf{u}} = - \frac{L_t}{n_t} \sum_{j=1}^{n_t} \bar{\mathbf{t}}_{x_j} - \frac{S}{n_\Omega} \sum_{k=1}^{n_\Omega} \mathbf{b}_{x_k},$$

or simply

$$\mathbf{K}_Q \hat{\mathbf{u}} = \mathbf{F}_Q.$$

This formulation states the equilibrium of tractions and body forces, defined pointwise at the collocation points; it is the pointwise version of the Euler-Cauchy stress principle. This is the equation used in the generalized-strain mesh-free (GSMF) formulation which, therefore, is free of integration. Since the work theorem is a weighted-residual weak form, it can easily be seen that this integration-free formulation is nothing other than a weighted-residual weak-form collocation. The weighted-residual weak-form collocation readily overcomes the well-known difficulties posed by the weighted-residual strong-form collocation, regarding the accuracy and stability of the solution.
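To make the final system concrete, the sketch below shows schematically how one local equation $\mathbf{K}_Q \hat{\mathbf{u}} = \mathbf{F}_Q$ could be assembled with numpy for a 2D elasticity problem. All names, shapes and inputs here (the constitutive matrix D, MLS strain-displacement matrices B, boundary normals, prescribed tractions and body forces) are illustrative assumptions, not code from Oliveira and Portela (2016).

```python
# Schematic assembly of one local GSMF equation K_Q u_hat = F_Q for
# 2D elasticity, assuming these hypothetical precomputed inputs:
#   D        : 3x3 constitutive matrix (plane stress or plane strain)
#   B_list   : MLS strain-displacement matrices, one (3 x 2m) array
#              per interior-boundary collocation point
#   normals  : unit outward normal (nx, ny) at each of those points
#   t_bar    : prescribed tractions (2,) at static-boundary points
#   b_body   : body forces (2,) at domain collocation points
#   L_i, L_t, S : boundary lengths and domain area of the local region

import numpy as np

def normal_matrix(nx, ny):
    # Maps a stress vector [sxx, syy, sxy] to a traction vector (2,).
    return np.array([[nx, 0.0, ny],
                     [0.0, ny, nx]])

def assemble_local(D, B_list, normals, t_bar, b_body, L_i, L_t, S):
    K_Q = np.zeros((2, B_list[0].shape[1]))
    for B, (nx, ny) in zip(B_list, normals):
        K_Q += normal_matrix(nx, ny) @ D @ B   # n D B at each point
    K_Q *= L_i / len(B_list)
    F_Q = (-(L_t / len(t_bar)) * np.sum(t_bar, axis=0)
           - (S / len(b_body)) * np.sum(b_body, axis=0))
    return K_Q, F_Q  # two rows of the global system; no integration
```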
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Beam compass** Beam compass: A beam compass is a compass with a beam and sliding sockets or cursors for drawing and dividing circles larger than those made by a regular pair of compasses. The instrument can be bought as a whole, or assembled on the spot from individual sockets (called trammel points) and any suitable beam. Draftsman's beam compass: A draftsman's beam compass consists of a set of points and holders, mounted on a plated brass, aluminum, or German silver rod. One end is generally locked down at the end of the rod, while the other has both rough and fine adjustments, though some are opposite in construction. The locked tip holder carries a needle for the centre of the radius, while the other holds either a lead clutch or an inking nib. There are older variants which use a wooden beam. Another similar type is a machinist's or engineer's beam compass, which uses scribing points only, similar to ones used by woodworkers, except that its fine adjustment is generally more refined. These beam compasses can be extended by adding press-in rods, or by using a lockable rod connector. Woodworking trammel points: Trammels or trammel points are the sockets or cursors that, together with the beam, make up a beam compass. Their relatively small size makes them easy to store or transport. They consist of two separate metal pieces (approx. 2+1⁄2 in × 5 in × 1⁄2 in (6 cm × 13 cm × 1 cm)) that are usually connected by a piece of wood. The wood beam is not included in the purchase of the trammel points; it can be ripped on a table saw, and a lumber yard or woodworking store should have a suitable piece readily available. A metal bar or pipe can also be used to fit the openings. They work like a scratch awl. Use: As for any compass, there are two uses. Use: Scribing a circle The beam compass is used to scribe a circle, either by drawing with lead, inking with a nib, or scratching with a sharpened point. The radius can be adjusted by sliding the metal point holder along the wood beam or metal rod, and locking it by turning a knob at the desired location. Some have a fine radius adjustment. The threaded adjustment is similar to that of a screw. The only limitation is the rigidity of the wood beam or metal rod being used. Longer wooden beams tend to sag, depending on the species of wood used. Metal rods can be used as an alternative, but they also have length limitations. Some trammel sets include a support roller for attachment at mid-span of the beam or rod, to take out the sag. Trammel points score a precise line by using a sharpened point, or draw a line using a lead clutch or an ink nib. When the circular knob is turned, it micro-adjusts the radius of the circle. On some, a spring and screw mechanism locks the compass at the precise desired location. Turning clockwise decreases the radius, while turning counterclockwise increases the radius slightly. Use: Transferring measurements A beam compass can also be used to make a series of repetitive measurements in a precise manner, the same as using a divider. Each point is rotated 180° along a straight line or large circle, and this process is repeated until the desired measurement or division is reached. The indentation created by the sharp point of the trammel is easily seen and makes a precise point to reference for the next location. Variants: The circle cutter is a basic variation of the beam compass. There are many types of circle cutters. This cutter is used primarily to score a circular pattern in drywall to fit over recessed lighting in the ceiling.
The tool consists of a square shank with a sliding pivot that is locked into the desired location with a turn knob. The shank is graduated into 16 units and each unit is further divided into increments of one quarter. One end of the shank has a fixed cutter wheel that scores a fine line in the drywall.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Selective exposure theory** Selective exposure theory: Selective exposure is a theory within the practice of psychology, often used in media and communication research, that historically refers to individuals' tendency to favor information which reinforces their pre-existing views while avoiding contradictory information. Selective exposure has also been known and defined as "congeniality bias" or "confirmation bias" in various texts throughout the years. According to the historical use of the term, people tend to select specific aspects of exposed information which they incorporate into their mindset. These selections are made based on their perspectives, beliefs, attitudes, and decisions. People can mentally dissect the information they are exposed to and select favorable evidence, while ignoring the unfavorable. The foundation of this theory is rooted in the cognitive dissonance theory (Festinger 1957), which asserts that when individuals are confronted with contrasting ideas, certain mental defense mechanisms are activated to produce harmony between new ideas and pre-existing beliefs, which results in cognitive equilibrium. Cognitive equilibrium, which is defined as a state of balance between a person's mental representation of the world and his or her environment, is crucial to understanding selective exposure theory. According to Jean Piaget, when a mismatch occurs, people find it to be "inherently dissatisfying". Selective exposure relies on the assumption that one will continue to seek out information on an issue even after an individual has taken a stance on it. The position that a person has taken will be colored by various factors of that issue that are reinforced during the decision-making process. According to Stroud (2008), theoretically, selective exposure occurs when people's beliefs guide their media selections. Selective exposure has been displayed in various contexts, such as self-serving situations and situations in which people hold prejudices regarding outgroups, particular opinions, and personal and group-related issues. Perceived usefulness of information, perceived norm of fairness, and curiosity about valuable information are three factors that can counteract selective exposure. Effect on decision-making: Individual versus group decision-making Selective exposure can often affect the decisions people make as individuals or as groups because they may be unwilling to change their views and beliefs either collectively or on their own, despite conflicting and reliable information. An example of the effects of selective exposure is the series of events leading up to the Bay of Pigs Invasion in 1961. President John F. Kennedy was given the go-ahead by his advisers to authorize the invasion of Cuba by poorly trained expatriates despite overwhelming evidence that it was a foolish and ill-conceived tactical maneuver. The advisers were so eager to please the President that they confirmed their cognitive bias for the invasion rather than challenging the faulty plan. Changing beliefs about oneself, other people, and the world are three variables that explain why people fear new information. A variety of studies have shown that selective exposure effects can occur in the context of both individual and group decision making. Numerous situational variables have been identified that increase the tendency toward selective exposure.
Social psychology, specifically, includes research on a variety of situational factors and related psychological processes that eventually persuade a person to make a quality decision. Additionally, from a psychological perspective, the effects of selective exposure can stem from both motivational and cognitive accounts. Effect on decision-making: Effect of information quantity According to a research study by Fischer, Schulz-Hardt et al. (2008), the quantity of decision-relevant information that the participants were exposed to had a significant effect on their levels of selective exposure. A group that was given only two pieces of decision-relevant information experienced lower levels of selective exposure than the other group, which had ten pieces of information to evaluate. This research brought more attention to the cognitive processes of individuals when they are presented with a very small amount of decision-consistent and decision-inconsistent information. The study showed that in situations such as this, an individual becomes more doubtful of their initial decision due to the unavailability of resources. They begin to think that there is not enough data or evidence in the particular field in which they are asked to make a decision. Because of this, the subject becomes more critical of their initial thought process and focuses on both decision-consistent and inconsistent sources, thus decreasing their level of selective exposure. For the group that had plentiful information, this abundance made them confident in their initial decision because they felt comfort from the fact that their decision topic was well supported by a large number of resources. Therefore, the availability of decision-relevant and irrelevant information surrounding individuals can influence the level of selective exposure experienced during the process of decision-making. Effect on decision-making: Selective exposure is prevalent within single individuals and groups of people, and can lead either one to reject new ideas or information that is not commensurate with the original ideal. In Jonas et al. (2001), four empirical experiments investigated individuals' and groups' decision making. This article suggests that confirmation bias is prevalent in decision making. Those who find new information often draw their attention towards areas where they hold personal attachment. Thus, people are driven toward pieces of information that are coherent with their own expectations or beliefs, as a result of selective exposure in action. Throughout the four experiments, generalization was considered valid and confirmation bias was always present when subjects sought new information and made decisions. Effect on decision-making: Accuracy motivation and defense motivation Fischer and Greitemeyer (2010) explored individuals' decision making in terms of selective exposure to confirmatory information. Selective exposure theory posits that individuals make their decisions based on information that is consistent with their decision rather than information that is inconsistent. Recent research has shown that "Confirmatory Information Search" was responsible for the 2008 bankruptcy of the Lehman Brothers Investment Bank, which then triggered the Global Financial Crisis.
In the zeal for profit and economic gain, politicians, investors, and financial advisors ignored the mathematical evidence that foretold the housing market crash in favor of flimsy justifications for upholding the status quo. Researchers explain that subjects have the tendency to seek and select information using their integrative model. There are two primary motivations for selective exposure: Accuracy Motivation and Defense Motivation. Accuracy Motivation explains that an individual is motivated to be accurate in their decision making, and Defense Motivation explains that one seeks confirmatory information to support their beliefs and justify their decisions. Accuracy motivation is not always beneficial within the context of selective exposure and can instead be counterintuitive, increasing the amount of selective exposure. Defense motivation can lead to reduced levels of selective exposure. Effect on decision-making: Personal attributes Selective exposure involves avoiding information inconsistent with one's beliefs and attitudes. For example, former Vice President Dick Cheney would only enter a hotel room after the television was turned on and tuned to a conservative television channel. When analyzing a person's decision-making skills, his or her unique process of gathering relevant information is not the only factor taken into account. Fischer et al. (2010) found it important to consider the information source itself, that is, the person who provides the information. Selective exposure research generally neglects the influence of indirect decision-related attributes, such as physical appearance. In Fischer et al. (2010), two studies hypothesized that physically attractive information sources led decision makers to be more selective in searching and reviewing decision-relevant information. Researchers explored the impact of social information and its level of physical attractiveness. The data was then analyzed and used to support the idea that selective exposure existed for those who needed to make a decision. Therefore, the more attractive an information source was, the more positive and detailed the subject was in making the decision. Physical attractiveness affects an individual's decision because the perception of quality improves. Physically attractive information sources increased the quality of consistent information needed to make decisions and further increased the selective exposure in decision-relevant information, supporting the researchers' hypothesis. Both studies concluded that attractiveness is driven by a different selection and evaluation of decision-consistent information. Decision makers allow factors such as physical attractiveness to affect everyday decisions due to the workings of selective exposure. Effect on decision-making: In another study, selective exposure was defined by the amount of individual confidence. Individuals can control the amount of selective exposure depending on whether they have low or high self-esteem. Individuals who maintain higher confidence levels reduce the amount of selective exposure. Albarracín and Mitchell (2004) hypothesized that those who displayed higher confidence levels were more willing to seek out information both consistent and inconsistent with their views. The phrase "decision-consistent information" explains the tendency to actively seek decision-relevant information.
Selective exposure occurs when individuals search for information and show systematic preferences towards ideas that are consistent, rather than inconsistent, with their beliefs. On the contrary, those who exhibited low levels of confidence were more inclined to examine information that did not agree with their views. The researchers found that in three out of five studies participants showed more confidence and scored higher on the Defensive Confidence Scale, which they took as evidence that their hypothesis was correct. Effect on decision-making: Bozo et al. (2009) investigated death anxiety and compared it across various age groups in relation to health-promoting behaviors. Researchers analyzed the data using terror management theory and found that age had no direct effect on specific behaviors. The researchers thought that a fear of death would yield health-promoting behaviors in young adults. When individuals are reminded of their own death, it causes stress and anxiety, but eventually leads to positive changes in their health behaviors. Their conclusions showed that older adults were consistently better at promoting and practicing good health behaviors, without thinking about death, compared to young adults. Young adults were less motivated to change and practice health-promoting behaviors because they used selective exposure to confirm their prior beliefs. Selective exposure thus creates barriers between the behaviors of different age groups, but there is no specific age at which people change their behaviors. Effect on decision-making: Though physical appearance will impact one's personal decision regarding an idea presented, a study conducted by Van Dillen, Papies, and Hofmann (2013) suggests a way to decrease the influence of personal attributes and selective exposure on decision-making. The results from this study showed that people do pay more attention to physically attractive or tempting stimuli; however, this phenomenon can be decreased by increasing the "cognitive load." In this study, increasing cognitive activity led to a decreased impact of physical appearance and selective exposure on the individual's impression of the idea presented. This is explained by acknowledging that we are instinctively drawn to certain physical attributes, but if the required resources for this attraction are otherwise engaged at the time, then we might not notice these attributes to an equal extent. For example, if a person is simultaneously engaging in a mentally challenging activity during the time of exposure, then it is likely that less attention will be paid to appearance, which leads to a decreased impact of selective exposure on decision-making. Theories accounting for selective exposure: Cognitive dissonance theory Leon Festinger is widely considered the father of modern social psychology, as important a figure to that field as Freud was to clinical psychology and Piaget was to developmental psychology. He was considered to be one of the most significant social psychologists of the 20th century. His work demonstrated that it is possible to use the scientific method to investigate complex and significant social phenomena without reducing them to the mechanistic connections between stimulus and response that were the basis of behaviorism.
Festinger proposed the groundbreaking theory of cognitive dissonance that has become the foundation of selective exposure theory today, despite the fact that Festinger was considered an "avant-garde" psychologist when he first proposed it in 1957. In an ironic twist, Festinger realized that he himself was a victim of the effects of selective exposure. He was a heavy smoker his entire life, and when he was diagnosed with terminal cancer in 1989, he was said to have joked, "Make sure that everyone knows that it wasn't lung cancer!" Cognitive dissonance theory explains that when a person either consciously or unconsciously realizes conflicting attitudes, thoughts, or beliefs, they experience mental discomfort. Because of this, individuals will avoid such conflicting information in the future, since it produces this discomfort, and they will gravitate towards messages sympathetic to their own previously held conceptions. Decision makers are unable to evaluate information quality independently on their own (Fischer, Jonas, Dieter & Kastenmüller, 2008). When there is a conflict between pre-existing views and information encountered, individuals will experience an unpleasant and self-threatening state of aversive arousal, which will motivate them to reduce it through selective exposure. They will begin to prefer information that supports their original decision and neglect conflicting information. Individuals will then turn to confirmatory information to defend their positions and reach the goal of dissonance reduction. Cognitive dissonance theory insists that dissonance is a psychological state of tension that people are motivated to reduce (Festinger 1957). Dissonance causes feelings of unhappiness, discomfort, or distress. Festinger (1957, p. 13) asserted the following: "These two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other." To reduce dissonance, people add consonant cognitions or change evaluations for one or both conditions in order to make them more consistent mentally. Such experience of psychological discomfort was found to drive individuals to avoid counterattitudinal information as a dissonance-reduction strategy. In Festinger's theory, there are two basic hypotheses: 1) The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce the dissonance and achieve consonance. Theories accounting for selective exposure: 2) When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance (Festinger 1957, p. 3). Theories accounting for selective exposure: The theory of cognitive dissonance was developed in the mid-1950s to explain why people of strong convictions are so resistant to changing their beliefs even in the face of undeniable contradictory evidence. Dissonance occurs when people feel an attachment to and responsibility for a decision, position or behavior. It increases the motivation to justify their positions through selective exposure to confirmatory information (Fischer, 2011). Fischer suggested that people have an inner need to ensure that their beliefs and behaviors are consistent. In an experiment that employed commitment manipulations, these manipulations impacted perceived decision certainty. Participants were free to choose attitude-consistent and inconsistent information to write an essay.
Those who wrote an attitude-consistent essay showed higher levels of confirmatory information search (Fischer, 2011). The levels and magnitude of dissonance also play a role. Selective exposure to consistent information is likely under certain levels of dissonance. At high levels, a person is expected to seek out information that increases dissonance, because the best strategy to reduce dissonance would then be to alter one's attitude or decision (Smith et al., 2008). Subsequent research on selective exposure within dissonance theory produced weak empirical support until the dissonance theory was revised and new methods, more conducive to measuring selective exposure, were implemented. To date, scholars argue that empirical results supporting the selective exposure hypothesis are still mixed. This is possibly due to problems with the methods of the experimental studies conducted. Another possible reason for the mixed results may be the failure to simulate an authentic media environment in the experiments. According to Festinger, the motivation to seek or avoid information depends on the magnitude of dissonance experienced (Smith et al., 2008). It is observed that there is a tendency for people to seek new information or select information that supports their beliefs in order to reduce dissonance. Theories accounting for selective exposure: There exist three possibilities which will affect the extent of dissonance (Festinger 1957, pp. 127–131): Relative absence of dissonance. When little or no dissonance exists, there is little or no motivation to seek new information. For example, when there is an absence of dissonance, the lack of motivation to attend or avoid a lecture on 'The Advantages of Automobiles with Very High Horsepower Engines' will be independent of whether the car a new owner has recently purchased has a high or low horsepower engine. However, it is important to note the difference between a situation when there is no dissonance and one when the information has no relevance to present or future behavior. For the latter, accidental exposure, which the new car owner does not avoid, will not introduce any dissonance; while for the former individual, who also does not avoid information, dissonance may be accidentally introduced. Theories accounting for selective exposure: The presence of moderate amounts of dissonance. The existence of dissonance and the consequent pressure to reduce it will lead to an active search for information, which will then lead people to avoid information that will increase dissonance. However, when faced with a potential source of information, there will be an ambiguous cognition to which a subject will react in terms of individual expectations about it. If the subject expects the cognition to increase dissonance, they will avoid it. In the event that one's expectations are proven wrong, the attempt at dissonance reduction may result in increasing it instead. It may in turn lead to a situation of active avoidance. Theories accounting for selective exposure: The presence of extremely large amounts of dissonance. If two cognitive elements exist in a dissonant relationship, the magnitude of dissonance matches the resistance to change. If the dissonance becomes greater than the resistance to change, then the least resistant elements of cognition will be changed, reducing dissonance. When dissonance is close to the maximum limit, one may actively seek out and expose oneself to dissonance-increasing information.
If an individual can increase dissonance to the point where it is greater than the resistance to change, he will change the cognitive elements involved, reducing or even eliminating dissonance. Once dissonance is increased sufficiently, an individual may bring himself to change, hence eliminating all dissonance (Festinger 1957, pp. 127–131). Theories accounting for selective exposure: The reduction in cognitive dissonance following a decision can be achieved by selectively looking for decision-consonant information and avoiding contradictory information. The objective is to reduce the discrepancy between the cognitions, but the specification of which strategy will be chosen is not explicitly addressed by the dissonance theory. It will be dependent on the quantity and quality of the information available inside and outside the cognitive system. Theories accounting for selective exposure: Klapper's selective exposure In the early 1960s, Columbia University researcher Joseph T. Klapper asserted in his book The Effects of Mass Communication that audiences were not passive targets of political and commercial propaganda from mass media, but that mass media reinforces previously held convictions. Throughout the book, he argued that the media has a small amount of power to influence people and, most of the time, it just reinforces our preexisting attitudes and beliefs. He argued that the media effects of relaying or spreading new public messages or ideas were minimal because there is a wide variety of ways in which individuals filter such content. Due to this tendency, Klapper argued that media content must be able to ignite some type of cognitive activity in an individual in order to communicate its message. Prior to Klapper's research, the prevailing opinion was that mass media had a substantial power to sway individual opinion and that audiences were passive consumers of prevailing media propaganda. However, by the time of the release of The Effects of Mass Communication, many studies had led to the conclusion that many specifically targeted messages were completely ineffective. Klapper's research showed that individuals gravitated towards media messages that bolstered previously held convictions that were set by peer groups, societal influences, and family structures, and that individuals' accession to these messages did not change over time when they were presented with more recent media influence. Klapper noted from his review of research in the social sciences that, given the abundance of content within the mass media, audiences were selective about the types of programming they consumed. Adults would patronize media that was appropriate for their demographics, and children would eschew media that was boring to them. So individuals would either accept or reject a mass media message based upon internal filters that were innate to that person. The following are Klapper's five mediating factors and conditions that affect people: Predispositions and the related processes of selective exposure, selective perception, and selective retention. Theories accounting for selective exposure: The groups, and the norms of groups, to which the audience members belong. Interpersonal dissemination of the content of communication. The exercise of opinion leadership. The nature of mass media in a free enterprise society. Three basic concepts: Selective exposure – people keep away from communication of opposite hue.
Selective perception – if people are confronted with unsympathetic material, they do not perceive it, or they reshape it to fit their existing opinion. Theories accounting for selective exposure: Selective retention – refers to the process of categorizing and interpreting information in a way that favors one category or interpretation over another. Furthermore, people simply forget the unsympathetic material. Groups and group norms work as mediators. For example, one can be strongly disinclined to change to the Democratic Party if one's family has voted Republican for a long time. In this case, the person's predisposition to the political party is already set, so they don't perceive information about the Democratic Party or change their voting behavior because of mass communication. Klapper's third assumption is the interpersonal dissemination of mass communication. If someone has already been exposed through close friends, which creates a predisposition toward something, this will lead to an increase in exposure to mass communication and eventually reinforce the existing opinion. An opinion leader is also a crucial factor in forming one's predisposition and can lead someone to be exposed to mass communication. The nature of commercial mass media also leads people to select certain types of media content. Theories accounting for selective exposure: Cognitive economy model This new model combines the motivational and cognitive processes of selective exposure. In the past, selective exposure had been studied from a motivational standpoint. For instance, the reason behind the existence of selective exposure was that people felt motivated to decrease the level of dissonance they felt while encountering inconsistent information. They also felt motivated to defend their decisions and positions, so they achieved this goal by exposing themselves to consistent information only. However, the new cognitive economy model not only takes into account the motivational aspects, but also focuses on the cognitive processes of each individual. For instance, this model proposes that people cannot evaluate the quality of inconsistent information objectively and fairly because they tend to store more of the consistent information and use this as their reference point. Thus, inconsistent information is often observed with a more critical eye in comparison to consistent information. According to this model, the levels of selective exposure experienced during the decision-making process are also dependent on how much cognitive energy people are willing to invest. Just as people tend to be careful with their finances, they are careful with cognitive energy, that is, how much time they are willing to spend evaluating all the evidence for their decisions. People are hesitant to use this energy; they tend to be careful so they don't waste it. Thus, this model suggests that selective exposure does not happen in separate stages. Rather, it is a combined process of individuals' motivations and their management of cognitive energy. Implications: Media Recent studies have shown relevant empirical evidence for the pervasive influence of selective exposure on the population at large due to mass media. Researchers have found that individual media consumers will seek out programs to suit their individual emotional and cognitive needs.
In times of economic crisis, individuals will seek out palliative forms of media to fulfill a "strong surveillance need", to decrease chronic dissatisfaction with life circumstances, and to fulfill needs for companionship. Consumers tend to select media content that exposes and confirms their own ideas while avoiding information that argues against their opinion. A study conducted in 2012 has shown that this type of selective exposure affects pornography consumption as well. Individuals with low levels of life satisfaction are more likely to have casual sex after consumption of pornography that is congruent with their attitudes, while disregarding content that challenges their inherently permissive 'no strings attached' attitudes. Music selection is also affected by selective exposure. A 2014 study conducted by Christa L. Taylor and Ronald S. Friedman at the SUNY University at Albany found that mood congruence was affected by self-regulation of music mood choices. Subjects in the study chose happy music when feeling angry or neutral but listened to sad music when they themselves were sad. The choice of sad music given a sad mood was due less to mood-mirroring than to subjects having an aversion to listening to happy music that was cognitively dissonant with their mood. Politics is more likely to inspire selective exposure among consumers, as opposed to single-exposure decisions. For example, in their 2009 meta-analysis of selective exposure theory, Hart et al. reported that "A 2004 survey by The Pew Research Center for the People & the Press (2006) found that Republicans are about 1.5 times more likely to report watching Fox News regularly than are Democrats (34% for Republicans and 20% of Democrats). In contrast, Democrats are 1.5 times more likely to report watching CNN regularly than Republicans (28% of Democrats vs. 19% of Republicans). Even more striking, Republicans are approximately five times more likely than Democrats to report watching "The O'Reilly Factor" regularly and are seven times more likely to report listening to "Rush Limbaugh" regularly." As a result, when the opinions of Republicans who only tune into conservative media outlets were compared to those of their fellow conservatives in a study by Stroud (2010), their beliefs were considered to be more polarized. The same result was retrieved from the study of liberals as well. Due to our greater tendency toward selective exposure, current political campaigns have been characterized as extremely partisan and polarized. As Bennett and Iyengar (2008) commented, "The new, more diversified information environment makes it not only more feasible for consumers to seek out news they might find agreeable but also provides a strong economic incentive for news organizations to cater to their viewers' political preferences." Selective exposure thus plays a role in shaping and reinforcing individuals' political attitudes. In the context of these findings, Stroud (2008) comments "The findings presented here should at least raise the eyebrows of those concerned with the noncommercial role of the press in our democratic system, with its role in providing the public with the tools to be good citizens." The role of public broadcasting, through its noncommercial role, is to counterbalance media outlets that deliberately devote their coverage to one political direction, thus driving selective exposure and political division in a democracy.
Implications: Many academic studies on selective exposure, however, are based on the electoral system and media system of the United States. Countries with strong public service broadcasting, like many European countries, on the other hand, show less selective exposure based on political ideology or political party. In Sweden, for instance, there were no differences in selective exposure to public service news between the political left and right over a period of 30 years. Implications: In early research, selective exposure originally provided an explanation for limited media effects. The "limited effects" model of communication emerged in the 1940s with a shift in the media effects paradigm. This shift suggested that while the media has effects on consumers' behavior, such as their voting behavior, these effects are limited and influenced indirectly by interpersonal discussions and the influence of opinion leaders. Selective exposure was considered one necessary function in the early studies of media's limited power over citizens' attitudes and behaviors. Political ads deal with selective exposure as well, because people are more likely to favor a politician who agrees with their own beliefs. Another significant effect of selective exposure comes from Stroud (2010), who analyzed the relationship between partisan selective exposure and political polarization. Using data from the 2004 National Annenberg Election Survey, analysts found that over time partisan selective exposure leads to polarization. This process is plausible because people can easily create or access blogs, websites, chats, and online forums where those with similar views and political ideologies can congregate. Much of the research has also shown that political interaction online tends to be polarized. Further evidence for this polarization in the political blogosphere can be found in Lawrence et al.'s (2010) study on blog readership, which found that people tend to read blogs that reinforce rather than challenge their political beliefs. According to Cass Sunstein's book, Republic.com, the presence of selective exposure on the web creates an environment that breeds political polarization and extremism. Due to easy access to social media and other online resources, people are "likely to hold even stronger views than the ones they started with, and when these views are problematic, they are likely to manifest increasing hatred toward those espousing contrary beliefs." This illustrates how selective exposure can influence an individual's political beliefs and subsequently his participation in the political system. Implications: One of the major academic debates on the concept of selective exposure is whether selective exposure contributes to people's exposure to diverse viewpoints or to polarization. Scheufele and Nisbet (2012) discuss the effects of encountering disagreement on democratic citizenship. Ideally, true civil deliberation among citizens would be the rational exchange of non-like-minded views (or disagreement). However, many of us tend to avoid disagreement on a regular basis because we do not like to confront others who hold views strongly opposed to our own. In this sense, the authors question whether exposure to non-like-minded information brings positive or negative effects to democratic citizenship.
While findings are mixed on people's willingness to participate in political processes when they encounter disagreement, the authors argue that the issue of selectivity needs to be examined further in order to understand whether truly deliberative discourse exists in the online media environment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tetrairidium dodecacarbonyl** Tetrairidium dodecacarbonyl: Tetrairidium dodecacarbonyl is the chemical compound with the formula Ir4(CO)12. This tetrahedral cluster is the most common and most stable "binary" carbonyl of iridium. This air-stable species is only poorly soluble in organic solvents. It has been used to prepare bimetallic clusters and catalysts, e.g. for the water gas shift reaction and reforming, but these studies are of purely academic interest. Structure: Each Ir center is octahedral, being bonded to three other iridium atoms and three terminal CO ligands. Ir4(CO)12 has Td symmetry with an average Ir-Ir distance of 2.693 Å. The related clusters Rh4(CO)12 and Co4(CO)12 have C3v symmetry because of the presence of three bridging CO ligands in each. Preparation: It is prepared in two steps by reductive carbonylation of hydrated iridium trichloride. The first step gives [Ir(CO)2Cl2]−:
IrCl3 + 3 CO + H2O → [Ir(CO)2Cl2]− + CO2 + 2 H+ + Cl−
The second step converts this anion to the cluster:
4 [Ir(CO)2Cl2]− + 6 CO + 2 H2O → Ir4(CO)12 + 2 CO2 + 4 H+ + 8 Cl−
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Songs2See** Songs2See: Songs2See is an application for music learning, practice and gaming developed by the Fraunhofer Institute for Digital Media Technology in Ilmenau, Germany, and distributed by the company Songquito UG (haftungsbeschränkt). Features: Songs2See is composed of two main applications: the Songs2See Game, used at practice time, and the Songs2See Editor, used for exercise content creation. The main advantages of Songs2See are: Users can play their own musical instruments into the computer microphone without the need for a game controller. Currently, guitar, voice, piano, saxophone, trumpet, bass and flute are supported. Users can create their own musical exercise content simply by importing audio files and using the analysis features of the Songs2See Editor. The Songs2See Game: The Songs2See Game is an application in which a selected music piece can be practiced in a gaming environment. Besides real-time performance feedback, the game also offers different visual aids to guide users through the performance. The Songs2See Game: Game View The Game View is a scrolling score-like animation that displays in real time the melody to be played. Note durations are displayed both as blue bars of different lengths and as real notes (eighth notes, sixteenth notes, etc.). Note pitches are indicated both by the location of the note objects in the staff and by note names placed inside the note heads. Indications about key, accidentals, time signature and bars are also included. The Songs2See Game: Instrument View The Instrument View displays the selected instrument and a real-time fingering animation that guides users through the performance of the selected piece. Green signs show the current fingering and blue signs show the next fingering in the melody. Practice Content Songs2See is delivered with a set of popular songs and instrument-specific practice material for the user to play. Additionally, users can create their own content for the game using the Songs2See Editor. Options Different options can be modified in the Songs2See Game, including performance loops, learning mode, note names, left-hand mode for guitar and bass players, microphone setup, game delay, etc. The Songs2See Editor: The Songs2See Editor is an application that allows users to create their own personal content for the Songs2See Game. Additionally, the different possibilities within the editor make it compatible with popular score-writing and sequencer software. Import Options Users can create content for the game starting from different file formats and audio material: WAV, MP3, MIDI and MusicXML are currently supported. Analysis Options The Songs2See Editor analysis options include automatic main melody transcription, beat and key analysis, solo and backing track creation, transposition for different instruments, efficient and easy editing of results, etc. These analysis options are direct research results from the Music Information Retrieval community. Export Options Every Songs2See Editor session can be exported for the Songs2See Game. Additionally, intermediate results can also be exported to be used in other applications: solo and backing tracks can be exported as audio files, and transcription results can be exported as MIDI or MusicXML. Availability: The Songs2See Game is a platform-independent Flash-based application available both as a desktop and as a web application. The Songs2See Editor is currently only available for Windows systems. 
History: The Songs2See project started in 2010 as a collaboration between the Fraunhofer Institute for Digital Media Technology and European partners such as Grieg Music Education, Tampere University, Kids Interactive GmbH and Sweets for Brains GmbH. The project was funded by the Thuringian Ministry of Economy, Employment and Technology in an attempt to enable transnational cooperation between Thuringian companies and their partners from other European regions. After the conclusion of the project in March 2012, the company Songquito UG (haftungsbeschränkt) took over the commercialization, distribution and further development of Songs2See.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interval tree** Interval tree: In computer science, an interval tree is a tree data structure to hold intervals. Specifically, it allows one to efficiently find all intervals that overlap with any given interval or point. It is often used for windowing queries, for instance, to find all roads on a computerized map inside a rectangular viewport, or to find all visible elements inside a three-dimensional scene. A similar data structure is the segment tree. Interval tree: The trivial solution is to visit each interval and test whether it intersects the given point or interval, which requires O(n) time, where n is the number of intervals in the collection. Since a query may return all intervals, for example if the query is a large interval intersecting all intervals in the collection, this is asymptotically optimal; however, we can do better by considering output-sensitive algorithms, where the runtime is expressed in terms of m, the number of intervals produced by the query. Interval trees have a query time of O(log n + m) and an initial creation time of O(n log n), while limiting memory consumption to O(n). After creation, interval trees may be dynamic, allowing efficient insertion and deletion of an interval in O(log n) time. If the endpoints of intervals are within a small integer range (e.g., in the range [1,…,O(n)]), faster and in fact optimal data structures exist with preprocessing time O(n) and query time O(1+m) for reporting m intervals containing a given query point (see for a very simple one). Naive approach: In a simple case, the intervals do not overlap and they can be inserted into a simple binary search tree and queried in O(log n) time. However, with arbitrarily overlapping intervals, there is no way to compare two intervals for insertion into the tree since orderings sorted by the beginning points or the ending points may be different. A naive approach might be to build two parallel trees, one ordered by the beginning point, and one ordered by the ending point of each interval. This allows discarding half of each tree in O(log n) time, but the results must be merged, requiring O(n) time. This gives us queries in O(n + log n) = O(n), which is no better than brute force. Naive approach: Interval trees solve this problem. This article describes two alternative designs for an interval tree, dubbed the centered interval tree and the augmented tree. Centered interval tree: Queries require O(log n + m) time, with n being the total number of intervals and m being the number of reported results. Construction requires O(n log n) time, and storage requires O(n) space. Construction Given a set of n intervals on the number line, we want to construct a data structure so that we can efficiently retrieve all intervals overlapping another interval or point. Centered interval tree: We start by taking the entire range of all the intervals and dividing it in half at center (in practice, center should be picked to keep the tree relatively balanced). This gives three sets of intervals: those completely to the left of center, which we'll call left; those completely to the right of center, which we'll call right; and those overlapping center, which we'll call center. The intervals in left and right are recursively divided in the same manner until there are no intervals left. Centered interval tree: The intervals in center that overlap the center point are stored in a separate data structure linked to the node in the interval tree. 
This data structure consists of two lists, one containing all the intervals sorted by their beginning points, and another containing all the intervals sorted by their ending points. Centered interval tree: The result is a binary tree with each node storing: a center point; a pointer to another node containing all intervals completely to the left of the center point; a pointer to another node containing all intervals completely to the right of the center point; all intervals overlapping the center point, sorted by their beginning point; and all intervals overlapping the center point, sorted by their ending point. Intersecting Given the data structure constructed above, we receive queries consisting of ranges or points, and return all the ranges in the original set overlapping this input. Centered interval tree: With a point The task is to find all intervals in the tree that overlap a given point x. The tree is walked with a similar recursive algorithm as would be used to traverse a traditional binary tree, but with extra logic to support searching the intervals overlapping the "center" point at each node. For each tree node, x is compared to center, the midpoint used in node construction above. If x is less than center, the leftmost set of intervals, left, is considered. If x is greater than center, the rightmost set of intervals, right, is considered. Centered interval tree: As each node is processed as we traverse the tree from the root to a leaf, the ranges in its center are processed. If x is less than center, we know that all intervals in center end after x, or they could not also overlap center. Therefore, we need only find those intervals in center that begin before x. We can consult the lists of center that have already been constructed. Since we only care about the interval beginnings in this scenario, we can consult the list sorted by beginnings. Suppose we find the closest number no greater than x in this list. All ranges from the beginning of the list to that found point overlap x because they begin before x and end after x (as we know because they overlap center, which is larger than x). Thus, we can simply start enumerating intervals in the list until the startpoint value exceeds x. Likewise, if x is greater than center, we know that all intervals in center must begin before x, so we find those intervals that end after x using the list sorted by interval endings. Centered interval tree: If x exactly matches center, all intervals in center can be added to the results without further processing and tree traversal can be stopped. Centered interval tree: With an interval For a result interval r to intersect our query interval q, one of the following must hold: the start and/or end point of r is in q; or r completely encloses q. We first find all intervals with start and/or end points inside q using a separately-constructed tree. In the one-dimensional case, we can use a search tree containing all the start and end points in the interval set, each with a pointer to its corresponding interval. A binary search in O(log n) time for the start and end of q reveals the minimum and maximum points to consider. Each point within this range references an interval that overlaps q and is added to the result list. Care must be taken to avoid duplicates, since an interval might both begin and end within q. This can be done using a binary flag on each interval to mark whether or not it has been added to the result set. Centered interval tree: Finally, we must find intervals that enclose q. 
To find these, we pick any point inside q and use the algorithm above to find all intervals intersecting that point (again, being careful to remove duplicates). Higher dimensions The interval tree data structure can be generalized to a higher dimension N with identical query and construction time and O(n log n) space. Centered interval tree: First, a range tree in N dimensions is constructed that allows efficient retrieval of all intervals with beginning and end points inside the query region R. Once the corresponding ranges are found, the only thing that is left are those ranges that enclose the region in some dimension. To find these overlaps, N interval trees are created, and one axis intersecting R is queried for each. For example, in two dimensions, the bottom of the square R (or any other horizontal line intersecting R) would be queried against the interval tree constructed for the horizontal axis. Likewise, the left (or any other vertical line intersecting R) would be queried against the interval tree constructed on the vertical axis. Centered interval tree: Each interval tree also needs an addition for higher dimensions. At each node we traverse in the tree, x is compared with center to find overlaps. Instead of two sorted lists of points as was used in the one-dimensional case, a range tree is constructed. This allows efficient retrieval of all points in center that overlap region R. Deletion If after deleting an interval from the tree, the node containing that interval contains no more intervals, that node may be deleted from the tree. This is more complex than a normal binary tree deletion operation. Centered interval tree: An interval may overlap the center point of several nodes in the tree. Since each node stores the intervals that overlap it, with all intervals completely to the left of its center point in the left subtree, and similarly for the right subtree, it follows that each interval is stored in the node closest to the root from the set of nodes whose center point it overlaps. Centered interval tree: Normal deletion operations in a binary tree (for the case where the node being deleted has two children) involve promoting a node further from the leaf to the position of the node being deleted (usually the leftmost child of the right subtree, or the rightmost child of the left subtree). Centered interval tree: As a result of this promotion, some nodes that were above the promoted node will become its descendants; it is necessary to search these nodes for intervals that also overlap the promoted node, and move those intervals into the promoted node. As a consequence, this may result in new empty nodes, which must be deleted, following the same algorithm again. Centered interval tree: Balancing The same issues that affect deletion also affect rotation operations; rotation must preserve the invariant that intervals are stored as close to the root as possible. Augmented tree: Another way to represent intervals is described in Cormen et al. (2009, Section 14.3: Interval trees, pp. 348–354). Both insertion and deletion require O(log n) time, with n being the total number of intervals in the tree prior to the insertion or deletion operation. Augmented tree: An augmented tree can be built from a simple ordered tree, for example a binary search tree or self-balancing binary search tree, ordered by the 'low' values of the intervals. An extra annotation is then added to every node, recording the maximum upper value among all the intervals from this node down. 
Maintaining this attribute involves updating all ancestors of the node from the bottom up whenever a node is added or deleted. This takes only O(h) steps per node addition or removal, where h is the height of the node added or removed in the tree. If there are any tree rotations during insertion and deletion, the affected nodes may need updating as well. Augmented tree: Now, it is known that two intervals A and B overlap only when both A.low ≤ B.high and A.high ≥ B.low. When searching the trees for nodes overlapping with a given interval, you can immediately skip: all nodes to the right of nodes whose low value is past the end of the given interval, and all nodes that have their maximum high value below the start of the given interval. Membership queries Some performance may be gained if the tree avoids unnecessary traversals. These can occur when adding intervals that already exist or removing intervals that don't exist. Augmented tree: A total order can be defined on the intervals by ordering them first by their lower bounds and then by their upper bounds. Then, a membership check can be performed in O(log n) time, versus the O(k + log n) time required to find duplicates if k intervals overlap the interval to be inserted or removed. This solution has the advantage of not requiring any additional structures. The change is strictly algorithmic. The disadvantage is that membership queries take O(log n) time. Augmented tree: Alternately, at the cost of O(n) memory, membership queries in expected constant time can be implemented with a hash table, updated in lockstep with the interval tree. This may not necessarily double the total memory requirement, if the intervals are stored by reference rather than by value. Augmented tree: Java example: Adding a new interval to the tree The key of each node is the interval itself, hence nodes are ordered first by low value and then by high value, and the value of each node is the end point of the interval. Java example: Searching a point or an interval in the tree To search for an interval, one walks the tree, using the key (n.getKey()) and high value (n.getValue()) to omit any branches that cannot overlap the query. The simplest case is a point query, where a.compareTo(b) returns a negative value if a < b, zero if a = b, and a positive value if a > b. The code to search for an interval is similar, except for the check in the middle, which uses an overlapsWith() test; a consolidated sketch of these Java examples appears at the end of this entry. Higher dimensions Augmented trees can be extended to higher dimensions by cycling through the dimensions at each level of the tree. For example, for two dimensions, the odd levels of the tree might contain ranges for the x-coordinate, while the even levels contain ranges for the y-coordinate. This approach effectively converts the data structure from an augmented binary tree to an augmented kd-tree, thus significantly complicating the balancing algorithms for insertions and deletions. Augmented tree: A simpler solution is to use nested interval trees. First, create a tree using the ranges for the y-coordinate. Now, for each node in the tree, add another interval tree on the x-ranges, for all elements whose y-range is the same as that node's y-range. The advantage of this solution is that it can be extended to an arbitrary number of dimensions using the same code base. Augmented tree: At first, the additional cost of the nested trees might seem prohibitive, but this is usually not so. 
As with the non-nested solution earlier, one node is needed per x-coordinate, yielding the same number of nodes for both solutions. The only additional overhead is that of the nested tree structures, one per vertical interval. This structure is usually of negligible size, consisting only of a pointer to the root node, and possibly the number of nodes and the depth of the tree. Medial- or length-oriented tree: A medial- or length-oriented tree is similar to an augmented tree, but symmetrical, with the binary search tree ordered by the medial points of the intervals. There is a maximum-oriented binary heap in every node, ordered by the length of the interval (or half of the length). We also store the minimum and maximum possible value of the subtree in each node (thus the symmetry). Medial- or length-oriented tree: Overlap test Using only the start and end values of two intervals (ai, bi), for i = 0, 1, the overlap test can be performed as follows: a0 < b1 and a1 < b0. This can be simplified using the sum and difference: si = ai + bi, di = bi − ai, which reduces the overlap test to: |s1 − s0| < d0 + d1 (a worked check of this simplification is sketched after this entry). Adding interval Adding new intervals to the tree is the same as for a binary search tree using the medial value as the key. We push di onto the binary heap associated with the node, and update the minimum and maximum possible values associated with all higher nodes. Medial- or length-oriented tree: Searching for all overlapping intervals Let's use aq, bq, mq, dq for the query interval, and Mn for the key of a node (compared to the mi of the stored intervals). Starting with the root node, in each node we first check whether it is possible for our query interval to overlap with the node's subtree, using the node's minimum and maximum values (if it is not possible, we don't continue for this node). Medial- or length-oriented tree: Then we calculate the minimum di needed for intervals inside this node (not its children) to overlap the query interval (knowing mi = Mn): min{di} = |mq − Mn| − dq, and perform a query on its binary heap for the di's bigger than min{di}. Then we pass through both left and right children of the node, doing the same thing. Medial- or length-oriented tree: In the worst case, we have to scan all nodes of the binary search tree, but since the binary heap query is optimal, this is acceptable (a 2-dimensional problem cannot be optimal in both dimensions). This algorithm is expected to be faster than a traditional interval tree (augmented tree) for search operations. Adding elements is a little slower in practice, though the order of growth is the same.
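The sum-and-difference overlap test above can be verified algebraically. Below is a short worked check in LaTeX, using only the definitions of si and di given in the text:

```latex
\documentclass{article}
\begin{document}
With $s_i = a_i + b_i$ and $d_i = b_i - a_i$, the single test
$|s_1 - s_0| < d_0 + d_1$ unfolds into two inequalities:
\[
  -(d_0 + d_1) \;<\; s_1 - s_0 \;<\; d_0 + d_1 .
\]
The right-hand inequality expands to
\[
  (a_1 + b_1) - (a_0 + b_0) < (b_0 - a_0) + (b_1 - a_1)
  \iff 2a_1 < 2b_0
  \iff a_1 < b_0 ,
\]
and, symmetrically, the left-hand inequality reduces to $a_0 < b_1$.
Together these are exactly the pairwise overlap test
$a_0 < b_1$ and $a_1 < b_0$.
\end{document}
```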
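Since the Java listings referenced in the augmented tree section above did not survive extraction, the following is a minimal sketch of that design, reconstructed under stated assumptions: the class and field names (Interval, Node, overlapsWith()) are illustrative rather than from any particular library, and the tree is left unbalanced for brevity where a production version would use a self-balancing tree and update the annotations during rotations.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative interval type; ordered first by low value, then by high value.
class Interval implements Comparable<Interval> {
    final int low, high;
    Interval(int low, int high) { this.low = low; this.high = high; }

    // Two intervals A and B overlap when A.low <= B.high and A.high >= B.low.
    boolean overlapsWith(Interval other) {
        return low <= other.high && high >= other.low;
    }

    public int compareTo(Interval o) {
        if (low != o.low) return Integer.compare(low, o.low);
        return Integer.compare(high, o.high);
    }
}

class IntervalTree {
    private static class Node {
        Interval key;
        int max;            // annotation: maximum high value in this subtree
        Node left, right;
        Node(Interval key) { this.key = key; this.max = key.high; }
    }

    private Node root;

    // Plain BST insert; maintains the subtree-max annotation on the way down.
    public void add(Interval iv) { root = add(root, iv); }

    private Node add(Node n, Interval iv) {
        if (n == null) return new Node(iv);
        if (iv.compareTo(n.key) < 0) n.left = add(n.left, iv);
        else n.right = add(n.right, iv);
        n.max = Math.max(n.max, iv.high);
        return n;
    }

    // Report all stored intervals overlapping the query interval.
    public List<Interval> search(Interval q) {
        List<Interval> out = new ArrayList<>();
        search(root, q, out);
        return out;
    }

    private void search(Node n, Interval q, List<Interval> out) {
        if (n == null || n.max < q.low) return;  // whole subtree ends before q
        search(n.left, q, out);
        if (n.key.overlapsWith(q)) out.add(n.key);
        if (n.key.low <= q.high) search(n.right, q, out); // right subtree starts past q otherwise
    }
}
```

The key design point is the max annotation: a subtree can be pruned whenever its maximum high value ends before the query begins, which is what lets the search skip branches that cannot contain an overlap.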
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Light aircraft pilot licence** Light aircraft pilot licence: The light aircraft pilot licence (LAPL) is a pilot licence allowing the pilot to fly small aircraft. It is issued in EASA member states and the United Kingdom. Unlike most other licences, it is not covered by the ICAO framework and is usually not able to be used in other states or regulatory areas. Privileges: Separate LAPLs are issued for aeroplanes, helicopters, sailplanes (gliders) and balloons. Privileges: For aeroplanes, holders of an LAPL may act as pilot in command of single-engine piston aeroplanes or touring motor gliders with a maximum certificated take-off mass of 2,000 kg or less, carrying a maximum of 3 passengers.: FCL.105.A For helicopters, holders of an LAPL may act as pilot in command of single-engine helicopters with a maximum certificated take-off mass of 2,000 kg or less, carrying a maximum of 3 passengers.: FCL.105.H For sailplanes, holders of an LAPL may act as pilot in command of sailplanes and powered sailplanes.: FCL.105.S For balloons, holders of an LAPL may act as pilot in command of hot-air balloons or hot-air airships with a maximum of 3,400 m3 envelope capacity or gas balloons with a maximum of 1,260 m3, carrying a maximum of 3 passengers.: FCL.105.S Requirements: LAPL applicants must be at least 17 years old for aeroplanes and helicopters, or 16 years old for sailplanes and balloons.: FCL.100 Recency To use the licence, an LAPL holder needs to have completed, in the last 24 months, as pilot of an aeroplane or TMG:: FCL.140.A 12 hours of flight time as pilot in command, including 12 take-offs and landings, and refresher training of at least 1 hour of total flight time with an instructor. Legal basis: The LAPL was introduced in 2012. European Union and EASA member states The EU LAPL is defined in Regulation (EU) No. 1178/2011. Compared to the ICAO licence at the level of a PPL, the requirements, skill tests, and privileges are reduced. The rules and requirements for the licence are stated in Part-FCL of Regulation (EU) No. 1178/2011. United Kingdom When the United Kingdom left the EASA system at the end of 2020, EASA Part-FCL was retained in UK law as UK Part-FCL. As such, the UK continues to issue LAPLs; however, these are not compatible with the EASA LAPL.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**WoW64** WoW64: In computing on Microsoft platforms, WoW64 (Windows 32-bit on Windows 64-bit) is a subsystem of the Windows operating system capable of running 32-bit applications on 64-bit Windows. It is included in all 64-bit versions of Windows—including Windows XP Professional x64 Edition, IA-64 and x64 versions of Windows Server 2003, as well as x64 versions of Windows Vista, Windows Server 2008, Windows 7, Windows 8, Windows Server 2012, Windows 8.1, Windows 10, Windows Server 2016, Windows Server 2019, Windows 11, Windows Server 2022, and Wine, as well as ARM64 versions of Windows 10, Windows 11 and Windows Server 2022. In Windows Server Core, WoW64 is an optional component, and it is not included in Windows Nano Server. WoW64 aims to take care of many of the differences between 32-bit Windows and 64-bit Windows, particularly involving structural changes to Windows itself. Translation libraries: The WoW64 subsystem comprises a lightweight compatibility layer that has similar interfaces on all 64-bit versions of Windows. It aims to create a 32-bit environment that provides the interfaces required to run unmodified 32-bit Windows applications on a 64-bit system. WoW64 is implemented using several DLLs, some of which include: Wow64.dll, the core interface to the Windows NT kernel that translates (thunks) between 32-bit and 64-bit calls, including pointer and call stack manipulations; Wow64win.dll, which provides the appropriate entry points for 32-bit applications (win32k thunks); and Wow64cpu.dll, which takes care of switching the processor from 32-bit to 64-bit mode and is used in x86-64 implementations of Windows only. Other DLLs and binaries are included for Itanium and ARMv8 64-bit architectures to provide emulation to x86 or for 32-bit entry points if the architecture has a native 32-bit operating mode. Architectures: Despite its outwardly similar appearance on all versions of 64-bit Windows, WoW64's implementation varies depending on the target instruction set architecture. For example, the version of 64-bit Windows developed for the Intel Itanium 2 processor (known as the IA-64 architecture) uses Wow64win.dll to set up the emulation of x86 instructions within the Itanium 2's unique instruction set. This emulation is a much more computationally expensive task than Wow64win.dll's functions on the x86-64 architecture, which switches the processor hardware from its 64-bit mode to compatibility mode when it becomes necessary to execute a 32-bit thread, and then handles the switch back to 64-bit mode. Registry and file system: The WoW64 subsystem also handles other key aspects of running 32-bit applications. It is involved in managing the interaction of 32-bit applications with Windows components such as the Registry, which has distinct keys for 64-bit and 32-bit applications. For example, HKEY_LOCAL_MACHINE\Software\Wow6432Node is the 32-bit equivalent of HKEY_LOCAL_MACHINE\Software (although 32-bit applications are not aware of this redirection). Some Registry keys are mapped from 64-bit to their 32-bit equivalents, while others have their contents mirrored, depending on the edition of Windows. Registry and file system: The operating system uses the %SystemRoot%\system32 directory for its 64-bit library and executable files. This is done for backward compatibility reasons, as many legacy applications are hardcoded to use that path. When executing 32-bit applications, WoW64 transparently redirects access to "system32" (e.g. 
DLL loads) to %SystemRoot%\SysWoW64, which contains 32-bit libraries and executables. Exceptions from these redirects are: %SystemRoot%\system32\catroot, %SystemRoot%\system32\catroot2, %SystemRoot%\system32\driverstore (redirected on Windows Server 2008, Windows Vista, Windows Server 2003 and Windows XP), %SystemRoot%\system32\drivers\etc, %SystemRoot%\system32\logfiles, and %SystemRoot%\system32\spool. The redirection helps to keep 32-bit applications working without them needing to be aware of the WoW64 status. If a 32-bit application wants to access the real %SystemRoot%\System32, it can do so through the pseudo-directory %SystemRoot%\sysnative since Windows Vista. Detection of WoW64 status is possible via IsWow64Process(). Registry and file system: There are two Program Files directories, each visible to both 32-bit and 64-bit applications. The directory that stores the 32-bit files is called Program Files (x86) to differentiate between the two, while the 64-bit directory maintains the traditional Program Files name without any additional qualifier. File system redirection is not used to maintain the separation; instead, WoW64 changes FOLDERID_ProgramFiles and similar query results to point installer programs to the correct directory. Application compatibility: 32-bit applications that include only 32-bit kernel-mode device drivers, or that plug into the process space of components that are implemented purely as 64-bit processes (e.g. Windows Explorer) cannot be executed on a 64-bit platform. Application compatibility: 32-bit service applications are supported. The SysWOW64 folder located in the Windows folder on the OS drive contains several applications to support 32-bit applications (e.g. cmd.exe, and odbcad32.exe to register ODBC connections for 32-bit applications). 16-bit legacy applications for MS-DOS and early versions of Windows are usually incompatible with 64-bit versions of Windows Vista, 7, 8, and 10, but can be run on a 64-bit Windows OS via virtualization software. 32-bit versions of Windows XP, Vista, 7, 8, and 10, on the other hand, can usually run 16-bit applications with few to no problems. 16-bit applications cannot be directly run under x64 editions of Windows, because the CPU does not support VM86 mode when running in x64. Application compatibility: Internet Explorer is implemented as both a 32-bit and a 64-bit application because of the large number of 32-bit ActiveX components on the Internet that would not be able to plug into the 64-bit version. Application compatibility: Previously, the 32-bit version was used by default and it was difficult to set the 64-bit version to be the default browser. This changed in Internet Explorer 10, which ran 32-bit add-ons inside a 64-bit session, eliminating the need to switch between the two versions. If a user were to go into the 32-bit folder (typically C:\Program Files (x86)\Internet Explorer) and double-click the iexplore.exe file there, the 64-bit version would still load. In Internet Explorer 9 and earlier, this would load only the 32-bit version. Application compatibility: As of 2010, a bug in the translation layer of the x64 version of WoW64 also renders all 32-bit applications that rely on the Windows API function GetThreadContext incompatible. Such applications include application debuggers, call stack tracers (e.g. IDEs displaying call stacks) and applications that use garbage collection (GC) engines. One of the more widely used but affected GC engines is the Boehm GC. 
It is also used as the default garbage collector of the popular Mono runtime. While Mono introduced a new (but optional) GC in October 2010 called SGen-GC, it performs stack scanning in the same manner as the Boehm GC, thus also making it incompatible under WoW64. No fix has been provided as of July 2016, although workarounds have been suggested. Performance: According to Microsoft, 32-bit software running under WOW64 has performance similar to executing under 32-bit Windows, but with fewer threads possible and other overheads. A 32-bit application can be given a full 4 gigabytes of virtual memory on a 64-bit system, whereas on a 32-bit system some of this addressable memory is lost because it is used by the kernel and memory-mapped peripherals such as the display adaptor, typically resulting in apps being able to use at most 2 GB or 3 GB of RAM.
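As a small illustration of WoW64 detection from managed code, the sketch below checks the documented Windows environment variables PROCESSOR_ARCHITECTURE and PROCESSOR_ARCHITEW6432 from Java; the latter is set only for 32-bit processes running under WoW64, where it names the native architecture. This is an approximation chosen because calling IsWow64Process() directly from Java would require JNI or a binding library; the class name is illustrative.

```java
// Approximate WoW64 detection via documented Windows environment variables.
// PROCESSOR_ARCHITECTURE is the architecture as seen by this process;
// PROCESSOR_ARCHITEW6432 exists only inside a 32-bit process under WoW64.
public class Wow64Check {
    public static void main(String[] args) {
        String arch = System.getenv("PROCESSOR_ARCHITECTURE");
        String wow64Arch = System.getenv("PROCESSOR_ARCHITEW6432");

        boolean underWow64 = "x86".equalsIgnoreCase(arch) && wow64Arch != null;
        System.out.println("Process architecture: " + arch);
        System.out.println("Running under WoW64:  " + underWow64
                + (underWow64 ? " (native: " + wow64Arch + ")" : ""));
    }
}
```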
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fulvio Cacace** Fulvio Cacace: Fulvio Cacace (died 1 December 2003) was an Italian chemist. Fulvio Cacace: In 1963, while at the Sapienza University of Rome, he devised the decay technique for the study of organic radicals and carbenium cations. The technique is based on the preparation of compounds containing the radioactive isotope tritium in place of common hydrogen. When the tritium undergoes beta decay, it is turned into a helium-3 atom, which detaches from the parent molecule, leaving the desired cation or radical behind. Fulvio Cacace: The technique has made it possible to study the chemistry of a vast number of such radicals and ions, in all sorts of environments, including solids, liquids, and gases. In particular, it has provided much of the knowledge of the chemistry of the helium hydride ion, specifically [3He3H]+. Some publications: (1966) "A Tracer Study of the Reactions of Ionic Intermediates Formed by Nuclear Decay of Tritiated Molecules. I. Methane-t4". (1970) "Gaseous Carbonium Ions from the Decay of Tritiated Molecules". (1973) "Gas-phase reaction of tert-butyl ions with arenes. Remarkable selectivity of a gaseous, charged electrophile". (1976) "Gas-phase alkylation of xylenes by tert-butyl(1+) ions". (1977) "Aromatic substitution in the gas phase. Ambident behavior of phenol toward t-C4H9+ cations". (1977) "Aromatic substitution in the liquid phase by bona fide free methyl cations. Alkylation of benzene and toluene". (1978) "Aromatic substitutions by [3H3]methyl decay ions. A comparative study of the gas- and liquid-phase attack on benzene and toluene". (1979) "Gas-phase reaction of free isopropyl ions with phenol and anisole". (1980) "Aromatic substitution in the gas phase. A comparative study of the alkylation of benzene and toluene with C3H7+ ions from the protonation of cyclopropane and propene". (1981) "Aromatic substitution in the gas phase. Alkylation of arenes by gaseous C4H9+ cations". (1982) "On the formation of adduct ions in gas-phase aromatic substitution". (1982) "Alkylation of nitriles with gaseous carbenium ions. The Ritter reaction in the dilute gas state". (1982) "Aromatic substitution in the gas phase. Alkylation of arenes by C4H9+ ions from the protonation of C4 alkenes and cycloalkanes with gaseous Brønsted acids". (1983) "Aromatic substitution in the gas phase. Intramolecular selectivity of the reaction of aniline with charged electrophiles". (1984) "Gas-phase reactions of free phenylium cations with C3H6 hydrocarbons". (1985) "Intramolecular selectivity of the alkylation of substituted anilines by gaseous cations". (1986) "Temperature dependence of the substrate and positional selectivity of the aromatic substitution by gaseous tert-butyl cation". (1990) "Nuclear Decay Techniques in Ion Chemistry". (1992) "Proton shifts in gaseous arenium ions and their role in the gas-phase aromatic substitution by free Me3C+ and Me3Si+ [tert-butyl and trimethylsilyl] cations". (1993) "Interannular proton transfer in thermal arenium ions from the gas-phase alkylation of 1,2-diphenylethane".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Formyl peptide receptor 2** Formyl peptide receptor 2: N-formyl peptide receptor 2 (FPR2) is a G-protein coupled receptor (GPCR) located on the surface of many cell types of various animal species. The human receptor protein is encoded by the FPR2 gene and is activated to regulate cell function by binding any one of a wide variety of ligands, including not only certain N-Formylmethionine-containing oligopeptides such as N-Formylmethionine-leucyl-phenylalanine (FMLP) but also the polyunsaturated fatty acid metabolite of arachidonic acid, lipoxin A4 (LXA4). Because of its interaction with lipoxin A4, FPR2 is also commonly named the ALX/FPR2 or just ALX receptor. Expression: The FPR2 receptor is expressed on human neutrophils, eosinophils, monocytes, macrophages, T cells, synovial fibroblasts, and intestinal and airway epithelium. Function: Many oligopeptides that possess an N-Formylmethionine N-terminal residue, such as the prototypical tripeptide N-Formylmethionine-leucyl-phenylalanine (i.e. FMLP), are products of the protein synthesis conducted by bacteria. They stimulate granulocytes to migrate directionally (see chemotaxis) and become active in engulfing (see phagocytosis) and killing bacteria and thereby contribute to host defense by directing the innate immune response of acute inflammation to sites of bacterial invasion. Early studies suggested that these formyl oligopeptides operated by a receptor-mediated mechanism. Accordingly, the human leukocyte cell line, HL-60 promyelocytes (which do not respond to FMLP), was purposely differentiated to granulocytes (which do respond to FMLP) and used to partially purify and clone a gene that, when transfected into FMLP-unresponsive cells, bestowed responsiveness to this and other N-formyl oligopeptides. This receptor was initially named the formyl peptide receptor (i.e. FPR). However, a series of subsequent studies cloned two genes that encoded receptor-like proteins with amino acid sequences very similar to that of FPR. The three receptors had been given various names but are now termed formyl peptide receptor 1 (i.e. FPR1) for the first defined receptor, FPR2, and formyl peptide receptor 3 (i.e. FPR3). FPR2 and FPR3 are termed formyl peptide receptors based on the similarities of their amino acid sequences to that of FPR1 rather than any preferences for binding formyl peptides. Indeed, FPR2 prefers a very different set of ligands and has some very different functions than FPR1, while FPR3 does not bind FMLP or many other N-formyl peptides which bind to FPR1 or FPR2. A major function for FPR2 is binding certain specialized pro-resolving mediators (SPMs), i.e. lipoxin A4 (LxA4) and AT-LxA4 (metabolites of arachidonic acid) as well as resolvin D1 (RvD1), RvD2, and AT-RvD1 (metabolites of docosahexaenoic acid), and thereby to mediate these metabolites' activities in inhibiting and resolving inflammation (see specialized pro-resolving mediators). However, FPR2 also mediates responses to a wide range of polypeptides and proteins which may serve to promote inflammation or regulate activities not directly involving inflammation. The function of FPR3 is not clear. Nomenclature: Confusingly, there are two "standard" nomenclatures for FPR receptors and their genes: the first used, FPR, FPR1, and FPR2, and its replacement, FPR1, FPR2, and FPR3. The latter nomenclature is recommended by the International Union of Basic and Clinical Pharmacology and is used here. 
Other previously used names for FPR1 are NFPR and FMLPR; for FPR2 are FPRH1, FPRL1, RFP, LXA4R, ALXR, FPR2/ALX, HM63, FMLPX, and FPR2A; and for FPR3 are FPRH2, FPRL2, and FMLPY. Genes: Human The human FPR2 gene encodes the 351 amino acid receptor, FPR2, within an intronless open reading frame. It forms a cluster with the FPR1 and FPR3 genes on chromosome 19q13.3 in the order of FPR1, FPR2, and FPR3; this cluster also includes the genes for two other chemotactic factor receptors, the G protein-coupled C5a receptor (also termed CD88) and a second C5a receptor, GPR77 (i.e. C5a2 or C5L2), which has the structure of G protein receptors but apparently does not couple to G proteins and is of uncertain function. The FPR1, FPR2, and FPR3 paralogs, based on phylogenetic analysis, originated from a common ancestor, with an early duplication separating FPR1 from FPR2/FPR3 and FPR3 originating from the latest duplication event, near the origin of primates. Genes: Mouse Mice have no fewer than 7 FPR receptors encoded by 7 genes that localize to chromosome 17A3.2 in the following order: Fpr1, Fpr-rs2 (or fpr2), Fpr-rs1 (or LXA4R), Fpr-rs4, Fpr-rs7, Fpr-rs6, and Fpr-rs3; this locus also contains the pseudogenes ψFpr-rs2 and ψFpr-rs3 (or ψFpr-rs5), which lie just after Fpr-rs2 and Fpr-rs1, respectively. The 7 mouse FPR receptors have ≥50% amino acid sequence identity with each other as well as with the three human FPR receptors. Fpr2 and mFpr-rs1 bind with high affinity and respond to lipoxins but have little or no affinity for, and responsiveness to, formyl peptides; they thereby share key properties with human FPR2. Gene knockout studies The large number of mouse compared to human FPR receptors makes it difficult to extrapolate human FPR functions based on genetic (e.g. gene knockout or forced overexpression) or other experimental manipulations of the FPR receptors in mice. In any event, combined disruption of the Fpr2 and Fpr3 genes causes mice to mount enhanced acute inflammatory responses as evidenced in three models: intestine inflammation caused by mesenteric artery ischemia-reperfusion, paw swelling caused by carrageenan injection, and arthritis caused by the intraperitoneal injection of arthritis-inducing serum. Since Fpr2 gene knockout mice exhibit a faulty innate immune response to intravenous Listeria monocytogenes injection, these results suggest that the human FPR2 receptor and mouse Fpr3 receptor have equivalent functions in dampening at least certain inflammatory responses. Genes: Other species Rats express an ortholog of FPR2 (74% amino acid sequence identity) with high affinity for lipoxin A4. Cellular and tissue distribution: FPR2 is often co-expressed with FPR1. It is widely expressed by circulating blood neutrophils, eosinophils, basophils, and monocytes; lymphocyte T cells and B cells; tissue mast cells, macrophages, fibroblasts, and immature dendritic cells; vascular endothelial cells; neural tissue glial cells, astrocytes, and neuroblastoma cells; liver hepatocytes; various types of epithelial cells; and various types of multicellular tissues. Ligands and ligand-based disease-related activities: FPR2 is also known as the LXA4 or ALX/FPR2 receptor based on studies finding that it is a high affinity receptor for the arachidonic acid metabolite, lipoxin A4 (LXA4), and thereafter for a related arachidonic acid metabolite, the epi-lipoxin, aspirin-triggered lipoxin A4 (i.e. ATL, 15-epi-LXA4) and a docosahexaenoic acid metabolite, resolvin D1 (i.e. 
RvD1); these three cell-derived fatty acid metabolites act to inhibit and resolve inflammatory responses. This receptor was previously known as an orphan receptor, termed RFP, obtained by screening myeloid cell-derived libraries with a FMLP-like probe. In addition to LXA4, ATL, RvD1, and FMLP, FPR2 binds a wide range of polypeptides, proteins, and products derived from these polypeptides and proteins. One or more of these various ligands may be involved not only in regulating inflammation but also in the development of obesity, cognitive decline, reproduction, neuroprotection, and cancer. However, the most studied and accepted role for FPR2 receptors is in mediating the actions of the cited lipoxins and resolvins in dampening and resolving a wide range of inflammatory reactions (see lipoxin, epi-lipoxin, and resolvin). The following is a list of FPR2/ALX ligands and, in parentheses, their suggested pro-inflammatory or anti-inflammatory actions based on in vitro and animal model studies: a) bacterial and mitochondrial N-formyl peptides such as FMLP (pro-inflammatory but perhaps less significant or insignificant compared to the actions of LXA4, ATL, and RvD1 on FPR2); b) Hp(2-20), a non-formyl peptide derived from Helicobacter pylori (pro-inflammatory by promoting inflammatory responses against this stomach ulcer-causing pathogen); c) T21/DP107 and N36, which are N-acetylated polypeptides derived from the gp41 envelope protein of the HIV-1 virus, F peptide, which is derived from the gp120 protein of the HIV-1 Bru strain virus, and V3 peptide, which is derived from a linear sequence of the V3 region of the HIV-1 MN strain virus (unknown effect on inflammation and HIV infection); d) the N-terminally truncated form of the chemotactic chemokine, CCL23, termed CCL23 splice variant CCL23β(amino acids 22–137), and SHAAGtide, which is a product of CCL23β cleavage by pro-inflammatory proteases (pro-inflammatory); e) two N-acetyl peptides, Ac2–26 and Ac9–25 of Annexin A1 (ANXA1 or lipocortin 1), which at high concentrations fully stimulate neutrophil functions but at lower concentrations leave neutrophils desensitized (i.e. 
unresponsive) to the chemokine IL-8 (CXCL8) (pro-inflammatory and anti-inflammatory, respectively, highlighting the duality of FPR2/ALX functions in inflammation); f) Amyloid beta(1–42) fragment and prion protein fragment PrP(106–126) (pro-inflammatory, suggesting a role for FPR2/ALX in the inflammatory components of diverse amyloid-based diseases including Alzheimer's disease, Parkinson's disease, Huntington's disease, prion-based diseases such as transmissible spongiform encephalopathy, Creutzfeldt–Jakob disease, and kuru, and numerous other neurological and non-neurological diseases [see amyloid]); g) the neuroprotective peptide, Humanin (anti-inflammatory by inhibiting the pro-inflammatory effects of Amyloid beta(1–42) in promoting Alzheimer's disease-related inflammation); h) two cleaved soluble fragments of UPARAP, which is the urokinase-type plasminogen activator receptor (uPAR): D2D3(88–274) and uPAR(84–95) (pro-inflammatory); i) LL-37 and CRAMP, which are enzymatic cleavage products of human and rat, respectively, cathelicidin-related antimicrobial peptides, numerous pleurocidins, which are a family of cationic antimicrobial peptides found in fish and other vertebrates structurally and functionally similar to cathelicidins, and temporin A, which is a frog-derived antimicrobial peptide (pro-inflammatory products derived from host anti-microbial proteins); and j) Pituitary adenylate cyclase-activating polypeptide 27 (pro-inflammatory).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crash reporter** Crash reporter: A crash reporter is usually system software whose function is to identify and report crash details and to alert when crashes occur, in production or in development and testing environments. Crash reports often include data such as stack traces, type of crash, trends and version of software. These reports help software developers (web, SaaS, mobile apps and more) diagnose and fix the underlying problem causing the crashes. Crash reports may contain sensitive information such as passwords, email addresses, and contact information, and so have become objects of interest for researchers in the field of computer security. Implementing crash reporting tools as part of the development cycle has become standard, and crash reporting tools have become a commodity; many are offered for free, like Crashlytics. Crash reporter: Many major industry players in the software development ecosystem have entered the market. Companies such as Twitter, Google and others put considerable effort into encouraging software developers to use their APIs, knowing this will increase their revenues down the road (through advertisements and other mechanisms). Realizing that they must offer solutions for as many development issues as possible or lose ground to competitors, they keep adding advanced features. Crash reporting tools are an important piece of development functionality that such companies include in their portfolios of solutions. Crash reporter: Many crash reporting tools specialize in mobile apps, and many of them are SDKs. macOS: In macOS there is a standard crash reporter in /System/Library/CoreServices/Crash Reporter.app. Crash Reporter.app sends the Unix crash logs to Apple for their engineers to look at. The top text field of the window has the crash log, while the bottom field is for user comments. Users may also copy and paste the log into their email client to send to the application vendor for them to use. Crash Reporter.app has three main modes: display nothing on crash, display an "Application has crashed" dialog box, or display the Crash Report window. Windows: Microsoft Windows includes a crash reporting service called Windows Error Reporting that prompts users to send crash reports to Microsoft for online analysis. The information goes to a central database run by Microsoft. It consists of diagnostic information that helps the company or development team responsible for the crash to debug and resolve the issue if they choose to do so. Crash reports for third-party software are available to third-party developers who have been granted access by Microsoft. Windows: The system considers all parts of the debug and release process, such that targeted bug fixes can be applied through Windows Update. In other words, only people experiencing a particular type of crash can be offered the bug fix, thus limiting exposure to an issue. According to Der Spiegel, the Microsoft crash reporter has been exploited by NSA's Tailored Access Operations (TAO) unit to hack into the computers of Mexico's Secretariat of Public Security. According to the same source, Microsoft crash reports are automatically harvested in NSA's XKeyscore database, in order to facilitate such operations. CrashRpt Another error reporting library for Windows is CrashRpt. The CrashRpt library is a lightweight open-source error-handling framework for applications created in Microsoft Visual C++ and running under Windows. The library is distributed under the New BSD License. 
CrashRpt intercepts unhandled exceptions, creates a crash minidump file, builds a crash descriptor in XML format, presents an interface to allow the user to review the crash report, and finally compresses and sends the crash report to the software support team. Windows: CrashRpt also provides a server-side command line tool for crash report analysis named crprober. The tool is able to read all received crash reports from a directory and generate a summary file in text format for each crash report. It also groups similar crash reports, making it easier to determine the most common problems. The crprober tool does not provide any graphical interface, so it is rather cryptic and difficult to use. Windows: There is also an open-source server application named CrashFix Server that can store, organize and analyze crash reports sent by the CrashRpt library. It can group similar crash reports, has a built-in bug tracker and can generate statistical reports. CrashFix Server provides a web-based user interface, making it possible for several project members to collaborate (upload debugging symbols, browse crash reports and associate bugs with crash reports). Linux: ABRT ABRT (Automated Bug Reporting Tool) is an error reporting tool made for Fedora and Red Hat Enterprise Linux. The developers do not currently have plans for porting it to other Linux distributions. ABRT intercepts core dumps or tracebacks from applications and (after user confirmation) sends bug reports to various bug-tracking systems, such as Fedora Bugzilla. Linux: Ubuntu Error tracker Ubuntu hosts a public error tracker at errors.ubuntu.com which collects hundreds of thousands of error reports daily from millions of machines. If a program crashes on Ubuntu, a crash handler (such as Apport) will notify the user and offer to report the crash. If the user chooses to report the crash, the details (possibly including a core dump) will be uploaded to an Ubuntu server (daisy.ubuntu.com) for analysis. A core dump is automatically processed to create a stack trace and crash signature. The crash signature is used to classify subsequent crash reports caused by the same error. Linux: GNOME Bug Buddy is the crash reporting tool used by the GNOME platform. When an application using the GNOME libraries crashes, Bug Buddy generates a stack trace using gdb and invites the user to submit the report to the GNOME bugzilla. The user can add comments and view the details of the crash report. KDE The crash reporting tool used by KDE is called Dr. Konqi. When an application using the KDE libraries crashes, Dr. Konqi generates a backtrace using gdb and invites the user to submit the report to the KDE bugzilla. The user can add comments and view the details of the crash report. Mozilla: Talkback Talkback (also known as the Quality Feedback Agent) was the crash reporter used by Mozilla software up to version 1.8.1 to report crashes of its products to a centralized server for aggregation or case-by-case analysis. Talkback is proprietary software licensed to the Mozilla Corporation by SupportSoft. If a Mozilla product (e.g. Mozilla Firefox, Mozilla Thunderbird) were to crash with Talkback enabled, the Talkback agent would appear, prompting the user to provide optional information regarding the crash. Talkback does not replace the native OS crash reporter which, if enabled, will appear along with the Talkback agent. Mozilla: Talkback has been replaced by Breakpad in Firefox since version 3. 
Breakpad Breakpad (previously called Airbag) is an open-source replacement for Talkback. Developed by Google and Mozilla, it is used in current Mozilla products such as Firefox and Thunderbird. It is notable as the first open-source multi-platform crash reporting system. Since 2007, Breakpad has been included in Firefox on Windows, Mac OS X, and Linux. Breakpad is typically paired with Socorro, which receives and classifies crashes from users. Breakpad itself is only part of a crash reporting system, as it includes no reporting mechanism. Crashpad: Crashpad is an open-source crash reporter used by Google in Chromium. It was developed as a replacement for Breakpad due to an update in macOS 10.10 that removed APIs used by Breakpad. Crashpad currently consists of a crash-reporting client and some related tools for macOS and Windows, and is considered substantially complete for those platforms. Crashpad became the crash reporter client for Chromium on macOS as of March 2015, and on Windows as of November 2015. World of Warcraft: World of Warcraft is another program that uses its own crash reporter, "Error Reporter". The error reporter may not detect crashes all the time; sometimes the OS crash reporter is invoked instead. Error Reporter has even been known to crash while reporting errors. Mobile OSs: Android and iOS operating systems also have built-in crash reporting functionality.
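The mechanism common to the reporters above is a process-level hook that captures an uncaught fault, serializes a stack trace plus metadata, and uploads it. The following is a minimal sketch of that idea in Java, assuming a hypothetical upload endpoint and report format (neither is taken from any of the tools described here):

```java
import java.io.OutputStream;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Minimal crash-reporter sketch: hook uncaught exceptions, capture a stack
// trace plus version metadata, and POST it. The endpoint URL and the plain
// key=value report format are hypothetical.
public class CrashReporter implements Thread.UncaughtExceptionHandler {
    private static final String ENDPOINT = "https://example.com/crash-reports"; // hypothetical
    private final String appVersion;

    CrashReporter(String appVersion) { this.appVersion = appVersion; }

    public static void install(String appVersion) {
        Thread.setDefaultUncaughtExceptionHandler(new CrashReporter(appVersion));
    }

    @Override
    public void uncaughtException(Thread t, Throwable e) {
        StringWriter sw = new StringWriter();
        e.printStackTrace(new PrintWriter(sw));       // capture the stack trace
        String report = "version=" + appVersion
                + "\nthread=" + t.getName()
                + "\ntype=" + e.getClass().getName()
                + "\ntrace=\n" + sw;
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(report.getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode();                   // actually send the request
        } catch (Exception ignored) {
            // A real reporter would queue the report for later delivery.
        }
    }
}
```

Usage would be a single call such as CrashReporter.install("1.4.2") early in main(); a production reporter would also queue reports when offline and scrub sensitive data, for the privacy reasons noted above.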
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TAN-1057 A** TAN-1057 A: TAN-1057 A and TAN-1057 B are organic compounds found in the Flexibacter sp. PK-74 bacterium. TAN-1057 A and B are closely related structurally as diastereomers. Also related are TAN-1057 C and TAN-1057 D, isolated from the same bacteria. The four compounds have been shown to be effective antibiotics against methicillin-resistant strains of Staphylococcus aureus which act through the inhibition of protein biosynthesis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rotimatic** Rotimatic: Rotimatic is an automated kitchen appliance that makes flatbread. It was invented by the Indian-origin couple Pranoti Nagarkar and Rishi Israni in 2008. It was first shipped in 2016, and is currently available in twenty markets. As of October 2018, it had generated revenue of US$40 million. History: Pranoti Nagarkar and Rishi Israni established their company, Zimplistic Pte Ltd., in Singapore with Rotimatic as their flagship product. The pre-order campaign started in 2014 and the product was delivered in 2016 and 2017 in Singapore and the United States respectively. As of April 2018, it is available in a total of 20 markets including the United Kingdom, Canada, Australia, New Zealand, and the United Arab Emirates. As of October 2020, Zimplistic has been acquired by Light Ray Holdings, a special-purpose vehicle incorporated in the British Virgin Islands. As of April 2021, more than 70,000 Rotimatics have been sold across 20 countries (45,000 units in the U.S.). Inventor/Founder: Rotimatic was invented by Indian-born Pranoti Nagarkar and Rishi Israni. Nagarkar is a mechanical engineer and Israni studied computer science. They are the co-founders of Zimplistic Pte Ltd., which was incorporated in Singapore in 2008. Rotimatic is the flagship product of their company. They are both alumni of the National University of Singapore and hold more than 35 patents. Investment: By April 2018, Zimplistic had raised around US$45 million through four rounds of venture funding. According to Zimplistic, Rotimatic generated revenue of US$40 million in the fiscal year 2017-2018 by selling nearly 40,000 machines, with pre-order sales generating US$5 million. Concept and design: To make roti (or other types of flatbread such as tortillas and puris), the user adds portions of flour, water, oil, and any additional ingredients into designated compartments, topping up the pre-stored containers if needed. After selecting the thickness, softness, and amount of oil (1 or 2 drops), the user presses a button, and the machine then makes dough, flattens it, and cooks the roti in 90 seconds. Rotimatic can bake around 20 rotis starting from full compartments. Rotimatic uses machine learning, so each machine takes some time to learn to make good flatbread; the machines are also connected to the internet for software upgrades. It takes about a minute to make one roti after the machine has fully heated up, which takes more than five minutes. Weighing around 18 kilograms and measuring 16 by 16 inches, it has 15 sensors, 10 motors, and 300 parts. The worldwide retail price of Rotimatic as of April 2018 is US$999; a high-end bread machine cost around US$170 at that time. Rotimatic is manufactured in Malaysia. Reception: Mashable called Rotimatic the first robotic roti maker. It further added that Zimplistic claims that one Rotimatic roti costs roughly five cents, while a store-bought roti costs around 40 to 50 cents. Engadget described it as "the world's most expensive flatbread maker". Awards and recognition: Best Kitchen Gadget by CES in 2016 Best Consumer IoT Solution at 2020 IoT World Awards Open category winner at Start-Up@Singapore 2009
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Food drying** Food drying: Food drying is a method of food preservation in which food is dried (dehydrated or desiccated). Drying inhibits the growth of bacteria, yeasts, and mold through the removal of water. Dehydration has been used widely for this purpose since ancient times; the earliest known practice dates to 12,000 BC, by inhabitants of the modern Middle East and Asia regions. Water is traditionally removed through evaporation by using methods such as air drying, sun drying, smoking or wind drying, although today electric food dehydrators or freeze-drying can be used to speed the drying process and ensure more consistent results. Food types: Many different foods can be prepared by dehydration. Meat has held a historically significant role. For centuries, much of the European diet depended on dried cod, known as salt cod, bacalhau (with salt), or stockfish (without). It formed the main protein source for the slaves on the West Indian plantations, and was a major economic force within the triangular trade. Dried fish, most commonly cod or haddock and known as harðfiskur, is a delicacy in Iceland, while dried reindeer meat is a traditional Sami food. Dried meats include prosciutto (Parma ham), bresaola, biltong and beef jerky. Food types: Dried fruits have been consumed historically due to their high sugar content and sweet taste, and a longer shelf-life from drying. Fruits may be used differently when dried: the plum becomes a prune, the grape a raisin. Figs and dates may be transformed into different products that can either be eaten as they are, used in recipes, or rehydrated. Freeze-dried vegetables are often found in food for backpackers, hunters, and the military. Garlic and onion are often dried and stored with their stalks braided. Edible mushrooms and fungi are sometimes dried for preservation or to be used as seasonings. Preparation: Home drying of vegetables, fruit and meat can be carried out with electrical dehydrators (household appliances), by sun-drying, or by wind. Preservatives such as potassium metabisulfite, BHA, or BHT may be used, but are not required. However, dried products without these preservatives may require refrigeration or freezing to ensure safe long-term storage. Preparation: Industrial food dehydration is often accomplished by freeze-drying. In this case food is flash frozen and put into a reduced-pressure system, which causes the water to sublimate directly from the solid to the gaseous phase. Although freeze-drying is more expensive than traditional dehydration techniques, it also mitigates the change in flavor, texture, and nutritional value. Another widely used industrial method is convective hot air drying. Industrial hot air dryers are simple and easy to design, construct and maintain; they are also affordable, and have been reported to retain most of the nutritional properties of food if appropriate drying conditions are used. Hurdle technology is the combination of multiple food preservation methods. Hurdle technology uses low doses of multiple food preservation techniques in order to ensure food is not only safe but is desirable visually and texturally. Packaging: Packaging ensures effective food preservation. Packaging methods beneficial to dehydrated food include vacuum sealing and the use of inert gases or gases that help regulate respiration, biological organisms, and the growth of microorganisms.
Other methods: There are many different methods for drying, each with its own advantages for particular applications. These include:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Extrachromosomal DNA** Extrachromosomal DNA: Extrachromosomal DNA (abbreviated ecDNA) is any DNA that is found off the chromosomes, either inside or outside the nucleus of a cell. Most DNA in an individual genome is found in chromosomes contained in the nucleus. Multiple forms of extrachromosomal DNA exist, and, while some of these serve important biological functions, they can also play a role in diseases such as cancer. In prokaryotes, nonviral extrachromosomal DNA is primarily found in plasmids, whereas in eukaryotes it is primarily found in organelles. Mitochondrial DNA is a main source of this extrachromosomal DNA in eukaryotes. The fact that this organelle contains its own DNA supports the hypothesis that mitochondria originated as bacterial cells engulfed by ancestral eukaryotic cells. Extrachromosomal DNA is often used in research into replication because it is easy to identify and isolate. Although extrachromosomal circular DNA (eccDNA) is found in normal eukaryotic cells, extrachromosomal DNA (ecDNA) is a distinct entity that has been identified in the nuclei of cancer cells and has been shown to carry many copies of driver oncogenes. ecDNA is considered to be a primary mechanism of gene amplification, resulting in many copies of driver oncogenes and very aggressive cancers. Extrachromosomal DNA in the cytoplasm has been found to be structurally different from nuclear DNA. Cytoplasmic DNA is less methylated than DNA found within the nucleus. It has also been confirmed that the sequences of cytoplasmic DNA differ from nuclear DNA in the same organism, showing that cytoplasmic DNAs are not simply fragments of nuclear DNA. In cancer cells, ecDNA has been shown to be localized primarily to the nucleus. Extrachromosomal DNA: In addition to DNA found outside the nucleus in cells, infection by viral genomes also provides an example of extrachromosomal DNA. Prokaryotic: Although prokaryotic organisms do not possess a membrane-bound nucleus like eukaryotes, they do contain a nucleoid region in which the main chromosome is found. Extrachromosomal DNA exists in prokaryotes outside the nucleoid region as circular or linear plasmids. Bacterial plasmids are typically short sequences, ranging from about 1 kilobase (kb) to a few hundred kb, and contain an origin of replication which allows the plasmid to replicate independently of the bacterial chromosome. The total number of a particular plasmid within a cell is referred to as the copy number and can range from as few as two copies per cell to as many as several hundred copies per cell. Circular bacterial plasmids are classified according to the special functions that the genes encoded on the plasmid provide. Fertility plasmids, or F plasmids, allow conjugation to occur, whereas resistance plasmids, or R plasmids, contain genes that convey resistance to a variety of different antibiotics such as ampicillin and tetracycline. Virulence plasmids contain the genetic elements necessary for bacteria to become pathogenic. Degradative plasmids contain genes that allow bacteria to degrade a variety of substances such as aromatic compounds and xenobiotics. Bacterial plasmids can also function in pigment production, nitrogen fixation and resistance to heavy metals. Naturally occurring circular plasmids can be modified to contain multiple resistance genes and several unique restriction sites, making them valuable tools as cloning vectors in biotechnology.
Circular bacterial plasmids are also the basis for the production of DNA vaccines. Plasmid DNA vaccines are genetically engineered to contain a gene which encodes for an antigen or a protein produced by a pathogenic virus, bacterium or other parasite. Once delivered into the host, the products of the plasmid genes will then stimulate both the innate immune response and the adaptive immune response of the host. The plasmids are often coated with some type of adjuvant prior to delivery to enhance the immune response from the host. Linear bacterial plasmids have been identified in several species of spirochete bacteria, including members of the genus Borrelia (to which the pathogen responsible for Lyme disease belongs), several species of the gram-positive soil bacteria of the genus Streptomyces, and in the gram-negative species Thiobacillus versutus, a bacterium that oxidizes sulfur. Linear plasmids of prokaryotes carry either a hairpin loop or a covalently bonded protein attached to the telomeric ends of the DNA molecule. The adenine-thymine-rich hairpin loops of the Borrelia bacteria range in size from 5 kilobase pairs (kb) to over 200 kb and contain the genes responsible for producing a group of major surface proteins, or antigens, that allow the bacteria to evade the immune response of their infected host. The linear plasmids which contain a protein covalently attached to the 5' end of the DNA strands are known as invertrons, can range in size from 9 kb to over 600 kb, and consist of inverted terminal repeats. The linear plasmids with a covalently attached protein may assist with bacterial conjugation and integration of the plasmids into the genome. These types of linear plasmids represent the largest class of extrachromosomal DNA, as they are not only present in certain bacterial cells, but all linear extrachromosomal DNA molecules found in eukaryotic cells also take on this invertron structure with a protein attached to the 5' end. The long, linear "borgs" that co-occur with a species of archaeon, which may host them and shares many of their genes, may be a previously unknown form of extrachromosomal DNA structure. Eukaryotic: Mitochondrial: Mitochondria present in eukaryotic cells contain multiple copies of mitochondrial DNA (mtDNA) in the mitochondrial matrix. In multicellular animals, including humans, the circular mtDNA chromosome contains 13 genes that encode proteins that are part of the electron transport chain and 24 genes that produce RNAs for mitochondrial protein synthesis, broken down into 2 rRNA genes and 22 tRNA genes. The animal mtDNA molecule is roughly 16.6 kb in size and, although it contains genes for tRNA and mRNA synthesis, proteins coded for by nuclear genes are still required for the mtDNA to replicate or for mitochondrial proteins to be translated. There is only one region of the mitochondrial chromosome that does not contain a coding sequence: the 1 kb region known as the D-loop, to which nuclear regulatory proteins bind. The number of mtDNA molecules per mitochondrion varies from species to species, as well as between cells with different energy demands. For example, muscle and liver cells contain more copies of mtDNA per mitochondrion than blood and skin cells do.
Due to the proximity of the electron transport chain within the mitochondrial inner membrane and the production of reactive oxygen species (ROS) there, and because the mtDNA molecule is not bound or protected by histones, mtDNA is more susceptible to DNA damage than nuclear DNA. In cases where mtDNA damage does occur, the DNA can either be repaired via base excision repair pathways, or the damaged mtDNA molecule is destroyed (without causing damage to the mitochondrion, since there are multiple copies of mtDNA per mitochondrion). The standard genetic code by which nuclear genes are translated is universal, meaning that each 3-base sequence of DNA codes for the same amino acid regardless of the species from which the DNA comes. This code is not quite universal, however: it is slightly different in the mitochondrial DNA of fungi, animals, protists and plants. While most of the 3-base sequences (codons) in the mtDNA of these organisms do code for the same amino acids as those of the nuclear genetic code, a few are different. Eukaryotic: The coding differences are thought to be a result of chemical modifications in the transfer RNAs that interact with the messenger RNAs produced as a result of transcribing the mtDNA sequences. Eukaryotic: Chloroplast: Eukaryotic chloroplasts, as well as the other plant plastids, also contain extrachromosomal DNA molecules. Most chloroplasts house all of their genetic material in a single ringed chromosome; however, in some species there is evidence of multiple smaller ringed plasmids. A recent theory that questions the current standard model of ring-shaped chloroplast DNA (cpDNA) suggests that cpDNA may more commonly take a linear shape. A single molecule of cpDNA can contain anywhere from 100 to 200 genes and varies in size from species to species. The size of cpDNA in higher plants is around 120–160 kb. The genes found on the cpDNA code for mRNAs that are responsible for producing necessary components of the photosynthetic pathway, as well as coding for tRNAs, rRNAs, RNA polymerase subunits, and ribosomal protein subunits. Like mtDNA, cpDNA is not fully autonomous and relies upon nuclear gene products for replication and production of chloroplast proteins. Chloroplasts contain multiple copies of cpDNA, and the number can vary not only from species to species or cell type to cell type, but also within a single cell depending upon the age and stage of development of the cell. For example, cpDNA content in the chloroplasts of young cells, during the early stages of development when the chloroplasts are in the form of indistinct proplastids, is much higher than that present when the cell matures and expands, containing fully mature plastids. Eukaryotic: Circular: Extrachromosomal circular DNA (eccDNA) is present in all eukaryotic cells, is usually derived from genomic DNA, and consists of repetitive sequences of DNA found in both coding and non-coding regions of chromosomes. EccDNA can vary in size from less than 2,000 base pairs to more than 20,000 base pairs. In plants, eccDNA contains repeated sequences similar to those found in the centromeric regions of the chromosomes and in repetitive satellite DNA. In animals, eccDNA molecules have been shown to contain repetitive sequences that are seen in satellite DNA, 5S ribosomal DNA and telomere DNA. Certain organisms, such as yeast, rely on chromosomal DNA replication to produce eccDNA, whereas eccDNA formation can occur in other organisms, such as mammals, independently of the replication process.
The functions of eccDNA have not been widely studied, but it has been proposed that the production of eccDNA elements from genomic DNA sequences adds to the plasticity of the eukaryotic genome and can influence genome stability, cell aging and the evolution of chromosomes. A distinct type of extrachromosomal DNA, denoted ecDNA, is commonly observed in human cancer cells. ecDNA found in cancer cells contains one or more genes that confer a selective advantage. ecDNA are much larger than eccDNA and are visible by light microscopy. ecDNA in cancers generally range in size from 1–3 Mb and beyond. Large ecDNA molecules have been found in the nuclei of human cancer cells and have been shown to carry many copies of driver oncogenes, which are transcribed in tumor cells. Based on this evidence it is thought that ecDNA contributes to cancer growth. Eukaryotic: Specialized tools exist that allow ecDNA to be identified, such as software developed by Paul Mischel and Vineet Bafna that identifies ecDNA in microscopic images, and "Circle-Seq, a method for physically isolating ecDNA from cells, removing any remaining linear DNA with enzymes, and sequencing the circular DNA that remains", developed by Birgitte Regenberg and her team at the University of Copenhagen. Viral: Viral DNA is an example of extrachromosomal DNA. Understanding viral genomes is very important for understanding the evolution and mutation of viruses. Some viruses, such as HIV and oncogenic viruses, incorporate their own DNA into the genome of the host cell. Viral genomes can be made up of single-stranded DNA (ssDNA) or double-stranded DNA (dsDNA), and can be found in both linear and circular form. One example of a viral infection constituting extrachromosomal DNA is the human papillomavirus (HPV). The HPV DNA genome undergoes three distinct stages of replication: establishment, maintenance and amplification. HPV infects epithelial cells in the anogenital tract and oral cavity. Normally, HPV is detected and cleared by the immune system. The recognition of viral DNA is an important part of immune responses. For this virus to persist, the circular genome must be replicated and inherited during cell division. Viral: Recognition by host cell: Cells can recognize foreign cytoplasmic DNA. Understanding the recognition pathways has implications for the prevention and treatment of diseases. Cells have sensors that can specifically recognize viral DNA, such as those of the Toll-like receptor (TLR) pathway. The Toll pathway was recognized, first in insects, as a pathway that allows certain cell types to act as sensors capable of detecting a variety of bacterial or viral genomes and PAMPs (pathogen-associated molecular patterns). PAMPs are known to be potent activators of innate immune signaling. There are approximately 10 human Toll-like receptors (TLRs). Different TLRs in humans detect different PAMPs: lipopolysaccharides by TLR4, viral dsRNA by TLR3, viral ssRNA by TLR7/TLR8, and viral or bacterial unmethylated DNA by TLR9. TLR9 has evolved to detect CpG DNA, commonly found in bacteria and viruses, and to initiate the production of type I interferons (IFN) and other cytokines. Inheritance: Inheritance of extrachromosomal DNA differs from the inheritance of nuclear DNA found in chromosomes. Unlike chromosomes, ecDNA does not contain centromeres and therefore exhibits a non-Mendelian inheritance pattern that gives rise to heterogeneous cell populations. In humans, virtually all of the cytoplasm is inherited from the egg of the mother.
For this reason, organelle DNA, including mtDNA, is inherited from the mother. Mutations in mtDNA or other cytoplasmic DNA will also be inherited from the mother. This uniparental inheritance is an example of non-Mendelian inheritance. Plants also show uniparental mtDNA inheritance. Most plants inherit mtDNA maternally, with one noted exception being the redwood Sequoia sempervirens, which inherits mtDNA paternally. There are two theories as to why paternal mtDNA is rarely transmitted to the offspring. One is simply that paternal mtDNA is present at a much lower concentration than maternal mtDNA and is thus not detectable in the offspring. A second, more complex theory involves the digestion of the paternal mtDNA to prevent its inheritance. It is theorized that the uniparental inheritance of mtDNA, which has a high mutation rate, might be a mechanism to maintain the homoplasmy of cytoplasmic DNA. Clinical significance: Extrachromosomal elements, sometimes called EEs, have been associated with genomic instability in eukaryotes. Small polydispersed DNAs (spcDNAs), a type of eccDNA, are commonly found in conjunction with genome instability. SpcDNAs are derived from repetitive sequences such as satellite DNA, retrovirus-like DNA elements, and transposable elements in the genome. They are thought to be the products of gene rearrangements. Clinical significance: Extrachromosomal DNA (ecDNA) found in cancer has historically been referred to as double minute chromosomes (DMs), which present as paired chromatin bodies under light microscopy. Double minute chromosomes represent roughly 30% of the spectrum of ecDNA found in cancer, which also includes single bodies, and the paired and single bodies have been found to contain identical gene content. The ecDNA notation encompasses all forms of the large, oncogene-containing, extrachromosomal DNA found in cancer cells. This type of ecDNA is commonly seen in cancer cells of various histologies, but virtually never in normal cells. ecDNA are thought to be produced through double-strand breaks in chromosomes or over-replication of DNA in an organism. Studies show that in cases of cancer and other genomic instability, higher levels of EEs can be observed. Mitochondrial DNA can play a role in the onset of disease in a variety of ways. Point mutations in, or alternative gene arrangements of, mtDNA have been linked to several diseases that affect the heart, central nervous system, endocrine system, gastrointestinal tract, eye, and kidney. Loss of the amount of mtDNA present in the mitochondria can lead to a group of diseases known as mitochondrial DNA depletion syndromes (MDDS), which affect the liver, central and peripheral nervous systems, smooth muscle and hearing in humans. There have been mixed, and sometimes conflicting, results in studies that attempt to link mtDNA copy number to the risk of developing certain cancers. Studies have been conducted that show an association between both increased and decreased mtDNA levels and the increased risk of developing breast cancer. A positive association between increased mtDNA levels and an increased risk of developing kidney tumors has been observed, but there does not appear to be a link between mtDNA levels and the development of stomach cancer. Extrachromosomal DNA is also found in the Apicomplexa, a group of protozoa. The malaria parasite (genus Plasmodium) and the AIDS-related pathogens Toxoplasma and Cryptosporidium are members of the Apicomplexa group. Mitochondrial DNA (mtDNA) has been found in the malaria parasite.
There are two forms of extrachromosomal DNA found in the malaria parasites: a 6-kb linear DNA and a 35-kb circular DNA. These DNA molecules have been researched as potential nucleotide target sites for antibiotics. Role of ecDNA in cancer: Gene amplification is among the most common mechanisms of oncogene activation. Gene amplifications in cancer are often carried on extrachromosomal, circular elements. One of the primary functions of ecDNA in cancer is to enable the tumor to rapidly reach high copy numbers, while also promoting rapid, massive cell-to-cell genetic heterogeneity. The most commonly amplified oncogenes in cancer are found on ecDNA and have been shown to be highly dynamic, re-integrating into non-native chromosomes as homogeneously staining regions (HSRs) and altering copy numbers and composition in response to various drug treatments. ecDNA is implicated in a large number of the most advanced and serious cancers, as well as in resistance to anti-cancer drugs. The circular shape of ecDNA differs from the linear structure of chromosomal DNA in meaningful ways that influence cancer pathogenesis. Oncogenes encoded on ecDNA have massive transcriptional output, ranking in the top 1% of genes in the entire transcriptome. In contrast to bacterial plasmids or mitochondrial DNA, ecDNA are chromatinized, containing high levels of active histone marks but a paucity of repressive histone marks. The ecDNA chromatin architecture lacks the higher-order compaction that is present on chromosomal DNA and is among the most accessible DNA in the entire cancer genome. Role of ecDNA in cancer: ecDNAs can cluster together within the nucleus into what are referred to as ecDNA hubs. Spatially, ecDNA hubs can drive intermolecular enhancer–gene interactions that promote oncogene overexpression.
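One way to build intuition for the copy-number heterogeneity described above is a random-segregation toy model: because ecDNA lacks centromeres, each replicated copy can be assumed to land in either daughter cell at random. The sketch below is purely illustrative; the binomial-segregation assumption, the function name, and all parameter values are ours for the example and are not taken from any published algorithm.

```python
# Toy model: random segregation of acentric ecDNA at cell division.
# Assumption (not from the article): every copy replicates once per cycle,
# and each of the 2N resulting copies goes to either daughter with p = 0.5.
import random

def simulate_ecdna_segregation(initial_copies=10, generations=15,
                               max_cells=1024, seed=0):
    rng = random.Random(seed)
    cells = [initial_copies]               # copy number of each cell
    for _ in range(generations):
        next_gen = []
        for copies in cells[:max_cells]:   # cap population for tractability
            replicated = 2 * copies
            daughter_a = sum(rng.random() < 0.5 for _ in range(replicated))
            next_gen.extend([daughter_a, replicated - daughter_a])
        cells = next_gen
    return cells

population = simulate_ecdna_segregation()
# The spread widens each generation even though all cells share one ancestor,
# mimicking the rapid cell-to-cell genetic heterogeneity described above.
print(min(population), max(population))
```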
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Disinfection by-product** Disinfection by-product: Disinfection by-products (DBPs) are organic and inorganic compounds resulting from chemical reactions between substances in water, such as organic and inorganic contaminants, and chemical disinfection agents during water disinfection processes. Chlorination disinfection byproducts: Chlorinated disinfection agents such as chlorine and monochloramine are strong oxidizing agents introduced into water in order to destroy pathogenic microbes, to oxidize taste/odor-forming compounds, and to form a disinfectant residual so water can reach the consumer tap safe from microbial contamination. These disinfectants may react with naturally present fulvic and humic acids, amino acids, and other natural organic matter, as well as iodide and bromide ions, to produce a range of DBPs such as the trihalomethanes (THMs), haloacetic acids (HAAs), bromate, and chlorite (which are regulated in the US), and so-called "emerging" DBPs such as halonitromethanes, haloacetonitriles, haloamides, halofuranones, iodo-acids such as iodoacetic acid, iodo-THMs (iodotrihalomethanes), nitrosamines, and others. Chloramine has become a popular disinfectant in the US, and it has been found to produce N-nitrosodimethylamine (NDMA), which is a possible human carcinogen, as well as highly genotoxic iodinated DBPs, such as iodoacetic acid, when iodide is present in source waters. Residual chlorine and other disinfectants may also react further within the distribution network, both by further reactions with dissolved natural organic matter and with biofilms present in the pipes. In addition to being highly influenced by the types of organic and inorganic matter in the source water, the different species and concentrations of DBPs vary according to the type of disinfectant used, the dose of disinfectant, the concentration of natural organic matter and bromide/iodide, the time since dosing (i.e. water age), temperature, and pH of the water. Swimming pools using chlorine have been found to contain trihalomethanes, although generally below the current EU standard for drinking water (100 micrograms per litre). Concentrations of trihalomethanes (mainly chloroform) of up to 0.43 ppm have been measured. In addition, trichloramine has been detected in the air above swimming pools, and it is suspected in the increased asthma observed in elite swimmers. Trichloramine is formed by the reaction of urea (from urine and sweat) with chlorine and gives the indoor swimming pool its distinctive odor. Byproducts from non-chlorinated disinfectants: Several powerful oxidizing agents are used in disinfecting and treating drinking water, and many of these also cause the formation of DBPs. Ozone, for example, produces ketones, carboxylic acids, and aldehydes, including formaldehyde. Bromide in source waters can be converted by ozone into bromate, a potent carcinogen that is regulated in the United States, as well as into other brominated DBPs. As regulations are tightened on established DBPs such as THMs and HAAs, drinking water treatment plants may switch to alternative disinfection methods. This change will alter the distribution of classes of DBPs. Occurrence: DBPs are present in most drinking water supplies that have been subject to chlorination, chloramination, ozonation, or treatment with chlorine dioxide. Many hundreds of DBPs exist in treated drinking water and at least 600 have been identified.
The low levels of many of these DBPs, coupled with the analytical costs of testing water samples for them, mean that in practice only a handful of DBPs are actually monitored. It is increasingly recognized that the genotoxicities and cytotoxicities of many of the DBPs not subject to regulatory monitoring (particularly iodinated and nitrogenous DBPs) are comparatively much higher than those of the DBPs commonly monitored in the developed world (THMs and HAAs). In 2021, a new group of DBPs known as halogenated pyridinols was discovered, containing at least 8 previously unknown heterocyclic nitrogenous DBPs. They were found to require low-pH treatments of 3.0 to be removed effectively. When their developmental and acute toxicity was tested on zebrafish embryos, it was found to be slightly lower than that of halogenated benzoquinones, but dozens of times higher than that of commonly known DBPs such as tribromomethane and iodoacetic acid. Health effects: Epidemiological studies have looked at the associations between exposure to DBPs in drinking water and cancers, adverse birth outcomes and birth defects. Meta-analyses and pooled analyses of these studies have demonstrated consistent associations for bladder cancer and for babies being born small for gestational age, but not for congenital anomalies (birth defects). Early-term miscarriages have also been reported in some studies. The exact putative agent remains unknown in these epidemiological studies, however, since the number of DBPs in a water sample is high and exposure surrogates, such as monitoring data for a specific by-product (often total trihalomethanes), are used in lieu of more detailed exposure assessment. The World Health Organization has stated that "the risk of death from pathogens is at least 100 to 1000 times greater than the risk of cancer from disinfection by-products (DBPs)" and that the "risk of illness from pathogens is at least 10 000 to 1 million times greater than the risk of cancer from DBPs". Regulation and monitoring: The United States Environmental Protection Agency has set Maximum Contaminant Levels (MCLs) for bromate, chlorite, haloacetic acids and total trihalomethanes (TTHMs). In Europe, the limit for TTHMs has been set at 100 micrograms per litre, and the limit for bromate at 10 micrograms per litre, under the Drinking Water Directive. No guideline values have been set for HAAs in Europe. The World Health Organization has established guidelines for several DBPs, including bromate, bromodichloromethane, chlorate, chlorite, chloroacetic acid, chloroform, cyanogen chloride, dibromoacetonitrile, dibromochloromethane, dichloroacetic acid, dichloroacetonitrile, NDMA, and trichloroacetic acid.
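As a minimal sketch of how such numeric limits are applied in monitoring practice, the following compares a measured sample against the two European limits quoted above (100 µg/L for TTHMs, 10 µg/L for bromate under the Drinking Water Directive). The parameter names and sample values are hypothetical, chosen only for illustration.

```python
# Screening check against the EU Drinking Water Directive limits quoted above.
# Values are in micrograms per litre (ug/L); the sample data is hypothetical.
EU_LIMITS_UG_PER_L = {
    "total_trihalomethanes": 100.0,
    "bromate": 10.0,
}

def check_sample(measurements_ug_per_l):
    """Return a list of (parameter, measured, limit) tuples for exceedances."""
    exceedances = []
    for parameter, limit in EU_LIMITS_UG_PER_L.items():
        measured = measurements_ug_per_l.get(parameter)
        if measured is not None and measured > limit:
            exceedances.append((parameter, measured, limit))
    return exceedances

# Hypothetical sample: TTHMs slightly above the limit, bromate well below it.
sample = {"total_trihalomethanes": 112.0, "bromate": 4.2}
print(check_sample(sample))  # [('total_trihalomethanes', 112.0, 100.0)]
```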
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**4-polytope** 4-polytope: In geometry, a 4-polytope (sometimes also called a polychoron, polycell, or polyhedroid) is a four-dimensional polytope. It is a connected and closed figure, composed of lower-dimensional polytopal elements: vertices, edges, faces (polygons), and cells (polyhedra). Each face is shared by exactly two cells. The 4-polytopes were discovered by the Swiss mathematician Ludwig Schläfli before 1853. The two-dimensional analogue of a 4-polytope is a polygon, and the three-dimensional analogue is a polyhedron. 4-polytope: Topologically, 4-polytopes are closely related to the uniform honeycombs, such as the cubic honeycomb, which tessellate 3-space; similarly, the 3D cube is related to the infinite 2D square tiling. Convex 4-polytopes can be cut and unfolded as nets in 3-space. Definition: A 4-polytope is a closed four-dimensional figure. It comprises vertices (corner points), edges, faces and cells. A cell is the three-dimensional analogue of a face, and is therefore a polyhedron. Each face must join exactly two cells, analogous to the way in which each edge of a polyhedron joins just two faces. Like any polytope, the elements of a 4-polytope cannot be subdivided into two or more sets which are also 4-polytopes, i.e. it is not a compound. Geometry: The convex regular 4-polytopes are the four-dimensional analogues of the Platonic solids. The most familiar 4-polytope is the tesseract or hypercube, the 4D analogue of the cube. Geometry: The convex regular 4-polytopes can be ordered by size as a measure of 4-dimensional content (hypervolume) for the same radius. Each greater polytope in the sequence is rounder than its predecessor, enclosing more content within the same radius. The 4-simplex (5-cell) is the limiting smallest case, and the 120-cell is the largest. Complexity (as measured by comparing configuration matrices or simply the number of vertices) follows the same ordering. Visualisation: 4-polytopes cannot be seen in three-dimensional space due to their extra dimension. Several techniques are used to help visualise them. Visualisation: Orthogonal projection: Orthogonal projections can be used to show various symmetry orientations of a 4-polytope. They can be drawn in 2D as vertex-edge graphs, and can be shown in 3D with solid faces as visible projective envelopes. Visualisation: Perspective projection: Just as a 3D shape can be projected onto a flat sheet, so a 4D shape can be projected onto 3-space or even onto a flat sheet. One common projection is a Schlegel diagram, which uses stereographic projection of points on the surface of a 3-sphere into three dimensions, connected by straight edges, faces, and cells drawn in 3-space. Visualisation: Sectioning: Just as a slice through a polyhedron reveals a cut surface, so a slice through a 4-polytope reveals a cut "hypersurface" in three dimensions. A sequence of such sections can be used to build up an understanding of the overall shape. The extra dimension can be equated with time to produce a smooth animation of these cross sections. Visualisation: Nets: A net of a 4-polytope is composed of polyhedral cells that are connected by their faces and all occupy the same three-dimensional space, just as the polygon faces of a net of a polyhedron are connected by their edges and all occupy the same plane.
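The projections described above can be illustrated with a short sketch. The code below (assuming NumPy; the viewing distance d is an arbitrary choice for the example) computes an orthogonal projection and a simple perspective projection of the tesseract's 16 vertices from 4D into 3-space. Note that the perspective divide used here is a simpler construction than the stereographic projection of a true Schlegel diagram, though it produces a similar nested-cube picture.

```python
# Projecting the 16 vertices of a tesseract from 4D to 3D.
from itertools import product

import numpy as np

# The tesseract's vertices are all combinations of (+/-1) in 4 coordinates.
vertices_4d = np.array(list(product([-1.0, 1.0], repeat=4)))  # shape (16, 4)

# Orthogonal projection: simply drop the 4th coordinate (w).
orthogonal_3d = vertices_4d[:, :3]

# Perspective projection: scale x, y, z by 1 / (d - w), so cells nearer the
# 4D viewpoint appear larger (the inner cube of the familiar tesseract image).
d = 3.0  # viewing distance along the w-axis; arbitrary for this example
w = vertices_4d[:, 3]
perspective_3d = vertices_4d[:, :3] / (d - w)[:, np.newaxis]

print(orthogonal_3d.shape, perspective_3d.shape)  # (16, 3) (16, 3)
```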
Topological characteristics: The topology of any given 4-polytope is defined by its Betti numbers and torsion coefficients. The value of the Euler characteristic used to characterise polyhedra does not generalize usefully to higher dimensions, and is zero for all 4-polytopes, whatever their underlying topology. This inadequacy of the Euler characteristic to reliably distinguish between different topologies in higher dimensions led to the discovery of the more sophisticated Betti numbers. Similarly, the notion of orientability of a polyhedron is insufficient to characterise the surface twistings of toroidal 4-polytopes, and this led to the use of torsion coefficients. Classification: Criteria: Like all polytopes, 4-polytopes may be classified based on properties like "convexity" and "symmetry". Classification: A 4-polytope is convex if its boundary (including its cells, faces and edges) does not intersect itself and the line segment joining any two points of the 4-polytope is contained in the 4-polytope or its interior; otherwise, it is non-convex. Self-intersecting 4-polytopes are also known as star 4-polytopes, from analogy with the star-like shapes of the non-convex star polygons and Kepler–Poinsot polyhedra. Classification: A 4-polytope is regular if it is transitive on its flags. This means that its cells are all congruent regular polyhedra, and similarly its vertex figures are congruent and of another kind of regular polyhedron. Classification: A convex 4-polytope is semi-regular if it has a symmetry group under which all vertices are equivalent (vertex-transitive) and its cells are regular polyhedra. The cells may be of two or more kinds, provided that they have the same kind of face. There are only 3 cases, identified by Thorold Gosset in 1900: the rectified 5-cell, the rectified 600-cell, and the snub 24-cell. Classification: A 4-polytope is uniform if it has a symmetry group under which all vertices are equivalent, and its cells are uniform polyhedra. The faces of a uniform 4-polytope must be regular. A 4-polytope is scaliform if it is vertex-transitive and has all edges of equal length. This allows cells which are not uniform, such as the regular-faced convex Johnson solids. A regular 4-polytope which is also convex is said to be a convex regular 4-polytope. A 4-polytope is prismatic if it is the Cartesian product of two or more lower-dimensional polytopes. A prismatic 4-polytope is uniform if its factors are uniform. The hypercube is prismatic (the product of two squares, or of a cube and a line segment), but is considered separately because it has symmetries other than those inherited from its factors. Classification: A tiling or honeycomb of 3-space is the division of three-dimensional Euclidean space into a repetitive grid of polyhedral cells. Such tilings or tessellations are infinite and do not bound a "4D" volume, and are examples of infinite 4-polytopes. A uniform tiling of 3-space is one whose vertices are congruent and related by a space group and whose cells are uniform polyhedra.
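As a worked check of the statement under Topological characteristics above that the Euler characteristic vanishes for every 4-polytope, the alternating sum of element counts can be computed for three convex regular 4-polytopes using their standard vertex, edge, face, and cell counts:

```latex
% Euler characteristic of a 4-polytope: \chi = V - E + F - C.
% Checked against the standard element counts of three regular cases:
\chi_{\text{5-cell}}    &= 5   - 10  + 10   - 5   = 0 \\
\chi_{\text{tesseract}} &= 16  - 32  + 24   - 8   = 0 \\
\chi_{\text{600-cell}}  &= 120 - 720 + 1200 - 600 = 0
```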
Classification: Classes: The following lists the various categories of 4-polytopes classified according to the criteria above:
Uniform 4-polytopes (vertex-transitive):
- Convex uniform 4-polytopes (64, plus two infinite families):
  - 47 non-prismatic convex uniform 4-polytopes, including the 6 convex regular 4-polytopes
  - Prismatic uniform 4-polytopes:
    - {} × {p,q}: 18 polyhedral hyperprisms (including the cubic hyperprism, the regular hypercube)
    - prisms built on antiprisms (infinite family)
    - {p} × {q}: duoprisms (infinite family)
- Non-convex uniform 4-polytopes (10 + unknown):
  - 10 (regular) Schläfli–Hess polytopes
  - 57 hyperprisms built on nonconvex uniform polyhedra
  - an unknown total number of nonconvex uniform 4-polytopes: Norman Johnson and other collaborators have identified 2189 known cases (convex and star, excluding the infinite families), all constructed by vertex figures using the Stella4D software
Other convex 4-polytopes:
- polyhedral pyramids
- polyhedral bipyramids
- polyhedral prisms
Infinite uniform 4-polytopes of Euclidean 3-space (uniform tessellations of convex uniform cells):
- 28 convex uniform honeycombs (uniform convex polyhedral tessellations), including the 1 regular tessellation, the cubic honeycomb {4,3,4}
Infinite uniform 4-polytopes of hyperbolic 3-space (uniform tessellations of convex uniform cells):
- 76 Wythoffian convex uniform honeycombs in hyperbolic space, including the 4 regular tessellations of compact hyperbolic 3-space: {3,5,3}, {4,3,5}, {5,3,4}, {5,3,5}
Dual uniform 4-polytopes (cell-transitive):
- 41 unique dual convex uniform 4-polytopes
- 17 unique dual convex uniform polyhedral prisms
- an infinite family of dual convex uniform duoprisms (irregular tetrahedral cells)
- 27 unique convex dual uniform honeycombs, including the rhombic dodecahedral honeycomb and the disphenoid tetrahedral honeycomb
Others:
- the Weaire–Phelan structure, a periodic space-filling honeycomb with irregular cells
Abstract regular 4-polytopes:
- the 11-cell
- the 57-cell
These categories include only the 4-polytopes that exhibit a high degree of symmetry. Many other 4-polytopes are possible, but they have not been studied as extensively as the ones included in these categories.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Haploinsufficiency of A20** Haploinsufficiency of A20: Haploinsufficiency of A20 is a rare disease caused by mutations in the gene TNFAIP3, which is also known as A20. Signs and symptoms: These are variable, even within families. The main features are recurrent oral, genital and/or gastrointestinal ulcers, musculoskeletal and gastrointestinal complaints, cutaneous lesions, episodic fever and recurrent infections. The age of onset is also variable, ranging from the first week of life to 29 years. The male:female ratio is 1:3. Genetics: The TNFAIP3 gene is located on the long arm of chromosome 6 (6q23.3). Inheritance appears to be autosomal dominant with variable penetrance. It appears that two copies of this gene are required to avoid the development of inflammatory features; hence the name haploinsufficiency. Pathogenesis: The gene encodes a protein, tumor necrosis factor alpha-induced protein 3, which inhibits the pro-inflammatory actions of NF-κB. The encoded protein has both ubiquitin ligase and deubiquitinase activities and forms part of the ubiquitin-editing protein complex. It is involved in several biochemical pathways, the details of which are still under investigation. Diagnosis: Diagnosis is made by sequencing the TNFAIP3 gene. The usual laboratory tests are consistent with non-specific inflammation. Antinuclear antibodies and anti-dsDNA antibodies may be positive. Biopsies show non-specific inflammatory changes. Differential diagnosis: The main differential diagnoses are Behçet's disease and systemic lupus erythematosus. Treatment: Response to colchicine has been variable. Cytokine inhibitors, including the anti-IL-6 receptor biologic tocilizumab, appear to be the most effective. Given the rarity of this condition, optimal management has not yet been definitively identified. Stem cell transplants have been given to save lives. History: This condition was first described in 2016.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ClockworkMod** ClockworkMod: ClockworkMod is a software company, owned by Koushik "Koush" Dutta, which develops various software products for Android smartphones and tablets. The company is primarily known for its custom recovery image, ClockworkMod Recovery, which is used in many custom ROMs. ClockworkMod Recovery: ClockworkMod Recovery is an Android custom recovery image. Once installed, this recovery image replaces the Android device's stock recovery image, and various system-level operations can be performed with it. For example, one can create and restore partition backups, root the device, and install and upgrade custom ROMs. ClockworkMod Recovery is free and open-source software, released under the terms of the Apache License 2.0. CyanogenMod Recovery is a fork of ClockworkMod Recovery. ClockworkMod Recovery: Compared to other recoveries: Unlike TWRP, but like the stock recovery, CWM Recovery uses the volume buttons to navigate menus. Like the stock recovery, CWM can receive over-the-air updates for ROMs designed for it. Signature verification is not enforced on CWM Recovery, allowing the installation of custom ROMs. CWM Recovery adds Nandroid backup support; this feature may not be present in CWM Recovery forks or successors. Other software: The company also provides the following apps: ROM Manager: An app for installing custom operating systems, known as ROMs. It was briefly pulled for violating Google Play's in-app-purchase policies. Tether: An app used for tethering regardless of carrier restrictions. Helium: An app used to back up user and system data on a phone without the need for root. DeskSMS: An app for sending and receiving text messages from an email, browser, or instant messenger client. AllCast: An app that enables streaming of local and cloud videos to Chromecast, AppleTV, FireTV, and DLNA devices. Vysor: An app that allows mirroring and control of an Android device through a desktop computer. It was temporarily removed due to licensing issues.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded