**Cannabis in Romania** Cannabis in Romania: Cannabis in Romania is illegal for both recreational and medical use. Although it was technically legalized for medical use in 2013, it has not been removed from Table I of High Risk Drugs, and as such its use remains prohibited. Some of the earliest evidence of the psychoactive use of cannabis has been found in Romania, including at the archaeological sites of Frumușica and Gurbănești. Prohibition: In 1928, Romania established laws for countering narcotics, including hashish and its preparations. Medical cannabis: A limited medical cannabis law was passed in 2013, allowing the use of low-THC (below 0.2%) derivatives of the plant only. Advocacy for reform: Save Romania Union Youth is the first youth organisation of a Romanian political party to openly support the decriminalisation of cannabis.
**Open Access Network** Open Access Network: The Open Access Network (OAN) encourages partnerships between scholarly societies, research libraries, and other institutional partners in order to sustain the infrastructure of scholarly communication and support open access publishing in the humanities and social sciences. It was launched in 2015 by K|N Consultants, the not-for-profit 501(c)(3) organization which authored the well-received white paper on which the OAN is based.
**CLDN16** CLDN16: Claudin-16 is a protein that in humans is encoded by the CLDN16 gene. It belongs to the group of claudins. CLDN16: Tight junctions represent one mode of cell-to-cell adhesion in epithelial or endothelial cell sheets, forming continuous seals around cells and serving as a physical barrier to prevent solutes and water from passing freely through the paracellular space. These junctions are composed of sets of continuous networking strands in the outwardly facing cytoplasmic leaflet, with complementary grooves in the inwardly facing extracytoplasmic leaflet. The protein encoded by this gene, a member of the claudin family, is an integral membrane protein and a component of tight junction strands. It is found primarily in the kidneys, specifically in the thick ascending limb of the loop of Henle, where it acts as either an intercellular pore or an ion concentration sensor to regulate the paracellular reabsorption of magnesium ions. Defects in this gene are a cause of primary hypomagnesemia, which is characterized by massive renal magnesium wasting with hypomagnesemia and hypercalciuria, resulting in nephrocalcinosis and kidney failure. Model organisms: Model organisms have been used in the study of CLDN16 function. A conditional knockout mouse line, called Cldn16tm1a(KOMP)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-five tests were carried out on homozygous mutant animals and one significant abnormality was observed: the mice displayed urolithiasis.
**Java 3D** Java 3D: Java 3D is a scene graph-based 3D application programming interface (API) for the Java platform. Until version 1.6.0 it ran on top of either OpenGL or Direct3D; version 1.6.0 runs on top of Java OpenGL (JOGL). Since version 1.2, Java 3D has been developed under the Java Community Process. A Java 3D scene graph is a directed acyclic graph (DAG). Java 3D: Compared to other solutions, Java 3D is not only a wrapper around these graphics APIs, but an interface that encapsulates the graphics programming using a true object-oriented approach. Here a scene is constructed using a scene graph that is a representation of the objects that have to be shown. This scene graph is structured as a tree containing several elements that are necessary to display the objects. Additionally, Java 3D offers extensive spatialized sound support. Java 3D: Java 3D and its documentation are available for download separately. They are not part of the Java Development Kit (JDK). History: Intel, Silicon Graphics, Apple, and Sun all had retained mode scene graph APIs under development in 1996. Since they all wanted to make a Java version, they decided to collaborate in making it. That project became Java 3D. Development was already underway in 1997. A public beta version was released in March 1998, and the first version was released in December 1998. From mid-2003 through summer 2004, development of Java 3D was suspended. In the summer of 2004, Java 3D was released as a community source project, and Sun and volunteers have since been continuing its development. History: On January 29, 2008, it was announced that improvements to Java 3D would be put on hold to produce a 3D scene graph for JavaFX. JavaFX with 3D support was eventually released with Java 8, and the JavaFX 3D graphics functionality has more or less come to supersede Java 3D. Since February 28, 2008, the entire Java 3D source code has been released under the GPL version 2 license with the GPL linking exception. Since February 10, 2012, Java 3D has used JOGL 2.0 for its hardware-accelerated OpenGL rendering. The port was initiated by Julien Gouesse. Features: multithreaded scene graph structure; cross-platform; generic real-time API, usable for both visualization and gaming; support for retained, compiled-retained, and immediate mode rendering; hardware-accelerated JOGL, OpenGL, and Direct3D renderers (depending on platform); a sophisticated virtual-reality-based view model with support for stereoscopic rendering and complex multi-display configurations; native support for head-mounted displays and CAVEs (multiple screen projectors); 3D spatial sound; programmable shaders, supporting both GLSL and CG; stencil buffer; and importers for most mainstream formats, like 3DS, OBJ, VRML, X3D, NWN, and FLT. Competing technologies: Java 3D is not the only high-level API option to render 3D in Java. In part due to the pause in development during 2003 and 2004, several competing Java scene graph technologies emerged. General purpose: Ardor3D and JavaFX. Gaming: jMonkeyEngine and Espresso3D. Visualization: Jreality. In addition to those, many other C or C++ scene graph APIs offer Java support through JNI. At a lower level, the JOGL (JSR 231) OpenGL bindings for Java are a popular alternative to scene graph APIs such as Java 3D. LWJGL is another such binding.
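To make the scene graph model concrete, the following is a minimal sketch in the style of the classic "Hello, Java 3D" tutorial example. It assumes the classic Java 3D 1.x packages (javax.media.j3d and com.sun.j3d.utils) are installed and on the classpath; the class name HelloScene is illustrative.

```java
import javax.media.j3d.BranchGroup;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;

// A minimal Java 3D scene: one BranchGroup (a node in the scene-graph DAG)
// holding a single piece of geometry, attached to a SimpleUniverse that
// supplies the standard view branch and rendering window.
public class HelloScene {
    public static void main(String[] args) {
        BranchGroup scene = new BranchGroup();
        scene.addChild(new ColorCube(0.3));  // built-in demo geometry
        scene.compile();                     // lets Java 3D optimize the subgraph

        SimpleUniverse universe = new SimpleUniverse();
        universe.getViewingPlatform().setNominalViewingTransform(); // move the view back
        universe.addBranchGraph(scene);      // attaching the subgraph makes it "live"
    }
}
```

Compiling and attaching the branch reflects the retained and compiled-retained modes mentioned under Features: once a subgraph is live or compiled, it can only be modified where capability bits were set in advance.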
**Tubomanometry** Tubomanometry: Tubomanometry is a technique for assessing eustachian tube opening function, and sometimes for guiding a treatment plan. The technique was popularised by D. Estève et al. in 2001. Technique: The individual is asked to perform a swallowing maneuver. During this time, the nasal fossae and the nasopharynx are occluded by the velum. A tympanometry earplug is inserted into the external ear of the studied side to avoid fluctuations in ear pressure due to atmospheric conditions. The tubomanometer then creates a dysbaric situation in which the pressure conditions encountered during a rapid drop in altitude are reproduced. This is done by releasing an air bolus into the nasopharynx through airtight nozzles previously placed on both nostrils. At this point, the tubomanometer is able to make several recordings of the pressure variations in the rhinopharynx and on the eardrum as the eustachian tube opens. Interpretation: Immediate opening of the eustachian tube was observed in healthy subjects at 30–50 mbar of pressure. In patients with chronic eustachian tube dysfunction, this opening could be registered in only 42% of patients at 30 mbar and 58% at 50 mbar. The results are usually interpreted as R values. An R value less than or equal to 1 indicates regular eustachian tube function, and an R value greater than 1 indicates a delayed opening of the eustachian tube, thereby supporting the diagnosis of chronic eustachian tube dysfunction.
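The interpretation rule above reduces to a simple threshold on R. The following sketch encodes it in Java for illustration only; the class and method names are hypothetical, and this is in no way a clinical tool.

```java
// Illustrative only: classifies tubomanometry R values using the
// thresholds quoted above (R <= 1 regular, R > 1 delayed opening).
public class TubomanometryR {
    static String interpret(double r) {
        if (r <= 0) return "invalid measurement";  // R is a positive ratio
        return (r <= 1.0)
            ? "regular eustachian tube function"
            : "delayed opening, supporting chronic dysfunction";
    }

    public static void main(String[] args) {
        double[] samples = {0.8, 1.0, 1.6};
        for (double r : samples) {
            System.out.printf("R = %.1f -> %s%n", r, interpret(r));
        }
    }
}
```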
**Tetrapeptide** Tetrapeptide: A tetrapeptide is a peptide, classified as an oligopeptide, since it consists of only four amino acids joined by peptide bonds. Many tetrapeptides are pharmacologically active, often showing affinity and specificity for a variety of receptors in protein-protein signaling. Both linear and cyclic tetrapeptides (CTPs) are present in nature; the latter mimic protein reverse turns, which are often present on the surface of proteins and druggable targets. Tetrapeptides may be cyclized by a fourth peptide bond or other covalent bonds. Tetrapeptide: Examples of tetrapeptides are: Tuftsin (L-threonyl-L-lysyl-L-prolyl-L-arginine) is a peptide related primarily to immune system function. Rigin (glycyl-L-glutaminyl-L-prolyl-L-arginine) is a tetrapeptide with functions similar to those of tuftsin. Postin (Lys-Pro-Pro-Arg) is the N-terminal tetrapeptide of cystatin C and an antagonist of tuftsin. Endomorphin-1 (H-Tyr-Pro-Trp-Phe-NH2) and endomorphin-2 (H-Tyr-Pro-Phe-Phe-NH2) are peptide amides with the highest known affinity and specificity for the μ opioid receptor. Morphiceptin (H-Tyr-Pro-Phe-Pro-NH2) is a casomorphin peptide isolated from β-casein. Gluten exorphines A4 (H-Gly-Tyr-Tyr-Pro-OH) and B4 (H-Tyr-Gly-Gly-Trp-OH) are peptides isolated from gluten. Tyrosine-MIF-1 (H-Tyr-Pro-Leu-Gly-NH2) is an endogenous opioid modulator. Tetragastrin (N-((phenylmethoxy)carbonyl)-L-tryptophyl-L-methionyl-L-aspartyl-L-phenylalaninamide) is the C-terminal tetrapeptide of gastrin. It is the smallest peptide fragment of gastrin with the same physiological and pharmacological activity as gastrin. Kentsin (H-Thr-Pro-Arg-Lys-OH) is a contraceptive peptide first isolated from female hamsters. Achatin-I (glycyl-phenylalanyl-alanyl-aspartic acid) is a neuroexcitatory tetrapeptide from the giant African snail (Achatina fulica). Tentoxin (cyclo(N-methyl-L-alanyl-L-leucyl-N-methyl-trans-dehydrophenyl-alanyl-glycyl)) is a natural cyclic tetrapeptide produced by phytopathogenic fungi of the genus Alternaria. Rapastinel (H-Thr-Pro-Pro-Thr-NH2) is a partial agonist of the NMDA receptor. HC-toxin, cyclo(D-Pro-L-Ala-D-Ala-L-Aeo), where Aeo is 2-amino-8-oxo-9,10-epoxy decanoic acid, is a virulence factor for the fungus Cochliobolus carbonum on its host, maize. Elamipretide (D-Arg-dimethylTyr-Lys-Phe-NH2) is a drug candidate that targets mitochondria.
**Academic Technology Approval Scheme** Academic Technology Approval Scheme: The Academic Technology Approval Scheme (ATAS) is a scheme of the British government for certifying foreign students from outside the EU for entry into the United Kingdom to study or conduct research in certain sensitive technology-related fields. For these students, obtaining an ATAS certificate is a prerequisite for obtaining a visa. The ATAS was introduced on 1 November 2007 to prevent dissemination outside the UK of knowledge and skills that can be used to build and deliver weapons of mass destruction (WMD), by ensuring that applicants do not have links to Advanced Conventional Military Technology (ACMT), WMD programmes and their means of delivery. Affected students undergo a screening system to validate their reasons for coming to the UK. Academic Technology Approval Scheme: According to the Foreign and Commonwealth Office, the checks attempt to filter out those students whose intentions are adverse to national security. Areas of study at which the ATAS is directed are chemistry, engineering, physics, biophysics, metallurgy, and microbiology. Academic Technology Approval Scheme: Under the earlier "Voluntary Vetting Scheme", some universities (such as Bristol University) voluntarily reported suspicious students from certain countries (including Iran and Egypt) to the government. With the introduction of ATAS, Cambridge University, which had refused to take part in the voluntary system, was required to cooperate with the authorities too. ATAS was expanded on 1 October 2020 to include Advanced Conventional Military Technology (ACMT). Academic Technology Approval Scheme: In March 2021, the FCDO informed universities that all researchers would require ATAS clearance from 21 May.
**1-alkyl-2-acetylglycerol O-acyltransferase** 1-alkyl-2-acetylglycerol O-acyltransferase: In enzymology, a 1-alkyl-2-acetylglycerol O-acyltransferase (EC 2.3.1.125) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-O-alkyl-2-acetyl-sn-glycerol ⇌ CoA + 1-O-alkyl-2-acetyl-3-acyl-sn-glycerol. Thus, the two substrates of this enzyme are acyl-CoA and 1-O-alkyl-2-acetyl-sn-glycerol, whereas its two products are CoA and 1-O-alkyl-2-acetyl-3-acyl-sn-glycerol. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-O-alkyl-2-acetyl-sn-glycerol O-acyltransferase. This enzyme is also called 1-hexadecyl-2-acetylglycerol acyltransferase. It participates in ether lipid metabolism.
**Buzz cut** Buzz cut: A buzz cut, or wiffle cut, is any of a variety of short hairstyles, especially those in which the hair is the same length on all parts of the head. Rising to prominence initially with the advent of manual hair clippers, buzz cuts became increasingly popular in places where strict grooming conventions applied. In several nations, buzz cuts are often given to new recruits in the armed forces. However, buzz cuts are also worn for stylistic reasons. Overview: The buzz cut rose to popularity with the advent of manual hair clippers, invented by the Serbian inventor Nikola Bizumić in the late 19th century. These clippers were widely used by barbers to cut hair close and fast; the clipper gathers hair in locks, rapidly removing it from the head. This type of haircut was the norm where strict grooming conventions were in effect. Buzz cut styles today include the brush cut, crew cut, and flattop. The top of a buzz cut style may be clipped to a uniform short length, producing a butch cut, or into one of several geometric shapes that include the crew cut, flattop, and other short styles. In the variant also known as a fade haircut, the back and sides are tapered short, semi-short, or medium, corresponding with different clipper guard sizes. Buzz cuts can make the face look more defined and are popular with men and boys who want a short, low-maintenance hairstyle, as well as those with thinning or receding hairlines. However, thanks to popularization by public figures such as Sinéad O'Connor, Natalie Portman, Amber Rose, and Willow Smith, the buzz cut has also become a popular haircut amongst women. It has also become a symbol of protest, going against society's standards of feminine beauty. In countries such as Australia, China, Russia, the United Kingdom, and the United States, military recruits are given buzz cuts when they enter training; this was originally done to prevent the spread of head lice, but is now done for ease of maintenance, cooling, and uniformity.
**Synaptojanin** Synaptojanin: Synaptojanin is a protein involved in vesicle uncoating in neurons. It is an important regulatory lipid phosphatase: it dephosphorylates the D-5 position phosphate from phosphatidylinositol (3,4,5)-trisphosphate (PIP3) and phosphatidylinositol (4,5)-bisphosphate (PIP2). It belongs to the family of 5-phosphatases, which are structurally unrelated to D-3 inositol phosphatases like PTEN. Other members of the family of 5'-phosphoinositide phosphatases include OCRL, SHIP1, SHIP2, INPP5J, INPP5E, INPP5B, INPP5A, and SKIP. Synaptojanin Family: The synaptojanin family comprises proteins that are key players in synaptic vesicle recovery at the synapse. In general, vesicles containing neurotransmitters fuse with the presynaptic cell in order to release neurotransmitter into the synaptic cleft. It is the release of neurotransmitters that allows neuron-to-neuron communication in the nervous system. The recovery of the vesicle is referred to as endocytosis and is important to reset the presynaptic cell with new neurotransmitter. Synaptojanin Family: Synaptojanin 1 and synaptojanin 2 are the two main proteins in the synaptojanin family. Synaptojanin 2 can be further subdivided into synaptojanin 2a and synaptojanin 2b. The mechanism by which vesicles are recovered is thought to involve synaptojanin attracting the protein clathrin, which coats the vesicle and initiates vesicle endocytosis. Synaptojanin Family: Synaptojanins are composed of three domains. The first is a central inositol 5-phosphatase domain, which can act on both PIP2 and PIP3. The second is an N-terminal Sac1-like inositol phosphatase domain, which, in vitro, can hydrolyze PIP and PIP2 to PI. The third is a C-terminal domain that is rich in the amino acid proline and interacts with several proteins also involved in vesicle endocytosis. Specifically, the C-terminal domain interacts with amphiphysin, endophilin, DAP160/intersectin, syndapin, and Eps15. Endophilin appears to function as a binding partner for synaptojanin, allowing it to interact with other proteins, and is involved in the initiation of shallow clathrin-coated pits. Dap160 is a molecular scaffolding protein and functions in actin recruitment. Dynamin is a GTPase involved in vesicle budding, specifically modulating the severance of the vesicle from the neuronal membrane. Dynamin appears to play a larger role in neurite formation because of its vesicle-pinching role and its possible recycling of plasma membrane and growth factor receptor proteins. Mutations in synaptojanin 1 have been associated with autosomal recessive, early-onset parkinsonism. Role in Development: Synaptojanin, through its interactions with a variety of proteins and molecules, is thought to play a role in the development of nervous systems. Role in Development: Ephrin: Synaptojanin 1 has been found to be influenced by the protein ephrin. Ephrin is a chemorepellent, meaning that its interactions with proteins result in inactivation or retraction of processes during neuronal migration. Ephrin's receptor, called Eph, is a receptor tyrosine kinase. Upon activation of the Eph receptor, synaptojanin 1 becomes phosphorylated at the proline-rich domain and is inhibited from binding with any of its natural binding partners. Therefore, the presence of ephrin inactivates vesicle endocytosis.
Role in Development: Calcium: The influx of calcium into the neuron has been shown to activate a variety of molecules, including some calcium-dependent phosphatases that activate synaptojanin. Role in Development: Membranes: Neuronal migration during development involves the extension of a neurite along the extracellular matrix. This extension is guided by the growth cone. However, the actual extension of the neurite involves the insertion of membrane lipids immediately behind the growth cone. In fact, membranes can be trafficked from degenerating extensions to elongating ones. Synaptojanin has been proposed as the mechanism by which membrane lipids can be trafficked around the developing neuron. Role in Development: Receptors: During development, receptors are trafficked around the growth cone. This trafficking involves vesicle endocytosis. In the presence of nerve growth factor (NGF), TrkA receptors are trafficked to the stimulated side of the growth cone. Additionally, calcium and glutamate stimulate the trafficking of AMPA receptors to the stimulated side of the growth cone. Both of these receptors are trafficked via synaptojanin. Model organisms: Model organisms have been used in the study of synaptojanin function. A conditional knockout mouse line of synaptojanin 2, called Synj2tm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists, at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-two tests were carried out on mutant mice, but no significant abnormalities were observed.
**Fashion merchandising** Fashion merchandising: Fashion merchandising can be defined as the planning and promotion of sales by presenting a product to the right market at the proper time, through organized, skillful advertising, attractive displays, and so on. Merchandising, within fashion retail, refers specifically to the stock planning, management, and control process. Fashion merchandising is practiced worldwide. The position requires well-developed quantitative skills and a natural ability to discover trends, that is, relationships and interrelationships among standard sales and stock figures. In the fashion industry, there are two different merchandising teams: the visual merchandising team and the fashion merchandising team. Fashion merchandising: The visual merchandising team are the people in charge of designing the layout, floor plan, and displays of the store in order to increase sales. Fashion merchandising: The fashion merchandising team are the people involved in the production of fashion designs and the distribution of final products to the end consumer. Fashion merchandisers work with designers to ensure that designs will be affordable and desired by the target market. Fashion merchandising involves apparel, accessories, beauty, and housewares. The end goal of fashion merchandising in any of these departments is to earn a profit. Fashion merchandisers' decisions can considerably impact the success of the manufacturer, designer, or retailer for which they work. Background: During ancient times, individuals shopped in markets for goods. The ancients were attracted to rare fashions that brought variation and excitement into their lives. These markets have transformed into today's department, specialty, and discount retailers. For many years, businesspeople in the fashion industry were convinced that they could persuade consumers to desire their particular products, and fashion executives had no interest in the needs and wants of consumers. Eventually, however, fashion personnel realized that they would have to adapt fashion items to the demands of consumers. Rights of merchandising: In modern merchandising, distribution responsibilities are absent, and focus is placed on planning and analysis; a separate team is tasked with distribution. Large organizations separate merchandisers by type: there are retail merchandisers and product merchandisers. Retail merchandisers manage store allocation and must maximize sales. Product merchandisers manage the flow of materials to suppliers and then the flow of product to stores. Product merchandisers then pass control of product to the retail merchandisers. Rights of merchandising: Modern Structure: Many large organizations have concluded that distribution requires highly detailed work and that it is necessary to have a team specifically for that purpose. This is due to the fine details of allocation, which require focus on aspects such as colour and sizes for a specific store. This approach not only minimizes costs but also extends to areas like better control of the overall process. Organizations that do not conduct distribution this way risk losing control of their stock at both the highest and lowest level, as a result of the lack of uniformity and oversight. Rights of merchandising: The distribution team specializes not only in managing distribution; they are also focused on sales and profit. They employ detailed, accurate information about distribution points sourced from product planners.
They possess the ability to manage dynamic stock demands. They partner with buyers and merchandisers for any necessary repeat buying. Though they are positioned to manage stock, they still operate within the limits of the buying plan, and merchandisers ensure they remain within this realm. Buyers provide guidelines for distribution, such as the type of stores where product should be distributed; for example, a product may have only been acquired for the top 3 stores. The team also supports the goals of an organization by being instrumental in responding to trends. Rights of merchandising: The nature of modern analysis has allowed many merchandisers to plan as much as four seasons ahead, and they are expected to apply the data. This further increases the demands placed on their roles and emphasizes the need to task out minor details that do not require their input or much of their supervision. Fashion merchandisers follow the five rights of merchandising, or 5Rs, to ensure that they properly meet the needs of consumers and thus turn a profit. The five rights of merchandising are: the right merchandise, at the right price, at the right time, in the right place, and in the right quantities. By researching and answering the five rights of merchandising, fashion merchandisers can gain an understanding of what products consumers want, when and where they wish to make purchases, and what prices will have the highest demand. Both fashion retailers and manufacturers utilize the 5Rs. Manufacturers: Clothing manufacturers practice fashion merchandising differently than retailers. Manufacturer merchandisers forecast customers' preferences for silhouettes, sizes, colors, quantities, and costs each season. When making decisions, manufacturer merchandisers must keep retailers and end consumers in mind. Following the forecasting stage, manufacturer merchandisers meet with designers to develop products that consumers will purchase most. By referring to the five rights of merchandising, manufacturer merchandisers determine the best fabric, notions, production methods, and promotions for products. These decisions all contribute to final retail costs, which must be affordable to end consumers. Retailers: Like manufacturer merchandisers, retailer merchandisers begin their process by forecasting industry and fashion trends with their target markets in mind. Sales are predicted in retail dollars and beginning-of-the-month (BOM) stock. Similar to manufacturer merchandisers, retailer merchandisers must make all decisions with the final consumer in mind. Decisions are made based on the past, present, and future of the economy, sales, industry and fashion trends, region and world events, and the fashion cycle. When selecting merchandise to offer, retailer merchandisers will consider their target markets' color, style, size, and cost preferences. Once decisions are made, retailer merchandisers will order goods from vendors or produce private labels. Following shipment, ordered seasonal apparel assortments are strategically arranged on sales floors, or visually merchandised. Education: Individuals interested in building a career in fashion merchandising should earn an associate's or bachelor's degree in fashion merchandising or a related field, such as marketing. Relevant courses include, but are not limited to, fashion, accounting, economics, textiles and merchandising, psychology, marketing, and management.
In addition to schooling, those aspiring to work as fashion merchandisers are typically expected to complete an internship with a retail company of their choice, as well as gain experience working in the retail field. It is also suggested that one keep up with the latest fashion trends, which can be done by reading blogs and magazines, traveling, and shopping. A fashion merchandiser is responsible not only for choosing the best clothes but also for making the store appealing to the eye. The proper education is very important in order to be successful in this career. Careers: Fashion merchandising careers include the following: Buyer: Develops six-month buying plans and orders assortments for each season; travels to markets and trade shows to purchase the latest fashions for stores. Account executive: The liaison between manufacturers and buyers; handles several retail accounts, presents manufacturers' lines to buyers, and relays fashion and promotional information. Store manager: Hires, trains, and oversees employees, and monitors sales for a specific retail store. Merchandise coordinator: Responsible for visual merchandising; a liaison between the manufacturer and retailer. Showroom manager: Displays fashion lines, presents collections, and manages multiple retail accounts; also manages expenses and ensures profitability. Merchandise planner: Assists a fashion company with meeting objectives through technologically and mathematically calculated solutions; additionally, discovers trends, develops financial plans, and determines merchandise reorders.
**Island single malt** Island single malt: Island single malts are the single malt Scotch whiskies produced on the islands around the perimeter of the Scottish mainland. The islands (excluding Islay) are not recognised in the Scotch Whisky Regulations as a distinct whisky-producing region, but are considered to be part of the Highland region. Islay is itself recognised as a distinct whisky-producing region (see Islay whisky). Island single malt: Other sources, however, indicate that the Islands, excluding Islay, constitute a sixth distinct region. This unofficial region includes the whisky-producing islands of Arran, Jura, Mull, Orkney, and Skye, with their respective distilleries: Arran, Jura, Tobermory, Highland Park, Scapa, and Talisker. The whiskies produced on the Islands are extremely varied and have few similarities, though they can often be distinguished from other whisky regions by a generally smokier flavour with peaty undertones. One source states that the flavour depends on the use of peat, which "varies widely depending on the distiller". Island malt distilleries: Abhainn Dearg distillery, on Lewis; Arran distillery, on Arran; Highland Park distillery, in Orkney; Isle of Raasay distillery, on Raasay; Jura distillery, on Jura; Saxa Vord distillery, on Unst; Scapa distillery, in Orkney; Talisker distillery, on Skye; Tobermory distillery, on Mull, producing Tobermory and Ledaig; Torabhaig distillery, on Skye. In development: Isle of Barra distillery, on Barra; Isle of Harris distillery, on Harris, Outer Hebrides.
**Heavy equipment** Heavy equipment: Heavy equipment, also known as heavy machinery, earthmovers, construction vehicles, or construction equipment, refers to heavy-duty vehicles specially designed to execute construction tasks, most frequently involving earthwork operations or other large construction tasks. Heavy equipment usually comprises five equipment systems: the implement, traction, structure, power train, and control/information. Heavy equipment has been used since at least the 1st century BC, when the ancient Roman engineer Vitruvius described in De architectura a crane powered by human or animal labor. Heavy equipment: Heavy equipment functions through the mechanical advantage of a simple machine: the ratio between the input force applied and the force exerted is multiplied, making tasks which could take hundreds of people and weeks of labor without heavy equipment far less intensive in nature. Some equipment uses hydraulic drives as a primary source of motion. Heavy equipment: The word plant, in this context, has come to mean any type of industrial equipment, including mobile equipment (e.g. in the same sense as powerplant). However, plant originally meant "structure" or "establishment", usually in the sense of factory or warehouse premises; as such, it was used in contradistinction to movable machinery, e.g. often in the phrase "plant and equipment". History: The use of heavy equipment has a long history; the ancient Roman engineer Vitruvius (1st century BCE) gave descriptions of heavy equipment and cranes in ancient Rome in his treatise De architectura. The pile driver was invented around 1500. The first tunnelling shield was patented by Marc Isambard Brunel in 1818. History: From horses, through steam and diesel, to electric and robotic: Until the 19th century and into the early 20th century, heavy machines were drawn under human or animal power. With the advent of portable steam-powered engines, the drawn machine precursors were reconfigured with the new engines, as in the combine harvester. The design of a core tractor evolved around the new steam power source into a new machine core, the traction engine, which could be configured as the steam tractor or the steamroller. During the 20th century, internal-combustion engines became the major power source of heavy equipment. Kerosene and ethanol engines were used, but today diesel engines are dominant. Mechanical transmission was in many cases replaced by hydraulic machinery. The early 20th century also saw new electric-powered machines such as the forklift. Caterpillar Inc. is a present-day brand from these days, starting out as the Holt Manufacturing Company. The first mass-produced heavy machine was the Fordson tractor in 1917. History: The first commercial continuous track vehicle was the 1901 Lombard Steam Log Hauler. The use of tracks became popular for tanks during World War I, and later for civilian machinery like the bulldozer. The largest engineering vehicles and mobile land machines are bucket-wheel excavators, built since the 1920s. History: "Until almost the twentieth century, one simple tool constituted the primary earthmoving machine: the hand shovel – moved with animal and human power, sleds, barges, and wagons. This tool was the principal method by which material was either sidecast or elevated to load a conveyance, usually a wheelbarrow, or a cart or wagon drawn by a draft animal. In antiquity, an equivalent of the hand shovel or hoe and head basket—and masses of men—were used to move earth to build civil works.
Builders have long used the inclined plane, levers, and pulleys to place solid building materials, but these labor-saving devices did not lend themselves to earthmoving, which required digging, raising, moving, and placing loose materials. The two elements required for mechanized earthmoving, then as now, were an independent power source and off-road mobility, neither of which could be provided by the technology of that time." Container cranes were used from the 1950s onwards, and made containerization possible. History: Nowadays, such is the importance of this machinery that some transport companies have developed specific equipment to transport heavy construction equipment to and from sites. History: Most of the major equipment manufacturers, such as Caterpillar, Volvo, Liebherr, and Bobcat, have released or have been developing fully or partially electric-powered heavy equipment. Commercially available models and R&D models were announced in 2019 and 2020. Robotics and autonomy have been a growing concern for heavy equipment manufacturers, with manufacturers beginning research and technology acquisition. A number of companies are currently developing (Caterpillar and Bobcat) or have launched (Built Robotics) commercial solutions to the market. Types: These subdivisions, in this order, are the standard heavy equipment categorization. Military engineering vehicles. Traction: Off-the-road tires and tracks: Heavy equipment requires specialized tires for various construction applications. While many types of equipment have continuous tracks applicable to more severe service requirements, tires are used where greater speed or mobility is required. An understanding of what the equipment will be used for during the life of the tires is required for proper selection. Tire selection can have a significant impact on production and unit cost. There are three types of off-the-road tires: transport, for earthmoving machines; work, for slow-moving earthmoving machines; and load-and-carry, for transporting as well as digging. Off-highway tires have six categories of service: C (compactor), E (earthmover), G (grader), L (loader), LS (log-skidder), and ML (mining and logging). Within these service categories are various tread types designed for use on hard-packed surfaces, soft surfaces, and rock. Tires are a large expense on any construction project; careful consideration should be given to prevent excessive wear or damage. Heavy equipment operator: A heavy equipment operator drives and operates heavy equipment used in engineering and construction projects. Typically only skilled workers may operate heavy equipment, and there is specialized training for learning to use heavy equipment. Much publication about heavy equipment operators focuses on improving safety for such workers. The field of occupational medicine researches and makes recommendations about safety for these and other workers in safety-sensitive positions. Equipment cost: Due to the small profit margins on construction projects, it is important to maintain accurate records concerning equipment utilization, repairs, and maintenance. The two main categories of equipment costs are ownership cost and operating cost. Equipment cost: Ownership cost: To be classified as an ownership cost, an expense must be incurred regardless of whether the equipment is used. These costs are as follows: Depreciation can be calculated several ways, the simplest being the straight-line method, in which the annual depreciation is constant, reducing the equipment value by the same amount each year.
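As a worked illustration of the straight-line method just described (and not the Peurifoy & Schexnayder equations themselves, which are not reproduced here), the sketch below applies the standard rule that annual depreciation equals (initial cost - salvage value) / service life; the figures and class name are invented for the example.

```java
// Straight-line depreciation sketch: the book value falls by the same
// constant amount every year of the service life. All figures are invented.
public class StraightLineDepreciation {
    static double annualDepreciation(double initialCost, double salvageValue, int lifeYears) {
        return (initialCost - salvageValue) / lifeYears;
    }

    public static void main(String[] args) {
        double cost = 500_000.0;   // purchase price
        double salvage = 50_000.0; // resale value at end of service life
        int life = 10;             // service life in years

        double d = annualDepreciation(cost, salvage, life); // 45,000 per year
        for (int year = 1; year <= life; year++) {
            System.out.printf("Year %2d: book value = %,10.2f%n", year, cost - d * year);
        }
    }
}
```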
The following are simple equations paraphrased from the Peurifoy & Schexnayder text: Operating cost: For an expense to be classified as an operating cost, it must be incurred through use of the equipment. These costs are as follows: The biggest distinction from a cost standpoint is whether a repair is classified as a major repair or a minor repair. A major repair can change the depreciable equipment value due to an extension in service life, while a minor repair is normal maintenance. How a firm chooses to cost major and minor repairs varies from firm to firm, depending on the costing strategies being used. Some firms will charge only major repairs to the equipment, while minor repairs are costed to a project. Another common costing strategy is to charge all repairs to the equipment, excluding only frequently replaced wear items from the equipment cost. Many firms keep their costing structure closely guarded, as it can impact the bidding strategies of their competition. In a company with multiple semi-independent divisions, the equipment department often wants to classify all repairs as "minor" and charge the work to a job, thereby improving its "profit" from the equipment. Models: Die-cast metal promotional scale models of heavy equipment are often produced for each vehicle to give to prospective customers. These are typically in 1:50 scale. The popular manufacturers of these models are Conrad and NZG in Germany, even for US vehicles. Notable manufacturers: The largest 10 construction equipment manufacturers in 2020, based on revenue data for the top 50 manufacturers published by KHL Group. Other manufacturers include:
**Collapse** Collapse: Collapse or its variants may refer to: Concepts: Collapse (structural) Collapse (topology), a mathematical concept Collapsing manifold Collapse, the action of collapsing or telescoping objects Collapsing user interface elements Accordion (GUI) -- collapsing list items Code folding -- collapsing subsections of programs or text Outliner -- supporting folding and unfolding subsections Ecosystem collapse or Ecological collapse Economic collapse Gravitational collapse, creating astronomical objects Societal collapse Dissolution of the Soviet Union, the collapse of Soviet federalism State collapse Wave function collapse, in physics Medicine and biology: In medicine, collapse can refer to various forms of transient loss of consciousness, such as syncope, or loss of postural muscle tone without loss of consciousness. It can also refer to: Circulatory collapse Lung collapse Hydrophobic collapse in protein folding Art, entertainment and media: Literature: Collapse: How Societies Choose to Fail or Succeed, a book by Jared Diamond Collapse (journal), a journal of philosophical research and development published in the United Kingdom Film: Collapse (film), a 2009 documentary directed by Chris Smith and starring Michael Ruppert Collapse, a 2010 documentary film based on the book Collapse: How Societies Choose to Fail or Succeed Games: Collapse (2008 video game), an action game released in 2008 for Microsoft Windows Collapse!, a 1999 series of games created by GameHouse Collapse, a fictional event in the computer game Dreamfall The Collapse (Deus Ex), a fictional event within the plot of the computer game Deus Ex and its sequel Deus Ex: Invisible War Music: Albums: Collapse (Across Five Aprils album), 2006 Collapse (Deas Vail album), 2006 Collapse EP, 2018 record by Aphex Twin Songs: "Collapse" (Soul Coughing song), 1996 "Collapse" (Saosin song), 2006 "Collapse" (Imperative Reaction song), 2006 "Collapsed" (Aly & AJ song), 2005
**Groundcover** Groundcover: Groundcover or ground cover is any plant that grows over an area of ground. Groundcover provides protection of the topsoil from erosion and drought. In an ecosystem, the ground cover forms the layer of vegetation below the shrub layer, known as the herbaceous layer. The most widespread ground covers are grasses of various types. Groundcover: In ecology, groundcover is a difficult subject to address because it is known by several different names and is classified in several different ways. The term groundcover can also refer to "the herbaceous layer", "regenerative layer", "ground flora" or even "step over". In agriculture, ground cover refers to anything that lies on top of the soil and protects it from erosion and inhibits weeds. It can be anything from a low layer of grasses to a plastic material. The term ground cover can also specifically refer to landscaping fabric, a breathable tarp that allows water and gas exchange. Groundcover: In gardening jargon, however, the term groundcover refers to plants that are used in place of weeds and that improve appearance by concealing bare earth. Contributions to the environment: The herbaceous layer is often overlooked in most ecological analyses because it is so common and contributes the smallest amount of the environment's overall biomass. However, groundcover is crucial to the survival of many environments. The groundcover layer of a forest can contribute up to 90% of the ecosystem's plant diversity. Additionally, in many ecosystems the herbaceous layer's contribution to plant productivity is disproportionate to its biomass: it can constitute up to 4% of the overall net primary productivity (NPP) of an ecosystem, four times its average biomass. Contributions to the environment: Reproduction: Groundcover typically reproduces in one of five ways: lateral growth; side growth, in which branches on the side of the plant extend outwards upon contact with the soil; base growth, in which new plants are produced from the base of the origin plant; under/above-ground growth, produced from rhizomes and stolons; and roots. Like most foliage, groundcover reacts to both natural and anthropogenic disturbances. These responses can be classified as legacy or active responses. Legacy responses occur during long-term changes to an environment, such as the conversion of a forest to agricultural land and back into forest. Active responses occur with sudden disturbances to the environment, such as tornadoes and forest fires. Contributions to the environment: Groundcover has also been known to influence the placement and growth of tree seedlings. All tree seeds must first fall from their origin trees and then permeate the layer created by groundcover in order to reach the soil and germinate. The groundcover filters out a large proportion of seeds, but lets a smaller portion pass through and grow. This filtration provides ample space between the seeds for future growth. In some areas, the groundcover can become so dense that no seeds can permeate the surface, and the forest is instead converted to shrubbery. Groundcover also limits the amount of light which reaches the floor of an ecosystem. An experiment conducted with the Rhododendron maximum canopy in the southern Appalachian region concluded that 4–8% of total sunlight makes it to the herbaceous layer, whereas only about 1–2% reaches the ground.
Contributions to the environment: Variation: Two common variations of groundcover are residency and transient species. Residency species typically reach a maximum of 1.5 metres (4 ft 11 in) in height, and are therefore permanently classified as herbaceous. Transient species are capable of growing past this height, and are therefore only temporarily considered herbaceous. These height differences create ideal environments for a variety of animals, such as the reed warbler, the harvest mouse, and the wren. Groundcover can also be classified in terms of its foliage. Groundcover that keeps its foliage for the entire year is known as evergreen, whereas groundcover that loses its foliage in the winter months is known as deciduous. In gardening: Five general types of plants are commonly used as groundcovers in gardening: vines, which are woody plants with slender, spreading stems; herbaceous plants, or non-woody plants; shrubs of low-growing, spreading species; mosses of larger, coarser species; and ornamental grasses, especially low-growing varieties. Of these types, some of the most common groundcovers include: Alfalfa (Medicago sativa), Clover (Trifolium), Dichondra, Bacopa (Bacopa), Carpobrotus, Delairea, Ivy (Hedera), Gazania (Gazania rigens), Ground-elder (Aegopodium podagraria), Ice plant, Japanese honeysuckle (Lonicera japonica), Juniperus horizontalis, Creeping lantana, Lilyturf (Liriope muscari and Liriope spicata), Mint (Mentha), Mesembryanthemum cordifolium, Nasturtium (Tropaeolum majus), Pachysandra, Pearlwort (Sagina subulata), Periwinkle (Vinca), Shasta daisy (Leucanthemum), Soleirolia (Soleirolia soleirolii), and Spider plant (Chlorophytum comosum). In roof gardens: Groundcover is a popular solution for difficult gardening issues because it is low-maintenance, aesthetically pleasing, and fast-growing, minimizing the spread of weeds. For this reason, ground cover is also a common choice for roof gardens. Roofs take the brunt of incoming weather, meaning any plants on a roof must be resistant to long-term exposure to sun, overwatering from rain, and harsh winds. Groundcover plants are able to sustain themselves in such conditions while also providing lush vegetation to what would otherwise be unused space.
**FBXO7** FBXO7: F-box only protein 7 is a protein that in humans is encoded by the FBXO7 gene. Mutations in FBXO7 have been associated with Parkinson's disease. Function: This gene encodes a member of the F-box protein family, which is characterized by an approximately 40 amino acid motif, the F-box. The F-box proteins constitute one of the four subunits of the ubiquitin protein ligase complexes called SCFs (SKP1-cullin-F-box), which function in phosphorylation-dependent ubiquitination. The F-box proteins are divided into three classes: Fbws, containing WD-40 domains; Fbls, containing leucine-rich repeats; and Fbxs, containing either different protein-protein interaction modules or no recognizable motifs. The protein encoded by this gene belongs to the Fbxs class and may play a role in the regulation of hematopoiesis. Alternatively spliced transcript variants of this gene have been identified, with the full-length nature of only some variants determined. Interactions: FBXO7 has been shown to interact with SKP1A, CUL1, CDK6, p27, PI31, Parkin, and PINK1.
**Beat (filmmaking)** Beat (filmmaking): In filmmaking, a beat is a small amount of action resulting in a pause in dialogue. Beats usually involve physical gestures, like a character walking to a window or removing their glasses and rubbing their eyes. Short passages of internal monologue can also be considered a sort of internal beat. Beats are also known as "stage business". The word "beat" is industry slang said to derive from a famous Russian writer who told someone that writing a script was just a matter of putting all the bits together; in his heavy accent he pronounced "bits" as "beats". A beat sheet is a document listing all the events in a movie script, used to guide the writing of that script. Beats as pacing elements: Beats are specific, measured, and spaced to create a pace that moves the progress of the story forward. Audiences feel uneven or erratic beats. Uneven beats are the most forgettable or sometimes tedious parts of a film; erratic beats jolt the audience unnecessarily. Every cinematic genre has a beat that is specific to its development: action films have significantly more beats (usually events), while dramas have fewer beats (usually protagonist decisions or discoveries). Between each beat a sequence occurs. This sequence is often a series of scenes that relates to the last beat and leads up to the next beat. Beats as pacing elements: Following is a beat example from The Shawshank Redemption: At 25 minutes: Andy talks to Red and asks for a rock hammer. - Decision At 30 minutes: Andy gets the rock hammer. - Event At 35 minutes: Andy risks his life to offer financial advice to Mr. Hadley. - Decision At 40 minutes: Andy notes the ease of carving his name in the wall. - Discovery After each beat listed above, a significant series of results takes place in the form of the sequence, but what most people remember are the beats, the moments when something takes place with the protagonist. Beats as pacing elements: McKee: Stories are divided into Acts, Acts into Sequences, Sequences into Scenes, and Scenes into Beats. Robert McKee uses the word "beat" differently from the sense described above. He first defines a scene not as action occurring in one place but as action "that turns the value-charged condition of a character's life on at least one value with a degree of perceptible significance". He describes the Beat as "the smallest element of structure...(Not to be confused with...an indication...meaning 'short pause')". He defines a Beat as "an exchange of behavior in action/reaction. Beat by Beat these changing behaviors shape the turning of a scene." Specifically, a scene will contain multiple beats, the clashes in the conflict, which build a scene to eventually turn the values of a character's life, called a "Story Event". He further describes beats as "distinctively different behaviors, . . . clear changes of action/reaction."
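As an illustration of beats as pacing elements, the Shawshank beats listed above can be laid out as a small beat-sheet data structure; the Beat record and its fields here are hypothetical, not an industry-standard format.

```java
import java.util.List;

// Sketch of a beat sheet: each beat has a timestamp, a kind
// (Decision/Event/Discovery), and a description. The even five-minute
// gaps between beats are what give the passage its steady, measured pace.
public class BeatSheet {
    record Beat(int minute, String kind, String description) {}

    public static void main(String[] args) {
        List<Beat> beats = List.of(
            new Beat(25, "Decision",  "Andy talks to Red and asks for a rock hammer"),
            new Beat(30, "Event",     "Andy gets the rock hammer"),
            new Beat(35, "Decision",  "Andy risks his life to advise Mr. Hadley"),
            new Beat(40, "Discovery", "Andy notes the ease of carving the wall"));

        for (int i = 0; i < beats.size(); i++) {
            Beat b = beats.get(i);
            int gap = (i == 0) ? 0 : b.minute() - beats.get(i - 1).minute();
            System.out.printf("%2d min  %-9s %s (gap: %d min)%n",
                              b.minute(), b.kind(), b.description(), gap);
        }
    }
}
```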
**Kickboxing weight classes** Kickboxing weight classes: Kickboxing weight classes are weight classes that pertain to the sport of kickboxing. Kickboxing weight classes: Organizations often adopt their own rules for weight limits, causing ambiguity in the sport regarding how a weight class should be defined. For a variety of reasons (largely historical), weight classes of the same name can be of vastly different weights. For example, a boxing middleweight weighs up to 72 kg (160 lb), an ISKA middleweight's upper limit is 75 kg (165 lb), and a K-1 middleweight's upper limit is 70 kg (154 lb). Comparison of organizations: This table gives the names and limits recognised by widely regarded sanctioning bodies and promotions in professional kickboxing, Muay Thai, and shoot boxing. AJKF: The (now defunct) All Japan Kickboxing Federation (AJKF) utilized the following weight classes: Enfusion: Enfusion utilizes the following weight classes: Glory: Glory utilizes the following weight classes: IKF: The International Kickboxing Federation (IKF) utilizes the following weight classes: ISKA: The International Sport Kickboxing Association (ISKA) utilizes the following weight classes: It's Showtime: It's Showtime (now defunct) utilized the following weight classes: K-1: K-1 utilizes the following weight classes: K-1 JAPAN: K-1 Japan Group utilizes the following weight classes: KOK: King of Kings (KOK) utilizes the following weight classes: Krush: Krush utilizes the following weight classes: MAJKF: The Martial Arts Japan Kickboxing Federation (MAJKF) utilizes the following weight classes: NJKF: The New Japan Kickboxing Federation (NJKF) utilizes the following weight classes: PKA: The (now defunct) Professional Karate Association (PKA) utilized the following weight classes: ONE Championship: ONE Championship utilizes the following weight classes: RISE: Real Impact Sports Entertainment (RISE) utilizes the following weight classes: Shoot Boxing: Shoot Boxing utilizes the following weight classes: Superkombat: Superkombat Fighting Championship utilizes the following weight classes: Superleague: Superleague utilizes the following weight classes: WAKO: The World Association of Kickboxing Organizations (WAKO) utilizes the following weight classes: WBC Muaythai: The World Boxing Council Muaythai (WBC Muaythai) utilizes the following weight classes: WFCA: The World Full Contact Association (WFCA) utilizes the following weight classes: WKA: The World Kickboxing Association (WKA) utilizes the following weight classes for both amateur and professional competitions: WKN: The World Kickboxing Network (WKN) utilizes the following weight classes: WMC: The World Muaythai Council (WMC) utilizes the following weight classes: WPMF: The World Professional Muaythai Federation (WPMF) utilizes the following weight classes:
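To make the naming ambiguity described at the top of this entry concrete, the sketch below maps each organization quoted there to its stated middleweight upper limit; the figures are only the three given above, and the class name is illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The same class name, "middleweight", caps out at three different weights
// depending on the sanctioning body (figures as quoted in the text).
public class MiddleweightLimits {
    public static void main(String[] args) {
        Map<String, String> upperLimit = new LinkedHashMap<>();
        upperLimit.put("Boxing", "72 kg (160 lb)");
        upperLimit.put("ISKA",   "75 kg (165 lb)");
        upperLimit.put("K-1",    "70 kg (154 lb)");

        upperLimit.forEach((org, limit) ->
            System.out.printf("%-6s middleweight upper limit: %s%n", org, limit));
    }
}
```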
**Basket weaving** Basket weaving: Basket weaving (also basketry or basket making) is the process of weaving or sewing pliable materials into three-dimensional artifacts, such as baskets, mats, mesh bags or even furniture. Craftspeople and artists specializing in making baskets may be known as basket makers and basket weavers. Basket weaving is also a rural craft. Basketry is made from a variety of fibrous or pliable materials—anything that will bend and form a shape. Examples include pine, straw, willow, oak, wisteria, forsythia, vines, stems, fur, hide, grasses, thread, and fine wooden splints. There are many applications for basketry, from simple mats to hot air balloon gondolas. Many Indigenous peoples are renowned for their basket-weaving techniques. History: While basket weaving is one of the most widespread crafts in the history of human civilization, it is hard to say just how old the craft is, because natural materials like wood, grass, and animal remains decay naturally and constantly. Without proper preservation, much of the history of basket making has been lost and can only be speculated upon. History: Middle East: The earliest reliable evidence for basket weaving technology in the Middle East comes from the Pre-Pottery Neolithic phases of Tell Sabi Abyad II and Çatalhöyük. Although no actual basketry remains were recovered, impressions on floor surfaces and on fragments of bitumen suggest that basketry objects were used for storage and architectural purposes. The extremely well-preserved Early Neolithic ritual cave site of Nahal Hemar yielded thousands of intact perishable artefacts, including basketry containers, fabrics, and various types of cordage. Additional Neolithic basketry impressions have been uncovered at Tell es-Sultan (Jericho), Netiv HaGdud, Beidha, Shir, Tell Sabi Abyad III, Domuztepe, Umm Dabaghiyah, Tell Maghzaliyah, Tepe Sarab, Jarmo, and Ali Kosh. The oldest known baskets were discovered in Faiyum in Upper Egypt and have been carbon dated to between 10,000 and 12,000 years old, earlier than any established dates for archaeological evidence of pottery vessels, which were too heavy and fragile to suit far-ranging hunter-gatherers. The oldest and largest complete basket, discovered in the Negev in the Middle East, dates to 10,500 years old. However, baskets seldom survive, as they are made from perishable materials. The most common evidence of a knowledge of basketry is an imprint of the weave on fragments of clay pots, formed by packing clay on the walls of the basket and firing. History: Industrial Revolution: During the Industrial Revolution, baskets were used in factories and for packing and deliveries. Wicker furniture became fashionable in Victorian society. World Wars: During the World Wars, some pannier baskets were used for dropping supplies of ammunition and food to the troops. Types: Basketry may be classified into four types: coiled basketry, using grasses, rushes, and pine needles; plaiting basketry, using materials that are wide and braid-like, such as palms, yucca, or New Zealand flax; twining basketry, using materials from roots and tree bark, a weaving technique in which two or more flexible weaving elements ("weavers") cross each other as they weave through the stiffer radial spokes; and wicker and splint basketry, using materials like reed, cane, willow, oak, and ash. Materials used in basketry: Weaving with rattan core (also known as reed) is one of the more popular techniques being practiced, because it is easily available.
It is pliable, and when woven correctly, it is very sturdy. Also, while traditional materials like oak, hickory, and willow might be hard to come by, reed is plentiful and can be cut into any size or shape that might be needed for a pattern. This includes flat reed, which is used for most square baskets; oval reed, which is used for many round baskets; and round reed, which is used to twine; another advantage is that reed can also be dyed easily to look like oak or hickory. Many types of plants can be used to create baskets: dog rose, honeysuckle, blackberry briars, once the thorns have been scraped off, and many other creepers. Willow was used for its flexibility and the ease with which it could be grown and harvested. Willow baskets were commonly referred to as wickerwork in England. Water hyacinth is used as a base material in some areas where the plant has become a serious pest. For example, a group in Ibadan led by Achenyo Idachaba has been creating handicrafts from it in Nigeria. Materials used in basketry: Vine Because vines have always been readily accessible and plentiful for weavers, they have been a common choice for basketry purposes. The runners are preferable to the vine stems because they tend to be straighter. Materials ranging from pliable kudzu vine to more rigid, woody vines like bittersweet, grapevine, honeysuckle, wisteria and smokevine are good basket weaving materials. Although many vines are not uniform in shape and size, they can be manipulated and prepared in a way that makes them easily used in traditional and contemporary basketry. Most vines can be split and dried to store until use. Once vines are ready to be used, they can be soaked or boiled to increase pliability. Materials used in basketry: Wicker Baskets made from reed are most often referred to as "wicker" baskets, though another popular type of weaving known as "twining" is also a technique used in most wicker baskets. Popular styles of wicker baskets are vast, but some of the more notable styles in the United States are Nantucket Baskets and Williamsburg Baskets. Nantucket Baskets are large and bulky, while Williamsburg Baskets can be any size, so long as the two sides of the basket bow out slightly and get larger as the basket is woven up. Process: The parts of a basket are the base, the side walls, and the rim. A basket may also have a lid, handle, or embellishments. Process: Most baskets begin with a base. The base can either be woven with reed or wooden. A wooden base can come in many shapes to make a wide variety of shapes of baskets. The "static" pieces of the work are laid down first. In a round basket, they are referred to as "spokes"; in other shapes, they are called "stakes" or "staves". Then the "weavers" are used to fill in the sides of a basket. Process: A wide variety of patterns can be made by changing the size, colour, or placement of a certain style of weave. To achieve a multi-coloured effect, aboriginal artists first dye the twine and then weave the twines together in complex patterns. Basketry around the world: Asia South Asia Basketry exists throughout the Indian subcontinent. Since palms are found in the south, basket weaving with this material has a long tradition in Tamil Nadu and surrounding states. Basketry around the world: East Asia Chinese bamboo weaving, Taiwanese bamboo weaving, Japanese bamboo weaving and Korean bamboo weaving go back centuries.
Bamboo is the prime material for making all sorts of baskets, since it is the main material that is available and suitable for basketry. Other materials that may be used are rattan and hemp palm. In Japan, bamboo weaving is registered as a traditional Japanese craft (工芸, kōgei), alongside a range of fine and decorative arts. Basketry around the world: Southeast Asia Southeast Asia has thousands of sophisticated forms of indigenous basketry, many of which use techniques endemic to particular ethnic groups. Materials used vary considerably, depending on the ethnic group and the basket art intended to be made. Bamboo, grass, banana, reeds, and trees are common mediums. Oceania Polynesia Basketry is a traditional practice across the Pacific islands of Polynesia. It uses natural materials like pandanus, coconut fibre, hibiscus fibre, and New Zealand flax according to local custom. Baskets are used for food and general storage, carrying personal goods, and fishing. Basketry around the world: Australia Basketry has been traditionally practised by the women of many Aboriginal Australian peoples across the continent for centuries. The Ngarrindjeri women of southern South Australia have a tradition of coiled basketry, using the sedge grasses growing near the lakes and mouth of the Murray River. The fibre basketry of the Gunditjmara people is noted as a cultural tradition in the World Heritage Listing of the Budj Bim Cultural Landscape in western Victoria, Australia, used for carrying the short-finned eels that were farmed by the people in an extensive aquaculture system. Basketry around the world: North America Native American Basketry Native Americans traditionally make their baskets from the materials available locally. Arctic and Subarctic Arctic and Subarctic tribes use sea grasses for basketry. At the dawn of the 20th century, Inupiaq men began weaving baskets from baleen, a substance derived from whale jaws, and incorporating walrus ivory and whale bone in basketry. Basketry around the world: Northeastern In New England, traditional baskets are woven from swamp ash. The wood is peeled off a felled log in strips, following the growth rings of the tree. In Maine and the Great Lakes regions, traditional baskets are woven from black ash splints. Baskets are also woven from sweet grass, as is traditionally done by Canadian indigenous peoples. Birchbark is used throughout the Subarctic, by a wide range of peoples from the Dene to Ojibwa to Mi'kmaq. Birchbark baskets are often embellished with dyed porcupine quills. Basketry around the world: Kelly Church (Grand Traverse Band of Ottawa and Chippewa Indians) Edith Bondie (Chippewa Indians) Southeastern Southeastern peoples, such as the Atakapa, Cherokee, Choctaw, and Chitimacha, traditionally use split river cane for basketry. A particularly difficult technique for which these peoples are known is double-weave or double-wall basketry, in which each basket is formed by an interior and an exterior wall seamlessly woven together. Doubleweave, although rare, is still practiced today, for instance by Mike Dart (Cherokee Nation). Basketry around the world: Rowena Bradley (Cherokee Nation) Mike Dart (Cherokee Nation) Northwestern Northwestern peoples use spruce root, cedar bark, and swampgrass.
Ceremonial basketry hats are particularly valued by Northwest Coast peoples and are worn today at potlatches. Traditionally, women wove basketry hats, and men painted designs on them. Delores Churchill is a Haida from Alaska who began weaving at a time when Haida basketry was in decline, but she and others have ensured it will continue by teaching the next generation. Basketry around the world: Delores Churchill (Haida) Joe Feddersen (Colville) Boeda Strand (Snohomish) Californian and Great Basin Indigenous peoples of California and the Great Basin are known for their basketry skills. Coiled baskets are particularly common, woven from sumac, yucca, willow, and basket rush. Many works by Californian basket makers are held in museums. Elsie Allen (Pomo people) Mary Knight Benson (Pomo people) William Ralganal Benson (Pomo people) Carrie Bethel (Mono Lake Paiute) Loren Bommelyn (Tolowa) Nellie Charlie (Mono Lake Paiute/Kucadikadi) Louisa Keyser "Dat So La Lee" (Washoe people) is arguably the most famous Native American weaver. Basketry around the world: Lena Frank Dick (1889-1965) (Washoe people) followed Keyser by one generation, and her baskets were frequently mistaken for Keyser's. L. Frank (Tongva-Acagchemem) Sarah Jim Mayo (Washoe) Mabel McKay (Pomo people) Essie Pinola Parrish (Kashaya-Pomo) Lucy Telles (Mono Lake Paiute - Kucadikadi) Southwestern Annie Antone (Tohono O'odham) Damian Jim (Navajo) Terrol Dew Johnson (Tohono O'odham) Mexico In northwestern Mexico, the Seri people continue to "sew" baskets using splints of the limberbush plant, Jatropha cuneata. Other North American Basketry Matt Tommey is a North American artist who weaves sculptural baskets out of kudzu. Mary Jackson is a world-famous African-American sweetgrass basket weaver. In 2008, she was named a MacArthur Fellow for her basket weaving. Elizabeth F. Kinlaw is a North American basketweaver known for her sweetgrass baskets and whose work has been displayed in the Smithsonian Institution. Europe In Greece, basket weaving is practiced by the anchorite monks of Mount Athos. Basketry around the world: Africa Senegal Wolof baskets are coiled baskets made by the Wolof people of Senegal. Basket making is considered a women's craft, passed down across generations. Wolof baskets were traditionally made using thin cuts of palm frond and a thick grass called njodax; however, contemporary Wolof baskets often replace the palm fronds with plastic, sometimes re-used from discarded prayer mats. These baskets are strong and used for laundry hampers, planters, bowls, rugs, and more. Basketry around the world: South Africa Zulu baskets are a traditional craft in the KwaZulu-Natal province of South Africa and were used for utilitarian purposes including holding water, beer, or food; the baskets can take many months to weave. By the late 1960s, Zulu basketry was a dying art form due to the introduction of tin and plastic water containers. Kjell Lofroth, a Swedish minister living in South Africa, noticed the decline in local crafts and, after a drought in the KwaZulu-Natal province, formed the Vukani Arts Association (English: wake up and get going) to financially support single women and their families. At that time, only three elderly women still knew the craft of Zulu basket weaving, but through the Vukani Arts Association they taught others and revived the art.
Beauty Ngxongo is the most renowned living Zulu basket weaver. Zulu telephone wire baskets are a contemporary craft: often brightly colored baskets made with telephone wire (sometimes from a recycled source) as a substitute for native grasses.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Helix (ear)** Helix (ear): The helix is the prominent rim of the auricle. Where the helix turns downwards posteriorly, a small tubercle is sometimes seen, namely the auricular tubercle of Darwin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Corneal ectatic disorders** Corneal ectatic disorders: Corneal ectatic disorders or corneal ectasia are a group of uncommon, noninflammatory eye disorders characterised by bilateral thinning of the central, paracentral, or peripheral cornea. Types: Keratoconus, a progressive, noninflammatory, bilateral, asymmetric disease, characterized by paraxial stromal thinning and weakening that leads to corneal surface distortion. Keratoglobus, a rare noninflammatory corneal thinning disorder, characterised by generalised thinning and globular protrusion of the cornea. Pellucid marginal degeneration, a bilateral, noninflammatory disorder, characterized by a peripheral band of thinning of the inferior cornea. Posterior keratoconus, a rare condition, usually congenital, which causes a nonprogressive thinning of the inner surface of the cornea, while the curvature of the anterior surface remains normal. Usually only a single eye is affected. Post-LASIK ectasia, a complication of LASIK eye surgery. Terrien's marginal degeneration, a painless, noninflammatory, unilateral or asymmetrically bilateral, slowly progressive thinning of the peripheral corneal stroma. Diagnosis: Corneal ectasia is usually diagnosed clinically, although investigations such as corneal topography and corneal tomography may be needed to confirm the diagnosis and to differentiate the different types of corneal ectatic disease. Treatment: Treatment options include contact lenses and intrastromal corneal ring segments for correcting refractive errors caused by an irregular corneal surface, corneal collagen cross-linking to strengthen a weak and ectatic cornea, or corneal transplant for advanced cases.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ball screw** Ball screw: A ball screw (or ballscrew) is a mechanical linear actuator that translates rotational motion to linear motion with little friction. A threaded shaft provides a helical raceway for ball bearings which act as a precision screw. As well as being able to apply or withstand high thrust loads, they can do so with minimum internal friction. They are made to close tolerances and are therefore suitable for use in situations in which high precision is necessary. The ball assembly acts as the nut while the threaded shaft is the screw. Ball screw: In contrast to conventional leadscrews, ball screws tend to be rather bulky, due to the need to have a mechanism to recirculate the balls. History: The ball screw was invented independently by H.M. Stevenson and D. Glenn, who were issued patents 601,451 and 610,044, respectively, in 1898. History: Early precise screwshafts were produced by starting with a low-precision screwshaft, and then lapping the shaft with several spring-loaded nut laps. By rearranging and inverting the nut laps, the lengthwise errors of the nuts and shaft were averaged. The resulting shaft's highly repeatable pitch is then measured against a distance standard. A similar process is sometimes used today to produce reference standard screw shafts, or master manufacturing screw shafts. Design: Low friction in ball screws yields high mechanical efficiency compared to alternatives. A typical ball screw may be 90 percent efficient, versus 20 to 25 percent efficiency of an Acme lead screw of equal size. Lack of sliding friction between the nut and screw lends itself to extended lifespan of the screw assembly (especially in no-backlash systems), reducing downtime for maintenance and parts replacement, while also decreasing demand for lubrication. This, combined with their overall performance benefits and reduced power requirements, may offset the initial costs of using ball screws. Design: Ball screws may also reduce or eliminate backlash common in lead screw and nut combinations. The balls may be preloaded so that there is no "wiggle" between the ball screw and ball nut. This is particularly desirable in applications where the load on the screw varies quickly, such as machine tools. Design: Because of their very high mechanical efficiency, especially compared to traditional lead screws, ball screws can potentially be back-driven (i.e., a linear force applied directly to the nut can induce a rotation of the shaft, an effect counterproductive to most uses). While this is usually of limited consequence to motorized applications, and potentially even provides a mild protective effect in some cases, it makes them generally unsuitable for application in manually actuated systems, such as hand-fed machine tools. The static torque and digital control of an appropriate servomotor can be made to resist and compensate, but hand-cranked mechanisms would require additional mechanisms to prevent undesirable behaviors. Such undesirable behavior could range from simple loss of control of the machine, such as self-feeding (the tool of the machine causing motion of the axes without the control input of the operator), to potentially dangerous cases where unexpected force could be transmitted all the way to an operator's limbs and pose a risk of injury. Because ordinary lead screws resist or even prohibit such reverse operation, they are inherently safer and more reliable for manual use.
The magnitude of force needed to back-drive an Acme lead screw to any consequential degree would usually be sufficient to destroy the mechanism, immobilizing the machine and absorbing any dangerous force before it could pose a risk to an operator. Design: The circulating balls travel inside the thread form of the screw and nut, and balls are recirculated through various types of return mechanisms. If the ball nut did not have a return mechanism the balls would fall out of the end of the ball nut when they reached the end of the nut. For this reason several different recirculation methods have been developed. An external ballnut employs a stamped tube which picks up balls from the raceway by use of a small pick-up finger. Balls travel inside the tube and are then replaced back into the thread raceway. An internal button ballnut employs a machined or cast button style return which allows balls to exit the raceway track, move one thread, and then reenter the raceway. An endcap return ball nut employs a cap on the end of the ball nut. The cap is machined to pick up balls coming out of the end of the nut and direct them down holes which are bored transversely down the ballnut. The complementary cap on the other side of the nut directs balls back into the raceway. The returning balls are not under significant mechanical load and the return path may incorporate injection moulded low-friction plastic parts. Design: A ball screw involves significantly more parts and surface interactions than many similar systems. While a basic lead screw is composed of only a solid shaft and a solid nut with simple mating geometries, a ball screw requires precisely formed curved contours and multi-part assemblies to facilitate the action of the bearing balls. This makes them more expensive to manufacture and sometimes to maintain, and provides more potential avenues for failure if the apparatus is not properly cared for. Equations: T = Fl/(2πν) with the rotary input driving in the conventional way, or F = 2πνT/l if the linear force is back-driving the system, where T is the torque applied to the screw or nut, F is the linear force applied, l is the ball screw lead, and ν is the ball screw efficiency. Selection of the standard to be used is an agreement between the supplier and the user and has some significance in the design of the screw. In the United States, ASME has developed the B5.48-1977 Standard entitled "Ball Screws". Equations: The correct evaluation of the curvatures of the ball screw grooves allows the constructive parameters of this mechanism to be designed accurately and its performance to be enhanced. The formulation commonly used in the literature is borrowed from ball bearing geometry, ignoring the shape of the section's profile and the helix angle. In this approximation the first principal curvature is calculated as cos φ/(rm − rb cos φ) for the screw shaft groove, and as −cos φ/(rm + rb cos φ) for the nut groove, where φ is the contact angle, rm is the pitch circle radius and rb is the ball radius. Equations: The second principal curvature is simply −1/(2 fs rb) for the screw shaft groove and −1/(2 fn rb) for the nut groove, in which fs and fn are, respectively, the conformity factors (groove radius divided by ball diameter) of the groove profiles of the screw shaft and nut. Equations: These formulations do not take into account the shape of the groove profiles and the presence of the helix angle; more recent publications have found the exact solution for the curvature of the grooves of the screw shaft and nut. Newer research proposes a formulation that approximates the real curvature values with a maximum relative error of approximately 0.5%.
This formulation yields much more precise values for the first principal curvatures of the screw shaft and nut grooves by incorporating the helix angle arctan(l/(2π rm)). Operation: To maintain their inherent accuracy and ensure long life, great care is needed to avoid contamination with dirt and abrasive particles. This may be achieved by using rubber or leather bellows to completely or partially enclose the working surfaces. Another solution is to use a positive pressure of filtered air when they are used in a semi-sealed or open enclosure. Operation: While reducing friction, ball screws can operate with some preload, effectively eliminating backlash (slop) between input (rotation) and output (linear motion). This feature is essential when they are used in computer-controlled motion-control systems, e.g., CNC machine tools and high precision motion applications (e.g., wire bonding). Operation: To obtain proper rolling action of the balls, as in a standard ball bearing, it is necessary that, when loaded in one direction, the ball makes contact at one point with the nut, and one point with the screw. In practice, most ball screws are designed to be lightly preloaded, so that there is at least a slight load on the ball at four points, two in contact with the nut and two in contact with the screw. This is accomplished by using a thread profile that has a slightly larger radius than the ball, the difference in radii being kept small (e.g. a simple V thread with flat faces is unsuitable) so that elastic deformation around the point of contact allows a small, but non-zero contact area to be obtained, as in any other rolling element bearing. To this end, the threads are usually machined as a "gothic arch" profile. If a simple semicircular thread profile were used, contact would only be at two points, on the outer and inner edges, which would not resist axial loading. Operation: To remove backlash and obtain the optimum stiffness and wear characteristics for a given application, a controlled amount of preload is usually applied. This is accomplished in some cases by machining the components such that the balls are a "tight" fit when assembled; however, this gives poor control of the preload and cannot be adjusted to allow for wear. It is more common to design the ball nut as effectively two separate nuts which are tightly coupled mechanically, with adjustment by either rotating one nut with respect to the other, so creating a relative axial displacement, or by retaining both nuts tightly together axially and rotating one with respect to the other, so that its set of balls is displaced axially to create the preload. Manufacture: Ball screw shafts may be fabricated by rolling, yielding a less precise, but inexpensive and mechanically efficient product. Rolled ball screws have a positional precision of several thousandths of an inch per foot. Manufacture: Ball screws are classified using "accuracy grades" from C0 (most precise) to C10. High-precision screw shafts are typically precise to one thousandth of an inch per foot (830 nanometers per centimeter) or better. They have historically been machined to gross shape, case-hardened, and then ground. The three-step process is needed because high-temperature machining distorts the work-piece. Hard whirling is a recent (2008) precision machining technique that minimizes heating of the work, and can produce precision screws from case-hardened bar stock. Instrument quality screw shafts are typically precise to 250 nanometers per centimeter.
They are produced on precision milling machines with optical distance measuring equipment and special tooling. Similar machines are used to produce optical lenses and mirrors. Instrument screw shafts are generally made of Invar, to prevent temperature from changing tolerances too much. Applications: Ball screws are used in aircraft and missiles to move control surfaces, especially for electric fly-by-wire, and in automobile power steering to translate rotary motion from an electric motor to axial motion of the steering rack. They are also used in machine tools, robots, and precision assembly equipment. High precision ball screws are used in steppers for semiconductor manufacturing. Applications: A ball screw is used to expand the Deployable Tower Assembly (DTA) structure on the James Webb Space Telescope. Similar systems: Another form of linear actuator based on a rotating rod is the threadless ballscrew, a.k.a. "rolling ring drive". In this design, three (or more) rolling-ring bearings are arranged symmetrically in a housing surrounding a smooth (threadless) actuator rod or shaft. The bearings are set at an angle to the rod, and this angle determines the direction and rate of linear motion per revolution of the rod. An advantage of this design over the conventional ballscrew or leadscrew is the practical elimination of backlash and loading caused by preload nuts.
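To make the torque relation in the Equations section concrete, here is a minimal Python sketch comparing a 90-percent-efficient ball screw with a 25-percent-efficient Acme lead screw of the same lead. The function names and example numbers are illustrative assumptions, not part of any standard.

```python
import math

def drive_torque(force_n, lead_m, efficiency):
    """Input torque (N*m) needed to produce a given axial force: T = F*l / (2*pi*nu)."""
    return force_n * lead_m / (2 * math.pi * efficiency)

def backdrive_force(torque_nm, lead_m, efficiency):
    """Axial force (N) corresponding to a torque when the load back-drives: F = 2*pi*nu*T / l."""
    return 2 * math.pi * efficiency * torque_nm / lead_m

# Illustrative example: 1000 N axial load, 5 mm lead.
for name, nu in [("ball screw", 0.90), ("Acme lead screw", 0.25)]:
    torque = drive_torque(1000.0, 0.005, nu)
    print(f"{name}: {torque:.2f} N*m to drive a 1000 N load")

# The high efficiency that lowers the drive torque is the same property
# that makes the ball screw easy to back-drive:
print(f"{backdrive_force(0.88, 0.005, 0.90):.0f} N back-drives the screw at 0.88 N*m")
```

The roughly fourfold difference in required torque mirrors the efficiency figures quoted in the Design section, and it is why a ball screw, unlike an Acme screw, cannot be relied upon to hold position under load without a brake or servo.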
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Structural variation** Structural variation: Genomic structural variation is the variation in structure of an organism's chromosome. It consists of many kinds of variation in the genome of one species, and usually includes microscopic and submicroscopic types, such as deletions, duplications, copy-number variants, insertions, inversions and translocations. Originally, a structural variation was defined as affecting a sequence length of about 1 kb to 3 Mb, larger than SNPs and smaller than chromosome abnormalities (though the definitions overlap somewhat). However, the operational range of structural variants has widened to include events larger than 50 bp. The definition of structural variation does not imply anything about frequency or phenotypical effects. Many structural variants are associated with genetic diseases, but many are not. Recent research indicates that SVs are more difficult to detect than SNPs. Approximately 13% of the human genome is defined as structurally variant in the normal population, and there are at least 240 genes that exist as homozygous deletion polymorphisms in human populations, suggesting these genes are dispensable in humans. Rapidly accumulating evidence indicates that structural variations can comprise millions of nucleotides of heterogeneity within every genome, and are likely to make an important contribution to human diversity and disease susceptibility. Microscopic structural variation: Microscopic structural variation can be detected with optical microscopes; examples include aneuploidies, marker chromosomes, gross rearrangements and variation in chromosome size. The frequency in human populations is thought to be underestimated because some of these variants are not easy to identify. These structural abnormalities are estimated to exist in 1 of every 375 live births. Sub-microscopic structural variation: Sub-microscopic structural variants are much harder to detect owing to their small size. The first study of this kind, conducted in 2004 using DNA microarrays, could detect tens of genetic loci that exhibited copy number variation, deletions and duplications, greater than 100 kilobases in the human genome. However, by 2015 whole genome sequencing studies could detect around 5,000 structural variants as small as 100 base pairs, encompassing approximately 20 megabases in each individual genome. These structural variants include deletions, tandem duplications, inversions and mobile element insertions. The mutation rate is also much higher than for microscopic structural variants, estimated by two studies at 16% and 20% respectively, both of which are probably underestimates due to the challenges of accurately detecting structural variants. It has also been shown that the generation of spontaneous structural variants significantly increases the likelihood of generating further spontaneous single nucleotide variants or indels within 100 kilobases of the structural variation event. Copy-number variation: Copy-number variation (CNV) is a large category of structural variation, which includes insertions, deletions and duplications. In recent studies, copy-number variations have been tested in people who do not have genetic diseases, using methods that are used for quantitative SNP genotyping. Results show that 28% of the suspected regions in the individuals actually do contain copy number variations. Also, CNVs in the human genome affect more nucleotides in total than single nucleotide polymorphisms (SNPs). Copy-number variation: It is also noteworthy that many CNVs are not in coding regions.
Because CNVs are usually caused by unequal recombination, widespread similar sequences such as LINEs and SINEs may be a common mechanism of CNV creation. Inversion: Several inversions are known to be related to human disease. For instance, a recurrent 400 kb inversion in the factor VIII gene is a common cause of haemophilia A, and smaller inversions affecting iduronate 2-sulphatase (IDS) cause Hunter syndrome. More examples include Angelman syndrome and Sotos syndrome. However, recent research shows that a single person can carry 56 putative inversions, so non-disease inversions are more common than previously supposed. The same study indicated that inversion breakpoints are commonly associated with segmental duplications. One 900 kb inversion on chromosome 17 is under positive selection and is predicted to increase in frequency in European populations. Other structural variants: More complex structural variants can occur, combining several of the above in a single event. The most common type of complex structural variation is the non-tandem duplication, where sequence is duplicated and inserted in inverted or direct orientation into another part of the genome. Other classes of complex structural variant include deletion-inversion-deletions, duplication-inversion-duplications, and tandem duplications with nested deletions. There are also cryptic translocations and segmental uniparental disomy (UPD). Reports of these variations are increasing, but they are more difficult to detect than traditional variations because they are balanced and array-based or PCR-based methods are not able to locate them. Structural variation and phenotypes: Some genetic diseases are suspected to be caused by structural variations, but the relationship is not certain. It is not plausible to divide these variants into two classes as "normal" or "disease", because the actual effect of the same variant will also vary. Also, a few of the variants are actually positively selected for (mentioned above). Structural variation and phenotypes: A series of studies have shown that spontaneous (de novo) CNVs disrupt genes approximately four times more frequently in autism than in controls and contribute to approximately 5–10% of cases. Inherited variants also contribute to around 5–10% of cases of autism. Structural variation also has uses in population genetics. Differences in the frequency of the same variation can be used as a genetic marker to infer relationships between populations in different areas. A complete comparison between human and chimpanzee structural variation also suggested that some of these may be fixed in one species because of their adaptive function. There are also deletions related to resistance against malaria and AIDS. Also, some highly variable segments are thought to be caused by balancing selection, but there are also studies against this hypothesis. Database of structural variation: Some genome browsers and bioinformatic databases list structural variations in the human genome with an emphasis on CNVs, and can show them in the genome browsing page, for example, the UCSC Genome Browser. On the page viewing a part of the genome, there are "Common Cell CNVs" and "Structural Var" tracks which can be enabled. On NCBI, there is a special page for structural variation. In that system, both "inner" and "outer" coordinates are shown; neither represents the actual breakpoints, but rather the surmised minimum and maximum ranges of sequence affected by the structural variation.
The types are classified as insertion, loss, gain, inversion, LOH, everted, transchr and UPD. Methods of detection: New methods have been developed to analyze human genetic structural variation at high resolution. The genome can be tested either in a specific targeted way or in a genome-wide manner. For genome-wide tests, array-based comparative genomic hybridization approaches provide the best genome-wide scans to find new copy number variants. These techniques use DNA fragments that are labeled from a genome of interest and are hybridized, with another genome labeled differently, to arrays spotted with cloned DNA fragments. This reveals copy number differences between the two genomes. For targeted genome examinations, the best assays for checking specific areas of the genome are primarily PCR-based. The best established of the PCR-based methods is real-time quantitative polymerase chain reaction (qPCR). A different approach is to specifically check certain areas that surround known segmental duplications, since they are usually areas of copy number variation. An SNP genotyping method that offers independent fluorescence intensities for the two alleles can be used to target the nucleotides in between two copies of a segmental duplication. From this, an increase in intensity from one of the alleles compared to the other can be observed. Methods of detection: With the development of next-generation sequencing (NGS) technology, four classes of strategies for the detection of structural variants with NGS data have been reported, each being based on patterns that are diagnostic of different classes of SV. Read-depth or read-count methods assume a random distribution (e.g. Poisson distribution) of reads from short read sequencing. The divergence from this distribution is investigated to discover duplications and deletions. Regions with duplication will show higher read depth while those with deletion will result in lower read depth. Split-read methods enable detection of insertions (including mobile element insertions) and deletions down to single base-pair resolution. The presence of an SV is identified from discontinuous alignment to the reference genome: a gap in the read marks a deletion, and a gap in the reference marks an insertion. Read pair methods examine the length and orientation of paired-end reads from short read sequencing data. For example, read pairs further apart than expected indicate a deletion. Translocations, inversions and tandem duplications can likewise be discovered using read pairs. De novo sequence assembly may be applied with reads that are accurate enough. While, in practice, use of this method is limited by the length of sequence reads, long-read-based genome assemblies offer structural variant discovery for classes such as insertions that escape detection when using other methods.
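As a minimal sketch of the read-depth strategy just described — and only a sketch, with the window size, thresholds, and input format all illustrative assumptions rather than any published tool's method — the following Python snippet flags fixed-size windows whose mapped-read count deviates strongly from the typical depth as candidate deletions or duplications.

```python
from statistics import median

def call_cnv_candidates(window_counts, fold=2.0):
    """Flag windows whose read count deviates strongly from the median depth.

    window_counts: mapped-read counts per fixed-size genomic window.
    Returns (window_index, count, call) tuples, call being "loss" or "gain".
    """
    expected = median(window_counts)  # robust stand-in for the expected (Poisson mean) depth
    calls = []
    for i, count in enumerate(window_counts):
        if count <= expected / fold:    # markedly low depth: candidate deletion
            calls.append((i, count, "loss"))
        elif count >= expected * fold:  # markedly high depth: candidate duplication
            calls.append((i, count, "gain"))
    return calls

# Toy data: windows 3-4 resemble a heterozygous deletion, window 8 a duplication.
depth = [98, 103, 100, 45, 41, 99, 102, 97, 210, 101]
print(call_cnv_candidates(depth))  # [(3, 45, 'loss'), (4, 41, 'loss'), (8, 210, 'gain')]
```

Real read-depth callers replace the fixed fold-change cutoff with a proper statistical test against the Poisson (or overdispersed) expectation and correct for mappability and GC bias, but the underlying logic is the same.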
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Miliary fever** Miliary fever: Miliary fever was a medical term used in the past (it appears, for example, on Wolfgang Amadeus Mozart's death report) to indicate a general class of infectious diseases causing acute fever and a skin rash of small bumps resembling the grains of the cereal proso millet. After subsequent advances in medicine, the term fell into disuse, supplanted by other, more specific disease names, for example the modern miliary tuberculosis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hyperhidrosis** Hyperhidrosis: Hyperhidrosis is a condition characterized by abnormally increased sweating, in excess of that required for regulation of body temperature. Although primarily a benign physical burden, hyperhidrosis can impair quality of life from a psychological, emotional, and social perspective. In fact, hyperhidrosis almost always leads to psychological as well as physical and social consequences. People suffering from it face difficulties in their professional lives; more than 80% experience a moderate to severe emotional impact from the disease, and half are subject to depression. Hyperhidrosis: This excess of sweat happens even if the person is not engaging in tasks that require muscular effort, and it does not depend on exposure to heat. Commonly affected areas include the underarms, face, neck, back, groin, feet, and hands. It has been called by some researchers 'the silent handicap'. Both diaphoresis and hidrosis can mean either perspiration (in which sense they are synonymous with sweating) or excessive perspiration, in which case they refer to a specific, narrowly defined, clinical disorder. Classification: Hyperhidrosis can either be generalized, or localized to specific parts of the body. Hands, feet, armpits, groin, and the facial area are among the most active regions of perspiration due to the high number of sweat glands (eccrine glands in particular) in these areas. When excessive sweating is localized (e.g. palms, soles, face, underarms, scalp) it is referred to as primary hyperhidrosis or focal hyperhidrosis. Excessive sweating involving the whole body is termed generalized hyperhidrosis or secondary hyperhidrosis. It is usually the result of some other, underlying condition. Classification: Primary or focal hyperhidrosis may be further divided by the area affected, for instance, palmoplantar hyperhidrosis (symptomatic sweating of only the hands or feet) or gustatory hyperhidrosis (sweating of the face or chest a few moments after eating certain foods). Hyperhidrosis can also be classified by onset, either congenital (present at birth) or acquired (beginning later in life). Primary or focal hyperhidrosis usually starts during adolescence or even earlier and seems to be inherited as an autosomal dominant genetic trait. It must be distinguished from secondary hyperhidrosis, which can start at any point in life. Secondary hyperhidrosis may be due to a disorder of the thyroid or pituitary glands, diabetes mellitus, tumors, gout, menopause, certain drugs, or mercury poisoning. One classification scheme uses the amount of skin affected. In this scheme, excessive sweating in an area of 100 square centimeters (16 square inches) or more is differentiated from sweating that affects only a small area. Another classification scheme is based on possible causes of hyperhidrosis. Causes: The cause of primary hyperhidrosis is unknown. Anxiety or excitement can exacerbate the condition. A common complaint is that nervousness triggers sweating, and the sweating in turn makes the person more nervous. Other factors can play a role, including certain foods and drinks, nicotine, caffeine, and smells. Similarly, secondary (generalized) hyperhidrosis has many causes including certain types of cancer, disturbances of the endocrine system, infections, and medications. Causes: Primary Primary (focal) hyperhidrosis has many causes.
Idiopathic unilateral circumscribed hyperhidrosis Reported association with: Blue rubber bleb nevus Glomus tumor POEMS syndrome Burning feet syndrome (Gopalan's) Trench foot Causalgia Pachydermoperiostosis Pretibial myxedema Gustatory sweating associated with: Encephalitis Syringomyelia Diabetic neuropathies Herpes zoster (shingles) Parotitis Parotid abscesses Thoracic sympathectomy Auriculotemporal or Frey's syndrome Miscellaneous Lacrimal sweating (due to postganglionic sympathetic deficit, often seen in Raeder's syndrome) Harlequin syndrome Emotional hyperhidrosis Cancer A variety of cancers have been associated with the development of secondary hyperhidrosis including lymphoma, pheochromocytoma, carcinoid tumors (resulting in carcinoid syndrome), and tumors within the thoracic cavity. Causes: Endocrine Certain endocrine conditions are also known to cause secondary hyperhidrosis including diabetes mellitus (especially when blood sugars are low), acromegaly, hyperpituitarism, pheochromocytoma (tumor of the adrenal glands, present in 71% of patients) and various forms of thyroid disease. Medications Use of selective serotonin reuptake inhibitors (e.g., sertraline) is a common cause of medication-induced secondary hyperhidrosis. Other medications associated with secondary hyperhidrosis include tricyclic antidepressants, stimulants, opioids, nonsteroidal anti-inflammatory drugs (NSAIDs), glyburide, insulin, anxiolytic agents, adrenergic agonists, and cholinergic agonists. Causes: Miscellaneous In people with a history of spinal cord injuries Autonomic dysreflexia Orthostatic hypotension Posttraumatic syringomyelia Associated with peripheral neuropathies Familial dysautonomia (Riley-Day syndrome) Congenital autonomic dysfunction with universal pain loss Exposure to cold, notably associated with cold-induced sweating syndrome Associated with probable brain lesions Episodic with hypothermia (Hines and Bannick syndrome) Episodic without hypothermia Olfactory Associated with systemic medical problems Parkinson's disease Fibromyalgia Congestive heart failure Anxiety Obesity Menopausal state Night sweats Compensatory Infantile acrodynia induced by chronic low-dose mercury exposure, leading to elevated catecholamine accumulation and resulting in a clinical picture resembling pheochromocytoma. Causes: Febrile diseases Vigorous exercise A hot, humid environment Diagnosis: Symmetry of excessive sweating in hyperhidrosis is most consistent with primary hyperhidrosis. To diagnose this condition, a dermatologist gives the person a physical exam. This includes looking closely at the areas of the body that sweat excessively. A dermatologist also asks very specific questions. This helps the physician understand why the person has excessive sweating. Sometimes medical testing is necessary. Some patients require a test called the sweat test. This involves coating some of their skin with a powder that turns purple when the skin gets wet. Diagnosis: Excessive sweating affecting only one side of the body is more suggestive of secondary hyperhidrosis and further investigation for a neurologic cause is recommended. Treatment: Antihydral cream is one of the solutions prescribed for palmar hyperhidrosis. Topical agents for hyperhidrosis therapy include formaldehyde lotion and topical anticholinergics. These agents reduce perspiration by denaturing keratin, in turn occluding the pores of the sweat glands. They have a short-lasting effect. Formaldehyde is classified as a probable human carcinogen.
Contact sensitization is increased, especially with formalin. Aluminium chlorohydrate is used in regular antiperspirants. However, hyperhidrosis requires solutions or gels with a much higher concentration. These antiperspirant solutions or hyperhidrosis gels are especially effective for treatment of the axillary (underarm) region. Normally it takes around three to five days to see improvement. The most common side-effect is skin irritation. For severe cases of plantar and palmar hyperhidrosis, there has been some success with conservative measures such as higher strength aluminium chloride antiperspirants. Treatment algorithms for hyperhidrosis recommend topical antiperspirants as the first line of therapy for hyperhidrosis. Both the International Hyperhidrosis Society and the Canadian Hyperhidrosis Advisory Committee have published treatment guidelines for focal hyperhidrosis that are said to be evidence-based. Treatment: Prescription medications called anticholinergics, often taken by mouth, are sometimes used in the treatment of both generalized and focal hyperhidrosis. Anticholinergics used for hyperhidrosis include propantheline, glycopyrronium bromide or glycopyrrolate, oxybutynin, methantheline, and benzatropine. Use of these drugs can be limited, however, by side-effects, including dry mouth, urinary retention, constipation, and visual disturbances such as mydriasis (dilation of the pupils) and cycloplegia. For people who find their hyperhidrosis is made worse by anxiety-provoking situations (public speaking, stage performances, special events such as weddings, etc.), taking an anticholinergic medicine before the event may be helpful. Several anticholinergic drugs can reduce hyperhidrosis. Oxybutynin (brand name Ditropan) is one that has shown promise, although it can have side-effects, such as drowsiness, visual symptoms and dryness of the mouth and other mucous membranes. Glycopyrrolate is another drug sometimes used. It is said to be nearly as effective as oxybutynin, but has similar side-effects. In 2018, the U.S. Food and Drug Administration (FDA) approved the topical anticholinergic glycopyrronium tosylate (brand name Qbrexza) for the treatment of primary axillary hyperhidrosis. For peripheral hyperhidrosis, some people have found relief by simply ingesting crushed ice water. Ice water helps to cool excessive body heat during its transport through the blood vessels to the extremities, effectively lowering overall body temperature to normal levels within ten to thirty minutes. Treatment: Procedures Injections of botulinum toxin type A can be used to block neural control of sweat glands. The effect can last from 3–9 months depending on the site of injections. This use has been approved by the U.S. Food and Drug Administration (FDA). The duration of the beneficial effect in primary palmar hyperhidrosis has been found to increase with repetition of the injections. The Botox injections tend to be painful. Various measures have been tried to minimize the pain, one of which is the application of ice. Treatment: This was first demonstrated by Khalaf Bushara and colleagues as the first nonmuscular use of BTX-A in 1993. BTX-A has since been approved for the treatment of severe primary axillary hyperhidrosis (excessive underarm sweating of unknown cause), which cannot be managed by topical agents. A microwave-based device has been tried for excessive underarm perspiration and appears to show promise.
Rare but serious side effects of this device have been reported in the literature, such as brachial plexus injury with paralysis of the upper limbs. Tap water iontophoresis as a treatment for palmoplantar hyperhidrosis was originally described in the 1950s. Studies showed positive results and good safety with tap water iontophoresis. One trial found it decreased sweating by about 80%. Treatment: Surgery Sweat gland removal or destruction is one surgical option available for axillary hyperhidrosis (excessive underarm perspiration). There are multiple methods for sweat gland removal or destruction, such as sweat gland suction, retrodermal curettage, axillary liposuction, Vaser, and laser sweat ablation. Sweat gland suction is a technique adapted from liposuction. The other main surgical option is endoscopic thoracic sympathectomy (ETS), which cuts, burns, or clamps the thoracic ganglion on the main sympathetic chain that runs alongside the spine. Clamping is intended to permit the reversal of the procedure. ETS is generally considered a "safe, reproducible, and effective procedure and most patients are satisfied with the results of the surgery". Satisfaction rates above 80% have been reported, and are higher for children. The procedure brings relief from excessive hand sweating in about 85–95% of people. ETS may be helpful in treating axillary hyperhidrosis, facial blushing and facial sweating, but failure rates in people with facial blushing and/or excessive facial sweating are higher, and such people may be more likely to experience unwanted side effects. ETS side-effects have been described as ranging from trivial to devastating. The most common side-effect of ETS is compensatory sweating (sweating in different areas than prior to the surgery). Major problems with compensatory sweating are seen in 20–80% of people undergoing the surgery. Most people find the compensatory sweating to be tolerable, while 1–51% claim that their quality of life decreased as a result of compensatory sweating. Total body perspiration in response to heat has been reported to increase after sympathectomy. The original sweating problem may recur due to nerve regeneration, sometimes as early as 6 months after the procedure. Other possible side-effects include Horner's Syndrome (about 1%), gustatory sweating (less than 25%) and excessive dryness of the palms (sandpaper hands). Some people have experienced cardiac sympathetic denervation, which can result in a 10% decrease in heart rate both at rest and during exercise, resulting in decreased exercise tolerance. Percutaneous sympathectomy is a minimally invasive procedure similar to the botulinum method, in which nerves are blocked by an injection of phenol. The procedure provides temporary relief in most cases. Some physicians advocate trying this more conservative procedure before resorting to surgical sympathectomy, the effects of which are usually not reversible. Prognosis: Hyperhidrosis can have physiological consequences such as cold and clammy hands, dehydration, and skin infections secondary to maceration of the skin. Hyperhidrosis can also have devastating emotional effects on one's individual life. Those with hyperhidrosis may have greater stress levels and more frequent depression. Excessive sweating or focal hyperhidrosis of the hands interferes with many routine activities, such as securely grasping objects.
Some focal hyperhidrosis sufferers avoid situations where they will come into physical contact with others, such as greeting a person with a handshake. Hiding embarrassing sweat spots under the armpits limits the affected person's arm movements and posture. In severe cases, shirts must be changed several times during the day, and additional showers are needed both to remove sweat and to control body odor or microbial problems such as acne, dandruff, or athlete's foot. Additionally, anxiety caused by self-consciousness about the sweating may aggravate the sweating. Excessive sweating of the feet makes it harder for people to wear slide-on or open-toe shoes, as the feet slide around in the shoe because of sweat. Some careers present challenges for people with hyperhidrosis. For example, careers that require the use of a knife may not be safely performed by people with excessive sweating of the hands. The risk of dehydration can limit the ability of some to function in extremely hot (especially if also humid) conditions. Even the playing of musical instruments can be uncomfortable or difficult because of sweaty hands. Epidemiology: It is estimated that the incidence of focal hyperhidrosis may be as high as 2.8% of the population of the United States. It affects men and women equally, and most commonly occurs among people aged 25–64 years, though some may have been affected since early childhood. About 30–50% of people have another family member affected, implying a genetic predisposition. In 2006, researchers at Saga University in Japan reported that primary palmar hyperhidrosis maps to gene locus 14q11.2–q13.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Postural restoration** Postural restoration: Postural restoration is a posture-based approach to physical medicine. Its advocates claim that it improves postural adaptations, the function of the respiratory system, and asymmetrical patterns. They claim that the treatment aims to maximize neutrality in the body through manual and non-manual exercise techniques designed to reposition, retrain, and restore these asymmetrical patterned positions. It is used by some physical therapists and athletic trainers. Despite common preferences among physiotherapists for certain postures, there is little strong evidence that any specific posture leads to better medical outcomes. Mechanism: Advocates for this technique claim that it can improve breathing mechanics, including diaphragmatic function. They use the term "zone of apposition" to describe where the diaphragm attaches to the rib cage. The diaphragm's mechanical action and respiratory advantage depend on its relationship and anatomical arrangement with the rib cage. History: Physical therapist Ron Hruska developed his postural restoration method in the early 1990s. In 1999, he founded the Postural Restoration Institute, located in Lincoln, Nebraska, to train other healthcare professionals in his method.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Microsatellite** Microsatellite: A microsatellite is a tract of repetitive DNA in which certain DNA motifs (ranging in length from one to six or more base pairs) are repeated, typically 5–50 times. Microsatellites occur at thousands of locations within an organism's genome. They have a higher mutation rate than other areas of DNA, leading to high genetic diversity. Microsatellites are often referred to as short tandem repeats (STRs) by forensic geneticists and in genetic genealogy, or as simple sequence repeats (SSRs) by plant geneticists. Microsatellites and their longer cousins, the minisatellites, together are classified as VNTR (variable number of tandem repeats) DNA. The name "satellite" DNA refers to the early observation that centrifugation of genomic DNA in a test tube separates a prominent layer of bulk DNA from accompanying "satellite" layers of repetitive DNA. They are widely used for DNA profiling in cancer diagnosis, in kinship analysis (especially paternity testing) and in forensic identification. They are also used in genetic linkage analysis to locate a gene or a mutation responsible for a given trait or disease. Microsatellites are also used in population genetics to measure levels of relatedness between subspecies, groups and individuals. History: Although the first microsatellite was characterised in 1984 at the University of Leicester by Weller, Jeffreys and colleagues as a polymorphic GGAT repeat in the human myoglobin gene, the term "microsatellite" was introduced later, in 1989, by Litt and Luty. The increasing availability of DNA amplification by PCR at the beginning of the 1990s triggered a large number of studies using the amplification of microsatellites as genetic markers for forensic medicine, for paternity testing, and for positional cloning to find the gene underlying a trait or disease. Prominent early applications include the identification by microsatellite genotyping of the eight-year-old skeletal remains of a British murder victim (Hagelberg et al. 1991), and of the Auschwitz concentration camp doctor Josef Mengele, who escaped to South America following World War II (Jeffreys et al. 1992). Structures, locations, and functions: A microsatellite is a tract of tandemly repeated (i.e. adjacent) DNA motifs that range in length from one to six or up to ten nucleotides (the exact definition and delineation from the longer minisatellites varies from author to author), and are typically repeated 5–50 times. For example, the sequence TATATATATA is a dinucleotide microsatellite, and GTCGTCGTCGTCGTC is a trinucleotide microsatellite (with A being Adenine, G Guanine, C Cytosine, and T Thymine). Repeat units of four and five nucleotides are referred to as tetra- and pentanucleotide motifs, respectively. Most eukaryotes have microsatellites, with the notable exception of some yeast species. Microsatellites are distributed throughout the genome. The human genome for example contains 50,000–100,000 dinucleotide microsatellites, and lesser numbers of tri-, tetra- and pentanucleotide microsatellites. Many are located in non-coding parts of the human genome and therefore do not produce proteins, but they can also be located in regulatory regions and coding regions.
Structures, locations, and functions: Microsatellites in non-coding regions may not have any specific function, and therefore might not be selected against; this allows them to accumulate mutations unhindered over the generations and gives rise to variability that can be used for DNA fingerprinting and identification purposes. Other microsatellites are located in regulatory flanking or intronic regions of genes, or directly in codons of genes – microsatellite mutations in such cases can lead to phenotypic changes and diseases, notably in triplet expansion diseases such as fragile X syndrome and Huntington's disease. Telomeres are linear sequences of DNA that sit at the very ends of chromosomes and protect the integrity of genomic material (not unlike an aglet on the end of a shoelace) during successive rounds of cell division due to the "end replication problem". In white blood cells, the gradual shortening of telomeric DNA has been shown to inversely correlate with ageing in several sample types. Telomeres consist of repetitive DNA, with the hexanucleotide repeat motif TTAGGG in vertebrates. They are thus classified as minisatellites. Similarly, insects have shorter repeat motifs in their telomeres that could arguably be considered microsatellites. Mutation mechanisms and mutation rates: Unlike point mutations, which affect only a single nucleotide, microsatellite mutations lead to the gain or loss of an entire repeat unit, and sometimes two or more repeats simultaneously. Thus, the mutation rate at microsatellite loci is expected to differ from other mutation rates, such as base substitution rates. The mutation rate at microsatellite loci depends on the repeat motif sequence, the number of repeated motif units and the purity of the canonical repeated sequence. A variety of mechanisms for mutation of microsatellite loci have been reviewed, and their resulting polymorphic nature has been quantified. The actual cause of mutations in microsatellites is debated. Mutation mechanisms and mutation rates: One proposed cause of such length changes is replication slippage, caused by mismatches between DNA strands while being replicated during meiosis. DNA polymerase, the enzyme responsible for reading DNA during replication, can slip while moving along the template strand and continue at the wrong nucleotide. DNA polymerase slippage is more likely to occur when a repetitive sequence (such as CGCGCG) is replicated. Because microsatellites consist of such repetitive sequences, DNA polymerase may make errors at a higher rate in these sequence regions. Several studies have found evidence that slippage is the cause of microsatellite mutations. Typically, slippage in each microsatellite occurs about once per 1,000 generations. Thus, slippage changes in repetitive DNA are three orders of magnitude more common than point mutations in other parts of the genome. Most slippage results in a change of just one repeat unit, and slippage rates vary for different allele lengths and repeat unit sizes, and within different species. If there is a large size difference between individual alleles, then there may be increased instability during recombination at meiosis. Another possible cause of microsatellite mutations is point mutation, where only one nucleotide is incorrectly copied during replication. A study comparing human and primate genomes found that most changes in repeat number in short microsatellites appear due to point mutations rather than slippage.
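Because the slippage process described above amounts to a stepwise random walk in repeat number, it is easy to illustrate in a few lines of code. The following Python sketch, in the spirit of the stepwise mutation model used in population genetics, is a toy illustration under loudly stated assumptions: a single allele, steps of exactly one repeat unit, and a per-generation slippage probability on the order of the once-per-1,000-generations figure quoted above.

```python
import random

def evolve_repeat_count(n_repeats=20, generations=10_000, slip_rate=1e-3, seed=0):
    """Toy stepwise model: each generation the allele may gain or lose one repeat unit."""
    rng = random.Random(seed)
    for _ in range(generations):
        if rng.random() < slip_rate:
            # Most slippage events change the tract length by a single repeat unit.
            n_repeats = max(1, n_repeats + rng.choice((-1, 1)))
    return n_repeats

# Five independent lineages starting from the same 20-repeat allele drift apart,
# which is the kind of variability exploited for DNA fingerprinting.
print([evolve_repeat_count(seed=s) for s in range(5)])
```

Even this crude model reproduces the qualitative point of the section: at roughly ten slippage events per 10,000 generations, allele lengths at a single locus diversify far faster than point mutations alone would allow.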
Mutation mechanisms and mutation rates: Microsatellite mutation rates Direct estimates of microsatellite mutation rates have been made in numerous organisms, from insects to humans. In the desert locust Schistocerca gregaria, the microsatellite mutation rate was estimated at 2.1 × 10^−4 per generation per locus. The microsatellite mutation rate in human male germ lines is five to six times higher than in female germ lines and ranges from 0 to 7 × 10^−3 per locus per gamete per generation. In the nematode Pristionchus pacificus, the estimated microsatellite mutation rate ranges from 8.9 × 10^−5 to 7.5 × 10^−4 per locus per generation. Microsatellite mutation rates vary with base position relative to the microsatellite, repeat type, and base identity. Mutation rate rises with repeat number, peaking around six to eight repeats and then decreasing again. Increased heterozygosity in a population will also increase microsatellite mutation rates, especially when there is a large length difference between alleles. This is likely due to homologous chromosomes with arms of unequal lengths causing instability during meiosis. Biological effects of microsatellite mutations: Many microsatellites are located in non-coding DNA and are biologically silent. Others are located in regulatory or even coding DNA – microsatellite mutations in such cases can lead to phenotypic changes and diseases. A genome-wide study estimates that microsatellite variation contributes 10–15% of heritable gene expression variation in humans. Biological effects of microsatellite mutations: Effects on proteins In mammals, 20–40% of proteins contain repeating sequences of amino acids encoded by short sequence repeats. Most of the short sequence repeats within protein-coding portions of the genome have a repeating unit of three nucleotides, since that length will not cause frame-shifts when mutating. Each trinucleotide repeating sequence is translated into a repeating series of the same amino acid. In yeasts, the most common repeated amino acids are glutamine, glutamic acid, asparagine, aspartic acid and serine. Biological effects of microsatellite mutations: Mutations in these repeating segments can affect the physical and chemical properties of proteins, with the potential for producing gradual and predictable changes in protein action. For example, length changes in tandemly repeating regions in the Runx2 gene lead to differences in facial length in domesticated dogs (Canis familiaris), with an association between longer sequence lengths and longer faces. This association also applies to a wider range of Carnivora species. Length changes in polyalanine tracts within the HoxA13 gene are linked to hand-foot-genital syndrome, a developmental disorder in humans. Length changes in other triplet repeats are linked to more than 40 neurological diseases in humans, notably trinucleotide repeat disorders such as fragile X syndrome and Huntington's disease. Evolutionary changes from replication slippage also occur in simpler organisms. For example, microsatellite length changes are common within surface membrane proteins in yeast, providing rapid evolution in cell properties. Specifically, length changes in the FLO1 gene control the level of adhesion to substrates. Short sequence repeats also provide rapid evolutionary change to surface proteins in pathogenic bacteria; this may allow them to keep up with immunological changes in their hosts.
Length changes in short sequence repeats in a fungus (Neurospora crassa) control the duration of its circadian clock cycles. Biological effects of microsatellite mutations: Effects on gene regulation Length changes of microsatellites within promoters and other cis-regulatory regions can change gene expression quickly, between generations. The human genome contains many (>16,000) short sequence repeats in regulatory regions, which provide 'tuning knobs' on the expression of many genes. Length changes in bacterial SSRs can affect fimbriae formation in Haemophilus influenzae, by altering promoter spacing. Dinucleotide microsatellites are linked to abundant variation in cis-regulatory control regions in the human genome. Microsatellites in control regions of the Vasopressin 1a receptor gene in voles influence their social behavior and level of monogamy. In Ewing sarcoma (a painful bone cancer that mainly affects young people), a point mutation has created an extended GGAA microsatellite which binds a transcription factor, which in turn activates the EGR2 gene that drives the cancer. In addition, other GGAA microsatellites may influence the expression of genes that contribute to the clinical outcome of Ewing sarcoma patients. Biological effects of microsatellite mutations: Effects within introns Microsatellites within introns also influence phenotype, through means that are not currently understood. For example, a GAA triplet expansion in the first intron of the X25 gene appears to interfere with transcription, and causes Friedreich's ataxia. Tandem repeats in the first intron of the Asparagine synthetase gene are linked to acute lymphoblastic leukaemia. A repeat polymorphism in the fourth intron of the NOS3 gene is linked to hypertension in a Tunisian population. Reduced repeat lengths in the EGFR gene are linked with osteosarcomas. An archaic form of splicing preserved in zebrafish is known to use microsatellite sequences within intronic mRNA for the removal of introns in the absence of U2AF2 and other splicing machinery. It is theorized that these sequences form highly stable cloverleaf configurations that bring the 3' and 5' intron splice sites into close proximity, effectively replacing the spliceosome. This method of RNA splicing is believed to have been lost from the human lineage at the emergence of tetrapods and to represent an artifact of an RNA world. Biological effects of microsatellite mutations: Effects within transposons Almost 50% of the human genome is contained in various types of transposable elements (also called transposons, or 'jumping genes'), and many of them contain repetitive DNA. It is probable that short sequence repeats in those locations are also involved in the regulation of gene expression. Applications: Microsatellites are used for assessing chromosomal DNA deletions in cancer diagnosis. Microsatellites are widely used for DNA profiling, also known as "genetic fingerprinting", of crime stains (in forensics) and of tissues (in transplant patients). They are also widely used in kinship analysis (most commonly in paternity testing). Also, microsatellites are used for mapping locations within the genome, specifically in genetic linkage analysis to locate a gene or a mutation responsible for a given trait or disease. As a special case of mapping, they can be used for studies of gene duplication or deletion. Researchers use microsatellites in population genetics and in species conservation projects.
Plant geneticists have proposed the use of microsatellites for marker-assisted selection of desirable traits in plant breeding. Applications: Cancer diagnosis In tumour cells, whose controls on replication are damaged, microsatellites may be gained or lost at an especially high frequency during each round of mitosis. Hence a tumour cell line might show a different genetic fingerprint from that of the host tissue, and, especially in colorectal cancer, might present with loss of heterozygosity. Microsatellites analyzed in primary tissue have therefore been routinely used in cancer diagnosis to assess tumour progression. Genome-wide association studies (GWAS) have been used to identify microsatellite biomarkers as a source of genetic predisposition in a variety of cancers. Applications: Forensic and medical fingerprinting Microsatellite analysis became popular in the field of forensics in the 1990s. It is used for the genetic fingerprinting of individuals, where it permits forensic identification (typically matching a crime stain to a victim or perpetrator). It is also used to follow up bone marrow transplant patients. The microsatellites in use today for forensic analysis are all tetra- or penta-nucleotide repeats, as these give a high degree of error-free data while being short enough to survive degradation in non-ideal conditions. Even shorter repeat sequences would tend to suffer from artifacts such as PCR stutter and preferential amplification, while longer repeat sequences would suffer more from environmental degradation and would amplify less well by PCR. Another forensic consideration is that the person's medical privacy must be respected, so forensic STRs are chosen which are non-coding, do not influence gene regulation, and are not usually trinucleotide STRs which could be involved in triplet expansion diseases such as Huntington's disease. Forensic STR profiles are stored in DNA databanks such as the UK National DNA Database (NDNAD), the American CODIS or the Australian NCIDD. Applications: Kinship analysis (paternity testing) Autosomal microsatellites are widely used for DNA profiling in kinship analysis (most commonly in paternity testing). Paternally inherited Y-STRs (microsatellites on the Y chromosome) are often used in genealogical DNA testing. Applications: Genetic linkage analysis During the 1990s and the first several years of this millennium, microsatellites were the workhorse genetic markers for genome-wide scans to locate any gene responsible for a given phenotype or disease, using segregation observations across generations of a sampled pedigree. Although the rise of higher-throughput and cost-effective single-nucleotide polymorphism (SNP) platforms led to the era of the SNP for genome scans, microsatellites remain highly informative measures of genomic variation for linkage and association studies. Their continued advantage lies in their greater allelic diversity than biallelic SNPs; thus microsatellites can differentiate alleles within a SNP-defined linkage disequilibrium block of interest. Thus, microsatellites have successfully led to discoveries of genes for type 2 diabetes (TCF7L2) and prostate cancer (the 8q21 region). Applications: Population genetics Microsatellites were popularized in population genetics during the 1990s because, as PCR became ubiquitous in laboratories, researchers were able to design primers and amplify sets of microsatellites at low cost. Their uses are wide-ranging.
A microsatellite with a neutral evolutionary history is applicable for measuring or inferring bottlenecks, local adaptation, the allelic fixation index (FST), population size, and gene flow. As next-generation sequencing becomes more affordable, the use of microsatellites has decreased; however, they remain a crucial tool in the field. Applications: Plant breeding Marker-assisted selection or marker-aided selection (MAS) is an indirect selection process where a trait of interest is selected based on a marker (morphological, biochemical or DNA/RNA variation) linked to a trait of interest (e.g. productivity, disease resistance, stress tolerance, and quality), rather than on the trait itself. Microsatellites have been proposed to be used as such markers to assist plant breeding. Analysis: Repetitive DNA is not easily analysed by next-generation DNA sequencing methods, as some technologies struggle with homopolymeric tracts. A variety of software approaches have been created for the analysis of raw next-generation DNA sequencing reads to determine the genotype and variants at repetitive loci. Microsatellites can be analysed and verified by established PCR amplification and amplicon size determination, sometimes followed by Sanger DNA sequencing. Analysis: In forensics, the analysis is performed by extracting nuclear DNA from the cells of a sample of interest, then amplifying specific polymorphic regions of the extracted DNA by means of the polymerase chain reaction. Once these sequences have been amplified, they are resolved either through gel electrophoresis or capillary electrophoresis, which will allow the analyst to determine how many repeats of the microsatellite sequence in question there are. If the DNA was resolved by gel electrophoresis, the DNA can be visualized either by silver staining (low sensitivity, safe, inexpensive), or an intercalating dye such as ethidium bromide (fairly sensitive, moderate health risks, inexpensive), or, as most modern forensics labs use, fluorescent dyes (highly sensitive, safe, expensive). Instruments built to resolve microsatellite fragments by capillary electrophoresis also use fluorescent dyes. Forensic profiles are stored in major databanks. The British database for microsatellite loci identification was originally based on the British SGM+ system using 10 loci and a sex marker. The Americans increased this number to 13 loci. The Australian database is called the NCIDD, and since 2013 it has been using 18 core markers for DNA profiling. Analysis: Amplification Microsatellites can be amplified for identification by the polymerase chain reaction (PCR) process, using the unique sequences of flanking regions as primers. DNA is repeatedly denatured at a high temperature to separate the double strand, then cooled to allow annealing of primers and the extension of nucleotide sequences through the microsatellite. This process results in production of enough DNA to be visible on agarose or polyacrylamide gels; only small amounts of DNA are needed for amplification because thermocycling creates an exponential increase in the replicated segment. With the abundance of PCR technology, primers that flank microsatellite loci are simple and quick to use, but the development of correctly functioning primers is often a tedious and costly process. Analysis: Design of microsatellite primers If searching for microsatellite markers in specific regions of a genome, for example within a particular intron, primers can be designed manually.
This involves searching the genomic DNA sequence for microsatellite repeats, which can be done by eye or by using automated tools such as RepeatMasker. Once the potentially useful microsatellites are determined, the flanking sequences can be used to design oligonucleotide primers which will amplify the specific microsatellite repeat in a PCR reaction. Analysis: Random microsatellite primers can be developed by cloning random segments of DNA from the focal species. These random segments are inserted into a plasmid or bacteriophage vector, which is in turn implanted into Escherichia coli bacteria. Colonies are then developed, and screened with fluorescently labelled oligonucleotide sequences that will hybridize to a microsatellite repeat, if present on the DNA segment. If positive clones can be obtained from this procedure, the DNA is sequenced and PCR primers are chosen from sequences flanking such regions to determine a specific locus. This process involves significant trial and error on the part of researchers, as microsatellite repeat sequences must be predicted and primers that are randomly isolated may not display significant polymorphism. Microsatellite loci are widely distributed throughout the genome and can be isolated from semi-degraded DNA of older specimens, as all that is needed is a suitable substrate for amplification through PCR. Analysis: More recent techniques involve using oligonucleotide sequences consisting of repeats complementary to repeats in the microsatellite to "enrich" the DNA extracted (microsatellite enrichment). The oligonucleotide probe hybridizes with the repeat in the microsatellite, and the probe/microsatellite complex is then pulled out of solution. The enriched DNA is then cloned as normal, but the proportion of successes will now be much higher, drastically reducing the time required to develop the regions for use. However, deciding which probes to use can be a trial-and-error process in itself. Analysis: ISSR-PCR ISSR (for inter-simple sequence repeat) is a general term for a genome region between microsatellite loci. The complementary sequences to two neighboring microsatellites are used as PCR primers; the variable region between them gets amplified. The limited length of amplification cycles during PCR prevents excessive replication of overly long contiguous DNA sequences, so the result will be a mix of a variety of amplified DNA strands which are generally short but vary considerably in length. Analysis: Sequences amplified by ISSR-PCR can be used for DNA fingerprinting. Since an ISSR may be a conserved or nonconserved region, this technique is not useful for distinguishing individuals, but rather for phylogeography analyses or perhaps delimiting species; sequence diversity is lower than in SSR-PCR, but still higher than in actual gene sequences. In addition, microsatellite sequencing and ISSR sequencing are mutually assisting, as one produces primers for the other. Analysis: Limitations Repetitive DNA is not easily analysed by next-generation DNA sequencing methods, which struggle with homopolymeric tracts. Therefore, microsatellites are normally analysed by conventional PCR amplification and amplicon size determination. The use of PCR means that microsatellite length analysis is prone to PCR limitations like any other PCR-amplified DNA locus.
A particular concern is the occurrence of 'null alleles': Occasionally, within a sample of individuals such as in paternity testing casework, a mutation in the DNA flanking the microsatellite can prevent the PCR primer from binding and producing an amplicon (creating a "null allele" in a gel assay); thus only one allele is amplified (from the non-mutated sister chromosome), and the individual may then falsely appear to be homozygous. This can cause confusion in paternity casework. It may then be necessary to amplify the microsatellite using a different set of primers. Null alleles are caused especially by mutations at the 3' section, where extension commences. Analysis: In species or population analysis, for example in conservation work, PCR primers which amplify microsatellites in one individual or species can work in other species. However, the risk of applying PCR primers across different species is that null alleles become likely whenever sequence divergence is too great for the primers to bind. The species may then artificially appear to have a reduced diversity. Null alleles in this case can sometimes be indicated by an excessive frequency of homozygotes causing deviations from Hardy–Weinberg equilibrium expectations.
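The repeat-searching step underlying the analyses above can be illustrated with a short script. The following is a minimal, hypothetical sketch (not any published tool such as RepeatMasker): it scans a sequence for motifs of one to six bases repeated at least five times, matching the definition given at the start of the article. A real tool would additionally collapse redundant calls, e.g. a long poly-A tract that would otherwise be reported at several motif lengths.

```python
import re

def find_microsatellites(seq, min_unit=1, max_unit=6, min_repeats=5):
    """Scan a DNA sequence for tandem repeats of short motifs.

    Returns (start, motif, repeat_count) tuples for tracts in which a
    motif of length min_unit..max_unit recurs at least min_repeats times.
    """
    hits = []
    for unit in range(min_unit, max_unit + 1):
        # ([ACGT]{unit}) captures a candidate motif; \1{min_repeats-1,}
        # requires the same motif to recur immediately at least
        # min_repeats-1 more times.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            count = len(m.group(0)) // unit
            hits.append((m.start(), motif, count))
    return hits

# Example: a (GTC)6 trinucleotide tract embedded in flanking sequence.
print(find_microsatellites("AACCT" + "GTC" * 6 + "GGATA"))
# -> [(5, 'GTC', 6)]
```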
**Bernhard W. Roth** Bernhard W. Roth: Bernhard Wilhelm Roth (born 13 November 1970) is a German experimental physicist. Scientific career: From 1992 to 1997 Roth studied physics at the Universität Bielefeld, from where he obtained his diploma in physics. He received his doctoral degree (Dr. rer. nat.) in the field of atomic and particle physics at the Universität Bielefeld. From 2002 to 2007 he worked as assistant professor and group leader in experimental quantum optics at the Institute for Experimental Physics, Quantum Optics and Relativity Group, at the Heinrich-Heine-Universität Düsseldorf. In 2007 Roth obtained his state doctorate (Habilitation) in experimental physics in the field of production and spectroscopy of ultracold molecular ions at the Heinrich-Heine-Universität Düsseldorf. From 2007 to 2010 he was associate professor and group leader at the Institute for Experimental Physics of the Heinrich-Heine-Universität Düsseldorf, and from 2011 to 2012 center manager at the Centre for Innovation Competence innoFSPEC Potsdam of the Leibniz Institute for Astrophysics Potsdam (AIP) and the Universität Potsdam. Since 2012 he has been scientific and managing director of the Hannover Centre for Optical Technologies (HOT), an interdisciplinary research centre of the Gottfried Wilhelm Leibniz Universität Hannover. In 2012 Roth also obtained his state doctorate in physics at the Gottfried Wilhelm Leibniz Universität Hannover, and in 2014 he was appointed extraordinary professor in physics at the Faculty of Mathematics and Physics of the Gottfried Wilhelm Leibniz Universität Hannover. As director of HOT he is one of the coordinators of the International Master Program Optical Technologies: Photonics and Laser Technology (M.Sc.) at the Leibniz Universität Hannover. Research: The scientific activities of B. Roth are focused on applied and fundamental research in optics and photonics. This includes the development of integrated functional photonics and polymer-optical sensing, e.g., based on fibre-optic or planar concepts, laser spectroscopy, optofluidics and analytics in the life sciences, optical technologies for multimodal imaging in medicine, information and illumination technology, as well as digital holography. Furthermore, hybrid multi-physics and multi-scale numerical simulations for complex optical systems and algorithms for simulation inversion are investigated. His initial research fields include quantum optics and laser spectroscopy, in particular laser cooling of trapped atomic and molecular ions and high-precision laser spectroscopy of ultracold molecular ions, as well as low-energy atomic and particle physics. Roth is a member of the Collaborative Research Center PlanOS - Planar optronic systems team, and one of the principal investigators (PIs) in the Cluster of Excellence PhoenixD: Photonics, Optics, Engineering - Innovation Across Disciplines of the German Research Foundation (DFG). Awards: Roth is the recipient of the prestigious Kaiser-Friedrich Forschungspreis 2018 (Kaiser-Friedrich Research Award 2018) for Photonic Technologies for the Digital Laboratory for the project SmartSens, together with Dr. Johanna-Gabriela Walter (TCI, Leibniz University Hannover) and Dr. Kort Bremer (HOT, Leibniz University Hannover). The prize was awarded for the development of novel smartphone-based optical sensing for medicine and the life sciences.
In 2021, Roth received the Kaiser-Friedrich Forschungspreis 2020 (Kaiser-Friedrich Research Award 2020) for Photonic Technologies in Environmental and Climate Protection, together with Dr. Ann-Kathrin Kniggendorf (HOT, Leibniz University Hannover), for the project OPTIMUS. The prize was awarded for the development of new optical systems for the online detection of microplastics in the environment. Publications: Roth, Bernhard (2001). Spinabhängige Asymmetriefunktionen in der elastischen und inelastischen Elektron-Cäsium-Streuung bei mittleren Energien [Spin-dependent asymmetry functions in elastic and inelastic electron-caesium scattering at intermediate energies]. Universität Bielefeld (doctoral thesis, in German). Roth, Bernhard (2007). Production, Manipulation and Spectroscopy of Cold Trapped Molecular Ions. Universität Düsseldorf (habilitation thesis). As author / co-author he contributed, for example, to the following books: Friedrich, B.; Krems, R.; Stwalley, W. (Eds.) (2009). Cold Molecules: Theory, Experiment, Applications. CRC Press, Taylor and Francis. ISBN 978-1420059038. Roth, B. (2008). Cold Molecules: Cold Trapped Molecular Ions - Production, Manipulation, and Spectroscopy. VDM Verlag Saarbrücken. ISBN 978-3-8364-9399-4. Lachmayer, R.; Lippert, R.B.; Kaierle, S. (Eds.) (2017). Additive Serienfertigung - Erfolgsfaktoren und Handlungsfelder für die Anwendung [Additive Series Production - Success Factors and Fields of Action for Its Application]. Springer Vieweg Verlag. ISBN 978-3-662-56462-2.
**Flow network** Flow network: In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or a sink, which has only incoming flow. A network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes. Definition: A network is a directed graph G = (V, E) with a non-negative capacity function c for each edge, and without multiple arcs (i.e. edges with the same source and target nodes). Without loss of generality, we may assume that if (u, v) ∈ E, then (v, u) is also a member of E. Additionally, if (v, u) ∉ E then we may add (v, u) to E and set c(v, u) = 0. If two nodes in G are distinguished – one as the source s and the other as the sink t – then (G, c, s, t) is called a flow network. Flows: Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such as: what is the maximum number of units that can be transferred from the source node s to the sink node t? The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other. Flows: The excess function xf : V → R represents the net flow entering a given node u (i.e. the sum of the flows entering u) and is defined by xf (u) = ∑w∈V f (w, u). A node u is said to be active if xf (u) > 0 (i.e. the node u consumes flow), deficient if xf (u) < 0 (i.e. the node u produces flow), or conserving if xf (u) = 0. In flow networks, the source s is deficient, and the sink t is active. Flows: Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions. Flows: A pseudo-flow is a function f on each edge in the network that satisfies the following two constraints for all nodes u and v: Skew symmetry constraint: The flow on an arc from u to v is equivalent to the negation of the flow on the arc from v to u, that is: f (u, v) = −f (v, u). The sign of the flow indicates the flow's direction. Flows: Capacity constraint: An arc's flow cannot exceed its capacity, that is: f (u, v) ≤ c(u, v). A pre-flow is a pseudo-flow that, for all v ∈ V \{s}, satisfies the additional constraint: Non-deficient flows: The net flow entering the node v is non-negative, except for the source, which "produces" flow. That is: xf (v) ≥ 0 for all v ∈ V \{s}. A feasible flow, or just a flow, is a pseudo-flow that, for all v ∈ V \{s, t}, satisfies the additional constraint: Flow conservation constraint: The total net flow entering a node v is zero for all nodes in the network except the source s and the sink t, that is: xf (v) = 0 for all v ∈ V \{s, t}. In other words, for all nodes in the network except the source s and the sink t, the total sum of the incoming flow of a node is equal to its outgoing flow (i.e. ∑(u,v)∈E f (u, v) = ∑(v,z)∈E f (v, z) for each vertex v ∈ V \{s, t}). The value | f | of a feasible flow f for a network is the net flow into the sink t of the flow network, that is: | f | = xf (t).
Note, the flow value in a network is also equal to the total outgoing flow of the source s, that is: | f | = −xf (s). Also, if we define A as a set of nodes in G such that s ∈ A and t ∉ A, the flow value is equal to the total net flow going out of A (i.e. | f | = f^out(A) − f^in(A)). The flow value in a network is the total amount of flow from s to t. Concepts useful to flow problems: Flow decomposition Flow decomposition is the process of breaking a given flow down into a collection of path flows and cycle flows. Any feasible flow can be decomposed into at most |E| such paths and cycles, and the decomposition provides a direct way to analyse the flow structure of a network. It is also useful in optimization problems where specific flow parameters are to be maximized or minimized, with applications in transportation, logistics and network design. Concepts useful to flow problems: Adding arcs and flows We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc: Given any two nodes u and v, having two arcs from u to v with capacities c1(u,v) and c2(u,v) respectively is equivalent to considering only a single arc from u to v with a capacity equal to c1(u,v)+c2(u,v). Concepts useful to flow problems: Given any two nodes u and v, having two arcs from u to v with pseudo-flows f1(u,v) and f2(u,v) respectively is equivalent to considering only a single arc from u to v with a pseudo-flow equal to f1(u,v)+f2(u,v). Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. Adding flow to an arc is the same as adding an arc with a capacity of zero. Concepts useful to flow problems: Residuals The residual capacity of an arc e with respect to a pseudo-flow f is denoted cf, and it is the difference between the arc's capacity and its flow. That is, cf (e) = c(e) − f (e). From this we can construct a residual network, denoted Gf (V, Ef), with a capacity function cf which models the amount of available capacity on the set of arcs in G = (V, E). More specifically, the capacity function cf of each arc (u, v) in the residual network represents the amount of flow which can be transferred from u to v given the current state of the flow within the network. Concepts useful to flow problems: This concept is used in the Ford–Fulkerson algorithm, which computes the maximum flow in a flow network. Concepts useful to flow problems: Note that there can be an unsaturated path (a path with available capacity) from u to v in the residual network, even though there is no such path from u to v in the original network. Since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v. Concepts useful to flow problems: Augmenting paths An augmenting path is a path (u1, u2, ..., uk) in the residual network, where u1 = s, uk = t, and cf (ui, ui+1) > 0 for all 1 ≤ i < k. More simply, an augmenting path is an available flow path from the source to the sink.
A network is at maximum flow if and only if there is no augmenting path in the residual network Gf. Concepts useful to flow problems: The bottleneck is the minimum residual capacity of all the edges in a given augmenting path. See the example explained in the "Example" section of this article. The flow network is at maximum flow if and only if every source-to-sink path has a bottleneck equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0; conversely, a bottleneck value greater than 0 implies an augmenting path from the source to the sink, and hence that the network is not at maximum flow. Concepts useful to flow problems: The term "augmenting the flow" for an augmenting path means increasing the flow f of each arc in the augmenting path by the bottleneck value. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck. Multiple sources and/or sinks Sometimes, when modeling a network with more than one source, a supersource is introduced to the graph. This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink. Example: In Figure 1 you see a flow network with source labeled s, sink t, and four additional nodes. The flow and capacity are denoted f/c. Notice how the network upholds the skew symmetry constraint, capacity constraint, and flow conservation constraint. The total amount of flow from s to t is 5, which can be easily seen from the fact that the total outgoing flow from s is 5, which is also the incoming flow to t. Note, Figure 1 is often written in the notation style of Figure 2. Example: In Figure 3 you see the residual network for the given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge (d,c). This network is not at maximum flow. There is available capacity along the paths (s,a,c,t), (s,a,b,d,t) and (s,a,b,d,c,t), which are then the augmenting paths. The bottleneck of the (s,a,c,t) path is equal to min(c(s,a)−f(s,a), c(a,c)−f(a,c), c(c,t)−f(c,t)) = min(cf(s,a), cf(a,c), cf(c,t)) = min(5−3, 3−2, 2−1) = min(2, 1, 1) = 1. Applications: Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet. Applications: Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent to Kirchhoff's current law.
Applications: Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time. Classifying flow problems: The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. Maximum flow problems can be solved in polynomial time with various algorithms (see table). The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another. Classifying flow problems: In a multi-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. These could be, for example, various goods that are produced at various factories and are to be delivered to various given customers through the same transportation network. In a minimum cost flow problem, each edge (u,v) has a given cost k(u,v), and the cost of sending the flow f(u,v) across the edge is f(u,v)·k(u,v). The objective is to send a given amount of flow from the source to the sink, at the lowest possible price. Classifying flow problems: In a circulation problem, you have a lower bound ℓ(u,v) on the edges, in addition to the upper bound c(u,v). Each edge also has a cost. Often, flow conservation holds for all nodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow with ℓ(t,s) and c(t,s). The flow circulates through the network, hence the name of the problem. Classifying flow problems: In a network with gains or generalized network, each edge has a gain, a real number (not zero) such that, if the edge has gain g, and an amount x flows into the edge at its tail, then an amount gx flows out at the head. In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks.
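The augmenting-path machinery described above translates almost directly into code. Below is a minimal Edmonds–Karp-style sketch (BFS to find a shortest augmenting path, then augmentation by the bottleneck value); the dictionary-based graph representation and the function name are illustrative choices, not taken from any particular library.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds–Karp: repeatedly find a shortest augmenting path by BFS
    and push the bottleneck amount of flow along it.

    capacity: dict mapping node -> {neighbor: capacity}. Missing arcs are
    treated as capacity 0, so residual reverse arcs work implicitly.
    """
    # Residual capacities, initialised from the input capacities;
    # ensure every arc has a (possibly zero-capacity) reverse arc.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for an augmenting path s -> t in the residual network.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path: flow is maximal
            return flow
        # Bottleneck = minimum residual capacity along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Augment: decrease residual capacity forward, increase backward.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Tiny example: two parallel two-arc routes from s to t.
caps = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
print(max_flow(caps, "s", "t"))  # 4
```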
**Annulus (botany)** Annulus (botany): An annulus in botany is an arc or a ring of specialized cells on the sporangium. These cells are arranged in a single row, and are associated with the release or dispersal of spores. Ferns: In leptosporangiate ferns, the annulus is located on the outer rim of the sporangium and serves in spore dispersal. It consists typically of a ring or belt of dead, water-filled cells with differentially thickened cell walls that stretches about two-thirds of the way around each sporangium. The thinner walls on the outside allow water to evaporate quickly under dry conditions. This water loss causes the cells to shrink, contracting and straightening the annulus ring and eventually rupturing the sporangial wall (dehiscence) by ripping apart thin-walled lip cells on the opposite side of the sporangium. As more water evaporates, air bubbles form in the cells, causing the contracted annulus to snap forward again, thus dislodging and launching the spores away from the plant. The type and position of the annulus are variable (e.g. patch, apical, oblique, or vertical) and can be used to distinguish major groups of leptosporangiate ferns. Mosses: In mosses, an annulus is a complete ring of cells around the tip of the sporangium, which dissolve to allow the cap to fall off and the spores to be released. Footnotes: Noblin et al. (2012). The Fern Sporangium: A Unique Catapult. Science 335 (6074): 1322.
**Invisible disability** Invisible disability: Invisible disabilities, also known as hidden disabilities or non-visible disabilities (NVDs), are disabilities that are not immediately apparent. They are typically chronic illnesses and conditions that significantly impair normal activities of daily living. For example, some people with visual or auditory disabilities who do not wear glasses or hearing aids, or who use discreet hearing aids, may not be obviously disabled. Some people who have vision loss may wear contact lenses. Invisible disability: Invisible disabilities can also include issues with mobility, such as a sitting disability like chronic back pain, joint problems, or chronic pain. People affected may not use mobility aids on some days, or at all, because the severity of pain or level of mobility may change from day to day. Most people with repetitive strain injury move in a typical and inconspicuous way, and are even encouraged by the medical community to be as active as possible, including playing sports; yet those people can have dramatic limitations in how much they can type or write, or how long they can hold a phone or other objects in their hands. Mental disabilities or illnesses, such as ADHD, dyslexia, autism, or schizophrenia, are also classified as invisible disabilities because they are usually not detected immediately by looking at or talking to a person. Invisible disability: 96% of people with chronic illnesses have an invisible disability. It is estimated that 1 in 10 Americans live with an invisible disability. This number is likely higher worldwide, as 80% of all people with disabilities live in developing countries. Impact: Invisible disabilities can hinder a person's efforts to go to school, work, socialize, and more. Although the disability creates a challenge for the person who has it, the reality of the disability can be difficult for others to recognize or acknowledge. Others may not understand the cause of the problem if they cannot see evidence of it in a visible way. Students with cognitive impairments find it difficult to organize and complete school work, but teachers who are unaware of the reason for a student's difficulties can become impatient. A columnist for Psychology Today wrote: I recently met Grace, a woman who had a traumatic brain injury when she was sixteen years old. She was in a car accident, an all too common occurrence. An accident occurs, the head hits a part of the car and internal damage to the brain results, ranging from mild to severe. Grace shows no outside cues of brain damage. There are no visible cues of her head injury. Grace's walking, vision and physical reflexes look "normal." [...] People look at Grace and assume she is fine and then react to her difficulty as if she is being lazy or choosing to be obstinate. Teachers' judgments of Grace have been based on assumptions made from Grace's physical appearance. Impact: This lack of understanding can be detrimental to a person's social capital. People may see someone with an invisible disability as lazy, weak, or antisocial. A disability may cause someone to lose connections with friends or family due to this lack of understanding, potentially leading to lower self-esteem. Impact: A disability that may be visible in some situations may not be obvious in others, which can result in a serious problem. For example, a plane passenger who is deaf may be unable to hear verbal instructions given by a flight attendant.
It is for this reason that travellers with a hidden disability are advised to inform the airline of their need for accommodations before their flight. One such passenger wrote in The Globe and Mail that: Once, flying to Washington shortly after 9/11, I didn't hear the announcement that absolutely no one was to get out of their seat for the last 30 minutes of the flight. Normally, I get up to use the washroom 20 minutes before landing. If the nice stewardess had not remembered me and come over to my seat, crouched down to my eye level, and told me that if I had to use the washroom, I had better use it right now, who knows what might have happened. I later learned the air marshals on board would have thrown a blanket on me and wrestled me to the floor. Impact: Some employees with an invisible disability choose not to disclose their diagnosis to their employer, due to social stigma directed at people with disabilities, either in the workplace or in society in general. This may occur when a psychiatric disability is involved, or a number of other medical conditions that are invisible. Researchers in the human resources field may need to take this non-disclosure into account when carrying out studies. Many people generally consider those with a disability to be lower to middle class, due to their medical costs and because many people with disabilities often lack reliable, full-time employment. According to one US survey, 74% of individuals with a disability do not use a wheelchair or other aids that may visually portray their disability. A 2011 survey found that 88% of people with an invisible disability had negative views of disclosing their disability to employers. Data from the Bureau of Labor Statistics in 2017 states that the unemployment rate for individuals with an invisible disability is higher than for those without one. The unemployment rate for people with a disability was 9.2%, while the rate for those without was less than half of this, at only 4.2%. The BBC states that people with HIV specifically have an unemployment rate three times higher than those without HIV. Beyond the workforce, Bureau of Labor Statistics data also showed that individuals with an invisible disability are less likely to receive a bachelor's degree or higher education. Prevalence: United States In the United States, 96% of people with chronic medical conditions show no outward signs of their illness, and 10% experience symptoms that are considered disabling. Nearly one in two Americans (165 million) has a chronic medical condition of one kind or another. However, most of these people are not actually disabled, as their medical conditions do not impair normal activities. Ninety-six percent of people with chronic medical conditions live with a condition that is invisible. These people do not use a cane or any assistive device and act as if they did not have a medical condition. About a quarter of them have some type of activity limitation, ranging from mild to severe; the remaining 75% are not disabled by their chronic conditions. Legal protection: Those with invisible disabilities are protected by national and local disability laws, such as the Americans with Disabilities Act in the US. The Rehabilitation Act of 1973 has been amended several times such that the definition of "handicapped" includes the statement, "any person who...
(C) is regarded as having such an impairment". This particular defining point of "handicapped" puts the assessment of impairment in the hands of observers, who may or may not regard others as having an impairment. For people with disabilities, invisible or not, this creates a space for discriminatory practices which stem from the observer's perception of who is disabled and who is not. Legal protection: In the United Kingdom, the Equality Act 2010 (and the Disability Discrimination Act 1995 before it) requires employers to make reasonable adjustments for employees with disabilities, both visible and invisible. Responses: A growing number of organizations, governments, and institutions are implementing policies and regulations to accommodate persons with invisible disabilities. Governments and school boards have implemented screening tests to identify students with learning disabilities, as well as other invisible disabilities, such as vision or hearing difficulties, or problems in cognitive ability, motor skills, or social or emotional development. If a hidden disability is identified, resources can be used to place a child in a special education program that will help them progress in school. One mitigation is to provide an easy way for people to self-designate as having an invisible disability, and for organizations to have processes in place to assist those so self-designating. An example of this is the Hidden Disabilities Sunflower, initially launched in the UK in 2016 but now gaining some international recognition as well. Campaigns: In the UK, activist Athaly Altay began the End Fake Claiming Campaign in 2021 to raise awareness of the widespread harassment faced by people with invisible disabilities. The campaign calls on the UK government to update hate crime laws to make fake claiming a specific hate crime.
**Källén–Lehmann spectral representation** Källén–Lehmann spectral representation: The Källén–Lehmann spectral representation gives a general expression for the (time-ordered) two-point function of an interacting quantum field theory as a sum of free propagators. It was discovered by Gunnar Källén and Harry Lehmann independently. Using the mostly-minus metric signature, it can be written as

$$\Delta(p)=\int_0^\infty d\mu^2\,\rho(\mu^2)\,\frac{1}{p^2-\mu^2+i\epsilon},$$

where $\rho(\mu^2)$ is the spectral density function, which should be positive definite. In a gauge theory this latter condition cannot be guaranteed, but nevertheless a spectral representation can be provided. This belongs to non-perturbative techniques of quantum field theory. Mathematical derivation: The following derivation employs the mostly-minus metric signature. In order to derive a spectral representation for the propagator of a field $\Phi(x)$, one considers a complete set of states $\{|n\rangle\}$ so that, for the two-point function, one can write

$$\langle 0|\Phi(x)\Phi^\dagger(y)|0\rangle=\sum_n\langle 0|\Phi(x)|n\rangle\langle n|\Phi^\dagger(y)|0\rangle.$$

We can now use Poincaré invariance of the vacuum to write down

$$\langle 0|\Phi(x)\Phi^\dagger(y)|0\rangle=\sum_n e^{-ip_n\cdot(x-y)}\,|\langle 0|\Phi(0)|n\rangle|^2.$$

Mathematical derivation: Next we introduce the spectral density function

$$\rho(p^2)\,\theta(p_0)\,(2\pi)^{-3}=\sum_n\delta^4(p-p_n)\,|\langle 0|\Phi(0)|n\rangle|^2,$$

where we have used the fact that our two-point function, being a function of $p^\mu$, can only depend on $p^2$. Besides, all the intermediate states have $p^2\ge 0$ and $p_0>0$. It is immediate to realize that the spectral density function is real and positive. So one can write

$$\langle 0|\Phi(x)\Phi^\dagger(y)|0\rangle=\int\frac{d^4p}{(2\pi)^3}\int_0^\infty d\mu^2\,e^{-ip\cdot(x-y)}\,\rho(\mu^2)\,\theta(p_0)\,\delta(p^2-\mu^2),$$

and we freely interchange the integrations (this should be done carefully from a mathematical standpoint, but here we ignore this) to write this expression as

$$\langle 0|\Phi(x)\Phi^\dagger(y)|0\rangle=\int_0^\infty d\mu^2\,\rho(\mu^2)\,\Delta'(x-y;\mu^2),$$

where

$$\Delta'(x-y;\mu^2)=\int\frac{d^4p}{(2\pi)^3}\,e^{-ip\cdot(x-y)}\,\theta(p_0)\,\delta(p^2-\mu^2).$$

From the CPT theorem we also know that an identical expression holds for $\langle 0|\Phi^\dagger(x)\Phi(y)|0\rangle$, and so we arrive at the expression for the time-ordered product of fields

$$\langle 0|T\Phi(x)\Phi^\dagger(y)|0\rangle=\int_0^\infty d\mu^2\,\rho(\mu^2)\,\Delta(x-y;\mu^2),$$

where now

$$\Delta(p;\mu^2)=\frac{1}{p^2-\mu^2+i\epsilon}$$

is a free-particle propagator. Now, as we have the exact propagator given by the time-ordered two-point function, we have obtained the spectral decomposition.
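As a consistency check (a standard special case, not stated in the original text): for a free scalar field of mass $m$, only single-particle states contribute, the spectral density collapses to a delta function, and the representation reproduces the free propagator.

```latex
% Free scalar field of mass m: the spectral density is a single delta peak.
\rho(\mu^2) = \delta(\mu^2 - m^2)
\quad\Longrightarrow\quad
\Delta(p) = \int_0^\infty d\mu^2\, \delta(\mu^2 - m^2)\,
            \frac{1}{p^2 - \mu^2 + i\epsilon}
          = \frac{1}{p^2 - m^2 + i\epsilon}.
% In an interacting theory one instead expects an isolated pole at the
% physical mass plus a continuum starting at the multi-particle threshold.
```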
**Minipermeameter** Minipermeameter: In petroleum engineering, a minipermeameter is a gas-based device for measuring permeability in porous rocks. Minipermeametry has been used in the oil industry since the late 1960s (Eijpe and Weber, 1971) without becoming in any way a standard experimental method in core analysis or reservoir characterisation. Laboratory minipermeametry can make important contributions, both as an improved methodology within experimental petrophysics and as a source of data invaluable in routine reservoir characterisation (Halvorsen and Hurst, 1990). The values obtained from the minipermeameter may need to be calibrated with a Klinkenberg correction.
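For context, the Klinkenberg correction mentioned above compensates for gas slippage along pore walls at low pressures. A standard form of the relation, given here as general petrophysics background rather than taken from the cited papers, is:

```latex
% Klinkenberg relation: apparent gas permeability vs. mean pore pressure.
k_a = k_\infty \left( 1 + \frac{b}{\bar{P}} \right)
% k_a      apparent gas permeability measured at mean pressure \bar{P}
% k_\infty slip-free (liquid-equivalent) permeability
% b        Klinkenberg slip factor, a property of the gas and the rock.
% Plotting k_a against 1/\bar{P} for several pressures gives a straight
% line whose intercept as 1/\bar{P} -> 0 is the corrected permeability.
```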
**Inerter (mechanical networks)** Inerter (mechanical networks): In the study of mechanical networks in control theory, an inerter is a two-terminal device in which the forces applied at the terminals are equal, opposite, and proportional to the relative acceleration between the nodes. Under the name of J-damper the concept has been used in Formula 1 racing car suspension systems. It can be constructed with a flywheel mounted on a rack and pinion. It has a similar effect to increasing the inertia of the sprung object. Discovery: Malcolm C. Smith, a control engineering professor at the University of Cambridge, first introduced inerters in a 2002 paper. Smith extended the analogy between electrical and mechanical networks (the mobility analogy). He observed that the analogy was incomplete, since it was missing a mechanical device playing the same role as an electrical capacitor. The analogy makes mass the analog of capacitance, but the capacitor representing a mass always has one terminal connected to ground potential. In a real electrical network, capacitors can be connected between any two arbitrary potentials; they are not limited to ground. Noticing this, Smith set about finding a mechanical device that was a true analog of a capacitor. He found that he could construct such a device using gears and flywheels, one of several possible methods. Discovery: The constitutive equation is $F=b(\dot{v}_2-\dot{v}_1)$, where the constant b is the inertance and has units of mass. Construction: A linear inerter can be constructed by meshing a flywheel with a rack gear. The pivot of the flywheel forms one terminal of the device, and the rack gear forms the other. A rotational inerter can be constructed by meshing a flywheel with the ring gear of a differential. The side gears of the differential form the two terminals. Applications: Shortly after its discovery, the inerter principle was used under the name of J-damper in the suspension systems of Formula 1 racing cars. When tuned to the natural oscillation frequencies of the tires, the inerter reduced the mechanical load on the suspension. McLaren Mercedes began using a J-damper in early 2005, and Renault shortly thereafter. J-dampers were at the center of the 2007 Formula One espionage controversy, which arose when Phil Mackereth left McLaren for Renault. Applications: Researchers are developing new vibration-control devices based on inerters to build high-rise skyscrapers which can withstand high winds.
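To make the constitutive law concrete, consider the rack-and-pinion construction described above, under the simplifying assumption that the flywheel sits directly on the pinion shaft (geared-up designs multiply the effect further):

```latex
% Rack-and-pinion inerter: relative displacement x between the terminals,
% pinion radius r, flywheel moment of inertia J on the pinion shaft.
\theta = \frac{x}{r}, \qquad
\tau = J\ddot{\theta} = \frac{J\ddot{x}}{r}, \qquad
F = \frac{\tau}{r} = \frac{J}{r^2}\,\ddot{x}
% Comparing with F = b * (relative acceleration) gives the inertance
% b = J / r^2, which can greatly exceed the device's own mass.
```

This is why a small, light flywheel behind a small pinion can mimic a large mass, which is the practical appeal of the device.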
**Tetrahedron Computer Methodology** Tetrahedron Computer Methodology: Tetrahedron Computer Methodology was a short-lived journal that was published by Pergamon Press (now Elsevier) to experiment with electronic submission of articles in the ChemText format, and with the sharing of source code to enable reproducibility. It was the first chemical journal to be published electronically, with issues distributed in print and on floppy disks. It is likely it was also the first journal to accept submissions in a non-paper format (on floppy disks). The journal ceased publication owing to technical and non-technical reasons, and may have lacked sufficient institutional support. The last issue appeared in 1992 but was dated 1990.
**Catering** Catering: Catering is the business of providing food service at a remote site or a site such as a hotel, hospital, pub, aircraft, cruise ship, park, festival, filming location or film studio. History of catering: The earliest account of major services being catered in the United States was an event for William Howe of Philadelphia in 1778. The event served local foods that were a hit with the attendees, who eventually popularized catering as a career. The official industry began to be recognized around the 1820s, with the caterers being disproportionately African-American. The catering business began to form around 1820, centered in Philadelphia. History of catering: Robert Bogle The industry began to professionalize under Robert Bogle, who is recognized as "the originator of catering." Catering was originally done by servants of wealthy elites. Butlers and house slaves, who were often black, were in a good position to become caterers. Essentially, caterers in the 1860s were "public butlers", as they organized and executed the food aspect of a social gathering. A public butler was a butler working for several households. Bogle took on the role of public butler and took advantage of the food service market in the hospitality field. Caterers like Bogle were involved with events likely to be catered today, such as weddings and funerals. Bogle is also credited with creating the Guild of Caterers and helping train other black caterers. This is important because catering provided not only jobs to black people but also opportunities to connect with elite members of Philadelphia society. Over time, the clientele of caterers became the middle class, who could not afford lavish gatherings, and increasing competition from white caterers led to a decline in black catering businesses. History of catering: Evolution of Catering By the 1840s, many restaurant owners began to combine catering services with their shops. Second-generation caterers grew the industry on the East Coast, and catering became more widespread. Common usage of the word "caterer" came about in the 1880s, at which point local directories began to use the term to describe the industry. White businessmen took over the industry by the 1900s, and the black catering community largely disappeared. In the 1930s, the Soviet Union, creating simpler menus, began developing state public catering establishments as part of its collectivization policies. A rationing system was implemented during World War II, and people became used to public catering. After the Second World War, many businessmen embraced catering as an alternative way of staying in business. By the 1960s, home-made food was overtaken by eating in public catering establishments. By the 2000s, personal chef services started gaining popularity, with more women entering the workforce. People between 15 and 24 years of age spent as little as 11–17 minutes daily on food preparation and clean-up activities in 2006–2016, according to figures revealed by the American Time Use Survey conducted by the US Bureau of Labor Statistics. There are many types of catering, including event catering, wedding catering and corporate catering. Event catering: An event caterer serves food at indoor and outdoor events, including corporate events and parties at home. Mobile catering: A mobile caterer serves food directly from a vehicle, cart or truck which is designed for the purpose.
Mobile catering is common at outdoor events such as concerts, at workplaces, and in downtown business districts. Mobile catering services have lower maintenance costs than other catering services. Mobile caterers may also be known as food trucks in some areas. Seat-back catering: Seat-back catering was a service offered by some charter airlines in the United Kingdom (e.g., Court Line, which introduced the idea in the early 1970s, and Dan-Air) that involved embedding two meals in a single seat-back tray. "One helping was intended for each leg of a charter flight", but Alan Murray, of Viking Aviation, had earlier revealed that "with the ingenious use of a nail file or coin, one could open the inbound meal and have seconds". The intention of participating airlines was to "save money, reduce congestion in the cabin and give punters the chance to decide when to eat their meal". By requiring less galley space on board, the planes could offer more passenger seats. According to TravelUpdate's columnist, "The Flight Detective", "Salads and sandwiches were the usual staples," and "a small pellet of dry ice was put into the compartment for the return meal to try to keep it fresh." However, in addition to the fact that passengers on one leg were able to consume the food intended for other passengers on the following leg, there was a "food hygiene" problem, and the concept was discontinued by 1975. Canapé catering: A canapé caterer serves canapés at events. They have become a popular type of food at events, Christmas parties and weddings. Canapé catering: A canapé is a type of hors d'oeuvre, a small, prepared, and often decorative food, consisting of a small piece of bread or pastry. Canapés should be easy to pick up and no bigger than one or two bites. The bite-sized food is usually served before the starter or main course, or alone with drinks at a drinks party. Wedding catering: A wedding caterer provides food for a wedding reception and party, traditionally called a wedding breakfast. A wedding caterer can be hired independently or can be part of a package designed by the venue. There are many different types of wedding caterers, each with its own approach to food. Shipboard catering: Merchant ships – especially ferries, cruise liners, and large cargo ships – often carry catering officers. In fact, the term "catering" was in use in the world of the merchant marine long before it became established as a land-bound business.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Creative Mythology** Creative Mythology: Creative Mythology is Volume IV of the comparative mythologist Joseph Campbell's The Masks of God. The book concerns "creative mythology", Campbell's term for the efforts by an individual to communicate his experience through signs, an attempt that can become "living myth". Summary: Campbell writes that in "creative mythology", "the individual has had an experience of his own - of order, horror, beauty, or even mere exhilaration - which he seeks to communicate through signs; and if his realization has been of a certain depth and import, his communication will have the force and value of living myth - for those, that is to say, who receive and respond to it of themselves, with recognition, uncoerced." Campbell gives Thomas Mann and James Joyce as examples.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Impedance control** Impedance control: Impedance control is an approach to dynamic control relating force and position. It is often used in applications where a manipulator interacts with its environment and the force-position relation is of concern. Examples of such applications include humans interacting with robots, where the force produced by the human relates to how fast the robot should move/stop. Simpler control methods, such as position control or torque control, perform poorly when the manipulator experiences contacts, so impedance control is commonly used in these settings. Impedance control: Mechanical impedance is the ratio of force output to motion input. This is analogous to electrical impedance, the ratio of voltage output to current input (e.g. resistance is voltage divided by current). A "spring constant" defines the force output for a displacement (extension or compression) of the spring. A "damping constant" defines the force output for a velocity input. If we control the impedance of a mechanism, we are controlling the force of resistance to external motions that are imposed by the environment. Impedance control: Mechanical admittance is the inverse of impedance - it defines the motions that result from a force input. If a mechanism applies a force to the environment, the environment will move, or not move, depending on its properties and the force applied. For example, a marble sitting on a table will react very differently to a given force than will a log floating in a lake. Impedance control: The key theory behind the method is to treat the environment as an admittance and the manipulator as an impedance. It rests on the postulate that "no controller can make the manipulator appear to the environment as anything other than a physical system." This rule of thumb can also be stated as: "in the most common case in which the environment is an admittance (e.g. a mass, possibly kinematically constrained) that relation should be an impedance, a function, possibly nonlinear, dynamic, or even discontinuous, specifying the force produced in response to a motion imposed by the environment." Principle: Impedance control doesn't simply regulate the force or position of a mechanism. Instead it regulates the relationship between force and position on the one hand, and velocity and acceleration on the other hand, i.e. the impedance of the mechanism. It requires a position (velocity or acceleration) as input and produces a force as output. The inverse of impedance is admittance, which takes force as input and imposes position. Principle: In effect, the controller imposes a spring-mass-damper behavior on the mechanism by maintaining a dynamic relationship between the force F and the position, velocity and acceleration (x, v, a): F = Ma + Cv + Kx + f + s, with f being friction and s being static force. Principle: Masses (M) and springs (with stiffness K) are energy-storing elements, whereas a damper (with damping C) is an energy-dissipating device. If we can control impedance, we are able to control the energy exchange during interaction, i.e. the work being done. So impedance control is interaction control. Note that mechanical systems are inherently multi-dimensional - a typical robot arm can place an object in three dimensions ((x, y, z) coordinates) and in three orientations (e.g. roll, pitch, yaw). In theory, an impedance controller can cause the mechanism to exhibit a multi-dimensional mechanical impedance. For example, the mechanism might act very stiff along one axis and very compliant along another. By compensating for the kinematics and inertias of the mechanism, we can orient those axes arbitrarily and in various coordinate systems. For example, we might cause a robotic part holder to be very stiff tangentially to a grinding wheel, while being very compliant (controlling force with little concern for position) along the radial axis of the wheel. Mathematical Basics: Joint space An uncontrolled robot can be expressed in Lagrangian formulation as τ = M(q)q̈ + c(q, q̇) + g(q) + h(q, q̇) + τext (1), where q denotes the joint angular position, M is the symmetric and positive-definite inertia matrix, c the Coriolis and centrifugal torque, g the gravitational torque, h includes further torques from, e.g., inherent stiffness, friction, etc., and τext summarizes all the external forces from the environment. The actuation torque τ on the left side is the input variable to the robot. Mathematical Basics: One may propose a control law of the following form: τ = K(qd − q) + D(q̇d − q̇) + M̂(q)q̈d + ĉ(q, q̇) + ĝ(q) + ĥ(q, q̇) (2), where qd denotes the desired joint angular position, K and D are the control parameters, and M̂, ĉ, ĝ, and ĥ are the internal model of the corresponding mechanical terms. Inserting (2) into (1) gives the equation of the closed-loop system (controlled robot): K(qd − q) + D(q̇d − q̇) + M(q)(q̈d − q̈) = τext. Mathematical Basics: Letting e = qd − q, one obtains Ke + Dė + Më = τext. Since the matrices K and D have the dimensions of stiffness and damping, they are commonly referred to as the stiffness and damping matrix, respectively. Clearly, the controlled robot is essentially a multi-dimensional mechanical impedance (mass-spring-damper) to the environment, which is addressed by τext. Task space The same principle also applies to task space. An uncontrolled robot has the following task-space representation in Lagrangian formulation: F = Λ(q)ẍ + μ(x, ẋ) + γ(q) + η(q, q̇) + Fext, where q denotes joint angular position, x task-space position, and Λ the symmetric and positive-definite task-space inertia matrix. The terms μ, γ, η, and Fext are the generalized forces of the Coriolis and centrifugal term, the gravitation, further nonlinear terms, and environmental contacts. Note that this representation only applies to robots with non-redundant kinematics. The generalized force F on the left side corresponds to the input torque of the robot. Mathematical Basics: Analogously, one may propose the following control law: F = Kx(xd − x) + Dx(ẋd − ẋ) + Λ̂(q)ẍd + μ̂(q, q̇) + γ̂(q) + η̂(q, q̇), where xd denotes the desired task-space position, Kx and Dx are the task-space stiffness and damping matrices, and Λ̂, μ̂, γ̂, and η̂ are the internal model of the corresponding mechanical terms. Mathematical Basics: Similarly, with ex = xd − x one obtains the closed-loop system Kx ex + Dx ėx + Λ ëx = Fext (3), which is essentially a multi-dimensional mechanical impedance to the environment (Fext) as well. Thus, one can choose the desired impedance (mainly stiffness) in the task space. For example, one may want to make the controlled robot act very stiff along one direction while relatively compliant along others, by choosing a diagonal stiffness matrix Kx with a large entry (say 1000 N/m) for the stiff direction and much smaller entries for the others, assuming the task space is a three-dimensional Euclidean space. The damping matrix Dx is usually chosen such that the closed-loop system (3) is stable. Applications: Impedance control is used in applications such as robotics as a general strategy to send commands to a robotic arm and end effector that takes into account the non-linear kinematics and dynamics of the object being manipulated.
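To make the joint-space law above concrete, the following is a minimal single-joint sketch in Python. It is not from the source text: the inertia, gains, set-point and wall stiffness are invented placeholder values, and gravity, Coriolis and friction terms are omitted for a single horizontal joint.

```python
# Minimal 1-DOF sketch of the joint-space impedance law described above.
# All numeric values are illustrative placeholders, not from the article.

M = 2.0             # link inertia (kg·m²), assumed perfectly known (M̂ = M)
K, D = 50.0, 10.0   # chosen stiffness and damping: the commanded impedance
q_d = 1.0           # desired joint position (rad); desired velocity/accel = 0

def wall_torque(q):
    """Environment: a stiff virtual wall at q = 0.8 rad (an assumption)."""
    k_wall = 500.0
    return -k_wall * (q - 0.8) if q > 0.8 else 0.0

q, dq = 0.0, 0.0
dt = 1e-3
for _ in range(5000):
    tau = K * (q_d - q) + D * (0.0 - dq)   # control law (2) with q̈d = 0
    ddq = (tau + wall_torque(q)) / M       # plant: M·q̈ = τ + wall reaction
    dq += ddq * dt                         # semi-implicit Euler integration
    q += dq * dt

# At rest, K(q_d − q) balances the wall reaction: the robot yields instead of
# fighting for exact position, which is the essence of impedance control.
print(f"steady-state q ≈ {q:.3f} rad (commanded {q_d} rad)")
```

The steady state lands near 0.818 rad rather than the commanded 1.0 rad: the controller renders a programmable spring-damper, trading position error for bounded contact force.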
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acoustic wave equation** Acoustic wave equation: In physics, the acoustic wave equation governs the propagation of acoustic waves through a material medium, or equivalently a standing wavefield. It takes the form of a second-order partial differential equation describing the evolution of the acoustic pressure p or particle velocity u as a function of position x and time t. A simplified (scalar) form of the equation describes acoustic waves in only one spatial dimension, while a more general form describes waves in three dimensions. Propagating waves in a pre-defined direction can also be calculated using a first-order one-way wave equation. For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms; see also the acoustic attenuation article or the survey paper. In one dimension: Equation The wave equation describing a standing wave field in one dimension (position x) is ∂²p/∂x² − (1/c²) ∂²p/∂t² = 0, where p is the acoustic pressure (the local deviation from the ambient pressure) and c is the speed of sound. In one dimension: Solution Provided that the speed c is a constant, not dependent on frequency (the dispersionless case), then the most general solution is p = f(ct − x) + g(ct + x), where f and g are any two twice-differentiable functions. This may be pictured as the superposition of two waveforms of arbitrary profile, one (f) travelling up the x-axis and the other (g) down the x-axis at the speed c. The particular case of a sinusoidal wave travelling in one direction is obtained by choosing either f or g to be a sinusoid, and the other to be zero, giving p = p₀ sin(ωt ∓ kx), where ω is the angular frequency of the wave and k is its wave number. In one dimension: Derivation The derivation of the wave equation involves three steps: derivation of the equation of state, the linearized one-dimensional continuity equation, and the linearized one-dimensional force equation. In one dimension: The equation of state (ideal gas law) is PV = nRT. In an adiabatic process, the pressure P as a function of the density ρ can be linearized to P = Cρ, where C is some constant. Breaking the pressure and density into their mean and total components and noting that C = ∂P/∂ρ gives P − P₀ = (∂P/∂ρ)(ρ − ρ₀). The adiabatic bulk modulus for a fluid is defined as B = ρ₀ (∂P/∂ρ)adiabatic, which gives the result P − P₀ = B (ρ − ρ₀)/ρ₀. Condensation, s, is defined as the change in density for a given ambient fluid density: In one dimension: s = (ρ − ρ₀)/ρ₀. The linearized equation of state becomes p = Bs, where p is the acoustic pressure (P − P₀). The continuity equation (conservation of mass) in one dimension is ∂ρ/∂t + ∂(ρu)/∂x = 0, where u is the flow velocity of the fluid. Again the equation must be linearized and the variables split into mean and variable components: In one dimension: ∂(ρ₀ + ρ₀s)/∂t + ∂(ρ₀u + ρ₀su)/∂x = 0. Rearranging, and noting that the ambient density changes with neither time nor position and that the condensation multiplied by the velocity is a very small number, this reduces to ∂s/∂t + ∂u/∂x = 0. Euler's force equation (conservation of momentum) is the last needed component. In one dimension the equation is ρ Du/Dt + ∂P/∂x = 0, where D/Dt represents the convective, substantial or material derivative, which is the derivative at a point moving along with the medium rather than at a fixed point. In one dimension: Linearizing the variables gives (ρ₀ + ρ₀s)(∂/∂t + u ∂/∂x)u + ∂(P₀ + p)/∂x = 0. Rearranging and neglecting small terms, the resultant equation becomes the linearized one-dimensional Euler equation: ρ₀ ∂u/∂t + ∂p/∂x = 0. Taking the time derivative of the continuity equation and the spatial derivative of the force equation results in ∂²s/∂t² + ∂²u/∂x∂t = 0 and ρ₀ ∂²u/∂x∂t + ∂²p/∂x² = 0. Multiplying the first by ρ₀, subtracting the two, and substituting the linearized equation of state gives −(ρ₀/B) ∂²p/∂t² + ∂²p/∂x² = 0. The final result is ∂²p/∂x² − (1/c²) ∂²p/∂t² = 0, where c = √(B/ρ₀) is the speed of propagation. In three dimensions: Equation Feynman provides a derivation of the wave equation for sound in three dimensions as ∇²p − (1/c²) ∂²p/∂t² = 0, where ∇² is the Laplace operator, p is the acoustic pressure (the local deviation from the ambient pressure), and c is the speed of sound. In three dimensions: A similar-looking wave equation, but for the vector field particle velocity, is given by ∇²u − (1/c²) ∂²u/∂t² = 0. In some situations, it is more convenient to solve the wave equation for an abstract scalar field velocity potential, which has the form ∇²Φ − (1/c²) ∂²Φ/∂t² = 0, and then derive the physical quantities particle velocity and acoustic pressure by the equations (or definition, in the case of particle velocity) u = ∇Φ and p = −ρ ∂Φ/∂t. Solution The following solutions are obtained by separation of variables in different coordinate systems. They are phasor solutions, that is they have an implicit time-dependence factor of e^(iωt), where ω = 2πf is the angular frequency. The explicit time dependence is given by Re[p(r, k) e^(iωt)]. Here k = ω/c is the wave number. In three dimensions: Cartesian coordinates: p(r, k) = A e^(±ikr). Cylindrical coordinates: p(r, k) = A H₀⁽¹⁾(kr) + B H₀⁽²⁾(kr), where the asymptotic approximations to the Hankel functions, when kr → ∞, are H₀⁽¹⁾(kr) ≃ √(2/(πkr)) e^(i(kr − π/4)) and H₀⁽²⁾(kr) ≃ √(2/(πkr)) e^(−i(kr − π/4)). Spherical coordinates: p(r, k) = (A/r) e^(±ikr). Depending on the chosen Fourier convention, one of these represents an outward-travelling wave and the other an unphysical inward-travelling wave. The inward-travelling solution is unphysical only because of the singularity that occurs at r = 0; inward-travelling waves do exist.
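As a numerical illustration of the one-dimensional equation above, here is a sketch (not part of the source article) using a standard centered finite-difference scheme; the grid resolution, sound speed and Gaussian initial pulse are arbitrary choices.

```python
import numpy as np

# Integrate  ∂²p/∂x² − (1/c²) ∂²p/∂t² = 0  on [0, 1] with p = 0 at both ends.
c = 1.0                        # speed of sound (arbitrary units)
nx, nt = 201, 400
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / c              # Courant number 0.5 < 1 keeps the scheme stable

x = np.linspace(0.0, 1.0, nx)
p_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial Gaussian pressure pulse
p = p_prev.copy()                          # zero initial velocity: p(dt) ≈ p(0)

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    p_next = np.zeros_like(p)
    # centered second differences in both x and t:
    p_next[1:-1] = (2.0 * p[1:-1] - p_prev[1:-1]
                    + r2 * (p[2:] - 2.0 * p[1:-1] + p[:-2]))
    p_prev, p = p, p_next

# The pulse splits into f(ct − x) and g(ct + x) halves travelling apart,
# matching the general d'Alembert solution quoted above.
print("peak pressure after propagation:", float(p.max()))
```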
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crack arrestor** Crack arrestor: A crack arrestor (otherwise known as a rip-stop doubler) is a structural engineering device. Typically shaped as a ring or strip and composed of a strong material, it serves to contain stress corrosion cracking or fatigue cracking, helping to prevent the catastrophic failure of a device. The crack arrestor can be as simple as a thickened region of metal, or may be constructed of a laminated or woven material that can be designed to withstand deformation without failure. When correctly applied, the technique is capable of redirecting movement and safely distributing stresses. The crack arrestor is considered to be compatible with fail-safe design practices. Applications: Crack arrestors have seen extensive use in the aviation sector, particularly on large pressurised aircraft, as a means of guarding against progressive metal fatigue. Specifically, the skin of the fuselage typically has a large number of high stress locations, riveting being a leading cause, making these points of potential crack initiation. Calculations are frequently used to simulate crack propagation, as well as the effectiveness of mitigating measures, such as crack arrestors, in ensuring the aircraft can be safely operated. Following two catastrophic airframe failures in 1954, crack arrestors were used as additional reinforcement of the fuselage of the de Havilland Comet, although this was only one of several design changes made to address structural design weaknesses related to metal fatigue and skin stresses that had been previously unknown to the aviation industry. Naval vessels are another place where crack arrestors have been extensively used. As of the 2010s, the United States Navy frequently applies them to areas of a ship that have been damaged or otherwise received repairs, in order to ensure that the affected element is not lacking in either strength or durability. It has been acknowledged that ships primarily composed of aluminium are significantly more prone to crack propagation than older steel counterparts; thus, the use of mitigating measures is likely to become more commonplace. Crack arrestors have also been used in civil engineering. They have long been used in the nuclear industry as a structural element of reactors. Numerous pipelines used for transporting chemicals have been reinforced with such devices to protect against bursting and exterior damage alike. While commonly applied to metal alloys, appropriately designed crack arrestors have been used with composite materials as well. During 2008, Airbus Group was awarded a patent for a new design technique for a crack arrestor component.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electropositive shark repellent** Electropositive shark repellent: Electropositive metals (EPMs) are a new class of shark repellent materials that produce a measurable voltage when immersed in an electrolyte such as seawater. The voltages produced are as high as 1.75 VDC in seawater. It is hypothesized that this voltage overwhelms the ampullary organ in sharks, producing a repellent action. Since bony fish lack the ampullary organ, the repellent is selective to sharks and rays. The process is electrochemical, so no external power input is required. As chemical work is done, the metal is lost in the form of corrosion. Depending on the alloy or metal utilized and its thickness, the electropositive repellent effect lasts up to 48 hours. The reaction of the electropositive metal in seawater produces hydrogen gas bubbles and an insoluble nontoxic hydroxide as a precipitate, which settles downward in the water column. History: SharkDefense made the discovery of electrochemical shark repellent effects on May 1, 2006 at South Bimini, Bahamas, at the Bimini Biological Field Station. An electropositive metal, which was a component of a permanent magnet, was chosen as an experimental control for a tonic immobility experiment by Eric Stroud using a juvenile lemon shark (Negaprion brevirostris). It was anticipated that this metal would produce no effect, since it was not ferromagnetic. However, a violent rousing response was observed when the metal was brought within 50 cm of the shark's nose. The experiment was repeated with three other juvenile lemon sharks and two other juvenile nurse sharks (Ginglymostoma cirratum), and care was taken to eliminate all stray metal objects in the testing site. Patrick Rice, Michael Herrmann, and Eric Stroud were present at this first trial. Mike Rowe, from Discovery Channel's Dirty Jobs series, subsequently witnessed and participated in a test using an electropositive metal within 24 hours after the discovery. In the next three months, a variety of transition metals, lanthanides, post-transition metals, metalloids, and non-metal samples were screened for rousing activity using the tonic immobility bioassay in juvenile lemon sharks and juvenile nurse sharks. All behaviors were scored from 0 to 4 depending on the response. It was determined that Group I, II, III, and lanthanide metals all produced rousing responses, but the average score generally increased with electropositivity. Further testing using salt-bridge electrochemical cells was conducted during 2006 and 2007 at the Oak Ridge Shark Lab. Using seawater as the electrolyte and a shark fin clipping as the cathode, the voltages measured closely correlated with the standard reduction potential of the metal under test. SharkDefense now hypothesizes that a net positive charge from the cations produced by the electropositive metals accumulates on the electronegative skin of the shark. The net increase of the charge on the shark's skin is perceived by the ampullae of Lorenzini, and above a 1.2 V potential, aversion is produced. History: Electropositive metals are reducing agents and liberate hydrogen gas in seawater via hydrolysis, producing a half-cell voltage of about −0.86 V. Simultaneously, an insoluble metal hydroxide precipitate is produced, which is inert for shark repellent activity. As such, metal is lost to corrosion in the process of generating cations.
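As an illustrative back-of-the-envelope check (not from the article): treating the electropositive metal as the anode and hydrogen evolution at about −0.86 V as the cathode half-reaction, the open-circuit cell voltage is roughly the difference of the two half-cell potentials. The metal potentials below are approximate textbook standard reduction potentials assumed for the sketch.

```python
# Rough open-circuit voltage of a metal/seawater cell: E_cell = E_cathode - E_anode.
# All numbers are approximate assumptions, not measurements from the article.
E_H2 = -0.86                                        # V, water reduction (per the text)
E_metal = {"Nd": -2.32, "Mg": -2.37, "Al": -1.66}   # V, approximate E° values

for metal, E in E_metal.items():
    print(f"{metal}: ~{E_H2 - E:.2f} V")
# Nd and Mg give ~1.5 V, above the ~1.2 V aversion threshold mentioned above;
# a less electropositive metal like Al falls short of it.
```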
SharkDefense conducted corrosion loss studies in 2008 at South Bimini, Bahamas, and found that a 70 gram piece of a custom electropositive alloy retained more than 50% of its original weight after 70 hours of immersion. Losses due to corrosion are strongly a function of temperature; therefore, the cold seawater at fishing depths serves to reduce the corrosion rate. Research and testing: Stoner and Kaimmer (2008) reported success using cerium mischmetal and Pacific spiny dogfish (Squalus acanthias, a type of shark) in captivity, both with tonic immobility and feeding preference tests. Lead metal was used as a control. Encouraged by the results, a longline study was conducted off Homer, Alaska in late 2007 with the cooperation of the International Pacific Halibut Commission. Again, lead was used as a control. This study found a 17% reduction in Pacific spiny dogfish catch, and a 48% reduction in clearnose skate catch. Research and testing: However, Tallack et al. reported that cerium mischmetal was entirely ineffective against Atlantic spiny dogfish in the Gulf of Maine. Mandelman et al. reported that the repellent effect disappeared after starvation using captive Atlantic spiny dogfish, and that a species-specific variation in response to the mischmetals exists between captive Atlantic spiny dogfish and dusky smoothhounds (Mustelus canis). Stroud (SharkDefense, 2006) and Fisher (VIMS) observed captive cownose rays (Rhinoptera bonasus) changing swim elevation and ignoring blue crab baits in cages that contained neodymium-praseodymium mischmetal. The positions of the treatment cages were alternated, and all cages were placed in the swim path of the rays. Research and testing: Brill et al. (2008) reported that captive juvenile sandbar sharks (Carcharhinus plumbeus) maintained a 50–60 cm clearance in their swimming patterns when a piece of neodymium-praseodymium mischmetal was placed in the tank. Wang, Swimmer, and Laughton (2007) reported aversive responses to neodymium-praseodymium mischmetals placed near baits offered to adult Galapagos (C. galapagensis) and sandbar sharks on bamboo poles in Hawaii. Research and testing: In July 2008, Richard Brill of NMFS/VIMS and SharkDefense both conducted more at-sea trials with electropositive metals in an effort to reduce shark bycatch in commercial fisheries. As of August 2, 2008, Brill reported nearly a 3:1 reduction in sandbar shark catch when plastic decoys were compared to metals. A high statistical significance was obtained, as reported in the Virginian-Pilot by Joanne Kimberlin. SharkDefense later developed a simple on-hook treatment and a bait attachment, which were being tested on Atlantic longlining vessels in 2008. Research and testing: Favaro and Cole (2013) determined through meta-analysis that electropositive metals did not reduce elasmobranch by-catch in commercial long-line fisheries, which raises concerns about the effectiveness of this approach as a shark deterrent or repellent to protect water users. Selectivity: As expected, teleosts (bony fish) are not repelled by the electropositive metal's cation liberation in seawater. This is because teleosts lack the ampullae of Lorenzini. Teleost response was confirmed using captive cobia (Rachycentron canadum) and Pacific halibut (Hippoglossus stenolepis). In July 2008, swordfish (Xiphias gladius) catch was reported on experimental hooks treated with electropositive metal. Limitations: As with all shark repellents, 100% effectiveness will not be achieved with electropositive metals.
The metals are particularly effective when the shark is relying on its electrosense. It is likely that electropositive metals are ineffective for deliberately stimulated (chummed) sharks, competitively feeding sharks, and shark "frenzies". The metals are very useful in the environment of commercial fisheries, and possibly recreational and artisanal fisheries.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Integration by reduction formulae** Integration by reduction formulae: In integral calculus, integration by reduction formulae is a method relying on recurrence relations. It is used when an expression containing an integer parameter, usually in the form of powers of elementary functions, or products of transcendental functions and polynomials of arbitrary degree, can't be integrated directly. But using other methods of integration a reduction formula can be set up to obtain the integral of the same or similar expression with a lower integer parameter, progressively simplifying the integral until it can be evaluated. This method of integration is one of the earliest used. How to find the reduction formula: The reduction formula can be derived using any of the common methods of integration, like integration by substitution, integration by parts, integration by trigonometric substitution, integration by partial fractions, etc. The main idea is to express an integral involving an integer parameter (e.g. power) of a function, represented by Iₙ, in terms of an integral that involves a lower value of the parameter (lower power) of that function, for example Iₙ₋₁ or Iₙ₋₂. This makes the reduction formula a type of recurrence relation. In other words, the reduction formula expresses the integral Iₙ = ∫ f(x, n) dx in terms of Iₖ = ∫ f(x, k) dx, where k < n. How to compute the integral: To compute the integral, we set n to its value and use the reduction formula to express it in terms of the (n − 1) or (n − 2) integral. The lower index integral can be used to calculate the higher index ones; the process is continued repeatedly until we reach a point where the function to be integrated can be computed, usually when its index is 0 or 1. Then we back-substitute the previous results until we have computed Iₙ. How to compute the integral: Examples Below are examples of the procedure. Cosine integral Typically, integrals like ∫ cosⁿx dx can be evaluated by a reduction formula. Start by setting Iₙ = ∫ cosⁿx dx. Now re-write this as Iₙ = ∫ cosⁿ⁻¹x cos x dx. Integrating by the substitution d(sin x) = cos x dx gives Iₙ = ∫ cosⁿ⁻¹x d(sin x). Now integrating by parts: Iₙ = cosⁿ⁻¹x sin x − ∫ sin x d(cosⁿ⁻¹x) = cosⁿ⁻¹x sin x + (n − 1) ∫ sin²x cosⁿ⁻²x dx = cosⁿ⁻¹x sin x + (n − 1) ∫ (1 − cos²x) cosⁿ⁻²x dx = cosⁿ⁻¹x sin x + (n − 1)Iₙ₋₂ − (n − 1)Iₙ. Solving for Iₙ: n Iₙ = cosⁿ⁻¹x sin x + (n − 1)Iₙ₋₂, so Iₙ = (1/n) cosⁿ⁻¹x sin x + ((n − 1)/n) Iₙ₋₂, and the reduction formula is ∫ cosⁿx dx = (1/n) cosⁿ⁻¹x sin x + ((n − 1)/n) ∫ cosⁿ⁻²x dx. To supplement the example, the above can be used to evaluate the integral for (say) n = 5: I₅ = ∫ cos⁵x dx. Calculating lower indices: I₅ = (1/5) cos⁴x sin x + (4/5)I₃ and I₃ = (1/3) cos²x sin x + (2/3)I₁. Back-substituting: I₁ = ∫ cos x dx = sin x + C₁, so I₃ = (1/3) cos²x sin x + (2/3) sin x + C₂, with C₂ = (2/3)C₁, and I₅ = (1/5) cos⁴x sin x + (4/5)[(1/3) cos²x sin x + (2/3) sin x] + C = (1/5) cos⁴x sin x + (4/15) cos²x sin x + (8/15) sin x + C, where C is a constant. Exponential integral Another typical example is ∫ xⁿ e^(ax) dx. Start by setting Iₙ = ∫ xⁿ e^(ax) dx. Integrating by the substitution xⁿ dx = d(xⁿ⁺¹)/(n + 1) gives Iₙ = (1/(n + 1)) ∫ e^(ax) d(xⁿ⁺¹). Now integrating by parts: ∫ e^(ax) d(xⁿ⁺¹) = xⁿ⁺¹ e^(ax) − ∫ xⁿ⁺¹ d(e^(ax)) = xⁿ⁺¹ e^(ax) − a ∫ xⁿ⁺¹ e^(ax) dx, so (n + 1)Iₙ = xⁿ⁺¹ e^(ax) − a Iₙ₊₁. Shifting indices back by 1 (so n + 1 → n, n → n − 1): n Iₙ₋₁ = xⁿ e^(ax) − a Iₙ. Solving for Iₙ: Iₙ = (1/a)(xⁿ e^(ax) − n Iₙ₋₁), so the reduction formula is ∫ xⁿ e^(ax) dx = (1/a)(xⁿ e^(ax) − n ∫ xⁿ⁻¹ e^(ax) dx). An alternative derivation starts by substituting for e^(ax). Integration by substitution, using e^(ax) dx = d(e^(ax))/a, gives Iₙ = (1/a) ∫ xⁿ d(e^(ax)). Now integrating by parts: ∫ xⁿ d(e^(ax)) = xⁿ e^(ax) − ∫ e^(ax) d(xⁿ) = xⁿ e^(ax) − n ∫ e^(ax) xⁿ⁻¹ dx, which gives the reduction formula when substituting back: Iₙ = (1/a)(xⁿ e^(ax) − n Iₙ₋₁), which is equivalent to ∫ xⁿ e^(ax) dx = (1/a)(xⁿ e^(ax) − n ∫ xⁿ⁻¹ e^(ax) dx). Yet another alternative is to integrate by parts directly: for Iₙ = ∫ xⁿ e^(ax) dx, take u = xⁿ and dv = e^(ax) dx, so that du = n xⁿ⁻¹ dx and v = e^(ax)/a. Then Iₙ = xⁿ e^(ax)/a − ∫ n xⁿ⁻¹ (e^(ax)/a) dx = xⁿ e^(ax)/a − (n/a) ∫ xⁿ⁻¹ e^(ax) dx. Remembering that Iₙ₋₁ = ∫ xⁿ⁻¹ e^(ax) dx, this gives Iₙ = xⁿ e^(ax)/a − (n/a) Iₙ₋₁, i.e. the same reduction formula Iₙ = (1/a)(xⁿ e^(ax) − n Iₙ₋₁), which is equivalent to ∫ xⁿ e^(ax) dx = (1/a)(xⁿ e^(ax) − n ∫ xⁿ⁻¹ e^(ax) dx). Tables of integral reduction formulas: Rational functions The following integrals contain: factors of the linear radical √(ax + b); linear factors px + q and the linear radical √(ax + b); quadratic factors x² + a²; quadratic factors x² − a², for x > a; quadratic factors a² − x², for x < a; (irreducible) quadratic factors ax² + bx + c; radicals of irreducible quadratic factors √(ax² + bx + c). Note that by the laws of indices, Iₙ₊₁⸝₂ = I₍₂ₙ₊₁₎⸝₂ = ∫ dx/(ax² + bx + c)^((2n+1)/2) = ∫ dx/√((ax² + bx + c)^(2n+1)). Transcendental functions The following integrals contain: factors of sine; factors of cosine; factors of sine and cosine products and quotients; products/quotients of exponential factors and powers of x; products of exponential and sine/cosine factors.
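As a quick machine check of the cosine reduction formula derived above, the following sketch uses the third-party SymPy library (the check itself is not part of the original text):

```python
import sympy as sp

x = sp.symbols('x')

def I(k):
    """Antiderivative of cos(x)**k via the reduction formula
    I_n = (1/n) cos^(n-1)x sin x + ((n-1)/n) I_(n-2)."""
    if k == 0:
        return x                 # base case: integral of dx
    if k == 1:
        return sp.sin(x)         # base case: integral of cos x dx
    return (sp.cos(x) ** (k - 1) * sp.sin(x) / k
            + sp.Rational(k - 1, k) * I(k - 2))

direct = sp.integrate(sp.cos(x) ** 5, x)   # SymPy's own antiderivative of cos⁵x
print(sp.simplify(direct - I(5)))          # -> 0: the two agree up to a constant
```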
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hunt's Snack Pack** Hunt's Snack Pack: Hunt's Snack Pack is a pudding snack manufactured since 1977 by ConAgra Foods. About: Snack Packs were introduced in 1968 in single-serve aluminum/metal cans, before switching to plastic cups in 1984 and clear plastic cups in 1990. They are marketed as healthy treats for children. In the 1970s Snack Pack was sold in Australia via the Foster Clark company with the television slogan "if it wasn't for a Snack Pack, a kid'd starve". In popular culture: Snack Pack appears in the movie Billy Madison as the title character's favorite dessert. He is disappointed that Juanita packed him a banana instead of a Snack Pack in his lunch, so he attempts to take one from a schoolboy in exchange for his banana during lunch time, but fails. Billy eventually gets a whole pack of Snack Packs as a present from Miss Vaughn when celebrating passing the third grade. In popular culture: In episode 16 of season 3 of That '70s Show, Kitty Forman gives Fez and Hyde a pair of Snack Packs. However, instead of the period-accurate aluminum containers of the 1970s, the Snack Packs are in modern clear plastic containers. In episode 14 of season 2 of How I Met Your Mother, Marshall demands a Snack Pack from a child in Lily's kindergarten class after spraying his pants with juice for blackmailing him with the Super Bowl results. In popular culture: In episode 8 of season 1 of the Netflix series Stranger Things (Chapter 8: The Upside Down), Dustin and Lucas raid the school cafeteria for a hidden stash of chocolate Snack Packs. Flavors: Many of the flavors are available in "no sugar added" and "fat free" varieties. Also, Lemon and Lemon Meringue Pie contain no milk products.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Guarded suspension** Guarded suspension: In concurrent programming, guarded suspension is a software design pattern for managing operations that require both a lock to be acquired and a precondition to be satisfied before the operation can be executed. The guarded suspension pattern is typically applied to method calls in object-oriented programs, and involves suspending the method call, and the calling thread, until the precondition (acting as a guard) is satisfied. Usage: Because it is blocking, the guarded suspension pattern is generally only used when the developer knows that a method call will be suspended for a finite and reasonable period of time. If a method call is suspended for too long, then the overall program will slow down or stop, waiting for the precondition to be satisfied. If the developer knows that the method call suspension will be indefinite or for an unacceptably long period, then the balking pattern may be preferred. Implementation: In Java, the Object class provides the wait() and notify() methods to assist with guarded suspension. In a typical implementation, such as the one given in Kuchana (2004), if there is no precondition satisfied for the method call to be successful, then the method waits until the object finally enters a valid state; a sketch of the pattern appears below. Implementation: An example of an actual implementation would be a queue object with a get method that has a guard to detect when there are no items in the queue. Once the put method notifies the other methods (for example, a get method), then the get method can exit its guarded state and proceed with a call. Once the queue is empty, then the get method will enter a guarded state once again.
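The Java listing from Kuchana (2004) is not reproduced in this text. As a stand-in, here is a minimal Python analogue of the same pattern: threading.Condition bundles the lock with wait()/notify(), playing the role of Java's synchronized blocks and Object.wait()/notify(). The queue mirrors the get/put example described above.

```python
import threading
from collections import deque

class GuardedQueue:
    """Guarded suspension: get() suspends the caller until the
    'queue is non-empty' precondition (the guard) is satisfied."""

    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()   # lock + wait/notify in one object

    def get(self):
        with self._cond:                     # acquire the lock
            while not self._items:           # guard: re-check after each wakeup
                self._cond.wait()            # suspend until notified
            return self._items.popleft()

    def put(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify()              # wake one suspended get() call

# Usage sketch: the consumer blocks until the producer satisfies the guard.
q = GuardedQueue()
consumer = threading.Thread(target=lambda: print("got:", q.get()))
consumer.start()
q.put("hello")                               # releases the waiting consumer
consumer.join()
```

Note the while loop around wait(): re-testing the guard after every wakeup protects against spurious wakeups, exactly as recommended for Java's wait().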
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Olympus PEN E-PL9** Olympus PEN E-PL9: The Olympus PEN E-PL9 is a rangefinder-styled digital mirrorless interchangeable lens camera announced by Olympus Corp. in February 2018. It succeeds the Olympus PEN E-PL8. The E-PL9 was succeeded by the Olympus PEN E-PL10 announced in October 2019.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Woven coverlet** Woven coverlet: A woven coverlet or coverlid (derived from Cat. cobre-lit) is a type of bed covering with a woven design in colored wool yarn on a background of natural linen or cotton. Coverlets were woven in almost every community in the United States from the colonial era until the late 19th century. History: Coverlets of 18th century America were twill-woven with a linen warp and woolen weft. The wool was most often dyed a dark blue from indigo, but madder red, walnut brown, and a lighter "Williamsburg blue" were also used. History: From the turn of the 19th century, simple twill-woven coverlets gave way to patterned hand-woven coverlets made in two different ways. Overshot weave coverlets were made with a plain woven undyed cotton warp and weft and repeating geometric patterns made with a supplementary dyed woolen weft. Made on a simple four-harness loom, overshot coverlets were often made in the home and remained a common craft in rural Appalachia into the early 20th century. Double-cloth coverlets were double-woven, with two sets of interconnected warps and wefts, requiring the more elaborate looms of professional weavers. Wool for these coverlets was spun (and often dyed) at home and then delivered to a local weaver who made up the coverlet. Summer-winter coverlets were reversible, and the summer-winter term refers to the structure, not the color. The summer-winter coverlet should not be confused with double weave and is more closely related to overshot. Like double weave, it is dark on one side and light on the other, but there is only one layer of cloth, so it is much lighter in mass and thickness. History: Following the introduction of the jacquard loom in the early 1820s, machine-woven coverlets in large-scale floral designs became popular.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flatness problem** Flatness problem: The flatness problem (also known as the oldness problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time. Flatness problem: In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since any departure of the total density from the critical value would increase rapidly over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10⁶² or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value. Flatness problem: The problem was first mentioned by Robert Dicke in 1969. The most commonly accepted solution among cosmologists is cosmic inflation, the idea that the universe went through a brief period of extremely rapid expansion in the first fraction of a second after the Big Bang; along with the monopole problem and the horizon problem, the flatness problem is one of the three primary motivations for inflationary theory. Energy density and the Friedmann equation: According to Einstein's field equations of general relativity, the structure of spacetime is affected by the presence of matter and energy. On small scales space appears flat - as does the surface of the Earth if one looks at a small area. On large scales however, space is bent by the gravitational effect of matter. Since relativity indicates that matter and energy are equivalent, this effect is also produced by the presence of energy (such as light and other electromagnetic radiation) in addition to matter. The amount of bending (or curvature) of the universe depends on the density of matter/energy present. Energy density and the Friedmann equation: This relationship can be expressed by the first Friedmann equation. In a universe without a cosmological constant, this is H² = (8πG/3)ρ − kc²/a². Here H is the Hubble parameter, a measure of the rate at which the universe is expanding. ρ is the total density of mass and energy in the universe, a is the scale factor (essentially the 'size' of the universe), and k is the curvature parameter - that is, a measure of how curved spacetime is. A positive, zero or negative value of k corresponds to a respectively closed, flat or open universe. The constants G and c are Newton's gravitational constant and the speed of light, respectively. Energy density and the Friedmann equation: Cosmologists often simplify this equation by defining a critical density, ρc. For a given value of H, this is defined as the density required for a flat universe, i.e. k = 0. Thus the above equation implies ρc = 3H²/(8πG). Since the constant G is known and the expansion rate H can be measured by observing the speed at which distant galaxies are receding from us, ρc can be determined. Its value is currently around 10⁻²⁶ kg m⁻³.
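As a quick arithmetic check of the quoted figure (a sketch, not from the text; the Hubble constant is taken as an assumed round 70 km/s/Mpc):

```python
import math

# Critical density rho_c = 3H²/(8πG), with an assumed H ≈ 70 km/s/Mpc.
G = 6.674e-11            # m³ kg⁻¹ s⁻², Newton's gravitational constant
Mpc = 3.0857e22          # metres per megaparsec
H = 70e3 / Mpc           # Hubble parameter in s⁻¹

rho_c = 3 * H ** 2 / (8 * math.pi * G)
print(f"rho_c ≈ {rho_c:.1e} kg/m³")   # ≈ 9e-27, i.e. around 10⁻²⁶ as stated
```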
The ratio of the actual density to this critical value is called Ω, and its difference from 1 determines the geometry of the universe: Ω > 1 corresponds to a greater than critical density, ρ > ρc, and hence a closed universe. Ω < 1 gives a low density open universe, and Ω equal to exactly 1 gives a flat universe. Energy density and the Friedmann equation: The Friedmann equation, (3a²/(8πG))H² = ρa² − 3kc²/(8πG), can be re-arranged into ρca² − ρa² = −3kc²/(8πG), which after factoring out ρa², and using Ω = ρ/ρc, leads to (Ω⁻¹ − 1)ρa² = −3kc²/(8πG). The right hand side of the last expression above contains constants only, and therefore the left hand side must remain constant throughout the evolution of the universe. Energy density and the Friedmann equation: As the universe expands, the scale factor a increases, but the density ρ decreases as matter (or energy) becomes spread out. For the standard model of the universe, which contains mainly matter and radiation for most of its history, ρ decreases more quickly than a² increases, and so the factor ρa² will decrease. Since the time of the Planck era, shortly after the Big Bang, this term has decreased by a factor of around 10⁶⁰, and so (Ω⁻¹ − 1) must have increased by a similar amount to retain the constant value of their product. Current value of Ω: Measurement The value of Ω at the present time is denoted Ω0. This value can be deduced by measuring the curvature of spacetime (since Ω = 1, or ρ = ρc, is defined as the density for which the curvature k = 0). The curvature can be inferred from a number of observations. Current value of Ω: One such observation is that of anisotropies (that is, variations with direction - see below) in the Cosmic Microwave Background (CMB) radiation. The CMB is electromagnetic radiation which fills the universe, left over from an early stage in its history when it was filled with photons and a hot, dense plasma. This plasma cooled as the universe expanded, and when it cooled enough to form stable atoms it no longer absorbed the photons. The photons present at that stage have been propagating ever since, growing fainter and less energetic as they spread through the ever-expanding universe. Current value of Ω: The temperature of this radiation is almost the same at all points on the sky, but there is a slight variation (around one part in 100,000) between the temperature received from different directions. The angular scale of these fluctuations - the typical angle between a hot patch and a cold patch on the sky - depends on the curvature of the universe, which in turn depends on its density as described above. Thus, measurements of this angular scale allow an estimation of Ω0. Another probe of Ω0 is the frequency of Type-Ia supernovae at different distances from Earth. These supernovae, the explosions of degenerate white dwarf stars, are a type of standard candle; this means that the processes governing their intrinsic brightness are well understood, so that a measure of apparent brightness when seen from Earth can be used to derive accurate distance measures for them (the apparent brightness decreasing in proportion to the square of the distance - see luminosity distance). Comparing this distance to the redshift of the supernovae gives a measure of the rate at which the universe has been expanding at different points in history. Since the expansion rate evolves differently over time in cosmologies with different total densities, Ω0 can be inferred from the supernovae data.
Current value of Ω: Data from the Wilkinson Microwave Anisotropy Probe (WMAP, measuring CMB anisotropies) combined with that from the Sloan Digital Sky Survey and observations of type-Ia supernovae constrain Ω0 to be 1 within 1%. In other words, the term |Ω − 1| is currently less than 0.01, and therefore must have been less than 10⁻⁶² at the Planck era. The cosmological parameters measured by the Planck spacecraft mission reaffirmed the previous results from WMAP. Current value of Ω: Implication This tiny value is the crux of the flatness problem. If the initial density of the universe could take any value, it would seem extremely surprising to find it so 'finely tuned' to the critical value ρc. Indeed, a very small departure of Ω from 1 in the early universe would have been magnified during billions of years of expansion to create a current density very far from critical. In the case of an overdensity (ρ > ρc) this would lead to a universe so dense it would cease expanding and collapse into a Big Crunch (an opposite to the Big Bang in which all matter and energy falls back into an extremely dense state) in a few years or less; in the case of an underdensity (ρ < ρc) it would expand so quickly and become so sparse that it would soon seem essentially empty, and gravity would not be strong enough by comparison to cause matter to collapse and form galaxies, resulting in a big freeze. In either case the universe would contain no complex structures such as galaxies, stars, planets or any form of life. This problem with the Big Bang model was first pointed out by Robert Dicke in 1969, and it motivated a search for some reason the density should take such a specific value. Solutions to the problem: Some cosmologists agreed with Dicke that the flatness problem was a serious one, in need of a fundamental reason for the closeness of the density to criticality. But there was also a school of thought which denied that there was a problem to solve, arguing instead that since the universe must have some density it may as well have one close to ρc as far from it, and that speculating on a reason for any particular value was "beyond the domain of science". That, however, is a minority viewpoint, even among those sceptical of the existence of the flatness problem. Several cosmologists have argued that, for a variety of reasons, the flatness problem is based on a misunderstanding, but that seems to be widely ignored by many. Enough cosmologists saw the problem as a real one, however, for various solutions to be proposed. Solutions to the problem: Anthropic principle One solution to the problem is to invoke the anthropic principle, which states that humans should take into account the conditions necessary for them to exist when speculating about causes of the universe's properties. If two types of universe seem equally likely but only one is suitable for the evolution of intelligent life, the anthropic principle suggests that finding ourselves in that universe is no surprise: if the other universe had existed instead, there would be no observers to notice the fact. Solutions to the problem: The principle can be applied to solve the flatness problem in two somewhat different ways. The first (an application of the 'strong anthropic principle') was suggested by C. B. Collins and Stephen Hawking, who in 1973 considered the existence of an infinite number of universes such that every possible combination of initial properties was held by some universe.
In such a situation, they argued, only those universes with exactly the correct density for forming galaxies and stars would give rise to intelligent observers such as humans: therefore, the fact that we observe Ω to be so close to 1 would be "simply a reflection of our own existence." An alternative approach, which makes use of the 'weak anthropic principle', is to suppose that the universe is infinite in size, but with the density varying in different places (i.e. an inhomogeneous universe). Thus some regions will be over-dense (Ω > 1) and some under-dense (Ω < 1). These regions may be extremely far apart - perhaps so far that light has not had time to travel from one to another during the age of the universe (that is, they lie outside one another's cosmological horizons). Therefore, each region would behave essentially as a separate universe: if we happened to live in a large patch of almost-critical density we would have no way of knowing of the existence of far-off under- or over-dense patches, since no light or other signal has reached us from them. An appeal to the anthropic principle can then be made, arguing that intelligent life would only arise in those patches with Ω very close to 1, and that therefore our living in such a patch is unsurprising. This latter argument makes use of a version of the anthropic principle which is 'weaker' in the sense that it requires no speculation on multiple universes, or on the probabilities of various different universes existing instead of the current one. It requires only a single universe which is infinite - or merely large enough that many disconnected patches can form - and that the density varies in different regions (which is certainly the case on smaller scales, giving rise to galactic clusters and voids). Solutions to the problem: However, the anthropic principle has been criticised by many scientists. For example, in 1979 Bernard Carr and Martin Rees argued that the principle "is entirely post hoc: it has not yet been used to predict any feature of the Universe." Others have taken objection to its philosophical basis, with Ernan McMullin writing in 1994 that "the weak Anthropic principle is trivial ... and the strong Anthropic principle is indefensible." Since many physicists and philosophers of science do not consider the principle to be compatible with the scientific method, another explanation for the flatness problem was needed. Solutions to the problem: Inflation The standard solution to the flatness problem invokes cosmic inflation, a process whereby the universe expands exponentially quickly (i.e. a grows as e^(λt) with time t, for some constant λ) during a short period in its early history. The theory of inflation was first proposed in 1979, and published in 1981, by Alan Guth. His two main motivations for doing so were the flatness problem and the horizon problem, another fine-tuning problem of physical cosmology. However, "In December, 1980, when Guth was developing his inflation model, he was not trying to solve either the flatness or horizon problems. Indeed, at that time, he knew nothing of the horizon problem and had never quantitatively calculated the flatness problem. He was a particle physicist trying to solve the magnetic monopole problem." The proposed cause of inflation is a field which permeates space and drives the expansion.
The field contains a certain energy density, but unlike the density of the matter or radiation present in the late universe, which decreases over time, the density of the inflationary field remains roughly constant as space expands. Therefore, the term ρa² increases extremely rapidly as the scale factor a grows exponentially. Recalling the Friedmann equation (Ω⁻¹ − 1)ρa² = −3kc²/(8πG), and the fact that the right-hand side of this expression is constant, the term |Ω⁻¹ − 1| must therefore decrease with time. Solutions to the problem: Thus if |Ω⁻¹ − 1| initially takes any arbitrary value, a period of inflation can force it down towards 0 and leave it extremely small - around 10⁻⁶² as required above, for example. Subsequent evolution of the universe will cause the value to grow, bringing it to the currently observed value of around 0.01. Thus the sensitive dependence on the initial value of Ω has been removed: a large and therefore 'unsurprising' starting value need not become amplified and lead to a very curved universe with no opportunity to form galaxies and other structures. Solutions to the problem: This success in solving the flatness problem is considered one of the major motivations for inflationary theory. Solutions to the problem: Post inflation Although inflationary theory is regarded as having had much success, and the evidence for it is compelling, it is not universally accepted: cosmologists recognize that there are still gaps in the theory and are open to the possibility that future observations will disprove it. In particular, in the absence of any firm evidence for what the field driving inflation should be, many different versions of the theory have been proposed. Many of these contain parameters or initial conditions which themselves require fine-tuning in much the way that the early density does without inflation. Solutions to the problem: For these reasons work is still being done on alternative solutions to the flatness problem. These have included non-standard interpretations of the effect of dark energy and gravity, particle production in an oscillating universe, and use of a Bayesian statistical approach to argue that the problem is non-existent. The latter argument, suggested for example by Evrard and Coles, maintains that the idea that Ω being close to 1 is 'unlikely' is based on assumptions about the likely distribution of the parameter which are not necessarily justified. Despite this ongoing work, inflation remains by far the dominant explanation for the flatness problem. The question arises, however, whether it is still the dominant explanation because it is the best explanation, or because the community is unaware of progress on this problem. In particular, in addition to the idea that Ω is not a suitable parameter in this context, other arguments against the flatness problem have been presented: if the universe collapses in the future, then the flatness problem "exists", but only for a relatively short time, so a typical observer would not expect to measure Ω appreciably different from 1; in the case of a universe which expands forever with a positive cosmological constant, fine-tuning is needed not to achieve a (nearly) flat universe, but rather to avoid it. Solutions to the problem: Einstein–Cartan theory The flatness problem is naturally solved by the Einstein–Cartan–Sciama–Kibble theory of gravity, without the exotic form of matter required in inflationary theory.
This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. It has no free parameters. Including torsion gives the correct conservation law for the total (orbital plus intrinsic) angular momentum of matter in the presence of gravity. The minimal coupling between torsion and Dirac spinors obeying the nonlinear Dirac equation generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical big bang singularity, replacing it with a bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the big bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Back to school (marketing)** Back to school (marketing): In merchandising, back to school is the period in which students and their parents purchase school supplies and apparel for the upcoming school year. At many department stores, back to school sales are advertised as a time when school supplies and children's and young adults' clothing go on sale. Office supplies have also become an important part of back to school sales with the rise in prominence of personal computers and related equipment in education; traditional supplies such as paper, pens, pencils and binders will often be marked at steep discounts, often as loss leaders to entice shoppers to buy other items in the store. Many states offer tax-free periods (usually about a week) during which any school supplies and children's clothing purchased do not have sales tax added. Timing: The back to school period usually falls in August, before the school year starts, in the United States, Europe, and Canada. In Australia and New Zealand, it usually occurs in February, while in Malaysia, the period lasts from November to December. In India, back to school sales traditionally start in the month of June, when schools are about to open. In Japan, which is unusual in that it starts its school year in spring, the back to school sales are traditionally held in March. In Canada and the United States, back to school shopping is associated with Labor Day, which falls on the first Monday of September. While Labor Day is a widely observed holiday, it has no official celebration, and it has become symbolic as the unofficial "end of summer". Most schools and colleges begin their school year around this time, so the holiday has become a back to school shopping tradition, much as Memorial Day and Victoria Day are associated with summer products, Canada Day and Independence Day with patriotic products, and American Thanksgiving with the impending start of the Christmas shopping season.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Looping (education)** Looping (education): Looping in education is the practice of moving groups of children up from one grade to the next with the same teacher. This system, which is also called multiyear grouping, lasts from two to five years; as the class moves on, the teacher loops back to pick up another group of children. This practice is particularly prevalent in Europe and Asia. Background: It is believed that young learners experience a complex period of development that requires consistency, which can be provided by the looping framework. Looping allows teachers to address this need by providing continuity as well as a stable and secure learning environment. It had its origin in Waldorf education, which spread to the United States in 1928 after first being introduced in Europe. During the 19th and early 20th centuries, the looping system was implicit in the educational structure, particularly in one-room schools where there was only one teacher available for all students. Outcomes: According to its proponents, looping offers several benefits, including an improved student-teacher relationship due to the stability and emotional security provided to the learners, as well as a greater opportunity for teachers to get to know them, leading to individualization of their learning programs. It is also suggested that looping provides more instructional time, since less time is required at the beginning of the school year for routines, procedures and familiarization. The "carryover" relationship keeps the class from starting from scratch in the next year of the loop, allowing it to gain up to six extra weeks of instructional time. Looping also facilitates better social interaction and could enhance a sense of family and community within the classroom. There are also studies showing that students who loop tend to have better attendance. Looping is also associated with improved reading and math performance, as well as improved conflict resolution and teamwork capabilities.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MeRIPseq** MeRIPseq: MeRIPseq (or MeRIP-seq) stands for methylated RNA immunoprecipitation sequencing, a method for the detection of post-transcriptional RNA modifications developed by Kate Meyer et al. while working in the laboratory of Samie Jaffrey at Cornell University Graduate School of Medical Sciences. It is also called m6A-seq. A variation of the MeRIP-seq method was introduced by Benjamin Delatte and colleagues in 2016. This variant, called hMeRIP-seq (hydroxymethylcytosine RNA immunoprecipitation), uses an antibody that specifically recognizes 5-hydroxymethylcytosine, a modified RNA base affecting in vitro translation and brain development in Drosophila.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hilbert–Schmidt operator** Hilbert–Schmidt operator: In mathematics, a Hilbert–Schmidt operator, named after David Hilbert and Erhard Schmidt, is a bounded operator $A : H \to H$ that acts on a Hilbert space $H$ and has finite Hilbert–Schmidt norm $\|A\|_{\mathrm{HS}}^2 = \sum_{i \in I} \|Ae_i\|^2$, where $\{e_i : i \in I\}$ is an orthonormal basis. The index set $I$ need not be countable. However, the sum on the right must contain at most countably many non-zero terms to have meaning. This definition is independent of the choice of the orthonormal basis. In finite-dimensional Euclidean space, the Hilbert–Schmidt norm $\|\cdot\|_{\mathrm{HS}}$ is identical to the Frobenius norm.

$\|\cdot\|_{\mathrm{HS}}$ is well defined: The Hilbert–Schmidt norm does not depend on the choice of orthonormal basis. Indeed, if $\{e_i\}_{i \in I}$ and $\{f_j\}_{j \in I}$ are such bases, then $\sum_i \|Ae_i\|^2 = \sum_{i,j} |\langle Ae_i, f_j \rangle|^2 = \sum_j \|A^* f_j\|^2$. If $e_i = f_i$, then $\sum_i \|Ae_i\|^2 = \sum_i \|A^* e_i\|^2$. As for any bounded operator, $A = A^{**}$. Replacing $A$ with $A^*$ in the first formula, we obtain $\sum_i \|A^* e_i\|^2 = \sum_j \|Af_j\|^2$. The independence follows.

Examples: An important class of examples is provided by Hilbert–Schmidt integral operators. Every bounded operator with a finite-dimensional range (these are called operators of finite rank) is a Hilbert–Schmidt operator. The identity operator on a Hilbert space is a Hilbert–Schmidt operator if and only if the Hilbert space is finite-dimensional. Given any $x$ and $y$ in $H$, define $x \otimes y : H \to H$ by $(x \otimes y)(z) = \langle z, y \rangle x$, which is a continuous linear operator of rank 1 and thus a Hilbert–Schmidt operator; moreover, for any bounded linear operator $A$ on $H$ (and into $H$), $\operatorname{Tr}(A(x \otimes y)) = \langle Ax, y \rangle$. If $T : H \to H$ is a bounded compact operator with eigenvalues $\ell_1, \ell_2, \dots$ of $|T| = \sqrt{T^* T}$, where each eigenvalue is repeated as often as its multiplicity, then $T$ is Hilbert–Schmidt if and only if $\sum_{i=1}^{\infty} \ell_i^2 < \infty$, in which case the Hilbert–Schmidt norm of $T$ is $\|T\|_{\mathrm{HS}} = \sqrt{\sum_{i=1}^{\infty} \ell_i^2}$. If $k \in L^2(\mu \times \mu)$, where $(X, \Omega, \mu)$ is a measure space, then the integral operator $K : L^2(\mu) \to L^2(\mu)$ with kernel $k$ is a Hilbert–Schmidt operator and $\|K\|_{\mathrm{HS}} = \|k\|_2$.

Space of Hilbert–Schmidt operators: The product of two Hilbert–Schmidt operators has finite trace-class norm; therefore, if $A$ and $B$ are two Hilbert–Schmidt operators, the Hilbert–Schmidt inner product can be defined as $\langle A, B \rangle_{\mathrm{HS}} = \operatorname{Tr}(A^* B) = \sum_i \langle Ae_i, Be_i \rangle$. The Hilbert–Schmidt operators form a two-sided *-ideal in the Banach algebra of bounded operators on $H$. They also form a Hilbert space, denoted by $B_{\mathrm{HS}}(H)$ or $B_2(H)$, which can be shown to be naturally isometrically isomorphic to the tensor product of Hilbert spaces $H^* \otimes H$, where $H^*$ is the dual space of $H$. The norm induced by this inner product is the Hilbert–Schmidt norm, under which the space of Hilbert–Schmidt operators is complete (thus making it into a Hilbert space). The space of all bounded linear operators of finite rank (i.e. that have a finite-dimensional range) is a dense subset of the space of Hilbert–Schmidt operators (with the Hilbert–Schmidt norm). The set of Hilbert–Schmidt operators is closed in the norm topology if, and only if, $H$ is finite-dimensional.

Properties: Every Hilbert–Schmidt operator $T : H \to H$ is a compact operator. A bounded linear operator $T : H \to H$ is Hilbert–Schmidt if and only if the same is true of the operator $|T| := \sqrt{T^* T}$, in which case the Hilbert–Schmidt norms of $T$ and $|T|$ are equal. Hilbert–Schmidt operators are nuclear operators of order 2, and are therefore compact operators. If $S : H_1 \to H_2$ and $T : H_2 \to H_3$ are Hilbert–Schmidt operators between Hilbert spaces, then the composition $T \circ S : H_1 \to H_3$ is a nuclear operator. Properties: If $T : H \to H$ is a bounded linear operator, then $\|T\| \leq \|T\|_{\mathrm{HS}}$. $T$ is a Hilbert–Schmidt operator if and only if the trace $\operatorname{Tr}(T^* T)$ of the nonnegative self-adjoint operator $T^* T$ is finite, in which case $\|T\|_{\mathrm{HS}}^2 = \operatorname{Tr}(T^* T)$. If $T : H \to H$ is a bounded linear operator on $H$ and $S : H \to H$ is a Hilbert–Schmidt operator on $H$, then $\|S^*\|_{\mathrm{HS}} = \|S\|_{\mathrm{HS}}$, $\|TS\|_{\mathrm{HS}} \leq \|T\| \|S\|_{\mathrm{HS}}$, and $\|ST\|_{\mathrm{HS}} \leq \|S\|_{\mathrm{HS}} \|T\|$. In particular, the composition of two Hilbert–Schmidt operators is again Hilbert–Schmidt (and even a trace class operator). Properties: The space of Hilbert–Schmidt operators on $H$ is an ideal of the space of bounded operators $B(H)$ that contains the operators of finite rank. If $A$ is a Hilbert–Schmidt operator on $H$, then $\|A\|_{\mathrm{HS}}^2 = \sum_{i \in I} \|Ae_i\|^2 = \|A\|_2^2$, where $\{e_i : i \in I\}$ is an orthonormal basis of $H$ and $\|A\|_2$ is the Schatten norm of $A$ for $p = 2$. In Euclidean space, $\|\cdot\|_{\mathrm{HS}}$ is also called the Frobenius norm.
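In finite dimensions, the basis independence of the Hilbert–Schmidt norm and its agreement with the Frobenius norm and with $\sqrt{\operatorname{Tr}(A^* A)}$ can be checked numerically. A minimal sketch using NumPy; the matrix and the alternative orthonormal basis below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# Hilbert-Schmidt norm via the standard basis: sqrt of sum of ||A e_i||^2,
# where A e_i is simply the i-th column of A.
hs_standard = np.sqrt(sum(np.linalg.norm(A[:, i]) ** 2 for i in range(n)))

# The same sum over a different orthonormal basis {f_i} (the columns of a
# random orthogonal matrix Q) gives the same value.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
hs_rotated = np.sqrt(sum(np.linalg.norm(A @ Q[:, i]) ** 2 for i in range(n)))

# Both agree with the Frobenius norm and with sqrt(Tr(A^T A)).
frobenius = np.linalg.norm(A, "fro")
trace_form = np.sqrt(np.trace(A.T @ A))

print(hs_standard, hs_rotated, frobenius, trace_form)  # all equal
assert np.allclose([hs_standard, hs_rotated, trace_form], frobenius)
```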
**Proprietary file format** Proprietary file format: A proprietary file format is a file format of a company, organization, or individual that contains data that is ordered and stored according to a particular encoding scheme, designed by the company or organization to be secret, such that the decoding and interpretation of this stored data is easily accomplished only with particular software or hardware that the company itself has developed. The specification of the data encoding format is not released, or is subject to non-disclosure agreements. A proprietary format can also be a file format whose encoding is in fact published, but is restricted through licences such that only the company itself or licensees may use it. In contrast, an open format is a file format that is published and free to be used by everybody. Proprietary file format: Proprietary formats are typically controlled by a company or organization for its own benefit, and the restriction of their use by others is ensured through patents or trade secrets. The intention is to give the licence holder exclusive control of the technology to the (current or future) exclusion of others. Typically such restrictions attempt to prevent reverse engineering, though reverse engineering of file formats for the purposes of interoperability is generally believed to be legal by those who practice it. Legal positions differ according to each country's laws related to, among other things, software patents. Proprietary file format: Because control over a format may be exerted in varying ways and in varying degrees, and documentation of a format may deviate in many different ways from the ideal, there is not necessarily a clear black/white distinction between open and proprietary formats. Nor is there any universally recognized "bright line" separating the two. The lists of prominent formats below illustrate this point, distinguishing "open" (i.e. publicly documented) proprietary formats from "closed" (undocumented) proprietary formats and including a number of cases which are classed by some observers as open and by others as proprietary. Privacy, ownership, risk and freedom: One of the contentious issues surrounding the use of proprietary formats is that of ownership of created content. If the information is stored in a way which the user's software provider tries to keep secret, the user may own the information by virtue of having created it, but they have no way to retrieve it except by using a version of the original software which produced the file. Without a standard file format or reverse-engineered converters, users cannot share data with people using competing software. The fact that the user depends on a particular brand of software to retrieve the information stored in a proprietary format file increases barriers of entry for competing software and may contribute to vendor lock-in. Privacy, ownership, risk and freedom: The issue of risk comes about because proprietary formats are less likely to be publicly documented and therefore less future-proof. If the software firm owning the rights to a format stops making software which can read it, then those who had used the format in the past may lose all information in those files. This is particularly common with formats that were not widely adopted. However, even ubiquitous formats such as Microsoft Word documents cannot be fully reverse-engineered.
Prominent proprietary formats: Open proprietary formats AAC – an open standard, but owned by Via Licensing GEDCOM – an open specification for genealogy data exchange, owned by the Church of Jesus Christ of Latter-day Saints MP3 – an open standard, but subject to patents in some countries Closed proprietary formats CDR – (non-documented) CorelDraw's native format primarily used for vector graphic drawings DWG – (non-documented) AutoCAD drawing PSD – (documented) Adobe Photoshop's native image format RAR – (partially documented) archive and compression file format owned by Alexander L. Roshal WMA – a closed format, owned by Microsoft Controversial RTF – a formatted text format (proprietary, published specification, defined and maintained only by Microsoft) SWF – Adobe Flash format (formerly closed/undocumented, now partially or completely open) XFA – Adobe XML Forms Architecture, used in PDF files (published specification by Adobe, required but not documented in the PDF ISO 32000-1 standard; controlled and maintained only by Adobe) ZIP – a base version of this data compression and archive file format is in the public domain, but newer versions have some patented features Formerly proprietary GIF – CompuServe's Graphics Interchange Format (the specification's royalty-free licence requires implementers to give CompuServe credit as owner of the format; separately, patents covering certain aspects of the specification were held by Unisys until they expired in 2004) PDF – Adobe's Portable Document Format (open since 2008 as ISO 32000-1), but there are still some technologies indispensable for the application of ISO 32000-1 that are defined only by Adobe and remain proprietary (e.g. Adobe XML Forms Architecture, Adobe JavaScript). Prominent proprietary formats: DOC – Microsoft Word Document (formerly closed/undocumented, now Microsoft Open Specification Promise) XLS – Microsoft Excel spreadsheet file format (formerly closed/undocumented, now Microsoft Open Specification Promise) PPT – Microsoft PowerPoint Presentation file format (formerly closed/undocumented, now Microsoft Open Specification Promise)
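Many of the formats listed above, open and closed alike, can at least be recognized from their well-known "magic number" signatures at the start of a file, even when the rest of the format is undocumented. A small illustrative Python sketch; the signature table is deliberately tiny and the file path in the usage line is hypothetical:

```python
# Identify a file's format from well-known leading byte signatures.
# Only a handful of formats from the lists above are covered here.
SIGNATURES = {
    b"GIF87a": "GIF image",
    b"GIF89a": "GIF image",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive",
    b"Rar!\x1a\x07": "RAR archive",
}

def sniff(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(16)  # the signatures above all fit in 16 bytes
    for magic, name in SIGNATURES.items():
        if head.startswith(magic):
            return name
    return "unknown (possibly an undocumented proprietary format)"

print(sniff("example.pdf"))  # hypothetical file path
```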
**Send Me To Heaven** Send Me To Heaven: Send Me To Heaven (officially stylized as S.M.T.H.) is an Android application developed by Carrot Pop which measures the vertical distance that a mobile phone is thrown. Players compete against each other by seeking to throw their phones higher than others, often at the risk of damaging their phones. The app was immediately banned from the App Store but remains available from Google Play, where it maintains a cult following. Gameplay: Petr Svarovsky, a Czech-born Norwegian artist who founded Carrot Pop to develop transgressive smartphone apps, indicated during an interview with WIRED magazine that he hoped to have people destroy as many iPhones as possible while playing his game. "The original idea was to have very expensive gadgets, which people in certain societies buy just to show off, and to get them to throw it." Nonetheless, the mobile game opens with a warning that requests players to be aware of their surroundings, along with a legal disclaimer absolving the developer from any injuries or damages that may result from playing. Players are instructed to throw their phones as high as they can, with minimal rotation for the most accurate results. The maximum height is calculated via the phone's accelerometer. Because some phones have accelerometers positioned off-center, any rotation in those phones may confound the data. "Cheating" by throwing a phone from a tall building typically returns an error message. The app's calculations keep track of how long the phone takes to rise and fall, and an error message is displayed if the distance fallen exceeds the length of the ascent. Exceptionally good scores may appear on the game's leader board, which is divided into the categories World Top 10, Week Top 10, Day Top 10 and Local Top 10. Some users reported scores as high as 40 meters (131 feet), which Svarovsky discovered was the result of players firing their phones into the air with slingshots. Reception: According to Svarovsky, the first demo of the game took place at a music festival in Oslo, Norway. Attendees were so enthusiastic about the idea that many began throwing their phones into the air without bothering to download the app. Apple rejected Send Me To Heaven from the App Store, citing policies against encouraging the damage of an iOS device. The app was accepted by Google Play without comment. Reception: Users have left generally positive reviews for the app. An official Facebook page allows players to share photos and videos of their attempts. The game attracted notoriety upon its release in 2013 and has experienced brief renewals of popularity since, most recently in 2017. As of 2021, the game remains actively updated by its developers. Reception: Due to the rejection of Send Me To Heaven from the App Store, the only iOS device currently running a copy of Send Me To Heaven is Petr Svarovsky's personal iPhone, which contains the app prototype. Svarovsky has attempted to sell the badly damaged iPhone multiple times as a "collectible" game item. The iPhone is nonetheless still functional and is marketed as including the following bonus content: Svarovsky's ex-girlfriend's phone number, Svarovsky's dentist's phone number, some cat photographs, and some heavy metal songs. The phone was offered for $30,000 on Etsy and, later, for $100,000 on Saatchi Art. Similar applications: I Am Rich, a mobile application that was also banned from the App Store
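The app's exact scoring algorithm is unpublished, but the mechanism described above (timing how long the phone rises and falls) maps onto simple projectile motion: a throw with total airborne time t peaks at height g·t²/8. A hedged illustrative sketch, not the app's actual code:

```python
# Illustrative only: estimate the peak height of a thrown phone from its
# total free-flight time, assuming ideal projectile motion with no air
# resistance. This is NOT the app's actual (unpublished) algorithm.
G = 9.81  # gravitational acceleration, m/s^2

def peak_height(flight_time_s: float) -> float:
    # Time up equals time down, so time to apex is t/2 and
    # h = 0.5 * g * (t/2)^2 = g * t^2 / 8.
    return G * flight_time_s ** 2 / 8.0

# A phone airborne for 2.5 s would have peaked at about 7.7 m.
print(f"{peak_height(2.5):.1f} m")

# A 40 m score (as reported via slingshot) implies roughly 5.7 s aloft:
# t = sqrt(8 * h / g).
print(f"{(8 * 40 / G) ** 0.5:.1f} s")
```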
**VICE** VICE: The software program VICE, standing for VersatIle Commodore Emulator, is a free and cross-platform emulator for Commodore's 8-bit computers. It runs on Linux, Amiga, Unix, MS-DOS, Win32, Mac OS X, OS/2, RISC OS, QNX, GP2X, Pandora (console), Dingoo A320, Syllable, and BeOS host machines. VICE is free software, released under the GNU General Public License since 2004. Versions of VICE for Microsoft Windows (Win32) prior to v3.3 were known as WinVICE, the OS/2 variant is called Vice/2, and the emulator running on BeOS is called BeVICE. History: The development of VICE began in 1993 by Finnish programmer Jarkko Sonninen, who was the founder of the project. Sonninen retired from the project in 1994. VICE 2.1, released on December 19, 2008, emulates the Commodore 64, Commodore 128, Commodore VIC-20, Commodore Plus/4, C64 Direct-to-TV (with its additional video modes) and all the Commodore PET models including the CBM-II but excluding the 'non-standard' features of the SuperPET 9000. WinVICE supports digital joysticks via a parallel port driver and, with a CatWeasel PCI card, is planned to perform hardware SID playback (requires an optional SID chip installed in the socket). History: As of 2004, VICE was one of the most widely used emulators of the Commodore 8-bit personal computers. It is also one of the few usable Commodore emulators to exist on free Unix-based platforms, including most Linux and BSD distributions. VICE 3.4 drops support for Syllable Desktop, SCO, QNX, SGI, AIX, OPENSTEP/NeXTSTEP/Rhapsody, and Solaris/OpenIndiana, as well as remaining traces of support for Minix, SkyOS, UNIXWARE, and Sortix, due to lack of staff. VICE 3.5 drops explicit support for OS/2 and AmigaOS, due to the transition to the GTK3 UI. In December 2022, the VICE emulator was used as an inspiration for an Apple Macintosh emulator powered by a Raspberry Pi.
**2-Formylbenzoate dehydrogenase** 2-Formylbenzoate dehydrogenase: 2-Formylbenzoate dehydrogenase (EC 1.2.1.78, 2-carboxybenzaldehyde dehydrogenase, 2CBAL dehydrogenase, PhdK) is an enzyme with systematic name 2-formylbenzoate:NAD+ oxidoreductase. This enzyme catalyses the following chemical reaction: 2-formylbenzoate + NAD+ + H2O ⇌ o-phthalic acid + NADH + H+. The enzyme is involved in phenanthrene degradation.
**Emotiv Systems** Emotiv Systems: Emotiv Systems is an Australian electronics innovation company developing technologies to evolve human-computer interaction, incorporating non-conscious cues into the human-computer dialogue to emulate human-to-human interaction. Developing brain–computer interfaces (BCIs) based on electroencephalography (EEG) technology, Emotiv Systems produced the EPOC neuroheadset, a peripheral targeting the gaming market for Windows, OS X and Linux platforms. The EPOC has 16 electrodes and was originally designed to work as a BCI input device. Emotiv Systems Pty Ltd was founded in 2003 by technology entrepreneurs Tan Le, Nam Do, Allan Snyder, and Neil Weste. Emotiv Systems: Emotiv Research Pty Ltd was founded in 2011, also by Tan Le. Nam Do, Allan Snyder, and Neil Weste are not affiliated with this business. This business operated in America under the name Emotiv Lifesciences Inc until December 2013, when it became Emotiv Inc. It is not affiliated with Emotiv Systems.
**SCREEN3** SCREEN3: SCREEN3 is a technology used and designed by Motorola to push news and information to mobile phones. Functionality: The SCREEN3 feature functions by downloading headlines and displaying these to the user as part of the contents of the idle screen. After the user has configured which feeds or channels are of interest, feed updates are continuously and automatically pushed to the handset via packet data transfers which happen in the background. Functionality: While viewing the small images and accompanying headlines ("bites") which scroll by on the screen, users can choose to view a snippet of the actual story ("the snack"). Users are then taken to a condensed summary version of the story or headline. At this point, if the story is deemed sufficiently interesting, users can opt to view the full story ("the meal"), causing the web browser to be launched to retrieve the appropriate web page. Functionality: Billing is usually set up in such a way that users are not charged for headlines and text snippets pushed to the handset, and are instead only charged any per-byte data fees when actually viewing the full story. Rates vary by carrier, but examples include US$0.01 per kilobyte, or US$19.99 for unlimited data access. Availability: SCREEN3 became available in Motorola products starting with phones shipping near the end of 2005. The technology is not available on all handsets, nor is it supported by all wireless carriers, so exact availability varies. It also requires a data plan of some form, so customers with entry-level voice service plans will be unable to take advantage of this functionality. Availability: Products advertised as supporting SCREEN3 Motorola KRZR K1 Motorola KRZR K3 Motorola MotoRokr E6 Motorola PEBL U6 Motorola RAZR V3 Motorola RAZR V3i Motorola RAZR V3x Motorola RAZR V3xx Motorola RAZR V6 Motorola RAZR² V9 Motorola RIZR Z3 Motorola SLVR L7 Motorola V195 Motorola V360 Motorola V557 Motorola SLVR L9 Wireless carriers advertised as supporting SCREEN3 Cingular Wireless Telefonica China Mobile ChungHwa Telecom M1 (Singapore) (discontinued support in March 2008) Optus (Australia) 3 (Australia) Telstra (Australia) AirTel India Motorola Retail markets supporting SCREEN3 Australia Hong Kong India Indonesia United Kingdom Taiwan
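The example rates quoted above make the billing trade-off easy to quantify. A small illustrative Python sketch; the rates are the article's examples and actual carrier pricing varied:

```python
PER_KB = 0.01       # dollars per kilobyte (example rate from the article)
UNLIMITED = 19.99   # dollars per month for unlimited data (example rate)

def per_kb_cost(kb_viewed: float) -> float:
    # Headlines and snippets are pushed free of charge; only the
    # full stories fetched via the browser incur per-byte fees.
    return PER_KB * kb_viewed

# Unlimited pays off once a user views more than about 2 MB of full
# stories per month: 19.99 / 0.01 = 1999 KB.
print(per_kb_cost(500))   # 5.0  -> per-KB is cheaper for a light reader
print(per_kb_cost(5000))  # 50.0 -> unlimited is cheaper for a heavy reader
```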
**Staphylococcus pseudintermedius** Staphylococcus pseudintermedius: Staphylococcus pseudintermedius is a gram-positive coccus bacterium of the genus Staphylococcus found worldwide. It is primarily a pathogen of domestic animals, but has been known to affect humans as well. S. pseudintermedius is an opportunistic pathogen that secretes immune-modulating virulence factors, has many adhesion factors, and has the potential to create biofilms, all of which help to determine the pathogenicity of the bacterium. Diagnoses of Staphylococcus pseudintermedius have traditionally been made using cytology, plating, and biochemical tests. More recently, molecular technologies like MALDI-TOF, DNA hybridization, and PCR have become preferred over biochemical tests for their more rapid and accurate identifications. This includes the identification and diagnosis of antibiotic-resistant strains. Morphology and classification: Staphylococci are a genus of gram-positive cocci of 0.5–1 μm diameter. Staphylococcus pseudintermedius is a non-motile, non-spore-forming, facultatively anaerobic bacterium. It appears primarily as grape-like clusters morphologically, but can also be seen as individual or paired cocci. This clustered configuration, as well as a positive catalase test, differentiates Staphylococcus spp. from Streptococcus spp., which manifest in chains. Due to its ability to clot blood, S. pseudintermedius is subcategorized into the group of coagulase-positive (CoPS) staphylococci. CoPS strains typically express more virulence factors. This CoPS characteristic is a contributing factor to its biochemical similarities to S. aureus. Staphylococcal organisms belong to the more encompassing Staphylococcaceae family of organisms. S. aureus and S. epidermidis are other notable species which fall into the same genus as S. pseudintermedius under this taxonomic categorization. S. pseudintermedius, S. intermedius, and S. delphini are largely phenotypically indiscriminate and therefore comprise the 'Staphylococcus intermedius group' of organisms. Biochemical properties of these three organisms place them as an intermediate between S. aureus and S. epidermidis. Staphylococcus pseudintermedius was first identified as a novel species in 2005 using 16S rRNA sequencing of the tRNA intergenic length polymorphisms of the AJ780976 gene loci. Differing strains of S. pseudintermedius, LMG 22219 – LMG 22222, have been identified in various species: cat, horse, dog, and parrot, respectively. These strains comprise a staphylococcal species that is distinct from other species within the genus, as distinguished by DNA hybridizations of genome sequences. Previously, many S. pseudintermedius infections or isolates were identified as S. intermedius, before its identification as a distinct taxonomic species. Isolation of S. pseudintermedius from the skin and mucosa of healthy canines can be between 20–90%, with these frequencies being reduced to 5–45% in healthy felines. It is the most commonly identified staphylococcal organism in these veterinary species. S. pseudintermedius is classified as a biocontainment risk level 2 organism due to its moderately pathogenic characteristics. Diagnosis: Cytology Using the gram stain technique, staphylococci are easily identified by their clumped, gram-positive, coccus morphology. Slides can be prepared directly using a patient swab, which lends convenience in the clinic or classroom. However, given that S. pseudintermedius is prevalent within the normal microflora of numerous species, it is better identified as the agent of disease when a corresponding immune reaction is also observed. Where available, the need to identify immune reactors can be avoided by first inoculating the sample onto differential agar plates like SM110, which inhibit the growth of non-Staphylococcus bacteria. Cytology alone does not allow for differentiation between different species in the genus Staphylococcus. Diagnosis: Plating When plated on sheep or bovine blood agar, S. pseudintermedius displays incomplete β-hemolysis. Colonies of S. pseudintermedius on sheep agar are described as medium in size and non-pigmented or grey-white. This can be useful for differentiating S. pseudintermedius from coagulase-negative staphylococci, and from S. aureus, which tends to be yellow and display more variable hemolytic patterns on agar plates. S. pseudintermedius colonies are not hemolytic on equine blood agar. While plating may help differentiate species, biochemical or DNA testing may be necessary. Diagnosis: Biochemical tests Historically, biochemical tests have been an important tool used to discriminate between species of staphylococci. Tests used to identify S. pseudintermedius specifically include DNase, hyaluronidase, coagulase, catalase, and acetoin production tests, amongst others. It can still be difficult to differentiate between members of the S. intermedius group using these methods alone; in veterinary medicine, such diagnoses have relied on the assumption that S. pseudintermedius is the only known member of this group to infect canine skin. More recently, studies using molecular identification methods have found that different S. pseudintermedius strains harbor more phenotypic diversity than previously thought. It has been speculated that these differences have led to underestimation of the importance of S. pseudintermedius in human skin infections. For this reason, S. pseudintermedius is no longer considered to be reliably identifiable using commercially available biochemical tests alone. More sensitive methods like MALDI-TOF have therefore since become preferred. Diagnosis: Identification of methicillin resistance Molecular methods, like MALDI-TOF and qPCR primers, are the gold standard for accurately identifying the presence of mecA genes, which confer resistance to beta-lactam drugs in S. pseudintermedius (a trait termed "methicillin resistance"). However, methicillin resistance can still be identified reliably using biochemical or phenotypic methods, such as disc diffusion. Although cefoxitin disks have been used, oxacillin disks are considered to be much more sensitive, and thus a more accurate method for predicting methicillin resistance in S. pseudintermedius strains. Epidemiology: In dogs, S. pseudintermedius is normally found in the microflora of the skin. The presence of S. pseudintermedius has been observed in higher amounts on dogs that suffer from atopic dermatitis. It is also one of the leading causes of bacterial skin and soft tissue infections, such as pyoderma, urinary tract infections, and surgical site infections. It is also known to infect cats, although less commonly. It is transferred by animal-to-animal contact, and some dog-human zoonoses have also been reported. Transmission occurs either vertically or horizontally. The overall prevalence of S. pseudintermedius in small animals is increasing every year worldwide. Staphylococcus pseudintermedius is becoming a threat due to its heterogeneous qualities and multi-drug-resistant phenotype. Methicillin-resistant S. pseudintermedius (MRSP) has five major clonal complex (CC) lineages, each with their own unique traits regarding genetic diversity, geographical distribution, and antimicrobial resistance. The majority of all MRSP isolates were found in Europe and Asia, with North America, South America, and Oceania contributing only a small portion. The CC71 and CC258 lineages were mostly seen in Europe, CC68 was mostly seen in North America, and CC45 and CC112 were seen in Asia. The top three antimicrobials worldwide to which MRSP is found to be resistant are erythromycin, clindamycin, and tetracycline. When looking at the epidemiology of the Staphylococcus intermedius group (SIG), which includes S. pseudintermedius, S. intermedius, and S. delphini, it is noted that in humans most of the recorded cases were above the age of 50, diabetic, and/or immunocompromised in some way. Most of the cultures came from wound sites and respiratory specimens. S. pseudintermedius is not normally found within the microflora of humans. Humans who work in close proximity to animals are at higher risk of S. pseudintermedius infections, such as veterinarians, animal trainers, and zookeepers. Although the risk of pet owners becoming infected by their pets is low, there have been reported cases. Pathogenicity and virulence: As previously described, Staphylococcus pseudintermedius, an opportunistic pathogen, is a part of the normal microbiome of skin and mucous membranes in animals. Animals acquire this bacterium through vertical transmission. The strain of S. pseudintermedius colonizing the mother's vaginal mucous membrane is transferred during birth and becomes a part of the offspring's microbiome. A compromised immune system or tissue injury allows the bacterium to push past host defences and create an infection. Clinical manifestations then include purulent dermatitis, otitis externa, conjunctivitis, urinary tract infections, and post-operative infections. Disease is most commonly seen in dogs and cats, with canine pyoderma being the most notable manifestation of S. pseudintermedius. The virulence of S. pseudintermedius is an area of ongoing research and has many unknowns. The virulence factors carried by S. pseudintermedius vary between strains and do not determine whether the bacteria will cause an infection. Rather, infection is a result of an animal's immune status, environment, and genetics. Numerous virulence factors such as enzymes, toxins, and binding proteins have been associated with S. pseudintermedius strains. These include proteases, thermonucleases, coagulases, DNase, lipase, hemolysin, clumping factor, leukotoxin, enterotoxin, protein A, and exfoliative toxin. Pathogenicity and virulence: Immune-modulating virulence factors Haemolysins, leukotoxins, exfoliative toxins, and enterotoxins are secreted from the bacteria to modulate the host's immune response. The pore-forming cytotoxins α-hemolysin and β-hemolysin lyse erythrocytes of sheep and rabbits. Leukotoxin destroys host leukocytes and causes tissue necrosis. Exfoliative toxin is responsible for the majority of symptoms seen in canine pyoderma and otitis, i.e. skin exfoliation and crusting. Exfoliative toxin causes vesicle formation and erosion in epithelial cells, resulting in splitting of the skin. Super-antigens such as enterotoxins activate host immune cells, causing T cell proliferation and cytokine release. This virulence factor induces vomiting and has been associated with food poisoning in humans. Protein A, an immunoglobulin-binding protein, has been found on the surface of S. pseudintermedius. Protein A attaches to the Fc region of host antibodies, rendering them useless. Without the Fc region, the host immune system cannot recognize that antibody; the complement system cannot be activated and phagocytes cannot destroy the bacteria. Pathogenicity and virulence: Virulence factors for dissemination and adhesion The previously mentioned protein A, as well as clumping factor, are surface proteins that allow the bacteria to bind to host cells. S. pseudintermedius has been found to produce biofilms, an extracellular matrix of protein, DNA, and polysaccharide, which aids the bacteria in avoiding the host immune system and resisting drugs. Biofilms allow the bacteria to persist on medical equipment even after disinfection and to adhere to host cells, a component of chronic infections. Fragments of a biofilm can break off and disseminate to other sites in the body, spreading infection. Quorum sensing, a mechanism that coordinates the bacteria's colonization efforts, has been reported in some strains. Coagulase, lipase, and DNase produced by the bacteria also aid in its dissemination throughout the host body. Zoonosis: Staphylococcus pseudintermedius has zoonotic potential, as it has been found in humans living with companion animals in the same household. S. pseudintermedius is not a normal commensal bacterium found in humans; however, it is capable of adapting to the human microflora and has become increasingly more common. People who are at the highest risk for contracting this pathogen are pet owners and veterinarians, due to their higher contact with dogs and, to a lesser extent, cats. The most common place of colonization in the human body is the nasal cavity, and from here the bacteria can cause infections. S. pseudintermedius infections in a human host have been known to cause endocarditis, post-surgical infections, inflammation of the nasal cavity (rhinosinusitis), and catheter-related bacteremia. If Staphylococcus pseudintermedius becomes established in a human wound, it has the ability to form antibiotic-resistant biofilms. Mechanisms of biofilm resistance of S. pseudintermedius are likely multifactorial and may help to establish infections in humans. Zoonosis: Resistance in humans There is an increasing prevalence of antibiotic resistance, specifically to methicillin, in Staphylococcus pseudintermedius, which makes it more challenging to treat when inhabiting a human host. Veterinary dermatologists are exposed to animals with skin and soft tissue infections that commonly harbor MRSP (methicillin-resistant Staphylococcus pseudintermedius). Veterinarians have been found to be colonized with MRSP but not MSSP (methicillin-susceptible S. pseudintermedius). Treatment of human MRSP infections is done with antibiotics, and these should not be used for treatment in animals. Antimicrobial treatment for active infection is commonly done with mupirocin, linezolid, quinupristin, rifampicin, or vancomycin. Hand washing, sterilizing equipment, and hygiene practices should be implemented to decrease the spread of Staphylococcus infections.
**Soybean agglutinin** Soybean agglutinin: Soybean agglutinins (SBA), also known as soy bean lectins (SBL), are lectins found in soybeans. It is a family of similar legume lectins. As a lectin, it is an antinutrient that chelates minerals. In human foodstuffs, less than half of this lectin is deactivated even with extensive cooking (boiling for 20 minutes). Characteristics: SBAs have a molecular weight of 120 kDa and an isoelectric point near pH 6.0. SBA preferentially binds to oligosaccharide structures with terminal α- or β-linked N-acetylgalactosamine and, to a lesser extent, galactose residues. Binding can be blocked by substitutions on penultimate sugars, such as fucose attached to the penultimate galactose in blood group B. Soybean lectin has a metal-binding site, which is conserved among beans. SBA binds to intestinal epithelial cells, causing inflammation and intestinal permeability, and is a major factor in acute inflammation from raw soybean meal fed to animals. Studies on rats fed SBA found complex changes: with increasing doses of soybean agglutinin, the activity of aspartate aminotransferase in plasma increased linearly, and plasma insulin content decreased without a decrease in blood glucose levels. Consumption of soybean agglutinin resulted in a depletion of lipid and an overgrowth of the small intestine and pancreas in rats. Meanwhile, poor growth of the spleen and kidneys and pancreatic hypertrophy were observed in the soybean agglutinin-fed rats. Applications: An important application for SBA is the separation of pluripotent stem cells from human bone marrow. Cells fractionated by SBA do not produce graft-versus-host disease and can be used in bone marrow transplantation across histocompatibility barriers. SBA binding has been investigated as a useful tool for detection of stomach cancer.
**Coracoclavicular ligament** Coracoclavicular ligament: The coracoclavicular ligament is a ligament of the shoulder. It connects the clavicle to the coracoid process of the scapula. Structure: The coracoclavicular ligament connects the clavicle to the coracoid process of the scapula. It is not part of the acromioclavicular joint articulation, but is usually described with it, since it keeps the clavicle in contact with the acromion. It consists of two fasciculi, the trapezoid ligament in front and the conoid ligament behind. These ligaments are in relation, in front, with the subclavius muscle and the deltoid muscle; behind, with the trapezius. Structure: Variation The insertions of the coracoclavicular ligament can occur in slightly different places in different people. It may contain three fascicles rather than two. Function: The coracoclavicular ligament is a strong stabilizer of the acromioclavicular joint. It is also important in the transmission of the weight of the upper limb to the axial skeleton. There is very little movement at the AC joint. Clinical significance: The coracoclavicular ligament may be damaged during a severe dislocation of the clavicle. Damage may be repaired with surgery.
**Seesaw** Seesaw: A seesaw (also known as a teeter-totter or teeterboard) is a long, narrow board supported by a single pivot point, most commonly located at the midpoint between both ends; as one end goes up, the other goes down. These are most commonly found at parks and school playgrounds. Mechanics: Mechanically, a seesaw is a lever consisting of a beam and fulcrum, with the effort and load on either side. Varieties: The most common playground design of seesaw features a board balanced in the center. A person sits on each end, and they take turns pushing their feet against the ground to lift their side into the air. Playground seesaws usually have handles for the riders to grip as they sit facing each other. One problem with the seesaw's design is that if a child allows himself/herself to hit the ground suddenly after jumping, or exits the seesaw at the bottom, the other child may fall and be injured. For this reason, seesaws are often mounted above a soft surface such as foam, wood chips, or sand. Varieties: Seesaws are also manufactured in shapes designed to look like other things, such as airplanes, helicopters, and animals. Seesaws, and the eagerness of children to play with them, are sometimes used to aid in mechanical processes. For example, at the Gaviotas community in Colombia, a children's seesaw is connected to a water pump. In 2019, a set of seesaws was installed spanning the US-Mexico border fence between El Paso and Ciudad Juárez. Name origin and variations: Seesaws go by several different names around the world. Seesaw, or its variant see-saw, is a direct Anglicisation of the French ci-ça, meaning literally, this-that, seemingly attributable to the back-and-forth motion for which a seesaw is known. The term may also be attributable to the repetitive motion of a saw. It may have its origins in a combination of "scie" – the French word for "saw" – with the Anglo-Saxon term "saw". Thus "scie-saw" became "see-saw". Another possibility is the more obvious situation of the apparent appearance, disappearance, and re-emergence of the person seated opposite one's position as they seemingly "rise" and "fall" against a changing, oscillating background – therefore "I see you", followed by "I saw you". Name origin and variations: In the northern inland and westernmost region of the United States, a seesaw is also called a "teeter-totter." According to linguist Peter Trudgill, the term originates from the Nordic language word tittermatorter. A "teeter-totter" may also refer to a two-person swing on a swing seat, on which two children sit facing each other and the teeter-totter swings back and forth in a pendulum motion. Name origin and variations: Both teeter-totter (from teeter, as in to teeter on the edge) and seesaw (from the verb saw) demonstrate the linguistic process called reduplication, where a word or syllable is doubled, often with a different vowel. Reduplication is typical of words that indicate repeated activity, such as riding up and down on a seesaw. In the southeastern New England region of the United States, it is sometimes referred to as a tilt or a tilting board. Name origin and variations: According to Michael Drout, "There are almost no 'Teeter-' forms in Pennsylvania, and if you go to western West Virginia and down into western North Carolina there is a band of 'Ridey-Horse' that heads almost straight south. This pattern suggests a New England term that spread down the coast and a separate, Scots-Irish development in Appalachia.
'Hickey-horse' in the coastal regions of North Carolina is consistent with other linguistic and ethnic variations." Popularity: Since the early 2000s, seesaws have been removed from many playgrounds in the United States over safety concerns. However, some people have questioned whether the seesaws should have been removed, arguing that the fun provided by seesaws may outweigh the safety risks of using them.
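The mechanics section above describes the seesaw as a lever with the effort and load on either side of the fulcrum. In lever terms, it balances when the moments (weight times distance from the fulcrum) on the two sides are equal. A small illustrative calculation follows; the weights and distances are made-up example values:

```python
# Lever balance on a seesaw: it balances when the moments
# (weight x distance from the fulcrum) on each side are equal.
# All figures below are made-up illustrative values.

def balance_distance(w1: float, d1: float, w2: float) -> float:
    """Distance the second rider must sit from the fulcrum so that
    w1 * d1 == w2 * d2."""
    return w1 * d1 / w2

# A 30 kg child sitting 2.0 m from the pivot balances a 40 kg child
# sitting 1.5 m from the pivot: 30 * 2.0 == 40 * 1.5 == 60 kg*m.
print(balance_distance(30.0, 2.0, 40.0))  # 1.5
```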
**Mars hoax** Mars hoax: The Mars hoax was a hoax circulated by e-mail beginning in 2003, which claimed that Mars would look as large as the full Moon to the naked eye on August 27, 2003. The hoax has since resurfaced each time before Mars is at its closest to Earth, about every 26 months. It began from a misinterpretation and exaggeration of a sentence in an e-mail message that reported the close approach between Mars and the Earth in August 2003. At that time, the distance between the two planets was about 55,758,000 kilometres (34,646,000 mi), which was the closest distance between them since September 24, 57,617 BC, when the distance has been calculated to have been about 55,718,000 kilometres (34,622,000 mi). Background: Both Earth and Mars are in elliptical orbits around the Sun in approximately the same plane. By the nature of the laws of physics, the distance between them varies periodically from a minimum equal to the distance between their orbits at some point along them, to a maximum when they are on opposite sides of the Sun. These minimum (opposition) and maximum distances vary considerably as the two planets progress along their elliptical orbits, and oppositions occur about every 780 days. Mars was closer to the Earth in August 2003 (at the opposition) than it had been since 57,617 BC, and closer than it will be until 2287. There was another opposition on 30 October 2005, but with a minimum distance about 25% greater than in 2003 (as reported in the original email, text below) and apparent diameter correspondingly smaller. The magnitude was −2.3, about 60% as bright as 2003. (The Moon has an apparent diameter of around 30 minutes of arc, i.e., 1800 arcseconds, with magnitude of about −12.7 when full, about 9,000 times brighter than Mars in the 2003 approach.) Origin: The Mars hoax originated from an e-mail message in 2003, sometimes titled "Mars Spectacular", with images of Mars and the full moon side by side: The Red Planet is about to be spectacular! This month and next, Earth is catching up with Mars in an encounter that will culminate in the closest approach between the two planets in recorded history. The next time Mars may come this close is in 2287. Due to the way Jupiter's gravity tugs on Mars and perturbs its orbit, astronomers can only be certain that Mars has not come this close to Earth in the Last 5,000 years, but it may be as long as 60,000 years before it happens again. Origin: The encounter will culminate on August 27th when Mars comes to within 34,649,589 miles (55,763,108 km) of Earth and will be (next to the moon) the brightest object in the night sky. It will attain a magnitude of −2.9 and will appear 25.11 arc seconds wide. At a modest 75-power magnification Mars will look as large as the full moon to the naked eye. Mars will be easy to spot. At the beginning of August it will rise in the east at 10 p.m. and reach its azimuth at about 3:00 a.m. Origin: By the end of August when the two planets are closest, Mars will rise at nightfall and reach its highest point in the sky at 12:30 a.m. That's pretty convenient to see something that no human being has seen in recorded history. So, mark your calendar at the beginning of August to see Mars grow progressively brighter and brighter throughout the month. Share this with your children and grandchildren.
NO ONE ALIVE TODAY WILL EVER SEE THIS AGAIN Although the e-mail itself is correct except for the statement that "it may be as long as 60,000 years before it happens again" (in fact, Mars will definitely come closer in 2287), the hoax stemmed from a misinterpretation of the sentence "At a modest 75-power magnification Mars will look as large as the full moon to the naked eye". The message was often carelessly quoted with a line break in the middle of this sentence, leading some readers to mistakenly believe that "Mars will look as large as the full moon to the naked eye" when, in reality, the sentence means that Mars magnified 75 times will look as big as the Moon does unmagnified. Origin: It is quite obviously scientifically incorrect that Mars, normally never more than a dot in the night sky, could suddenly become visibly large due to normal variations in orbit. If Mars did appear as large as the Moon, it would be so close that it would cause tidal and gravitational effects: Mars has about twice the diameter of the Moon, and hence would be about twice as far away for the same apparent size. It has nine times the mass of the Moon, and so would have about the same tidal effect (tidal effect scales as mass divided by distance cubed: nine times the mass divided by two cubed gives 9/8, roughly the same). Resurfacing: The hoax has resurfaced a number of times since 2003, often showing an altered image of twin moons over the Nilov Monastery, and may continue to do so, always announcing an imminent close Earth–Mars approach. The content of the original email, although almost entirely correct for August 27, 2003, has falsely been redated to announce a new close Earth–Mars approach (the real close approach was in 2003 only), also misinterpreting the original e-mail by saying that Mars will look as large as the Moon. The later e-mails are incorrect, as Mars will not come as close to Earth as it did in 2003 until August 28, 2287.
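The two quantitative claims discussed above are easy to verify with a few lines of arithmetic, using only figures already given in the article:

```python
# Check the two quantitative claims above.

# 1) Magnification: Mars at 25.11 arcseconds, magnified 75x, versus the
#    full Moon at roughly 1800 arcseconds to the naked eye.
mars_arcsec = 25.11
print(mars_arcsec * 75)  # ~1883 arcsec, about the Moon's apparent size

# 2) Tides: tidal acceleration scales as mass / distance^3. A "Moon-sized"
#    Mars (twice the Moon's diameter) would sit at twice the Moon's
#    distance; with ~9x the Moon's mass, its tidal effect would be
#    9 / 2**3 = 1.125x the Moon's -- about the same.
print(9 / 2**3)  # 1.125
```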
**Master regulator** Master regulator: In genetics, a master regulator gene is a regulator gene at the top of a gene regulation hierarchy, particularly in regulatory pathways related to cell fate and differentiation. Examples: Most genes considered master regulators code for transcription factor proteins, which in turn alter the expression of downstream genes in the pathway. Canonical examples of master regulators include Oct-4 (also called POU5F1), SOX2, and NANOG, all transcription factors involved in maintaining pluripotency in stem cells. Master regulators involved in development and morphogenesis can also appear as oncogenes relevant to tumorigenesis and metastasis, as with the Twist transcription factor.Other genes reported as master regulators code for SR proteins, which function as splicing factors, and some noncoding RNAs. Criticism: The master regulator concept has been criticized for being a "simplified paradigm" that fails to account for the multifactorial influences on some cell fates.
**Robot combat** Robot combat: Robot combat is a mode of robot competition in which custom-built machines fight using various methods to incapacitate each other. The machines have generally been remote-controlled vehicles rather than autonomous robots. Robot combat: Robot combat competitions have been made into television series, including Robot Wars in the UK and BattleBots in the US. These shows were originally broadcast in the late 1990s to early 2000s and experienced revivals in the mid-2010s. As well as televised competitions, smaller robot combat events are staged for live audiences, such as those organized by the Robot Fighting League. Robot combat: Robot builders are generally hobbyists, and the complexity and cost of their machines can vary substantially. Robot combat uses weight classes, with the heaviest robots able to exert more power and destructive capability. The rules of competitions are designed for the safety of the builders, operators, and spectators while also providing for an entertaining spectacle. Robot combat arenas are generally surrounded by a bulletproof screen. Robot combat: Competitor robots come in a variety of designs, with different strategies for winning fights. Robot designs typically incorporate weapons for attacking opponents, such as axes, hammers, flippers, and spinning devices. Rules almost always prohibit gun-like weapons as well as other strategies not conducive to the safety and enjoyment of participants and spectators. History: Among the oldest robotic combat competitions extant in the United States are the "Critter Crunch" (founded about 1987) in Denver and "Robot Battles" (founded in 1991) based in the southeastern U.S. Both events are run by members of the "Denver Mad Scientists Society". 1987 – The "Denver Mad Scientists Society" organized the first Critter Crunch competition at Denver's MileHiCon science-fiction convention. 1990 – The First Robot Olympics took place in Glasgow, Scotland, organized by the Turing Institute and marking a 'peacetime' recreational contest between robots from multiple countries. 1991 – Kelly Lockhart organized the first "Robot Battles" competition at Atlanta's DragonCon science-fiction convention. 1994 – Marc Thorpe organized the first Robot Wars competition in San Francisco. Four annual competitions were held from 1994 to 1997. History: 1997 – Rights to the Robot Wars name are transferred to British TV production company Mentorn, which produces the Robot Wars television series. Series 1 and 2 feature competitive games and obstacle courses as well as simple combat. In Series 3, the main competition switches entirely to combat. In the UK, Robot Wars aired 157 episodes across nine series (seven main tournament series and two "Extreme" side-competition series) from 1998 to 2003. Three spin-off series were produced for the United States (2001–2002), two for the Netherlands (2001–2003), and one for Germany (2002). History: 1999 – Former Robot Wars competitors in the U.S. organize a new competition, named BattleBots. The first tournament was shown as a webcast, with the second shown as a cable pay-per-view event. 2000 – BattleBots is picked up as a weekly television program on Comedy Central. It would span five seasons, ending in 2002. 2001 – Robotica appears on The Learning Channel as a weekly series. The format features tests of power, speed, and maneuverability as well as combat. The show ran for three series, ending in 2002.
2002 – Foundation of the Robot Fighting League (RFL), a regulatory body composed of the organizers of robot combat events in the United States, Canada, and Brazil. The body produces a unified set of regulations and promotes the sport. 2003 – Foundation of the Fighting Robots Association (FRA), a regulatory body managing robot combat events in the United Kingdom and Europe. 2004 – Robot Combat is included as an event at the ROBOlympics in San Francisco, California, with competitors from multiple countries. ROBOlympics competitions including Robot Combat run from 2004 to 2008. 2008 – ROBOlympics changes its name to RoboGames and, while most events are not combat-related, robot combat is significantly featured. Events run from 2008 to 2013, 2015–2018, and in 2023. Robot combat matches are live streamed to Twitch starting in 2017. 2009 – Three official BattleBots competitions were managed and filmed in the hope of securing a television sponsorship, though no deals materialized. 2013 – Robot Combat League, a fictional Syfy show themed around robot combat, premieres for one season. 2015 – BattleBots returns to television as a summer series on the ABC television network; it is renewed for a second season, which aired in the summer of 2016. 2016 – Robot Wars returns to British television on BBC2, with two further series in 2017. History: 2017 – Human-piloted "robot" fight: Eagle Prime (produced by MegaBots) vs. Kuratas (produced by Suidobashi Heavy Industries) 2018 – After a year's hiatus, BattleBots returns to television on the Discovery Channel and The Science Channel. New seasons of BattleBots have been produced for the network yearly as of 2023. The first seasons of King of Bots (KoB), Fighting my Bot, This Is Fighting Robots (TIFR) and Clash Bots are held and broadcast in China. After the cancellation of Robot Wars by the BBC, Bugglebots, a beetleweight competition featuring former BB, RW, and KoB competitors, is broadcast on YouTube. Another season of Bugglebots is broadcast in 2019. Norwalk Havoc Robot League (NHRL) is founded, an organization which hosts and live streams the largest 3–30 lb robot combat competition league in the world. History: 2021 – BattleBots: Bounty Hunters, a spin-off of BattleBots, premieres on Discovery+. A sequel series, BattleBots: Champions, premieres in 2022. Rules: Robot combat involves remotely controlled robots fighting in a purpose-built arena. A robot loses when it is immobilized, which may be due to damage inflicted by the other robot, being pushed into a position where it cannot drive (though indefinite holds or pins are typically not permitted), or being removed from the arena. Fights typically have a time limit, after which, if no robot is victorious, a judge or judges evaluate the performances to decide upon a winner. Rules: Weight classes Similar to human combat sports, robot combat is conducted in weight classes, though with maximum limits even in the heaviest class. Heavier robots are able to exert more power and carry stronger armor, and are generally more difficult and expensive to build. Class definitions vary between competitions. The below table shows classifications for two organizations: the UK-based Fighting Robots Association (FRA) and the North American SPARC. Rules: Most televised events feature heavyweights. It is worth noting that the definitions of each weight category have changed over time, with European (FRA) rules for heavyweights advancing from 80 kg, to 100 kg, to 110 kg.
Currently, BattleBots has a weight limit of 250 lb (113 kg). To encourage diversity of design, rules often give an extra weight allotment for robots that can walk rather than roll on wheels. Rules: Safety precautions Given the violent nature of robot fighting, safety is a central factor in the design of the venue, which is generally a sturdy arena, usually constructed of steel, wood, and bullet-resistant clear polycarbonate plastic. The smaller, lighter classes compete in smaller arenas than the heavyweights. Competition rules set limits on construction features that are too dangerous or which could lead to uninteresting contests. Strict limits are placed on materials and pressures used in pneumatic or hydraulic actuators, and fail-safe systems are required for electronic control circuits. Generally off-limits for use as weapons are nets, liquids, deliberate radio jamming, high-voltage electric discharge, untethered projectiles, and usually fire. Rules: Robot fighting associations The sport has no overall governing body, though some regional associations oversee several events in managerial or advisory capacities with published rulesets. These include: Robot Fighting League (RFL), primarily U.S., operated 2002–2012 Fighting Robot Association (FRA), U.K. and Europe, 2003–present Standardised Procedures for the Advancement of Robot Combat (SPARC), U.S., 2015–present The major televised competitions have operated outside of these associations. Combat robot weaponry and design: An effective combat robot must have some method of damaging or controlling the actions of its opponent while at the same time protecting itself from aggression. The tactics employed by combat robot operators and the robot designs which support those tactics are numerous. Although some robots have multiple weapons, the more successful competitors concentrate on a single form of attack. This is a list of most of the basic types of weapons. Most robot weaponry falls into one of the following categories: Inactive weaponry Inactive weaponry does not rely on a power source independent of the robot's drive system. Many modern rulesets, such as the rebooted versions of BattleBots and Robot Wars, require robots to have an active weapon in order to improve the visual spectacle, thus eliminating certain designs such as torque-reaction axlebots and thwackbots, and requiring other designs such as wedges and rammers to incorporate some other kind of weapon. Combat robot weaponry and design: Rammer – Robots employing high-power drive trains and heavy armor are able to use their speed and maneuverability to crash into their opponent repeatedly with the hope of damaging weapons and vital components. Their pushing power may also be used to shove their opponent into arena traps. Rammers (AKA 'bricks') typically have four or six wheels for traction and stability and are often designed to be fully operational when inverted. Because many modern rulesets require all robots to have a moving weapon, modern rammers are often equipped with other weapon types. Robot Wars Series 6 champion Tornado and Series 7 runner-up Storm II were effective rammers. The former used interchangeable weaponry (usually a small spinning drum) while the latter opted for a lifting arm to avoid disqualification. BattleBots 3.0 superheavyweight champion Vladiator was a rammer armed with a small lifting spike. The D2 beetleweight kit bot is a popular and effective rammer design for entry-level hobbyists.
Combat robot weaponry and design: Wedge – Similar in concept to a rammer, the wedge uses a low-clearance inclined ramp or scoop to move in under an opponent and break its contact with the arena floor, decreasing its mobility and rendering it easy to push into a wall or trap. The wedge is also useful in deflecting attacks by other robots. Small wedgelets are used to lift an opposing bot and feed it to a secondary weapon system, and a small wedge may be attached to the rear of a robot with other weaponry as a 'backup' in case the main weapon fails. Like rammers, modern wedges must be combined with some other weapon in order to be legal in some modern competitions. The shallower the wedge's incline, the better its chances of lifting the opposing bot off the ground. The 1995 US Robot Wars middleweight champion La Machine was an early and effective static wedge design, as was Robot Wars Series 1 champion Roadblock in 1997. Two-time lightweight BattleBots champion Dr. Inferno Jr. was a low rectangular machine surrounded on all sides by hinged wedges. 2018 BattleBots competitor DUCK! utilized a powered lifting wedge. Original Sin is a four-wheeled ramming robot which has won eight heavyweight RoboGames competitions thanks to a combination of durability and hinged wedges. The Panzer series of robots has managed to win several competitions (Robotica season 3 and both seasons of Robot Wars: Extreme Warriors) with six-wheeled drive and a powered or unpowered wedge.
Combat robot weaponry and design: Thwackbot – A narrow, high-speed, usually two-wheeled drive base attached to a long boom with an impact weapon on the end creates a robot that can spin in place at high speed, swinging the weapon in a horizontal circle. The simplicity and durability of the design are appealing, but the robot cannot be made to move in a controlled manner while spinning without employing sophisticated electronics (see Melty-Brain Spinner, below). The 1995 US Robot Wars lightweight champion Test Toaster 1 was a thwackbot, as were T-Wrex and Golddigger from the BattleBots series.
Combat robot weaponry and design: Torque Reaction – A variant on the thwackbot is the torque reaction hammer, also known as the axlebot. These robots have two very large wheels with the small body of the robot hanging in between them. A long weapon boom carries a vertically oriented hammer, pick, or axe on the end. On acceleration, the weapon boom swings upward and over to the rear of the robot to offset the motor torque. When the robot brakes or reverses direction, the weapon swings forcibly back over the top and, with luck, impacts the opponent. These robots are simple and can put on a flashy, aggressive show, but their attack power is relatively small and, like thwackbots, they can be hard to control. BattleBots 2.0 middleweight champion Spaz was a torque reaction pickaxe robot, whilst Robot Wars Series 4 grand finalist Stinger primarily relied on a bludgeoning mace. BattleBots 3.0–5.0 semifinalist Overkill combined a wedge with a massive swinging blade.
Combat robot weaponry and design: Spinners
Spinners are weaponry based around blades, cylinders, discs, or bars rotating at high speed around an axis. This is among the most popular and destructive forms of weaponry, thanks to its potential to quickly deliver a large amount of kinetic energy over a small area.
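The destructive potential of a spinner can be estimated from its stored rotational kinetic energy. As a rough illustration, with assumed, representative numbers rather than the specifications of any particular robot:

$$E = \frac{1}{2} I \omega^{2}, \qquad I_{\text{bar}} = \frac{1}{12} m L^{2}$$

For a hypothetical 20 kg, 1 m spinning bar at 2,000 rpm ($\omega \approx 209\ \text{rad/s}$), $I \approx 1.7\ \text{kg·m}^2$ and $E \approx \frac{1}{2}(1.7)(209)^{2} \approx 3.7 \times 10^{4}\ \text{J}$, i.e. tens of kilojoules available in a single impact, which is why arenas hosting spinners need heavy containment.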
Combat robot weaponry and design: Saw Blades – A popular weapon in the early years of robotic combat, these robots use a dedicated motor to power either a modified chainsaw or circular saw, or a custom-built cutting disc, usually at high speeds (up to 10,000 rpm). The serrated blade is used to slice through an opponent's armor to try to reach its internal components. These weapons can create spectacular showers of sparks, and are easy to combine with other designs, but can be ineffective against robots with tougher armor. The aforementioned Robot Wars champion Roadblock had a rear-mounted circular saw in addition to its wedge, while Series 4 runner-up Pussycat had a custom cutting disc with four serrated blades. BattleBots 5.0 middleweight runner-up S.O.B. used a wide metal box (a "dustpan") in conjunction with a saw blade mounted on an arm. While true saws are obsolete in the higher weight classes, a vertical spinner mounted on an articulating arm has seen renewed popularity in recent years: BattleBots 2023 champion SawBlaze combines a three-pronged dustpan design with a "hammer saw", a spinning blade mounted on a 180° pivoting arm.
Combat robot weaponry and design: Vertical Spinner – A vertical disc or bar spinner consists of a thick circular disc or flat bar mounted on a horizontal axis. Rather than many small teeth to cut like a saw, most spinners have a few large teeth to catch opponents and either throw them into the air or rip off chunks of armor. Vertical spinners are ubiquitous at all levels of competition, especially in the US. A majority of BattleBots competitors use spinning vertical discs or bars, including 1.0 lightweight champion Backlash, its heavyweight brother Nightmare, 2018–2019 champion Bite Force, and 2021 champion End Game, among many others. 2022 BattleBots champion Tantrum bears a "puncher", a small vertical spinner mounted on a sliding mechanism. Vertical spinners are less common in Robot Wars, with Series 5–6 competitor S3, Series 7 grand finalist X-Terminator, and Series 9–10 competitor Aftershock as three notable exceptions.
Combat robot weaponry and design: Drum Spinner – Drum spinners are a variant of vertical spinners, consisting of a thick, short toothed cylinder resembling a steamroller's wheel spinning on a horizontal axis. Drum spinners can accelerate faster than vertical discs or bars, but have less reach. A good drum spinner can land a solid hit almost every time it contacts another robot and send it flying as high as a conventional vertical disc or bar would. Drums are also much thicker, meaning almost the entire front of the robot is taken up by the weapon. Drum spinners have a tendency to suffer from severe drive issues due to the large gyroscopic forces their weapons generate. Among the most successful drum spinners are those designed by the Brazilian team RioBotz: BattleBots competitor Minotaur and its RoboGames equivalent, Touro Maximus. Four-wheeled drum spinners are a popular design in China, with Chiyung Jinlun and Xiake (from the same team) as reliable finalists in televised competitions. Drum spinners are also effective at lower weight classes, as shown by two-time RoboGames lightweight champion UnMakerBot, NHRL beetleweight champion Shredit Bro, and the commercially available Weta beetleweight kit bots.
Combat robot weaponry and design: Eggbeater – An eggbeater spinner is similar to a drum but uses a broad rectangular frame, rather than a solid cylinder, as its weapon shape.
Eggbeaters are lighter than drums but, due to their less aerodynamic design, they are usually most effective at lower weight classes. The 3 lb (beetleweight) robot Lynx has dominated its weight class to such an extent that it is being retired to give other teams a chance to win.
Combat robot weaponry and design: Vertical discs, bars, drums, and eggbeaters form a continuum, to the point where it can be difficult to cleanly define each weapon type. For example, BattleBots 2019 and 2022 runner-up Witch Doctor has used a two-toothed "drisc", which is narrower than a drum but broader than a disc. BattleBots competitor Copperhead uses a broad steel drum with notches cut out, giving it similar properties to an eggbeater. Brazilian team Ua!rrior has fielded successful drisc and eggbeater bots at multiple weight classes, including Federal M.T. (four-time RoboGames lightweight champion), General (two-time RoboGames middleweight champion), and Black Dragon (2019–present BattleBots competitor).
Combat robot weaponry and design: Horizontal Spinner – Horizontal spinners rotate around a vertical axis, with the rotating blade or disc typically mounted underneath the chassis or at mid-height on the front of the robot. Undercutters have a spinner mounted low enough to almost scrape the ground. Thanks to their broad reach, horizontal spinners can impart large impacts and may throw other robots across the arena floor. Tombstone, a spinner armed with a horizontal bar, was the champion of BattleBots 2016, and its sister machine Last Rites has been a renowned competitor in RoboGames since 2005. Notable British horizontal spinners include Hypno-Disc (a grand finalist in Robot Wars Series 3–5) and Carbide (champion of Robot Wars Series 9). Some robots have a bar-shaped horizontal spinner mounted above the center of a low rectangular chassis; horizontal spinners with this design include three-time BattleBots middleweight champion Hazard, American mid-to-late 2000s competitor Brutality, and modern BattleBots competitors Icewave and Bloodsport.
Combat robot weaponry and design: Full Body Spinner – Taking the concept of the spinner to the extreme, a full-body spinner rotates a massive horizontally spinning mechanism around the entire circumference of the robot as a stored-energy weapon. Other robot components (batteries, the weapon motor casing) may be attached to the shell to increase the spinning mass while keeping the mass of the drive train to a minimum. Full-body spinners require more time to spin the weapon up to speed, typically cannot self-right without the assistance of stabilizing bars, and can be unstable: the original BattleBots competitor Mauler was an infamous example in its first few years of competition.
Combat robot weaponry and design: Shell spinner – Shell spinners are the most common variety of full-body spinner, encasing the robot in a spinning shell powered from below by an electric motor. These shells may be cylindrical, conical, or dome-shaped. The 1995 US Robot Wars heavyweight co-champion Blendo was the first effective shell spinner, with its weapon derived from a metal wok. Among the most successful shell spinners are three-time BattleBots lightweight champion Ziggo and Robot Wars Series 7 champion Typhoon 2. Some shell spinners have competed nearly continuously since 2001, including Team LOGICOM's Shrederator series and Team Robotic Death Company's Megabyte; both teams have seen success in untelevised and televised events in the United States and China.
Combat robot weaponry and design: Ring / Rim spinner – Robots with ring or rim spinners impact opponents with a ring-shaped blade or battering surface spinning around the circumference of the chassis. These designs have the advantage of invertibility, at the cost of complexity, since they rely on a series of gears to transmit motor power to the external ring. BattleBots 2016 competitor The Ringmaster is an example of a ring spinner.
Combat robot weaponry and design: Cage / Overhead spinner – A cage spinner impacts opponents with a spinning open frame resembling a helicopter rotor rather than a solid shell. These spinners are particularly uncommon. The most notable example is BattleBots 3.0 heavyweight champion Son of Whyachi, armed with bludgeoning hammer heads attached to a triangular spinning frame.
Combat robot weaponry and design: Full-body drum spinner – A full-body drum spinner is similar in construction to a thwackbot, with a tubular two-wheeled chassis encased by a vertically spinning cylindrical shell. These designs are rare and notoriously unreliable despite their high damage potential. Examples include Robot Wars competitor Barber-Ous and BattleBots competitor Axe Backwards.
Combat robot weaponry and design: Melty-Brain Spinner (also known as Tornado Drive or Translational Drift) – A variation of the full-body spinner designed to operate without an independent weapon motor. These robots utilize a combination of rotational sensors and fine motor control to drive in such a way that the entire robot can simultaneously rotate on the spot and move across the arena in a controlled manner. The drive is usually paired with an LED system that indicates to the driver the direction the robot will move when commanded to drive forwards. This kind of design tends to be incorporated into invertible builds and requires spin-up time like other spinners. One of the earliest known examples is BattleBots lightweight Herr Gepoünden, a thwackbot which reached the quarter-finals of season 3.0 and persisted in untelevised competitions until 2017, long past the heyday of other lightweight thwackbots. The most successful heavyweight melty-brain spinner is Nuts 2, which had chains connected to two "flail" weapons on either side of the machine. Nuts 2 ultimately finished joint third (with Behemoth) in Series 10 of Robot Wars, ending the dominant run of Series 8 finalist and Series 9 champion Carbide along the way by breaking that robot's weapon chain.
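To make the translational-drift idea concrete, here is a minimal control-loop sketch in Python. It is illustrative only: the constants, function names, and the simple bang-bang pulsing scheme are assumptions chosen for exposition, not the firmware of any actual melty-brain robot, which would also need accurate spin-rate sensing and calibration.

```python
import math

# Illustrative "melty-brain" (translational drift) control step for a
# two-wheeled full-body spinner. All names and constants are hypothetical.

SPIN_POWER = 0.8                  # base throttle keeping the robot spinning
PULSE_GAIN = 0.2                  # extra throttle pulsed in to produce drift
BEACON_ARC = math.radians(15)     # heading arc over which the LED is lit

def wrap_angle(angle):
    """Wrap an angle into the range [0, 2*pi)."""
    return angle % (2.0 * math.pi)

def melty_step(heading, desired_direction):
    """One control-loop iteration.

    heading: current rotation angle of the chassis, typically obtained by
        integrating a gyro's angular-rate signal (radians).
    desired_direction: direction the driver is commanding via the
        transmitter sticks (radians, arena frame).

    Returns (left_power, right_power, led_on). Each wheel is driven harder
    during the half-revolution in which its thrust has a component toward
    the desired direction; averaged over a full spin, the robot keeps
    rotating but also drifts that way.
    """
    error = wrap_angle(heading - desired_direction)

    # Bang-bang modulation: boost one wheel for half of each revolution and
    # the other wheel for the opposite half, giving a net translation per spin.
    if error < math.pi:
        left, right = SPIN_POWER + PULSE_GAIN, SPIN_POWER - PULSE_GAIN
    else:
        left, right = SPIN_POWER - PULSE_GAIN, SPIN_POWER + PULSE_GAIN

    # Light the LED once per revolution so the driver can see "forward".
    led_on = error < BEACON_ARC
    return left, right, led_on
```

A real implementation would refine this with smooth (for example sinusoidal) modulation and drift compensation, but the heading-synchronized pulsing above is the core of the technique.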
Combat robot weaponry and design: Control bot weaponry
Lifter – Using tactics similar to a wedge, the lifter uses a powered arm, prow, or platform to get underneath the opponent and lift it away from the arena surface to remove its maneuverability. The lifter may then push the other robot toward arena traps or attempt to toss the opponent onto its back. The lifter is typically powered by either an electric or pneumatic actuator. Lifters were most effective in older competitions, when self-righting mechanisms and high-power weaponry were less common. Two-time US Robot Wars and four-time BattleBots heavyweight champion Biohazard used an electric lifting arm to great effect. Lifting forks were utilized by Robot Wars Series 2 champion Panic Attack and two-time BattleBots heavyweight champion Vlad the Impaler. Thanks to their narrow profile and simplicity, lifters are often combined with other weaponry: Sewer Snake, four-time RoboGames heavyweight champion, was a six-wheeled rammer with a lifting wedge, and modern BattleBots competitor Whiplash has seen success by combining a small spinning disc and lifting arm into a single weapon.
Combat robot weaponry and design: Flipper – Although mechanically resembling a lifter, the flipper uses much higher levels of pneumatic power to launch a lifting arm or panel upward at high acceleration, similar to a catapult. An effective flipper can throw opponents end-over-end through the air, causing damage from the landing impact or, in Robot Wars, tossing them completely out of the arena. Flippers use a large volume of compressed gas and often have a limited number of effective attacks before their supply runs low.
Combat robot weaponry and design: CO2-powered flippers are among the most abundant weapon types in UK heavyweight competitions. The two-time Robot Wars champion Chaos 2 used a flipping plate powerful enough to throw other robots out of the arena. Other successful Robot Wars flippers include Series 5 runner-up Bigger Brother, Series 8 champion Apollo, and Series 10 champion Eruption, among many others. Behemoth, armed with a flipping scoop, has been competing continuously since Series 2 in 1998 and finally reached joint third place in Series 10 in 2017. Some British flippers have been significantly more successful in untelevised competitions, such as Ripper, Kronic, and the Iron Awe series. British flippers have also competed in China, including Vulcan (from Team Apollo) and Tánshè (TIFR runner-up, from Team Hurtz). While most flippers operate with the flipping mechanism hinged at the machine's rear, Robot Wars competitor Firestorm achieved remarkable success with a front-hinged flipper, placing third in Robot Wars on three separate occasions (Series 3, 5, and 6) and never failing to advance to the series' semifinal rounds. Robot Wars Series 2 runner-up Cassius also utilized a front-hinged flipping arm.
Combat robot weaponry and design: Most American flippers utilize nitrogen gas; carbon dioxide was also used in the original BattleBots, but has since been banned. Team Inertia Labs has had great success in BattleBots with robots utilizing a characteristic flipping-arm design. Their machines include BattleBots 4.0 superheavyweight champion Toro, BattleBots 5.0 middleweight champion T-Minus, and BattleBots 2015 semi-finalist Bronco. A similar flipping mechanism was used by 2006–2010 RoboGames superheavyweight competitor Ziggy, a machine so dominant that it has been cited as one of several factors responsible for the retirement of the superheavyweight class. Ziggy's heavyweight successor, Ziggy Jr., competes in BattleBots under the name Lucky.
Combat robot weaponry and design: Experimental flippers have seen some success in recent seasons of BattleBots. Hydra, introduced by Team Whyachi in 2019, is able to store a large number of powerful flips by relying on compressed hydraulic fluid rather than pneumatic gas. Blip, introduced by Team Seems Reasonable in 2021, powers its flipping plate using energy stored in a cord wound by an electric flywheel.
Combat robot weaponry and design: Stabber – Mechanically similar to the flipper is the stabber, a rare weapon type which throws or stabs opponents forward with a pneumatic spike. An effective stabber can penetrate the opponent and damage vital internal parts; when a stabber fails to penetrate, it throws the opponent back across the arena into walls or traps. Stabbers typically use a large volume of compressed gas, which limits the number of times they can fire their weapon in a fight.
Classic BattleBots superheavyweight competitor Rammstein was a stabber.
Combat robot weaponry and design: Clamper / Grabber – Clampers and grabbers are examples of robots oriented around controlling and grappling their opponents rather than dealing direct damage. They make use of an arm or claw that descends from above to secure the opposing robot in place on a wedge or lifting platform. In some clampers, the entire assembly may lift and carry the opponent wherever the operator pleases; these were called grapplers. Diesector, the superheavyweight champion of BattleBots 2.0 and 5.0, combined an electric clamper with smaller hammer arms. Middleweight BattleBots 4.0 runner-up Complete Control was another successful lifting clamper. Big Nipper, a horizontal grabber/lifter, won several untelevised championships in the UK after the end of Robot Wars. Bite Force won the 2015 season of BattleBots using a grabbing arm as its only form of weaponry, though in subsequent seasons its design was modified into a vertical spinner on a four-wheeled chassis.
Combat robot weaponry and design: Crusher – Crushers are similar to grabbers, though they emphasize damage via one or more piercing hydraulic arms. Like spinners, crushers can be separated into horizontal and vertical variants. Robot Wars Series 5 champion Razer was the first vertical crusher, and by far the most successful of its era. Another UK-built vertical crusher, Spectre, won the first King of Bots tournament in 2018, and has competed in BattleBots 2019 and 2023 under the name Quantum. Two-time Robot Wars Annihilator champion Kan-Opener was armed with a pair of horizontal crushing claws, one of the few examples of a successful horizontal crusher.
Combat robot weaponry and design: Hammers and axes
Swinging an overhead axe, spike, or hammer at high speed onto an opponent offers another method of attacking the vulnerable top surface. The weapon is typically driven by a pneumatic or electric actuator via a rack and pinion or a direct mechanical linkage. The attack may damage the opposing robot directly, or may lodge in it and provide a handle for dragging it toward a trap. Several successful hammerbots have been designed by the UK's Team Hurtz: BattleBots 1.0 heavyweight semi-finalist Killerhurtz was armed with a spike-headed pneumatic axe, Robot Wars Series 6 grand finalist Terrorhurtz possessed a two-bladed pneumatic axe, and BattleBots 2016 quarter-finalist Beta utilized an electric hammer. Robot Wars Series 2 grand finalist Killertron was one of the earliest effective examples of an axebot, with a two-headed electrically powered pickaxe. Other successful hammerbots include Deadblow (BattleBots 1.0 middleweight runner-up), FrenZy (BattleBots 2.0 heavyweight semi-finalist), Dominator 2 (Robot Wars Series 4–6 competitor), Thor (Robot Wars Series 6–10 competitor), Chomp (BattleBots 2016 quarter-finalist), and Shatter! (BattleBots 2021 quarter-finalist). Chomp is a rare example of a combat robot with autonomous technology, with hardware and software integrated so that it always faces its opponent during a match.
Combat robot weaponry and design: Interchangeable weaponry
It is increasingly common for robots to have interchangeable weaponry or other modular components, allowing them to adapt to a wide range of opponents and increasing their versatility; such robots are often referred to as "Swiss army bots", in reference to Swiss army knives.
Arguably the earliest example was Robot Wars Series 1 contestant Plunderbird, which could change between a pneumatic spike and a circular saw on an extendable arm. Successful Swiss army bots include Robot Wars Series 6 champion Tornado, BattleBots 2016 runner-up Bombshell, and top-ranked US beetleweight Silent Spring. Sometimes, robots that were not originally Swiss army bots have had their weapons changed or altered on the fly, typically due to malfunctions. In BattleBots 2015, Ghost Raptor's spinning bar weapon broke in its first fight; builder Chuck Pitzer then improvised new weapons for each following fight, including a "De-Icer" arm attachment which it used to unbalance and defeat bar spinner Icewave in the quarter-finals.
Combat robot weaponry and design: Prohibited weaponry
Since the first robot combat competitions, some types of weapons have been prohibited either because they violated the spirit of the competition or because they could not be used safely. Prohibited weapons have generally included:
Radio jamming
High-voltage electric discharge
Liquids (glue, oil, water, corrosives...)
Fire (except in the new BattleBots and Norwalk Havoc)
Explosives
Untethered projectiles (except in BattleBots from the 2018 season onwards)
Entanglers (except in Robot Wars from Series 10 onwards)
Lasers above 1 milliwatt
Visual obstruction
Halon – a fire-extinguishing gas effective as a weapon for stopping internal combustion engines; current rules do not specifically ban Halon, as it is no longer commercially available.
Individual competitions have made exceptions to the above list. Notably, the Robotica competitions allowed flame weapons and the release of limited quantities of liquids on a case-by-case basis. The modern series of BattleBots also permits the use of flamethrowers and, as of 2016, untethered projectiles, provided that the latter are merely for show. Competitions may also restrict or ban certain otherwise legal weapons, such as banning spinners and other high-power weapons at events where the arena cannot safely contain them; the well-known Sportsman ruleset is an example of this, and the new BattleBots recently banned the usage of carbon dioxide gas. Arena traps have also been granted exceptions to the list of prohibited weapons: Robot Wars in particular used flame devices both in its stationary traps and on one of the roaming "House Robots".
Combat robot weaponry and design: Unusual weaponry and tactics
A wide variety of unusual weapons and special design approaches have been tried with varying success, and several more types of weapons would have been tried had they not been prohibited.
Combat robot weaponry and design: SRiMech – Many robots are incapable of driving inverted (upside-down) due to their shape, weaponry, or both, and so risk immobilization if turned over off their wheels. A SRiMech (self-righting mechanism) is not inherently a form of weaponry, but rather an active design element that returns an inverted robot to mobility in the upright state. The SRiMech is typically an electric or pneumatic arm or extension on the upper surface of the robot which pushes against the arena floor to roll or flip the robot upright. Most flippers, some lifters, and even some carefully designed axes or vertical spinners can double as SRiMechs. Team Nightmare's lightweight vertical spinner Backlash was designed such that when flipped it would hit the ground with its spinning disc and kick back upright (though this never worked in practice).
The first successful unaided use of a SRiMech in competition was at the 1997 US Robot Wars, when the immobilized Vlad the Impaler used a dedicated pneumatic device to pop back upright in a match against Biohazard. The first competitor to use a SRiMech in a televised competition was Cassius, using its front-hinged flipping arm to right itself in Robot Wars Series 2.
Combat robot weaponry and design: Entangling weapons – Several early US Robot Wars competitors sought to immobilize their opponents with entangling weapons; nets and streamers of adhesive tape were both tried with mixed success. Entangling weapons were prohibited in Robot Wars and BattleBots from 1997 onward, but the Robotica competitions allowed nets, magnets, and other entanglers on a case-by-case basis, and Robot Wars allowed limited use of entanglers in Series 10. One of the more infamous recent uses of entanglers was a BattleBots fight between Complete Control and Ghost Raptor in the first reboot season, in which a net hidden in a "present" held by Complete Control was rammed into Ghost Raptor, jamming its spinner and other mechanisms. The match was stopped. Derek Young, the driver and captain of Complete Control, noted that entanglers were not explicitly forbidden in the new ruleset, which was true, but a rematch was scheduled with the explicit proviso that nets were forbidden from then on.
Combat robot weaponry and design: Flame weapons – Although prohibited for use by competitors in Robot Wars and the first edition (2000–05) of BattleBots, the rules for Robotica, the Robot Fighting League, and the post-2015 version of BattleBots do allow flame weapons under some circumstances. RFL superheavyweight competitor Alcoholic Stepfather (unique for using mecanum wheels for movement around the arena) and Robotica competitor Solar Flare, as well as the later BattleBots series competitors Free Shipping and the overhead pneumatic-pickaxe-armed Chomp, all employed gaseous flamethrower weapons. BattleBots competitor Gruff has competed with moderate success using high-power flamethrowers (two as of season 5) as its main weapon, assisted by a lifter. Flamethrowers are seldom decisive weapons, mainly because safety rules limit their power, but they are audience favorites.
Combat robot weaponry and design: Smothering weapons – The BattleBots and Robot Wars lightweight competitor Tentoumushi used a large plastic sandbox cover shaped like a ladybug ("tentoumushi" being Japanese for ladybug) on a powered arm to drop down over opposing robots, covering and encircling them. Once covered, it was difficult to tell what the opponent was doing and who was dragging whom around the arena. One version of the robot had a circular saw concealed under the cover to inflict physical damage; another had a small grappling hook.
Combat robot weaponry and design: Tethered projectiles – Although tethered projectiles are specifically allowed and discussed in major rulesets, their use is quite rare. Neptune fought at BattleBots 3.0 with pneumatic spears on tethers, but was unable to damage its opponent. During a friendly weapons test, Team Juggerbot allowed the builders of Neptune to take a couple of shots against their bot; one of the two shots penetrated an aluminum panel below the main armor, while the other bounced off the top armor.
Combat robot weaponry and design: Multibots (clusterbots) – A single robot that breaks apart into multiple, independently controlled robots has appealed to a few competitors.
The Robot Wars heavyweight Gemini and the BattleBots middleweight Pack Raptors were two-part multibots that had some success. The rules concerning clusterbots have varied over the years. Some rulesets state that 50% of the clusterbot has to be immobilised to eliminate the robot from the tournament: in the Dutch version of Robot Wars, there was a three-part multibot named √3, and although one of its parts was tossed out of the arena by Matilda, the robot as a whole was still deemed mobile, and the other two parts of √3 did enough to win the match. Other rulesets require all of a multibot's segments to be incapacitated before a knock-out victory can be declared, with segments lacking active weapons no longer counting; current Robot Fighting League match rules take this approach. In recent years, successful heavyweight multibots include Thunder and Lightning (a pair of vertical spinners which came in fourth place in King of Bots season 1) and Crash n' Burn (a pair of wedgebots competing in RoboGames).
Combat robot weaponry and design: Minibots (nuisancebots) – Similar in concept to multibots, minibots are small robots, typically no larger than a featherweight, that fight alongside a larger main robot with the aim of harassing or distracting opponents. They are often sacrificial in nature and have minimal weaponry. BattleBots 2015 competitor Witch Doctor was accompanied by a featherweight minibot named Shaman that was equipped with a flamethrower, and which gained significant popularity for its spirited performances during battles. Other BattleBots competitors have also used minibots successfully, including Son of Whyachi in 2016 and 2018 competitor WAR Hawk, whose beetleweight minibot WAR Stop was equipped with a wedge.
Combat robot weaponry and design: Halon gas – Rhino fought at the 1997 US Robot Wars event with a halon-gas fire extinguisher, which was very effective at stopping internal combustion engines. Gas weapons of this nature were promptly prohibited from future competitions.
Combat robot weaponry and design: Pneumatic Cannon – First implemented by season-eight BattleBots competitor Double Jeopardy, the weapon fired a 5-pound "slug" at 190 mph, reportedly exerting 4,500 pounds of force upon impact. The robot did not perform well in competition, as it had only one shot at landing a good hit; after firing, it had to rely on pushing its opponents, at which it failed. It subsequently upgraded its cannon to be more powerful and added the ability to fire more than one shot, though to this day it has only one win under its belt.
Unusual propulsion: The great majority of combat robots roll on wheels, which are very effective on the smooth surfaces used for typical robot combat competition. Other propulsion strategies do, however, pop up with some frequency.
Unusual propulsion: Tank treads – Numerous combat robots have used treads or belts in place of wheels in an attempt to gain additional traction. Treads are generally heavier and more vulnerable to damage than a wheeled system, and offer no particular traction advantage on the types of surfaces common in robot combat; most uses of treads are for their striking appearance. The Robot Wars competitors Track-tion, 101, and Mortis, along with the BattleBots superheavyweight Ronin, used treads. Bite Force, the winner of the 2015 BattleBots competition, originally used magnets embedded in its treads in an attempt to gain extra downforce without extra weight.
Current users of treads include 2022 NHRL champion and BattleBots contestant Emulsifier and BattleBots fan favorite Rusty.
Unusual propulsion: Walking – The spectacle of a multi-legged robot walking across the arena into combat is a big audience favorite. Robot combat rules have typically given walking robots an additional weight allowance to offset their slower speed and the complexity of the mechanism, and to encourage their construction. What the event organizers had in mind was something like the spider-legged robot Mechadon, but what was most often produced were simple rule-shaving propulsion systems that attempted to save as much of the extra weight allowance as possible for additional weaponry. Attempts at more restrictive definitions of "walking" have effectively eliminated walking robots from competition. BattleBots heavyweight champion Son of Whyachi used a controversial cam-driven "shufflebot" propulsion system, which was promptly declared ineligible for the additional weight allowance at subsequent competitions.
Unusual propulsion: Gyroscopic precession – Used in the antweight robot Gyrobot, as well as the BattleBots competitor Wrecks, this system uses a gyroscope and stationary feet that lift as the entire robot rotates, due to gyroscopic precession, when the gyroscope is tilted by a servo motor. The design can use the gyroscope as a spinning weapon (horizontal or vertical), which allows for efficient double use of the gyroscope's mass. Although Gyrobot and Wrecks appear to be walking as they translate across the arena, they are not classified as walking robots under current rules. This unusual drive train produces strange and often unpredictable movements, though it has been shown to be successful in combat.
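A brief sketch of the underlying physics, stated generally rather than for either robot: a flywheel with spin angular momentum $L = I\omega$ responds to an applied tilting torque $\tau$ (here supplied by the servo) not by tipping over but by precessing at the rate

$$\Omega = \frac{\tau}{I\omega}$$

so each timed tilt slews the chassis about its planted feet by a predictable angle, and repeating the cycle produces a crude gait while the same spinning mass doubles as the weapon.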
Unusual propulsion: Suction fan – Several competitors have experimented with the use of fans to evacuate air from a low-clearance shell, sucking the robot down onto the arena surface to add traction. Robotica competitor Armorgeddon used a suction fan to increase traction and pushing power, and Robot Wars and BattleBots competitor Killerhurtz experimented with a suction fan to counter the forces from its hammer/axe weapon, a system that was demonstrated to give the robot the ability to climb walls but was never utilised in combat. Similar designs have appeared in robot-sumo competitions, where traction is a key factor.
Unusual propulsion: Magnetic wheels – Another approach to gaining traction and stability involves the use of rare-earth magnets, either ring-shaped as wheels or simply attached to the robot's base. This is, naturally, only effective in arenas which have magnetic metal surfaces. Due to the expense of large ring magnets, this trick has been used almost exclusively in three-pound-and-under "insect class" robots, although the lightweight BattleBots competitor General Gau tried implementing them. A multibot named Hammer and Anvil would later use magnets in the lightweight category, with some success. Heavyweight Robotica competitor Hot Wheels attempted to use a large chassis-mounted magnet to gain traction and apparent weight, and Beta unsuccessfully attempted to use an electromagnet to counter the reaction forces of its massive hammer weapon at the BattleBots competition; this, however, was removed for future competitions, as the power of the magnets rendered the robot unable to move.
Unusual propulsion: Mecanum wheels – Together with a specialized motor control system, mecanum wheels allow controlled motion in any direction without turning, as demonstrated by Alcoholic Stepfather in a 2004 match and by the hammer-wielding BattleBots competitor Shatter! in 2019.
Unusual propulsion: Flying – The 1995 US Robot Wars event had a flying competitor: S.P.S. #2 was a lighter-than-air craft buoyed by three weather balloons and propelled by small electric fans. It attempted to drop a net on the opponent. Nearly invulnerable to attack, it won its first match against Orb of Doom (see Rolling sphere, below), but ventured too close to the arena floor in the second match and was dragged down and "popped". Starting in 2016, BattleBots permitted the use of drones as "nuisance bots"; these typically proved hard to control, and one was memorably swatted out of the air by a rake that competitor HyperShock had attached to its lifting forks. These drones are usually armed with flamethrowers, but there is no evidence that they have ever had an effect on an opponent, and as of World Championship VII only one drone, named Spitfire, remains, and it is used very infrequently.
Unusual propulsion: Rolling sphere – The aforementioned Orb of Doom was a featherweight competitor at the 1995 US Robot Wars. It consisted of a lightweight, rigid shell made of carbon fiber-Kevlar cloth and polyester resin, applied over a foam core pattern. Inside was an offset-weight mechanism made from a battery-powered electric drill. A similar-looking robot named Psychosprout appeared in the UK Robot Wars.
Unusual propulsion: Rolling tube – Snake competed at BattleBots and the US Robot Wars using a series of actuators to bend its triangular-cross-section tubular body to roll, writhe, and slither across the arena.
Unusual propulsion: Shuffling – Refers to the movement of robots propelled by a cam-driven system (see Walking, above).
Unusual propulsion: Brush drive – A brush drive uses brushes affixed to the bottom of the robot, akin to non-combat bristlebots. These work in tandem with a pair of vertical spinning weapons to make the robot slide across the arena. This form of locomotion has been utilized by RoboGames 2017 competitor Clean Sweeper.
Unusual propulsion: Magnets and rapid deceleration – While it has never been done, an entrant to BattleBots' seventh season, named Bad Penny, had planned on using a magnetic system combined with a braking system to move the robot around the arena. Six magnets would pull down on the floor with over 2,000 pounds (about 907 kilograms) of force. To move, the robot would rapidly brake its spinning ring, which ran around the entire robot, while simultaneously turning off five of the six magnets; this would force the robot to pivot around the one magnet still energized.
Unusual propulsion: Hopping – Using pneumatic legs or spikes, robots such as the featherweight Spazhammer were capable of moving around the arena by repeatedly stabbing the floor.
Unusual propulsion: Propeller – No Fly Zone, an antweight competing at RoboGames since 2015, drives forward using thrust generated by a diagonal spinning bar on the front of the robot, similar to an airplane propeller. There is only a single wheel on the back of the robot, used for steering rather than forward movement. A similar heavyweight machine, Crossfire, competed in the first season of King of Bots.
Robot-sumo: Robot-sumo is a related sport in which robots try to shove each other out of a ring rather than destroy or disable each other. Unlike remote-controlled combat robots, the machines in these competitions are often autonomous.
**Austin Model 1**
Austin Model 1: Austin Model 1, or AM1, is a semi-empirical method for the quantum calculation of molecular electronic structure in computational chemistry. It is based on the Neglect of Differential Diatomic Overlap (NDDO) integral approximation; specifically, it is a generalization of the modified neglect of differential diatomic overlap (MNDO) approximation. Related methods are PM3 and the older MINDO.
Austin Model 1: AM1 was developed by Michael Dewar and co-workers and published in 1985. AM1 is an attempt to improve the MNDO model by reducing the repulsion of atoms at close separation distances. The atomic core-atomic core terms in the MNDO equations were modified through the addition of off-center attractive and repulsive Gaussian functions. This increased the complexity of the parameterization problem, as the number of parameters per atom grew from 7 in MNDO to 13–16 in AM1. The results of AM1 calculations are sometimes used as starting points for parameterizations of force fields in molecular modelling.
Austin Model 1: AM1 is implemented in the MOPAC, AMPAC, Gaussian, CP2K, GAMESS (US), PC GAMESS, GAMESS (UK), and SPARTAN programs. An extension of AM1 is SemiChem Austin Model 1 (SAM1), which is implemented in the AMPAC program and which explicitly treats d-orbitals. Another extension, AM1*, is available in the VAMP software.
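To make the core-repulsion modification concrete, here is a sketch in the notation common to the MNDO family (the per-atom constants $a_k$, $b_k$, $c_k$ are fitted parameters): AM1 augments the MNDO core-core repulsion between atoms A and B with off-center spherical Gaussian terms,

$$E_{AB}^{\mathrm{AM1}} = E_{AB}^{\mathrm{MNDO}} + \frac{Z_A Z_B}{R_{AB}} \sum_{k} \left[ a_{kA}\, e^{-b_{kA}\left(R_{AB} - c_{kA}\right)^{2}} + a_{kB}\, e^{-b_{kB}\left(R_{AB} - c_{kB}\right)^{2}} \right]$$

where $R_{AB}$ is the internuclear distance and $Z_A$, $Z_B$ are the core charges. Each element carries a small number of such Gaussians, each adding three fitted parameters, which accounts for the growth in the per-atom parameter count noted above.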
**Hubs and nodes**
Hubs and nodes: Hubs and nodes is a geographic model explaining how linked regions can co-operate to fulfill elements of an industry's value chain and collectively gain sufficient mass to drive innovation growth. The model of hubs and nodes builds on Porter's cluster model, which served well in the past; but as businesses and regions around the world have adjusted to the realities of globalization, the concept of clusters is becoming outdated.
Disaggregation of clusters: Companies are realizing that they may not require a particular stage of production to be in close geographic proximity. As barriers to long-distance national and global transactions have fallen through advances in technology and logistics, such as the growth of the Internet and overnight package services, it has become increasingly possible to relocate operations such as research, product development, and manufacturing to countries and regions with relevant expertise and lower costs. It is common among consumer goods, for example, to have concept generation centered in one locale, product testing and refinement in another, and manufacturing and distribution in still others. Elements of development, production, and distribution are more and more being completed beyond the borders of historical clusters. As more companies progress beyond the cluster model, they increasingly expand and diversify their operations to locations where their investments will be most profitable. Companies adequately prepared for the rapid globalization process fare better at the research and development, manufacturing, and distribution stages, as they are able to reduce their costs and potentially realize new efficiencies and increased speeds of product development. The spirit of the cluster model may remain intact, and the various stages of production will still be shared by a number of different entities, but geographical proximity need no longer bind the entities together.
**Lysergic acid methyl ester** Lysergic acid methyl ester: Lysergic acid methyl ester is an analogue of lysergic acid. It is a member of the tryptamine family and is extremely uncommon. It acts on the 5-HT receptors in the brain, as do most tryptamines.
**Lamm-Honigmann process**
Lamm-Honigmann process: The Lamm-Honigmann process is an energy storage and heat-to-power conversion process that exploits the vapor-pressure depression of a working-fluid mixture relative to the pure working fluid. The process is named after its independent inventors Emile Lamm (US patent from 1870) and Moritz Honigmann (German patent from 1883). Both inventors envisioned and realized the same process principle for use as energy storage in so-called fireless locomotives, but with different working-fluid pairs: Emile Lamm used ammonia and water, Moritz Honigmann used water and caustic soda.
Lamm-Honigmann process: Compared to conventional fireless locomotives (which usually work with reservoirs of pure pressurized water or air), the advantage of the process proposed by Lamm and Honigmann is that the loss in pressure ratio during discharging of the storage is smaller, and therefore a larger storage density can theoretically be achieved. The process can be considered one of the Carnot battery technologies.
Process principle: According to Raoult's law, a mixture of water and, for example, a salt has a smaller vapor pressure than the pure fluid; more specifically, the vapor-pressure depression is larger the larger the salt mass fraction is (a relation sketched at the end of this article). This pressure potential is used in the Lamm-Honigmann process to expand the working fluid, e.g. water vapor, in an expansion device and generate mechanical and subsequently electrical energy. The working fluid is evaporated from a reservoir (the evaporator) and then expanded into a concentrated solution of the working-fluid pair that has a lower vapor pressure (the absorber). The working fluid is absorbed by the solution, and the heat of absorption is transferred back to the evaporator to maintain the pressure there. During this discharging process the vapor pressure in the absorber rises as the mixture is diluted, until the pressure difference is no longer large enough to drive the expansion device or whatever is connected to it; the storage is then discharged.
Process principle: The charging process consists of re-concentrating the working-fluid mixture by means of heat or mechanical energy. In the case of thermal charging, the diluted solution is heated, and the working fluid is desorbed and condensed in a condenser at the same pressure level; the heat of condensation has to be transferred to the environment or another heat sink. In the case of mechanical charging, the discharging process is essentially reversed: a compression device brings the working fluid desorbed from the mixture to a higher pressure level, where it is condensed, and the heat of condensation is transferred to the mixture to drive the desorption of the working fluid.
Process principle: The process can equally be realized using solid sorption pairs (e.g. zeolite/water) or the chemicals of a reversible chemical reaction (e.g. calcium chloride/water), but no realized prototypes are known.
Application as stationary energy storage: Whereas the storage densities achievable with the recently investigated working-fluid pairs, at 1.4–17.5 Wh/kg, are not large enough for mobile applications, current research focuses on application as stationary energy storage with flexible use of different kinds of energy for charging and discharging, since indicated storage efficiencies are comparable to those of other bulk energy storage systems such as pumped hydro, liquid air energy storage, or hydrogen storage.
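As a minimal sketch of the driving effect (in the ideal-solution limit; real salt solutions deviate, but the trend is the same), Raoult's law gives the vapor pressure of the solution as

$$p_{\mathrm{sol}} = x_{\mathrm{w}}\, p_{\mathrm{w}}^{*}, \qquad \Delta p = \left(1 - x_{\mathrm{w}}\right) p_{\mathrm{w}}^{*}$$

where $x_{\mathrm{w}}$ is the mole fraction of the working fluid (e.g. water) in the mixture and $p_{\mathrm{w}}^{*}$ is the vapor pressure of the pure fluid. The more concentrated the solution, the smaller $x_{\mathrm{w}}$ and the larger the pressure difference $\Delta p$ available to drive the expander; discharging dilutes the solution, raising $x_{\mathrm{w}}$ and eroding $\Delta p$ until the storage is exhausted.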
**Ubiquity (software)**
Ubiquity (software): Ubiquity is the default installer for Ubuntu and its derivatives. It runs from the live CD or USB and can be triggered from the boot options of the device or from the desktop in live mode. It was first introduced in Ubuntu 6.06 LTS "Dapper Drake". At program start, it allows the user to change the language to a local language if they prefer. It is designed to be easy to use.
Features: Ubiquity consists of a configuration wizard allowing the user to easily install Ubuntu, and shows a slideshow showcasing many of Ubuntu's features while it is installing. Ubuntu 10.04 added a slideshow to Ubiquity that introduces users to Ubuntu. In Ubuntu 10.10 "Maverick Meerkat", the installer team made changes to simplify the tool and speed up the installation wizard. Ubiquity allows the user to have the installer automatically update the software while it is installing; if the user allows this, the installer will download the latest packages from the Ubuntu repository, ensuring the system is up to date.
Features: The installer also allows the user to set Ubiquity to install closed-source or patented third-party software commonly needed by users, such as Adobe Flash and Fluendo's MP3 codec, while Ubuntu is installing. Ubiquity can begin to format the file system and copy system files once the user completes the partition configuration wizard, while the user is still inputting data such as username, password, and location, which reduces install time. When reviewing Ubuntu 10.10, Ryan Paul from Ars Technica said, "During my tests, I was able to perform a complete installation in less than 15 minutes." Ubiquity also provides an interactive map to specify time zones. At the bottom of the installer window, a progress bar is shown once the installation has started. At the end of the configuration stage, a slideshow runs until the end of the install, displaying short summaries and screenshots of applications in Ubuntu. Not all of the software shown is in the default installation, however; some of it is available to download from the Ubuntu Software Center. This is intended to make the user more aware of other applications available for the platform. Before Ubuntu 12.04 LTS, Ubiquity offered a migration assistant which brought over user accounts from Windows, OS X, and other Linux distributions, along with e-mail and instant messaging accounts, bookmarks from Firefox and Internet Explorer, and the user's pictures, wallpapers, documents, and music folders, although this was a Windows-only feature. At the Ubuntu Developer Summit for Ubuntu 12.10, the developers agreed to remove this feature, citing a lack of testing and a high number of bugs. Ubuntu later created a server counterpart to Ubiquity named Subiquity, a guided installer for Ubuntu Server included from Ubuntu 18.04 onwards.
Features: In October 2018, Lubuntu switched to using Calamares instead of Ubiquity.
Ports: Ubiquity allows OEMs and other Ubuntu derivatives to customise aspects of it such as the slideshow and branding elements. Some Ubiquity ports include:
Kubuntu
Xubuntu
Ubuntu MATE
Linux Mint
Elementary OS
Peppermint Linux OS
Moreover, installation steps may be skipped by changing the install scripts, making it possible for OEMs and others to set special defaults or create an automated install routine.
**History of aspirin**
History of aspirin: Aspirin (acetylsalicylic acid) is a synthetic organic compound that does not occur in nature, and was first successfully synthesised in 1899. In 1897, scientists at the drug and dye firm Bayer began investigating acetylated organic compounds as possible new medicines, following the success of acetanilide ten years earlier. By 1899, Bayer had created acetylsalicylic acid and named the drug "Aspirin", going on to sell it around the world. The word Aspirin was Bayer's brand name rather than the generic name of the drug; however, Bayer's rights to the trademark were lost or sold in many countries. Aspirin's popularity grew over the first half of the twentieth century, leading to fierce competition and a proliferation of aspirin brands and products. Aspirin's popularity declined after the development of acetaminophen/paracetamol in 1956 and ibuprofen in 1962. In the 1960s and 1970s, John Vane and others discovered the basic mechanism of aspirin's effects, while clinical trials and other studies from the 1960s to the 1980s established aspirin's efficacy as an anti-clotting agent that reduces the risk of clotting diseases. Aspirin sales revived considerably in the last decades of the twentieth century, and remain strong in the twenty-first, with widespread use as a preventive treatment for heart attacks and strokes.
History of willow in medicine: Numerous authors have claimed that willow was used by the ancients as a painkiller, but there is no evidence that this is true. All such accounts date from after the discovery of aspirin, and are possibly based on a misunderstanding of the chemistry. Bartram's 1998 Encyclopedia of Herbal Medicine is perhaps typical when it states, "in 1838 chemists identified salicylic acid in the bark of White Willow. After many years, it was synthesised as acetylsalicylic acid, now known as aspirin." It goes on to claim that willow extract has the same medical properties as aspirin, which is incorrect.
History of willow in medicine: Ancient medical uses for willow were more varied. The Roman author Aulus Cornelius Celsus recommended using the leaves, pounded and boiled in vinegar, as a treatment for uterine prolapse, but it is unclear what he considered the therapeutic action to be; it is unlikely to have been pain relief, as he recommended cauterization in the following paragraph (De Medicina, book VI, p. 287, chapter 18, section 10). Gerard quotes Dioscorides, "that [the bark] being burnt to ashes, and steeped in vinegar, takes away corns and other like risings in the feet and toes," which is similar to modern uses of salicylic acid. Translations of Hippocrates make no mention of willow at all.
History of willow in medicine: Nicholas Culpeper, in The Complete Herbal, gave many uses for willow, including to staunch wounds, to "stay the heat of lust" in man or woman, and to provoke urine ("if stopped"), but, like Celsus, made no mention of any analgesic properties. He also used the burnt ashes of willow bark, mixed with vinegar, to "take away warts, corns, and superfluous flesh." Although Turner (1551) thought that fevers could be cured by "cooling the air" with boughs and leaves of willow, the earliest known mention of willow bark extract for treating fever came in 1763, when a letter from English chaplain Edward Stone to the Royal Society described the dramatic power of powdered white willow bark to cure intermittent fevers, or "ague".
Stone had "accidentally" tasted the bark of a willow tree in 1758 and noticed an astringency reminiscent of Peruvian bark, which he knew was used to treat malaria. Over the next five years he treated some 50 ague sufferers, with universal success except in a few severe cases, where it merely reduced their symptoms. Stone's remedy was trialled by a few pharmacists but never widely adopted. During the American Civil War, Confederate forces experimented with willow as a cure for malaria, without success.
Synthesis of acetylsalicylic acid: In the 19th century, as the young discipline of organic chemistry began to grow in Europe, scientists attempted to isolate and purify alkaloids and other novel organic chemicals. After unsuccessful attempts by the Italian chemists Brugnatelli and Fontana in 1826, Johann Buchner obtained relatively pure salicin crystals from willow bark in 1828; the following year, Pierre-Joseph Leroux developed another procedure for extracting modest yields of salicin. In 1834, Swiss pharmacist Johann Pagenstecher extracted a substance from meadowsweet which, he suggested, might reveal an "excellent therapeutic aspect", although he was uninterested in increasing the number of chemicals available to pharmaceutical science. By 1838, Italian chemist Raffaele Piria found a method of obtaining a more potent acid form of willow extract, which he named salicylic acid. The German chemist who had been working to identify the Spiraea extract, Karl Jacob Löwig, soon realized that it was in fact the same salicylic acid that Piria had found. The first evidence that salicylates might have medical uses came in 1876, when the Scottish physician Thomas MacLagan experimented with salicin as a treatment for acute rheumatism, with considerable success, as he reported in The Lancet. Meanwhile, German scientists tried salicylic acid in the form of sodium salicylate, with less success and more severe side effects. The treatment of rheumatic fever with salicin gradually gained some acceptance in medical circles.
Synthesis of acetylsalicylic acid: By the 1880s, the German chemical industry, jump-started by the lucrative development of dyes from coal tar, was branching out to investigate the potential of new tar-derived medicines. The turning point was the advent of Kalle & Company's Antifebrine, the branded version of acetanilide, whose fever-reducing properties were discovered by accident in 1886. Antifebrine's success inspired Carl Duisberg, the head of research at the small dye firm Friedrich Bayer & Company, to start a systematic search for other useful drugs by acetylation of various alkaloids and aromatic compounds. Bayer chemists soon developed Phenacetin, followed by the sedatives Sulfonal and Trional. Upon taking control of Bayer's overall management in 1890, Duisberg began to expand the company's drug research program. He created a pharmaceutical group for creating new drugs, headed by former university chemist Arthur Eichengrün, and a pharmacology group for testing the drugs, headed by Heinrich Dreser (beginning in 1897, after periods under Wilhelm Siebel and Hermann Hildebrandt). In 1894, the young chemist Felix Hoffmann joined the pharmaceutical group. Dreser, Eichengrün, and Hoffmann would be the key figures in the development of acetylsalicylic acid as the drug Aspirin (though their respective roles have been the subject of some contention).
Synthesis of acetylsalicylic acid: In 1897, Hoffmann synthesised acetylsalicylic acid by refluxing salicylic acid with acetic anhydride.
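For reference, this is the classic esterification of the phenolic hydroxyl group of salicylic acid by acetic anhydride, yielding acetylsalicylic acid and acetic acid as a by-product:

$$\mathrm{C_7H_6O_3}\ (\text{salicylic acid}) + \mathrm{C_4H_6O_3}\ (\text{acetic anhydride}) \longrightarrow \mathrm{C_9H_8O_4}\ (\text{acetylsalicylic acid}) + \mathrm{CH_3COOH}$$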
Eichengrün sent the ASA to Dreser's pharmacology group for testing, and the initial results were very positive. The next step would normally have been clinical trials, but Dreser opposed further investigation of ASA because of salicylic acid's reputation for weakening the heart, possibly a side effect of the high doses often used to treat rheumatism. Dreser's group was soon busy testing Felix Hoffmann's next chemical success: diacetylmorphine (which the Bayer team soon branded as heroin because of the heroic feeling it gave users). Eichengrün, frustrated by Dreser's rejection of ASA, went directly to Bayer's Berlin representative Felix Goldmann to arrange low-profile trials with doctors. Though the results of those trials were also very positive, with no reports of the typical salicylic acid complications, Dreser still demurred. However, Carl Duisberg intervened and scheduled full testing. Soon Dreser admitted ASA's potential, and Bayer decided to proceed with production. Dreser wrote a report of the findings to publicize the new drug; in it, he omitted any mention of Hoffmann or Eichengrün. He would also be the only one of the three to receive royalties for the drug (for testing it), since it was ineligible for any patent the chemists might have taken out for creating it. For many years, however, he attributed Aspirin's discovery solely to Hoffmann.
Synthesis of acetylsalicylic acid: The controversy over who was primarily responsible for aspirin's development spread through much of the twentieth century and into the twenty-first. As of 2016, Bayer still described Hoffmann as having "discovered a pain-relieving, fever-lowering and anti-inflammatory substance." Historians and others have also challenged Bayer's early accounts of the synthesis, in which Hoffmann was primarily responsible for the breakthrough. In 1949, shortly before his death, Eichengrün wrote an article, "Fifty Years of Aspirin", claiming that he had not told Hoffmann the purpose of his research, meaning that Hoffmann merely carried out Eichengrün's research plan, and that the drug would never have gone to the market without his direction. This claim was later supported by research conducted by historian Walter Sneader. Axel Helmstaedter, General Secretary of the International Society for the History of Pharmacy, subsequently questioned the novelty of Sneader's research, noting that several earlier articles had discussed the Hoffmann-Eichengrün controversy in detail. Bayer countered Sneader in a press release stating that, according to the records, Hoffmann and Eichengrün held equal positions and Eichengrün was not Hoffmann's supervisor. Hoffmann was named on the US patent as the inventor, which Sneader did not mention. Eichengrün, who left Bayer in 1908, had multiple opportunities to claim priority but never did so before 1949; he neither claimed nor received any percentage of the profit from aspirin sales.
Synthesis of acetylsalicylic acid: Naming the drug
The name Aspirin was derived from the German name of the chemical ASA, Acetylspirsäure. Spirsäure (salicylic acid) was named for the meadowsweet plant, Spiraea ulmaria, from which it could be derived. Aspirin took a- for the acetylation, -spir- from Spirsäure, and added -in as a typical drug-name ending to make it easy to say.
In the final round of naming proposals that circulated through Bayer, it came down to Aspirin and Euspirin; Aspirin, they feared, might remind customers of aspiration, but Arthur Eichengrün argued that Eu- (meaning "good") was inappropriate because it usually indicated an improvement over an earlier version of a similar drug. Since the substance itself was already known, Bayer intended to use the new name to establish their drug as something new; in January 1899 they settled on Aspirin.: 73 : 27  Rights and sale Under Carl Duisberg's leadership, Bayer was firmly committed to the standards of ethical drugs, as opposed to patent medicines. Ethical drugs were drugs that could be obtained only through a pharmacist, usually with a doctor's prescription. Advertising drugs directly to consumers was considered unethical and strongly opposed by many medical organizations; that was the domain of patent medicines. Therefore, Bayer was limited to marketing Aspirin directly to doctors.: 80–83  When production of Aspirin began in 1899, Bayer sent out small packets of the drug to doctors, pharmacists and hospitals, advising them of Aspirin's uses and encouraging them to publish about the drug's effects and effectiveness. As positive results came in and enthusiasm grew, Bayer sought to secure patents and trademarks wherever possible. It was ineligible for a patent in Germany (despite being accepted briefly before the decision was overturned), but Aspirin was patented in Britain (filed 22 December 1898) and the United States (US Patent 644,077, issued 27 February 1900). The British patent was overturned in 1905; the American patent was also besieged but was ultimately upheld.: 77–80  Faced with growing legal and illegal competition for the globally marketed ASA, Bayer worked to cement the connection between Bayer and Aspirin. One strategy it developed was to switch from distributing Aspirin powder for pharmacists to press into pill form to distributing standardized tablets—complete with the distinctive Bayer cross logo. In 1903 the company set up an American subsidiary, with a converted factory in Rensselaer, New York, to produce Aspirin for the American market without paying import duties. Bayer also sued the most egregious patent violators and smugglers. The company's attempts to hold onto its Aspirin sales incited criticism from muckraking journalists and the American Medical Association, especially after the 1906 Pure Food and Drug Act prevented trademarked drugs from being listed in the United States Pharmacopeia; Bayer listed ASA with an intentionally convoluted generic name (monoacetic acid ester of salicylic acid) to discourage doctors from referring to anything but Aspirin.: 88–96 : 28–31  World War I and Bayer: By the outbreak of World War I in 1914, Bayer was facing competition in all its major markets from local ASA producers as well as other German drug firms (particularly Heyden and Hoechst). The British market was immediately closed to the German companies, but British manufacturing could not meet the demand—especially with phenol supplies, necessary for ASA synthesis, largely being used for explosives manufacture. On 5 February 1915, Bayer's UK trademarks were voided, so that any company could use the term aspirin. The Australian market was taken over by Aspro, after the makers of Nicholas-Aspirin lost a short-lived exclusive right to the aspirin name there.
In the United States, Bayer was still under German control—though the war disrupted the links between the American Bayer plant and the German Bayer headquarters—but a phenol shortage threatened to reduce aspirin production to a trickle, and imports across the Atlantic Ocean were blocked by the Royal Navy.: 97–110  Great Phenol Plot To secure phenol for aspirin production, and at the same time indirectly aid the German war effort, German agents in the United States orchestrated what became known as the Great Phenol Plot. By 1915, the price of phenol had risen to the point that Bayer's aspirin plant was forced to drastically cut production. This was especially problematic because Bayer was instituting a new branding strategy in preparation for the expiry of the aspirin patent in the United States. Thomas Edison, who needed phenol to manufacture phonograph records, was also facing supply problems; in response, he created a phenol factory capable of pumping out twelve tons per day. Edison's excess phenol seemed destined for trinitrophenol production.: 39–41 : 109–113  Although the United States remained officially neutral until April 1917, it was increasingly throwing its support to the Allies through trade. To counter this, German ambassador Johann Heinrich von Bernstorff and Interior Ministry official Heinrich Albert were tasked with undermining American industry and maintaining public support for Germany. One of their agents was a former Bayer employee, Hugo Schweitzer.: 38–39  Schweitzer set up a contract for a front company called the Chemical Exchange Association to buy all of Edison's excess phenol. Much of the phenol would go to the German-owned Chemische Fabrik von Heyden's American subsidiary; Heyden was the supplier of Bayer's salicylic acid for aspirin manufacture. By July 1915, Edison's plants were selling about three tons of phenol per day to Schweitzer; Heyden's salicylic acid production was soon back on line, and in turn Bayer's aspirin plant was running as well.: 40–41  The plot lasted only a few months. On 24 July 1915, Heinrich Albert's briefcase, containing details about the phenol plot, was recovered by a Secret Service agent. Although the activities were not illegal—since the United States was still officially neutral and still trading with Germany—the documents were soon leaked to the New York World, an anti-German newspaper. The World published an exposé on 15 August 1915.: 41–42  The public pressure soon forced Schweitzer and Edison to end the phenol deal—the embarrassed Edison subsequently sent his excess phenol to the U.S. military—but by that time the deal had netted the plotters over two million dollars, and there was already enough phenol to keep Bayer's Aspirin plant running. Bayer's reputation took a large hit, however, just as the company was preparing to launch an advertising campaign to secure the connection between aspirin and the Bayer brand.: 113–114  Bayer loses foreign holdings Beginning in 1915, Bayer set up a number of shell corporations and subsidiaries in the United States, to hedge against the possibility of losing control of its American assets if the U.S. should enter the war and to allow Bayer to enter other markets (e.g., army uniforms). After the U.S. declared war on Germany in April 1917, Alien Property Custodian A. Mitchell Palmer began investigating German-owned businesses, and soon turned his attention to Bayer.
To avoid having to surrender all profits and assets to the government, Bayer's management shifted the stock to a new company, nominally owned by Americans but controlled by the German-American Bayer leaders. Palmer, however, soon uncovered this scheme and seized all of Bayer's American holdings. After the Trading with the Enemy Act was amended to allow the sale of these holdings, the government auctioned off the Rensselaer plant and all of Bayer's American patents and trademarks, including even the Bayer brand name and the Bayer cross logo. All of it was bought by a patent medicine company, Sterling Products, Inc.: 42–49  The rights to Bayer Aspirin and the U.S. rights to the Bayer name and trademarks were sold back to Bayer AG in 1994 for US$1 billion. Interwar years: With the coming of the deadly Spanish flu pandemic in 1918, aspirin—by whatever name—secured a reputation as one of the most powerful and effective drugs in the pharmacopeia of the time. Its fever-reducing properties gave many sick patients enough strength to fight through the infection, and aspirin companies large and small earned the loyalty of doctors and the public—when they could manufacture or purchase enough aspirin to meet demand. Despite this, some people believed that the Germans had put the Spanish flu bug in Bayer aspirin, spreading the pandemic as a war tactic.: 136–142  The U.S. ASA patent expired in 1917, but Sterling owned the aspirin trademark, which was the only commonly used term for the drug. In 1920, United Drug Company challenged the Aspirin trademark, which became officially generic for public sale in the U.S. (although it remained trademarked when sold to wholesalers and pharmacists). With demand growing rapidly in the wake of the Spanish flu, there were soon hundreds of "aspirin" brands on sale in the United States.: 151–152  Sterling Products, equipped with all of Bayer's U.S. intellectual property, tried to take advantage of its new brand as quickly as possible, before generic ASAs took over. However, without German expertise to run the Rensselaer plant to make aspirin and the other Bayer pharmaceuticals, the company had only a limited aspirin supply and was facing competition from other companies. Sterling president William E. Weiss had ambitions to sell Bayer aspirin not only in the U.S., but to compete with the German Bayer abroad as well. Taking advantage of the losses Farbenfabriken Bayer (the German Bayer company) suffered through the reparation provisions of the Treaty of Versailles, Weiss worked out a deal with Carl Duisberg to share profits in the Americas, Australia, South Africa and Great Britain for most Bayer drugs, in return for technical assistance in manufacturing them.: 144–150  Sterling also took over Bayer's Canadian assets as well as ownership of the Aspirin trademark, which is still valid in Canada and most of the world. Bayer bought Sterling Winthrop in 1994, restoring ownership of the Bayer name and Bayer cross trademark in the US and Canada as well as ownership of the Aspirin trademark in Canada. Interwar years: Diversification of market Between World War I and World War II, many new aspirin brands and aspirin-based products entered the market. The Australian company Nicholas Proprietary Limited, through the aggressive marketing strategies of George Davies, built Aspro into a global brand, with particular strength in Australia, New Zealand, and the U.K.: 153–161  American brands such as Burton's Aspirin, Molloy's Aspirin, Cal-Aspirin and St.
Joseph Aspirin tried to compete with the American Bayer, while new products such as Cafaspirin (aspirin with caffeine) and Alka-Seltzer (a soluble mix of aspirin and bicarbonate of soda) put aspirin to new uses.: 161–162  In 1925, the German Bayer became part of IG Farben, a conglomerate of former dye companies; IG Farben's brands of Aspirin and, in Latin America, the caffeinated Cafiaspirina (co-managed with Sterling Products) competed with less expensive aspirins such as Geniol.: 78, 90  Competition from new drugs: After World War II, with the IG Farben conglomerate dismantled because of its central role in the Nazi regime, Sterling Products bought half of Bayer Ltd, the British Bayer subsidiary—the other half of which it already owned. However, Bayer Aspirin made up only a small fraction of the British aspirin market because of competition from Aspro, Disprin (a soluble aspirin drug) and other brands. Bayer Ltd began searching for new pain relievers to compete more effectively. After several moderately successful compound drugs that mainly utilized aspirin (Anadin and Excedrin), Bayer Ltd's manager Laurie Spalton ordered an investigation of a substance that scientists at Yale had, in 1946, found to be the metabolically active derivative of acetanilide: acetaminophen. After clinical trials, Bayer Ltd brought acetaminophen to market as Panadol in 1956.: 205–207  However, Sterling Products did not market Panadol in the United States or other countries where Bayer Aspirin still dominated the aspirin market. Other firms began selling acetaminophen drugs, most significantly McNeil Laboratories, with liquid Tylenol in 1955 and Tylenol pills in 1958. By 1967, Tylenol was available without a prescription. Because it did not cause gastric irritation, acetaminophen rapidly displaced much of aspirin's sales. Another analgesic and anti-inflammatory drug was introduced in 1962: ibuprofen (sold as Brufen in the U.K. and Motrin in the U.S.). By the 1970s, aspirin had a relatively small portion of the pain reliever market, and in the 1980s sales decreased even more when ibuprofen became available without prescription.: 212–217  Also in the early 1980s, several studies suggested a link between children's consumption of aspirin and Reye's syndrome, a potentially fatal disease. By 1986, the U.S. Food and Drug Administration required warning labels on all aspirin, further suppressing sales. The makers of Tylenol also filed a lawsuit against Anacin aspirin maker American Home Products, claiming that the failure to add warning labels before 1986 had unfairly held back Tylenol sales, though this suit was eventually dismissed.: 228–229  Investigating how aspirin works: The mechanism of aspirin's analgesic, anti-inflammatory and antipyretic properties was unknown through the drug's heyday in the early- to mid-twentieth century; Heinrich Dreser's explanation, widely accepted since the drug was first brought to market, was that aspirin relieved pain by acting on the central nervous system. In 1958 Harry Collier, a biochemist in the London laboratory of pharmaceutical company Parke-Davis, began investigating the relationship between kinins and the effects of aspirin. In tests on guinea pigs, Collier found that aspirin, if given beforehand, inhibited the bronchoconstriction effects of bradykinin.
He found that cutting the guinea pigs' vagus nerve did not affect the action of bradykinin or the inhibitory effect of aspirin—evidence that aspirin worked locally to combat pain and inflammation, rather than on the central nervous system. In 1963, Collier began working with University of London pharmacology graduate student Priscilla Piper to determine the precise mechanism of aspirin's effects. However, it was difficult to pin down the precise biochemical processes at work in live research animals, and in vitro tests on excised animal tissues did not behave like in vivo tests.: 223–226  After five years of collaboration, Collier arranged for Piper to work with pharmacologist John Vane at the Royal College of Surgeons of England, in order to learn Vane's new bioassay methods, which seemed like a possible solution to the in vitro testing failures. Vane and Piper tested the biochemical cascade associated with anaphylactic shock (in extracts from guinea pig lungs, applied to tissue from rabbit aortas). They found that aspirin inhibited the release of an unidentified chemical generated by guinea pig lungs, a chemical that caused rabbit tissue to contract. By 1971, Vane identified the chemical (which they called "rabbit-aorta contracting substance", or RCS) as a prostaglandin. In a 23 June 1971 paper in the journal Nature, Vane and Piper suggested that aspirin and similar drugs (the nonsteroidal anti-inflammatory drugs, or NSAIDs) worked by blocking the production of prostaglandins. Later research showed that NSAIDs such as aspirin worked by inhibiting cyclooxygenase, the enzyme responsible for converting arachidonic acid into a prostaglandin.: 226–231  Revival as heart drug: Aspirin's effects on blood clotting (as an antiplatelet agent) were first noticed in 1950 by Lawrence Craven. Craven, a family doctor in California, had been directing tonsillectomy patients to chew Aspergum, an aspirin-laced chewing gum. He found that an unusual number of patients had to be hospitalized for severe bleeding, and that those patients had been using very high amounts of Aspergum. Craven began recommending daily aspirin to all his patients, and claimed that the patients who followed the aspirin regimen (about 8,000 people) had no signs of thrombosis. However, Craven's studies were not taken seriously by the medical community, because he had not done a placebo-controlled study and had published only in obscure journals.: 237–239  The idea of using aspirin to prevent clotting diseases (such as heart attacks and strokes) was revived in the 1960s, when medical researcher Harvey Weiss found that aspirin had an anti-adhesive effect on blood platelets (and, unlike other potential antiplatelet drugs, aspirin had low toxicity). Medical Research Council haematologist John O'Brien picked up on Weiss's finding and, in 1963, began working with epidemiologist Peter Elwood on aspirin's potential as an anti-thrombosis drug. Elwood began a large-scale trial of aspirin as a preventive drug for heart attacks. Nicholas Laboratories agreed to provide aspirin tablets, and Elwood enlisted heart attack survivors in a double-blind controlled study—because heart attack survivors are statistically more likely to suffer a second attack, this greatly reduced the number of patients necessary to reliably detect whether aspirin had an effect on heart attacks.
The study began in February 1971, though the researchers soon had to break the double-blinding when a study by American epidemiologist Hershel Jick suggested that aspirin either prevented heart attacks or made them more deadly. Jick had found that fewer aspirin-takers were admitted to his hospital for heart attacks than non-aspirin-takers, and one possible explanation was that aspirin caused heart attack sufferers to die before reaching the hospital; Elwood's initial results ruled out that explanation. When the Elwood trial ended in 1973, it showed a modest but not statistically significant reduction in heart attacks among the group taking aspirin.: 239–246  Several subsequent studies put aspirin's effectiveness as a heart drug on firmer ground, but the evidence was not incontrovertible. However, in the mid-1980s, with the relatively new technique of meta-analysis, statistician Richard Peto convinced the U.S. FDA and much of the medical community that the aspirin studies, in aggregate, showed aspirin's effectiveness with relative certainty.: 247–257  By the end of the 1980s, aspirin was widely used as a preventive drug for heart attacks and had regained its former position as the top-selling analgesic in the U.S.: 267–269  In 2018, three major clinical trials cast doubt on that conventional wisdom, finding few benefits and consistent bleeding risks associated with daily aspirin use. Taken together, the findings led the American Heart Association and the American College of Cardiology to change clinical practice guidelines in early 2019, recommending against the routine use of aspirin in people older than 70 years or in people with increased bleeding risk who do not have existing cardiovascular disease.
**Lazy jack** Lazy jack: Lazy jacks (or lazyjacks) are a type of rigging which can be applied to a fore-and-aft rigged sail to assist in sail handling during reefing and furling. They consist of a network of cordage which is rigged to a point on the mast and to a series of points on either side of the boom; these lines form a cradle which helps to guide the sail onto the boom when it is lowered, reducing the crew needed to secure the sail. Lazy jacks are most commonly associated with Bermuda rigged sails, although they can be used with gaff rigged sails and with club-footed jibs. Blocks and rings may be part of some lazy jack arrangements. The oyster-dredging sailboats of the Chesapeake Bay, bugeyes and skipjacks, were invariably equipped with lazy jacks, as their huge sail plans, combined with the changeable conditions on the bay, made it necessary to be able to reef quickly and with a small crew. More recently they have been revived as a feature of pleasure yachts, as an alternative to roller reefing and furling. The latter methods can distort the sail, and are not compatible with battens in the reefed or furled portion of the sail. Lazy jacks are also cheaper, and can be easily applied after-market. However, they are not without disadvantages. The extra lines provide something else for the sail to foul upon when it is being raised, particularly if it is battened, and the lines and the connections between them can chafe and beat upon the sail, shortening its life and making unwanted noise. Also, unlike the roller systems, some crew member(s) must be on deck to secure the sail. Lazy jack: It is generally claimed that the name has its origins in the colloquial reference to British sailors as "Jack tars"; "lazy jacks" would therefore refer to the reduction in manpower and effort that they provide.
**Delta ISO** Delta ISO: A Delta ISO is used to update an ISO image which contains RPM Package Manager files. It makes use of DeltaRPMs (a form of delta compression) for RPMs which have changed between the old and new versions of the ISO. Delta ISOs can save disk space and download time, as a Delta ISO contains only the things that were updated in the new version of the ISO. After downloading the Delta ISO, a user can apply it to the outdated ISO to produce the updated one. Some RPM-based Linux distributions such as Fedora and openSUSE make use of this technique.
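The underlying idea is easy to sketch: compare the two images piece by piece, ship only the pieces that changed, and patch them into the old image on the user's machine. The following Python toy illustrates this (the block size, function names, and dict layout are invented for illustration; the real Delta ISO format is built on DeltaRPMs, not raw blocks):

```python
# Toy block-level delta: illustrates the general idea behind a Delta ISO,
# not the actual DeltaRPM-based format used by Fedora or openSUSE.

BLOCK = 4096  # illustrative block size

def make_delta(old: bytes, new: bytes) -> dict:
    """Record the target size plus every block of `new` that differs from `old`."""
    delta = {"size": len(new), "blocks": {}}
    for i in range(0, len(new), BLOCK):
        if new[i:i + BLOCK] != old[i:i + BLOCK]:
            delta["blocks"][i] = new[i:i + BLOCK]
    return delta

def apply_delta(old: bytes, delta: dict) -> bytes:
    """Rebuild the new image from the old image plus the changed blocks."""
    out = bytearray(old[:delta["size"]].ljust(delta["size"], b"\0"))
    for offset, block in delta["blocks"].items():
        out[offset:offset + len(block)] = block
    return bytes(out)

old_iso = b"A" * 10_000
new_iso = b"A" * 5_000 + b"B" * 5_000   # only the second half was updated
delta = make_delta(old_iso, new_iso)
assert apply_delta(old_iso, delta) == new_iso
# Only the changed blocks travel over the network, hence the download savings.
```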
**Haplogroup S1a (Y-DNA)** Haplogroup S1a (Y-DNA): Haplogroup S1a is a human Y-DNA haplogroup, defined by the SNPs Z41335, Z41336, Z41337, Z41338, Z41339, Z41340, and Z41341. S1a is found primarily in Melanesia (especially in Papua New Guinea), Micronesia, Maritime Southeast Asia and among indigenous Australians. As of 2017, it includes an unnamed primary subclade referred to by ISOGG as "S1a~" (P405), which was previously known as K2b1a. The "~" symbol is ISOGG's way of indicating that an unverified and as-yet unnamed immediate ancestor may exist. Its secondary subclades include: S1a1 (Z42413), S1a2~ (P79, P307) and S1a3 (P315). Before 2016, S1a1b (M230, P202, P204) was known as Haplogroup S* (and before that as Haplogroup K5). (In 2016, haplogroup S-B254 was "promoted" to S*, from its previous position of S1.) The "sibling" clades of S1a include: S1b (B275, Z33756, Z33757, Z33758, Z33759), S1c (Z41926, Z41927, Z41928, Z41929, Z41930) and S1d (SK1806). Phylogeny: Haplogroup S1 (B255) includes the following subclades:
- S1a (Z41335)
  - S1a1 (Z42413)
    - S1a1a
      - S1a1a1 (P60, P304, P308)
      - S1a1a2
    - S1a1b (M230, P202, P204) – "demoted" from its previous position as the basal Haplogroup S* (and known before that as Haplogroup K5)
      - S1a1b1 (M254) – previously known as K2b1a4a
        - S1a1b1a (P57)
        - S1a1b1b (P61)
        - S1a1b1c (P83)
        - S1a1b1d (SK1891)
  - S1a2 (P79, P307)
  - S1a3 (P315)
    - S1a3a (Z41763)
    - S1a3b~ (P401)
- S1b~ (B275, Z33756, Z33757, Z33758, Z33759)
- S1c~ (Z41926, Z41927, Z41928, Z41929, Z41930)
- S1d (SK1806)
(Based on the 2017 ISOGG tree and subsequent published research.) Distribution: Basal S1a* appears to be extremely rare or extinct in living males. The primary subclade S-P405* is also relatively rare, but is found at significant levels (5.6%) among various Micronesian populations. It is also found among males on the Indonesian island of Sumba, at a rate of 0.2%. According to ISOGG (2017), S1a1 (Z42413) has been found among the Lebbo' people of Indonesia and S1a1a1 (P60) among indigenous Australians. One study has reported finding S-M230 (S1a1b) in: 52% (16/31) of a sample from the Papua New Guinea (PNG) Highlands; 21% (7/34) of a sample from the Moluccas (Maluku); 16% (5/31) of a sample from the Papua New Guinea coast; 12.5% (2/16) of a sample of Tolai from New Britain; 10% (3/31) of a sample from Nusa Tenggara; and 2% (2/89) of a sample from the West New Guinea lowlands/coast. One subclade, Haplogroup S1a1b1d1a (S-M226.1), has been found at low frequencies in the Admiralty Islands and along the coast of mainland PNG. The distribution of the other major subclades of S1a, according to ISOGG, is as follows: S1a2 (P79) – Melanesia and Papua New Guinea, including the Admiralty Islands; S1a3 (P315) – indigenous Australians; and S1a3b (P401) – Vanuatu.
**Descriptive interpretation** Descriptive interpretation: According to Rudolf Carnap, in logic, an interpretation is a descriptive interpretation (also called a factual interpretation) if at least one of the undefined symbols of its formal system becomes, in the interpretation, a descriptive sign (i.e., the name of single objects, or observable properties). In his Introduction to Semantics (Harvard University Press, 1942) he makes a distinction between formal interpretations, which are logical interpretations (also called mathematical or logico-mathematical interpretations), and descriptive interpretations: a formal interpretation is a descriptive interpretation if it is not a logical interpretation. Attempts to axiomatize the empirical sciences, Carnap said, use a descriptive interpretation to model reality: the aim of these attempts is to construct a formal system for which reality is the only interpretation; the world is an interpretation (or model) of these sciences only insofar as these sciences are true. Any non-empty set may be chosen as the domain of a descriptive interpretation, and all n-ary relations among the elements of the domain are candidates for assignment to any predicate of degree n. Examples: A sentence is either true or false under an interpretation which assigns values to the logical variables. We might, for example, make the following assignments:
Individual constants:
- a: Socrates
- b: Plato
- c: Aristotle
Predicates:
- Fα: α is sleeping
- Gαβ: α hates β
- Hαβγ: α made β hit γ
Sentential variables:
- p: "It is raining."
Under this interpretation the sentences discussed above would represent the following English statements:
- p: "It is raining."
- F(a): "Socrates is sleeping."
- H(b,a,c): "Plato made Socrates hit Aristotle."
- ∀x(F(x)): "Everybody is sleeping."
- ∃z(G(a,z)): "Socrates hates somebody."
- ∃x∀y∃z(H(x,y,z)): "Somebody made everybody hit somebody."
- ∀x∃z(F(x) ∧ G(a,z)): "Everybody is sleeping and Socrates hates somebody."
- ∃x∀y∃z(G(a,z) ∨ H(x,y,z)): "Either Socrates hates somebody or somebody made everybody hit somebody."
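Because the domain in this example is finite, truth under the interpretation can be checked mechanically by consulting the assigned extensions. The sketch below assumes extensions given as sets of tuples; the particular membership choices for F, G, and H are illustrative, not Carnap's:

```python
# Evaluating sentences under a descriptive interpretation with a finite domain.
# The extensions below are illustrative assumptions.

DOMAIN = {"Socrates", "Plato", "Aristotle"}

# Individual constants
a, b, c = "Socrates", "Plato", "Aristotle"

# Predicates, given extensionally as sets of tuples
F = {("Socrates",)}                       # "α is sleeping"
G = {("Socrates", "Plato")}               # "α hates β"
H = {("Plato", "Socrates", "Aristotle")}  # "α made β hit γ"

def holds(pred, *args):
    """A predicate holds of some individuals iff their tuple is in its extension."""
    return tuple(args) in pred

print(holds(F, a))                                  # F(a): True
print(holds(H, b, a, c))                            # H(b,a,c): True
print(all(holds(F, x) for x in DOMAIN))             # ∀x(F(x)): False
print(any(holds(G, a, z) for z in DOMAIN))          # ∃z(G(a,z)): True
print(any(all(any(holds(H, x, y, z) for z in DOMAIN)
              for y in DOMAIN)
          for x in DOMAIN))                         # ∃x∀y∃z(H(x,y,z)): False
```

Quantifiers translate directly into `any` and `all` over the domain, which makes the correspondence between the formal sentences and their English readings easy to audit.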
**Area Defense Anti-Munitions** Area Defense Anti-Munitions: Area Defense Anti-Munitions (ADAM) is an experimental short-range ground-to-air anti-missile weapon system being developed by Lockheed Martin. It uses a 10 kW fiber laser to attack its targets.
**Minoxidil** Minoxidil: Minoxidil is a medication used for the treatment of high blood pressure and pattern hair loss. It is an antihypertensive vasodilator. It is available as a generic medication by prescription in oral tablet form and over the counter as a topical liquid or foam. Medical uses: Minoxidil, when used for hypertension, is generally reserved for patients with severe hypertension that does not respond to at least two other agents and a diuretic. Minoxidil is also generally administered with a loop diuretic to prevent sodium and water retention. It may also cause a reflex tachycardia and thus is prescribed with a beta blocker. Medical uses: Hair loss Minoxidil, applied topically, is used for the treatment of hair loss. It is effective in helping promote hair growth in people with androgenic alopecia regardless of sex. Minoxidil must be used indefinitely for continued support of existing hair follicles and the maintenance of any hair regrowth achieved. Low-dose oral minoxidil (LDOM) is used off-label against hair loss and to promote hair regrowth. Oral minoxidil has been found to be an effective and well-tolerated treatment alternative for patients having difficulty with topical formulations. Side effects: Topically applied minoxidil is generally well tolerated, but common side effects include itching of the eyes, general itching, irritation at the treated area, and unwanted hair growth elsewhere on the body. Alcohol and propylene glycol present in some topical preparations may dry the scalp, resulting in dandruff and contact dermatitis. Side effects of oral minoxidil may include swelling of the face and extremities, rapid heartbeat, or lightheadedness. Cardiac lesions, such as focal necrosis of the papillary muscle and subendocardial areas of the left ventricle, have been observed in laboratory animals treated with minoxidil. Pseudoacromegaly is an extremely rare side effect reported with large doses of oral minoxidil. In 2013 or 2014, a seven-year-old girl was admitted to a children's hospital in Toulouse in France after accidentally ingesting a teaspoon of Alopexy (a brand name for minoxidil in France). The child vomited constantly after ingestion and showed hypotension and tachycardia for 40 hours. The authors of the report on the incident stressed that the product should be kept out of reach of children, and urged manufacturers to consider more secure child-resistant packaging. Pharmacology: Mechanism of action The mechanism by which minoxidil promotes hair growth is not fully understood. Minoxidil is an adenosine 5'-triphosphate-sensitive potassium channel opener, causing hyperpolarization of cell membranes. Theoretically, by widening blood vessels and opening potassium channels, it allows more oxygen, blood, and nutrients to reach the follicles. Moreover, minoxidil contains a nitric oxide moiety and may act as a nitric oxide agonist. This may cause follicles in the telogen phase to shed, to be replaced by thicker hairs in a new anagen phase. Minoxidil is a prodrug that is converted by sulfation, via the sulfotransferase enzyme SULT1A1, to its active form, minoxidil sulfate. The effect of minoxidil is mediated by adenosine, which triggers intracellular signal transduction via both adenosine A1 receptors and two sub-types of adenosine A2 receptors (A2A and A2B receptors). Minoxidil acts as an activator of the Kir6/SUR2 channel upon selective binding to SUR2. The expression of SUR2B in dermal papilla cells might play a role in the production of adenosine.
Minoxidil induces cell growth factors such as VEGF, HGF and IGF-1, and potentiates HGF and IGF-1 actions by the activation of uncoupled sulfonylurea receptors on the plasma membrane of dermal papilla (DP) cells. A number of in vitro effects of minoxidil have been described in monocultures of various skin and hair follicle cell types, including stimulation of cell proliferation, inhibition of collagen synthesis, and stimulation of vascular endothelial growth factor, prostaglandin synthesis and leukotriene B4 expression. Minoxidil causes a redistribution of cellular iron through its apparent capacity to bind this metal ion. When minoxidil binds iron in a Fenton-reactive form, intracellular hydroxyl radical production would ensue, but the hydroxyl radical would be immediately trapped and scavenged by the minoxidil to generate a nitroxyl radical. It is presumed that this nitroxyl radical is capable of reduction by glutathione to reform minoxidil. Such a process would cycle until the minoxidil is otherwise metabolized, and would result in rapid glutathione depletion with glutathione disulphide formation, and therefore with concomitant consumption of NADPH/NADH and other reducing equivalents. Minoxidil inhibits prolyl hydroxylase (PHD) by interfering with the normal function of ascorbate, a cofactor of the enzyme, leading to a stabilization of HIF-1α protein and a subsequent activation of HIF-1. In an in vivo angiogenesis assay, millimolar minoxidil increased blood vessel formation in a VEGF-dependent manner. Minoxidil inhibition of PHD occurs via interrupting ascorbate binding to iron. The structural feature of positioning amines adjacent to the nitric oxide moiety may confer the ability of millimolar minoxidil to chelate iron, thereby inhibiting PHD. Minoxidil is also capable of inhibiting tetrahydrobiopterin, a cofactor for nitric oxide synthase. Minoxidil stimulates prostaglandin E2 production by activating COX-1 and prostaglandin endoperoxide synthase-1, but inhibits prostacyclin production. Additionally, expression of the prostaglandin E2 receptor, the most upregulated target gene in the β-catenin pathway of DP cells, was enhanced by minoxidil, which may enable hair follicles to grow continuously and maintain the anagen phase. The anti-fibrotic activity of minoxidil stems from inhibition of the enzyme lysyl hydroxylase, present in fibroblasts, which may result in the synthesis of a hydroxylysine-deficient collagen. Minoxidil can also potentially stimulate elastogenesis in aortic smooth muscle cells, and in skin fibroblasts in a dose-dependent manner. In hypertensive rats, minoxidil increases elastin levels in the mesenteric, abdominal, and renal arteries through a decrease in elastase activity in these tissues. In rats, potassium channel openers decrease calcium influx, which inhibits elastin gene transcription through the extracellular signal-regulated kinase 1/2 (ERK 1/2)-activator protein 1 signaling pathway. ERK 1/2, through elastin gene transcription, increases the content of adequately cross-linked elastic fiber synthesized by smooth muscle cells, and decreases the number of cells in the aorta. Minoxidil possesses alpha 2-adrenoceptor agonist activity and stimulates the peripheral sympathetic nervous system (SNS) by way of carotid and aortic baroreceptor reflexes. Minoxidil administration also brings an increase in plasma renin activity, largely due to the aforementioned activation of the SNS.
This activation of the renin-angiotensin axis further prompts increased biosynthesis of aldosterone; whereas plasma and urinary aldosterone levels are increased early in the course of treatment with minoxidil, over time these values tend to normalize, presumably because of accelerated metabolic clearance of aldosterone in association with hepatic vasodilation. Minoxidil may be involved in the inhibition of serotonergic (5-HT2) receptors. Minoxidil might increase blood-tumor barrier permeability in a time-dependent manner by down-regulating tight junction protein expression; this effect could be related to the ROS/RhoA/PI3K/PKB signal pathway. Minoxidil significantly increases ROS concentration when compared to untreated cells. Pharmacology: In vitro Minoxidil treatment resulted in a 0.22-fold change for 5α-R2 (p < 0.0001). This antiandrogenic effect of minoxidil, shown by significant downregulation of 5α-R2 gene expression in HaCaT cells, may be one of its mechanisms of action in alopecia. Minoxidil is less effective when the area of hair loss is large. In addition, its effectiveness has largely been demonstrated in younger men who have experienced hair loss for less than 5 years. Minoxidil use is indicated for central (vertex) hair loss only. Two clinical studies are being conducted in the US for a medical device that may allow patients to determine if they are likely to benefit from minoxidil therapy. Conditions such as Cantú syndrome have been shown to mimic the pharmacological effects of minoxidil. Chemistry: Minoxidil is an odorless, white to off-white, crystalline powder (crystals from methanol-acetonitrile). When heated to decomposition it emits toxic fumes of nitrogen oxides. It decomposes at 259–261 °C. Solubility (mg/ml): propylene glycol 75, methanol 44, ethanol 29, 2-propanol 6.7, dimethylsulfoxide 6.5, water 2.2, chloroform 0.5, acetone <0.5, ethyl acetate <0.5, diethyl ether <0.5, benzene <0.5, acetonitrile <0.5. Chemistry: pKa = 4.61. Commercially available minoxidil topical solution should be stored at a temperature of 20–25 °C. Extemporaneous formulations of minoxidil have been reported to have variable stability, depending on the vehicle and method of preparation, and the FDA requests that physicians and pharmacists refrain from preparing extemporaneous topical formulations using commercially available minoxidil tablets. Minoxidil tablets should be stored in well-closed containers at 15–30 °C. Chemistry: Minoxidil, 6-amino-1,2-dihydro-1-hydroxy-2-imino-4-piperidinopyrimidine, is synthesized from barbituric acid, the reaction of which with phosphorus oxychloride gives 2,4,6-trichloropyrimidine. Upon reaction with ammonia, this turns into 2,4-diamino-6-chloropyrimidine. Next, the resulting 2,4-diamino-6-chloropyrimidine undergoes reaction with 2,4-dichlorophenol in the presence of potassium hydroxide, giving 2,4-diamino-6-(2,4-dichlorophenoxy)-pyrimidine. Oxidation of this product with 3-chloroperbenzoic acid gives 2,4-diamino-6-(2,4-dichlorophenoxy)pyrimidine-3-oxide, the 2,4-dichlorophenoxyl group of which is replaced with a piperidine group at high temperature, giving minoxidil. Compounds related to minoxidil include kopexil (diaminopyrimidine oxide). History: Initial application Minoxidil was developed in the late 1950s by the Upjohn Company (which later became part of Pfizer) to treat ulcers. In trials using dogs, the compound did not cure ulcers, but proved to be a powerful vasodilator.
Upjohn synthesized over 200 variations of the compound, including the one it developed in 1963 and named minoxidil. These studies resulted in the U.S. Food and Drug Administration (FDA) approving minoxidil (under the brand name Loniten) in the form of oral tablets to treat high blood pressure in 1979. History: Hair growth When Upjohn received permission from the FDA to test the new drug as a medicine for hypertension, the company approached Charles A. Chidsey MD, Associate Professor of Medicine at the University of Colorado School of Medicine. He conducted two studies, the second study showing unexpected hair growth. Puzzled by this side effect, Chidsey consulted Guinter Kahn (who, while a dermatology resident at the University of Miami, had been the first to observe and report hair development on patients using the minoxidil patch) and discussed the possibility of using minoxidil for treating hair loss. History: Kahn, along with his colleague Paul J. Grant MD, had obtained a certain amount of the drug and conducted their own research, having been the first to observe the side effect. Neither Upjohn nor Chidsey was aware of the hair-growth side effect at the time. The two doctors had been experimenting with a 1% solution of minoxidil mixed with several alcohol-based liquids. Both parties filed patents to use the drug for hair loss prevention, which resulted in a decade-long dispute between Kahn and Upjohn; it ended with Kahn's name included on a consolidated patent (U.S. #4,596,812, Charles A Chidsey, III and Guinter Kahn) in 1986 and royalties from the company to both Kahn and Grant. Meanwhile, the effect of minoxidil on hair loss prevention was so clear that in the 1980s physicians were prescribing Loniten off-label to their balding patients. In August 1988, the FDA approved the drug for treating baldness in men under the brand name "Rogaine" (the FDA rejected Upjohn's first choice, Regain, as misleading). The agency concluded that although "the product will not work for everyone", 39% of the men studied had "moderate to dense hair growth on the crown of the head". "Men's Rogaine", marketed by Johnson & Johnson, went off-patent on 20 January 2006. In 1991, Upjohn made the product available for women. "Women's Rogaine", marketed by Johnson & Johnson, went off-patent on 14 February 2014. Society and culture: Economics In February 1996, the FDA approved both the over-the-counter sale of the medication and the production of generic formulations of minoxidil. Upjohn responded by lowering prices to half the price of the prescription drug and by releasing a prescription 5% formula of Rogaine in 1997. In 1998, a 5% formulation of minoxidil was approved for nonprescription sale by the FDA. The 5% aerosol foam formula was approved for medical use in the US in 2006. Generic versions of the 5% aerosol foam formula were approved in 2017. In 2017, JAMA published a study of pharmacy prices in four states for 41 over-the-counter minoxidil products which were "gender-specified". The authors found that the mean price for minoxidil solutions was the same for women and men even though the women's formulations were 2% and the men's were 5%, while the mean price for minoxidil foams, which were all 5%, was 40% higher for women. The authors noted this was the first time gender-based pricing had been shown for a medication.
Society and culture: Brand names As of June 2017, minoxidil was marketed under many trade names worldwide: Alomax, Alopek, Alopexy, Alorexyl, Alostil, Aloxid, Aloxidil, Anagen, Apo-Gain, Axelan, Belohair, Boots Hair Loss Treatment, Botafex, Capillus, Carexidil, Coverit, Da Fei Xin, Dilaine, Dinaxcinco, Dinaxil, Ebersedin, Eminox, Folcare, Follixil, Guayaten, Hair Grow, Hair-Treat, Hairgain, Hairgaine, Hairgrow, Hairway, Headway, Inoxi, Ivix, Keranique, Lacovin, Locemix, Loniten, Lonnoten, Lonolox, Lonoten, Loxon, M E Medic, Maev-Medic, Mandi, Manoxidil, Mantai, Men's Rogaine, Minodil, Minodril, Minostyl, Minovital, Minox, Minoxi, Minoxidil, Minoxidilum, Minoximen, Minoxiten, Minscalp, Mintop, Modil, Morr, Moxidil, Neo-Pruristam, Neocapil, Neoxidil, Nherea, Noxidil, Oxofenil, Pilfud, Pilogro, Pilomin, Piloxidil, Re-Stim, Re-Stim+, Recrea, Regain, Regaine, Regaxidil, Regro, Regroe, Regrou, Regrowth, Relive, Renobell Locion, Reten, Rexidil, Rogaine, Rogan, Si Bi Shen, Splendora, Superminox, Trefostil, Tricolocion, Tricoplus, Tricovivax, Tricoxane, Trugain, Tugain, Unipexil, Vaxdil, Vius, Women's Regaine, Xenogrow, Xtreme Boost, Xtreme Boost+, Xue Rui, Ylox, and Zeldilon. It was also marketed as a combination drug with amifampridine under the brand names Gainehair and Hair 4 U, and as a combination with tretinoin and clobetasol under the brand name Sistema GB. Veterinary uses: Minoxidil is suspected to be highly toxic to cats, even in small doses, as there are reported cases of cats dying shortly after coming into contact with minimal amounts of the substance.
**Porter (carrier)** Porter (carrier): A porter, also called a bearer, is a person who carries objects or cargo for others. The range of services conducted by porters is extensive, from shuttling luggage aboard a train (a railroad porter) to bearing heavy burdens at altitude in inclement weather on multi-month mountaineering expeditions. They can carry items on their backs (as with a backpack) or on their heads. The word "porter" derives from the Latin portare (to carry). The use of humans to transport cargo dates to the ancient world, prior to the domestication of animals and the development of the wheel. Historically it remained prevalent in areas where slavery was permitted, and it persists today where modern forms of mechanical conveyance are impractical or impossible, such as in mountainous terrain or under thick jungle or forest cover. Porter (carrier): Over time slavery diminished and technology advanced, but the role of porter for specialized transporting services remains strong in the 21st century. Examples include bellhops at hotels, redcaps at railway stations, skycaps at airports, and bearers on adventure trips engaged by foreign travelers. Expeditions: Porters, frequently called Sherpas in the Himalayas (after the ethnic group most Himalayan porters come from), are also an essential part of mountaineering: they are typically highly skilled professionals who specialize in the logistics of mountain climbing, not merely people paid to carry loads (although carrying is integral to the profession). Frequently, porters/Sherpas work for companies who hire them out to climbing groups, to serve both as porters and as mountain guides; the term "guide" is often used interchangeably with "Sherpa" or "porter", but there are certain differences. Porters are expected to prepare the route before and/or while the main expedition climbs, climbing up beforehand with tents, food, water, and equipment (enough for themselves and for the main expedition), which they place in carefully located deposits on the mountain. This preparation can take months of work before the main expedition starts. Doing this involves numerous trips up and down the mountain, until the last and smallest supply deposit is planted shortly below the peak. When the route is prepared, either entirely or in stages ahead of the expedition, the main body follows. The last stage is often done without the porters, who remain at the last camp, a quarter mile or more below the summit, meaning only the main expedition is given the credit for reaching the summit. In many cases, since the porters are going ahead, they are forced to free climb, driving spikes and laying safety lines for the main expedition to use as they follow. Porters such as Sherpas are frequently drawn from local ethnic groups, well adapted to living in the rarefied atmosphere and accustomed to life in the mountains. Although they receive little glory, porters or Sherpas are often considered among the most skilled of mountaineers, and are generally treated with respect, since the success of the entire expedition is only possible through their work. They are also often called upon to stage rescue expeditions when part of the party is endangered or there is an injury; when a rescue attempt is successful, several porters are usually called upon to transport the injured climber(s) back down the mountain so the expedition can continue. A well-known incident in which porters attempted to rescue numerous stranded climbers, and often died as a result, is the 2008 K2 disaster.
Sixteen Sherpas were killed in the 2014 Mount Everest ice avalanche, prompting the entire Sherpa guide community to refuse to undertake any more ascents for the remainder of the year, making any further expeditions impossible. History: Human adaptability and flexibility led to the early use of humans for transporting gear. Porters were commonly used as beasts of burden in the ancient world, when labor was generally cheap and slavery widespread. The ancient Sumerians, for example, enslaved women to shift wool and flax. History: In the early Americas, where there were few native beasts of burden, all goods were carried by porters, called tlamemes in the Nahuatl language of Mesoamerica. In colonial times, some areas of the Andes employed porters called silleros to carry persons, particularly Europeans, as well as their luggage, across the difficult mountain passes. Throughout the globe porters served, and in some areas continue to serve, as such littermen, particularly in crowded urban areas. History: Many great works of engineering were created solely by muscle power in the days before machinery or even wheelbarrows and wagons; massive forces of workers and bearers would complete impressive earthworks by manually lugging the earth, stones, or bricks in baskets on their backs. Porters were very important to the local economies of many large cities in Brazil during the 1800s, where they were known as ganhadores. In 1857, ganhadores in Salvador, Bahia, went on strike in the first general strike in the country's history. Contribution to mountain climbing expeditions: The contributions of porters can often go overlooked. Amir Mehdi was a Pakistani mountaineer and porter known for being part of the teams which managed the first successful ascent of Nanga Parbat in 1953 and, with an Italian expedition, of K2 in 1954. He and the Italian mountaineer Walter Bonatti are also known for having survived a night in the highest open bivouac, at 8,100 metres (26,600 ft), on K2 in 1954. Fazal Ali, who was born in the Shimshal Valley in northern Pakistan, is – according to the Guinness Book of World Records – the only man ever to have scaled K2 (8,611 m) three times, in 2014, 2017 and 2018, all without oxygen, but his achievements have gone largely unrecognised. Today: Porters are still paid to shift burdens in many third-world countries where motorized transport is impractical or unavailable, often working alongside pack animals. The Sherpa people of Nepal are so renowned as mountaineering porters that their ethnonym is synonymous with that profession. Their skill, knowledge of the mountains and local culture, and ability to perform at altitude make them indispensable for the highest Himalayan expeditions. Porters at Indian railway stations are called coolies, a term for an unskilled Asian labourer derived from the Chinese word for porter. Mountain porters are also still in use in a handful of more developed countries, including Slovakia (horský nosič) and Japan (bokka, 歩荷). These men (and more rarely women) regularly resupply mountain huts and tourist chalets in high-altitude mountain ranges. In North America: Certain trade-specific terms are used for forms of porters in North America, including bellhop (hotel porter), redcap (railway station porter), and skycap (airport porter).
Today: The practice of railroad station porters wearing red caps to distinguish them from blue-capped train personnel with other duties was begun on Labor Day of 1890 by an African-American porter seeking to stand out from the crowds at Grand Central Terminal in New York City. The tactic immediately caught on, and was over time adopted by other forms of porters for their specialties.
**Preclosure operator** Preclosure operator: In topology, a preclosure operator or Čech closure operator is a map between subsets of a set, similar to a topological closure operator, except that it is not required to be idempotent. That is, a preclosure operator obeys only three of the four Kuratowski closure axioms. Definition: A preclosure operator on a set X is a map [ ]p : P(X) → P(X), where P(X) is the power set of X. The preclosure operator has to satisfy the following properties:
1. [∅]p = ∅ (preservation of nullary unions);
2. A ⊆ [A]p (extensivity);
3. [A ∪ B]p = [A]p ∪ [B]p (preservation of binary unions).
The last axiom implies the following:
4. A ⊆ B implies [A]p ⊆ [B]p.
Topology: A set A is closed (with respect to the preclosure) if [A]p = A. A set U ⊂ X is open (with respect to the preclosure) if its complement A = X ∖ U is closed. The collection of all open sets generated by the preclosure operator is a topology; however, this topology does not capture the notion of convergence associated with the operator, and to do so one should consider a pretopology instead. Examples: Premetrics Given a premetric d on X, the map [A]p = {x ∈ X : d(x,A) = 0} is a preclosure on X. Sequential spaces The sequential closure operator [ ]seq is a preclosure operator. Given a topology T with respect to which the sequential closure operator is defined, the topological space (X,T) is a sequential space if and only if the topology Tseq generated by [ ]seq is equal to T, that is, if Tseq = T.
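A short computational check may make the premetric example concrete, and it also exhibits the failure of idempotence that separates a preclosure from a Kuratowski closure. Everything below (the three-point set and the distance table) is an invented toy example:

```python
# Preclosure induced by a premetric: [A]p = {x in X : d(x, A) = 0},
# where d(x, A) = min over a in A of d(x, a). Toy example on a finite X.

X = {0, 1, 2}
ZERO_PAIRS = {(0, 1), (1, 2)}   # d(x, y) = 0 for these ordered pairs

def d(x, y):
    """A premetric: d(x, x) = 0 and d is non-negative; symmetry is NOT required."""
    if x == y or (x, y) in ZERO_PAIRS:
        return 0
    return 1

def cl(A):
    """[A]p for the premetric d."""
    if not A:
        return frozenset()      # [∅]p = ∅ (preservation of nullary unions)
    return frozenset(x for x in X if min(d(x, a) for a in A) == 0)

A, B = frozenset({2}), frozenset({0})
print(cl(A))        # frozenset({1, 2}): d(1, 2) = 0 pulls 1 in
print(cl(cl(A)))    # frozenset({0, 1, 2}): a second pass adds 0 -> not idempotent

assert A <= cl(A)                   # extensivity
assert cl(A | B) == cl(A) | cl(B)   # preservation of binary unions
```

Since a premetric allows d(x, y) = 0 for x ≠ y, points can be pulled in one step at a time, which is exactly why applying the operator twice can grow the set.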
**Atosiban** Atosiban: Atosiban, sold under the brand name Tractocile among others, is an inhibitor of the hormones oxytocin and vasopressin. It is used as an intravenous medication as a labour repressant (tocolytic) to halt premature labor. It was developed by Ferring Pharmaceuticals in Sweden and first reported in the literature in 1985. Originally marketed by Ferring Pharmaceuticals, it is licensed in proprietary and generic forms for the delay of imminent preterm birth in pregnant adult women. Atosiban: The most commonly reported side effect is nausea. Medical uses: Atosiban is used to delay birth in adult women who are 24 to 33 weeks pregnant, when they show signs that they may give birth pre-term (prematurely). These signs include regular contractions lasting at least 30 seconds at a rate of at least four every 30 minutes, and dilation of the cervix (the neck of the womb) of 1 to 3 cm and an effacement (a measure of the thinness of the cervix) of 50% or more. In addition, the baby must have a normal heart rate. Pharmacology: Mechanism of action Atosiban is a nonapeptide, desamino-oxytocin analogue, and a competitive vasopressin/oxytocin receptor antagonist (VOTra). Atosiban inhibits the oxytocin-mediated release of inositol trisphosphate from the myometrial cell membrane. As a result, there is reduced release of intracellular, stored calcium from the sarcoplasmic reticulum of myometrial cells, and reduced influx of Ca2+ from the extracellular space through voltage-gated channels. In addition, atosiban suppresses the oxytocin-mediated release of PGE and PGF from the decidua. In human preterm labour, atosiban, at the recommended dosage, antagonises uterine contractions and induces uterine quiescence. The onset of uterine relaxation following atosiban is rapid, with uterine contractions significantly reduced within 10 minutes, achieving stable uterine quiescence. Other uses: Atosiban use after assisted reproduction Atosiban is useful in improving the pregnancy outcome of in vitro fertilization-embryo transfer (IVF-ET) in patients with repeated implantation failure; the pregnancy rate improved from zero to 43.7%. First- and second-trimester bleeding is more prevalent in ART than in spontaneous pregnancies. In a series of 33 first-trimester pregnancies from 2004 to 2010 with vaginal bleeding after ART and evident uterine contractions, treated with atosiban and/or ritodrine, no preterm delivery occurred before 30 weeks. In a 2010 meta-analysis, nifedipine was superior to β2 adrenergic receptor agonists and magnesium sulfate for tocolysis in women with preterm labor (20–36 weeks), but it has been assigned to pregnancy category C by the U.S. Food and Drug Administration, so it is not recommended before 20 weeks, or in the first trimester. A report from 2011 supports the use of atosiban, even very early in pregnancy, to decrease the frequency of uterine contractions and so enhance the success of the pregnancy. Pharmacovigilance: Following the launch of atosiban in 2000, the calculated cumulative patient exposure to atosiban (January 2000 to December 2005) is estimated at 156,468 treatment cycles. To date, routine monitoring of drug safety has revealed no major safety issues. Regulatory affairs: Atosiban was approved in the European Union in January 2000 and launched in the European Union in April 2000. As of June 2007, atosiban was approved in 67 countries, excluding the United States and Japan.
It was understood that Ferring did not expect to seek approval for atosiban in the US or Japan, focusing instead on the development of new compounds for use in spontaneous preterm labor (SPTL). Because atosiban had only a short period of patent protection remaining, the parent drug company decided not to pursue licensing in the US. Systematic reviews: In a systematic review of atosiban for tocolysis in preterm labour, six clinical studies — two comparing atosiban to placebo and four comparing atosiban to a β agonist — showed a significant increase in the proportion of women undelivered by 48 hours among women receiving atosiban compared to placebo. When compared with β agonists, atosiban increased the proportion of women undelivered by 48 hours and was safer than β agonists. Therefore, oxytocin antagonists appear to be effective and safe for tocolysis in preterm labour. A 2014 systematic review by the Cochrane Collaboration showed that while atosiban had fewer side effects than alternative drugs (such as ritodrine, other β agonists, and calcium channel antagonists), it was no better than placebo in the major outcomes, i.e., pregnancy prolongation and neonatal outcomes. The finding of an increase in infant deaths in one placebo-controlled trial warrants caution, and further research is recommended. Systematic reviews: Clinical trials Atosiban vs. nifedipine A 2013 retrospective study comparing the efficacy and safety of atosiban and nifedipine in the suppression of preterm labour concluded that both are effective in delaying delivery for seven days or more in women presenting with preterm labour. A total of 68.3% of women in the atosiban group remained undelivered at seven days or more, compared with 64.7% in the nifedipine group. The two drugs have similar efficacy and associated minor side effects; however, flushing, palpitation, and hypotension were significantly more frequent in the nifedipine group. A 2012 clinical trial compared the tocolytic efficacy and tolerability of atosiban with that of nifedipine. Forty-eight (68.6%) women allocated to atosiban and 39 (52%) allocated to nifedipine did not deliver and did not require an alternate agent at 48 hours (p=.03). Atosiban had fewer failures within 48 hours; nifedipine may be associated with a longer postponement of delivery. A 2009 randomised controlled study demonstrated for the first time the direct effects of atosiban on fetal movement, heart rate, and blood flow: tocolysis with either atosiban or nifedipine, combined with betamethasone administration, had no direct fetal adverse effects. Systematic reviews: Atosiban vs. ritodrine A multicentre controlled trial of atosiban vs. ritodrine in 128 women showed significantly better tocolytic efficacy after 7 days in the atosiban group than in the ritodrine group (60.3 versus 34.9%), but not at 48 hours (68.3 versus 58.7%). Maternal adverse events were reported less frequently in the atosiban group (7.9 versus 70.8%), resulting in fewer early drug terminations due to adverse events (0 versus 20%). Therefore, atosiban appears superior to ritodrine in the treatment of preterm labour. Brand names: In India it is marketed under the brand name Tosiban by Zuventus Healthcare Ltd.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thoracic aorta injury** Thoracic aorta injury: Injury of the thoracic aorta refers to any injury which affects the portion of the aorta which lies within the chest cavity. Injuries of the thoracic aorta are usually the result of physical trauma; however, they can also be the result of a pathological process. The main causes of this injury are deceleration (such as a car accident) and crush injuries. Injuries to the aorta are graded according to their extent, and the treatment, whether surgical or medical, depends on that grade. It is difficult to determine whether a patient has a thoracic aortic injury from symptoms alone, but through imaging and a physical exam the extent of injury can be determined. All patients with a thoracic aortic injury need to be treated, either surgically (with endovascular repair or open surgical repair) or medically, to keep their blood pressure and heart rate in the appropriate range. However, most patients who have a thoracic aortic injury do not survive 24 hours. Mechanism: Injuries to the aorta are usually the result of trauma, such as deceleration and crush injuries. Deceleration injuries almost always occur during high speed impacts, such as those in motor vehicle crashes and falls from a substantial height. Several mechanical processes can occur and are reflected in the injury itself. A more recently proposed mechanism is that the aorta can be compressed between bony structures (such as the manubrium, clavicle, and first rib) and the spine. In the ascending aorta (the portion of the aorta which is almost vertical), one mechanism of injury is torsion (a two-way twisting). There are clinical predictors of an aortic injury: age over 50, being an unrestrained occupant, hypotension, a thoracic injury requiring thoracotomy, a spinal injury, or a head injury. If four of these criteria are met, the likelihood of an aortic injury is 30%. The aortic wall is made up of three components: the inner layer (intima), the muscle layer (media), and the outer layer (adventitia). A traumatic injury to the thoracic aorta can disrupt any of these parts; aortic injury therefore ranges from injury to part of the inner layer to a complete tear of all three layers. There are 4 grades of aortic injury. Mechanism: Type I: Intimal tear Type II: Intramural hematoma Type III: Pseudoaneurysm Type IV: Rupture In addition to the 4 grades of aortic injury, the risk of rupture can also be categorized. If both the inner layer and the muscle layer of the aortic wall are involved in the injury, it is categorized as significant aortic injury. If just the inner layer and a portion of the muscle layer are involved, it is characterized as minimal aortic injury; radiographically this is seen as an intimal flap less than 1 cm in size. Between the mobile ascending aorta and the relatively fixed descending thoracic aorta is the aortic isthmus. When there is a sudden deceleration, the mobile ascending aorta pushes forward, creating a whiplash effect on the aortic isthmus. However, a different mechanism is involved when the ascending aorta proximal to the isthmus is torn. When there is rapid deceleration, the heart is pushed into the left posterior chest. This causes a sudden increase in intra-aortic pressure and can cause aortic rupture. 
This is known as the water hammer effect. Based on the location of the injury in the thorax, subsequent injuries can take place. If the injury is in the descending thoracic aorta, this can lead to a hemothorax, whereas an injury to the ascending aorta can lead to hemopericardium with subsequent tamponade, or can compress the superior vena cava (SVC). Symptoms: It is difficult to rely on symptoms to diagnose a thoracic aortic injury. However, symptoms can include severe chest pain, cough, shortness of breath, difficulty swallowing due to compression of the esophagus, back pain, and hoarseness due to involvement of the recurrent laryngeal nerve. There may be external signs such as bruising on the anterior chest wall due to a traumatic injury. Clinical signs are uncommon and nonspecific but can include generalized hypertension due to involvement of the sympathetic afferent nerves in the aortic isthmus. A murmur can also be audible as turbulent blood flow passes over the tear. Diagnosis: Classification There are inconsistencies in the terminology of aortic injury. Several terms are used interchangeably to describe injury to the aorta, such as tear, laceration, transection, and rupture. Laceration is used as a term for the consequence of a tear, whereas a transection is a section across an axis or cross section. For all intents and purposes, the latter is used when a tear occurs across all or nearly all of the circumference of the aorta. Rupture is defined as a forcible disruption of tissue. Some disagree with the usage of rupture as they believe it implies that a tear is incompatible with life; however, the term accurately gauges the severity of tears in the aorta. A rupture can be either complete or partial, and can be classified further by the position of the tear. Diagnosis: Imaging The gold standard for diagnosis of thoracic aortic injury is aortography. This method involves inserting a catheter into the aorta and directly injecting contrast material. The primary benefit of aortography is the ability to precisely determine the location of injury for surgical planning. Another imaging modality is CT angiography, which has a sensitivity of 100%. A CT angiogram relies on timing the CT scan after a bolus of IV contrast is administered from a peripheral IV site. Because it has a sensitivity of 100% and is less invasive than aortography (owing to the peripheral placement of the IV line), CT angiography is the primary imaging choice. It allows visualization of the aorta and provides precise locations of traumatic injury. A CT angiogram shows both direct and indirect signs of aortic injury. An indirect sign is effacement of fat due to a hematoma; this sign should alert a radiologist to an underlying injury. Direct signs on CT include an intimal flap, irregularity of the shape of the aorta, filling defects secondary to a thrombus, or outpouching of the aorta. However, non-contrast CT scans, chest X-rays, and transesophageal echocardiograms can also be used. The most sensitive chest X-ray finding is a widened mediastinum of greater than 8 cm. An apical cap and displacement of the trachea from the midline to either side of the chest can also be seen. A normal chest X-ray, however, does not exclude a diagnosis of thoracic aortic injury. A chest X-ray can also be useful to diagnose subsequent problems caused by aortic rupture, such as pneumothorax or hemothorax. 
Non-contrast CT scans might show an intimal flap, periaortic hematoma, luminal filling defect, aortic contour abnormality, pseudoaneurysm, contained rupture, or vessel wall disruption (contrast-enhanced studies may additionally show active extravasation of intravenous contrast from the aorta), and are therefore useful to assess for minimal aortic injury. Transesophageal echocardiography is useful in patients who are hemodynamically unstable, but its sensitivity and specificity vary with the clinical user. It relies on placement of an ultrasound probe into the patient's esophagus in order to obtain an ultrasound of the heart. If esophageal injury is suspected, the patient has a facial injury, or the patient has difficulty maintaining their airway, then transesophageal echocardiography is contraindicated. Treatment: The first-line treatment for patients with thoracic aortic injury is maintaining the patient's airway with intubation and treating secondary injuries such as a hemothorax. Once the patient has a patent airway and other life-threatening injuries have been addressed, treatment of the aortic injury can begin. Treatment: Due to the constant risk of sudden rupture or exsanguination, urgent treatment is necessary. A patient can undergo either endovascular repair or open surgical repair. Endovascular repair is the current gold standard due to higher success rates and lower complication rates, and patients without contraindications should proceed with it. Repair should be delayed if there is life-threatening intra-abdominal or intracranial bleeding or if the patient is at risk for infection. Treatment: Endovascular Repair Endovascular repair is done by first gaining vascular access, usually through the femoral artery. A catheter is advanced to the point of injury and a luminal stent is deployed. Blood can then flow through the stent, protecting the injured aortic wall from rupture. Treatment: Open Surgical Repair Surgical repair is done by way of a thoracotomy, or opening of the chest wall. From this point multiple methods can be used, but the most successful methods enable distal perfusion to prevent ischemia. During surgery, blood flow to the parts of the body distal to the injury should be continuously monitored to confirm that oxygenation is occurring. Treatment: Medical Management While waiting for surgery, careful regulation of blood pressure and heart rate is necessary. Systolic blood pressure should be maintained between 100 and 120 mmHg, allowing for perfusion distal to the injury while decreasing the risk of rupture, and the heart rate should be kept under 100 beats per minute. Esmolol is the first choice to control blood pressure and heart rate due to its short duration of action; if the blood pressure is not within range, sodium nitroprusside can be added as a second agent. The treatment is similar to that used for aortic dissections. If the patient has only minimal aortic injury, the patient can be managed non-surgically and followed with serial imaging. If the patient develops a more severe injury, including a full-thickness injury through the media layer, then the patient should be treated with surgery. Outcomes: Thoracic aortic injury is the second leading cause of death in blunt trauma. 80% of patients who have a thoracic aortic injury will die immediately. Of the patients who survive to be evaluated, only 50% will survive 24 hours. 
Of the patients who survive the first 24 hours, 14% develop paraplegia. Epidemiology: Thoracic aortic injury is most commonly caused by penetrating trauma, in up to 90% of cases. Of these cases, around 28% are confined to the thoracic portion of the aorta, comprising the ascending aorta, the aortic arch, and the descending aorta. Among thoracic aortic injuries, the ligamentum arteriosum is the most common location, followed by the portion of the aorta after the origin of the left subclavian artery. The most common mechanism leading to thoracic aortic injury is a motor vehicle collision. Other mechanisms include airplane crashes, falls from a great height onto a hard surface, or any injury that causes substantial pressure to the sternum. The incidence of thoracic aortic injuries is approximately 1 in 100,000.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tribromosilane** Tribromosilane: Tribromosilane is the chemical compound with the formula HSiBr3. At high temperatures, it decomposes to produce silicon, and is an alternative to purified trichlorosilane for producing ultrapure silicon in the semiconductor industry. The Schumacher Process of silicon deposition uses tribromosilane gas to produce polysilicon and is claimed to have a number of cost and safety advantages over the Siemens Process for making polysilicon. It may be prepared by heating crystalline silicon with gaseous hydrogen bromide at high temperature. It spontaneously combusts when exposed to air.
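The preparation and thermal decomposition described above can be written as balanced equations. These are plausible reconstructions consistent with the text and with the analogous trichlorosilane chemistry, not equations taken from the source:

```latex
% Preparation: hydrobromination of crystalline silicon at high temperature
\mathrm{Si} + 3\,\mathrm{HBr} \longrightarrow \mathrm{HSiBr_3} + \mathrm{H_2}

% High-temperature decomposition to silicon, written as a disproportionation
% analogous to that of trichlorosilane (an assumption, not from the source)
4\,\mathrm{HSiBr_3} \longrightarrow \mathrm{Si} + 3\,\mathrm{SiBr_4} + 2\,\mathrm{H_2}
```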
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wetting** Wetting: Wetting is the ability of a liquid to maintain contact with a solid surface, resulting from intermolecular interactions when the two are brought together. This happens in the presence of a gaseous phase or another liquid phase not miscible with the first. The degree of wetting (wettability) is determined by a force balance between adhesive and cohesive forces. Wetting is important in the bonding or adherence of two materials. Wetting and the surface forces that control wetting are also responsible for other related effects, including capillary effects. There are two types of wetting: non-reactive wetting and reactive wetting. Wetting deals with three phases of matter: gas, liquid, and solid. It is now a center of attention in nanotechnology and nanoscience studies due to the advent of many nanomaterials in the past two decades (e.g. graphene, carbon nanotube, boron nitride nanomesh). Explanation: Adhesive forces between a liquid and solid cause a liquid drop to spread across the surface. Cohesive forces within the liquid cause the drop to ball up and avoid contact with the surface. Explanation: The contact angle (θ), as seen in Figure 1, is the angle at which the liquid–vapor interface meets the solid–liquid interface. The contact angle is determined by the balance between adhesive and cohesive forces. As the tendency of a drop to spread out over a flat, solid surface increases, the contact angle decreases. Thus, the contact angle provides an inverse measure of wettability. A contact angle less than 90° (low contact angle) usually indicates that wetting of the surface is very favorable, and the fluid will spread over a large area of the surface. Contact angles greater than 90° (high contact angle) generally mean that wetting of the surface is unfavorable, so the fluid will minimize contact with the surface and form a compact liquid droplet. Explanation: For water, a wettable surface may also be termed hydrophilic and a nonwettable surface hydrophobic. Superhydrophobic surfaces have contact angles greater than 150°, showing almost no contact between the liquid drop and the surface. This is sometimes referred to as the "Lotus effect". The table describes varying contact angles and their corresponding solid/liquid and liquid/liquid interactions. For nonwater liquids, the term lyophilic is used for low contact angle conditions and lyophobic is used when higher contact angles result. Similarly, the terms omniphobic and omniphilic apply to both polar and apolar liquids. High-energy vs. low-energy surfaces: Liquids can interact with two main types of solid surfaces. Traditionally, solid surfaces have been divided into high-energy and low-energy solids. The relative energy of a solid has to do with the bulk nature of the solid itself. Solids such as metals, glasses, and ceramics are known as 'hard solids' because the chemical bonds that hold them together (e.g., covalent, ionic, or metallic) are very strong. Thus, it takes a large amount of energy to break these solids (alternatively, a large amount of energy is required to cut the bulk and make two separate surfaces), so they are termed "high-energy". Most molecular liquids achieve complete wetting with high-energy surfaces. High-energy vs. low-energy surfaces: The other type of solid is weak molecular crystals (e.g., fluorocarbons, hydrocarbons, etc.) where the molecules are held together essentially by physical forces (e.g., van der Waals forces and hydrogen bonds). 
Since these solids are held together by weak forces, a very low amount of energy is required to break them; thus they are termed "low-energy". Depending on the type of liquid chosen, low-energy surfaces can permit either complete or partial wetting. Dynamic surfaces have been reported that undergo changes in surface energy upon the application of an appropriate stimulus. For example, a surface presenting photon-driven molecular motors was shown to undergo changes in water contact angle when switched between bistable conformations of differing surface energies. High-energy vs. low-energy surfaces: Wetting of low-energy surfaces Low-energy surfaces primarily interact with liquids through dispersive (van der Waals) forces. William Zisman produced several key findings: Zisman observed that cos θ increases linearly as the surface tension (γLV) of the liquid decreases. Thus, he was able to establish a linear function between cos θ and the surface tension (γLV) for various organic liquids. A surface is more wettable when γLV and θ are low. Zisman termed the intercept of these lines when cos θ = 1 as the critical surface tension (γc) of that surface. This critical surface tension is an important parameter because it is a characteristic of only the solid. Knowing the critical surface tension of a solid, it is possible to predict the wettability of the surface. The wettability of a surface is determined by the outermost chemical groups of the solid. Differences in wettability between surfaces that are similar in structure are due to differences in the packing of the atoms. For instance, if a surface has branched chains, it will have poorer packing than a surface with straight chains. Lower critical surface tension means a less wettable material surface. Ideal solid surfaces: An ideal surface is flat, rigid, perfectly smooth, chemically homogeneous, and has zero contact angle hysteresis. Zero hysteresis implies the advancing and receding contact angles are equal. In other words, only one thermodynamically stable contact angle exists. When a drop of liquid is placed on such a surface, the characteristic contact angle is formed as depicted in Figure 1. Furthermore, on an ideal surface, the drop will return to its original shape if it is disturbed. The following derivations apply only to ideal solid surfaces; they are only valid for the state in which the interfaces are not moving and the phase boundary line exists in equilibrium. Ideal solid surfaces: Minimization of energy, three phases Figure 3 shows the line of contact where three phases meet. In equilibrium, the net force per unit length acting along the boundary line between the three phases must be zero. The components of the net force in the direction along each of the interfaces are given by:

$$\gamma_{\alpha\theta} + \gamma_{\theta\beta}\cos\theta + \gamma_{\alpha\beta}\cos\alpha = 0$$
$$\gamma_{\theta\beta} + \gamma_{\alpha\theta}\cos\theta + \gamma_{\alpha\beta}\cos\beta = 0$$
$$\gamma_{\alpha\beta} + \gamma_{\alpha\theta}\cos\alpha + \gamma_{\theta\beta}\cos\beta = 0$$

where α, β, and θ are the angles shown and γij is the surface energy between the two indicated phases. These relations can also be expressed by an analog to a triangle known as Neumann's triangle, shown in Figure 4. Neumann's triangle is consistent with the geometrical restriction that α+β+θ=2π, and applying the law of sines and law of cosines to it produces relations that describe how the interfacial angles depend on the ratios of surface energies. Because these three surface energies form the sides of a triangle, they are constrained by the triangle inequalities, γij < γjk + γik, meaning that no one of the surface tensions can exceed the sum of the other two; a numerical sketch of this angle computation follows. 
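As a small worked example of the Neumann construction, the sketch below computes the three phase angles from three given interfacial tensions using the law of cosines, taking each phase angle to be π minus the corresponding interior angle of the tension triangle. This is an illustrative reconstruction under those assumptions, not code from the source:

```python
import math

def neumann_angles(g_at: float, g_tb: float, g_ab: float) -> tuple:
    """Given the three interfacial tensions (alpha-theta, theta-beta,
    alpha-beta), return the phase angles (alpha, beta, theta) in degrees.

    The tensions must satisfy the triangle inequalities; each phase angle
    is pi minus the interior angle of the Neumann triangle between the
    two interfaces bounding that phase.
    """
    for a, b, c in [(g_at, g_tb, g_ab), (g_tb, g_ab, g_at), (g_ab, g_at, g_tb)]:
        if a >= b + c:
            raise ValueError("tensions violate the triangle inequality")
    # Interior angle between triangle sides x and y (opposite side z).
    interior = lambda x, y, z: math.acos((x*x + y*y - z*z) / (2*x*y))
    alpha = math.pi - interior(g_at, g_ab, g_tb)  # angle of phase alpha
    beta  = math.pi - interior(g_tb, g_ab, g_at)  # angle of phase beta
    theta = math.pi - interior(g_at, g_tb, g_ab)  # angle of phase theta
    return tuple(math.degrees(v) for v in (alpha, beta, theta))

# Equal tensions give three 120-degree angles, which sum to 360.
print(neumann_angles(1.0, 1.0, 1.0))  # ~ (120.0, 120.0, 120.0)
```

Substituting a flat rigid solid for one of the phases reduces this balance to Young's relation, derived next.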
If three fluids with surface energies that do not follow these inequalities are brought into contact, no equilibrium configuration consistent with Figure 3 will exist. Ideal solid surfaces: Simplification to planar geometry, Young's relation If the β phase is replaced by a flat rigid surface, as shown in Figure 5, then β = π, and the second net force equation simplifies to the Young equation,

$$\gamma_{SG} = \gamma_{SL} + \gamma_{LG}\cos\theta$$

which relates the surface tensions between the three phases: solid, liquid and gas. Subsequently, this predicts the contact angle of a liquid droplet on a solid surface from knowledge of the three surface energies involved. This equation also applies if the "gas" phase is another liquid, immiscible with the droplet of the first "liquid" phase. Ideal solid surfaces: Simplification to planar geometry, Young's relation derived from variational computation Consider the interface as a curve y(x) for x∈I=[0,L], where L is a free parameter. The free energy to be minimized is

$$F[y,L] = \int_0^L \left( \gamma_{LG}\sqrt{1+y'^2} + (\gamma_{SL}-\gamma_{SG}) \right) dx$$

with the constraints y(0)=y(L)=0, which we can write as $\int_I y'\,dx = 0$, and fixed volume $\int_I y\,dx = A$. The modified Lagrangian, taking into account the constraints, is therefore

$$\mathcal{L} = \gamma_{LG}\sqrt{1+y'^2} + (\gamma_{SL}-\gamma_{SG}) - \lambda_1 y' - \lambda_2 y$$

where λi are Lagrange multipliers. By definition, the momentum is $p = \partial\mathcal{L}/\partial y'$ and the Hamiltonian is $H = p y' - \mathcal{L}$, which is computed to be

$$H = -\frac{\gamma_{LG}}{\sqrt{1+y'^2}} - (\gamma_{SL}-\gamma_{SG}) + \lambda_2 y$$

Now, we recall that the boundary is free in the x direction and L is a free parameter. Therefore, we must have $\partial F/\partial L = -H = 0$. At the boundary, y(L) = 0 and $1/\sqrt{1+y'^2} = \cos\theta$; therefore we recover the Young equation. Ideal solid surfaces: Non-ideal smooth surfaces and the Young contact angle The Young equation assumes a perfectly flat and rigid surface, often referred to as an ideal surface. In many cases, surfaces are far from this ideal situation, and two such cases are considered here: rough surfaces and smooth surfaces that are still real (finitely rigid). Even on a perfectly smooth surface, a drop will assume a wide spectrum of contact angles, ranging from the so-called advancing contact angle, θA, to the so-called receding contact angle, θR. The equilibrium contact angle (θc) can be calculated from θA and θR, as was shown by Tadmor, as

$$\theta_c = \arccos\left(\frac{r_A\cos\theta_A + r_R\cos\theta_R}{r_A + r_R}\right)$$

where

$$r_A = \left(\frac{\sin^3\theta_A}{2 - 3\cos\theta_A + \cos^3\theta_A}\right)^{1/3}, \qquad r_R = \left(\frac{\sin^3\theta_R}{2 - 3\cos\theta_R + \cos^3\theta_R}\right)^{1/3}$$

The Young–Dupré equation and spreading coefficient The Young–Dupré equation (Thomas Young 1805; Anthanase Dupré and Paul Dupré 1869) dictates that neither γSG nor γSL can be larger than the sum of the other two surface energies. The consequence of this restriction is the prediction of complete wetting when γSG > γSL + γLG and zero wetting when γSL > γSG + γLG. The lack of a solution to the Young–Dupré equation is an indicator that there is no equilibrium configuration with a contact angle between 0 and 180° for those situations. A useful parameter for gauging wetting is the spreading parameter S,

$$S = \gamma_{SG} - (\gamma_{SL} + \gamma_{LG})$$

When S > 0, the liquid wets the surface completely (complete wetting). When S < 0, partial wetting occurs. Ideal solid surfaces: Combining the spreading parameter definition with the Young relation yields the Young–Dupré equation,

$$S = \gamma_{LG}(\cos\theta - 1)$$

which only has physical solutions for θ when S < 0. Ideal solid surfaces: A generalized model for the contact angle of droplets on flat and curved surfaces With improvements in measuring techniques such as AFM, confocal microscopy, and SEM, researchers have been able to produce and image droplets at ever smaller scales. With the reduction in droplet size came new experimental observations of wetting. 
These observations confirm that the modified Young's equation does not hold at the micro–nano scales. In addition, the sign of the line tension is not maintained through the modified Young's equation. For a sessile droplet, the free energy of the three-phase system can be expressed as:

$$\delta w = \gamma_{LV}\,dA_{LV} + \gamma_{SL}\,dA_{SL} + \gamma_{SV}\,dA_{SV} - \kappa\,dL - P\,dV - V\,dP$$

At constant volume in thermodynamic equilibrium, this reduces to:

$$0 = \frac{dA_{LG}}{dA_{SL}} + \frac{\gamma_{SL} - \gamma_{SG}}{\gamma_{LG}} - \frac{\kappa}{\gamma_{LG}}\frac{dL}{dA_{SL}} - \frac{V}{\gamma_{LG}}\frac{dP}{dA_{SL}}$$

Usually, the VdP term has been neglected for large droplets; however, VdP work becomes significant at small scales. The variation in pressure at constant volume at the free liquid–vapor boundary is due to the Laplace pressure, which is proportional to the mean curvature of the droplet and is nonzero. Solving the above equation for both convex and concave surfaces yields a closed-form relation between the contact angle θ and the constant parameters A, B, and C, where $A = \frac{\gamma_{SG} - \gamma_{SL}}{\gamma_{LG}}$, $B = \frac{\kappa}{\gamma_{LG}}$, and C is a third combination of the surface energies. This relation ties the contact angle θ, a geometric property of a sessile droplet, to the bulk thermodynamics, the energy at the three-phase contact boundary, and the curvature of the surface α. For the special case of a sessile droplet on a flat surface (α = 0), the relation reduces to three terms: the first two terms are the modified Young's equation, while the third term is due to the Laplace pressure. This nonlinear equation correctly predicts the sign and magnitude of κ, the flattening of the contact angle at very small scales, and contact angle hysteresis. Ideal solid surfaces: Computational prediction of wetting For many surface/adsorbate configurations, surface energy data and experimental observations are unavailable. As wetting interactions are of great importance in various applications, it is often desired to predict and compare the wetting behavior of various material surfaces with particular crystallographic orientations, in relation to water or other adsorbates. This can be done from an atomistic perspective with tools including molecular dynamics and density functional theory. In the theoretical prediction of wetting by ab initio approaches such as DFT, ice is commonly substituted for water. This is because DFT calculations are generally conducted assuming conditions of zero thermal movement of atoms, essentially meaning the simulation is conducted at absolute zero. This simplification nevertheless yields results that are relevant for the adsorption of water under realistic conditions, and the use of ice for the theoretical simulation of wetting is commonplace. Non-ideal rough solid surfaces: Unlike ideal surfaces, real surfaces do not have perfect smoothness, rigidity, or chemical homogeneity. Such deviations from ideality result in a phenomenon called contact angle hysteresis, defined as the difference between the advancing (θa) and receding (θr) contact angles:

$$H = \theta_a - \theta_r$$

When the contact angle is between the advancing and receding cases, the contact line is considered to be pinned and hysteretic behaviour can be observed, namely contact angle hysteresis. When these values are exceeded, the displacement of the contact line, such as the one in Figure 3, will take place by either expansion or retraction of the droplet. Figure 6 depicts the advancing and receding contact angles. The advancing contact angle is the maximum stable angle, whereas the receding contact angle is the minimum stable angle. Contact angle hysteresis occurs because many different thermodynamically stable contact angles are found on a nonideal solid. 
These varying thermodynamically stable contact angles are known as metastable states. Such motion of a phase boundary, involving advancing and receding contact angles, is known as dynamic wetting. The difference between dynamic and static wetting angles is proportional to the capillary number, Ca. When a contact line advances, covering more of the surface with liquid, the contact angle increases and is generally related to the velocity of the contact line. If the velocity of a contact line is increased without bound, the contact angle increases, and as it approaches 180° the gas phase becomes entrained in a thin layer between the liquid and solid. This is a kinetic nonequilibrium effect which results from the contact line moving at such a high speed that complete wetting cannot occur. Non-ideal rough solid surfaces: A well-known departure from ideal conditions is when the surface of interest has a rough texture. The rough texture of a surface can fall into one of two categories: homogeneous or heterogeneous. A homogeneous wetting regime is where the liquid fills in the grooves of a rough surface. A heterogeneous wetting regime, though, is where the surface is a composite of two types of patches. An important example of such a composite surface is one composed of patches of both air and solid. Such surfaces have varied effects on the contact angles of wetting liquids. Cassie–Baxter and Wenzel are the two main models that attempt to describe the wetting of textured surfaces. However, these equations only apply when the drop size is sufficiently large compared with the surface roughness scale. When the droplet size is comparable to that of the underlying pillars, the effect of line tension should be considered. Non-ideal rough solid surfaces: Wenzel's model The Wenzel model (Robert N. Wenzel, 1936) describes the homogeneous wetting regime, as seen in Figure 7, and is defined by the following equation for the contact angle on a rough surface:

$$\cos\theta^* = r\cos\theta$$

where θ∗ is the apparent contact angle which corresponds to the stable equilibrium state (i.e. the minimum free energy state for the system). The roughness ratio, r, is a measure of how surface roughness affects a homogeneous surface; it is defined as the ratio of the true area of the solid surface to the apparent area. Non-ideal rough solid surfaces: θ is the Young contact angle as defined for an ideal surface. Although Wenzel's equation demonstrates that the contact angle of a rough surface differs from the intrinsic contact angle, it does not describe contact angle hysteresis. Non-ideal rough solid surfaces: Cassie–Baxter model When dealing with a heterogeneous surface, the Wenzel model is not sufficient. A more complex model is needed to measure how the apparent contact angle changes when various materials are involved. This heterogeneous surface, like that seen in Figure 8, is explained using the Cassie–Baxter equation (Cassie's law):

$$\cos\theta^* = r_f f\cos\theta_Y + f - 1$$

Here rf is the roughness ratio of the wet surface area and f is the fraction of the solid surface area wet by the liquid. It is important to realize that when f = 1 and rf = r, the Cassie–Baxter equation becomes the Wenzel equation. On the other hand, when there are many different fractions of surface roughness, each fraction of the total surface area is denoted by fi, and the sum of all fi equals 1 (the total surface). 
Cassie–Baxter can also be recast in the following form:

$$\gamma\cos\theta^* = \sum_i f_i\left(\gamma_{i,sv} - \gamma_{i,sl}\right)$$

Here γ is the Cassie–Baxter surface tension between liquid and vapor, γi,sv is the solid–vapor surface tension of each component, and γi,sl is the solid–liquid surface tension of each component. A case worth mentioning is when the liquid drop is placed on the substrate and creates small air pockets underneath it. This case for a two-component system is denoted by:

$$\gamma\cos\theta^* = f_1\left(\gamma_{1,sv} - \gamma_{1,sl}\right) - (1 - f_1)\gamma$$

Here the key difference to notice is that there is no surface tension between the solid and the vapor for the second surface tension component. This is because of the assumption that the exposed surface of air is under the droplet and is the only other substrate in the system. Subsequently, the equation is then expressed as (1 − f). Therefore, the Cassie equation can be easily derived from the Cassie–Baxter equation. Experimental results regarding the surface properties of Wenzel versus Cassie–Baxter systems showed the effect of pinning for a Young angle of 180 to 90°, a region classified under the Cassie–Baxter model. This liquid/air composite system is largely hydrophobic. After that point, a sharp transition to the Wenzel regime was found where the drop wets the surface, but no further than the edges of the drop. In fact, the Young, Wenzel and Cassie–Baxter equations represent the transversality conditions of the variational problem of wetting; a numerical sketch of the Wenzel and Cassie–Baxter relations follows. 
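The sketch below evaluates the Wenzel and Cassie–Baxter relations given above for an assumed Young angle and texture parameters; the numerical values are illustrative, not measurements from the source:

```python
import math

def wenzel(theta_young_deg: float, r: float) -> float:
    """Apparent contact angle (degrees) from the Wenzel relation
    cos(theta*) = r * cos(theta_Y), valid for homogeneous wetting."""
    c = r * math.cos(math.radians(theta_young_deg))
    if not -1.0 <= c <= 1.0:
        raise ValueError("no Wenzel solution: r*cos(theta) outside [-1, 1]")
    return math.degrees(math.acos(c))

def cassie_baxter(theta_young_deg: float, r_f: float, f: float) -> float:
    """Apparent contact angle (degrees) from the Cassie-Baxter relation
    cos(theta*) = r_f * f * cos(theta_Y) + f - 1, where f is the wetted
    solid fraction; the rest of the footprint sits on trapped air."""
    c = r_f * f * math.cos(math.radians(theta_young_deg)) + f - 1.0
    if not -1.0 <= c <= 1.0:
        raise ValueError("no Cassie-Baxter solution for these parameters")
    return math.degrees(math.acos(c))

theta_y = 110.0  # assumed intrinsic Young angle of the smooth material
print(wenzel(theta_y, r=1.5))                  # roughness amplifies hydrophobicity
print(cassie_baxter(theta_y, r_f=1.5, f=0.2))  # trapped air pushes it higher
# With f = 1 and r_f = r, Cassie-Baxter reduces to Wenzel:
print(math.isclose(cassie_baxter(theta_y, 1.5, 1.0), wenzel(theta_y, 1.5)))
```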
The water drops maintain their spherical shape due to the superhydrophobicity of the petal (contact angle of about 152.4°), but do not roll off because the petal surface has a high adhesive force with water. When comparing the "petal effect" to the "lotus effect", it is important to note some striking differences. The surface structure of the lotus leaf and the rose petal, as seen in Figure 9, can be used to explain the two different effects. Non-ideal rough solid surfaces: The lotus leaf has a randomly rough surface and low contact angle hysteresis, which means the water droplet is not able to wet the microstructure spaces between the spikes. This allows air to remain inside the texture, causing a heterogeneous surface composed of both air and solid. As a result, the adhesive force between the water and the solid surface is extremely low, allowing the water to roll off easily (i.e. the "self-cleaning" phenomenon). Non-ideal rough solid surfaces: The rose petal's micro- and nanostructures are larger in scale than those of the lotus leaf, which allows the liquid film to impregnate the texture. However, as seen in Figure 9, the liquid can enter the larger-scale grooves, but it cannot enter the smaller grooves. This is known as the Cassie impregnating wetting regime. Since the liquid can wet the larger-scale grooves, the adhesive force between the water and solid is very high. This explains why the water droplet will not fall off even if the petal is tilted at an angle or turned upside down. This effect will fail if the droplet has a volume larger than 10 µl, because the balance between weight and surface tension is surpassed. Non-ideal rough solid surfaces: Cassie–Baxter to Wenzel transition In the Cassie–Baxter model, the drop sits on top of the textured surface with trapped air underneath. During the wetting transition from the Cassie state to the Wenzel state, the air pockets are no longer thermodynamically stable and liquid begins to nucleate from the middle of the drop, creating a "mushroom state" as seen in Figure 10. The penetration condition is given by:

$$\cos\theta_C = \frac{\phi - 1}{r - \phi}$$

where θC is the critical contact angle, Φ is the fraction of the solid/liquid interface where the drop is in contact with the surface, and r is the solid roughness (for a flat surface, r = 1). The penetration front propagates to minimize the surface energy until it reaches the edges of the drop, thus arriving at the Wenzel state. Since the solid can be considered an absorptive material due to its surface roughness, this phenomenon of spreading and imbibition is called hemiwicking. The contact angles at which spreading/imbibition occurs are between 0 and π/2. The Wenzel model is valid between θC and π/2. If the contact angle is less than θC, the penetration front spreads beyond the drop and a liquid film forms over the surface. Figure 11 depicts the transition from the Wenzel state to the surface film state. The film smooths the surface roughness and the Wenzel model no longer applies. In this state, the equilibrium condition and Young's relation yield:

$$\cos\theta^* = \phi\cos\theta_C + (1 - \phi)$$

By fine-tuning the surface roughness, it is possible to achieve a transition between both superhydrophobic and superhydrophilic regions. Generally, the rougher the surface, the more hydrophobic it is. Spreading dynamics: If a drop is placed on a smooth, horizontal surface, it is generally not in the equilibrium state. Hence, it spreads until an equilibrium contact radius is reached (partial wetting). 
While taking into account capillary, gravitational, and viscous contributions, the drop radius as a function of time can be expressed in closed form, relaxing toward the equilibrium radius re with a characteristic one-sixth power; for the complete wetting situation, a corresponding closed-form expression gives the drop radius at any time during the spreading process. These expressions involve: γLG, the surface tension of the fluid; V, the drop volume; η, the viscosity of the fluid; ρ, the density of the fluid; g, the gravitational acceleration; λ, a shape factor of 37.1 m−1; t0, an experimental delay time; and re, the drop radius at equilibrium. Modifying wetting properties: Surfactants Many technological processes require control of liquid spreading over solid surfaces. When a drop is placed on a surface, it can completely wet, partially wet, or not wet the surface. By reducing the surface tension with surfactants, a nonwetting material can be made to become partially or completely wetting. The excess free energy (σ) of a drop on a solid surface is:

$$\sigma = \gamma S + \pi R^2\left(\gamma_{SL} - \gamma_{SV}\right) - PV$$

where γ is the liquid–vapor interfacial tension, γSL is the solid–liquid interfacial tension, γSV is the solid–vapor interfacial tension, S is the area of the liquid–vapor interface, P is the excess pressure inside the liquid, and R is the radius of the droplet base. Based on this equation, the excess free energy is minimized when γ decreases, γSL decreases, or γSV increases. Surfactants are adsorbed onto the liquid–vapor, solid–liquid, and solid–vapor interfaces, which modifies the wetting behavior of hydrophobic materials so as to reduce the free energy. When surfactants are adsorbed onto a hydrophobic surface, the polar head groups face into the solution with the tails pointing outward. On more hydrophobic surfaces, surfactants may form a bilayer on the solid, causing it to become more hydrophilic. The dynamic drop radius can be characterized as the drop begins to spread, and the contact angle changes according to:

$$\cos\theta(t) = \cos\theta_0 + \left(\cos\theta_\infty - \cos\theta_0\right)\left(1 - e^{-t/\tau}\right)$$

where θ0 is the initial contact angle, θ∞ is the final contact angle, and τ is the surfactant transfer time scale. As the surfactants are adsorbed, the solid–vapor surface tension increases and the edges of the drop become hydrophilic. As a result, the drop spreads. Modifying wetting properties: Surface changes Ferrocene is a redox-active organometallic compound which can be incorporated into various monomers and used to make polymers which can be tethered onto a surface. Vinylferrocene (ferrocenylethene) can be prepared by a Wittig reaction and then polymerized to form polyvinylferrocene (PVFc), an analog of polystyrene. Another polymer which can be formed is poly(2-(methacryloyloxy)ethyl ferrocenecarboxylate), PFcMA. Both PVFc and PFcMA have been tethered onto silica wafers and the wettability measured when the polymer chains are uncharged and when the ferrocene moieties are oxidised to produce positively charged groups, as illustrated at right. The contact angle with water on the PFcMA-coated wafers was 70° smaller following oxidation, while in the case of PVFc the decrease was 30°, and the switching of wettability has been shown to be reversible. In the PFcMA case, the effect of longer chains with more ferrocene groups (and also greater molar mass) has been investigated, and it was found that longer chains produce significantly larger contact angle reductions. Modifying wetting properties: Oxygen vacancies Rare earth oxides exhibit intrinsic hydrophobicity, and hence can be used in thermally stable heat exchangers and other applications involving high-temperature hydrophobicity. 
The presence of oxygen vacancies at surfaces of ceria or other rare earth oxides is instrumental in governing surface wettability. Adsorption of water at oxide surfaces can occur as molecular adsorption, in which H2O molecules remain intact at the terminated surface, or as dissociative adsorption, in which OH and H are adsorbed separately at solid surfaces. The presence of oxygen vacancies is generally found to enhance hydrophobicity while promoting dissociative adsorption.
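Returning to the surfactant-driven contact-angle relaxation described earlier, the sketch below evaluates that exponential relaxation law (as reconstructed above) for assumed, illustrative parameter values; the numbers are placeholders, not data from the source:

```python
import math

def contact_angle_deg(t: float, theta0_deg: float,
                      theta_inf_deg: float, tau: float) -> float:
    """Contact angle at time t for surfactant-mediated spreading, using
    cos(theta(t)) = cos(theta0) + (cos(theta_inf) - cos(theta0))
                    * (1 - exp(-t / tau)).
    Angles are in degrees; t and tau share the same time unit."""
    c0 = math.cos(math.radians(theta0_deg))
    ci = math.cos(math.radians(theta_inf_deg))
    c = c0 + (ci - c0) * (1.0 - math.exp(-t / tau))
    return math.degrees(math.acos(c))

# Illustrative values (assumptions): a drop starting at 105 degrees
# relaxing toward 40 degrees with a transfer time scale of 2 seconds.
for t in (0.0, 1.0, 2.0, 5.0, 20.0):
    theta = contact_angle_deg(t, 105.0, 40.0, 2.0)
    print(f"t = {t:5.1f} s  ->  theta = {theta:6.2f} deg")
```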
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pink (ship)** Pink (ship): A pink (French: pinque) is a sailing ship with a very narrow stern. The term was applied to two different types of ship. The first was a small, flat-bottomed ship with a narrow stern; the name derived from the Italian word pinco. It was used primarily in the Mediterranean Sea as a cargo ship. Pink (ship): In the Atlantic Ocean the word pink was used to describe any small ship with a narrow stern, having derived from the Dutch word pincke, meaning pinched. They had a large cargo capacity, and were generally square rigged. Their flat bottoms (and resulting shallow draught) made them more useful in shallow waters than some similar classes of ship. They were most often used for short-range missions in protected channels, as both merchantmen and warships. A number saw service in the English Navy during the second half of the 17th century. In the 1730s pinks were used in cross-Atlantic voyages to bring Palatinate immigrants to America. This model of ship was often used in the Mediterranean because it could be sailed in shallow waters and through coral reefs. It could also be maneuvered up rivers and streams. Pinks were quite fast and flexible. Pink (ship): There is a reference to "pink" in its maritime sense in the State Papers of Charles II under 1 February 1672, with diarist Samuel Pepys notified about one offered for sale: "Col. Bullen Reymes to Samuel Pepes (Pepys). Offering to sell a pink now at Weymouth which can be brought round to Portsmouth and examined by Commissioner Tippetts, or by whom else they please, or to let her by the month, if they will not buy her." [S.P. Dom., Car. II. 322, No. 88.]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Keyline** Keyline: A keyline, in graphic design, is a boundary line that separates color and monochromatic areas or differently colored areas of printing on a given page or other printed piece. The line itself, usually consisting of a black (or other dark colored) border, provides an area in which lighter colors can be printed with slight variation in registration. In traditional paste-up graphics workflows, keylines for cropping were often merely indicated on original artwork, and then images were stripped into the area manually, with the keylines themselves being added as part of the process. Keylines are often included when printing something that will be cut out using a die form, requires folding, or uses perforation lines. Per International Paper's Pocket Pal (18th ed., printed in 2000), a keyline is defined as, "in artwork, an outline drawing of finished art to indicate the exact shape, position, and size for elements such as halftones, line sketches, etc."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Xkill** Xkill: Xkill is a utility program distributed with the X Window System that instructs the X server to forcefully terminate its connection to a client, thus "killing" the client. When run with no command line arguments, the program displays a special cursor (usually a crosshair or a skull and crossbones) and prints a message such as "Select the window whose client you wish to kill with button 1 ...". Xkill: If a non-root window is then selected, the server will close its connection to the client that created that window, and the window will be destroyed. Xkill is not intended to be used as a routine way to terminate X client programs, but only as a last resort to abort malfunctioning or malicious clients. Unlike kill, xkill does not request that the client process, which may be running on a different machine, be terminated. In fact, the process can continue running without an X connection. Most clients, however, do abort when their X connections are unexpectedly closed. Xkill has been cited as an example of a program with a simple and appealing user interface. Its mode of operation has been summed up as "Just click the bad thing with the skull and it dies."
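Besides the interactive cursor mode, xkill can be driven non-interactively through its documented -id option, which takes the X resource identifier of a window. The sketch below shells out to xkill from Python; the window id used is a hypothetical placeholder, e.g. one you might obtain from a tool such as xwininfo:

```python
import subprocess

def kill_x_client(window_id: int) -> None:
    """Ask the X server to close the connection of the client that
    created the given window, by invoking xkill's -id option."""
    subprocess.run(["xkill", "-id", str(window_id)], check=True)

# Hypothetical window id (placeholder value for illustration only).
kill_x_client(0x3200007)
```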
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GTPBP4** GTPBP4: Nucleolar GTP-binding protein 1 is a protein that in humans is encoded by the GTPBP4 gene. GTPases function as molecular switches that can flip between two states: active, when GTP is bound, and inactive, when GDP is bound. 'Active' usually means that the molecule acts as a signal to trigger other events in the cell. When an extracellular ligand binds to a G protein-coupled receptor, the receptor changes its conformation and switches on the trimeric G proteins that associate with it by causing them to eject their GDP and replace it with GTP. The switch is turned off when the G protein hydrolyzes its own bound GTP, converting it back to GDP. But before that occurs, the active protein has an opportunity to diffuse away from the receptor and deliver its message for a prolonged period to its downstream target.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Minkowski functional** Minkowski functional: In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. Minkowski functional: If K is a subset of a real or complex vector space X, then the Minkowski functional or gauge of K is defined to be the function pK:X→[0,∞], valued in the extended real numbers, defined by

$$p_K(x) := \inf\{r > 0 : x \in rK\}$$

where the infimum of the empty set is defined to be positive infinity ∞ (which is not a real number, so that pK(x) would then not be real-valued). The set K is often assumed/picked to have properties, such as being an absorbing disk in X, that guarantee that pK will be a real-valued seminorm on X. Minkowski functional: In fact, every seminorm p on X is equal to the Minkowski functional (that is, p=pK) of any subset K of X satisfying {x∈X:p(x)<1}⊆K⊆{x∈X:p(x)≤1} (where all three of these sets are necessarily absorbing in X and the first and last are also disks). Thus every seminorm (which is a function defined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is a set with certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm). These relationships between seminorms, Minkowski functionals, and absorbing disks are a major reason why Minkowski functionals are studied and used in functional analysis. In particular, through these relationships, Minkowski functionals allow one to "translate" certain geometric properties of a subset of X into certain algebraic properties of a function on X. Minkowski functional: The Minkowski functional is always non-negative (meaning pK≥0). This property of being nonnegative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values. However, pK might not be real-valued, since for any given x∈X, the value pK(x) is a real number if and only if {r>0:x∈rK} is not empty. Minkowski functional: Consequently, K is usually assumed to have properties (such as being absorbing in X, for instance) that will guarantee that pK is real-valued. Definition: Let K be a subset of a real or complex vector space X. Define the gauge of K or the Minkowski functional associated with or induced by K as being the function pK:X→[0,∞], valued in the extended real numbers, defined by

$$p_K(x) := \inf\{r > 0 : x \in rK\}$$

where recall that the infimum of the empty set is ∞ (that is, inf ∅=∞). Here, {r>0:x∈rK} is shorthand for {r∈R : r>0 and x∈rK}. For any x∈X, pK(x)≠∞ if and only if {r>0:x∈rK} is not empty. The arithmetic operations on R can be extended to operate on ±∞, where $\frac{r}{\pm\infty} := 0$ for all non-zero real −∞<r<∞. The products 0⋅∞ and 0⋅−∞ remain undefined. Definition: Some conditions making a gauge real-valued In the field of convex analysis, the map pK taking on the value of ∞ is not necessarily an issue. However, in functional analysis pK is almost always real-valued (that is, never takes on the value of ∞), which happens if and only if the set {r>0:x∈rK} is non-empty for every x∈X. Definition: In order for pK to be real-valued, it suffices for the origin of X to belong to the algebraic interior or core of K in X. If K is absorbing in X, where recall that this implies that 0∈K, then the origin belongs to the algebraic interior of K in X and thus pK is real-valued. Characterizations of when pK is real-valued are given below. 
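Before the motivating examples, here is a concrete numerical illustration of the definition for two simple sets in the plane: the Euclidean unit ball (where the gauge recovers the norm) and a polytope {x : Ax ≤ b} with b > 0 (where the gauge has the closed form max over i of (aᵢ·x)/bᵢ). The sets and numbers are illustrative assumptions, not taken from the source:

```python
import math

def gauge_unit_ball(x: tuple) -> float:
    """Gauge of the Euclidean unit ball: p_K(x) = inf{r > 0 : x in rK}
    equals the Euclidean norm of x."""
    return math.hypot(*x)

def gauge_polytope(A: list, b: list, x: tuple) -> float:
    """Gauge of K = {x : A x <= b} with every b_i > 0 (so 0 is interior).
    Since x in rK iff a_i . x <= r b_i for all i, the gauge is
    max_i (a_i . x) / b_i, clipped below at 0."""
    ratios = [sum(ai * xi for ai, xi in zip(row, x)) / bi
              for row, bi in zip(A, b)]
    return max(0.0, max(ratios))

# Unit ball: the gauge is just the norm.
print(gauge_unit_ball((3.0, 4.0)))  # 5.0

# The square [-1, 1]^2 written as {x : Ax <= b}: gauge is the max-norm.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 1, 1, 1]
print(gauge_polytope(A, b, (0.5, -2.0)))  # 2.0 (= max(|x1|, |x2|))
```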
Motivating examples: Example 1 Consider a normed vector space (X,‖⋅‖) with the norm ‖⋅‖, and let U := {x∈X : ‖x‖≤1} be the unit ball in X. Then for every x∈X, ‖x‖=pU(x). Thus the Minkowski functional pU is just the norm on X. Example 2 Let X be a vector space without topology with underlying scalar field K. Let f:X→K be any linear functional on X (not necessarily continuous). Fix a>0, let K be the set K := {x∈X : |f(x)|≤a}, and let pK be the Minkowski functional of K. Then

$$p_K(x) = \frac{1}{a}|f(x)| \quad \text{for all } x \in X.$$

The function pK has the following properties: It is subadditive: pK(x+y)≤pK(x)+pK(y). It is absolutely homogeneous: pK(sx)=|s|pK(x) for all scalars s. It is nonnegative: pK≥0. Therefore, pK is a seminorm on X, with an induced topology. This is characteristic of Minkowski functionals defined via "nice" sets. There is a one-to-one correspondence between seminorms and the Minkowski functionals given by such sets. What is meant precisely by "nice" is discussed in the section below. Notice that, in contrast to a stronger requirement for a norm, pK(x)=0 need not imply x=0. In the above example, one can take a nonzero x from the kernel of f. Consequently, the resulting topology need not be Hausdorff. Common conditions guaranteeing gauges are seminorms: To guarantee that pK(0)=0, it will henceforth be assumed that 0∈K. In order for pK to be a seminorm, it suffices for K to be a disk (that is, convex and balanced) and absorbing in X, which are the most common assumptions placed on K. More generally, if K is convex and the origin belongs to the algebraic interior of K, then pK is a nonnegative sublinear functional on X, which implies in particular that it is subadditive and positive homogeneous. If K is absorbing in X then p[0,1]K is positive homogeneous, meaning that p[0,1]K(sx)=sp[0,1]K(x) for all real s≥0, where [0,1]K={tk:t∈[0,1],k∈K}. If q is a nonnegative real-valued function on X that is positive homogeneous, then the sets U := {x∈X:q(x)<1} and D := {x∈X:q(x)≤1} satisfy [0,1]U=U and [0,1]D=D; if in addition q is absolutely homogeneous then both U and D are balanced. Gauges of absorbing disks Arguably the most common requirements placed on a set K to guarantee that pK is a seminorm are that K be an absorbing disk in X. Due to how common these assumptions are, the properties of a Minkowski functional pK when K is an absorbing disk will now be investigated. Since all of the results mentioned above made few (if any) assumptions on K, they can be applied in this special case. Algebraic properties Let X be a real or complex vector space and let K be an absorbing disk in X. pK is a seminorm on X. pK is a norm on X if and only if K does not contain a non-trivial vector subspace. psK = (1/|s|)pK for any scalar s≠0. If J is an absorbing disk in X and J⊆K then pK≤pJ. If K is a set satisfying {x∈X:p(x)<1}⊆K⊆{x∈X:p(x)≤1} then K is absorbing in X and p=pK, where pK is the Minkowski functional associated with K; that is, it is the gauge of K. In particular, if K is as above and q is any seminorm on X, then q=p if and only if {x∈X:q(x)<1}⊆K⊆{x∈X:q(x)≤1}. If x∈X satisfies pK(x)<1 then x∈K. Topological properties Assume that X is a (real or complex) topological vector space (TVS) (not necessarily Hausdorff or locally convex) and let K be an absorbing disk in X. Then

$$\operatorname{Int}_X K \;\subseteq\; \{x \in X : p_K(x) < 1\} \;\subseteq\; K \;\subseteq\; \{x \in X : p_K(x) \leq 1\} \;\subseteq\; \operatorname{Cl}_X K$$

where Int X⁡K is the topological interior and Cl X⁡K is the topological closure of K in X. Importantly, it was not assumed that pK was continuous, nor was it assumed that K had any topological properties. Moreover, the Minkowski functional pK is continuous if and only if K is a neighborhood of the origin in X. 
If pK is continuous then

$$\operatorname{Int}_X K = \{x \in X : p_K(x) < 1\} \quad \text{and} \quad \operatorname{Cl}_X K = \{x \in X : p_K(x) \leq 1\}.$$

Minimal requirements on the set: This section will investigate the most general case of the gauge of any subset K of X. The more common special case where K is assumed to be an absorbing disk in X was discussed above. Properties All results in this section may be applied to the case where K is an absorbing disk. Throughout, K is any subset of X. Examples If L is a non-empty collection of subsets of X then $p_{\cup\mathcal{L}}(x) = \inf\{p_L(x) : L \in \mathcal{L}\}$ for all x∈X, where $\cup\mathcal{L} \,\stackrel{\text{def}}{=}\, \bigcup_{L\in\mathcal{L}} L$. Thus $p_{K\cup L}(x) = \min\{p_K(x), p_L(x)\}$ for all x∈X. If L is a non-empty collection of subsets of X and I⊆X satisfies

$$\bigcap_{L\in\mathcal{L}}\{x \in X : p_L(x) < 1\} \;\subseteq\; I \;\subseteq\; \bigcap_{L\in\mathcal{L}}\{x \in X : p_L(x) \leq 1\}$$

then $p_I(x) = \sup\{p_L(x) : L \in \mathcal{L}\}$ for all x∈X. The following examples show that the containment (0,R]K⊆⋂e>0(0,R+e)K can be proper. Example: If R=0 and K=X then (0,R]K=(0,0]X=∅, but ⋂e>0(0,e)K=⋂e>0X=X, which shows that it is possible for (0,R]K to be a proper subset of ⋂e>0(0,R+e)K when R=0. ◼ The next example shows that the containment can be proper when R=1; the example may be generalized to any real R>0. Assuming that [0,1]K⊆K, the following example is representative of how it happens that x∈X satisfies pK(x)=1 but x∉(0,1]K. Example: Let x∈X be non-zero and let K=[0,1)x, so that [0,1]K=K and x∉K. From x∉(0,1)K=K it follows that pK(x)≥1. That pK(x)≤1 follows from observing that for every e>0, (0,1+e)K=[0,1+e)([0,1)x)=[0,1+e)x, which contains x. Thus pK(x)=1 and x∈⋂e>0(0,1+e)K. Minimal requirements on the set: However, (0,1]K=(0,1]([0,1)x)=[0,1)x=K, so that x∉(0,1]K, as desired. ◼ Positive homogeneity characterizes Minkowski functionals The next theorem shows that Minkowski functionals are exactly those functions f:X→[0,∞] that have a certain purely algebraic property that is commonly encountered. This theorem can be extended to characterize certain classes of [−∞,∞]-valued maps (for example, real-valued sublinear functions) in terms of Minkowski functionals. For instance, it can be used to describe how every real homogeneous function f:X→R (such as linear functionals) can be written in terms of a unique Minkowski functional having a certain property. Minimal requirements on the set: Characterizing Minkowski functionals on star sets Characterizing Minkowski functionals that are seminorms In this next theorem, which follows immediately from the statements above, K is not assumed to be absorbing in X; instead, it is deduced that (0,1)K is absorbing when pK is a seminorm. It is also not assumed that K is balanced (which is a property that K is often required to have); in its place is the weaker condition that (0,1)sK⊆(0,1)K for all scalars s satisfying |s|=1. Minimal requirements on the set: The common requirement that K be convex is also weakened to only requiring that (0,1)K be convex. Positive sublinear functions and Minkowski functionals It may be shown that a real-valued subadditive function f:X→R on an arbitrary topological vector space X is continuous at the origin if and only if it is uniformly continuous; if in addition f is nonnegative, then f is continuous if and only if V := {x∈X:f(x)<1} is an open neighborhood of the origin in X. If f:X→R is subadditive and satisfies f(0)=0, then f is continuous if and only if its absolute value |f|:X→[0,∞) is continuous. A nonnegative sublinear function is a nonnegative homogeneous function f:X→[0,∞) that satisfies the triangle inequality. It follows immediately from the results below that for such a function f, if V := {x∈X:f(x)<1} then f=pV. 
Given K⊆X, the Minkowski functional pK is a sublinear function if and only if it is real-valued and subadditive, which happens if and only if (0,∞)K=X and (0,1)K is convex. Correspondence between open convex sets and positive continuous sublinear functions
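To illustrate Example 2 above, here is a small numerical check that the gauge of K = {x : |f(x)| ≤ a} agrees with |f(x)|/a, approximating the infimum in the definition by bisection; the functional and constants are illustrative assumptions:

```python
def gauge_by_bisection(member, x, hi=1e6, tol=1e-9):
    """Numerically approximate p_K(x) = inf{r > 0 : x in rK}, given a
    membership test for K. Uses the fact that x in rK iff x/r in K, and
    that for the convex sets considered here membership is monotone in r."""
    lo = 0.0
    if not member(tuple(xi / hi for xi in x)):
        raise ValueError("no r <= hi with x in rK")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid > 0 and member(tuple(xi / mid for xi in x)):
            hi = mid
        else:
            lo = mid
    return hi

# Example 2 with X = R^2, f(x) = 2*x1 - x2 (a linear functional), a = 3.
a = 3.0
f = lambda x: 2.0 * x[0] - x[1]
in_K = lambda x: abs(f(x)) <= a   # K = {x : |f(x)| <= a}

x = (4.0, -1.0)
print(gauge_by_bisection(in_K, x))  # approximately |f(x)|/a = 9/3 = 3
print(abs(f(x)) / a)                # exact value: 3.0
```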
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oxatriquinacene** Oxatriquinacene: Oxatriquinacene is an organic cation with the formula C9H9O+. It is an oxonium ion, with a tricoordinate oxygen atom bearing a +1 charge connected to carbons 1, 4, and 7 of a cyclononatriene ring, forming three fused pentagonal rings. The compound may possess weak tris-homoaromatic character. Oxatriquinacene: Oxatriquinacene has remarkable stability compared to other oxonium cations, although not as extreme as that of the similar oxatriquinane. It reacts with water, but can be dissolved in acetonitrile. It is of interest as a possible precursor to oxaacepentalene, a hypothetical neutral aromatic species. Oxatriquinacene was obtained in 2008 by Mascal and coworkers, through a variant of the synthesis that led them to oxatriquinane.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DSIF** DSIF: In gene expression, DSIF (DRB Sensitivity Inducing Factor) is a protein complex that can either negatively or positively affect transcription by RNA polymerase II (Pol II). In one case of negative regulation, it can interact with negative elongation factor (NELF) to promote the stalling of Pol II at some genes. This stalling is relieved by P-TEFb. In humans, DSIF is composed of hSPT4 and hSPT5 (SPT4 and SPT5 are their homologs in yeast). The complex locks the RNAP clamp into a closed state to prevent the elongation complex (EC) from dissociating. The Spt5 NGN domain helps anneal the two strands of DNA upstream. The single KOW domain in bacteria and archaea anchors a ribosome to the RNAP. In bacteria, the homologous complex contains only NusG, a Spt5 homolog; archaea have both proteins.
**Assortative mixing** Assortative mixing: In the study of complex networks, assortative mixing, or assortativity, is a bias in favor of connections between network nodes with similar characteristics. In the specific case of social networks, assortative mixing is also known as homophily. The rarer disassortative mixing is a bias in favor of connections between dissimilar nodes. In social networks, for example, individuals commonly choose to associate with others of similar age, nationality, location, race, income, educational level, religion, or language as themselves. In networks of sexual contact, the same biases are observed, but mixing is also disassortative by gender – most partnerships are between individuals of opposite sex. Assortative mixing: Assortative mixing can have effects, for example, on the spread of disease: if individuals have contact primarily with other members of the same population groups, then diseases will spread primarily within those groups. Many diseases are indeed known to have differing prevalence in different population groups, although other social and behavioral factors affect disease prevalence as well, including variations in quality of health care and differing social norms. Assortative mixing: Assortative mixing is also observed in other (non-social) types of networks, including biochemical networks in the cell, computer and information networks, and others. Assortative mixing: Of particular interest is the phenomenon of assortative mixing by degree, meaning the tendency of nodes with high degree to connect to others with high degree, and similarly for low degree. Because degree is itself a topological property of networks, this type of assortative mixing gives rise to more complex structural effects than other types. Empirically it has been observed that most social networks mix assortatively by degree, but most networks of other types mix disassortatively, although there are exceptions.
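Degree assortativity is commonly quantified as the Pearson correlation coefficient of the degrees at the two ends of each edge, with r > 0 indicating assortative and r < 0 disassortative mixing. The Python sketch below uses the networkx library's degree_assortativity_coefficient on a small hypothetical graph; the choice of graph is purely illustrative.

```python
import networkx as nx

# Hypothetical example: two 5-node cliques joined by a 2-node path.
# Most edges join nodes of similar (high) degree, while the bridge
# path contributes a few edges between dissimilar-degree nodes.
G = nx.barbell_graph(5, 2)

# Pearson correlation of the degrees at the two endpoints of each edge:
# r > 0 means assortative mixing by degree, r < 0 means disassortative.
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity coefficient r = {r:.3f}")
```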
**Samsung Miniket** Samsung Miniket: The Samsung Miniket was a line of multifunction devices sold from March 2005 through mid-2007 in Australia (and possibly other markets). It bundled together a video camera, an SVGA (Super Video Graphics Array) digital camera, an MP3 player, a voice recorder, a memory stick and a Web cam, but had neither cellular nor Wi-Fi connectivity, though it did provide USB access. It came in four models (VP-M110B, VP-M110S, VP-X110L, VP-M2100), with internal storage capacity of 1 GB. Models weighed as little as 147 grams.
**Latarjet procedure** Latarjet procedure: The Latarjet operation, also known as the Latarjet-Bristow procedure, is a surgical procedure used to treat recurrent shoulder dislocations, typically caused by bone loss or a fracture of the glenoid. The procedure was first described by the French surgeon Dr. Michel Latarjet in 1954. Mechanism: The mechanism of action has been described as a triple blocking effect: the conjoint tendon of the shoulder (i.e., the short head of the biceps and the coracobrachialis) acting as a sling on the subscapularis and capsule with the arm abducted and externally rotated; augmentation or restoration of the glenoid bone; and repair of the capsule to the stump of the coracoacromial ligament. Procedure: The Latarjet procedure involves the removal and transfer of a section of the coracoid process and its attached muscles to the front of the glenoid. This placement of the coracoid acts as a bone block which, combined with the transferred muscles acting as a strut, prevents further dislocation of the joint. In layman's terms, the procedure involves removing a piece of bone from another part of the shoulder and attaching it to the front of the shoulder socket. The bone then acts as a barrier that physically blocks the shoulder from slipping out of the socket, while the muscles transferred with the bone give additional stability to the joint. Effectiveness: While the Latarjet procedure can be used for the surgical treatment of most cases of shoulder dislocation or subluxation, it is particularly indicated in cases with bone defects. The failure rate following arthroscopic Bankart repair has been shown to increase dramatically, from 4% to 67%, in patients with significant bone loss. The same authors subsequently reported much improved results when the Latarjet operation was used in patients with bone loss. A number of technical variations have been proposed, including both open and arthroscopic approaches. Complication rates are between 15 and 30%, with long-term issues such as graft osteolysis continuing to be a concern with the procedure. With appropriate patient selection, the Latarjet procedure can be expected to prevent recurrent anterior instability in approximately 94–99% of cases. Full recovery can take 6 months, although the majority of activities can be resumed after 3. The main long-term side effect is reduced external rotation range in the shoulder. Effectiveness: The Latarjet operation has also been demonstrated to be successful in contact athletes and rugby players. In summary, the Latarjet operation may be ideally suited as the shoulder reconstruction procedure of choice for contact athletes, patients with increased shoulder laxity, patients with failed previous shoulder reconstructions, or cases with significant bone damage.
**Straight dough** Straight dough: Straight dough is a single-mix process of making bread. The dough is made from all fresh ingredients, and they are all placed together and combined in one kneading or mixing session. After mixing, a bulk fermentation rest of about 1 hour or longer occurs before division. It is also called the direct dough method. Formula: A straight dough formula is conventionally written in baker's percentages (see the illustrative sketch below). Process: In general, the process steps for making straight dough are as follows: Mise en place: The first step is to look at the formula ("recipe"), familiarize yourself with the ingredients and process, and get ready to perform the task at hand. Assess the availability of tools, consider the batch size and time schedule, and gather what is needed. Weigh ingredients: This is also called scaling. If more yeast is chosen for the initial mixing and it is viable, faster fermentation occurs; if too much yeast is used, the result is a noticeable yeast flavor. Process: Mixing: The ingredients are all placed in a mixing bowl at once and combined; a variation of this technique is to add ingredients sequentially. The mixing may be done by hand kneading or by machine. Once fermentation has commenced, it continues until the heat of the oven kills the yeast during baking. For fast fermentations, long, intense mixing techniques are recommended for dough development, whereas for long-fermented doughs, short mixing at slow speeds or hand kneading may be used, with sufficient folding later. Mixing adds heat to the dough, and more intense mixing adds heat more quickly. Doughs mixed at warmer temperatures of 79 °F (26 °C) are known to have more oxidation than doughs mixed at lower temperatures of 73 °F (23 °C); oxidation results in loss of color and flavor. Bakers sometimes substitute a weight of crushed ice for some of the dough's water to compensate for the expected temperature rise, while other bakers use water-jacketed or refrigerated mixer bowls to keep the dough cooler during mixing. Bulk fermentation: After mixing, the dough is allowed to rest in a bowl or container large enough to accommodate dough expansion, usually in a warm location of about 75–80 °F (24–27 °C). The container is often covered so the dough remains in a humid environment, ideally at 74–77% relative humidity; without some humidity, the dough surface tends to dry and develop a skin. As the dough rests, it expands in volume due to the carbon dioxide created as it ferments. The dough expands to a certain point, then volume growth stalls, and eventually the peak of the dough begins to fall; when it reaches this point, it is at about 66–70% of its total fermentation time. Stretch and folds, or degassing: When the dough reaches a specified size or scheduled time, it is removed from the bowl and stretched and folded on a flour-dusted surface, both to degas the bubbles that have formed and to stretch and align the gluten; it is then returned to the bowl for continued bulk fermentation. Prior to folding, the dough surfaces that are folded together should be brushed to remove excess dry flour. This step is also called knock back or punch down, and may occur in an oiled bowl followed by a few folds, the dough then being flipped over so the seam side is down. This stretching and folding develops the gluten and equalizes the dough temperature. Long bulk fermentations may have as many as 4 to 5 folding sessions.
Some schedules begin degassing at half the total fermentation time, while others degas once, just before the peak begins to fall. A fermentation ratio is described as the time the dough takes from leaving the mixer to just before the peak begins to fall (when degassing occurs) relative to the remaining bulk fermentation time afterwards. Folding or knock back may also be omitted: after sufficient bulk fermentation time, the dough may go straight to make-up. Make-up: Dividing: This is also called scaling or portioning. The bulk dough is divided into smaller, final weights. This step is used when making more than one loaf of bread, or many rolls. Pre-shaping or rounding: The dough pieces are made into oval, cylinder, or round shapes, depending upon the shape's appropriateness to the final product. Bench or intermediate proofing: A rest period of 8 to 30 minutes follows, which allows the dough to relax, easing shaping. Shaping: Each piece of dough is manipulated into its desired final shape and either placed on proofing trays or panned. This is also called make-up and panning, or moulding and panning. Proofing (or, outside the USA, proving): The final fermentation rest before baking. Like bulk fermentation, proofing is ideally done in a humidity- and temperature-controlled environment. It may be performed at bulk-fermentation temperatures, or at temperatures up to about 95–100 °F (35–38 °C), with 83–88% relative humidity. Yeast thrives within the temperature range of 70–95 °F (21–35 °C), and within that range, warmer temperatures result in faster baker's yeast fermentation. The proofing dough rests and ferments until it reaches about 85% of its final volume. Process: Scoring: If desired, the proofed dough is scored with a lame or razor to slash the top of the dough and direct oven-spring expansion; scoring is also used for its decorative effect. Process: Baking: The proofed dough is loaded into a hot oven for baking. During the first few minutes, the remaining rise occurs in the dough; this is known as oven spring. Starch gelatinization begins at 105 °F (41 °C), the yeast dies at 140 °F (60 °C), and the baking is finished when the product reaches an internal temperature of 208–210 °F (98–99 °C). Process: Cooling: Once the bread is fully baked, it is removed to racks to cool. Bread is sliced once it has cooled to 95–105 °F (35–41 °C). History: The straight dough method became popular after the discovery and later mass production of baker's yeast, as well as the mass production of mixing machines. Straight dough was simpler than sponge and dough, took less time and effort, and was considered superior for commercial purposes. Baking expert Julius Emil Wihlfahrt of The Fleischmann Company wrote in 1915: "Generally speaking, sponge is best used for fancy breads and straight dough for the average bread, for in this manner the advantages of both systems can be best appropriated." History: Prior to 1920, there were two basic kinds of breads: naturally leavened French bread, and Vienna bread leavened with cereal press yeast, an early form of baker's yeast. After 1920, when mixing machines became popular among bakers, rural bakers began to make more sponge doughs and city bakers more straight doughs, both replacing sourdough. By the 1930s, straight dough had mostly replaced sponge dough, and the terms "French" and "Vienna" breads were used less often. Bakers who continued using older methods were generally unable in America to compete on a cost basis, and so, with "rare exceptions," were limited to local niche markets.
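The formula table referred to above did not survive extraction, but straight-dough formulas are conventionally written in baker's percentages, with every ingredient expressed relative to the flour weight (defined as 100%). The Python sketch below shows that arithmetic; the percentages used are illustrative placeholders, not the source's formula.

```python
# Baker's percentages: each ingredient is expressed as a percentage of the
# total flour weight, which is defined to be 100%. The figures below are
# illustrative placeholders, not the formula from the original table.
formula = {
    "flour": 100.0,
    "water": 65.0,   # hydration
    "salt": 2.0,
    "yeast": 1.0,
}

def scale(formula, flour_grams):
    """Convert baker's percentages to gram weights for a given flour weight."""
    return {name: flour_grams * pct / 100.0 for name, pct in formula.items()}

batch = scale(formula, flour_grams=1000.0)
for name, grams in batch.items():
    print(f"{name:>6}: {grams:7.1f} g")
# The total dough weight is the sum of all ingredient weights:
print(f" total: {sum(batch.values()):7.1f} g")
```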
**Freddy II** Freddy II: Freddy (1969–1971) and Freddy II (1973–1976) were experimental robots built in the Department of Machine Intelligence and Perception (later the Department of Artificial Intelligence, now part of the School of Informatics) at the University of Edinburgh. Technology: Technical innovations involving Freddy were at the forefront of the 1970s robotics field. Freddy was one of the earliest robots to integrate vision, manipulation and intelligent systems, and it was notable for its versatility and the ease with which it could be retrained and reprogrammed for new tasks. The idea of moving the table instead of the arm simplified the construction. Freddy also recognised parts visually, using graph matching on the detected features. The system used an innovative collection of high-level procedures for programming the arm movements, which could be reused for each new task. Lighthill controversy: In the mid-1970s there was controversy about the utility of pursuing a general-purpose robotics programme in both the USA and the UK. A BBC TV programme in 1973, referred to as the "Lighthill Debate", pitted James Lighthill, who had written a critical report for the science and engineering research funding agencies in the UK, against Donald Michie of the University of Edinburgh and John McCarthy of Stanford University. The Edinburgh Freddy II and Stanford/SRI Shakey robots were used to illustrate the state of the art at the time in intelligent robotics systems. Freddy I and II: Freddy Mark I (1969–1971) was an experimental prototype with 3 degrees of freedom, created by a rotating platform driven by a pair of independent wheels. The other main components were a video camera and bump sensors connected to a computer. The computer moved the platform so that the camera could see and then recognise the objects. Freddy II (1973–1976) was a 5-degrees-of-freedom manipulator with a large vertical 'hand' that could move up and down, rotate about the vertical axis, and rotate objects held in its gripper around one horizontal axis. The two remaining translational degrees of freedom were provided by a work surface that moved beneath the gripper. The gripper was a two-finger pinch gripper. A video camera was added, as well as, later, a light-stripe generator. Freddy I and II: The Freddy and Freddy II projects were initiated and overseen by Donald Michie. The mechanical hardware and analogue electronics were designed and built by Stephen Salter (who also pioneered renewable energy from waves; see Salter's Duck), and the digital electronics and computer interfacing were designed by Harry Barrow and Gregan Crawford. The software was developed by a team led by Rod Burstall, Robin Popplestone and Harry Barrow, using the POP-2 programming language, one of the world's first functional programming languages. The computing hardware was an Elliott 4130 computer with 384 KB (128K 24-bit words) of RAM and a hard disk, linked to a small Honeywell H316 computer with 16 KB of RAM which directly performed sensing and control. Freddy I and II: Freddy was a versatile system which could be trained and reprogrammed to perform a new task in a day or two. The tasks included putting rings on pegs and assembling simple model toys consisting of wooden blocks of different shapes, a boat with a mast, and a car with axles and wheels.
Freddy I and II: Information about part locations was obtained using the video camera and then matched to previously stored models of the parts. It was soon realised in the Freddy project that the 'move here, do this, move there' style of robot behavior programming (actuator- or joint-level programming) is tedious and also does not allow the robot to cope with variations in part position, part shape and sensor noise. Consequently, the RAPT robot programming language was developed by Pat Ambler and Robin Popplestone, in which robot behavior was specified at the object level. Freddy I and II: This meant that robot goals were specified in terms of desired position relationships between the robot, objects and the scene, leaving the details of how to achieve the goals to the underlying software system. Although developed in the 1970s, RAPT is still considerably more advanced than most commercial robot programming languages. The team of people who contributed to the project were leaders in the field at the time and included Pat Ambler, Harry Barrow, Ilona Bellos, Chris Brown, Rod Burstall, Gregan Crawford, Jim Howe, Donald Michie, Robin Popplestone, Stephen Salter, Austin Tate and Ken Turner. Freddy I and II: Also of interest in the project was the use of a structured-light 3D scanner to obtain the 3D shape and position of the parts being manipulated. The Freddy II robot is currently on display at the Royal Museum in Edinburgh, Scotland, with a segment of the assembly video shown in a continuous loop.
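To make the distinction between the two programming styles described above concrete, the Python sketch below contrasts them; it is illustrative pseudocode with invented function and object names, not actual RAPT or POP-2 syntax.

```python
# Illustrative contrast only (hypothetical API, not real RAPT or POP-2 code).

# Actuator/joint-level style: the programmer spells out every motion, and
# the program breaks if a part is not exactly where the script assumes.
def assemble_joint_level(robot):
    robot.move_table(x=120, y=45)      # assumes the peg is exactly here
    robot.lower_gripper(z=10)
    robot.close_gripper()
    robot.move_table(x=200, y=80)      # assumes the hole is exactly here
    robot.open_gripper()

# Object-level style (the RAPT idea): the programmer states desired spatial
# relationships between objects, and the underlying system plans the motions,
# using sensing to cope with variation in part position.
def assemble_object_level(planner, scene):
    goal = [
        ("against", "peg.base", "block.top_face"),
        ("aligned", "peg.axis", "block.hole_axis"),
    ]
    plan = planner.achieve(goal, scene)  # the system chooses the motions
    plan.execute()
```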
**Neohexene** Neohexene: Neohexene is the hydrocarbon compound with the chemical formula (CH3)3CCH=CH2. It is a colorless liquid with properties similar to other hexenes. It is a precursor to commercial synthetic musk perfumes. Preparation and reactions: Neohexene is prepared by ethenolysis of diisobutene, an example of a metathesis reaction: (CH3)2C=CHC(CH3)3 + CH2=CH2 → (CH3)2C=CH2 + CH2=CHC(CH3)3. It is a building block for synthetic musks via its reaction with p-cymene. It is also used in the industrial preparation of terbinafine. In the study of C–H activation, neohexene is often used as a hydrogen acceptor.
**Holmes heart** Holmes heart: Holmes heart is a rare congenital heart disease with absence of the inflow tract of the morphologically right ventricle (RV) and hence a single left ventricle (LV). The great vessels are normally related, with the pulmonary artery arising from the small infundibular outlet chamber, and the aorta arising from the single left ventricle. The Holmes heart is named after Dr. Andrew F. Holmes, who first described an autopsy specimen of this congenital heart defect in 1824. Dr. Holmes later became the first Dean of the Medical Faculty at McGill University in Canada.
**Childhood-autism spectrum test** Childhood-autism spectrum test: The Childhood Autism Spectrum Test, abbreviated as CAST and formerly titled the Childhood Asperger Syndrome Test, is a tool to screen for autism spectrum disorder in children aged 4–11 years, in a non-clinical setting. It is also called the Social and Communication Development Questionnaire. Development: The questionnaire was developed by the Autism Research Centre at the University of Cambridge, by Fiona J Scott, Simon Baron-Cohen, Patrick Bolton, and Carol Brayne. Pilot study: The pilot study was used to discern the preliminary cutoff scores for the CAST. Parents of 13 children with Asperger Syndrome and 37 typically developing children completed the CAST questionnaire. There were significant differences in average scores, with the Asperger Syndrome sample averaging 21.08 (range 15–31) and the typical sample averaging 4.73 (range 0–13). Main study: Parents of 1,150 primary school aged children were sent the CAST questionnaire, with 199 responding and 174 taking part in the full data analysis. The results suggested that, compared to other screening tools currently available, the CAST may be useful for identifying children at risk for autism spectrum disorders in a mainstream non-clinical sample. Additional research: Research is ongoing to establish accurate sensitivity data, validity, and reliability, to replicate current findings in a larger and geographically more diverse sample, and to study the epidemiological issues in greater detail. The PhenX Toolkit uses CAST as its child protocol for symptoms of autism spectrum disorders. Format: The CAST questionnaire contains 39 yes-or-no questions about the child's social behaviors and communication tendencies. It also contains a separate special needs section that asks about other comorbid disorders that the child might have.
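Since the CAST is a count-based screen, its scoring reduces to tallying answers that match an item key. The Python sketch below is a generic illustration of that tallying; the item key, the answers, and any cutoff are hypothetical parameters, because the text above reports only group score ranges and gives neither the actual scoring key nor a final threshold.

```python
# Minimal sketch of questionnaire scoring (illustrative; the real CAST key
# assigns points to specific items, and not all 39 questions are scored).
def cast_score(answers, scoring_key):
    """Count items where the respondent's yes/no answer matches the
    autism-indicating direction given in the scoring key."""
    return sum(1 for item, indicative in scoring_key.items()
               if answers.get(item) == indicative)

def above_cutoff(score, cutoff):
    # The cutoff is left as a parameter: the pilot study above reports
    # group score ranges, not a final threshold, so none is hard-coded.
    return score >= cutoff

# Example usage with made-up answers for three hypothetical items:
key = {1: True, 2: False, 3: True}          # hypothetical scoring key
answers = {1: True, 2: False, 3: False}     # hypothetical responses
print(cast_score(answers, key))             # -> 2
```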
**Strained silicon** Strained silicon: Strained silicon is a layer of silicon in which the silicon atoms are stretched beyond their normal interatomic distance. This can be accomplished by putting the layer of silicon over a substrate of silicon–germanium (SiGe). As the atoms in the silicon layer align with the atoms of the underlying silicon germanium layer (which are arranged a little farther apart than those of a bulk silicon crystal), the links between the silicon atoms become stretched, thereby leading to strained silicon. Moving these silicon atoms farther apart reduces the atomic forces that interfere with the movement of electrons through the transistors, improving mobility and resulting in better chip performance and lower energy consumption. These electrons can move 70% faster, allowing strained silicon transistors to switch 35% faster. Strained silicon: More recent advances include deposition of strained silicon using metalorganic vapor-phase epitaxy (MOVPE) with metalorganics as starting sources, e.g. silicon sources (silane and dichlorosilane) and germanium sources (germane, germanium tetrachloride, and isobutylgermane). Strained silicon: More recent methods of inducing strain include doping the source and drain with lattice-mismatched atoms such as germanium and carbon. Germanium doping of up to 20% in the P-channel MOSFET source and drain causes uniaxial compressive strain in the channel, increasing hole mobility. Carbon doping as low as 0.25% in the N-channel MOSFET source and drain causes uniaxial tensile strain in the channel, increasing electron mobility. Covering the NMOS transistor with a highly stressed silicon nitride layer is another way to create uniaxial tensile strain. As opposed to wafer-level methods of inducing strain on the channel layer prior to MOSFET fabrication, the aforementioned methods use strain induced during MOSFET fabrication itself to alter the carrier mobility in the transistor channel. History: The idea of using germanium to strain silicon for the purpose of improving field-effect transistors appears to go back at least as far as 1991. In 2000, an MIT report investigated theoretical and experimental hole mobility in SiGe heterostructure-based PMOS devices. In 2003, IBM was reported to be among the primary proponents of the technology. Intel featured strained silicon technology, announced in 2002, in its 90 nm x86 Pentium microprocessor series. In 2005, Intel was sued by the company AmberWave for alleged patent infringement related to strained silicon technology.
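The amount of strain available from a SiGe template, as described above, can be estimated from the lattice mismatch. The Python sketch below assumes Vegard's law (linear interpolation of the alloy lattice constant with Ge fraction), a common first-order approximation that is an assumption here rather than anything stated in the article; the lattice constants are standard room-temperature values.

```python
# First-order estimate of the in-plane tensile strain of a thin silicon
# layer grown on relaxed Si(1-x)Ge(x), assuming Vegard's law for the alloy.
A_SI = 5.431  # lattice constant of Si, angstroms
A_GE = 5.658  # lattice constant of Ge, angstroms

def sige_lattice_constant(x):
    """Vegard's law: linear interpolation between the Si and Ge values."""
    return A_SI + x * (A_GE - A_SI)

def misfit_strain(x):
    """In-plane strain of a Si layer lattice-matched to relaxed SiGe."""
    return (sige_lattice_constant(x) - A_SI) / A_SI

for x in (0.1, 0.2, 0.3):
    print(f"Ge fraction {x:.1f}: strain = {misfit_strain(x) * 100:.2f} %")
# For x = 0.2 this gives roughly 0.8% tensile strain.
```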
**Coenzyme Q5** Coenzyme Q5: Coenzyme Q5, more commonly known as COQ5, is a coenzyme involved in the electron transport chain. It is a shorter-chain homolog of coenzyme Q10 (ubiquinone), the more-common coenzyme of this family.
**All-trans-octaprenyl-diphosphate synthase** All-trans-octaprenyl-diphosphate synthase: All-trans-octaprenyl-diphosphate synthase (EC 2.5.1.90, octaprenyl-diphosphate synthase, octaprenyl pyrophosphate synthetase, polyprenylpyrophosphate synthetase, terpenoidallyltransferase, terpenyl pyrophosphate synthetase, trans-heptaprenyltranstransferase, trans-prenyltransferase) is an enzyme with systematic name (2E,6E)-farnesyl-diphosphate:isopentenyl-diphosphate farnesyltranstransferase (adding 5 isopentenyl units). This enzyme catalyses the following chemical reaction: (2E,6E)-farnesyl diphosphate + 5 isopentenyl diphosphate ⇌ 5 diphosphate + all-trans-octaprenyl diphosphate. This enzyme catalyses the condensation reactions resulting in the formation of all-trans-octaprenyl diphosphate.