**Electronic control unit** An electronic control unit (ECU), also known as an electronic control module (ECM), is an embedded system in automotive electronics that controls one or more of the electrical systems or subsystems in a car or other motor vehicle.
Modern vehicles have many ECUs, and these can include some or all of the following: engine control module (ECM), powertrain control module (PCM), transmission control module (TCM), brake control module (BCM or EBCM), central control module (CCM), central timing module (CTM), general electronic module (GEM), body control module (BCM), and suspension control module (SCM). Together these ECUs are sometimes referred to collectively as the car's computer, though technically they are all separate computers, not a single one. Sometimes an assembly incorporates several individual control modules (a PCM often controls both the engine and the transmission). Some modern motor vehicles have up to 150 ECUs. Embedded software in ECUs continues to increase in line count, complexity, and sophistication. Managing the increasing complexity and number of ECUs in a vehicle has become a key challenge for original equipment manufacturers (OEMs).
Types: Generic industry controller naming is the convention whereby a controller's name indicates the system the controller is responsible for controlling. The generic powertrain controller pertains to a vehicle's emission system and is the only regulated controller name. All other controller names are decided upon by the individual OEM; the engine controller, for example, may carry several different names, such as "DME", "Enhanced Powertrain", "PGM-FI" and many others. Controllers include:
Door control unit (DCU)
Engine control unit (ECU), not to be confused with electronic control unit, the generic term for all these devices
Electric power steering control unit (PSCU), generally integrated into the EPS power pack
Human–machine interface (HMI)
Powertrain control module (PCM): sometimes the functions of the engine control unit and the transmission control module (TCM) are combined into a single unit called the powertrain control module
Seat control unit
Speed control unit (SCU)
Telematic control unit (TCU)
Transmission control module (TCM)
Brake control module (BCM; ABS or ESC)
Battery management system (BMS)
Key elements:
Core: microcontroller
Memory: SRAM, EEPROM, flash
Inputs: supply voltage and ground, digital inputs, analog inputs
Outputs: actuator drivers (e.g. injectors, relays, valves), H-bridge drivers for servomotors, logic outputs
Communication links: bus transceivers, e.g. for K-Line, CAN, Ethernet
Housing
Embedded software: boot loader; metadata for ECU and software identification, version management, and checksums (illustrated below); functional software routines; configuration data
Design and development: The development of an ECU involves both the hardware and the software required to perform the functions expected from that particular module. Automotive ECUs are developed following the V-model. Recently the trend has been to dedicate a significant amount of time and effort to developing safe modules by following standards such as ISO 26262. It is rare for a module to be developed fully from scratch; the design is generally iterative, with improvements made to both the hardware and the software. The development of most ECUs is carried out by Tier 1 suppliers based on specifications provided by the OEM.
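The checksum metadata mentioned under "Embedded software" can be illustrated with a short sketch. The layout below (application image followed by a trailing big-endian CRC32, verified by the boot loader before the application is started) is a hypothetical example for illustration, not any particular ECU's format:

```python
import zlib

def firmware_checksum_ok(image: bytes) -> bool:
    # Hypothetical layout: the last 4 bytes store a big-endian CRC32
    # of everything that precedes them.
    payload, stored = image[:-4], int.from_bytes(image[-4:], "big")
    return zlib.crc32(payload) == stored

# Build a valid image for demonstration, then verify it.
app_code = b"\x00" * 1024  # stand-in for the application's flash contents
image = app_code + zlib.crc32(app_code).to_bytes(4, "big")
assert firmware_checksum_ok(image)                    # intact image passes
assert not firmware_checksum_ok(b"\xff" + image[1:])  # corrupted image fails
```

A real boot loader would run the same comparison over the flash region named in its metadata and refuse to jump to the application on a mismatch.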
Testing and validation: As part of the development cycle, manufacturers perform detailed FMEAs and other failure analyses to catch failure modes that can lead to unsafe conditions or driver annoyance. Extensive testing and validation activities are carried out as part of the production part approval process to gain confidence in the hardware and software. On-board diagnostics (OBD) provide specific data about which system or component failed, or caused a failure, at run time, and help in performing repairs.
Modifications: Some people may wish to modify their ECU to add or change functionality. However, modern ECUs come equipped with protection locks to prevent users from modifying the circuitry or exchanging chips. The protection locks are a form of digital rights management (DRM), the circumventing of which is illegal in certain jurisdictions. In the United States, for example, the DMCA criminalizes circumvention of DRM, though an exemption applies that allows circumvention by the owner of a motorized land vehicle if it is required to allow diagnosis, repair, or lawful modification (i.e. modification that does not violate applicable law, such as emissions regulations).
**Application performance management** In the fields of information technology and systems management, application performance management (APM) is the monitoring and management of the performance and availability of software applications. APM strives to detect and diagnose complex application performance problems to maintain an expected level of service. APM is "the translation of IT metrics into business meaning ([i.e.] value)."
Measuring application performance: Two sets of performance metrics are closely monitored. The first set defines the performance experienced by end-users of the application; one example is average response time under peak load. The components of this set are load and response times. The load is the volume of transactions processed by the application, e.g., transactions per second, requests per second, or pages per second. Without being loaded by computer-based demands (e.g. searches, calculations, transmissions), most applications are fast enough, which is why programmers may not catch performance problems during development. The response times are the times required for an application to respond to a user's actions at such a load. The second set of performance metrics measures the computational resources used by the application for the load, indicating whether there is adequate capacity to support the load, as well as possible locations of a performance bottleneck. Measurement of these quantities establishes an empirical performance baseline for the application. The baseline can then be used to detect changes in performance, and changes in performance can be correlated with external events and subsequently used to predict future changes in application performance (a small computational sketch of these metrics appears at the end of this article). The use of APM is common for Web applications, which lend themselves best to the more detailed monitoring techniques. In addition to measuring response time for a user, response times for components of a Web application can also be monitored to help pinpoint causes of delay. There also exist HTTP appliances that can decode transaction-specific response times at the Web server layer of the application.
In their APM Conceptual Framework, Gartner Research describes five dimensions of APM:
End-user experience monitoring (active and passive)
Application runtime architecture discovery and modeling
User-defined transaction profiling (also called business transaction management)
Application component monitoring
Reporting and application data analytics
In 2016, Gartner Research updated its definition to three main functional dimensions: end-user experience monitoring (EUEM), which evolved into digital experience monitoring (DEM); application discovery, tracing, and diagnostics (ADTD), a new dimension combining three formerly separate ones (application topology [runtime architecture] discovery and visualization, user-defined transaction profiling, and application component deep-dive), since all three are primarily focused on problem remediation and are interlinked; and application analytics (AA).
Current issues: Since the first half of 2013, APM has been in a period of intense competition in technology and strategy, with a multiplicity of vendors and viewpoints. This has caused an upheaval in the marketplace, with vendors from unrelated backgrounds (including network monitoring, systems management, application instrumentation, and web performance monitoring) adopting messaging around APM.
As a result, the term APM has become diluted, evolving into a concept for managing application performance across many diverse computing platforms rather than a single market. With so many vendors to choose from, selecting one can be a challenge, and it is important to evaluate each carefully to ensure its capabilities meet one's needs. Two challenges for implementing APM are (1) it can be difficult to instrument an application to monitor application performance, especially among components of an application, and (2) applications can be virtualized, which increases the variability of the measurements. To alleviate the first problem, application service management (ASM) provides an application-centric approach, where business service performance visibility is a key objective. The second aspect, present in distributed, virtual, and cloud-based applications, poses a unique challenge for application performance monitoring because most of the key system components are no longer hosted on a single machine. Each function is now likely to have been designed as an Internet service that runs on multiple virtualized systems, and the applications themselves are very likely to be moving from one system to another to meet service-level objectives and deal with momentary outages.
The APM conceptual framework: Applications themselves are becoming increasingly difficult to manage as they move toward highly distributed, multi-tier, multi-element constructs that in many cases rely on application development frameworks such as .NET or Java. The APM Conceptual Framework was designed to help prioritize what to focus on first for quick implementation and an overall understanding of the five-dimensional APM model. The framework outlines three areas of focus for each dimension and describes their potential benefits. These areas are referenced as "Primary" below, with the lower-priority dimensions referenced as "Secondary."
End user experience (primary): Measuring the transit of traffic from user request to data and back again is part of capturing the end-user experience (EUE). The outcome of this measuring is referred to as real-time application monitoring (aka top-down monitoring), which has two components, passive and active. Passive monitoring is usually agentless, implemented with an appliance using network port mirroring; a key feature to consider is the ability to support multi-component analytics (e.g., database, client/browser). Active monitoring, on the other hand, consists of synthetic probes and web robots predefined to report system availability and business transactions. Active monitoring is a good complement to passive monitoring; together, these two components help provide visibility into application health during off-peak hours when transaction volume is low. User experience management (UEM) is a subcategory that emerged from the EUE dimension to monitor the behavioral context of the user. UEM, as practiced today, goes beyond availability to capture latencies and inconsistencies as human beings interact with applications and other services. UEM is usually agent-based and may include JavaScript injection to monitor the end-user device. UEM is considered another facet of real-time application monitoring.
Runtime application architecture (secondary): Application discovery and dependency mapping (ADDM) offerings exist to automate the process of mapping transactions and applications to underlying infrastructure components.
When preparing to implement a runtime application architecture, it is necessary to ensure that up/down monitoring is in place for all nodes and servers within the environment (aka bottom-up monitoring). This helps lay the foundation for event correlation and provides the basis for a general understanding of how network topologies interact with application architectures.
Business transaction (primary): Focus on user-defined transactions or the URL page definitions that have some meaning to the business community. For example, if there are 200 to 300 unique page definitions for a given application, group them into 8–12 high-level categories. This allows for meaningful SLA reports and provides trending information on application performance from a business perspective: start with broad categories and refine them over time. For a deeper understanding, see business transaction management.
Deep dive component monitoring (secondary): Deep dive component monitoring (DDCM) requires an agent installation and is generally targeted at middleware, focusing on web, application, and messaging servers. It should provide a real-time view of the J2EE and .NET stacks, tying them back to the user-defined business transactions. A robust monitor shows a clear path from code execution (e.g., Spring and Struts) to the URL rendered, and finally to the user request. Since DDCM is closely related to the second dimension in the APM model, most products in this field also provide application discovery and dependency mapping (ADDM) as part of their offering.
Analytics/reporting (primary): It is important to arrive at a common set of metrics to collect and report on for each application, and then standardize on a common view of how to present the application performance data. Collecting raw data from the other tool sets across the APM model provides flexibility in application reporting, allowing a wide variety of performance questions to be answered as they arise, despite the different platforms each application may be running on. Too much information is overwhelming, so it is important to keep reports simple; otherwise they will not be used.
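As a minimal illustration of the first metric set described above (load and response times), the sketch below computes throughput and latency percentiles from a window of request records. The sample data and field layout are invented for the example; a real APM agent would collect these from access logs or instrumented code:

```python
import statistics

# Each record: (timestamp in seconds, response time in milliseconds).
# Hypothetical sample standing in for a real access log.
requests = [(0.1, 120), (0.4, 95), (0.9, 310), (1.2, 101), (1.8, 2400), (2.5, 88)]

window = max(t for t, _ in requests) - min(t for t, _ in requests)
load = len(requests) / window                 # load: requests per second
latencies = sorted(ms for _, ms in requests)

def percentile(sorted_vals, p):
    """Nearest-rank percentile; good enough for a monitoring sketch."""
    idx = max(0, round(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[idx]

print(f"load: {load:.2f} req/s")
print(f"median latency: {statistics.median(latencies)} ms")
print(f"p95 latency: {percentile(latencies, 95)} ms")  # tail captures the worst user experience
```

Tracking percentiles alongside the average matters because averages hide spikes; a baseline recorded this way makes later regressions (e.g., a rising p95) easy to detect, in line with the baselining discussion above.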
**Geode (processor)** Geode was a series of x86-compatible system-on-a-chip (SoC) microprocessors and I/O companions produced by AMD, targeted at the embedded computing market.
The series was originally launched by National Semiconductor as the Geode family in 1999. The original Geode processor core itself is derived from the Cyrix MediaGX platform, which was acquired in National's merger with Cyrix in 1997. AMD bought the Geode business from National in August 2003 to augment its existing line of embedded x86 processor products. Before acquiring Geode, AMD marketed the AMD Élan, a family of 32-bit embedded SoCs based on its own Am386, Am486 and Am586 microprocessors. All of these products had been backed with a long-term supply guarantee to meet the needs of embedded customers; however, after the Geode acquisition, the Élan line was abruptly discontinued. AMD expanded the Geode series to two classes of processor: the MediaGX-derived Geode GX and LX, and the modern Athlon-derived Geode NX.
Geode processors are optimized for low power consumption and low cost while still remaining compatible with software written for the x86 platform. The MediaGX-derived processors lack modern features such as SSE and a large on-die L1 cache, but these are offered on the more recent Athlon-derived Geode NX. Geode processors tightly integrate some of the functions normally provided by a separate chipset, such as the northbridge. Whilst the processor family is best suited for thin client, set-top box and embedded computing applications, it can be found in unusual applications such as the Nao robot and the Win Enterprise IP-PBX.
The One Laptop per Child project used the GX series Geode processor in OLPC XO-1 prototypes, but moved to the Geode LX for production. The Linutop (a rebranded Artec ThinCan DBE61C or rebranded FIC ION603A) is also based on the Geode LX. The 3Com Audrey was powered by a 200 MHz Geode GX1. The SCxxxx range of Geode devices are single-chip versions, comparable to the SiS 552, VIA CoreFusion or Intel's Tolapai, which integrate the CPU, memory controller, graphics and I/O devices into one package. Single processor boards based on these processors are manufactured by Artec Group, PC Engines (WRAP), Soekris, and Win Enterprises. AMD discontinued all Geode processors in 2019.
National Semiconductor Geode: Geode GXm: a rebranded Cyrix MediaGXm; returns "CyrixInstead" on CPUID.
0.35 μm four-layer metal CMOS
MMX instructions
Core speed: 180, 200, 233, 266 MHz
3.3 V I/O, 2.9 V core
16 KB four-way set-associative write-back unified (I&D) L1 cache, 2 or 4 KB of which can be reserved as I/O scratchpad RAM for use by the integrated graphics core (e.g. for bitblits)
30–33 MHz PCI bus interconnect with CPU bus
64-bit SDRAM interface
Fully static design
CS5530 companion chip (implements sound and video functions)
VSA architecture
1280×1024×8 or 1024×768×16 display
Geode GXLV: a die-shrunk GXm.
0.25 μm four-layer metal CMOS
Core speed: 166, 180, 200, 233, 266 MHz
3.3 V I/O; 2.2, 2.5, or 2.9 V core
Typical power: 1.0 W at 2.2 V/166 MHz, 2.5 W at 2.9 V/266 MHz
Geode GX1: a die-shrunk GXLV.
0.18 μm four-layer metal CMOS
Core speed: 200, 233, 266, 300, 333 MHz
3.3 V I/O; 1.8, 2.0, or 2.2 V core
Typical power: 0.8 W at 1.8 V/200 MHz, 1.4 W at 2.2 V/333 MHz
64-bit SDRAM interface, up to 111 MHz
CS5530A companion chip
60 Hz VGA refresh rate
The National Semiconductor/AMD SC1100 is based on the Geode GX1 core and the CS5530 support chip.
Geode GX2: announced by National Semiconductor Corporation in October 2001 at the Microprocessor Forum, with a first demonstration at COMPUTEX Taiwan in June 2002.
0.15 μm process technology
MMX and 3DNow! instructions
16 KB instruction and 16 KB data L1 cache
GeodeLink architecture, 6 GB/s on-chip bandwidth, up to 2 GB/s memory bandwidth
Integrated 64-bit PC133 SDRAM and DDR266 controller
Clock rate: 266, 333, and 400 MHz
33 MHz PCI bus interconnect with CPU bus
3 PCI masters supported
1600×1200 24-bit display with video scaling
CRT DACs and a UMA DSTN/TFT controller
CS5535 or CS5536 companion chip
Geode SCxx00 series: developed by National Tel Aviv (NSTA) based on IP from Longmont and other sources. Applications: the SC3200 was used in the Tatung TWN-5213 CU.
AMD Geode: In 2002, AMD introduced the Geode GX series, which was a re-branding of the National Semiconductor GX2. This was quickly followed by the Geode LX, running at up to 667 MHz. The LX brought many improvements, such as higher-speed DDR, a redesigned instruction pipe, and a more powerful display controller. The upgrade from the CS5535 I/O companion to the CS5536 brought higher-speed USB.
Geode GX and LX processors are typically found in devices such as thin clients and industrial control systems. However, they have come under competitive pressure from VIA on the x86 side and from ARM processors from various vendors, which have taken much of the low-end business.
Because of the limited relative performance (albeit higher performance per watt) of the GX and LX core design, AMD introduced the Geode NX, an embedded version of the K7 Athlon processor. The Geode NX uses the Thoroughbred core and is quite similar to the Athlon XP-M processors that use this core. The Geode NX includes 256 KB of level 2 cache and runs fanless at up to 1 GHz in the NX1500@6 W version. The NX2001 part runs at 1.8 GHz, the NX1750 part runs at 1.4 GHz, and the NX1250 runs at 667 MHz.
The Geode NX, with its strong FPU, is particularly suited for embedded devices with graphical performance requirements, such as information kiosks and casino gaming machines such as video slots.
It was reported, however, that the specific design team for Geode processors in Longmont, Colorado, was closed, with 75 employees relocated to the new development facility in Fort Collins, Colorado; the Geode line of processors was expected to be updated less frequently due to the closure of the Geode design center. In 2009, comments by AMD indicated that there were no plans for any future microarchitecture upgrades to the processor and that there would be no successor; however, the processors would still be available, with the planned availability of the Geode LX extending through 2015. In 2016, AMD updated the product roadmap, announcing an extension of last-time-buy and shipment dates for the LX series to 2019. In early 2018, hardware manufacturer congatec announced an agreement with AMD for a further extension of availability of congatec's Geode-based platforms.
AMD Geode GX and Geode LX features: low power; full x86 compatibility.
Processor functional blocks:
CPU core
GeodeLink control processor
GeodeLink interface units
GeodeLink memory controller
Graphics processor
Display controller
Video processor
Video input port
GeodeLink PCI bridge
Security block: 128-bit Advanced Encryption Standard (AES) (CBC/ECB), true random number generator
Specification:
Processor frequency up to 600 MHz (LX 900), 500 MHz (LX 800) and 433 MHz (LX 700)
Power management: ACPI, low power, wakeup on SMI/INTR
64 KB instruction / 64 KB data L1 cache and 128 KB L2 cache; split instruction/data cache/TLB
DDR memory at 400 MHz (LX 800) or 333 MHz (LX 700)
Integrated FPU with MMX and 3DNow!
9 GB/s internal GeodeLink interface unit (GLIU)
Simultaneous high-resolution CRT and TFT output (high and standard definition)
VESA 1.1 and 2.0 VIP/VDA support
Manufactured on a 0.13 micrometre process
481-terminal PBGA (plastic ball grid array)
GeodeLink active hardware power management
Applications: OLPC XO-1.
Geode NX features: 7th-generation core (based on the Mobile Athlon XP-M); power management with AMD PowerNow!, ACPI 1.0b and ACPI 2.0; 3DNow!, MMX and SSE instruction sets; 0.13 μm (130 nm) fabrication process; pin compatibility between all NX family processors; OS support for Linux, Windows CE, and MS Windows XP; compatibility with Socket A motherboards.
Geode NX 2001: In 2007, there was a Geode NX 2001 model on sale which was in fact a relabelled Athlon XP 2200+ Thoroughbred. The processors, with part numbers AANXA2001FKC3G or ANXA2001FKC3D, run at a 1.8 GHz clock speed with a 1.65 V core operating voltage and a power consumption of 62.8 W. There are no official references to this processor beyond officials explaining that the batch of CPUs was "being shipped to specific customers", though it is clearly a desktop Athlon XP CPU core rather than the Mobile Athlon XP-M derived Thoroughbred core of the other Geode NX CPUs, and thus does not feature their embedded-application-specific thermal envelope, power consumption, and power management features. This kind of "badge engineering", in which a particular CPU is relabelled for an OEM that wants to maintain brand recognition and association with the Geode NX in its products even though the end application does not require the Geode NX's advanced power and thermal optimization, is understandable: relabelling a part in a product catalog is practically free, and the processors share the same CPU socket (Socket A).
Chipsets for Geode:
NSC Geode CS5530A: southbridge for the Geode GX1.
NSC/AMD Geode CS5535: southbridge for the Geode GX(2) and Geode LX (USB 1.1). Integrates four USB ports, one ATA-66 UDMA controller, one infrared communication port, one AC'97 controller, one SMBus controller, one LPC port, as well as GPIO, power management, and legacy functional blocks.
AMD Geode CS5536: southbridge for the Geode GX and Geode LX (USB 2.0). Power consumption: 1.9 W (433 MHz) and 2.4 W (500 MHz). This chipset is also used on a PowerPC board (Amy'05).
Geode NX processors are "100 percent socket and chipset compatible" with AMD's Socket A Athlon XP processors: SIS741CX northbridge and SIS 964 southbridge, VIA KM400 northbridge and VIA VT8235 southbridge, VIA KM400A northbridge and VIA VT8237R southbridge, and other Socket A chipsets.
**Octyl gallate** Octyl gallate is the ester of 1-octanol and gallic acid. As a food additive, it is used under the E number E311 as an antioxidant and preservative.
Properties: Octyl gallate is a white powder with a characteristic odor. It is very slightly soluble in water and soluble in alcohol. Its solubility in lard is 1.1%. Octyl gallate darkens in the presence of iron.
Uses: This antioxidant is used in numerous pharmaceutical, cosmetic, and food products, such as soaps, shampoos, shaving soaps, skin lotions, deodorants, margarine, and peanut butter. It is a synergistic antioxidant with butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA).
**1986 Cleveland Indians season** Offseason:
December 10, 1985: George Vukovich was purchased from the Indians by the Seibu Lions (Japan Pacific).
January 7, 1986: Roy Smith and Ramón Romero were traded by the Indians to the Minnesota Twins for Ken Schrom and Bryan Oelkers.
January 14, 1986: Troy Neel was drafted by the Indians in the 9th round of the 1986 Major League Baseball draft. Player signed May 5, 1986.
January 23, 1986: Fran Mullins was purchased by the Indians from the San Francisco Giants.
February 8, 1986: Dickie Noles was signed as a free agent by the Indians.
February 10, 1986: Butch Benton was released by the Indians.
February 25, 1986: Jim Kern was signed as a free agent by the Indians.
Regular season: Season standings; record vs. opponents.
Notable transactions:
March 28, 1986: Dave Von Ohlen was released by the Indians.
April 1, 1986: Jerry Willard was released by the Indians.
June 2, 1986: Greg Swindell was drafted by the Indians in the 1st round (2nd pick) of the 1986 Major League Baseball draft. Player signed July 31, 1986.
June 17, 1986: Jim Kern was released by the Indians.
June 20, 1986: Neal Heaton was traded by the Indians to the Minnesota Twins for John Butcher.
Opening Day lineup; roster.
Player stats: Batting note: G = Games played; AB = At bats; R = Runs scored; H = Hits; 2B = Doubles; 3B = Triples; HR = Home runs; RBI = Runs batted in; AVG = Batting average; SB = Stolen bases. Pitching note: W = Wins; L = Losses; ERA = Earned run average; G = Games pitched; GS = Games started; SV = Saves; IP = Innings pitched; R = Runs allowed; ER = Earned runs allowed; BB = Walks allowed; K = Strikeouts.
**Roulade** A roulade is a dish of filled rolled meat or pastry. Roulade can be savory or sweet; Swiss roll is an example of a sweet roulade. Traditionally found in various European cuisines, the term roulade originates from the French word rouler, meaning "to roll".
Meat: A meat-based roulade typically consists of a slice of steak rolled around a filling such as cheese, vegetables, or other meats. A roulade, like a braised dish, is often browned and then covered with wine or stock and cooked. Such a roulade is commonly secured with a toothpick, metal skewer or a piece of string. The roulade is sliced into rounds and served. Of this common form, there are several notable dishes: Paupiette, a French veal roulade filled with vegetables, fruits or sweetmeats; and Rinderroulade, a German and Hungarian beef roulade filled with onions, bacon and pickles (also Kohlrouladen, cabbage filled with minced meat).
Španělské ptáčky (Spanish birds) are roulades in Czech cuisine. The recipe is practically identical to German Rouladen, perhaps omitting wine and adding a wedge of hard-boiled egg and/or frankfurter to the filling. Unlike the large roulade, which is sliced before serving, the "birds" are typically 10 cm (3.9 in) long and served whole with a side dish of rice or Czech-style bread dumplings.
Szüz tekercsek ("virgin rouladen") is a Hungarian dish filled with minced meat. Zrazy (or "rolada") is found in Poland, and rollade in the Netherlands; most rollades are made from rolled pork, and a typical Dutch rollade is not filled, the common spices being pepper, salt and nutmeg.
Involtini: In Italian cuisine, roulades are known as involtini (singular involtino). Involtini can be thin slices of beef, pork, or chicken rolled with a filling of grated cheese (usually Parmesan cheese or Pecorino Romano), sometimes egg to give consistency, and some combination of additional ingredients such as bread crumbs, other cheeses, minced prosciutto, ham or Italian sausage, mushrooms, onions, garlic, spinach, pinoli (pine nuts), etc. Involtini (diminutive form of involti) means "little bundles". Each involtino is held together by a wooden toothpick, and the dish is usually served (in various sauces: red, white, etc.) as a second course. When cooked in tomato sauce, the sauce itself is used to toss the pasta for the first course, giving a consistent taste to the whole meal.
In southern parts of Italy such as Sicily, where fish are a more plentiful element of cuisine, involtini can sometimes be made with fish such as swordfish. The term encompasses dishes like braciole (a roulade consisting of beef, pork or chicken usually filled with Parmesan cheese, bread crumbs and eggs) and saltimbocca. There are also vegetarian involtini made with eggplant.
Pastry: Some roulades consist of cake (often sponge cake) baked in a flat pan and rolled around a filling. Cake rolled around jam, chocolate buttercream, nuts or other fillings is an example of a sweet roulade, like the bejgli or the Swiss roll. The bûche de Noël or "Yule log" is a traditional French Christmas cake roll, often decorated with frosting made to look like bark. Another form of non-meat roulade consists of a soufflé-type mixture baked in a flat pan and rolled around a filling.
**Clobetasol propionate** Clobetasol propionate is a corticosteroid used to treat skin conditions such as eczema, contact dermatitis, seborrheic dermatitis, and psoriasis. It is applied to the skin as a cream, ointment, or shampoo. Use should be short-term and only if other, weaker corticosteroids are not effective. Use is not recommended in rosacea or perioral dermatitis. Common side effects include skin irritation, dry skin, redness, pimples, and telangiectasia. Serious side effects may include adrenal suppression, allergic reactions, cellulitis, and Cushing's syndrome. Use in pregnancy and breastfeeding is of unclear safety. Clobetasol is believed to work by activating steroid receptors. It is a US Class I (Europe: class IV) corticosteroid, making it one of the strongest available.
Clobetasol propionate was patented in 1968 and came into medical use in 1978. It is available as a generic medication. In 2020, it was the 171st most commonly prescribed medication in the United States, with more than 3 million prescriptions.
Medical uses: Clobetasol propionate is used for the treatment of various skin disorders including eczema, herpes labialis, psoriasis, and lichen sclerosus. It is also used to treat several auto-immune diseases including alopecia areata, lichen planus (auto-immune skin nodules), and mycosis fungoides (T-cell skin lymphoma). It is used as first-line treatment for both acute and chronic GVHD of the skin. Clobetasol propionate is used cosmetically for skin whitening, although this use is controversial. The U.S. Food and Drug Administration has not approved it for that purpose, and sales without a prescription are illegal in the U.S. Nonetheless, skin-whitening creams containing this ingredient can sometimes be found in beauty supply stores in New York City and on the internet. It is also sold internationally, and does not require a prescription in some countries. Whitening creams with clobetasol propionate, such as Hyprogel, can make skin thin and easily bruised, with visible capillaries and acne. It can also lead to hypertension, elevated blood sugar, suppression of the body's natural steroids, and stretch marks, which may be permanent. Clobetasol propionate is, along with mercury and hydroquinone, "amongst the most toxic and most used agents in lightening products." Many products sold illegally have higher concentrations of clobetasol propionate than is permitted for prescription drugs.
Contraindications: According to the California Environmental Protection Agency, clobetasol propionate should not be used by pregnant women, or women expecting to become pregnant soon, as studies in rats show a risk of birth defects: "Studies in the rat following oral administration at dosage levels up to 50 mcg/kg per day revealed that the females exhibited an increase in the number of resorbed embryos and a decrease in the number of living fetuses at the highest dose. Pregnancy: Teratogenic Effects (i.e., possibility of causing abnormalities in fetuses): Pregnancy Category C: Clobetasol propionate has not been tested for teratogenicity when applied topically; however, it is absorbed percutaneously, and when administered subcutaneously it was a significant teratogen in both the rabbit and mouse. Clobetasol propionate has greater teratogenic potential than steroids that are less potent. There are no adequate and well-controlled studies of the teratogenic effects of clobetasol propionate in pregnant women.
Temovate Cream and Ointment should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus."
Forms: Clobetasol propionate is marketed and sold worldwide under numerous names, including Clobex, Clob-x (Colombia), Clovate, Clobet (Biolab, Thailand), Clonovate (T.O. Chemicals, Thailand), Cormax (Watson, US), Haloderm (Switzerland, by ELKO Org), Pentasol (Colombia), Cosvate, Clop (Cadila Healthcare, India), Propysalic (India), Temovate (US), Dermovate (GlaxoSmithKline; Canada, Estonia, Pakistan, Switzerland, Portugal, Romania, Israel), Olux, ClobaDerm, Tenovate, Dermatovate (Brazil, Mexico), Butavate, Movate, Novate, Salac (Argentina), Powercort, Lotasbat and Kloderma (Indonesia), Lemonvate (Italy), Delor (Ethiopia), Psovate (Turkey), and Skineal (Nigeria).
**Substomatal cavity** Substomatal cavity: In plants, the substomatal cavity is the cavity located immediately proximal to the stoma. It acts as a diffusion chamber connected with intercellular air spaces and allows rapid diffusion of carbon dioxide and other gases (such as plant pheromones) in and out of plant cells.
**ARHGAP8** Rho GTPase-activating protein 8 is a protein that in humans is encoded by the ARHGAP8 gene.
Function: This gene encodes a member of the RHOGAP family. GAP (GTPase-activating) family proteins participate in signaling pathways that regulate cell processes involved in cytoskeletal changes. The small GTPases they act on alternate between an active (GTP-bound) and an inactive (GDP-bound) state, and GAPs promote the inactive state by stimulating GTP hydrolysis. Rare read-through transcripts, containing exons from the PRR5 gene which is located immediately upstream, led to the original description of this gene as encoding a RHOGAP protein containing the proline-rich domains characteristic of PRR5 proteins. Alternatively spliced variants encoding different isoforms have been described.
**MENTAL domain** The MENTAL or MLN64 NH2-terminal domain is a membrane-spanning domain that is conserved in two late endosomal proteins in vertebrates, MLN64 and MENTHO. The domain is 170 amino acids long. Current data indicate that this domain mediates both homodimerization of MLN64 and of MENTHO and heterodimerization between the two proteins. The domain may also direct cholesterol transport.
**E-diesel** E-diesel is a synthetic diesel fuel created by Audi for use in automobiles. Currently, e-diesel is created by an Audi research facility in partnership with a company named Sunfire. The fuel is created from carbon dioxide, water, and electricity in a process powered by renewable energy sources, yielding a liquid energy carrier called blue crude (in contrast to regular crude oil), which is then refined to generate e-diesel. E-diesel is considered to be a carbon-neutral fuel, as it does not extract new carbon and the energy sources that drive the process are carbon-neutral. As of April 2015, an Audi A8 driven by Germany's Federal Minister of Education and Research was using the e-diesel fuel.
Catalytic conversions: Sunfire, a clean technology company, operates a pilot plant in Dresden, Germany. The current process involves high-temperature electrolysis powered by electricity generated from renewable energy sources to split water into hydrogen and oxygen. The next two chemical processes to create the blue crude are carried out at a temperature of 220 °C (428 °F) and a pressure of 25 bars (2,500 kPa). In a conversion step, hydrogen and carbon dioxide are used to create syngas, with water as a byproduct. The syngas, which contains carbon monoxide and hydrogen, then reacts to generate the blue crude. The Sunfire power-to-liquids system takes carbon dioxide (CO2) and water (H2O) as base products and proceeds in three steps:
1st step: electrolysis of water (SOEC), in which water is split into hydrogen and oxygen.
2nd step: conversion reactor (RWGSR), in which hydrogen and carbon dioxide are inputs and hydrogen, carbon monoxide, and water are outputs.
3rd step: F-T reactor, in which hydrogen and carbon monoxide are inputs and paraffinic and olefinic hydrocarbons, ranging from methane to high-molecular-weight waxes, are outputs.
The final step is also known as the Fischer–Tropsch process, first developed in 1925 by the German chemists Franz Fischer and Hans Tropsch. After the blue crude is produced, it can be refined to create e-diesel on site, saving fuel and other infrastructure costs for crude transportation. As of April 2015, Sunfire had the capability to produce a limited amount of fuel, 160 litres (35 imp gal; 42 US gal) a day, with a plan to increase production to an industrial scale. Audi also partners with a company named Climeworks, which manufactures direct air capture technology. Climeworks technology absorbs atmospheric carbon dioxide, which is chemically captured at the surface of a sorbent until the sorbent becomes saturated. At that point, the sorbent is heated to 95 °C (203 °F) in a desorption cycle to drive out high-purity carbon dioxide that can be used during the conversion step of the blue crude generation process. The atmospheric carbon dioxide capturing process has 90% of its energy demand in the form of low-temperature heat, with the rest being electrical energy for pumping and control. The combined plant of Climeworks and Sunfire in Dresden became operational in November 2014. A plant on Herøya in Norway, producing 10 million liters per year, is being considered, as CO2 from a fertilizer plant is readily available and electricity is relatively cheap in Norway.
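The idealized stoichiometry of the three conversion steps described above can be sketched as follows. These are the standard textbook reactions for solid-oxide electrolysis, reverse water-gas shift, and Fischer–Tropsch alkane synthesis; the product distribution of the real process will differ:

```latex
\begin{align}
\text{1st step (SOEC electrolysis):}\quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \\
\text{2nd step (RWGS):}\quad & \mathrm{CO_2} + \mathrm{H_2} \rightarrow \mathrm{CO} + \mathrm{H_2O} \\
\text{3rd step (Fischer--Tropsch):}\quad & (2n{+}1)\,\mathrm{H_2} + n\,\mathrm{CO} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
\end{align}
```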
Properties: As much as eighty percent of blue crude can be converted into e-diesel. The fuel contains no sulfur or aromatics, and has a high cetane number. These properties allow it to be blended with typical fossil diesel and used as a replacement fuel in automobiles with diesel engines.
Oxygen by-product: In future designs, the oxygen by-product may be combined with renewable natural gas in the oxidative coupling of methane to ethylene: 2CH4 + O2 → C2H4 + 2H2O. The reaction is exothermic (∆H = −280 kJ/mol) and occurs at high temperatures (750–950 °C). The yield of the desired C2 products is reduced by non-selective reactions of methyl radicals with the reactor surface and oxygen, which produce carbon monoxide and carbon dioxide by-products. Another ethylene production initiative, developed by the European Commission through the Seventh Framework Programme for Research and Technological Development, is the OCMOL process, the Oxidative Coupling of Methane (OCM) with simultaneous Reforming of Methane (RM) in a fully integrated reactor.
Biocatalytic conversions: Audi also partnered with a now-defunct United States company, Joule, to develop Sunflow-D as e-diesel for Audi. Joule's planned plant in New Mexico involved the use of genetically modified microorganisms in bright sunlight to act as a catalyst for the conversion of carbon dioxide and salty water into hydrocarbons. The process could be modified for longer molecular chains to produce alkanes in order to create synthetic diesel. Joule was the first company to patent a modified organism that continuously secretes hydrocarbon fuel. The organism is a single-celled cyanobacterium, also known as a blue-green alga, although it is technically not an alga. It produces the fuel using photosynthesis, the same process that multi-cellular green plants use to make sugars and other materials from water, carbon dioxide, and sunlight.
Similar initiatives: There are other initiatives to create synthetic fuel from carbon dioxide and water; however, they are not part of Audi's initiatives and the fuels are not called e-diesel. The water-splitting methods vary.
Concentrated solar power: 2004 Sunshine-to-Petrol (Sandia National Laboratories); 2013 NewCO2Fuels (New CO2 Fuels Ltd (IL) and Weizmann Institute of Science); 2014 Solar-Jet Fuels (consortium partners ETH Zurich, Royal Dutch Shell, DLR, Bauhaus Luftfahrt, ARTTIC).
High-temperature electrolysis: 2004 Syntrolysis Fuels (Idaho National Laboratory and Ceramatec, Inc. (US)); 2008 WindFuels (Doty Energy (US)); 2012 Air Fuel Synthesis (Air Fuel Synthesis Ltd (UK)); 2013 Green Feed (Ben-Gurion University of the Negev and Israel Strategic Alternative Energy Foundation (I-SAEF)); 2014 E-diesel.
The U.S. Naval Research Laboratory (NRL) is designing a power-to-liquids system using the Fischer–Tropsch process to create fuel on board a ship at sea, with the base products carbon dioxide (CO2) and water (H2O) being derived from sea water via "An Electrochemical Module Configuration For The Continuous Acidification Of Alkaline Water Sources And Recovery Of CO2 With Continuous Hydrogen Gas Production".
**Mnemosyne (software)** Mnemosyne (named for the Greek goddess of memory, Mnemosyne) is a line of spaced repetition software developed since 2003. Spaced repetition is an evidence-based learning technique that has been shown to increase the rate of memorization.
Features: Its spacing algorithm is based on an early version of the SuperMemo algorithm, SM-2, with some modifications that deal with early and late repetitions (a sketch of the base algorithm appears at the end of this article). Other features include:
Support for pictures, sound, video, HTML, Flash and LaTeX
Portability (can be installed on a USB stick)
Categorization of cards
Learning progress statistics
Storage of learning data (represented as decks of cards that each have a question and an answer side) in ".mem" database files, which are interoperable with a number of other spaced repetition applications
Plugins and JavaScript support
Review of cards on Android devices
Overview: Each day, the software displays each card that is scheduled for repetition. The user then grades their recollection of the card's answer on a scale of 0–5. The software then schedules the next repetition of the card in accordance with the user's rating of that particular card and the database of cards as a whole. This produces an active, rather than passive, review process. The rationale behind this approach is that, because of the spacing effect, the number of repetitions done per day is reduced over time, increasing the rate of recall (when compared to passive learning techniques) with minimal time spent learning.
Software: Mnemosyne is written in Python, which allows its use on Microsoft Windows, Linux, and Mac OS X. A client program for review on Android devices is also available but needs to be synchronized by the desktop program. Users of the software usually make their own database of cards, although pre-made Mnemosyne databases are available, and it is possible to import SuperMemo collections and text files. SQLite is used by the program to store files. Imports of flashcard databases from Anki, as well as databases from older versions of Mnemosyne, are possible.
Research: Mnemosyne collects data from volunteering users and is a research project on long-term memory. An August 2009 version of the dataset was made available via BitTorrent; a January 2014 version is available for download. Otherwise, the latest version is available from the author, Peter Bienstman, upon request.
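For orientation, here is a minimal sketch of the classic SM-2 scheduling step that Mnemosyne's algorithm is based on. This is the published SM-2 formula, not Mnemosyne's exact code, which modifies it (for example, to handle early and late repetitions):

```python
def sm2_update(quality: int, repetitions: int, interval: int, easiness: float):
    """One SM-2 step. quality is the user's 0-5 grade for a card;
    returns the updated (repetitions, interval in days, easiness)."""
    if quality < 3:
        # Failed recall: restart the repetition sequence for this card.
        repetitions, interval = 0, 1
    else:
        if repetitions == 0:
            interval = 1
        elif repetitions == 1:
            interval = 6
        else:
            interval = round(interval * easiness)  # intervals grow geometrically
        repetitions += 1
    # Nudge the easiness factor up for good grades, down for poor ones,
    # never below the SM-2 floor of 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions, interval, easiness

# A card graded 5, 4, 5, starting from the default easiness of 2.5:
state = (0, 0, 2.5)
for q in (5, 4, 5):
    state = sm2_update(q, *state)
    print(state)  # repetition count, days until next review, easiness
```

The geometric growth of the interval is what produces the spacing-effect behavior described in the Overview: well-remembered cards are seen ever less often, so daily workload falls over time.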
**Paper shredder** A paper shredder is a mechanical device used to cut sheets of paper into either strips or fine particles. Government organizations, businesses, and private individuals use shredders to destroy private, confidential, or otherwise sensitive documents.
Invention: The first paper shredder is credited to prolific inventor Abbot Augustus Low, whose patent was filed on February 2, 1909. His invention was, however, never manufactured, because the inventor died prematurely soon after filing the patent. Adolf Ehinger's paper shredder, based on a hand-crank pasta maker, was the first to be manufactured, in 1935 in Germany. Supposedly he created the shredding machine to shred his anti-Nazi leaflets and avoid the inquiries of the authorities. Ehinger later marketed his patented shredders, converted from hand-crank to electric motor, and began selling them to government agencies and financial institutions. Ehinger's company, EBA Maschinenfabrik, manufactured the first cross-cut paper shredders in 1959 and continues to do so to this day as EBA Krug & Priester GmbH & Co. in Balingen. Right before the fall of the Berlin Wall, a "wet shredder" was invented in the former German Democratic Republic. To prevent the paper shredders in the Ministry for State Security (Stasi) from clogging, this device mashed paper snippets with water. With a shift from paper to digital document production, modern industrial shredders can process non-paper media, such as credit cards and CDs, and destroy thousands of documents in under one minute.
History of use: Until the mid-1980s, it was rare for paper shredders to be used by non-government entities. A high-profile example of their use was when the U.S. embassy in Iran used shredders to reduce paper pages to strips before the embassy was taken over in 1979, but some documents were reconstructed from the strips, as detailed below. After Colonel Oliver North told Congress that he used a Schleicher cross-cut model to shred Iran-Contra documents, sales for that company increased nearly 20 percent in 1987. Paper shredders became more popular among U.S. citizens with privacy concerns after the 1988 Supreme Court decision in California v. Greenwood, in which the Supreme Court of the United States held that the Fourth Amendment does not prohibit the warrantless search and seizure of garbage left for collection outside of a home. Anti-burning laws also resulted in increased demand for paper shredding. More recently, concerns about identity theft have driven increased personal use, with the US Federal Trade Commission recommending that individuals shred financial documents before disposal. Information privacy laws such as FACTA, HIPAA, and the Gramm–Leach–Bliley Act are also driving shredder usage, as businesses and individuals take steps to securely dispose of confidential information.
Types: Shredders range in size and price from small, inexpensive units designed for a limited number of pages to large, expensive units used by commercial shredding services that can shred millions of documents per hour. While the very smallest shredders may be hand-cranked, most shredders are electrically powered. Shredders have over time added features to improve the user's experience: many now reject paper that is fed over capacity to avoid jams, others have safety features to reduce risks, and some shredders designed for use in shared workspaces or department copy rooms have noise reduction.
Mobile shredding truck: Larger organisations or shredding services sometimes use "mobile shredding trucks", typically constructed as a box truck with an industrial-size paper shredder mounted inside and space for storage of the shredded materials. Such a unit may also offer the shredding of CDs, DVDs, hard drives, credit cards, and uniforms, among other things.
Kiosks: A "shredding kiosk" is an automated retail machine (or kiosk) that allows public access to a commercial- or industrial-capacity paper shredder. This is an alternative to owning a personal or business paper shredder: the public can use a faster and more powerful shredder, paying for each shredding event rather than purchasing shredding equipment.
Services: Some companies outsource their shredding to "shredding services". These companies either shred on-site, with mobile shredder trucks, or at off-site shredding facilities. Documents that need to be destroyed are often placed in locked bins that are emptied periodically.
Shredding method and output: As well as by size and capacity, shredders are classified according to the method they use and the size and shape of the shreds they produce.
Strip-cut shredders use rotating knives to cut narrow strips as long as the original sheet of paper.
Cross-cut or confetti-cut shredders use two contra-rotating drums to cut rectangular, parallelogram, or lozenge (diamond-shaped) shreds.
Particle-cut or micro-cut shredders create tiny square or circular pieces.
Cardboard shredders are designed specifically to shred corrugated material into either strips or a mesh pallet.
Disintegrators and granulators repeatedly cut the paper at random with rotating knives in a drum until the particles are small enough to pass through a fine mesh.
Hammermills pound the paper through a screen.
Pierce-and-tear shredders have rotating blades that pierce the paper and then tear it apart.
Grinders have a rotating shaft with cutting blades that grind the paper until it is small enough to fall through a screen.
Security levels: There are a number of standards covering the security levels of paper shredders, including the following. Deutsches Institut für Normung (DIN): the previous DIN 32757 standard has now been replaced with DIN 66399.
This is complex, but can be summarized as below:
Level P-1 = ≤ 2,000 mm² particles, or strips ≤ 12 mm wide of any length (for shredding general internal documents such as instructions, forms, expired notices)
Level P-2 = ≤ 800 mm² particles, or strips ≤ 6 mm wide of any length
Level P-3 = ≤ 320 mm² particles, or strips ≤ 2 mm wide of any length (for highly sensitive documents and personal data subject to high protection requirements, such as purchase orders, order confirmations or delivery notes with address data)
Level P-4 = ≤ 160 mm² particles with width ≤ 6 mm (particularly sensitive and confidential data, working documents, customer/client data, invoices, private tax and financial documents)
Level P-5 = ≤ 30 mm² particles with width ≤ 2 mm (data that must be kept secret: balance sheets and profit-and-loss statements, strategy papers, design and engineering documents, personal data)
Level P-6 = ≤ 10 mm² particles with width ≤ 1 mm (secret high-security data: patents, research and development documents, information essential to the organization's existence)
Level P-7 = ≤ 5 mm² particles with width ≤ 1 mm (top secret, highly classified data for the military, embassies, intelligence services)
NSA/CSS: The United States National Security Agency and Central Security Service produce "NSA/CSS Specification 02-01 for High Security Crosscut Paper Shredders" and provide a list of evaluated shredders.
ISO/IEC: The International Organization for Standardization and the International Electrotechnical Commission produce "ISO/IEC 21964 Information technology — Destruction of data carriers". The General Data Protection Regulation (GDPR), which came into force in May 2018, regulates the handling and processing of personal data; ISO/IEC 21964 and DIN 66399 support data protection in business processes.
Destruction of evidence: There have been many instances in which it is alleged that documents were improperly or illegally destroyed by shredding, including:
Oliver North shredded documents relating to the Iran–Contra affair between November 21 and November 25, 1986. During the trial, North testified that on November 21, 22, or 24, he witnessed John Poindexter destroy what may have been the only signed copy of a presidential covert action finding that sought to authorize CIA participation in the November 1985 Hawk missile shipment to Iran.
According to the report of the Paul Volcker Committee, between April and December 2004, Kofi Annan's chef de cabinet, Iqbal Riza, authorized the shredding of thousands of United Nations documents, including the entire chronological files of the Oil-for-Food Programme for the years 1997 through 1999.
The Union Bank of Switzerland used paper shredders to destroy evidence that the company owned property stolen from Jews during the Holocaust by the Nazi government. The shredding was disclosed to the public through the work of Christoph Meili, a security guard working at the bank who happened to wander by a room where the shredding was taking place. Also in the shredding room were books from the German Reichsbank, listing stock accounts for companies involved in the Holocaust, including BASF, Degussa, and Degesch, as well as real-estate records for Berlin properties that had been forcibly taken by the Nazis, placed in Swiss accounts, and then claimed to be owned by UBS. Destruction of such documents was a violation of Swiss law.
Unshredding and forensics: For shredding to achieve its purpose, it should not be possible to reassemble and read the shredded documents. In practice, the feasibility of reconstruction depends on how well the shredding has been done and on the resources put into reconstruction, which in turn should depend on the importance of the document, e.g. whether it concerns a simple personal matter, corporate espionage, a criminal matter, or national security. How easy reconstruction is will depend on:
the size and legibility of the text
whether the document is single- or double-sided
the size and shape of the shredded pieces
the orientation of the material when fed
how effectively the shredded material is further randomized afterwards
whether other processes such as pulping and chemical decomposition are used
Even without a full reconstruction, in some cases useful information can be obtained by forensic analysis of the paper, ink, and cutting method.
Reconstruction examples: After the Iranian Revolution and the takeover of the U.S. embassy in Tehran in 1979, Iranians enlisted local carpet weavers, who reconstructed the pieces by hand. The recovered documents were later released by the Iranian government in a series of books called "Documents from the US espionage Den". The US government subsequently improved its shredding techniques by adding pulverizing, pulping, and chemical decomposition protocols.
Modern computer technology considerably speeds up the process of reassembling shredded documents: the strips are scanned on both sides, and a computer then determines how they should be put together. Robert Johnson of the National Association for Information Destruction has stated that there is a huge demand for document reconstruction, and several companies offer commercial document reconstruction services. For maximum security, documents should be shredded so that the words of the document run through the shredder horizontally (i.e. perpendicular to the blades). Many of the documents in the Enron accounting scandal were fed through the shredder the wrong way, making them easier to reassemble.
In 2003, an effort was underway to recover the shredded archives of the Stasi, the East German secret police: "millions of shreds of paper that panicked Stasi officials threw into garbage bags during the regime's final days in the fall of 1989". As it took three dozen people six years to reconstruct 300 of the 16,000 bags, the Fraunhofer-IPK institute has developed the Stasi-Schnipselmaschine ("Stasi snippet machine") for computerized reconstruction and is testing it in a pilot project.
The DARPA Shredder Challenge 2011 called upon computer scientists, puzzle enthusiasts, and anyone else with an interest in solving complex problems to compete for up to $50,000 by piecing together a series of shredded documents. The Shredder Challenge consisted of five separate puzzles in which the number of documents, the document subject matter, and the method of shredding were varied to present challenges of increasing difficulty. To complete each problem, participants were required to provide the answer to a puzzle embedded in the content of the reconstructed document. The overall prizewinner and prize awarded depended on the number and difficulty of the problems solved.
DARPA declared a winner on December 2, 2011, the winning entry having been submitted 33 days after the challenge began. The winning team, "All Your Shreds Are Belong To U.S.", used a system that combined automated sorting, which picked the best candidate fragment pairings, with human review.

Unshredding and forensics: Forensic identification. The individual shredder that was used to destroy a given document may sometimes be of forensic interest. Shredders display certain device-specific characteristics, or "fingerprints", such as the exact spacing of the blades and the degree and pattern of their wear. By close examination of the shredded material, the minute variations in the size of the paper strips and the microscopic marks on their edges may make it possible to link the material to a specific machine (cf. the forensic identification of typewriters).

Recycling of waste: The resulting shredded paper can be recycled in a number of ways, including:

Animal bedding — to produce a warm and comfortable bed for animals
Void fill and packaging — void fill for the transportation of goods
Briquettes — an alternative to non-renewable fuels
Insulation — shredded newsprint mixed with flame-retardant chemicals and glue to create a sprayable insulation material for wall interiors and the underside of roofing
**Human interactions with insects** Human interactions with insects: Human interactions with insects include both a wide variety of uses, whether practical such as for food, textiles, and dyestuffs, or symbolic, as in art, music, and literature, and negative interactions including damage to crops and extensive efforts to control insect pests. Human interactions with insects: Academically, the interaction of insects and society has been treated in part as cultural entomology, dealing mostly with "advanced" societies, and in part as ethnoentomology, dealing mostly with "primitive" societies, though the distinction is weak and not based on theory. Both academic disciplines explore the parallels, connections and influence of insects on human populations, and vice versa. They are rooted in anthropology and natural history, as well as entomology, the study of insects. Other cultural uses of insects, such as biomimicry, do not necessarily lie within these academic disciplines. Human interactions with insects: More generally, people make a wide range of uses of insects, both practical and symbolic. On the other hand, attitudes to insects are often negative, and extensive efforts are made to kill them. The widespread use of insecticides has failed to exterminate any insect pest, but has caused resistance to commonly-used chemicals in a thousand insect species. Human interactions with insects: Practical uses include as food, in medicine, for the valuable textile silk, for dyestuffs such as carmine, in science, where the fruit fly is an important model organism in genetics, and in warfare, where insects were successfully used in the Second World War to spread disease in enemy populations. One insect, the honey bee, provides honey, pollen, royal jelly, propolis and an anti-inflammatory peptide, melittin; its larvae too are eaten in some societies. Medical uses of insects include maggot therapy for wound debridement. Over a thousand protein families have been identified in the saliva of blood-feeding insects; these may provide useful drugs such as anticoagulants, vasodilators, antihistamines and anaesthetics. Human interactions with insects: Symbolic uses include roles in art, in music (with many songs featuring insects), in film, in literature, in religion, and in mythology. Insect costumes are used in theatrical productions and worn for parties and carnivals. Context: Culture Culture consists of the social behaviour and norms found in human societies and transmitted through social learning. Cultural universals in all human societies include expressive forms like art, music, dance, ritual, religion, and technologies like tool usage, cooking, shelter, and clothing. The concept of material culture covers physical expressions such as technology, architecture and art, whereas immaterial culture includes principles of social organization, mythology, philosophy, literature, and science. This article describes the roles played by insects in human culture so defined. Context: Cultural entomology and ethnoentomology Ethnoentomology developed from the 19th century with early works by authors such as Alfred Russel Wallace (1852) and Henry Walter Bates (1862). Hans Zinsser's classic Rats, Lice and History (1935) showed that insects were an important force in human history. Writers like William Morton Wheeler, Maurice Maeterlinck, and Jean Henri Fabre described insect life and communicated their meaning to people "with imagination and brilliance". 
Frederick Simon Bodenheimer's Insects as Human Food (1951) drew attention to the scope and potential of entomophagy, and showed a positive aspect of insects. Food is the most studied topic in ethnoentomology, followed by medicine and beekeeping. Context: In 1968, Erwin Schimitschek claimed cultural entomology as a branch of insect studies, in a review of the roles insects played in folklore and culture including religion, food, medicine and the arts. In 1984, Charles Hogue covered the field in English, and from 1994 to 1997 Hogue's The Cultural Entomology Digest served as a forum on the field. Hogue argued that "Humans spend their intellectual energies in three basic areas of activity: surviving, using practical learning (the application of technology); seeking pure knowledge through inductive mental processes (science); and pursuing enlightenment to taste a pleasure by aesthetic exercises that may be referred to as the 'humanities.' Entomology has long been concerned with survival (economic entomology) and scientific study (academic entomology), but the branch of investigation that addresses the influence of insects (and other terrestrial Arthropoda, including arachnids and myriapods) in literature, language, music, the arts, interpretive history, religion, and recreation has only become recognized as a distinct field" through Schimitschek's work. Context: Hogue set out the boundaries of the field by saying: "The narrative history of the science of entomology is not part of cultural entomology, while the influence of insects on general history would be considered cultural entomology." He added: "Because the term "cultural" is narrowly defined, some aspects normally included in studies of human societies are excluded." Darrell Addison Posey, noting that the boundary between cultural entomology and ethnoentomology is difficult to draw, cites Hogue as limiting cultural entomology to the influence of insects on "the essence of humanity as expressed in the arts and humanities". Posey notes further that cultural entomology is usually restricted to the study of "advanced", industrialised, and literate societies, whereas ethnoentomology studies "the entomological concerns of 'primitive' or 'noncivilized' societies". Posey is quick to state that the division is artificial, complete with an unjustified us/them bias. Brian Morris similarly criticises the way that anthropologists treat non-Western attitudes to nature as monadic and spiritualist, and contrasts this "in gnostic fashion" with a simplistic treatment of Western, often 17th-century, mechanistic attitudes. Morris considers this "quite unhelpful, if not misleading", and offers instead his own research into the multiple ways that the people of Malawi relate to insects and other animals: "pragmatic, intellectual, realist, practical, aesthetic, symbolic and sacramental." Benefits and costs: Insect ecosystem services. The Millennium Ecosystem Assessment (MEA) report 2005 defines ecosystem services as benefits people obtain from ecosystems, and distinguishes four categories, namely provisioning, regulating, supporting, and cultural. Only a few arthropod species, such as honeybees, ants, mosquitoes, and spiders, are well understood for their influence on humans, yet insects as a whole provide a wide range of ecological goods and services. The Xerces Society calculates the economic impact of four ecological services rendered by insects: pollination, recreation (i.e. "the importance of bugs to hunting, fishing, and wildlife observation, including bird-watching"), dung burial, and pest control. The value has been estimated at $153 billion worldwide. As the ant expert E. O. Wilson observed: "If all mankind were to disappear, the world would regenerate back to the rich state of equilibrium that existed ten thousand years ago. If insects were to vanish, the environment would collapse into chaos." A Nova segment on the American Public Broadcasting Service framed the relationship with insects in an urban context: "We humans like to think that we run the world. But even in the heart of our great cities, a rival superpower thrives ... These tiny creatures live all around us in vast numbers, though we hardly even notice them. But in many ways, it is they who really run the show." The Washington Post stated: "We are flying blind in many aspects of preserving the environment, and that's why we are so surprised when a species like the honeybee starts to crash, or an insect we don't want, the Asian tiger mosquito or the fire ant, appears in our midst. In other words: Start thinking about the bugs." Pests and propaganda: Human attitudes toward insects are often negative, reinforced by sensationalism in the media. This has produced a society that attempts to eliminate insects from daily life. For example, nearly 75 million pounds of broad-spectrum insecticides are manufactured and sold each year for use in American homes and gardens. Annual revenues from insecticide sales to homeowners exceeded $450 million in 2004. Of the roughly one million insect species described so far, no more than 1,000 can be regarded as serious pests, and fewer than 10,000 (about 1%) are even occasional pests. Yet not one species of insect has been permanently eradicated through the use of pesticides. Instead, at least 1,000 species have developed field resistance to pesticides, and extensive harm has been done to beneficial insects, including pollinators such as bees. During the Cold War, the Warsaw Pact countries launched a widespread war against the potato beetle, blaming the introduction of the species from America on the CIA, demonising the species in propaganda posters, and urging children to gather the beetles and kill them. Practical uses: As food. Entomophagy is the eating of insects. Many edible insects are considered a culinary delicacy in some societies around the world, and Frederick Simon Bodenheimer's Insects as Human Food (1951) drew attention to the scope and potential of entomophagy, but the practice is uncommon and even taboo in other societies. Sometimes insects are considered suitable only for the poor in the third world, but in 1975 Victor Meyer-Rochow suggested that insects could help ease global future food shortages and advocated a change in western attitudes towards cultures in which insects were appreciated as a food item. P. J. Gullan and P. S. Cranston felt that the remedy for this may be marketing of insect dishes as suitably exotic and costly to make them acceptable. They also note that some societies in sub-Saharan Africa prefer caterpillars to beef, while Chakravorty et al. (2011) point out that food insects (highly appreciated in North-East India) are more expensive than meat. The economics, i.e., the costs involved in collecting food insects and the money earned through their sale, have been studied in a Laotian setting by Meyer-Rochow et al. (2008).
In Mexico, ant larvae and corixid water boatman eggs are sought out as a form of caviar by gastronomes. In Guangdong, water beetles fetch a high enough price for these insects to be farmed. Especially high prices are fetched in Thailand for the giant water bug Lethocerus indicus. Insects used in food include honey bee larvae and pupae, mopani worms, silkworms, maguey worms, witchetty grubs, crickets, grasshoppers and locusts. In Thailand, there are 20,000 farmers rearing crickets, producing some 7,500 tons per year. Practical uses: In medicine. Insects have been used medicinally in cultures around the world, often according to the Doctrine of Signatures. Thus, the femurs of grasshoppers, which were said to resemble the human liver, were used to treat liver ailments by the indigenous peoples of Mexico. The doctrine was applied in both Traditional Chinese Medicine (TCM) and Ayurveda. TCM uses arthropods for various purposes; for example, centipedes are used to treat tetanus, seizures, and convulsions, while the Chinese black mountain ant, Polyrhachis vicina, is used as a cure-all, especially by the elderly, and extracts have been examined as a possible anti-cancer agent. Ayurveda uses insects such as termites for conditions such as ulcers, rheumatic diseases, anaemia, and pain. The Jatropha leaf miner's larvae are boiled and used to induce lactation, reduce fever, and soothe the gastrointestinal tract. In contrast, the traditional insect medicine of Africa is local and unformalised. The indigenous peoples of Central America used a wide variety of insects medicinally. Mayans used army ant soldiers as living sutures. The venom of the red harvester ant was used to cure rheumatism, arthritis, and poliomyelitis via the immune reaction produced by its sting. Boiled silkworm pupae were taken to treat apoplexy, aphasia, bronchitis, pneumonia, convulsions, haemorrhages, and frequent urination. Honey bee products are used medicinally in apitherapy across Asia, Europe, Africa, Australia, and the Americas, despite the fact that the honey bee was not introduced to the Americas until the colonization by Spain and Portugal. They are by far the most common medical insect product both historically and currently, and the most frequently referenced of these is honey. It can be applied to skin to treat excessive scar tissue, rashes, and burns, and as an eye poultice to treat infection. Honey is taken for digestive problems and as a general health restorative. It is taken hot to treat colds, cough, throat infections, laryngitis, tuberculosis, and lung diseases. Apitoxin (honey bee venom) is applied via direct stings to relieve arthritis, rheumatism, polyneuritis, and asthma. Propolis, a resinous, waxy mixture collected by honeybees and used as a hive insulator and sealant, is often consumed by menopausal women because of its high hormone content, and it is said to have antibiotic, anesthetic, and anti-inflammatory properties. Royal jelly is used to treat anaemia, gastrointestinal ulcers, arteriosclerosis, hypo- and hypertension, and inhibition of sexual libido. Finally, bee bread, or bee pollen, is eaten as a general health restorative, and is said to help treat both internal and external infections.
One of the major peptides in bee venom, melittin, has the potential to treat inflammation in sufferers of rheumatoid arthritis and multiple sclerosis. The rise of antibiotic-resistant infections has sparked pharmaceutical research for new resources, including into arthropods. Maggot therapy uses blowfly larvae to perform wound-cleaning debridement. Cantharidin, the blister-causing oil found in several families of beetles described by the vague common name Spanish fly, has been used as an aphrodisiac in some societies. Blood-feeding insects like ticks, horseflies, and mosquitoes inject multiple bioactive compounds into their hosts. These insects have long been used by practitioners of Eastern medicine to prevent blood clot formation or thrombosis, suggesting possible applications in scientific medicine. Over 1280 protein families have been associated with the saliva of blood-feeding organisms, including inhibitors of platelet aggregation, ADP, arachidonic acid, thrombin, PAF, anticoagulants, vasodilators, vasoconstrictors, antihistamines, sodium channel blockers, complement inhibitors, pore formers, inhibitors of angiogenesis, anaesthetics, AMPs and microbial pattern recognition molecules, and parasite enhancers/activators. Practical uses: In science and technology. Insects play an important role in biological research. Because of its small size, short generation time and high fecundity, the common fruit fly Drosophila melanogaster was selected as a model organism for studies of the genetics of higher eukaryotes. D. melanogaster has been an essential part of studies into principles like genetic linkage, interactions between genes, chromosomal genetics, evolutionary developmental biology, animal behaviour and evolution. Because genetic systems are well conserved among eukaryotes, understanding basic cellular processes like DNA replication or transcription in fruit flies helps scientists to understand those processes in other eukaryotes, including humans. The genome of D. melanogaster was sequenced in 2000, reflecting the fruit fly's important role in biological research. 70% of the fly genome is similar to the human genome, supporting the Darwinian theory of evolution from a single origin of life. Practical uses: Some hemipterans are used to produce dyestuffs such as carmine (also called cochineal). The scale insect Dactylopius coccus produces the brilliant red-coloured carminic acid to deter predators. Up to 100,000 scale insects are needed to make a kilogram (2.2 lbs) of cochineal dye. A similarly enormous number of lac bugs are needed to make a kilogram of shellac, a brush-on colourant and wood finish. Additional uses of this traditional product include the waxing of citrus fruits to extend their shelf-life, and the coating of pills to moisture-proof them, provide slow release, or mask the taste of bitter ingredients. Kermes is a red dye from the dried bodies of the females of a scale insect in the genus Kermes, primarily Kermes vermilio. The insects are native to the Mediterranean region and live on the sap of the kermes oak. They were used as a red dye by the ancient Greeks and Romans. The kermes dye is a rich red, and has good colour fastness in silk and wool. Practical uses: Insect attributes are sometimes mimicked in architecture, as at the Eastgate Centre, Harare, which uses passive cooling, storing heat in the morning and releasing it in the warm parts of the day.
The target of this piece of biomimicry is the structure of the mounds of termites such as Macrotermes michaelseni, which effectively cool the nests of these social insects. The properties of the Namib desert beetle's exoskeleton, in particular its wing-cases (elytra), which have bumps with hydrophilic (water-attracting) tips and hydrophobic (water-shedding) sides, have been mimicked in a film coating designed for the British Ministry of Defence to capture water in arid regions. Practical uses: In textiles. Silkworms, the caterpillars and pupae of the moth Bombyx mori, have been reared to produce silk in China from the Neolithic Yangshao period onwards, c. 5000 BC. Production spread to India by 140 AD. The caterpillars are fed on mulberry leaves. The cocoon, produced after the fourth moult, is covered with a continuous filament of the silk protein, fibroin, gummed together with sericin. In the traditional process, the gum is removed by soaking in hot water, and the silk is then unwound from the cocoon and reeled. Filaments are spun together to make silk thread. Commerce in silk between China and countries to its west began in ancient times, with silk known from an Egyptian mummy of 1070 BC and later to the ancient Greeks and Romans. The Silk Road leading west from China was opened in the 2nd century AD, helping to drive trade in silk and other goods. Practical uses: In warfare. The use of insects for warfare may have been attempted in the Middle Ages or earlier, but was first systematically researched by several nations during the 20th century. It was put into practice by the Japanese army's Unit 731 in attacks on China during the Second World War, killing almost 500,000 Chinese people with fleas infected with plague and flies infected with cholera. Also in the Second World War, the French and Germans explored the use of Colorado beetles to destroy enemy potato crops. During the Cold War, the US Army considered using yellow fever mosquitoes to attack Soviet cities. Symbolic uses: In mythology and folklore. Insects have appeared in mythology around the world from ancient times. Among the insect groups featuring in myths are the bee, butterfly, cicada, fly, dragonfly, praying mantis and scarab beetle. Scarab beetles held religious and cultural symbolism in ancient Egypt, Greece and some shamanistic Old World cultures. The ancient Chinese regarded cicadas as symbols of rebirth or immortality. In the Homeric Hymn to Aphrodite, the goddess Aphrodite retells the legend of how Eos, the goddess of the dawn, asked Zeus to let her lover Tithonus live forever as an immortal. Zeus granted her request, but because Eos forgot to ask that Tithonus also be made ageless, he never died yet grew ever older. Eventually, he became so tiny and shriveled that he turned into the first cicada. In an ancient Sumerian poem, a fly helps the goddess Inanna when her husband Dumuzid is being chased by galla demons. Flies also appear on Old Babylonian seals as symbols of Nergal, the god of death, and fly-shaped lapis lazuli beads were often worn by many different cultures in ancient Mesopotamia, along with other kinds of fly-jewellery. The Akkadian Epic of Gilgamesh contains allusions to dragonflies, signifying the impossibility of immortality. Amongst the Arrernte people of Australia, honey ants and witchetty grubs served as personal clan totems.
Among the San bushmen of the Kalahari, it is the praying mantis which holds much cultural significance, associated with creation and zen-like patience in waiting. Insects feature in folklore around the world. In China, farmers traditionally regulated their crop planting according to the Awakening of the Insects, when temperature shifts and monsoon rains bring insects out of hibernation. Most "awakening" customs are related to eating snacks like pancakes, parched beans, pears, and fried corn, symbolizing harmful insects in the field. Symbolic uses: In the Great Lakes region of the United States, there is an annual Woollybear Festival that has been celebrated for over 40 years. The larvae of the species Pyrrharctia isabella (commonly known as the isabella tiger moth), with their 13 distinct segments of black and reddish brown, have the reputation in common folklore of being able to forecast the coming winter weather. There is a common misconception that cockroaches are serious vectors of disease, but while they can carry bacteria they do not travel far, and have no bite or sting. Their shells contain a protein, arylphorin, implicated in asthma and other respiratory conditions. Among the deep-sea fishermen of Greenock in Scotland, there is a belief that if a fly falls into a glass from which a person has been drinking, or is about to drink, it is a sure omen of good luck to the drinker. Many people believe the urban myth that the daddy longlegs (Opiliones) has the most poisonous bite in the spider world, but that the fangs are too small to penetrate human skin. This is untrue on several counts. None of the known species of harvestmen have venom glands; their chelicerae are not hollowed fangs but grasping claws that are typically very small and definitely not strong enough to break human skin. In Japan, the emergence of fireflies and rhinoceros beetles signifies the anticipated changing of the seasons. Symbolic uses: In religion. In the Brazilian Amazon, members of the Tupí–Guaraní language family have been observed using Pachycondyla commutata ants during female rite-of-passage ceremonies, and prescribing the sting of Pseudomyrmex spp. for fevers and headaches. The red harvester ant Pogonomyrmex californicus has been widely used by natives of Southern California and Northern Mexico for hundreds of years in ceremonies conducted to help tribe members acquire spirit helpers through hallucination. During the ritual, young men are sent away from the tribe and consume large quantities of live, unmasticated ants under the supervision of an elderly member of the tribe. Ingestion of the ants is said to lead to a prolonged state of unconsciousness, in which dream helpers appear and serve as allies to the dreamer for the rest of his life. Symbolic uses: In art. Both the symbolic form and the actual body of insects have been used to adorn humans in ancient and modern times. A recurrent theme for ancient cultures in Europe and the Near East regarded the sacred image of a bee or human with insect features. Often referred to as the bee "goddess", these images were found in gems and stones. An onyx gem from Knossos (ancient Crete) dating to approximately 1500 BC illustrates a Bee goddess with bull horns above her head.
In this instance, the figure is surrounded by dogs with wings, most likely representing Hecate and Artemis – goddesses of the underworld, similar to the Egyptian gods Akeu and Anubis. Beetlewing art is an ancient craft technique using iridescent beetle wings, practiced traditionally in Thailand, Myanmar, India, China and Japan. Beetlewing pieces are used as an adornment to paintings, textiles and jewelry. Different species of metallic wood-boring beetle wings were used depending on the region, but traditionally the most valued were those from beetles belonging to the genus Sternocera. The practice comes from across Asia and Southeast Asia, especially Thailand, Myanmar, Japan, India and China. In Thailand, beetlewings were favoured for decorating clothing (shawls and Sabai cloth) and jewellery in former court circles. Symbolic uses: The Canadian entomologist Charles Howard Curran's 1945 book, Insects of the Pacific World, described women from India and Sri Lanka who kept 1½-inch (38 mm) long, iridescent greenish-coppery beetles of the species Chrysochroa ocellata as pets. These living jewels were worn on festive occasions, probably with a small chain attached to one leg anchored to the clothing to prevent escape. Afterwards, the insects were bathed, fed, and housed in decorative cages. Living jewelled beetles have also been worn and kept as pets in Mexico. Butterflies have long inspired humans with their life cycle, color, and ornate patterns. The novelist Vladimir Nabokov was also a renowned butterfly expert. He described and illustrated many butterfly species, stating: I discovered in nature the nonutilitarian delights that I sought in art. Both were a form of magic, both were games of intricate enchantment and deception. Symbolic uses: It was the aesthetic complexity of insects that led Nabokov to reject natural selection. The naturalist Ian MacRae writes of butterflies: the animal is at once awkward, flimsy, strange, bouncy in flight, yet beautiful and immensely sympathetic; it is painfully transient, albeit capable of extreme migrations and transformations. Images and phrases such as "kaleidoscopic instabilities," "oxymoron of similarities," "rebellious rainbows," "visible darkness" and "souls of stone" have much in common. They bring together the two terms of a conceptual contradiction, thereby facilitating the mixing of what should be discrete and mutually exclusive categories ... In positing such questions, butterfly science, an inexhaustible, complex, and finely nuanced field, becomes not unlike the human imagination, or the field of literature itself. In the natural history of the animal, we begin to sense its literary and artistic possibilities. Symbolic uses: The photographer Kjell Sandved spent 25 years documenting all 26 characters of the Latin alphabet using the wing patterns of butterflies and moths as The Butterfly Alphabet. In 2011, the artist Anna Collette created over 10,000 individual ceramic insects for "Stirring the Swarm" at Nottingham Castle. Reviews of the exhibit offered a compelling narrative for cultural entomology: "the unexpected use of materials, dark overtones, and the straightforward impact of thousands of tiny multiples within the space. The exhibition was at once both exquisitely beautiful and deeply repulsive, and this strange duality was fascinating."
Symbolic uses: In literature and film. The ancient Greek playwright Aeschylus has a gadfly pursue and torment Io, a maiden associated with the moon, watched constantly by the eyes of the herdsman Argus, associated with all the stars: "Io: Ah! Hah! Again the prick, the stab of gadfly-sting! O earth, earth, hide, the hollow shape—Argus—that evil thing—the hundred-eyed." William Shakespeare, inspired by Aeschylus, has Tom o'Bedlam in King Lear, "Whom the foul fiend hath led through fire and through flame, through ford and whirlpool, o'er bog and quagmire", driven mad by the constant pursuit. In Antony and Cleopatra, Shakespeare similarly likens Cleopatra's hasty departure from the Actium battlefield to that of a cow chased by a gadfly. H. G. Wells introduced giant wasps in his 1904 novel The Food of the Gods and How It Came to Earth, making use of the newly discovered growth hormones to lend plausibility to his science fiction. Lafcadio Hearn's essay Butterflies analyses the treatment of the butterfly in Japanese literature, both prose and poetry. He notes that these often allude to Chinese tales, such as that of the young woman whom the butterflies took to be a flower. He translates 22 Japanese haiku poems about butterflies, including one by the haiku master Matsuo Bashō, said to suggest happiness in springtime: "Wake up! Wake up!—I will make thee my comrade, thou sleeping butterfly." The novelist Vladimir Nabokov was the son of a professional lepidopterist, and was interested in butterflies himself. He wrote his novel Lolita while travelling on his annual butterfly-collection trips in the western United States. He eventually became a leading lepidopterist. This is reflected in his fiction, where for example The Gift devotes two whole chapters (of five) to the tale of a father and son on a butterfly expedition. Horror films involving insects, sometimes called "big bug movies", include the pioneering 1954 Them!, featuring giant ants mutated by radiation, and the 1957 The Deadly Mantis. The Far Side, a newspaper cartoon, has been used by Professor Michael Burgett as a teaching tool in his entomology class; The Far Side and its author Gary Larson have been acknowledged by the biologist Dale H. Clayton and his colleagues for "the enormous contribution" Larson has made to their field through his cartoons. Symbolic uses: In music. Some popular and influential pieces of music have had insects as their subjects. The French Renaissance composer Josquin des Prez wrote a frottola entitled El Grillo (lit. 'The Cricket'). It is among the most frequently sung of his works. Nikolai Rimsky-Korsakov wrote the "Flight of the Bumblebee" in 1899–1900 as part of his opera The Tale of Tsar Saltan. The piece is one of the most recognizable in the classical repertoire. The bumblebee in the story is a prince who has been transformed into an insect so that he can fly off to visit his father. The play upon which the opera was based – written by Alexander Pushkin – originally had two more insect themes: the Flight of the Mosquito and the Flight of the Fly. The Hungarian composer Béla Bartók explained in his diary that, in his piece From the Diary of a Fly, for piano (Mikrokosmos Vol. 6/142), he was attempting to depict the desperate attempts of a fly caught in a cobweb to escape. The jazz musician and philosophy professor David Rothenberg plays duets with singing insects including cicadas, crickets, and beetles.
Symbolic uses: In astronomy and cosmology. In astronomy, constellations named after arthropods include the zodiacal Scorpius, the scorpion, and Musca, the fly, also known as Apis, the bee, in the deep southern sky. Musca, the only recognised insect constellation, was named by Petrus Plancius in 1597. Symbolic uses: "The Bug Nebula", also called "The Butterfly Nebula", is a more recent discovery. Designated NGC 6302, it is one of the brightest and most studied planetary nebulae, popular in that its features draw the attention of many researchers. It is located in the constellation Scorpius. It is perfectly bipolar, and until recently the central star was unobservable, clouded by gas, but estimated to be one of the hottest in the galaxy – 200,000 degrees Fahrenheit, perhaps 35 times hotter than the Sun. The honey bee played a central role in the cosmology of the Mayan people. The stucco figure at the temples of Tulum known as "Ah Mucen Kab" – the Diving Bee God – bears resemblance to the insect in the Codex Tro-Cortesianus identified as a bee. Such reliefs might have indicated towns and villages that produce honey. Modern Mayan authorities say the figure also has a connection to modern cosmology. The Mayan mythology expert Miguel Angel Vergara relates that the Maya held a belief that bees came from Venus, the "Second Sun." The relief might be indicative of another "insect deity", that of Xux Ex, the Mayan "wasp star." The Maya embodied Venus in the form of the god Kukulkán (also known as or related to Gukumatz and Quetzalcoatl in other parts of Mexico); Quetzalcoatl is a Mesoamerican deity whose name in Nahuatl means "feathered serpent". The cult was the first Mesoamerican religion to transcend the old Classic Period linguistic and ethnic divisions. This cult facilitated communication and peaceful trade among peoples of many different social and ethnic backgrounds. Although the cult was originally centered on the ancient city of Chichén Itzá in the modern Mexican state of Yucatán, it spread as far as the Guatemalan highlands. Symbolic uses: In costumes. Bee and other insect costumes are worn in a variety of countries for parties, carnivals and other celebrations. Ovo is an insect-themed production by the world-renowned Canadian entertainment company Cirque du Soleil. The show looks at the world of insects and its biodiversity: the insects go about their daily lives until a mysterious egg appears in their midst, leaving them awestruck by this iconic object that represents the enigma and cycles of their lives. The costuming was a fusion of arthropod body types blended with superhero armour. Liz Vandal, the lead costume designer, has a special affinity for the world of the insect: "When I was just a kid I put rocks down around the yard near the fruit trees and I lifted them regularly to watch the insects who had taken up residence underneath them. I petted caterpillars and let butterflies into the house. So when I learned that OVO was inspired by insects, I immediately knew that I was in a perfect position to pay tribute to this majestic world with my costumes. All insects are beautiful and perfect; it is what they evoke for each of us that changes our perception of them." The Webby award-winning video series Green Porno was created to showcase the reproductive habits of insects.
Jody Shapiro and Rick Gilbert were responsible for translating the research and concepts that Isabella Rossellini envisioned into the paper-and-paste costumes that directly contribute to the series' unique visual style. The film series was driven by the creation of costumes that translate scientific research into "something visual and how to make it comical."
**Tagmeme** Tagmeme: A tagmeme is the smallest functional element in the grammatical structure of a language. The term was introduced in the 1930s by the linguist Leonard Bloomfield, who defined it as the smallest meaningful unit of grammatical form (analogous to the morpheme, defined as the smallest meaningful unit of lexical form). The term was later adopted, and its meaning broadened, by Kenneth Pike and others beginning in the 1950s, as the basis for their tagmemics. Bloomfield's scheme: According to the scheme set out by Leonard Bloomfield in his book Language (1933), the tagmeme is the smallest meaningful unit of grammatical form. A tagmeme consists of one or more taxemes, where a taxeme is a primitive grammatical feature, in the same way that a phoneme is a primitive phonological feature. Taxemes and phonemes do not as a rule have meaning on their own, but combine into tagmemes and morphemes respectively, which carry meaning. Bloomfield's scheme: For example, an utterance such as "John runs" is a concrete example of a tagmeme (an allotagm) whose meaning is that an actor performs an action. The taxemes making up this tagmeme include the selection of a nominative expression, the selection of a finite verb expression, and the ordering of the two such that the nominative expression precedes the finite verb expression. Bloomfield's scheme: Bloomfield makes the taxeme and tagmeme part of a system of emic units: The smallest (and meaningless, when taken by itself) unit of linguistic signaling is the pheneme; this may be either lexical (phoneme) or grammatical (taxeme). The smallest meaningful unit of linguistic signaling is the glosseme, either lexical (morpheme) or grammatical (tagmeme). Bloomfield's scheme: The meaning of a glosseme is a noeme, the meaning of either a morpheme (sememe) or a tagmeme (episememe).More generally, he defines any meaningful unit of linguistic signaling (not necessarily smallest) as a linguistic form, and its meaning as a linguistic meaning; it may be either a lexical form (with a lexical meaning) or a grammatical form (with a grammatical meaning). Pike and tagmemics: Bloomfield's term was adopted by Kenneth Pike and others to denote what they had previously been calling the grammeme (earlier grameme). In Pike's approach, consequently called tagmemics, the hierarchical organization of levels (e.g. in syntax: word, phrase, sentence, paragraph, discourse) results from the fact that the elements of a tagmeme on a higher level (e.g. 'sentence') are analyzed as syntagmemes on the next lower level (e.g. 'phrase'). Pike and tagmemics: The tagmeme is the correlation of a syntagmatic function (e.g. subject, object) and paradigmatic fillers (e.g. nouns, pronouns or proper nouns as possible fillers of the subject position). Tagmemes combine to form a syntagmeme, a syntactic construction consisting of a sequence of tagmemes. Pike and tagmemics: Tagmemics as a linguistic methodology was developed by Pike in his book Language in Relation to a Unified Theory of the Structure of Human Behavior, 3 vol. (1954–1960). It was primarily designed to assist linguists to efficiently extract coherent descriptions out of corpora of fieldwork data. Tagmemics is particularly associated with the Summer Institute of Linguistics, an association of missionary linguists devoted largely to Bible translations, of which Pike was an early member. 
Pike and tagmemics: Tagmemics applies, at higher levels of linguistic analysis (grammatical and semantic), the kind of distinction made between phone and phoneme in phonology and phonetics; for instance, contextually conditioned synonyms are considered different instances of a single tagmeme, just as sounds which are (in a given language) contextually conditioned are allophones of a single phoneme. The emic and etic distinction also applies in other social sciences.
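Pike's slot-and-filler idea lends itself to a simple data-structure illustration. The toy sketch below is mine, not standard tagmemics notation: a tagmeme pairs a syntagmatic function (slot) with the paradigmatic class that may fill it, and a syntagmeme is an ordered sequence of tagmemes, applied here to Bloomfield's "John runs" example.

```python
# Toy model of tagmemes (slot + filler class) and a clause-level syntagmeme.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagmeme:
    slot: str          # syntagmatic function, e.g. "Subject"
    filler_class: str  # paradigmatic class that may fill the slot

# A minimal clause syntagmeme: Subject slot followed by Predicate slot,
# echoing the actor-action tagmeme of "John runs".
CLAUSE = (Tagmeme("Subject", "nominative expression"),
          Tagmeme("Predicate", "finite verb expression"))

def analyze(words: list[str], classes: list[str]) -> list[str]:
    """Pair each word's class with the syntagmeme's slots, in order."""
    return [f"{w}: {t.slot} filled by {c}"
            for w, c, t in zip(words, classes, CLAUSE)]

print(analyze(["John", "runs"],
              ["nominative expression", "finite verb expression"]))
```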
**Solar eclipse of January 27, 2055** Solar eclipse of January 27, 2055: A partial solar eclipse will occur on Wednesday, January 27, 2055. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. It will be visible across North America. Related eclipses: Solar eclipses 2054–2058. This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit.

Tritos
Preceded: Solar eclipse of February 28, 2044
Followed: Solar eclipse of December 27, 2065

Tzolkinex
Preceded: Solar eclipse of December 16, 2047
Followed: Solar eclipse of March 11, 2062
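The quoted semester spacing (about 177 days and 4 hours) can be checked with back-of-envelope date arithmetic. The sketch below simply steps outward from the January 27, 2055 date; actual eclipse dates in the series drift from this naive calculation.

```python
# Rough semester-series stepping from the January 27, 2055 eclipse.
from datetime import datetime, timedelta

SEMESTER = timedelta(days=177, hours=4)  # approximate semester length
eclipse = datetime(2055, 1, 27)

for k in range(1, 4):
    # Each step lands near (not exactly on) the next eclipse in the series.
    print(f"+{k} semester(s): {(eclipse + k * SEMESTER).date()}")
```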
**In situ polymerization** In situ polymerization: In polymer chemistry, in situ polymerization is a preparation method that occurs "in the polymerization mixture" and is used to develop polymer nanocomposites from nanoparticles. There are numerous unstable oligomers (molecules) which must be synthesized in situ (i.e. in the reaction mixture, because they cannot be isolated on their own) for use in various processes. The in situ polymerization process consists of an initiation step followed by a series of polymerization steps, which results in the formation of a hybrid between polymer molecules and nanoparticles. Nanoparticles are initially dispersed in a liquid monomer or a precursor of relatively low molecular weight. Upon the formation of a homogeneous mixture, initiation of the polymerization reaction is carried out by addition of an adequate initiator, which is exposed to a source of heat, radiation, etc. After the polymerization mechanism is completed, a nanocomposite is produced, which consists of polymer molecules bound to nanoparticles. In situ polymerization: In order to perform the in situ polymerization of precursor polymer molecules to form a polymer nanocomposite, certain conditions must be fulfilled, which include the use of low-viscosity pre-polymers (viscosity typically less than 1 Pa·s), a short period of polymerization, the use of a polymer with advantageous mechanical properties, and no formation of side products during the polymerization process. Advantages and Disadvantages: There are several advantages of the in situ polymerization process, which include the use of cost-effective materials, being easy to automate, and the ability to integrate with many other heating and curing methods. Some downsides of this preparation method, however, include the limited availability of usable materials, the short time period in which to execute the polymerization process, and the need for expensive equipment. The next sections cover various examples of polymer nanocomposites produced using the in situ polymerization technique, and their real-life applications. Clay Nanocomposites: Towards the end of the 20th century, Toyota Motor Corp devised the first commercial application of the clay-polyamide-6 nanocomposite, which was prepared via in situ polymerization. Once Toyota laid the groundwork for polymer layered silicate nanocomposites, extensive research in this particular area followed. Clay nanocomposites can experience a significant increase in strength, thermal stability, and barrier properties upon addition of a minute portion of nanofiller into the polymer matrix. A standard technique to prepare clay nanocomposites is in situ polymerization, which consists of intercalation of the monomer with the clay surface, followed by initiation by the functional group in the organic cation and then polymerization. A study by Zeng and Lee investigated the role of the initiator in the in situ polymerization process of clay nanocomposites. One of the major findings was that the more favorable nanocomposite product was produced with a more polar monomer and initiator. Carbon Nanotubes (CNT): In situ polymerization is an important method of preparing polymer-grafted nanotubes using carbon nanotubes.
Carbon Nanotubes (CNT): Properties. Due to their remarkable mechanical, thermal and electronic properties, including high conductivity, large surface area, and excellent thermal stability, carbon nanotubes (CNT) have been heavily studied since their discovery to develop various real-world applications. Two particular applications to which carbon nanotubes have made major contributions are strengthening composites as filler material and energy production via thermally conductive composites. Carbon Nanotubes (CNT): Types of CNT. Currently, the two principal types of carbon nanotubes are single-walled nanotubes (SWNT) and multi-walled nanotubes (MWNT). Carbon Nanotubes (CNT): Advantages of In Situ Polymerization Using CNT. In situ polymerization offers several advantages in the preparation of polymer-grafted nanotubes compared to other methods. First and foremost, it allows polymer macromolecules to attach to CNT walls. Additionally, the resulting composite is miscible with most types of polymers. Unlike solution or melt processing, in situ polymerization can prepare insoluble and thermally unstable polymers. Lastly, in situ polymerization can achieve stronger covalent interactions between polymer and CNTs earlier in the process. Carbon Nanotubes (CNT): Applications. Recent improvements in the in situ polymerization process have led to the production of polymer-carbon nanotube composites with enhanced mechanical properties. With regard to their energy-related applications, carbon nanotubes have been used to make electrodes, one specific example being the CNT/PMMA composite electrode. In situ polymerization has been studied to streamline the construction process of such electrodes. Huang, Vanhaecke, and Chen found that in situ polymerization can potentially produce composites of conductive CNTs at large scale. Some aspects of in situ polymerization that can help achieve this are that it is cost-effective to operate, requires minimal sample, has high sensitivity, and offers many promising environmental and bioanalytical applications. Biopharmaceuticals: Proteins, DNAs, and RNAs are just a few examples of biopharmaceuticals that hold the potential to treat various disorders and diseases, ranging from cancer to infectious diseases. However, due to certain undesirable properties such as poor stability, susceptibility to enzyme degradation, and insufficient capability to penetrate biological barriers, the application of such biopharmaceuticals in delivering medical treatment has been severely hindered. The formation of polymer-biomacromolecule nanocomposites via in situ polymerization offers an innovative means of overcoming these obstacles and improving the overall effectiveness of biopharmaceuticals. Recent studies have demonstrated how in situ polymerization can be implemented to improve the stability, bioactivity, and barrier-crossing ability of biopharmaceuticals. Biopharmaceuticals: Types of Biomolecule Polymer Nanocomposites. The two main types of nanocomposites formed by in situ polymerization are 1) biomolecule-linear polymer hybrids, which are linear or have a star-like shape and contain covalent bonds between individual polymer chains and the biomolecular surface, and 2) biomolecule-crosslinked polymer nanocapsules, which are nanocapsules with biomacromolecules centered within the polymer shells.
Biopharmaceuticals: In Situ Polymerization Methods for Biomolecules. Biomolecule-linear polymer hybrids are formed via "grafting-from" polymerization, which is an in situ approach that differs from the standard "grafting-to" polymerization. Whereas "grafting-to" polymerization involves the straightforward attachment of pre-formed polymers to the biomolecule of choice, the "grafting-from" method takes place on proteins that are pre-modified with initiators. Examples of "grafting-from" polymerization include atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT). These methods are similar in that they both lead to narrow molecular weight distributions and can make block copolymers. On the other hand, they each have distinct properties that need to be analyzed on a case-by-case basis. For example, ATRP is sensitive to oxygen whereas RAFT is insensitive to oxygen; in addition, RAFT has a much greater compatibility with monomers than ATRP. Radical polymerization with crosslinkers is the other in situ polymerization method, and this process leads to the formation of biomolecule-crosslinked polymer nanocapsules. This process produces nanogels/nanocapsules via a covalent or non-covalent approach. In the covalent approach, the two steps are the conjugation of acryloyl groups to the protein followed by in situ free radical polymerization. In the non-covalent approach, proteins are entrapped within nanocapsules. Biopharmaceuticals: Protein Nanogels. Nanogels, which are microscopic hydrogel particles held together by a cross-linked polymer network, offer a desirable mode of drug delivery that has a variety of biomedical applications. In situ polymerization can be used to prepare protein nanogels that help facilitate the storage and delivery of protein. The preparation of such nanogels via the in situ polymerization method begins with free proteins dispersed in an aqueous solution along with cross-linkers and monomers, followed by addition of radical initiators, which leads to the polymerization of a nanogel polymer shell that encloses a protein core. Additional modification of the polymeric nanogel enables delivery to specific target cells. Three classes of in situ polymerized nanogels are 1) direct covalent conjugation via chemical modifications, 2) noncovalent encapsulation, and 3) cross-linking of preformed crosslinkable polymers. Protein nanogels have tremendous applications for cancer treatment, vaccination, diagnosis, regenerative medicine, and therapies for loss-of-function genetic diseases. In situ polymerized nanogels are capable of delivering the appropriate amount of protein to the site of treatment; certain chemical and physical factors including pH, temperature, and redox potential manage the protein delivery process of nanogels. Urea Formaldehyde (UF) and Melamine Formaldehyde (MF): Urea-formaldehyde (UF) and melamine-formaldehyde (MF) encapsulation systems are other examples that utilize in situ polymerization. This type of in situ polymerization involves a chemical encapsulation technique very similar to interfacial coating. The distinguishing characteristic of in situ polymerization is that no reactants are included in the core material. All polymerization occurs in the continuous phase, rather than on both sides of the interface between the continuous phase and the core material. In situ polymerization of such formaldehyde systems usually involves the emulsification of an oil phase in water.
Then, water-soluble urea/melamine formaldehyde resin monomers are added, which are allowed to disperse. The initiation step occurs when acid is added to lower the pH of the mixture. Crosslinking of the resins completes the polymerization process and results in a shell of polymer-encapsulated oil droplets.
**PIP5K1B** PIP5K1B: Phosphatidylinositol-4-phosphate 5-kinase type-1 beta is an enzyme that in humans is encoded by the PIP5K1B gene. Abnormal silencing of the PIP5K1B gene contributes to the cytoskeletal defects seen in Friedreich's ataxia.
**Lacny** Lacny: The Lacny or Lacny cycle is a chess problem theme named after Ľudovít Lačný, the first person to demonstrate the idea, in 1949. It is an example of lines of play being cyclically related: in one phase of play, the Black defences a, b and c are answered by the White mates A, B and C respectively; in another phase, those same defences a, b and c are answered by the White mates B, C and A respectively. Lacny: The theme can be understood by reference to the problem to the right: this is the first problem to demonstrate the idea, by Lačný himself (first prize at the Przepiorka Memorial, 1949); it has been much reproduced. The set play is:

1...Nh2 [a] 2.Qd4# [A]
1...c1=Q [b] 2.Ng2# [B]
1...c3 [c] 2.Qe4# [C]

The key to the solution is 1.Nd2 (threatening 2.Nf1#), after which the mates are changed thus:

1...Nh2 [a] 2.Ng2# [B]
1...c1=Q [b] 2.Qe4# [C]
1...c3 [c] 2.Qd4# [A]

As well as in set play (as in this example), the theme can be shown in tries, in multiple solutions, or in twins. Lacny: The scheme can be expanded to include more defences; in a fivefold Lacny, for example, the defences a, b, c, d and e are met with the mates A, B, C, D and E respectively in one phase and B, C, D, E and A respectively in another. The cycle can also be extended over three phases to make a complete Lacny cycle; here, the defences a, b and c are answered by the mates A, B and C respectively in one phase; by B, C and A respectively in another; and by C, A and B respectively in a third. This is considerably harder to achieve than the "simple" Lacny, and there are relatively few examples. Lacny: In the related threat Lacny, short-cut Lacny or Dombro-Lacny, in one phase A is threatened, and defence b leads to mate B while defence c leads to mate C; in another phase B is threatened and defence b leads to mate C while defence c leads to mate A. Once, problems following this scheme were also called Lacnys, but now a distinction tends to be drawn between the two (Peter Gvozdjak in Cyclone suggests this scheme should be called the Shedey cycle after its originator, Sergei Shedey). There are a number of other themes featuring cyclic play in different phases, including the Kiss and Djurasevic cycles.
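The defining condition (the same defences receive mates that are a nontrivial cyclic rotation of each other across phases) is mechanical enough to check programmatically. The sketch below is illustrative; the function name and dictionary encoding are mine, and the data comes from Lačný's 1949 problem quoted above.

```python
# Check whether two phases of play realize a Lacny cycle: the phase-2 mates
# must be a nontrivial cyclic rotation of the phase-1 mates over the same
# defences.
def is_lacny(phase1: dict, phase2: dict) -> bool:
    if set(phase1) != set(phase2):
        return False
    defences = sorted(phase1)            # fix an order for the defences
    m1 = [phase1[d] for d in defences]
    m2 = [phase2[d] for d in defences]
    n = len(m1)
    return any(m2 == m1[r:] + m1[:r] for r in range(1, n))

# Lacny's 1949 problem: set-play mates A,B,C become B,C,A after the key 1.Nd2.
set_play = {"Nh2": "Qd4#", "c1=Q": "Ng2#", "c3": "Qe4#"}
post_key = {"Nh2": "Ng2#", "c1=Q": "Qe4#", "c3": "Qd4#"}
print(is_lacny(set_play, post_key))  # True: mates rotated A,B,C -> B,C,A
```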
**The International Journal of Biochemistry & Cell Biology** The International Journal of Biochemistry & Cell Biology: The International Journal of Biochemistry & Cell Biology is a monthly peer-reviewed scientific journal published by Elsevier, covering research in all areas of biochemistry and cell biology. The editor-in-chief is Geoffrey J. Laurent (University of Western Australia). The journal was established in 1970 as International Journal of Biochemistry and obtained its current title in 1995. Abstracting and indexing: The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2013 impact factor of 4.240.
**Malayalam WordNet** Malayalam WordNet: Malayalam WordNet (പദശൃംഖല) is an online WordNet created for the Malayalam language. It has been developed by the Department of Computer Science, Cochin University Of Science And Technology. History: The first WordNet to be created was the Princeton English WordNet. WordNet was created in the Cognitive Science Laboratory of Princeton University under the direction of Professor G. A. Miller, starting in 1985. It was followed by EuroWordNet for European languages, based on Princeton WordNet. Hindi WordNet was the first Indian-language WordNet to be created. It was developed by the Natural Language Processing group at the Center for Indian Language Technology. It was followed by IndoWordNet, which was developed for 18 Indian languages under the guidance of Dr. Pushpak Bhattacharya, Indian Institute of Technology Bombay. A WordNet for the Malayalam language was developed as part of IndoWordNet under the guidance of Dr. K.P. Soman and Dr. S. Rajendran at Amrita Vishwa Vidyapeetham, Coimbatore. Features: Malayalam WordNet is a crowd-sourced project. IndoWordNet is publicly browsable, but it is not available to edit. Malayalam WordNet allows users to add data to the WordNet in a controlled crowd-sourcing manner. Either a set of experts or the users themselves can review the entries added by other members, which helps in maintaining consistent data throughout. It also has JSON and XML interfaces, which help programmers interact with the WordNet. It is highly useful for researchers and language experts as well as application developers. Team Members: Malayalam WordNet has been developed by the Department of Computer Science, Cochin University Of Science And Technology. The team is headed by Dr. Sumam Mary Idicula (Professor and Head, Department of Computer Science). The team also includes Drishya Gopinath and Varghese K. Aniyan. Relationships covered: It gives information about the meaning of the word, its position in the ontology, an example sentence for the synset, and the following relationships:

Synsets/synonyms
Hyponymy and hypernymy
Holonymy
Meronymy
Antonyms

Release: The alpha version was launched on February 1, 2016. The final version was released in April 2016.
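Since the article says the WordNet exposes JSON and XML interfaces for programmers, a client might look something like the sketch below. The endpoint URL and query parameter are hypothetical placeholders; the actual API shape is not documented here.

```python
# Hypothetical client sketch for a JSON interface like the one described
# above; the URL and the "word" parameter are assumptions, not documented.
import json
import urllib.request

def lookup_synsets(word: str) -> dict:
    """Fetch synset data for a Malayalam word from a (hypothetical) endpoint."""
    url = f"https://example.org/malayalam-wordnet/api/synsets?word={word}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Expected use: inspect synonyms, hyponyms/hypernyms, holonyms, meronyms,
# and antonyms from the returned JSON, matching the relationships listed above.
```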
**Roberts's triangle theorem** Roberts's triangle theorem: Roberts's triangle theorem, a result in discrete geometry, states that every simple arrangement of n lines has at least n−2 triangular faces. Thus, three lines form a triangle, four lines form at least two triangles, five lines form at least three triangles, etc. It is named after Samuel Roberts, a British mathematician who published it in 1889. Statement and example: The theorem states that every simple arrangement of n lines in the Euclidean plane has at least n−2 triangular faces. Here, an arrangement is simple when it has no two parallel lines and no three lines through the same point.One way to form an arrangement of n lines with exactly n−2 triangular faces is to choose the lines to be tangent to a semicircle. For lines arranged in this way, the only triangles are the ones formed by three lines with consecutive points of tangency. As the n lines have n−2 consecutive triples, they also have n−2 triangles. Proof: Branko Grünbaum found the proof in Roberts's original paper "unconvincing", and credits the first correct proof of Roberts's theorem to Robert W. Shannon, in 1979. He presents instead the following more elementary argument, first published in Russian by Alexei Belov. It depends implicitly on a simpler version of the same theorem, according to which every simple arrangement of three or more lines has at least one triangular face. This follows easily by induction from the fact that adding a line to an arrangement cannot decrease the number of triangular faces: if the line cuts an existing triangle, one of the resulting two pieces is again a triangle. On the other hand, although the bound of Roberts's theorem increases with each added line, the number of triangles in any particular arrangement may sometimes remain unchanged.If the n given lines are all moved without changing their slopes, their new positions can be described by a system of n real numbers, the offsets of each line from its original position. For each triangular face, there is a linear equation on the offsets of its three lines that, if satisfied, causes the face to retain its original area. If there could be fewer than n−2 triangles, then (because there would be more variables than equations constraining them) it would be possible to fix two of the lines in place and find a simultaneous linear motion of all remaining lines, keeping their slopes fixed, that preserves all of the triangle areas. Such a motion must pass through arrangements that are not simple, for instance when one of the moving lines passes over the crossing point of the two fixed lines. At the time when the moving lines first form a non-simple arrangement, three or more lines meet at a point. Just before these lines meet, this subset of lines would have a triangular face (also present in the original arrangement) whose area approaches zero. But this contradicts the invariance of the face areas. The contradiction shows that the assumption that there are fewer than n−2 triangles cannot be true. Related results: Whereas Roberts's theorem concerns the fewest possible triangles made by a given number of lines, the related Kobon triangle problem concerns the largest number possible. 
The two problems differ already for n=5, where Roberts's theorem guarantees that three triangles will exist, but the solution to the Kobon triangle problem has five triangles. Roberts's theorem can be generalized from simple line arrangements to some non-simple arrangements, to arrangements in the projective plane rather than in the Euclidean plane, and to arrangements of hyperplanes in higher-dimensional spaces. Beyond line arrangements, the same bound as Roberts's theorem holds for arrangements of pseudolines.
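The semicircle construction from the statement above lends itself to a direct numerical check. The sketch below (Python; the tolerance constant is an implementation choice) represents each line as a·x + b·y = c and tests each triple of lines: a triple bounds a triangular face exactly when no other line of the arrangement passes strictly through the interior of the triangle spanned by its three pairwise intersection points.

```python
import itertools
import math

EPS = 1e-9  # floating-point tolerance (implementation choice)

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c) with a*x + b*y = c."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1          # nonzero for non-parallel lines
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

def count_triangular_faces(lines):
    """Count triangular faces of a simple line arrangement by brute force."""
    count = 0
    for i, j, k in itertools.combinations(range(len(lines)), 3):
        verts = [intersect(lines[i], lines[j]),
                 intersect(lines[j], lines[k]),
                 intersect(lines[i], lines[k])]
        # The triple bounds a face iff no other line separates the vertices,
        # i.e. no line has some vertices strictly on each of its two sides.
        is_face = True
        for m, (a, b, c) in enumerate(lines):
            if m in (i, j, k):
                continue
            signs = [a * x + b * y - c for x, y in verts]
            if min(signs) < -EPS and max(signs) > EPS:
                is_face = False
                break
        if is_face:
            count += 1
    return count

# n lines tangent to the upper unit semicircle at distinct angles:
# the tangent at angle t is cos(t)*x + sin(t)*y = 1.
n = 7
lines = [(math.cos(t), math.sin(t), 1.0)
         for t in (math.pi * (i + 1) / (n + 1) for i in range(n))]
print(count_triangular_faces(lines))  # prints 5, i.e. n - 2
```

For n = 3 the same routine reports 1 triangle, matching the simpler version of the theorem cited in the proof.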
**Flap endonuclease** Flap endonuclease: Flap endonucleases (FENs, also known as 5' nucleases in older references) are a class of nucleolytic enzymes that act as both 5'-3' exonucleases and structure-specific endonucleases on specialised DNA structures that occur during the biological processes of DNA replication, DNA repair, and DNA recombination. Flap endonucleases have been identified in eukaryotes, prokaryotes, archaea, and some viruses. Organisms can have more than one FEN homologue; this redundancy may give an indication of the importance of these enzymes. In prokaryotes, the FEN enzyme is found as an N-terminal domain of DNA polymerase I, but some prokaryotes appear to encode a second homologue. The endonuclease activity of FENs was initially identified as acting on a DNA duplex which has a single-stranded 5' overhang on one of the strands (termed a "5' flap", hence the name flap endonuclease). FENs catalyse hydrolytic cleavage of the phosphodiester bond at the junction of single- and double-stranded DNA. Some FENs can also act as 5'-3' exonucleases on the 5' terminus of the flap strand and on 'nicked' DNA substrates. Flap endonuclease: Protein structure models based on X-ray crystallography data suggest that FENs have a flexible arch created by two α-helices through which the single 5' strand of the 5' flap structure can thread. Flap endonucleases have been used in biotechnology, for example in the TaqMan PCR assay and the Invader assay for mutation and single nucleotide polymorphism (SNP) detection.
**Koders** Koders: Koders was a search engine for open source code. It enabled software developers to easily search and browse source code in thousands of projects posted at hundreds of open source repositories. Koders: On April 28, 2008, it was announced that Black Duck Software would acquire the Koders assets and technologies, although the Koders website would remain a free resource. On May 19, 2009, Black Duck Software announced that projects from the Microsoft CodePlex open source project hosting site would be fed automatically into Black Duck's open source KnowledgeBase repository. The projects would also be searchable through Black Duck's Koders.com. Koders announced on September 9, 2009 that their search engine exceeded 2.4 billion lines of code, a 210% increase since April 2008. Black Duck Software announced on October 25, 2012 the merger of Koders with Ohloh Code. Plug-ins: In addition to their web-based search engine, Koders provided a plug-in for the IDE Eclipse and an add-in for Microsoft Visual Studio. A plug-in was also included within Code::Blocks.
**Cannabimovone** Cannabimovone: Cannabimovone (CBM) is a phytocannabinoid, first isolated from a non-psychoactive strain of Cannabis sativa in 2010, that is thought to be a rearrangement product of cannabidiol. It lacks affinity for cannabinoid receptors, but acts as an agonist at both TRPV1 and PPARγ.
**Aureolysin** Aureolysin: Aureolysin (EC 3.4.24.29, protease III, staphylococcal metalloprotease, Staphylococcus aureus neutral proteinase) is an extracellular metalloprotease expressed by Staphylococcus aureus. This protease is a major contributor to the bacterium's virulence, or ability to cause disease, by cleaving host factors of the innate immune system as well as regulating S. aureus secreted toxins and cell wall proteins. To catalyze its enzymatic activities, aureolysin requires zinc and calcium, which it obtains from the extracellular environment within the host. Genetics: Aureolysin is expressed from the gene aur, which is located on a monocistronic operon. The gene exists in two allelic forms, but the sequence is highly conserved, with 89% homology between the two. The gene contains a coding sequence of 1,527 nucleotides that translates into a pre-pro-form of the enzyme that is 509 amino acids long. Of the 509 amino acids, only 301 constitute the mature form of aureolysin. After translation, the pre-portion of the enzyme, a 27-amino-acid N-terminal signal peptide, acts as a guide to the secretion system located within the cell wall, where the signal peptide is cleaved upon secretion of aureolysin. Aureolysin is largely co-expressed with other major proteases of S. aureus, including the two cysteine proteases, Staphopain A (ScpA) and B (SspB), and the serine protease V8 (SspA). The transcriptional regulation of aur is controlled by the "housekeeping" sigma factor σA, and is up-regulated by the accessory gene regulator agr. Expression levels of aureolysin are at their highest during the post-exponential phase; however, up-regulation of aureolysin during phagocytosis has also been observed. Transcription is repressed by the staphylococcal accessory regulator sarA and by the alternative sigma factor σB (a stress response modulator of Gram-positive bacteria). The aur gene has a high prevalence in the genome of both commensal- and pathogenic-type S. aureus strains. Activation: Aureolysin, along with V8, SspB, and ScpA, is secreted as a zymogen. This means that these proteases are secreted in an inactive conformation until the propeptide is removed in some manner. Aureolysin, V8, and SspB constitute what is known as the staphylococcal proteolytic cascade. All three of these proteases are secreted into the environment with the propeptide inhibiting their activation. Aureolysin undergoes autocatalysis and the propeptide is degraded, generating the mature form of the enzyme. Mature aureolysin then cleaves the propeptide from V8, causing this protease to become active. Finally, V8 cleaves the SspB propeptide, completing the cascade. ScpA becomes mature by autocatalytic degradation of its propeptide, similar to aureolysin. The active-site residue of aureolysin, a glutamate located at position 145 of the protein, is of critical importance to its enzymatic function. Immune Evasion: Aureolysin cleaves various immune components and host proteins. It is important for hiding the bacterium from the immune system and is responsible for mediating the transition from a biofilm-forming phenotype to a mobile and invasive one. There are many different targets of aureolysin, and the effect on each is critical for the bacterium's virulence. Immune Evasion: One major way aureolysin contributes to infection is by inactivating certain targets within the complement system. Of all the proteases, aureolysin is the most effective against the complement cascade.
In all three pathways of complement activation, there is a target for the protease to manipulate. In the classical pathway, aureolysin not only decreases deposition of C1q on the S. aureus bacterial surface, it also induces C1q to bind and deposit on the surfaces of commensal bacteria that typically do not activate the innate immune system. Aureolysin has also been noted to produce high levels of C5a in human plasma, which leads to overstimulation of neutrophils that ultimately results in neutrophil death. C3 is another major target of aureolysin. The active site has a high affinity for C3 and will cleave it into C3a and C3b; however, the protein is cleaved two amino acid residues away from the native site that is recognized by the host C3 convertase. The aureolysin-derived C3a and C3b are further degraded by the host complement inhibitors factor H and factor I. In the lectin pathway, aureolysin inhibits MBL and ficolin binding, which in turn reduces C3b deposition. Further immune evasion outside of the complement system occurs in various ways. Aureolysin cleaves and inactivates the protease inhibitor α1-antichymotrypsin and partially inactivates α1-antitrypsin. The cleavage of α1-antitrypsin generates a fragment chemotactic to neutrophils, and the cleavage of both protease inhibitors causes deregulation of neutrophil-derived proteolytic activity. Aureolysin has also been shown to cleave the antimicrobial peptide LL-37, rendering it inactive and unable to puncture the bacterial cell wall. Production of immunoglobulin by lymphocytes is inhibited by aureolysin as well. It contributes both to coagulation triggered by coagulase and to fibrinolysis mediated by staphylokinase. Proteolytic conversion of pro-thrombin into thrombin by aureolysin works synergistically with coagulase and contributes to the staphylocoagulation of human plasma. By inducing staphylocoagulation, the bacterium is hidden within the clot from phagocytic cells. In contrast to staphylocoagulation, aureolysin is also responsible for the activation of urokinase and the inactivation of α2-antiplasmin and plasminogen activator inhibitor-1. This promotes the dissemination of the bacterium to allow for further invasion of the host. Biological significance: When S. aureus is establishing an infection within a host, it needs to continuously switch between a static, or biofilm-forming, phenotype and an invasive, or mobile, phenotype. The proteases help mediate this process. Aureolysin appears to down-regulate the formation of biofilms and allow for the mobility of the bacterium. One way it contributes to this change is by mediating coagulation as well as the activation of urokinase. However, it also processes S. aureus cell wall and secreted proteins to promote this change. For example, clumping factor B is a surface protein that is responsible for the binding of fibrinogen around the bacterium to hide it within a clot. Aureolysin is responsible for the cleavage of clumping factor B, which causes the loss of S. aureus binding to fibrinogen. By this mechanism, it may act as a self-regulatory mechanism for dissemination and spreading in combination with activation of fibrinolysis, while the protease simultaneously provides protection against complement activation. It has been demonstrated that aureolysin has an impact on bacterial survival in human whole blood. Aureolysin is also up-regulated upon phagocytosis and promotes intracellular survival. S. aureus prefers to establish a chronic, or long-lasting, infection within a host.
While promoting dissemination and counteracting immune mechanisms, aureolysin also regulates secreted virulence factors to control the pathogenicity of the bacterium. By inactivating PSMs and α-toxin, aureolysin may suppress the pathogenic impact of the bacterium, allowing a chronic infection to be established.
**Webcam** Webcam: A webcam is a video camera designed to record or stream to a computer or computer network. Webcams are primarily used in video telephony, live streaming and social media, and security. They can be built-in computer hardware or peripheral devices, and are commonly connected to a device using USB or wireless protocols. Webcams have been used on the Internet since as early as 1993, and the first widespread commercial one became available in 1994. Early webcam usage on the Internet was primarily limited to stationary shots streamed to web sites. In the late 1990s and early 2000s, instant messaging clients added support for webcams, increasing their popularity in video conferencing. Computer manufacturers also started integrating webcams into laptop hardware. In 2020, the COVID-19 pandemic caused a shortage of webcams due to the increased number of people working from home. History: Early development (early 1990s) First developed in 1991, a webcam was pointed at the Trojan Room coffee pot in the Cambridge University Computer Science Department (initially operating over a local network instead of the web). The camera was finally switched off on August 22, 2001. The final image captured by the camera can still be viewed at its homepage. The oldest continuously operating webcam, San Francisco State University's FogCam, has run since 1994 and is still operating as of October 2022. It updates every 20 seconds. The SGI Indy, released in 1993, was the first commercial computer to have a standard video camera, and the first SGI computer to have standard video inputs. The maximum supported input resolution is 640×480 for NTSC or 768×576 for PAL. A fast machine is required to capture at either of these resolutions, though; an Indy with the slower R4600PC CPU, for example, may require the input resolution to be reduced before storage or processing. However, the Vino hardware is capable of DMAing video fields directly into the frame buffer with minimal CPU overhead. History: The first widespread commercial webcam, the black-and-white QuickCam, entered the marketplace in 1994, created by the U.S. computer company Connectix. QuickCam was available in August 1994 for the Apple Macintosh, connecting via a serial port, at a cost of $100. Jon Garber, the designer of the device, had wanted to call it the "Mac-camera", but was overruled by Connectix's marketing department; a version with a PC-compatible parallel port and software for Microsoft Windows was launched in October 1995. The original QuickCam provided 320×240-pixel resolution with a grayscale depth of 16 shades at 60 frames per second, or 256 shades at 15 frames per second. These cameras were tested on several Delta II launches, using a variety of communication protocols including CDMA, TDMA, GSM, and HF. History: Videoconferencing via computers already existed, and at the time client-server based videoconferencing software such as CU-SeeMe had started to become popular. The first widely known laptops with an integrated webcam option, at a price point starting at US$12,000, were the IBM RS/6000 860 and its ThinkPad 850 sibling, released in 1996. History: Entering the mainstream (late 1990s) One of the most widely reported-on webcam sites was JenniCam, created in 1996, which allowed Internet users to observe the life of its namesake constantly, in the same vein as the reality TV series Big Brother, launched four years later.
Other cameras are mounted overlooking bridges, public squares, and other public places, their output made available on a public web page in accordance with the original concept of a "webcam". Aggregator websites have also been created, providing thousands of live video streams or up-to-date still pictures, allowing users to find live video streams based on location or other criteria. History: In the late 1990s, Microsoft NetMeeting was the only videoconferencing software for PCs in widespread use that made use of webcams. In the following years, instant messaging clients started adding webcam support: Yahoo Messenger introduced this with version 5.5 in 2002, allowing video calling at 20 frames per second using a webcam. MSN Messenger gained this in version 5.0 in 2003. History: 2000s–2019 Around the turn of the 21st century, computer hardware manufacturers began building webcams directly into laptop and desktop screens, thus eliminating the need to use an external USB or FireWire camera. Gradually webcams came to be used more for telecommunications, or videotelephony, between two people, or among several people, than for offering a view on a Web page to an unknown public. History: For less than US$100 in 2012, a three-dimensional space webcam became available, producing videos and photos in 3D anaglyph with a resolution of up to 1280×480 pixels. Both sender and receiver of the images must use 3D glasses to see the three-dimensional effect. History: 2020–present Webcams are considered an essential accessory for remote work, mainly to compensate for the lower-quality video processing of the average laptop's built-in camera. During the COVID-19 pandemic, there was a shortage of webcams. Most laptops before and during the pandemic were made with cameras capping out at 720p recording quality at best, compared to the industry standard of 1080p or 4K seen in smartphones and televisions from the same period. The backlog in new developments for built-in webcams is the result of a design constraint, with laptop lids too thin to fit the 7 mm camera modules inside, resorting instead to modules of roughly 2.5 mm. Camera components are also more expensive, and demand for the feature is not high. Smartphones started to be used as a backup option or webcam replacement, with kits including lighting and tripods, or downloadable apps. Technology: Image sensor Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, but CCD cameras do not necessarily outperform CMOS-based cameras in the low-price range. Most consumer webcams are capable of providing VGA-resolution video at a frame rate of 30 frames per second. Many newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates, such as the PlayStation Eye, which can produce 320×240 video at 120 frames per second. Most image sensors are sourced from OmniVision or Sony. Technology: As webcams evolved simultaneously with display technologies, USB interface speeds and broadband internet speeds, the resolution gradually went up from 320×240 to 640×480, and some now even offer 1280×720 (aka 720p) or 1920×1080 (aka 1080p) resolution. Despite the low cost, the resolution offered as of 2019 is impressive, with low-end webcams now offering 720p, mid-range webcams offering 1080p resolution, and high-end webcams offering 4K resolution at 60 fps.
Technology: Optics Various lenses are available, the most common in consumer-grade webcams being a plastic lens that can be manually moved in and out to focus the camera. Fixed-focus lenses, which have no provision for adjustment, are also available. As a camera system's depth of field is greater for small image formats and is greater for lenses with a large f-number (small aperture), the systems used in webcams have a sufficiently large depth of field that the use of a fixed-focus lens does not impact image sharpness to a great extent. Technology: Most models use simple, focal-free optics (fixed focus, factory-set for the typical distance between the user and the monitor to which the camera is fastened) or manual focus. Technology: Webcams can come with different presets and fields of view. Individual users can make use of less than 90° horizontal FOV for home offices and live streaming. Webcams with as much as 360° horizontal FOV can be used for small- to medium-sized rooms (sometimes even large rooms). Depending on the users' purposes, webcams in the market can display the whole room or just the general vicinity. Technology: Internal software As the Bayer filter is proprietary, any webcam contains some built-in image processing, separate from compression. Digital video streams are represented by huge amounts of data, burdening their transmission (from the image sensor, where the data is continuously created) and storage alike. Most if not all cheap webcams come with a built-in ASIC to do video compression in real time. Technology: Support electronics read the image from the sensor and transmit it to the host computer. Most webcams come with a controller from Sonix, Suyin, Ricoh, Realtek, or others that translates the video for transport over USB. Typically, each frame is transmitted uncompressed in RGB or YUV, or compressed as JPEG. Some cameras, such as mobile-phone cameras, use a CMOS sensor with supporting electronics "on die", i.e. the sensor and the support electronics are built on a single silicon chip to save space and manufacturing costs. Most webcams feature built-in microphones to make video calling and videoconferencing more convenient. Technology: Interface and external software Typical interfaces used by devices marketed as a "webcam" are USB, Ethernet, and IEEE 802.11 (denominated as IP camera). Further interfaces such as composite video, S-Video, or FireWire were also available. The USB video device class (UVC) specification allows inter-connectivity of webcams to computers without the need for proprietary device drivers. Technology: Various proprietary as well as free and open-source software is available to handle the UVC stream. For example, Guvcview or GStreamer and GStreamer-based software can be used to handle the UVC stream. Other software, such as MotionEye, can use multiple USB cameras attached to the host computer and broadcast multiple streams at once over (wireless) Ethernet. MotionEye can either be installed onto a Raspberry Pi as MotionEyeOS, or afterwards on Raspbian; it can also be set up on Debian, of which Raspbian is a variant. Note that MotionEye V4.1.1 (Aug '21) can only run on Debian 10 Buster (oldstable) and Python 2.7; newer Python versions such as 3.x are not supported at this point in time according to Ccrisan, the founder and author of MotionEye.
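To make the software side concrete, here is a minimal capture sketch using OpenCV's VideoCapture API, which wraps the operating system's webcam backend (V4L2/UVC on Linux, DirectShow on Windows); the device index and the requested 1280×720 mode are assumptions about the attached camera, not guarantees.

```python
import cv2  # OpenCV; VideoCapture wraps the OS webcam backend (V4L2/UVC, DirectShow, ...)

# Open the first enumerated video device (a UVC webcam is typically index 0).
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("No camera found at device index 0")

# Request a common webcam mode; the driver may silently fall back to another.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()                  # one decoded frame as a BGR numpy array
if ok:
    cv2.imwrite("snapshot.jpg", frame)  # save a single still image
cap.release()
```

Tools like Guvcview or a GStreamer pipeline perform the same steps, differing mainly in how the decoded frames are displayed, encoded, or streamed onward.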
Technology: Various software tools in wide use can be employed to take video and pictures, such as PicMaster and Microsoft's Camera app (for use with Windows operating systems), Photo Booth (Mac), or Cheese (with Unix systems). For a more complete list see Comparison of webcam software. Uses: The most popular use of webcams is the establishment of video links, permitting computers to act as videophones or videoconference stations. For example, Apple's iSight camera, which is built into Apple laptops, iMacs and a majority of iPhones, can be used for video chat sessions, using the Messages instant messaging program. Other popular uses include security surveillance, computer vision, video broadcasting, and recording social videos. Uses: Videotelephony With webcams added to instant messaging, text chat services such as AOL Instant Messenger, and VoIP services such as Skype, one-to-one live video communication over the Internet has now reached millions of mainstream PC users worldwide. Improved video quality has helped webcams encroach on traditional video conferencing systems. New features such as automatic lighting controls, real-time enhancements (retouching, wrinkle smoothing and vertical stretch), automatic face tracking and autofocus assist users by providing substantial ease of use, further increasing the popularity of webcams. Uses: Webcams can also encourage remote work, enabling people to work remotely via the Internet. This usage was crucial to the survival of many businesses during the COVID-19 pandemic, when in-person office work was discouraged. Businesses, schools, and individuals have relied on video conferencing instead of spending on business travel for meetings. Moreover, the number of video conferencing cameras and software offerings has multiplied since then due to their popularity. Webcam features and performance can vary by program, computer operating system, and also by the computer's processor capabilities. Video calling support has also been added to several popular instant messaging programs. Uses: Webcams allow for inexpensive, real-time video chat and webcasting, in both amateur and professional pursuits. They are frequently used in online dating and for online personal services offered mainly by women when camgirling. However, the ease of webcam use through the Internet for video chat has also caused issues. For example, the moderation systems of various video chat websites such as Omegle have been criticized as ineffective, with sexual content still rampant. In a 2013 case, the transmission of nude photos and videos via Omegle from a teenage girl to a schoolteacher resulted in a child pornography charge. The popularity of webcams among teenagers with Internet access has raised concern about the use of webcams for cyber-bullying. Webcam recordings of teenagers, including underage teenagers, are frequently posted on popular Web forums and imageboards such as 4chan. Uses: Monitoring Webcams can be used as security cameras. Software is available to allow PC-connected cameras to watch for movement and sound, recording both when they are detected. These recordings can then be saved to the computer, e-mailed, or uploaded to the Internet. In one well-publicised case, a computer e-mailed images of the burglar during the theft of the computer, enabling the owner to give police a clear picture of the burglar's face even after the computer had been stolen.
Uses: In December 2011, Russia announced that 290,000 webcams would be installed in 90,000 polling stations to monitor the 2012 Russian presidential election. Webcams may be installed at places such as childcare centres, offices, shops and private areas to monitor security and general activity. Uses: Astrophotography With very-low-light capability, a few specific models of webcam are very popular among astronomers and astrophotographers for photographing the night sky. Mostly, these are manual-focus cameras and contain an old CCD array instead of the comparatively newer CMOS arrays. The lenses of the cameras are removed and then these are attached to telescopes to record images, video, still, or both. In newer techniques, videos of very faint objects are taken for a couple of seconds and then all the frames of the video are "stacked" together to obtain a still image of respectable contrast. Uses: Laser beam profiling A webcam's CCD response is linearly proportional to the incoming light. Therefore, webcams are suitable for recording laser beam profiles once the lens is removed. The resolution of a laser beam profiler depends on the pixel size. Commercial webcams are usually designed to record color images. The size of a webcam's color pixel depends on the model and may lie in the range of 5 to 10 µm. However, a color pixel consists of four black-and-white pixels, each equipped with a color filter (for details see Bayer filter). Although these color filters work well in the visible, they may be rather transparent in the near infrared. By switching a webcam into Bayer mode it is possible to access the information of the single pixels, and a resolution below 3 µm is possible. Privacy concerns: Many users do not wish the continuous exposure for which webcams were originally intended, but rather prefer privacy. Such privacy is lost when malware allows malicious hackers to activate the webcam without the user's knowledge, providing the hackers with a live video and audio feed. This is a particular concern on many laptop computers, as such cameras normally cannot be physically disabled if hijacked by a Trojan horse program or other similar spyware programs. Privacy concerns: Cameras such as Apple's older external iSight cameras include lens covers to thwart this. Some webcams have built-in hardwired LED indicators that light up whenever the camera is active, sometimes only in video mode. However, it is possible, depending on the circuit design of a webcam, for malware to circumvent the indicator and activate the camera surreptitiously, as researchers demonstrated in the case of a MacBook's built-in camera in 2013. Various companies sell sliding lens covers and stickers that allow users to retrofit a computer or smartphone to close access to the camera lens as needed. One such company reported having sold more than 250,000 such items from 2013 to 2016. However, any opaque material will work just as well. The process of attempting to hack into a person's webcam and activate it without the webcam owner's permission has been called camfecting, a portmanteau of cam and infecting. The remotely activated webcam can be used to watch anything within the webcam's field of vision. Camfecting is most often carried out by infecting the victim's computer with a computer virus.
**Quadratrix of Hippias** Quadratrix of Hippias: The quadratrix or trisectrix of Hippias (also quadratrix of Dinostratus) is a curve which is created by a uniform motion. It is one of the oldest examples of a kinematic curve (a curve created through motion). Its discovery is attributed to the Greek sophist Hippias of Elis, who used it around 420 BC in an attempt to solve the angle trisection problem (hence trisectrix). Later, around 350 BC, Dinostratus used it in an attempt to solve the problem of squaring the circle (hence quadratrix). Definition: Consider a square ABCD, and an inscribed quarter circle arc centered at A with radius equal to the side of the square. Let E be a point that travels with a constant angular velocity along the arc from D to B, and let F be a point that travels simultaneously with a constant velocity from D to A along line segment AD¯, so that E and F start at the same time at D and arrive at the same time at B and A. Then the quadratrix is defined as the locus of the intersection of line segment AE¯ with the parallel line to AB¯ through F. If one places such a square ABCD with side length a in a (Cartesian) coordinate system with the side AB¯ on the x-axis and with vertex A at the origin, then the quadratrix is described by a planar curve γ: (0, π/2] → R² with γ(t) = ((2a/π)·t·cot(t), (2a/π)·t). This description can also be used to give an analytical rather than a geometric definition of the quadratrix and to extend it beyond the (0, π/2] interval. It does however remain undefined at the singularities of cot(t), except for the case of t = 0, where the singularity is removable since lim_{t→0} t·cot(t) = 1, and hence yields a continuous planar curve on the interval (−π, π). To describe the quadratrix as a simple function rather than a planar curve, it is advantageous to swap the y-axis and the x-axis, that is, to place the side AB¯ on the y-axis rather than on the x-axis. Then the quadratrix forms the graph of the function f(x) = x·cot(πx/(2a)). Angle trisection: The trisection of an arbitrary angle using only ruler and compass is impossible. However, if the quadratrix is allowed as an additional tool, it is possible to divide an arbitrary angle into n equal segments, and hence a trisection (n = 3) becomes possible. In practical terms the quadratrix can be drawn with the help of a template or a quadratrix compass (see drawing). Since, by the definition of the quadratrix, the traversed angle is proportional to the traversed segment of the associated square's side, dividing that segment on the side into n equal parts yields a partition of the associated angle as well. Dividing the line segment into n equal parts with ruler and compass is possible due to the intercept theorem. Angle trisection: For a given angle ∠BAE (at most 90°) construct a square ABCD over its leg AB¯. The other leg of the angle intersects the quadratrix of the square in a point G, and the parallel line to the leg AB¯ through G intersects the side AD¯ of the square in F. Now the segment AF¯ corresponds to the angle ∠BAE, and due to the definition of the quadratrix any division of the segment AF¯ into n equal segments yields a corresponding division of the angle ∠BAE into n equal angles. To divide the segment AF¯ into n equal segments, draw any ray starting at A with n equal segments (of arbitrary length) on it. Connect the endpoint O of the last segment to F and draw lines parallel to OF¯ through all the endpoints of the remaining n−1 segments on AO¯. These parallel lines divide the segment AF¯ into n equal segments.
Now draw parallel lines to AB¯ through the endpoints of those segments on AF¯, intersecting the trisectrix. Connecting their points of intersection to A yields a partition of angle ∠BAE into n equal angles. Since not all points of the trisectrix can be constructed with ruler and compass alone, it really is required as an additional tool next to ruler and compass. However, it is possible to construct a dense subset of the trisectrix by ruler and compass, so while one cannot assure an exact division of an angle into n parts without a given trisectrix, one can construct an arbitrarily close approximation by ruler and compass alone. Squaring of the circle: Squaring the circle with ruler and compass alone is impossible. However, if one allows the quadratrix of Hippias as an additional construction tool, the squaring of the circle becomes possible due to Dinostratus' theorem. It lets one turn a quarter circle into a square of the same area; hence a square with twice the side length has the same area as the full circle. Squaring of the circle: According to Dinostratus' theorem the quadratrix divides one of the sides of the associated square in a ratio of 2/π. For a given quarter circle with radius r one constructs the associated square ABCD with side length r. The quadratrix intersects the side AB in J with |AJ¯| = (2/π)·r. Now one constructs a line segment JK of length r perpendicular to AB. Then the line through A and K intersects the extension of the side BC in L, and from the intercept theorem it follows that |BL¯| = (π/2)·r. Extending AB to the right by a new line segment |BO¯| = r/2 yields the rectangle BLNO with sides BL and BO, the area of which matches the area of the quarter circle. This rectangle can be transformed into a square of the same area with the help of Euclid's geometric mean theorem. One extends the side ON by a line segment |OQ¯| = |BO¯| = r/2 and draws a half circle to the right of NQ, which has NQ as its diameter. The extension of BO meets the half circle in R, and due to Thales' theorem the line segment OR is the altitude of the right-angled triangle QNR. Hence the geometric mean theorem can be applied, which means that OR forms the side of a square OUSR with the same area as the rectangle BLNO and hence as the quarter circle. Note that the point J, where the quadratrix meets the side AB of the associated square, is one of the points of the quadratrix that cannot be constructed with ruler and compass alone, and not even with the help of the quadratrix compass based on the original geometric definition (see drawing). This is due to the fact that the two uniformly moving lines coincide and hence there exists no unique intersection point. However, relying on the generalized definition of the quadratrix as a function or planar curve allows for J being a point on the quadratrix. Historical sources: The quadratrix is mentioned in the works of Proclus (412–485), Pappus of Alexandria (3rd and 4th centuries) and Iamblichus (c. 240 – c. 325). Proclus names Hippias as the inventor of a curve called quadratrix and describes elsewhere how Hippias applied the curve to the trisection problem. Pappus only mentions how a curve named quadratrix was used by Dinostratus, Nicomedes and others to square the circle. He neither mentions Hippias nor attributes the invention of the quadratrix to a particular person.
Iamblichus just writes in a single line that a curve called a quadratrix was used by Nicomedes to square the circle. Although, based on Proclus' name for the curve, it is conceivable that Hippias himself used it for squaring the circle or some other curvilinear figure, most historians of mathematics assume that Hippias invented the curve but used it only for the trisection of angles. Its use for squaring the circle only occurred decades later and was due to mathematicians like Dinostratus and Nicomedes. This interpretation of the historical sources goes back to the German mathematician and historian Moritz Cantor.
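As a numerical aside, the parametrization given in the definition above is easy to evaluate, and doing so illustrates Dinostratus' theorem: as t approaches 0, the curve approaches the point J on side AB at distance (2/π)·a from A. A small sketch (Python, unit square a = 1):

```python
import math

def quadratrix(t, a=1.0):
    """Point of the quadratrix at parameter t in (0, pi/2]: the
    intersection of ray AE at angle t with the horizontal line
    through F at height (2*a/pi)*t."""
    y = 2 * a * t / math.pi
    return (y / math.tan(t), y)   # x = y * cot(t)

# As t -> 0 the x-coordinate tends to 2a/pi, the point J of
# Dinostratus' theorem (here a = 1, so the limit is 2/pi).
for t in (0.5, 0.1, 0.01, 0.001):
    x, _ = quadratrix(t)
    print(f"t = {t:<6} x = {x:.6f}")
print(f"2/pi   = {2 / math.pi:.6f}")
```

The printed x-values converge to 2/π ≈ 0.636620, matching the ratio in which the quadratrix divides the side of the square.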
**Ultrafast X-ray** Ultrafast X-ray: Ultrafast X-rays or ultrashort X-ray pulses are femtosecond X-ray pulses with wavelengths on the order of interatomic distances. Such a beam exploits the X-ray's inherent ability to interact at the level of atomic nuclei and core electrons. This ability, combined with pulses as short as 30 femtoseconds, can capture the changing positions of atoms or molecules during phase transitions, chemical reactions, and other transient processes in physics, chemistry, and biology. Fundamental transitions and processes: Ultrafast X-ray diffraction (time-resolved X-ray diffraction) can surpass ultrashort-pulse visible techniques, which are limited to detecting structures on the level of valence and free electrons. Ultrashort-pulse X-ray techniques are able to resolve atomic scales, where dynamic structural changes and reactions occur in the interior of a material.
**Buttocks** Buttocks: The buttocks (singular: buttock) are two rounded portions of the exterior anatomy of most mammals, located on the posterior of the pelvic region. In humans, the buttocks are located between the lower back and the perineum. They are composed of a layer of exterior skin and underlying subcutaneous fat superimposed on the left and right gluteus maximus and gluteus medius muscles. The two gluteus maximus muscles are the largest muscles in the human body. They are responsible for movements such as straightening the body into the upright (standing) posture when it is bent at the waist; maintaining the body in the upright posture by keeping the hip joints extended; and propelling the body forward via further leg (hip) extension when walking or running. In the seated position, the buttocks bear the weight of the upper body and take that weight off the feet. In many cultures, the buttocks play a role in sexual attraction. Many cultures have also used the buttocks as a primary target for corporal punishment, as the buttocks' layer of subcutaneous fat offers protection against injury while still allowing for the infliction of pain. There are several connotations of buttocks in art, fashion, culture and humor. The English language is replete with many popular synonyms that range from polite colloquialisms ("posterior", "backside" or "bottom") to vulgar slang ("arse", "ass", "bum", "butt", "booty", "prat"). Anatomy: The buttocks are formed by the masses of the gluteal muscles or "glutes" (the gluteus maximus muscle and the gluteus medius muscle) superimposed by a layer of fat. The superior aspect of the buttock ends at the iliac crest, and the lower aspect is outlined by the horizontal gluteal crease. The gluteus maximus has two insertion points: the superior 1⁄3 portion of the linea aspera of the femur, and the superior portion of the iliotibial tract. The masses of the gluteus maximus muscle are separated by an intermediate intergluteal cleft or "crack" in which the anus is situated. Anatomy: The buttocks allow primates to sit upright without needing to rest their weight on their feet as four-legged animals do. Females of certain species of baboon have red buttocks that blush to attract males. In the case of humans, females tend to have proportionally wider and thicker buttocks due to higher subcutaneous fat and proportionally wider hips. In humans the buttocks also have a role in propelling the body in a forward motion and aiding bowel movement. Some baboons and all gibbons, though otherwise fur-covered, have characteristic naked callosities on their buttocks. While human children generally have smooth buttocks, mature males and females have varying degrees of hair growth, as on other parts of their body. Females may have hair growth in the gluteal cleft (including around the anus), sometimes extending laterally onto the lower aspect of the cheeks. Males may have hair growth over some or all of the buttocks. Society and culture: Connotations The English word of Greek origin "callipygian" indicates someone who has beautiful buttocks. Depending on the context, exposure of the buttocks in non-intimate situations can cause feelings of embarrassment or humiliation, and embarrassment or amusement in an onlooker (see pantsing). Willfully exposing one's own bare buttocks as a protest, a provocation, or for fun is called mooning.
Society and culture: In many punitive traditions, the buttocks are a common target for corporal punishment, which can be meted out with no risk of long-term physical harm compared with the dangers of applying it to other parts of the body, such as the hands, which could easily be damaged. Within the Victorian school system in England, the buttocks have been described as "the place provided by nature" for this purpose. A modern-day example can be seen in some Southeast Asian countries, such as Singapore. Caning in Singapore is widely used as a form of judicial corporal punishment, with male convicts being sentenced to a caning on their bare buttocks. Society and culture: In Western and some other cultures, many comedians, writers and others rely on the buttocks as a source of amusement, camaraderie and fun. There are numerous colloquial terms for the buttocks. Society and culture: In American English, phrases use the buttocks or synonyms (especially "butt" and "ass") as a synecdoche or pars pro toto for a whole person, often with a negative connotation. For example, terminating an employee may be described as "firing his ass". One might say "move your ass" or "haul ass" as an exhortation to greater haste or urgency. Expressed as a function of punishment, defeat or assault, this becomes "kicking one's ass". Such phrases also may suggest a person's characteristics, e.g. difficult people are termed "hard asses". In the US, an annoying person or any source of frustration may be termed "a pain in the ass" (a synonym for "a pain in the neck"). People deemed excessively puritanical or proper may be termed "tight asses" (in Australia and New Zealand, "tight arse" refers to someone who is excessively miserly). Society and culture: Certain physical dispositions of the buttocks—particularly size—are sometimes identified, controversially, as a racial characteristic (see race). A famous example was the case of Saartjie Baartman, the so-called "Hottentot Venus". Society and culture: Synonyms The Latin name for the buttocks is nates (English pronunciation NAY-teez, classical pronunciation nătes [ˈnateːs]), which is plural; the singular, natis (buttock), is rarely used. There are many colloquial terms to refer to them, including: Backside, posterior, behind and its derivatives (hind-quarters, hinder or the childish diminutive "heinie" (US usage only), strictly the whole body behind the hind leg-trunk attachment), rear or rear-end, derrière (French for "behind")—all strictly positional descriptions, as the inaccurate use of rump (as in 'rump roast', after a 'hot' spanking), thighs, upper legs; analogous are: Aft, stern and poop deck, naval in origin; in nautical jargon, buttocks also designates the aftermost portion of a hull above the water line and in front of the rudder, merging with the run below the water line Caboose, originally a ship's galley in a wooden cabin on deck; also the "rear end" car of a freight train, considered a cute synonym suitable for any audience Bottom (and the shortening "bot" as well as childish diminutives "bottie" or "botty"), but the use of similar-sounding "booty" or "bootie" may be related. Society and culture: Tail (strictly anatomically a zoomorphism, humans only have a tail-bone, yet the illogical "tail feather" was popularized by musicians.
When used to refer to a woman or to women in general, the term is derogatory; also used for the even more sensual phallus) and tail-end Trunk, in American English, particularly when describing large buttocks: "junk in the trunk" Apple, referring to the similar shape of the fruit, derived from the 1970s. Also likened to an upside-down heart, attributed to various popular ads of the 1970s. Society and culture: Arse or ass, arsehole or asshole, and (butt-)hole: a pars pro toto (strictly only the actual body cavity and directly adjoining anal region); also used as an insult for a person. The term arse is Anglo-Saxon, and over a thousand years old. Badonkadonk: Ideophonic slang meaning the large yet firm buttocks of a woman Batty: Jamaican Patois, commonly used in certain communities within the UK and Canada. Booty, US slang, used in the popular slang expression "booty call". It has been suggested that the word derives from a Bambara (West African) word for anus, buda. The use of the word "booty" also derived from the connotations of the word boot, such as that of a car, and the rear, posterior or "back" end of an object. Society and culture: Breech, a metaphorical sense derived from an older form of the garment breeches (as the French culotte meaning pantaloons, via cul from Latin culus "butt"), so 'bare breech' means without breeches, i.e., trouserless butt Bum: in British English, used frequently in the United Kingdom, Ireland, Canada, Australia, New Zealand and many other English-speaking Commonwealth countries, also in the United States, is a mild, often humorous term for buttocks, not necessarily in a vulgar or sexual context: "I've a boil on my bum, thrice as large as my thumb" (The Judge With The Sore Rump, St. George Tucker). A bum boy is an insulting term for a male homosexual. Society and culture: Bumpy: a euphemistic term for the buttocks, used primarily with children Buns, from Gaelic bun "bottom, base", mounds (cf. Butte, a geographical mound, known since 1805 in American English, from (Old) French butte "mound, knoll") and orbs—shape-metaphors. Bund: derived from Punjabi. Bunda: Brazilian Portuguese slang for buttocks, from Kimbundu mbunda, with the same meaning. Butt: the common term for a pair of buttocks in the US (singular, as one body-part; cognate but neither its root nor an abbreviation), used in everyday speech. Cakes: slang word for buttocks Can (a container) had an unusual development: the slang meaning "toilet" is recorded c. 1900, said to be a shortening of piss-can, the meaning "buttocks" from c. 1910, and the verb meaning "fire an employee" (to flush=dump?) from 1905. Society and culture: Cheeks, a shape-metaphor within human anatomy, but also used in the singular: left cheek and right cheek; sounds particularly naughty because of the homonym and the adjective cheeky, lending themselves to word puns Culo: (from Spanish/Italian) slang, usually meaning a woman's voluptuous, round and firm buttocks. Derived from a term for booty; in Spanish the term is considered vulgar and offensive, but less so in Spain than in Latin America. Society and culture: Duffs: Ulster Irish origin Dumper sometimes denotes the buttocks, especially when they are large. Society and culture: Fanny: a socially acceptable term in print in Canada and the United States, for many years before some of the bolder terms came along; and a subject of jokes, since "Fannie" can be a woman's name, diminutive of "Frances". In British English fanny refers to the vulva and is considered vulgar.
The figure of a bare-bottomed lass named Fanny is ubiquitous in Provence (the southeast of France) wherever pétanque is played: traditionally, when a player loses 13 to 0, it is said that "il est fanny" (he's fanny), and he has to kiss the bottom of a girl called Fanny; as there is rarely an obliging Fanny, there is always a substitute picture, woodcarving or pottery so that Fanny's bottom is always available. Society and culture: Fourth point of contact: in military slang, because of the sequence of a textbook parachute jump landing Fundament (literally "foundation", not common in this general sense in English, but used for the buttocks since 1297) Gand or Gaand: a Hindi derivative Hams, like buttocks generally a plural, after the meat cut from the analogous part of a hog; pressed ham refers to the act of mooning with the buttocks pressed against a window or other transparent object; brawn, a singular derived from the Frankish for ham or roast, is also used for both a muscular body part (but either on arms or legs) or boar meat, especially roast Hurdies: Scots, origin unknown, also applied to the whole rump Haunches Moon was a common shape-metaphor for the butt in English since 1756, and the verb to moon meant 'to expose to (moon)light' since 1601. They were combined in US student slang in the verb(al expression) mooning "to flash the buttocks" in 1968. Society and culture: Prat (British English, origin unknown; as in pratfall, a music hall term; also a term of abuse for a person) Seat (of the trousers; or metaphorically): another long-standing socially acceptable term, referring to the use for sitting—but compare the sarcastic use of seat of wisdom and similar expressions, such as 'seat of learning', referring to use as a target for an 'educational' spanking. Society and culture: Sit-upon; has various independent counterparts in other languages, e.g., Dutch zitvlak ("sitting plain"), German Gesäß, Italian sedere Six; in military terminology, particularly in the United States Navy, it refers to the term "six o'clock", i.e., a point directly behind the referenced person. Tuchis: Yiddish. Tush or tushy (from the Yiddish language "tuchis" or "tochis" meaning "under" or "beneath") Ultimatum (Latin, literally 'the furthest part') was used in slang c. 1820s. Related terms The word "callipygian" is sometimes used to describe someone with notably attractive buttocks. The term comes from the Greek kallipygos (first used for the Venus Kallipygos), which literally means "beautiful buttocks"; the prefix is also a root of "calligraphy" (beautiful writing) and "calliope" (beautiful voice); callimammapygian means having both beautiful breasts and buttocks. Society and culture: Both the English (in) tails and the Dutch billentikker ('tapping the buttocks') are ironic terms for very formal coats with a significantly longer tail end as part of festive (especially wedding party) dress Macropygia means 'having large buttocks, hindquarters', and occurs in biological species names. A pygopag(ous) (from the Greek pygè 'buttock' and pagein 'attached') was a monster in Ancient (Greek) mythology consisting of two bodies joined by common buttocks, now a medical term for 'Siamese' twins thus joined back-to-back Pygophilia is sexual arousal or excitement caused by seeing, playing with or touching the buttocks; people who have a strong attraction to buttocks are called pygophilists.
Society and culture: Pygoscopia means observing someone's rear; pygoscopophobia, a pathological fear of being its unwilling object Pygalgia is soreness in the buttocks, i.e. a pain in the rump. Steatopygia is a marked accumulation of fat in and around the buttocks. Uropygial in ornithology means situated on or belonging to the uropygium, i.e. the rump of a bird. "Bubble butt" has at least two connotations, which are at odds with each other: either a small, round and firm pair of buttocks resembling a pair of soap bubbles next to each other, or a large rear end, seemingly about to burst from the strain. In both cases, the term implies an appealing shapeliness about the buttocks. Society and culture: Fashion The 1880s were well known for the fashion trend among women called the bustle, which made even the smallest buttocks appear huge. The popularity of this fashion is shown in the famous Georges Seurat painting A Sunday Afternoon on the Island of La Grande Jatte in the two women to the far left and right. Like long underwear with the ubiquitous "butt flap" (used to allow baring only the bottom with a simple gesture, as for hygiene), this clothing style was acknowledged in popular media such as cartoons and comics for generations afterward. Society and culture: More recently, the cleavage of the buttocks is sometimes exposed by some women, deliberately or accidentally, as fashion dictated trousers be worn lower, as with hip-hugger pants. An example of another attitude in an otherwise hardly exhibitionist culture is the Japanese fundoshi. In popular culture In 1966 Yoko Ono made a roughly 90-minute-long experimental film called No. 4, which is colloquially known as Bottoms. It consists of footage of human buttocks in motion while the person walks on a turntable.
**Spontaneous Broadway** Spontaneous Broadway: Spontaneous Broadway is an advanced long-form improvised performance based on audience suggestions. The audience typically submits titles of songs that have never been written, and the performers choose suggestions to create songs; the audience then votes by acclamation for its favorite song, which is used as the core of a brand-new Broadway musical. The format received a favorable review from The New York Times when it premiered in New York in 1995. Though not required or necessarily encouraged by improv professionals, elements of humor inevitably surface in the performance because of the surprising and playful nature of improvisation and its use of typical Broadway stereotypes. The performers' songs are supported by an onstage musician or band that improvises the music, generally in the style of typical show tunes. Spontaneous Broadway: The format was created in New York City at Freestyle Repertory Theatre and has been performed by a number of different companies around the US. Currently, it is performed by BATS theatre in San Francisco, The Mop & Bucket Co. in Schenectady, NY, and at several colleges around the country, including Stanford. Spontaneous Broadway: The Spontaneous Broadway format was created by Kat Koppett in association with Freestyle Repertory Theatre in New York. Koppett is a 25-year improv veteran, having worked with Freestyle Repertory Theatre and San Francisco's BATS Improv. She is currently co-director of the Mop & Bucket Company, an improv troupe based in the Capital District of New York State. Koppett also runs a consulting business, appropriately named Koppett. In 1995, TheaterWeek Magazine named Kat one of the year's "Unsung Heroes" for her creation of Spontaneous Broadway, which is now performed regularly by teams of actors all over the world. Significant contributions to the development of the format at Freestyle Rep were made by Kenn Adams, Laura Livingston, and Samuel D. Cohen. Spontaneous Broadway: At the 2000 Melbourne Fringe Festival, the show began its life in Australia and immediately won a special Fringe Award. Produced by musical director John Thorn (who secured the Australian licence from Kat Koppett) and hosted by Russell Fletcher, many of Australia's finest comic improvisors have since performed the show around the country, including sell-out performances at the Sydney Opera House, The Famous Spiegeltent, and the Melbourne International Comedy Festival, receiving rave reviews and legions of repeat-attendee fans.
**Epanorthosis** Epanorthosis: An epanorthosis is a figure of speech that signifies emphatic word replacement. "Thousands, no, millions!" is a stock example. Epanorthosis as immediate and emphatic self-correction often follows a Freudian slip (either accidental or deliberate). Etymology: The word epanorthosis, attested 1570, is from Ancient Greek epanórthōsis (ἐπανόρθωσις) "correcting, revision" < epí (ἐπί) + anorthóō (ἀνορθόω) "restore, rebuild" < ana- (ἀνα-) "up" + orthóō (ὀρθόω) "straighten" < orthós (ὀρθός) "straight, right" (hence to "straighten up"). Examples: "Seems, madam! Nay, it is; I know not 'seems.'" (Hamlet, Act 1, Scene 2) "The psychologist known as Sigmund Fraud—Freud, I mean!" "I've been doing this for six weeks!—er, days, that is." "Man has parted company with his trusty friend the horse and has sailed into the azure with the eagles, eagles being represented by the infernal combustion engine–er er, internal combustion engine. [loud laughter] Internal combustion engine! Engine!" – Winston Churchill. The words in italics are technically the epanorthoses, but all the words following the dash may be considered part of the epanorthosis as well. Striking through words is another way of demonstrating such an effect. Examples: In Aviation English phraseology, the word "correction" must be explicitly used: "climb to reach Flight Level 290 at time 58 — correction at time 55". A classic leet-like online variant, using caret notation to denote control characters, is the use of ^H (as in "We've always used COBOL^H^H^H^H") to suggest a backspace, or ^W to suggest deletion of the preceding word. Both may be repeated as necessary. A more modern variant, where markup is available on the communication client, allows the use of plain strikethrough text for humorous effect, such as "We are feeling ~~terrible~~ fine."
**Hypothalamotegmental tract** Hypothalamotegmental tract: In human neuroanatomy, the hypothalamotegmental tract is a pathway from the hypothalamus to the reticular formation. Axons from the posterior hypothalamus descend through the mesencephalic and pontine reticular formations. They connect with reticular neurons important in visceral and autonomic activity. The tract is a continuation of the medial forebrain bundle in the lateral portion of the tegmentum. It is not visible without special stains.
**Ānāpānasati Sutta** Ānāpānasati Sutta: The Ānāpānasati Sutta (Pāli) or Ānāpānasmṛti Sūtra (Sanskrit), "Breath-Mindfulness Discourse," Majjhima Nikaya 118, is a discourse that details the Buddha's instruction on using awareness of the breath (anapana) as an initial focus for meditation. The sutta includes sixteen steps of practice, and groups them into four tetrads, associating them with the four satipatthanas (placings of mindfulness). According to the American scholar-monk Thanissaro Bhikkhu, this sutta contains the most detailed meditation instructions in the Pali Canon. Versions of the text: Theravada Pali Canon The Theravada Pali Canon version of the Anapanasati Sutta lists sixteen steps to relax and compose the mind and body. The Anapanasati Sutta is a celebrated text among Theravada Buddhists. In the Theravada Pali Canon, this discourse is the 118th discourse in the Majjhima Nikaya (MN) and is thus frequently represented as "MN 118". In addition, in the Pali Text Society edition of the Pali Canon, this discourse is in the Majjhima Nikaya (M)'s third volume, starting on the 78th page, and is thus sometimes referenced as "M iii 78". Versions of the text: Summary of the Pali Canon version Benefits The Buddha states that mindfulness of the breath, "developed and repeatedly practiced, is of great fruit, great benefit." It fulfills the Four Foundations of Mindfulness (satipatthana). When these are developed and cultivated, they fulfill the Seven Factors of Enlightenment (bojjhanga). And when these are developed and cultivated, they fulfill "knowledge and freedom" (Bhikkhu Sujato), "true knowledge and deliverance" (Bhikkhu Bodhi), or "clear vision and deliverance" (Nanamoli). Versions of the text: Establishing mindfulness To develop and cultivate mindfulness of breathing, a monk goes to the wilderness or forest, or to the root of a tree, or to an empty hut, sits down with crossed legs and the body erect, and establishes mindfulness in front or right there (parimukham), and mindfully breathes in and out. Versions of the text: Four tetrads The Ānāpānasati Sutta then describes the monitoring of the breath, and relates this to various experiences and practices. Following the classification of the four satipatthanas, these experiences and practices are grouped into a list of sixteen objects or steps of instruction, generally broken into four tetrads. These sixteen core steps are among the most widely taught meditation instructions in the early Buddhist texts. They appear in various Pali suttas like the Ananda sutta, not just the Anapanasati sutta. They also appear in various Chinese translations of the Agamas (such as in a parallel version of the Ananda sutta in the Samyukta-Agama, SA 8.10) with minor differences, as well as in the Vinayas of different schools.
They are as follows:

First Tetrad: Contemplation of the Body (kāya)
1. Breathing in long, he knows (pajanati) "I am breathing in long"; breathing out long, he knows "I am breathing out long."
2. Breathing in short, he knows "I am breathing in short"; breathing out short, he knows "I am breathing out short."
3. He trains himself, "Breathing in, I experience the whole body (sabbakāya)"; "Breathing out, I experience the whole body."
4. He trains himself, "Breathing in, I calm the bodily formation (kāya-saṃskāra)"; "Breathing out, I calm the bodily formation."

Second Tetrad: Contemplation of Feeling (vedanā)
5. He trains himself, "I will breathe in experiencing joy (pīti, also translated as 'rapture')"; "I will breathe out experiencing joy."
6. He trains himself, "I will breathe in experiencing pleasure (sukha)"; "I will breathe out experiencing pleasure."
7. He trains himself, "I will breathe in experiencing the mental formation (citta-saṃskāra)"; "I will breathe out experiencing the mental formation."
8. He trains himself, "I will breathe in calming the mental formation"; "I will breathe out calming the mental formation."

Third Tetrad: Contemplation of the Mind (citta)
9. He trains himself, "I will breathe in experiencing the mind"; "I will breathe out experiencing the mind."
10. He trains himself, "I will breathe in pleasing the mind"; "I will breathe out pleasing the mind."
11. He trains himself, "I will breathe in concentrating (samādhi) the mind"; "I will breathe out concentrating the mind."
12. He trains himself, "I will breathe in releasing the mind"; "I will breathe out releasing the mind."

Fourth Tetrad: Contemplation of Mental Objects (dhammā)
13. He trains himself, "I will breathe in observing (anupassi) impermanence (anicca)"; "I will breathe out observing impermanence."
14. He trains himself, "I will breathe in observing dispassion (virāga)"; "I will breathe out observing dispassion."
15. He trains himself, "I will breathe in observing cessation (nirodha)"; "I will breathe out observing cessation."
16. He trains himself, "I will breathe in observing relinquishment (paṭinissaggā)"; "I will breathe out observing relinquishment."

Seven factors of awakening The sutra then explains how the four tetrads are correlated with the four satipatthanas. Next, the sutra explicates how contemplation of the four satipatthanas brings about the seven factors of awakening, which bring "clear knowing" and release. Versions of the text: In East Asian Buddhism The Ānāpānasmṛti Sūtra, as the text was known to Sanskritic early Buddhist schools in India, exists in several forms. There is a version of the Ānāpānasmṛti Sutra in the Ekottara Āgama preserved in the Chinese Buddhist canon. This version also teaches about the Four Dhyānas, recalling past lives, and the Divine Eye. The earliest translation of Ānāpānasmṛti instructions, however, was by An Shigao as a separate sutra (T602) in the 2nd century CE. It is not part of the Sarvastivada Madhyama Āgama, but is instead an isolated text, although the sixteen steps are found elsewhere in the Madhyama and Samyukta Āgamas. The versions preserved in the Samyukta Agama are SA 815, SA 803, and SA 810–812; these sutras have been translated into English by Thich Nhat Hanh.
Related canonical discourses: Breath mindfulness, in general, and this discourse's core instructions, in particular, can be found throughout the Pali Canon, including in the "Code of Ethics" (that is, in the Vinaya Pitaka's Parajika) as well as in each of the "Discourse Basket" (Sutta Pitaka) collections (nikaya). From these other texts, clarifying metaphors, instructional elaborations and contextual information can be gleaned. These can also be found throughout the Chinese Agamas. Related canonical discourses: Pali suttas including the core instructions In addition to being in the Anapanasati Sutta, all four of the aforementioned core instructional tetrads can also be found in the following canonical discourses: the "Greater Exhortation to Rahula Discourse" (Maha-Rahulovada Sutta, MN 62); sixteen discourses of the Samyutta Nikaya's (SN) chapter 54 (Anapana-samyutta): SN 54.1, SN 54.3–SN 54.16, SN 54.20; the "To Girimananda Discourse" (Girimananda Sutta, AN 10.60); and the Khuddaka Nikaya's Patisambhidamagga's section on the breath, Anapanakatha. The first tetrad identified above (relating to bodily mindfulness) can also be found in the following discourses: the "Great Mindfulness Arousing Discourse" (Mahasatipatthana Sutta, DN 22) and, similarly, the "Mindfulness Arousing Discourse" (Satipatthana Sutta, MN 10), in the section on Body Contemplation; and the "Mindfulness concerning the Body Discourse" (Kayagatasati Sutta, MN 119) as the first type of body-centered meditation described. Related canonical discourses: Chinese sutras with the core steps The Saṃyukta Āgama contains a section titled Ānāpānasmṛti Saṃyukta (安那般那念相應) which contains various sutras on the theme of anapanasati, including the sixteen steps. Related canonical discourses: Metaphors Hot-season rain cloud In a discourse variously entitled "At Vesali Discourse" and "Foulness Discourse" (SN 54.9), the Buddha describes "concentration by mindfulness of breathing" (ānāpānassatisamādhi) in the following manner: "Just as, bhikkhus, in the last month of the hot season, when a mass of dust and dirt has swirled up, a great rain cloud out of season disperses it and quells it on the spot, so too concentration by mindfulness of breathing, when developed and cultivated, is peaceful and sublime, an ambrosial pleasant dwelling, and it disperses and quells on the spot evil unwholesome states whenever they arise...." After stating this, the Buddha states that such an "ambrosial pleasant dwelling" is achieved by pursuing the sixteen core instructions famously identified in the Anapanasati Sutta. Related canonical discourses: The skillful turner In the "Great Mindfulness Arousing Discourse" (Mahasatipatthana Sutta, DN 22) and the "Mindfulness Arousing Discourse" (Satipatthana Sutta, MN 10), the Buddha uses the following metaphor for elaborating upon the first two core instructions: Just as a skillful turner or turner's apprentice, making a long turn, knows, "I am making a long turn," or making a short turn, knows, "I am making a short turn," just so the monk, breathing in a long breath, knows, "I am breathing in a long breath"; breathing out a long breath, he knows, "I am breathing out a long breath"; breathing in a short breath, he knows, "I am breathing in a short breath"; breathing out a short breath, he knows, "I am breathing out a short breath." Expanded contexts Great fruit, great benefit The Anapanasati Sutta refers to sixteenfold breath-mindfulness as being of "great fruit" (mahapphalo) and "great benefit" (mahānisaṃso).
"The Simile of the Lamp Discourse" (SN 54.8) states this as well and expands on the various fruits and benefits, including: unlike with other meditation subjects, with the breath one's body and eyes do not tire and one's mind, through non-clinging, becomes free of taints householder memories and aspirations are abandoned one dwells with equanimity towards repulsive and unrepulsive objects one enters and dwells in the four material absorptions (rupajhana) and the four immaterial absorptions (arupajhana) all feelings (vedana) are seen as impermanent, are detached from and, upon the death of the body, "will become cool right here." Commentaries and interpretations: Traditional commentaries Pali commentaries In traditional Pali literature, the 5th-century CE commentary (atthakatha) for this discourse can be found in two works, both attributed to Ven. Buddhaghosa: the Visuddhimagga provides commentary on the four tetrads, focusing on "concentration through mindfulness of breathing" (ānāpānassati-samādhi). the Papañcasūdanī provides commentary on the remainder of this discourse.The earlier Vimuttimagga also provides a commentary on Anapanasati, as does the late canonical Pali Paṭisambhidāmagga (ca. 2nd c. BCE). Commentaries and interpretations: Likewise, the sub-commentary to the Visuddhimagga, Paramatthamañjusā (ca. 12th c. BCE), provides additional elaborations related to Buddhaghosa's treatment of this discourse. For instance, the Paramatthamañjusā maintains that a distinction between Buddhists and non-Buddhists is that Buddhists alone practice the latter twelve instructions (or "modes") described in this sutta: "When outsiders know mindfulness of breathing, they only know the first four modes [instructions]" (Pm. 257, trans. Ñāṇamoli). Commentaries and interpretations: Sanskrit commentaries The Śrāvakabhūmi chapter of the Yogācārabhūmi-śāstra and Vasubandhu's Abhidharmakośa both contain expositions on the practice outlined in the Ānāpānasmṛti Sūtra. Commentaries and interpretations: Chinese commentaries The Chinese Buddhist monk An Shigao translated a version of the Ānāpānasmṛti Sūtra into Chinese (148-170 CE) known as the Anban shouyi jing (安般守意經, Scripture on the ānāpānasmŗti) as well as other works dealing with Anapanasati. The practice was a central feature of his teaching and that of his students who wrote various commentaries on the sutra.One work which survives from the tradition of An Shigao is the Da anban shouyi jing (佛說大安般守意經, Taishō Tripitaka No.602) which seems to include the translated sutra of anapanasmrti as well as original added commentary amalgamated within the translation. Commentaries and interpretations: Modern interpretations According to Ajahn Sujato, the ultimate goal of Anapanasati is to bear insight and understanding into the Four Foundations of Mindfulness (Satipaṭṭhāna), the Seven Factors of Awakening (Bojjhangas), and ultimately Nibbana.Different traditions (such as Sri Lankan practitioners who follow the Visuddhimagga versus Thai forest monks) interpret a number of aspects of this sutta in different ways. Below are some of the matters that have multiple interpretations: Are the 16 core instructions to be followed sequentially or concurrently (Bodhi, 2000, p. 1516; Brahm, 2006, pp. 83–101; Rosenberg, 2004)? Must one have reached the first jhana before (or in tandem with) pursuing the second tetrad (Rosenberg, 2004)? In the preparatory instructions, does the word "parimukham" mean: around the mouth (as favored by Goenka, 1998, p. 
in the chest area (as supported by a use of the word in the Vinaya), in the forefront of one's mind (as favored at times by Thanissaro), or simply "sets up mindfulness before him" (per Bodhi in Wallace & Bodhi, 2006, p. 5), "to the fore" (Thanissaro, 2006d), or "mindfulness alive" (Piyadassi, 1999)? In the first tetrad's third instruction, does the word "sabbakaya" mean: the whole "breath body" (as indicated in the sutta itself [Nanamoli, 1998, p. 7: "I say that this, bhikkhus, is a certain body among the bodies, namely, respiration."], as perhaps supported by the Patisambhidamagga [Nanamoli, 1998, p. 75], the Visuddhimagga [1991, pp. 266–267], Nyanaponika [1965, pp. 109–110], Buddhadasa [1988, p. 35], and Brahm [2006, p. 84]) or the whole "flesh body" (as supported by Bhikkhu Bodhi's revised second translation of the sutta [in Nanamoli & Bodhi, 2001, see relevant footnote to MN 118], Goenka [1988, pp. 29–30], Nhat Hanh [1988, p. 26] and Rosenberg [1998, pp. 40, 43]), and the commentary, which explains that the "body among bodies" refers to the wind element as opposed to other ways of relating to the body?

Modern expositions available in English:
Nhất Hạnh, Thích (2008). Breathe, You Are Alive: The Sutra on the Full Awareness of Breathing. Parallax Press. ISBN 978-1888375848.
Rosenberg, Larry (2004). Breath by Breath: The Liberating Practice of Insight Meditation. Shambhala Publications. ISBN 978-1590301364.
Ñánamoli Bhikkhu (translator) (2000). Mindfulness of Breathing (Ánápánasati) (PDF). Buddhist Publication Society. ISBN 9789552401671.
Analayo (2015). "Understanding and Practicing the Ānāpānasati-sutta", in Buddhist Foundations of Mindfulness (Mindfulness in Behavioral Health), 1st ed.
Buddhadasa; Santikaro Bhikkhu (translator) (1988). Mindfulness with Breathing: A Manual for Serious Beginners. Wisdom Publications, revised edition. ISBN 9780861717163.
Bhaddanta Āciṇṇa. Mindfulness of Breathing (Anapanasati).
Bhante Vimalaramsi. Breath of Love: A Guide to Mindfulness of Breathing and Loving-Kindness.
Thanissaro Bhikkhu (2012). Right Mindfulness: Memory & Ardency on the Buddhist Path.
U. Dhammajīva Thero. Towards an Inner Peace.
Upul Nishantha Gamage. Coming Alive with Mindfulness of Breathing.
Ajahn Kukrit Sotthibalo. Buddhawajana Anapanasati.
**Caffè crema** Caffè crema: Caffè crema (Italian: "cream coffee") refers to two different coffee drinks: An old name for espresso (1940s and 1950s). Caffè crema: A long espresso drink served primarily in Germany, Switzerland, Austria, and northern Italy (1980s onwards), along the Italian/Swiss and Italian/Austrian borders. In Germany it is generally known as a "Café Crème" or just "Kaffee" and is generally the default type of black coffee served, unless there is a filter machine. As a colorful term it generally means "espresso", while in technical discussions, referring to the long drink, it may more narrowly be referred to as Swiss caffè crema. There is also an Italian iced crema caffè. Caffè crema: Variant terms include "crema caffè" and the hyperforeignism "café crema" – café is French, while caffè and crema are Italian; thus "café crema" mixes French and Italian. Synonym for espresso: "Caffè crema", and the English calque "cream coffee", was the original term for modern espresso, produced by hot water under pressure, coined in 1948 by Gaggia to describe the light brown foam (crema) on espresso. The term has fallen out of use in favor of "espresso". As a colorful synonym for "espresso", the term and variants find occasional use in coffee branding, as in "Jacobs Caffè Crema" and "Kenco Café Crema". In Italy caffè crema is sometimes used for a crema-rich espresso. Swiss drink: The term "caffè crema" also refers to a long espresso drink, popular since the 1980s in Switzerland and northern Italy. It is generally served as the standard "café traditionnel" in Belgium. It is produced by running 180–240 millilitres (6.3–8.4 imp fl oz; 6.1–8.1 US fl oz) of water when brewing an espresso, primarily by using a coarser grind. It is similar to a caffè Americano or a long black, except that these latter are diluted espresso, and consist of making ("pulling") a normal (short) espresso shot and combining it with hot water. By contrast, a caffè crema extracts differently, and thus has a different flavor profile. Swiss drink: As a long, brewed rather than diluted, espresso, caffè crema is the long end of the ristretto – normale – lungo – caffè crema range, and is significantly longer than a lungo, generally twice as long. Rough brewing ratios of ristretto, normale, lungo, and caffè crema are 1:2:3:6 – a doppio ristretto will be approximately 1 oz/30 ml (crema increases the volume), normale 2 oz/60 ml, lungo 3 oz/90 ml, and caffè crema 6 oz/180 ml. However, volumes of caffè crema can vary significantly, from 4–8 oz (120 ml–240 ml) for a double shot, depending on how it is brewed and taste, and there is no widely agreed standard measure in the English-speaking world. In terms of solubles concentration, a caffè crema is approximately midway between a lungo and non-pressure brewed coffee, such as drip or press. The motivation for the caffè crema is that it produces a traditional large cup of coffee, just as brewed coffee does: the small size of espresso is due to the original Gaggia lever espresso machine of 1948 requiring manual pressure, and thus a single (solo) espresso of 30 millilitres (1.1 imp fl oz; 1.0 US fl oz) was the maximum that could practically be extracted. The development of pump-driven espresso with the 1961 Faema removed this restriction, but by then a taste had developed for the short espresso, and these continued to be produced on the new machines, with the long caffè crema emerging only in the 1980s.
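As a worked illustration of the rough 1:2:3:6 ratio above, here is a minimal sketch; the 30 ml doppio-ristretto base and the drink names are taken from this article's approximate figures and are not a brewing standard:

```python
# Scale the article's rough 1:2:3:6 ratio from an assumed ~30 ml
# doppio ristretto base; figures are approximations, not a standard.
RATIOS = {"doppio ristretto": 1, "normale": 2, "lungo": 3, "caffe crema": 6}
BASE_ML = 30  # approximate doppio ristretto volume from the article

for drink, ratio in RATIOS.items():
    print(f"{drink}: ~{ratio * BASE_ML} ml")
# doppio ristretto: ~30 ml, normale: ~60 ml,
# lungo: ~90 ml, caffe crema: ~180 ml
```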
Swiss drink: The caffè crema is not a common drink in the English-speaking world and is virtually never available in cafés because of the need to significantly change the grind compared to standard espresso. Cafés instead serve Americanos or long blacks. The caffè crema briefly appeared in Australia in the 1980s, but was replaced by the long black. Swiss drink: Brewing method As the caffè crema is very uncommon in the English-speaking world, and not widely available outside of home brewing, there are few English-language resources on how to brew it, nor is there consistency in what precisely is understood by the term. What is generally done is to coarsen the grind, but otherwise extract in much the same way as espresso, stopping the shot when it blonds, as is usual for espresso – the coarser grind resulting in greater volume, but the extraction taking approximately the same time (25–30 seconds). Some variants include tamping less or extracting for slightly longer (35–40 seconds), and coarser grinds generally result in less mass of grinds fitting into a given filter basket, leading some to prefer using triple-shot baskets to allow sufficient coffee. Swiss drink: One can make a caffè crema in a commercial setting by using the existing filter grind, which is approximately correct, in the espresso machine and otherwise brewing normally, but this would be a very unusual request. Crema caffè: In Italy, during the summer, traditional cafés (called bar, without final s, in Italian) commonly serve an iced, creamy variant of espresso called crema caffè, crema fredda di caffè, "Caffè del nonno" and so on. This requires a special spinning apparatus that keeps it constantly creamy, without flakes of ice. It can be served straight or with panna (milk cream).
**Finkbeiner test** Finkbeiner test: The Finkbeiner test, named for the science journalist Ann Finkbeiner, is a checklist to help science journalists avoid gender bias in articles about women in science. It asks writers to avoid describing women scientists in terms of stereotypically feminine traits, such as their family arrangements. The Finkbeiner test has been linked to affirmative action, because such writing can cause readers to view women in science as different from men in negative or unfair ways. The test helps avoid gender bias in science reporting, similarly to various tests that focus on under-representation of marginalized groups in different career fields. Checklist: The Finkbeiner test is a checklist proposed by freelance journalist Christie Aschwanden to help journalists avoid gender bias in media articles about women in science. To pass the test, an article about a female scientist must not mention:
That she is a woman
Her husband's job
Her childcare arrangements
How she nurtures her underlings
How she was taken aback by the competitiveness in her field
How she is a role model for other women
How she's the "first woman to ..."
History: Aschwanden formulated the test in an article published on 5 March 2013 in Double X Science, an online magazine for women. She created the test in the spirit of (but was not inspired by) the Bechdel test – used to highlight gender bias in film – in response to the sexist media coverage of women scientists she noticed. She recalled: Campaigns to recognize outstanding female scientists have led to a recognizable genre of media coverage. Let's call it "A lady who..." genre. You've seen these profiles, of course you have, because they're everywhere. The hallmark of "A lady who..." profile is that it treats its subject's sex as her most defining detail. She's not just a great scientist, she's a woman! And if she's also a wife and a mother, those roles get emphasized too. History: Aschwanden named the test after journalist Ann Finkbeiner, winner of the 2008 AIP Science Communication Award, who had earlier written a post for the science blog The Last Word on Nothing about her decision not to write about the subject of her latest profile, an astronomer, "as a woman". Both journalists agree that the test "should apply mainly to the sort of general-interest scientist profiles that one might find in The New York Times or the front section of Nature, which are supposed to focus on professional accomplishments". The point of the test is to not overemphasize or privilege the gender of a female scientist. Even Finkbeiner, who vowed to "ignore gender" in her writing, tripped up on the tendency to focus on sex; in an astronomer's profile she considered mentioning that the scientist was the "first" to win a certain award. "After a reader urged Finkbeiner to stick to her pledge, she [left out 'the first.']" The tactic of singling out women as "role models" can also distort gender equality in the reception of news reporting. Students indiscriminately cite scholars and mentors of any sex or gender as "great role models"; being a role model is not unique to a person's sex or gender identity expression. Thus, emphasizing sex in profiles about members of marginalized groups reinforces their supposed difference, perpetuating gender bias in science. Reception: The test was mentioned in the media criticism of the New York Times's obituary of rocket scientist Yvonne Brill.
That obituary, published on 30 March 2013 by Douglas Martin, began with the words: "She made a mean beef stroganoff, followed her husband from job to job and took eight years off from work to raise three children". A few hours after publication, the New York Times revised the obituary to address some of the criticisms; the revised version begins "She was a brilliant rocket scientist who followed her husband from job to job..." Another New York Times article, on Jennifer Doudna, published on 11 May 2015, drew similar criticism with reference to the Finkbeiner test. An article in The Globe and Mail on astrophysicist Victoria Kaspi, published on 16 February 2016, drew the same criticism, as did David Quammen's book A Tangled Tree, for giving women scientists, especially Lynn Margulis, short shrift. Susan Gelman, Professor of Psychology at the University of Michigan, applauded the move to report on female scientists without emphasising their gender, but questioned whether the Finkbeiner test should seek to eliminate all references to personal life, suggesting that the move should be towards asking male scientists about personal issues too. This view is shared by other writers. In addition, Vasudevan Mukunth points out in The Wire that countries in which women are drastically under-represented in science might want to bend the test's rules in hopes of highlighting any systemic barriers: "The test's usefulness rests on the myth of a level playing field—there is none in India." In another post on Last Word on Nothing, Finkbeiner responded to these questions by arguing with herself. Reversed Finkbeiner: The "Reversed Finkbeiner" approach is an exercise in which students are asked to write an article about a male scientist that would fail the Finkbeiner test if it were about a woman.
**Clofibride** Clofibride: Clofibride is a fibrate and a derivative of clofibrate. In the body it is converted into 4-chlorophenoxyisobutyric acid (clofibric acid), which is the true hypolipidemic agent. Clofibride, like clofibrate, is thus a prodrug of clofibric acid.
**Separator (oil production)** Separator (oil production): The term separator in oilfield terminology designates a pressure vessel used for separating well fluids produced from oil and gas wells into gaseous and liquid components. A separator for petroleum production is a large vessel designed to separate production fluids into their constituent components of oil, gas and water. A separating vessel may be referred to in the following ways: Oil and gas separator, Separator, Stage separator, Trap, Knockout vessel (Knockout drum, knockout trap, water knockout, or liquid knockout), Flash chamber (flash vessel or flash trap), Expansion separator or expansion vessel, Scrubber (gas scrubber), Filter (gas filter). These separating vessels are normally used on a producing lease or platform near the wellhead, manifold, or tank battery to separate fluids produced from oil and gas wells into oil and gas or liquid and gas. An oil and gas separator generally includes the following essential components and features: A vessel that includes (a) a primary separation device and/or section, (b) a secondary "gravity" settling (separating) section, (c) a mist extractor to remove small liquid particles from the gas, (d) a gas outlet, (e) a liquid settling (separating) section to remove gas or vapor from oil (on a three-phase unit, this section also separates water from oil), (f) an oil outlet, and (g) a water outlet (three-phase unit). Separator (oil production): Adequate volumetric liquid capacity to handle liquid surges (slugs) from the wells and/or flowlines. Adequate vessel diameter and height or length to allow most of the liquid to separate from the gas so that the mist extractor will not be flooded. A means of controlling an oil level in the separator, which usually includes a liquid-level controller and a diaphragm motor valve on the oil outlet. A back pressure valve on the gas outlet to maintain a steady pressure in the vessel. Separator (oil production): Pressure relief devices. Separators work on the principle that the three components have different densities, which allows them to stratify when moving slowly, with gas on top, water on the bottom and oil in the middle. Any solids such as sand will also settle in the bottom of the separator. The functions of oil and gas separators can be divided into primary and secondary functions, which are discussed later. Classification of oil and gas separators: By operating configuration Oil and gas separators can have three general configurations: vertical, horizontal, and spherical. Classification of oil and gas separators: Vertical separators can vary in size from 10 or 12 inches in diameter and 4 to 5 feet seam to seam (S to S) up to 10 or 12 feet in diameter and 15 to 25 feet S to S. Horizontal separators may vary in size from 10 or 12 inches in diameter and 4 to 5 feet S to S up to 15 to 16 feet in diameter and 60 to 70 feet S to S. Spherical separators are usually available in diameters from 24 or 30 inches up to 66 to 72 inches. Classification of oil and gas separators: Horizontal oil and gas separators are manufactured with monotube and dual-tube shells. Monotube units have one cylindrical shell, and dual-tube units have two cylindrical parallel shells, one above the other. Both types of units can be used for two-phase and three-phase service. A monotube horizontal oil and gas separator is usually preferred over a dual-tube unit.
The monotube unit has greater area for gas flow as well as a greater oil/gas interface area than is usually available in a dual-tube separator of comparable price. The monotube separator will usually afford a longer retention time because the larger single-tube vessel retains a larger volume of oil than the dual-tube separator. It is also easier to clean than the dual-tube unit. In cold climates, freezing will likely cause less trouble in the monotube unit because the liquid is usually in close contact with the warm stream of gas flowing through the separator. The monotube design normally has a lower silhouette than the dual-tube unit, and it is easier to stack monotube units for multiple-stage separation on offshore platforms where space is limited. Powers et al (1990) illustrated that vertical separators should be constructed so that the flow stream enters near the top and passes through a gas/liquid separating chamber, even though vertical units are not direct competitive alternatives to horizontal separators. Classification of oil and gas separators: By function The three configurations of separators are available for two-phase operation and three-phase operation. In the two-phase units, gas is separated from the liquid, with the gas and liquid being discharged separately. Oil and gas separators are mechanically designed such that the liquid and gas components are separated from the hydrocarbon stream at specific temperature and pressure, according to Arnold et al (2008). In three-phase separators, well fluid is separated into gas, oil, and water, with the three fluids being discharged separately. The gas-liquid separation section of the separator is sized for the maximum removal droplet size using the Souders–Brown equation with an appropriate K factor. The oil-water separation section is sized for a retention time that is provided by laboratory test data, pilot plant operating procedure, or operating experience. Where such data are not available, the retention time recommended for three-phase separators in API 12J is used. The sizing methods by K factor and retention time give proper separator sizes. According to Song et al (2010), engineers sometimes need further information for the design conditions of downstream equipment, such as liquid loading for the mist extractor, water content for the crude dehydrator/desalter, or oil content for the water treatment. Classification of oil and gas separators: By operating pressure Oil and gas separators can operate at pressures ranging from a high vacuum to 4,000 to 5,000 psi. Most oil and gas separators operate in the pressure range of 20 to 1,500 psi. Separators may be referred to as low pressure, medium pressure, or high pressure. Low-pressure separators usually operate at pressures ranging from 10 to 20 up to 180 to 225 psi. Medium-pressure separators usually operate at pressures ranging from 230 to 250 up to 600 to 700 psi. High-pressure separators generally operate in the wide pressure range from 750 to 1,500 psi. Classification of oil and gas separators: By application Oil and gas separators may be classified according to application as test separator, production separator, low temperature separator, metering separator, elevated separator, and stage separators (first stage, second stage, etc.). Classification of oil and gas separators: Test separator A test separator is used to separate and to meter the well fluids. The test separator can be referred to as a well tester or well checker.
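To make the two sizing rules just mentioned concrete, here is a minimal sketch; the K factor, fluid densities, flow rate, and retention time below are invented example values, not recommendations from API 12J or the cited authors:

```python
import math

def souders_brown_max_velocity(k_factor: float, rho_liquid: float,
                               rho_gas: float) -> float:
    """Maximum allowable gas velocity (ft/s) from the Souders-Brown
    equation: v = K * sqrt((rho_L - rho_G) / rho_G)."""
    return k_factor * math.sqrt((rho_liquid - rho_gas) / rho_gas)

def liquid_volume_for_retention(liquid_rate_bpd: float,
                                retention_minutes: float) -> float:
    """Liquid volume (bbl) needed to hold the stream for the chosen
    retention time: V = Q * t."""
    return liquid_rate_bpd * retention_minutes / (24 * 60)

# Example values only (not design guidance): K = 0.35 ft/s,
# liquid at 53 lb/ft3, gas at 2.0 lb/ft3, 5,000 bbl/d with 5 min retention.
v_max = souders_brown_max_velocity(0.35, 53.0, 2.0)
v_liq = liquid_volume_for_retention(5000.0, 5.0)
print(f"max gas velocity ~ {v_max:.2f} ft/s")     # ~1.77 ft/s
print(f"liquid holdup volume ~ {v_liq:.1f} bbl")  # ~17.4 bbl
```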
Test separators can be vertical, horizontal, or spherical. They can be two-phase or three-phase. They can be permanently installed or portable (skid or trailer mounted). Test separators can be equipped with various types of meters for measuring the oil, gas, and/or water for potential tests, periodic production tests, marginal well tests, etc. Classification of oil and gas separators: Production separator A production separator is used to separate the produced well fluid from a well, group of wells, or a lease on a daily or continuous basis. Production separators can be vertical, horizontal, or spherical. They can be two-phase or three-phase. Production separators range in size from 12 in. to 15 ft in diameter, with most units ranging from 30 in. to 10 ft in diameter. They range in length from 6 to 70 ft, with most from 10 to 40 ft long. Classification of oil and gas separators: Low-temperature separator A low-temperature separator is a special separator in which high-pressure well fluid is jetted into the vessel through a choke or pressure-reducing valve so that the separator temperature is reduced appreciably below the well-fluid temperature. The temperature reduction is obtained by the Joule–Thomson effect of expanding well fluid as it flows through the pressure-reducing choke or valve into the separator. The lower operating temperature in the separator causes condensation of vapors that otherwise would exit the separator in the vapor state. Liquids thus recovered require stabilization to prevent excessive evaporation in the storage tanks. Classification of oil and gas separators: Metering separator The function of separating well fluids into oil, gas, and water and metering the liquids can be accomplished in one vessel. These vessels are commonly referred to as metering separators and are available for two-phase and three-phase operation. These units are available in special models that make them suitable for accurately metering foaming and heavy viscous oil. Primary functions of oil and gas separators: Separation of oil from gas may begin as the fluid flows through the producing formation into the well bore and may progressively increase through the tubing, flow lines, and surface handling equipment. Under certain conditions, the fluid may be completely separated into liquid and gas before it reaches the oil and gas separator. In such cases, the separator vessel affords only an "enlargement" to permit gas to ascend to one outlet and liquid to descend to another. Primary functions of oil and gas separators: Removal of oil from gas Difference in density of the liquid and gaseous hydrocarbons may accomplish acceptable separation in an oil and gas separator. However, in some instances, it is necessary to use mechanical devices commonly referred to as "mist extractors" to remove liquid mist from the gas before it is discharged from the separator. Also, it may be desirable or necessary to use some means to remove nonsolution gas from the oil before the oil is discharged from the separator. Primary functions of oil and gas separators: Removal of gas from oil The physical and chemical characteristics of the oil and its conditions of pressure and temperature determine the amount of gas it will contain in solution. The rate at which the gas is liberated from a given oil is a function of change in pressure and temperature.
The volume of gas that an oil and gas separator will remove from crude oil is dependent on (1) physical and chemical characteristics of the crude, (2) operating pressure, (3) operating temperature, (4) rate of throughput, (5) size and configuration of the separator, and (6) other factors. Primary functions of oil and gas separators: Agitation, heat, special baffling, coalescing packs, and filtering materials can assist in the removal of nonsolution gas that otherwise may be retained in the oil because of the viscosity and surface tension of the oil. Gas, being the lightest phase, is removed from the top of the drum. Oil and water are separated by a baffle at the end of the separator, which is set at a height close to the oil-water contact, allowing oil to spill over onto the other side while trapping water on the near side. The two fluids can then be piped out of the separator from their respective sides of the baffle. The produced water is then either injected back into the oil reservoir, disposed of, or treated. The bulk level (gas-liquid interface) and the oil-water interface are determined using instrumentation fixed to the vessel. Valves on the oil and water outlets are controlled to ensure the interfaces are kept at their optimum levels for separation to occur. The separator will only achieve bulk separation. The smaller droplets of water will not settle by gravity and will remain in the oil stream. Normally the oil from the separator is routed to a coalescer to further reduce the water content. Primary functions of oil and gas separators: Separation of water from oil The production of water with oil continues to be a problem for engineers and oil producers. Since 1865, when water was first coproduced with hydrocarbons, separation of valuable hydrocarbons from disposable water has challenged and frustrated the oil industry. According to Rehm et al (1983), innovation over the years has led from the skim pit to installation of the stock tank, to the gunbarrel, to the freewater knockout, to the hay-packed coalescer and most recently to the Performax Matrix Plate Coalescer, an enhanced gravity settling separator. The history of water treating has for the most part been sketchy and spartan. There is little economic value to the produced water, and it represents an extra cost for the producer to arrange for its disposal. Primary functions of oil and gas separators: Today, oil fields produce greater quantities of water than they produce oil. Along with greater water production come emulsions and dispersions, which are more difficult to treat. The separation process becomes interlocked with a myriad of contaminants as the last drop of oil is being recovered from the reservoir. In some instances it is preferable to separate and to remove water from the well fluid before it flows through pressure reductions, such as those caused by chokes and valves. Such water removal may prevent difficulties that could be caused downstream by the water, such as corrosion, a chemical reaction that occurs whenever a gas or liquid chemically attacks an exposed metallic surface. Corrosion is usually accelerated by warm temperatures and likewise by the presence of acids and salts. Primary functions of oil and gas separators: Other factors that affect the removal of water from oil include hydrate formation and the formation of tight emulsions that may be difficult to resolve into oil and water.
The water can be separated from the oil in a three-phase separator by use of chemicals and gravity separation. If the three-phase separator is not large enough to separate the water adequately, it can be separated in a free-water knockout vessel installed upstream or downstream of the separators. Secondary functions: Maintenance of optimum pressure on separator For an oil and gas separator to accomplish its primary functions, pressure must be maintained in the separator so that the liquid and gas can be discharged into their respective processing or gathering systems. Pressure is maintained on the separator by use of a gas backpressure valve on each separator or with one master backpressure valve that controls the pressure on a battery of two or more separators. The optimum pressure to maintain on a separator is the pressure that will result in the highest economic yield from the sale of the liquid and gaseous hydrocarbons. Secondary functions: Maintenance of liquid seal in separator To maintain pressure on a separator, a liquid seal must be effected in the lower portion of the vessel. This liquid seal prevents loss of gas with the oil and requires the use of a liquid-level controller and a valve. Methods used to remove oil from gas: Effective oil-gas separation is important not only to ensure that the required export quality is achieved but also to prevent problems in downstream process equipment and compressors. Once the bulk liquid has been knocked out, which can be achieved in many ways, the remaining liquid droplets are separated from the gas by a demisting device. Until recently the main technologies used for this application were reverse-flow cyclones, mesh pads and vane packs. More recently, new devices with higher gas-handling capacity have been developed, which have enabled potential reductions in scrubber vessel size. There are several new concepts currently under development in which the fluids are degassed upstream of the primary separator. These systems are based on centrifugal and turbine technology and have additional advantages in that they are compact and motion insensitive, hence ideal for floating production facilities. Below are some of the ways in which oil is separated from gas in separators. Methods used to remove oil from gas: Density difference (gravity separation) Natural gas is lighter than liquid hydrocarbon. Minute particles of liquid hydrocarbon that are temporarily suspended in a stream of natural gas will, by density difference or force of gravity, settle out of the stream of gas if the velocity of the gas is sufficiently low. The larger droplets of hydrocarbon will quickly settle out of the gas, but the smaller ones will take longer. At standard conditions of pressure and temperature, the droplets of liquid hydrocarbon may have a density 400 to 1,600 times that of natural gas. However, as the operating pressure and temperature increase, the difference in density decreases. At an operating pressure of 800 psig, the liquid hydrocarbon may be only 6 to 10 times as dense as the gas. Thus, operating pressure materially affects the size of the separator and the size and type of mist extractor required to adequately separate the liquid and gas. The fact that the liquid droplets may have a density 6 to 10 times that of the gas may indicate that droplets of liquid would quickly settle out of and separate from the gas.
However, this may not occur because the particles of liquid may be so small that they tend to "float" in the gas and may not settle out of the gas stream in the short period of time the gas is in the oil and gas separator. As the operating pressure on a separator increases, the density difference between the liquid and gas decreases. For this reason, it is desirable to operate oil and gas separators at as low a pressure as is consistent with other process variables, conditions, and requirements. Methods used to remove oil from gas: Impingement If a flowing stream of gas containing liquid mist is impinged against a surface, the liquid mist may adhere to and coalesce on the surface. After the mist coalesces into larger droplets, the droplets will gravitate to the liquid section of the vessel. If the liquid content of the gas is high, or if the mist particles are extremely fine, several successive impingement surfaces may be required to effect satisfactory removal of the mist. Methods used to remove oil from gas: Change of flow direction When the direction of flow of a gas stream containing liquid mist is changed abruptly, inertia causes the liquid to continue in the original direction of flow. Separation of liquid mist from the gas thus can be effected because the gas will more readily assume the change of flow direction and will flow away from the liquid mist particles. The liquid thus removed may coalesce on a surface or fall to the liquid section below. Methods used to remove oil from gas: Change of flow velocity Separation of liquid and gas can be effected with either a sudden increase or decrease in gas velocity. Both conditions use the difference in inertia of gas and liquid. With a decrease in velocity, the higher inertia of the liquid mist carries it forward and away from the gas. The liquid may then coalesce on some surface and gravitate to the liquid section of the separator. With an increase in gas velocity, the higher inertia of the liquid causes the gas to move away from the liquid, and the liquid may fall to the liquid section of the vessel. Methods used to remove oil from gas: Centrifugal force If a gas stream carrying liquid mist flows in a circular motion at sufficiently high velocity, centrifugal force throws the liquid mist outward against the walls of the container. Here the liquid coalesces into progressively larger droplets and finally gravitates to the liquid section below. Centrifugal force is one of the most effective methods of separating liquid mist from gas. However, according to Keplinger (1931), some separator designers have pointed out a disadvantage: a liquid with a free surface rotating as a whole will have its surface curved, with its lowest point lying on the axis of rotation. The false level so created may cause difficulty in regulating the fluid-level control on the separator. This is largely overcome by placing vertical quieting baffles, which should extend from the bottom of the separator to above the outlet. Efficiency of this type of mist extractor increases as the velocity of the gas stream increases. Thus, for a given rate of throughput, a smaller centrifugal separator will suffice. Methods used to remove gas from oil: Because of higher prices for natural gas, the widespread reliance on metering of liquid hydrocarbons, and other reasons, it is important to remove all nonsolution gas from crude oil during field processing.
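Before turning to those methods, the gravity-settling discussion above can be made concrete with a minimal sketch of Stokes-law settling; the droplet sizes and fluid properties below are assumed illustrative values, not field data:

```python
# Illustrative sketch only: Stokes-law terminal velocity for a liquid
# droplet settling in gas, v = g * d^2 * (rho_L - rho_G) / (18 * mu).
G = 9.81  # gravitational acceleration, m/s^2

def stokes_settling_velocity(diameter_m: float, rho_liquid: float,
                             rho_gas: float, gas_viscosity: float) -> float:
    """Terminal settling velocity (m/s) of a small droplet in gas."""
    return G * diameter_m**2 * (rho_liquid - rho_gas) / (18 * gas_viscosity)

# Assumed example properties: oil droplet ~800 kg/m3, pressurized gas
# ~50 kg/m3, gas viscosity ~1.2e-5 Pa.s.
for d_micron in (10, 100):
    v = stokes_settling_velocity(d_micron * 1e-6, 800.0, 50.0, 1.2e-5)
    print(f"{d_micron} um droplet settles at ~{v * 1000:.1f} mm/s")
# A 10 um droplet settles at only a few mm/s, which is why fine mist
# can be carried out with the gas unless a mist extractor intervenes.
```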
Methods used to remove gas from crude oil in oil and gas separators are discussed below: Agitation Moderate, controlled agitation, which can be defined as movement of the crude oil with sudden force, is usually helpful in removing nonsolution gas that may be mechanically locked in the oil by surface tension and oil viscosity. Agitation usually will cause the gas bubbles to coalesce and to separate from the oil in less time than would be required if agitation were not used. Methods used to remove gas from oil: Heat Heat is a form of energy that is transferred from one body to another as a result of a difference in temperature. Heating the oil reduces its surface tension and viscosity and thus assists in releasing gas that is hydraulically retained in the oil. The most effective method of heating crude oil is to pass it through a heated-water bath. A spreader plate that disperses the oil into small streams or rivulets increases the effectiveness of the heated-water bath. Upward flow of the oil through the water bath affords slight agitation, which is helpful in coalescing and separating entrained gas from the oil. A heated-water bath is probably the most effective method of removing foam bubbles from foaming crude oil. A heated-water bath is not practical in most oil and gas separators, but heat can be added to the oil by direct or indirect fired heaters and/or heat exchangers, or heated free-water knockouts or emulsion treaters can be used to obtain a heated-water bath. Methods used to remove gas from oil: Centrifugal force Centrifugal force, which can be defined as a fictitious force peculiar to a particle moving on a circular path that has the same magnitude and dimensions as the force that keeps the particle on its circular path (the centripetal force) but points in the opposite direction, is effective in separating gas from oil. The heavier oil is thrown outward against the wall of the vortex retainer while the gas occupies the inner portion of the vortex. A properly shaped and sized vortex will allow the gas to ascend while the liquid flows downward to the bottom of the unit. Flow measurements: The direction of flow in and around a separator, along with the flow instruments, is usually illustrated on the piping and instrumentation diagram (P&ID). These flow instruments include the flow indicator (FI), flow transmitter (FT) and flow controller (FC). Flow is of paramount importance in the oil and gas industry: as a major process variable, its understanding helps engineers produce better designs and enables them to confidently carry out additional research. Mohan et al (1999) carried out research into the design and development of separators for a three-phase flow system. The purpose of the study was to investigate the complex multiphase hydrodynamic flow behaviour in a three-phase oil and gas separator. A mechanistic model was developed alongside a computational fluid dynamics (CFD) simulator. These were then used to carry out detailed experiments on the three-phase separator. The experimental and CFD simulation results were suitably integrated with the mechanistic model. The simulation time for the experiment was 20 seconds, the oil specific gravity was 0.885, and the separator's lower-part length and diameter were 4 ft and 3 in. respectively.
The first set of experiments became the basis for detailed investigations and similar simulation studies at different flow velocities and other operating conditions. Flow calibration: As earlier stated, flow instruments that function with the separator in an oil and gas environment include the flow indicator, flow transmitter and flow controller. Due to maintenance (which will be discussed later) or due to high usage, these flowmeters need to be calibrated from time to time. Calibration can be defined as the process of referencing signals of known quantity that have been predetermined to suit the required range of measurements. Calibration can also be seen from a mathematical point of view, in which the flowmeters are standardized by determining the deviation from the predetermined standard so as to ascertain the proper correction factors. In determining the deviation from the predetermined standard, the actual flowrate is usually first determined with a master meter, a type of flowmeter that has been calibrated to a high degree of accuracy, or by weighing the flow so as to obtain a gravimetric reading of the mass flow. Flow calibration: Another type of meter used is the transfer meter. However, according to Ting et al (1989), transfer meters have proven to be less accurate if the operating conditions are different from their original calibration points. According to Yoder (2000), the types of flowmeters used as master meters include turbine meters, positive displacement meters, venturi meters, and Coriolis meters. In the U.S., master meters are often calibrated at a flow lab that has been certified by the National Institute of Standards and Technology (NIST). NIST certification of a flowmeter lab means that its methods have been approved by NIST. Normally, this includes NIST traceability, meaning that the standards used in the flowmeter calibration process have been certified by NIST or are linked back to standards that have been approved by NIST. However, there is a general belief in the industry that the second method, which involves the gravimetric weighing of the amount of fluid (liquid or gas) that actually flows through the meter into or out of a container during the calibration procedure, is the most ideal method for measuring the actual amount of flow. The weighing scale used for this method also has to be traceable to NIST. In ascertaining a proper correction factor, there is often no simple hardware adjustment to make the flowmeter start reading correctly. Instead, the deviation from the correct reading is recorded at a variety of flowrates. The data points are plotted, comparing the flowmeter output to the actual flowrate as determined by the NIST-traceable master meter or weigh scale. Controls and features: Controls The controls required for oil and gas separators are liquid-level controllers for the oil and oil/water interface (three-phase operation) and a gas back-pressure control valve with pressure controller. Although controls are expensive and add to the cost of operating fields with separators, their installation has resulted in substantial savings in overall operating expense, as in the case of the 70 gas wells in the Big Piney, Wyo., field cited by Fair (1968). The wells with separators were located above 7,200 ft elevation, ranging upward to 9,000 ft.
Control installations were sufficiently automated that field operations around the controllers could be run from a remote-control station at the field office using a distributed control system. All in all, this improved the efficiency of personnel and the operation of the field, with a corresponding increase in production from the area. Controls and features: Valves The valves required for oil and gas separators are oil discharge control valve, water-discharge control valve (three-phase operation), drain valves, block valves, pressure relief valves, and emergency shutdown (ESD) valves. ESD valves typically stay in the open position for months or years awaiting a command signal to operate. Little attention is paid to these valves outside of scheduled turnarounds, and the pressures of continuous production often stretch these intervals even longer. This leads to buildup or corrosion on these valves that prevents them from moving. For safety-critical applications, it must be ensured that the valves operate upon demand. Controls and features: Accessories The accessories required for oil and gas separators are pressure gauges, thermometers, pressure-reducing regulators (for control gas), level sight glasses, safety head with rupture disk, piping, and tubing. Controls and features: Safety features Oil and gas separators should be installed at a safe distance from other lease equipment. Where they are installed on offshore platforms or in close proximity to other equipment, precautions should be taken to prevent injury to personnel and damage to surrounding equipment in case the separator or its controls or accessories fail. The following safety features are recommended for most oil and gas separators. Controls and features: High- and low-liquid-level controls High- and low-liquid-level controls are normally float-operated pilots that actuate a valve on the inlet to the separator, open a bypass around the separator, sound a warning alarm, or perform some other pertinent function to prevent damage that might result from high or low liquid levels in the separator. Controls and features: High- and low-pressure controls High- and low-pressure controls are installed on separators to prevent excessively high or low pressures from interfering with normal operations. These high- and low-pressure controls can be mechanical, pneumatic, or electric and can sound a warning, actuate a shut-in valve, open a bypass, or perform other pertinent functions to protect personnel, the separator, and surrounding equipment. Controls and features: High- and low-temperature controls Temperature controls may be installed on separators to shut in the unit, to open or to close a bypass to a heater, or to sound a warning should the temperature in the separator become too high or too low. Such temperature controls are not normally used on separators, but they may be appropriate in special cases. According to Francis (1951), low-temperature control in separators is another tool used by gas producers, finding its application in high-pressure gas fields, usually referred to as "vapour-phase" reservoirs. Low temperatures obtainable from the expansion of these high-pressure gas streams are used to profitable advantage. More efficient recovery of hydrocarbon condensate and a greater degree of gas dehydration, compared with a conventional heater-and-separator installation, are the major advantages of low-temperature controls in oil and gas separators.
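Returning briefly to the flow-calibration discussion above, here is a minimal sketch of how a correction (meter) factor might be derived from calibration points and applied to indicated readings; all rates below are invented for illustration:

```python
# Hedged sketch of deriving a correction (meter) factor: the reference
# rate comes from a master meter or weigh scale, per the calibration
# discussion above. All numbers are invented example data.
from statistics import mean

# (indicated_rate, reference_rate) pairs recorded at several flowrates
calibration_points = [(98.0, 100.0), (197.0, 200.0), (395.0, 400.0)]

# Meter factor at each point: reference / indicated
factors = [ref / ind for ind, ref in calibration_points]
meter_factor = mean(factors)

def corrected_rate(indicated: float) -> float:
    """Apply the averaged meter factor to an indicated reading."""
    return indicated * meter_factor

print(f"meter factor ~ {meter_factor:.4f}")            # ~1.0161
print(f"corrected reading for 250.0 -> {corrected_rate(250.0):.1f}")
```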
Controls and features: Safety relief valves A spring-loaded safety relief valve is usually installed on all oil and gas separators. These valves normally are set at the design pressure of the vessel. Safety relief valves serve primarily as a warning and in most instances are too small to handle the full rated fluid capacity of the separator. Full-capacity safety relief valves can be used and are particularly recommended when no safety head (rupture disk) is used on the separator. Controls and features: Safety heads or rupture disks A safety head or rupture disk is a device containing a thin metal membrane designed to rupture when the pressure in the separator exceeds a predetermined value, usually from 1 1/4 to 1 1/2 times the design pressure of the separator vessel. The safety head disk is usually selected so that it will not rupture until the safety relief valve has opened and proven incapable of preventing excessive pressure buildup in the separator. Operation and maintenance considerations: Over the life of a production system, the separator is expected to process a wide range of produced fluids. With breakthrough from waterflood and expanded gas-lift circulation, the water cut and gas-oil ratio of the produced fluid are ever-changing. In many instances, the separator fluid loading may exceed the original design capacity of the vessel. As a result, many operators find their separator no longer able to meet the required oil and water effluent standards, or experience high liquid carry-over in the gas, according to Power et al. (1990). Some operational and maintenance considerations are discussed below: Periodic inspection In refineries and processing plants, it is normal practice to inspect all pressure vessels and piping periodically for corrosion and erosion. In the oil fields this practice is not generally followed; where inspections do occur, they are at a predetermined frequency, normally decided by a risk-based inspection (RBI) assessment, and equipment is otherwise replaced only after actual failure. This policy may create hazardous conditions for operating personnel and surrounding equipment. It is recommended that periodic inspection schedules for all pressure equipment be established and followed to protect against undue failures. Operation and maintenance considerations: Installation of safety devices All safety relief devices should be installed as close to the vessel as possible and in such a manner that the reaction force from exhausting fluids will not break off, unscrew, or otherwise dislodge the safety device. The discharge from safety devices should not endanger personnel or other equipment. Operation and maintenance considerations: Low temperature Separators should be operated above the hydrate-formation temperature. Otherwise hydrates may form in the vessel and partially or completely plug it, reducing the capacity of the separator. In some instances a plugged or restricted liquid or gas outlet causes the safety valve to open or the safety head to rupture. Steam coils can be installed in the liquid section of oil and gas separators to melt hydrates that may form there; this is especially appropriate on low-temperature separators. Operation and maintenance considerations: Corrosive fluids A separator handling corrosive fluid should be checked periodically to determine whether remedial work is required. Extreme cases of corrosion may require a reduction in the rated working pressure of the vessel. Periodic hydrostatic testing is recommended, especially if the fluids being handled are corrosive.
Expendable anodes can be used in separators to protect them against electrolytic corrosion. Some operators determine separator shell and head thickness with ultrasonic thickness indicators and calculate the maximum allowable working pressure from the remaining metal thickness. This should be done yearly offshore and every two to four years onshore.
**PDE2A** PDE2A: cGMP-dependent 3',5'-cyclic phosphodiesterase is an enzyme that in humans is encoded by the PDE2A gene.
**Achilles' heel** Achilles' heel: An Achilles' heel (or Achilles heel) is a weakness in spite of overall strength, which can lead to downfall. While the mythological origin refers to a physical vulnerability, idiomatic references to other attributes or qualities that can lead to downfall are common. Origin: In Greek mythology, when Achilles was an infant, it was foretold that he would perish at a young age. To prevent his death, his mother Thetis took Achilles to the River Styx, which was supposed to offer powers of invulnerability. She dipped his body into the water but, because she held him by his heel, it was not touched by the water of the river. Achilles grew up to be a man of war who survived many great battles. Origin: Although the death of Achilles was predicted by Hector in Homer's Iliad, it does not actually occur in the Iliad; it is described in later Greek and Roman poetry and drama concerning events after the Iliad, later in the Trojan War. In the myths surrounding the war, Achilles was said to have died from a wound to his heel, ankle, or torso, which was the result of an arrow, possibly poisoned, shot by Paris. The Iliad may purposefully suppress the myth to emphasise Achilles' human mortality and the stark chasm between gods and heroes. Classical myths attribute Achilles's invulnerability to his mother Thetis having treated him with ambrosia and burned away his mortality in the hearth fire, except on the heel by which she held him. Peleus, his father, discovered the treatment and was alarmed to see Thetis holding the baby in the flames; his interruption offended her, and she left the treatment incomplete. According to a myth arising later, his mother had dipped the infant Achilles in the river Styx, holding onto him by his heel, and he became invulnerable where the waters touched him, that is, everywhere except the areas of his heel that were covered by her thumb and forefinger. As expression: As an expression meaning "area of weakness, vulnerable spot," the use of "Achilles' heel" dates only to 1840, with implied use in Samuel Taylor Coleridge's "Ireland, that vulnerable heel of the British Achilles!" from 1810 (Oxford English Dictionary). Anatomy: The oldest-known written record of the Achilles tendon being named after Achilles is from 1693, by the Flemish/Dutch anatomist Philip Verheyen. In his widely used text Corporis Humani Anatomia he described the tendon's location and said that it was commonly called "the cord of Achilles." The large and prominent tendon of the gastrocnemius, soleus, and plantaris muscles of the calf is called the tendo achilleus or Achilles tendon. This is commonly associated with the site of Achilles's death wound. Tendons are avascular, so such an injury would be unlikely to be fatal if the arrow were not poisoned.
**Anti-CD3 monoclonal antibody** Anti-CD3 monoclonal antibody: An anti-CD3 monoclonal antibody is one that binds to CD3 on the surface of T cells. They are immunosuppressive drugs. The first to be approved was muromonab-CD3 in 1986, to treat transplant rejection. Newer monoclonal antibodies with the same mechanism of action include otelixizumab, teplizumab and visilizumab. They are being investigated for the treatment of other conditions like Crohn's disease, ulcerative colitis and type 1 diabetes, and for inducing immune tolerance.
**Boot flag** Boot flag: A boot flag is a 1-byte value in a non-extended partition record within a master boot record (MBR). It occupies the first byte of a partition record; the value 0x80 marks the partition as active (boot flag set), the value 0x00 indicates the boot flag is not set, and any other value is invalid. Boot flag: Its primary function is to indicate to an MS-DOS/Microsoft Windows-type boot loader which partition to boot. In some cases it is used by Windows XP/2000 to assign the active partition the letter "C:". The active partition is the partition where the boot flag is set, and DOS and Windows allow only one boot partition to be set with the boot flag. Other boot loaders used by third-party boot managers (such as GRUB or XOSL) can be installed to a master boot record and can boot primary or extended partitions that do not have the boot flag set. Boot flag: There are many disk editors that can modify the boot flag, such as Disk Management in Windows, GParted in Linux, and fdisk. Some BIOSes check that the boot flag of at least one partition is set and otherwise ignore the device in the boot order. Therefore, even if the boot loader does not need the flag, it has to be set for the BIOS to start the boot code; a short sketch of reading the flag follows.
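To make the on-disk layout concrete, the following Python sketch reads the boot flags from a raw MBR image. The file name disk.img is a placeholder; the offsets follow the standard MBR layout (partition table at byte 0x1BE, four 16-byte entries, boot signature 0x55 0xAA at bytes 510–511).

```python
MBR_SIZE = 512
PART_TABLE_OFFSET = 0x1BE  # start of the four 16-byte partition entries
ENTRY_SIZE = 16

def print_boot_flags(path: str = "disk.img") -> None:
    with open(path, "rb") as f:
        mbr = f.read(MBR_SIZE)
    # A valid MBR ends with the boot signature 0x55 0xAA.
    if mbr[510:512] != b"\x55\xaa":
        raise ValueError("missing MBR boot signature")
    for i in range(4):
        offset = PART_TABLE_OFFSET + i * ENTRY_SIZE
        flag = mbr[offset]  # the first byte of each entry is the boot flag
        if flag == 0x80:
            status = "active (boot flag set)"
        elif flag == 0x00:
            status = "inactive"
        else:
            status = f"invalid flag 0x{flag:02x}"
        print(f"partition {i + 1}: {status}")

print_boot_flags()
```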
**Dirty dog exercise** Dirty dog exercise: Dirty dog exercise or hip side lifts or fire hydrant exercise is an exercise that is meant to strengthen the hips and buttocks, without the use of weights. It is so named due to resemblance to the way a dog urinates. The exercise also improves core stability.
**Rack lift** Rack lift: A rack lift is a type of elevator consisting of a cage attached to vertical rails affixed to the walls of a tower or shaft. It is propelled up and down by an electric motor driving a pinion gear, which engages a rack gear that is also attached to the wall between the rails.
**Interferon regulatory factors** Interferon regulatory factors: Interferon regulatory factors (IRF) are proteins which regulate transcription of interferons (see regulation of gene expression). Interferon regulatory factors contain a conserved N-terminal region of about 120 amino acids, which folds into a structure that binds specifically to the IRF-element (IRF-E) motifs located upstream of the interferon genes; the remaining parts of the sequence vary depending on the precise function of the protein. Some viruses have evolved defense mechanisms that regulate and interfere with IRF functions to escape the host immune system. For instance, the Kaposi sarcoma herpesvirus, KSHV, is a cancer virus that encodes four different IRF-like genes, including vIRF1, a transforming oncoprotein that inhibits type I interferon activity. In addition, the expression of IRF genes is under epigenetic regulation by promoter DNA methylation. Role in IFN signaling: IRFs primarily regulate type I IFNs in the host after pathogen invasion and are considered the crucial mediators of an antiviral response. Following a viral infection, pathogens are detected by pattern recognition receptors (PRRs), including various types of Toll-like receptors (TLRs) and cytosolic PRRs, in the host cell. The downstream signaling pathways from PRR activation phosphorylate ubiquitously expressed IRFs (IRF1, IRF3, and IRF7) through IRF kinases, such as TANK-binding kinase 1 (TBK1). Phosphorylated IRFs translocate to the nucleus, where they bind to IRF-E motifs and activate the transcription of type I IFNs. In addition to IFNs, IRF1 and IRF5 have been found to induce transcription of pro-inflammatory cytokines. Role in IFN signaling: Some IRFs, such as IRF2 and IRF4, regulate the activation of IFNs and pro-inflammatory cytokines through inhibition. IRF2 contains a repressor region that downregulates expression of type I IFNs. IRF4 competes with IRF5 and inhibits its sustained activity. Role in immune cell development: In addition to the signal transduction functions of IRFs in innate immune responses, multiple IRFs (IRF1, IRF2, IRF4, and IRF8) play essential roles in the development of immune cells, including dendritic, myeloid, natural killer (NK), B, and T cells. Dendritic cells (DCs) are a group of heterogeneous cells that can be divided into different subsets with distinct functions and developmental programs. IRF4 and IRF8 specify and direct the differentiation of different subsets of DCs by stimulating subset-specific gene expression. For example, IRF4 is required for the generation of CD4+ DCs, whereas IRF8 is essential for CD8α+ DCs. In addition to IRF4 and IRF8, IRF1 and IRF2 are also involved in DC subset development. Role in immune cell development: IRF8 has also been implicated in the promotion of macrophage development from common myeloid progenitors (CMPs) and the inhibition of granulocytic differentiation during the divergence of granulocytes and monocytes. IRF8 and IRF4 are also involved in the regulation of B- and T-cell development at multiple stages. They function redundantly to drive common lymphoid progenitors (CLPs) to the B-cell lineage, and both are required in the regulation of germinal center (GC) B-cell differentiation. Role in diseases: IRFs are critical regulators of immune responses and immune cell development, and abnormalities in IRF expression and function have been linked to numerous diseases.
Due to their critical role in IFN type I activation, IRFs are implicated in autoimmune diseases that are linked to activation of IFN type I system, such as systemic lupus erythematosus (SLE). Accumulating evidence also indicates that IRFs play a major role in the regulation of cellular responses linked to oncogenesis. In addition to autoimmune diseases and cancers, IRFs are also found to be involved in the pathogenesis of metabolic, cardiovascular, and neurological diseases, such as hepatic steatosis, diabetes, cardiac hypertrophy, atherosclerosis, and stroke. Genes: IRF1 IRF2 IRF3 IRF4 IRF5 IRF6 IRF7 IRF8 IRF9
**Neat Volume** Neat Volume: In civil engineering and construction, the neat volume is a theoretical amount of material. For earthworks, it can refer to the volume either before native material is disturbed by excavation, or after placement and compaction is complete. A percentage is typically added to neat volume to estimate loose (i.e. uncompacted) volumes for procurement purposes. With concrete work, neat volume is calculated assuming there is no bowing in the formwork, or, for cast-in-place concrete, that the surfaces in contact with the concrete have no voids or imperfections that would require a greater volume of concrete to fill.
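As a simple illustration of the procurement adjustment described above, the Python sketch below adds a percentage to a neat volume. The 25% allowance and the quantities are hypothetical example figures, since the appropriate percentage depends on the material.

```python
def procurement_volume(neat_m3: float, allowance_pct: float) -> float:
    """Estimate the loose (uncompacted) volume to procure by adding a
    percentage allowance to the neat volume."""
    return neat_m3 * (1 + allowance_pct / 100)

# Example: 1,000 m3 neat volume of compacted fill with a 25% allowance
# (an illustrative figure, not a recommendation for any specific material).
print(procurement_volume(1000.0, 25.0))  # -> 1250.0 m3 to procure
```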
**Deflectin** Deflectin: A deflectin is one of a family of antibiotic chemicals produced by Aspergillus deflectus which contain a 6H-furo[2,3-h]-2-benzopyran-6,8(6aH)-dione core. Deflectins are yellow crystalline substances when pure. They react with ammonia, replacing an oxygen atom in the six-membered ring with an NH group. They are weak acids. On adding a strong base to an alcoholic solution of deflectin, it shows a red colour for a short time. Deflectin: Deflectin 1a contains a 1-oxooctyl side chain. It has a melting point of 161 °C. Deflectin 1b contains a ten-carbon side chain and melts at 152 °C. Deflectin 1c has a 12-carbon side chain and melts at 141 °C. Deflectin 2a melts at 122 °C. It has a 10-carbon side chain with a 2-methyl branch. Deflectin 2b is similar but its side chain is two atoms longer. It melts at 111 °C.
**Evolutionary mismatch** Evolutionary mismatch: Evolutionary mismatch (also "mismatch theory" or "evolutionary trap") is the evolutionary biology concept that a previously advantageous trait may become maladaptive due to change in the environment, especially when change is rapid. It is said this can take place in humans as well as other animals. Evolutionary mismatch: Environmental change leading to evolutionary mismatch can be broken down into two major categories: temporal (change of the existing environment over time, e.g. climate change) or spatial (placing organisms into a new environment, e.g. a population migrating). Since environmental change occurs naturally and constantly, there will certainly be examples of evolutionary mismatch over time. However, because large-scale natural environmental change, like a natural disaster, is rare, it is less often observed. Another more prevalent kind of environmental change is anthropogenic (human-caused). In recent times, humans have had a large, rapid, and trackable impact on the environment, creating scenarios where it is easier to observe evolutionary mismatch. Because of the mechanism of evolution by natural selection, the environment ("nature") determines ("selects") which traits will persist in a population. There is therefore a gradual weeding out of disadvantageous traits over several generations as the population becomes more adapted to its environment. Any significant change in a population's traits that cannot be attributed to other factors (such as genetic drift and mutation) will be in response to a change in that population's environment; in other words, natural selection is inherently reactive. Shortly following an environmental change, traits that evolved in the previous environment, whether they were advantageous or neutral, persist for several generations in the new environment. Because evolution is gradual and environmental changes often occur very quickly on a geological scale, there is always a period of "catching up" as the population evolves to become adapted to the environment. It is this temporary period of "disequilibrium" that is referred to as mismatch. Mismatched traits are ultimately addressed in one of several possible ways: the organism may evolve such that the maladaptive trait is no longer expressed, the organism may decline and/or become extinct as a result of the disadvantageous trait, or the environment may change such that the trait is no longer selected against. History: As evolutionary thought became more prevalent, scientists studied and attempted to explain the existence of disadvantageous traits, known as maladaptations, that are the basis of evolutionary mismatch. History: The theory of evolutionary mismatch began under the term evolutionary trap as early as the 1940s. In his 1942 book, evolutionary biologist Ernst Mayr described evolutionary traps as the phenomenon that occurs when a genetically uniform population suited for a single set of environmental conditions is susceptible to extinction from sudden environmental changes. Since then, key scientists such as Warren J. Gross and Edward O. Wilson have studied and identified numerous examples of evolutionary traps. The first occurrence of the term "evolutionary mismatch" may have been in a paper by Jack E. Riggs published in the Journal of Clinical Epidemiology in 1993. In the years to follow, the term has become widely used to describe biological maladaptations in a wide range of disciplines.
A coalition of modern scientists and community organizers assembled to found the Evolution Institute in 2008, and in 2011 published a more recent culmination of information on evolutionary mismatch theory in an article by Elisabeth Lloyd, David Sloan Wilson, and Elliott Sober. In 2018, a popular science book on evolutionary mismatch and its implications for humans appeared, written by evolutionary psychologists. Mismatch in human evolution: The Neolithic Revolution: transitional context The Neolithic Revolution brought about significant evolutionary changes in humans, namely the transition from a hunter-gatherer lifestyle, in which humans foraged for food, to an agricultural lifestyle. This change occurred approximately 10,000–12,000 years ago. Humans began to domesticate both plants and animals, allowing for the maintenance of constant food resources. This transition quickly and dramatically changed the way that humans interact with the environment, with societies taking up the practices of farming and animal husbandry. However, human bodies had evolved to be adapted to the previous foraging lifestyle. The slow pace of evolution in comparison with the very fast pace of human advancement allowed these adaptations to persist in an environment where they are no longer necessary. In human societies that now function in a vastly different way from the hunter-gatherer lifestyle, these outdated adaptations lead to the presence of maladaptive, or mismatched, traits. Mismatch in human evolution: Obesity and diabetes Human bodies are predisposed to maintain homeostasis, especially when storing energy as fat. This trait serves as the main basis for the "thrifty gene hypothesis", the idea that "feast-or-famine conditions during human evolutionary development naturally selected for people whose bodies were efficient in their use of food calories". Hunter-gatherers, who lived under environmental stress, benefited from this trait; there was uncertainty about when the next meal would come, and they spent most of their time performing high levels of physical activity. Those who consumed many calories would store the extra energy as fat, which they could draw upon in times of hunger. However, modern humans live in a world of more sedentary lifestyles and convenience foods. People sit more throughout their days, whether in their cars during rush hour or in their cubicles during full-time jobs. Less physical activity in general means fewer calories burned throughout the day. Human diets have changed considerably over the 10,000 years since the advent of agriculture, with more processed foods that lack nutritional value and lead people to consume more sodium, sugar, and fat. These high-calorie, nutrient-deficient foods cause people to consume more calories than they burn. Fast food combined with decreased physical activity means that the "thrifty gene" that once benefited human predecessors now works against them, causing their bodies to store more fat and leading to higher levels of obesity in the population. Mismatch in human evolution: Obesity is one consequence of mismatched genes. Known as "metabolic syndrome", this condition is also associated with other health concerns, including insulin resistance, in which the body no longer responds to insulin secretion, so blood glucose levels cannot be lowered, which can lead to type 2 diabetes.
Mismatch in human evolution: Osteoporosis Another human disorder that can be explained by mismatch theory is the rise of osteoporosis in modern humans. In advanced societies, many people, especially women, are remarkably susceptible to osteoporosis during aging. Fossil evidence has suggested that this was not always the case, with bones from elderly hunter-gatherer women often showing no evidence of osteoporosis. Evolutionary biologists have posited that the increase in osteoporosis in modern Western populations is likely due to our considerably sedentary lifestyles. Women in hunter-gatherer societies were physically active both from a young age and well into their late-adult lives. This constant physical activity likely led to peak bone mass being considerably higher in hunter-gatherers than in modern-day humans. While the pattern of bone-mass degradation during aging is purportedly the same for both hunter-gatherers and modern humans, the higher peak bone mass associated with more physical activity may have allowed hunter-gatherers to avoid osteoporosis during aging. Mismatch in human evolution: Hygiene hypothesis The hygiene hypothesis, a concept initially theorized by immunologists and epidemiologists, has been shown in recent studies to have a strong connection with evolutionary mismatch. The hygiene hypothesis states that the profound increase in allergies, autoimmune diseases, and some other chronic inflammatory diseases is related to reduced exposure of the immune system to antigens. Such reduced exposure is more common in industrialized countries, and especially urban areas, where chronic inflammatory diseases are also more frequently seen. Recent analyses and studies have tied the hygiene hypothesis and evolutionary mismatch together. Some researchers suggest that the overly sterilized urban environment changes or depletes the composition and diversity of the microbiota. Such environmental conditions favor the development of chronic inflammatory diseases because human bodies were selected to be adapted to a pathogen-rich environment over their evolutionary history. For example, studies have shown that changes in our symbiont community can lead to disorders of immune homeostasis, which can be used to explain why antibiotic use in early childhood can result in higher asthma risk. Because the change or depletion of the microbiome is often associated with the hygiene hypothesis, the hypothesis is sometimes also called the "biome depletion theory". Mismatch in human evolution: Human behavior Behavioral examples of evolutionary mismatch theory include the abuse of dopaminergic pathways and the reward system. An action or behavior that stimulates the release of dopamine, a neurotransmitter known for generating a sense of pleasure, will likely be repeated, since the brain is programmed to continually seek such pleasure. In hunter-gatherer societies, this reward system was beneficial for survival and reproductive success. But now, when there are fewer challenges to survival and reproduction, certain activities in the present environment (gambling, drug use, eating) exploit this system, leading to addictive behaviors. Mismatch in human evolution: Anxiety Anxiety is another example of a modern manifestation of evolutionary mismatch in humans. An immediate-return environment is one in which decisions made in the present create immediate results.
Prehistoric human brains evolved to suit this particular environment, producing reactions such as anxiety to solve short-term problems. For example, fear of a stalking predator causes a human to run away, immediately ensuring safety as the distance from the predator increases. However, humans currently live in a different environment, called the delayed-return environment, in which current decisions do not create immediate results. The advancement of society has reduced the threat of external factors such as predators and lack of food or shelter; human problems that once centered on present survival have therefore shifted to how the present will affect the quality of future survival. In sum, traits like anxiety have become outdated as the advancement of society leaves humans no longer under constant threat, worrying instead about the future. Mismatch in human evolution: Work stress Examples of evolutionary mismatch also occur in the modern workplace. Unlike our hunter-gatherer ancestors who lived in small egalitarian societies, the modern workplace is large, complex, and hierarchical. Humans spend significant amounts of time interacting with strangers in conditions that are very different from those of our ancestral past. Hunter-gatherers do not separate work from their private lives; they have no bosses to be accountable to and no deadlines to adhere to. Our stress system reacts to immediate threats and opportunities. The modern workplace exploits evolved psychological mechanisms that are aimed at immediate survival or longer-term reproduction. These basic instincts misfire in the modern workplace, causing conflicts at work, burnout, job alienation, and poor management practices. Mismatch in human evolution: Gambling There are two aspects of gambling that make it an addictive activity: chance and risk. Chance gives gambling its novelty. Back when humans had to forage and hunt for food, novelty-seeking was advantageous for them, particularly in their diet. However, with the development of casinos, this trait of pursuing novelty has become disadvantageous. Risk assessment, the other behavioral trait applicable to gambling, was also beneficial to hunter-gatherers in the face of danger. However, the types of risks hunter-gatherers had to assess were significantly different and more life-threatening than the risks people now face. The attraction to gambling stems from the attraction to risk-and-reward-related activity. Mismatch in human evolution: Drug addiction Herbivores have created selective pressure for plants to possess specific molecules that deter plant consumption, such as nicotine, morphine, and cocaine. Plant-based drugs, however, have reinforcing and rewarding effects on the human neurological system, suggesting a "paradox of drug reward" in humans. Human behavioral evolutionary mismatch explains the contradiction between plant evolution and human drug use. Over the last 10,000 years, humans found the dopaminergic system, or reward system, particularly useful in optimizing Darwinian fitness. While drug use has been a common characteristic of past human populations, drug use involving potent substances and diverse intake methods is a relatively contemporary feature of society. Human ancestors lived in an environment that lacked drug use of this nature, so the reward system was primarily used in maximizing survival and reproductive success.
In contrast, present-day humans live in a world where the current nature of drugs renders the reward system maladaptive. This class of drugs falsely triggers a fitness benefit in the reward system, leaving people susceptible to drug addiction. The modern dopaminergic system is vulnerable to the changed accessibility and social perception of drugs. Mismatch in human evolution: Eating In the era of foraging for food, hunter-gatherers rarely knew where their next meal would come from. This food scarcity rewarded consumption of high-energy meals in order to store excess energy as fat. Now that food is readily available, the neurological system that once helped people recognize the survival advantages of essential eating has become disadvantageous, as it promotes overeating. This has become especially dangerous since the rise of processed foods, as the popularity of foods with unnaturally high levels of sugar and fat has significantly increased. Non-human examples: Evolutionary mismatch can occur any time an organism is exposed to an environment that does not resemble the typical environment in which the organism adapted. Due to human influences such as global warming and habitat destruction, the environment is changing very rapidly for many organisms, leading to numerous cases of evolutionary mismatch. Non-human examples: Examples with human influence Sea turtles and light pollution Female sea turtles create nests to lay their eggs by digging a pit on the beach, typically between the high-tide line and the dune, using their rear flippers. Within the first seven days of hatching, hatchling sea turtles must make the journey from the nest back into the ocean. This trip occurs predominantly at night in order to avoid predators and overheating. Non-human examples: In order to orient themselves towards the ocean, the hatchlings depend on their eyes to turn towards the brightest direction. This is because the open horizon of the ocean, illuminated by celestial light, tends to be much brighter than the dunes and vegetation on a natural, undeveloped beach. Studies propose two mechanisms of the eye for this phenomenon. In what is referred to as the "raster system", the theory is that sea turtles' eyes contain numerous light sensors which take in the overall brightness information of a general area and "measure" where the light is most intense. If the light sensors detect the most intense light on a hatchling's left side, the sea turtle turns left. A similar proposal, the complex phototropotaxis system, theorizes that the eyes contain light-intensity comparators that take in detailed information about the intensity of light from all directions. Sea turtles are able to "know" that they are facing the brightest direction when the light intensity is balanced between both eyes. This method of finding the ocean is successful on natural beaches, but on developed beaches, the intense artificial lights from buildings, lighthouses, and even abandoned fires overwhelm the sea turtles and cause them to head towards the artificial light instead of the ocean. Scientists call this misorientation. Sea turtles can also become disoriented and circle around in the same place. Numerous cases show that misoriented hatchling sea turtles either die from dehydration, are consumed by a predator, or even burn to death in an abandoned fire. The direct impact of light pollution on sea turtle numbers has been too difficult to measure.
However, this problem is exacerbated because all species of sea turtles are endangered. Other animals, including migratory birds and insects, are also victims of light pollution because they, too, depend on light intensity at night to orient themselves properly. Non-human examples: Dodo bird and hunting The dodo lived on the remote island of Mauritius in the absence of predators. There, the dodo evolved to lose its instinct for fear and its ability to fly, which allowed it to be easily hunted by the Dutch sailors who arrived on the island in the late 16th century. The Dutch sailors also brought foreign animals to the island, such as monkeys and pigs, that ate the dodo's eggs, which was detrimental to the population growth of the slow-breeding bird. The dodos' fearlessness made them easy targets, and their inability to fly gave them no opportunity to evade danger. Thus, they were easily driven to extinction within a century of their discovery. Non-human examples: The dodo's inability to fly was once beneficial for the bird because it conserved energy. The dodo conserved more energy than birds with the ability to fly, owing to its smaller pectoral muscles: smaller muscles are linked to lower rates of maintenance metabolism, which in turn conserves energy. Lacking an instinct for fear was another mechanism through which the dodo conserved energy, because it never had to expend energy on a stress response. Both mechanisms of energy conservation were once advantageous because they enabled the dodo to carry out its activities with minimal energy expenditure, but they proved disadvantageous when the island was invaded, rendering the birds defenseless against the new dangers that humans brought. Non-human examples: Peppered moths during the English Industrial Revolution Before the English Industrial Revolution of the late 18th and early 19th centuries, the most common phenotypic color of the peppered moth was white with black speckles. When higher air pollution in urban regions killed the lichens adhering to trees and exposed their darker bark, the light-colored moths stood out more to predators. Natural selection began favoring a previously rare darker variety of the peppered moth referred to as "carbonaria", because the lighter phenotype had become mismatched to its environment. Non-human examples: Carbonaria frequencies rose above 90% in some areas of England until efforts in the late 1900s to reduce air pollution caused a resurgence of epiphytes, including lichens, which again lightened the color of trees. Under these conditions the coloring of the carbonaria reverted from an advantage to a disadvantage, and that phenotype became mismatched to its environment. Non-human examples: Giant jewel beetle and beer bottles Evolutionary mismatch can also be seen among insects, as in the case of the giant jewel beetle (Julodimorpha bakewelli). The male jewel beetle has evolved to be attracted to the features, including size, color, and texture, that allow it to identify a female jewel beetle as it flies across the desert. However, these physical traits are also manifested in some beer bottles; as a result, males often find beer bottles more attractive than female jewel beetles, owing to the bottles' large size and attractive coloring.
Beer bottles are often discarded by humans in the Australian desert in which the jewel beetle thrives, creating an environment where male jewel beetles prefer to mate with beer bottles instead of females. This situation is extremely disadvantageous, as fewer beetles mate and the reproductive output of the species falls. It can be considered an evolutionary mismatch, since a habit that evolved to aid reproduction has become disadvantageous due to an anthropogenic cause, the littering of beer bottles. Non-human examples: Examples without human influence Information cascades between birds Normally, gaining information from watching other organisms allows the observer to make good decisions without spending effort. More specifically, birds often observe the behavior of other organisms to gain valuable information, such as the presence of predators, good breeding sites, and optimal feeding spots. Although this allows the observer to spend less effort gathering information, it can also lead to bad decisions if the information gained from observing is unreliable. In the case of nutmeg mannikins, an observer can minimize the time spent looking for an optimal feeder and maximize its feeding time by watching where other nutmeg mannikins feed. However, this relies on the assumption that the observed mannikins also had reliable information indicating that the feeding spot was a good one. This behavior can become maladaptive when prioritizing information gained from watching others leads to information cascades, in which birds follow the rest of the crowd even though prior experience suggests that the crowd's decision is a poor one. For instance, if a nutmeg mannikin sees enough mannikins feeding at a feeder, it has been shown to choose that feeder even if its personal experience indicates that the feeder is a poor one. Non-human examples: House finches and the introduction of the MG disease Evolutionary mismatch occurs in house finches when they are exposed to infectious individuals. Male house finches tend to feed in close proximity to other finches that are sick or diseased, because sick individuals are less competitive than usual, making a healthy male more likely to win any aggressive interaction that occurs. To reduce the chance of losing a social confrontation, healthy finches are inclined to forage near individuals that are lethargic or listless due to disease. However, this disposition has created an evolutionary trap for the finches since the introduction of the infectious MG disease in 1994: healthy finches risk contracting the disease when they are in the vicinity of individuals that have developed it. The relatively short time since the disease's introduction has left the finches unable to adapt quickly enough to avoid approaching sick individuals, which ultimately results in the mismatch between their behavior and the changed environment. Non-human examples: Exploitation of earthworms' reaction to vibrations Worm charming is a practice used by people to attract earthworms out of the ground by driving in a wooden stake to vibrate the soil. This activity is commonly performed to collect fishing bait and as a competitive sport. Worms that sense the vibrations rise to the surface. Research shows that humans are actually taking advantage of a trait that worms adapted to avoid hungry burrowing moles, which prey on the worms.
This type of evolutionary trap, in which an originally beneficial trait is exploited in order to catch prey, was dubbed the "rare enemy effect" by Richard Dawkins, an English evolutionary biologist. This trait of worms has been exploited not only by humans but also by other animals: herring gulls and wood turtles have been observed stamping on the ground to drive worms up to the surface and consume them.
**PMEG (antiviral)** PMEG (antiviral): PMEG (9-[2-(phosphonomethoxy)ethyl]guanine) is an acyclic nucleoside phosphonate. Acyclic nucleoside phosphonates can have significant antiviral, cytostatic and antiproliferative activities. PMEG can inhibit cell proliferation and cause genotoxicity. PMEG is active against leukemia and melanoma in animal tumor models, and also has antiviral activity against herpes viruses in murine models. Successful application of PMEG and PMEG-derived analogs may depend on the development of analogs with reduced toxicity and enhanced pharmacokinetic delivery to tissues. There are no clinical trials using PMEG listed at clinicaltrials.gov, which suggests that PMEG itself was too toxic to progress further. Several PMEG-derived analogs are currently being investigated: the prodrugs GS-9191 and GS-9219 are two of the next-generation PMEG compounds being evaluated for antiviral and anticancer activity. Both have made it into clinical trials, but require additional study. Biology: Acyclic nucleoside phosphonate prodrugs require further phosphorylation in the cell in order to become the active metabolite. Once PMEG is phosphorylated into its triphosphate form, host or viral DNA polymerases can use it as a substrate during DNA synthesis. Since it is an acyclic nucleoside lacking a 3'-OH moiety, no further extension of the DNA strand occurs; the agent thus acts as a classical DNA chain terminator. PMEG has often been cited for its antiviral activities.
**Bovine podiatry** Bovine podiatry: Bovine podiatry is a branch of veterinary medicine concerned with the diagnosis and treatment of the defects of a bovine hoof.
**Diagonal matrix** Diagonal matrix: In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is $\begin{bmatrix}3&0\\0&2\end{bmatrix}$, while an example of a 3×3 diagonal matrix is $\begin{bmatrix}6&0&0\\0&0&0\\0&0&0\end{bmatrix}$. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix. Diagonal matrix: A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values. Definition: As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix $D = (d_{i,j})$ with n columns and n rows is diagonal if $d_{i,j} = 0$ whenever $i \neq j$. However, the main diagonal entries are unrestricted. Definition: The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form $d_{i,i}$ being zero. For example: $\begin{bmatrix}1&0&0\\0&4&0\\0&0&-3\\0&0&0\end{bmatrix}$ or $\begin{bmatrix}1&0&0&0&0\\0&4&0&0&0\\0&0&-3&0&0\end{bmatrix}$. More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as square diagonal matrices. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix. Definition: If the entries of a square diagonal matrix are real numbers or complex numbers, then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices". Vector-to-matrix diag operator: A diagonal matrix D can be constructed from a vector $\mathbf{a} = [a_1 \cdots a_n]^{\mathsf T}$ using the diag operator: $D = \operatorname{diag}(a_1, \dots, a_n)$. This may be written more compactly as $D = \operatorname{diag}(\mathbf{a})$. The same operator is also used to represent block diagonal matrices as $\operatorname{diag}(A_1, \dots, A_n)$, where each argument $A_i$ is a matrix. The diag operator may be written as $\operatorname{diag}(\mathbf{a}) = (\mathbf{a}\mathbf{1}^{\mathsf T}) \circ I$, where $\circ$ represents the Hadamard product and $\mathbf{1}$ is a constant vector with elements 1. Matrix-to-vector diag operator: The inverse matrix-to-vector diag operator is sometimes denoted by the identically named $\operatorname{diag}(D) = [a_1 \cdots a_n]^{\mathsf T}$, where the argument is now a matrix and the result is a vector of its diagonal entries. The following property holds: $\operatorname{diag}(\operatorname{diag}(\mathbf{a})) = \mathbf{a}$. Scalar matrix: A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λ of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form $\begin{bmatrix}\lambda&0&0\\0&\lambda&0\\0&0&\lambda\end{bmatrix}$. The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct commutes only with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix $D = \operatorname{diag}(a_1, \dots, a_n)$ has $a_i \neq a_j$, then given a matrix M with $m_{ij} \neq 0$, the (i,j) terms of the products are $(DM)_{ij} = a_i m_{ij}$ and $(MD)_{ij} = m_{ij} a_j$, and $a_i m_{ij} \neq m_{ij} a_j$ (since one can divide by $m_{ij}$), so they do not commute unless the off-diagonal terms are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices. For an abstract vector space V (rather than the concrete vector space $K^n$), the analog of scalar matrices are scalar transformations.
This is true more generally for a module M over a ring R, with the endomorphism algebra End(M) (the algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map $R \to \operatorname{End}(M)$ (from a scalar λ to its corresponding scalar transformation, multiplication by λ), exhibiting End(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, the invertible transforms are the center of the general linear group GL(V). The former is more generally true of free modules $M \cong R^n$, for which the endomorphism algebra is isomorphic to a matrix algebra. Vector operations: Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix $D = \operatorname{diag}(a_1, \dots, a_n)$ and a vector $\mathbf{v} = [x_1 \cdots x_n]^{\mathsf T}$, the product is $D\mathbf{v} = [a_1 x_1 \cdots a_n x_n]^{\mathsf T}$. This can be expressed more compactly by using a vector instead of a diagonal matrix, $\mathbf{d} = [a_1 \cdots a_n]^{\mathsf T}$, and taking the Hadamard product of the vectors (entrywise product), denoted $\mathbf{d} \circ \mathbf{v}$. This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF, since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly. Matrix operations: The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write diag(a1, ..., an) for a diagonal matrix whose diagonal entries starting in the upper left corner are a1, ..., an. Then, for addition, we have diag(a1, ..., an) + diag(b1, ..., bn) = diag(a1 + b1, ..., an + bn), and for matrix multiplication, diag(a1, ..., an) diag(b1, ..., bn) = diag(a1b1, ..., anbn). The diagonal matrix diag(a1, ..., an) is invertible if and only if the entries a1, ..., an are all nonzero, in which case diag(a1, ..., an)−1 = diag(a1−1, ..., an−1). In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices. Matrix operations: Multiplying an n-by-n matrix A from the left with diag(a1, ..., an) amounts to multiplying the i-th row of A by ai for all i; multiplying the matrix A from the right with diag(a1, ..., an) amounts to multiplying the i-th column of A by ai for all i. These identities are illustrated in the short sketch below.
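The identities above are easy to check numerically; here is a short sketch using Python with NumPy, where np.diag serves as both the vector-to-matrix and the matrix-to-vector diag operator:

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])
b = np.array([5.0, 4.0, 0.5])
v = np.array([1.0, 2.0, 3.0])

D = np.diag(a)                      # vector-to-matrix diag operator
assert np.allclose(np.diag(D), a)   # matrix-to-vector diag recovers a

# Multiplying by a diagonal matrix scales each entry: D v equals the
# Hadamard (entrywise) product of a and v, with no zero terms stored.
assert np.allclose(D @ v, a * v)

# Addition and multiplication act entrywise on the diagonals.
assert np.allclose(np.diag(a) + np.diag(b), np.diag(a + b))
assert np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b))

# diag(a) is invertible iff all entries are nonzero; its inverse is diag(1/a).
assert np.allclose(np.linalg.inv(D), np.diag(1.0 / a))
```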
Operator matrix in eigenbasis: As explained in determining coefficients of operator matrix, there is a special basis, e1, ..., en, for which the matrix A takes the diagonal form. Hence, in the defining equation $\mathbf{A}\mathbf{e}_j = \sum_i a_{i,j}\mathbf{e}_i$, all coefficients $a_{i,j}$ with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, $a_{i,i}$, are known as eigenvalues and designated $\lambda_i$ in the equation, which reduces to $\mathbf{A}\mathbf{e}_i = \lambda_i\mathbf{e}_i$. The resulting equation is known as the eigenvalue equation and is used to derive the characteristic polynomial and, further, the eigenvalues and eigenvectors. Operator matrix in eigenbasis: In other words, the eigenvalues of diag(λ1, ..., λn) are λ1, ..., λn with associated eigenvectors e1, ..., en. Properties: The determinant of diag(a1, ..., an) is the product a1⋯an. The adjugate of a diagonal matrix is again diagonal. Where all matrices are square, a matrix is diagonal if and only if it is triangular and normal, and a matrix is diagonal if and only if it is both upper- and lower-triangular. A diagonal matrix is symmetric. The identity matrix In and the zero matrix are diagonal. A 1×1 matrix is always diagonal. Applications: Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operations and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix. In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is a matrix X such that X−1AX is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable. Applications: Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if AA∗ = A∗A then there exists a unitary matrix U such that UAU∗ is diagonal). Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U∗AV is diagonal with positive entries. Operator theory: In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique for understanding operators is a change of coordinates (in the language of operators, an integral transform) which changes the basis to an eigenbasis of eigenfunctions, making the equation separable. An important example of this is the Fourier transform, which diagonalizes constant-coefficient differentiation operators (or more generally translation-invariant operators), such as the Laplacian operator, say, in the heat equation. Operator theory: Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.
**Bare-metal server** Bare-metal server: In computer networking, a bare-metal server is a physical computer server that is used by one consumer, or tenant, only. Each server offered for rental is a distinct physical piece of hardware that is a functional server on its own. They are not virtual servers running on shared hardware; the term distinguishes them from servers that host multiple tenants through virtualisation and cloud hosting. Unlike bare-metal servers, cloud servers are shared between multiple tenants. Each bare-metal server may run any amount of work for a user, or have multiple simultaneous users, but it is dedicated entirely to the entity renting it. Bare-metal advocacy: Hypervisors provide some isolation between tenants, but there can still be a noisy-neighbour effect. If a physical server is multi-tenanted, peaks of load from one tenant may consume enough machine resources to temporarily affect other tenants. As the tenants are otherwise isolated, this is also hard to manage or load-balance. In addition, hypervisors provide weaker isolation and are much riskier from a security point of view compared with using separate machines. Attackers have repeatedly found vulnerabilities in isolation software (such as hypervisors), covert channels are impractical to counter without physically separate machines, and shared hardware is vulnerable to defects in hardware protection mechanisms such as Rowhammer, Spectre, and Meltdown. As server costs once again drop as a proportion of total cost of ownership relative to administration overhead, the classic solution of 'throwing hardware at the problem' becomes viable again. Bare-metal cloud hosting: Bare-metal cloud servers do not run a hypervisor and are not virtualised, but can still be delivered via a cloud-like service model. Infrastructure as a service, particularly through infrastructure as code, offers many advantages that make hosting conveniently manageable. Combining the features of cloud hosting and bare-metal servers offers most of these, whilst retaining the performance advantages. These cloud offerings are also called Bare-Metal-as-a-Service (BMaaS). Some bare-metal cloud servers may run a hypervisor or containers, e.g., to simplify maintenance or provide additional layers of isolation. Note that the distinction between these services and traditional dedicated-server offerings is the user's ability to provision infrastructures composed of multiple servers and a complex network and storage setup, rather than servers in isolation. Bare-metal cloud software: Both commercial and open-source platforms exist that enable companies to build their own private bare-metal clouds. Bare-metal cloud software: BMaaS software typically takes over the lifecycle management of the equipment in a datacenter (compute, storage, network switches, firewalls, load balancers and others). It enables datacenter operators to offload much of the manual work typically associated with deploying hardware. It also reduces waste by simplifying reuse, and increases security by implementing automatic cleanup and automatic segmentation between tenants at the network level. Increasingly, BMaaS software is used internally to reduce the costs associated with lifecycle management of equipment for enterprises with large fleets of servers. BMaaS software aims to simplify hardware management and enable its as-a-service consumption.
It handles primarily the layer below a hyper-converged or container-based solution, and often collaborates with the layers above through integrations such as the Kubernetes cluster autoscaler. Comparison with composable disaggregated infrastructure: BMaaS software has a similar objective to composable disaggregated infrastructure in that it aims to offer the user the ability to "compose" the desired compute unit, defined as a set of resources (such as compute or storage). The distinction is that the storage and compute need not be "disaggregated" (accessed from outside the server unit), as this often requires specialized hardware. Instead, the same result is achieved with off-the-shelf hardware by selecting a server that matches the desired characteristics (RAM, CPU cores, local disk capacity, GPU, FPGA, SmartNICs) from a pool of servers and reconfiguring the network so that the server joins the others that a tenant has deployed. Comparison with composable disaggregated infrastructure: Note that in some implementations the storage component is external to the systems, using iSCSI, blurring the lines between BMaaS and composable infrastructure. This allows the user to choose the size and performance of a node's storage in a manner similar to classical virtualized infrastructure-as-a-service offerings, with the advantage of lower variability ("snowflaking") in the hardware pool and the possibility of faster migration from one piece of equipment to another in the event of hardware failure. A minimal sketch of the pool-matching idea follows.
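As an illustration of that selection step, here is a minimal Python sketch of best-fit matching against a pool of off-the-shelf servers. The Server fields and the heuristic are hypothetical simplifications, not the behavior of any particular BMaaS product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Server:
    name: str
    ram_gb: int
    cpu_cores: int
    disk_tb: float
    gpu: bool = False
    allocated: bool = False

def match_server(pool: list[Server], ram_gb: int, cpu_cores: int,
                 disk_tb: float, gpu: bool = False) -> Optional[Server]:
    """Best-fit heuristic: pick the smallest free server that satisfies
    the request, to minimise stranded capacity."""
    candidates = [s for s in pool
                  if not s.allocated
                  and s.ram_gb >= ram_gb
                  and s.cpu_cores >= cpu_cores
                  and s.disk_tb >= disk_tb
                  and (s.gpu or not gpu)]
    if not candidates:
        return None  # nothing fits; composable hardware could assemble one instead
    best = min(candidates, key=lambda s: (s.ram_gb, s.cpu_cores, s.disk_tb))
    best.allocated = True
    # A real BMaaS stack would now reconfigure the network so the server
    # joins the tenant's existing segment.
    return best

pool = [Server("r1", 64, 16, 2.0), Server("r2", 256, 64, 8.0, gpu=True)]
print(match_server(pool, ram_gb=32, cpu_cores=8, disk_tb=1.0))  # picks r1
```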
The load pattern of multiple users has long been recognised as being smoother overall than that of individual users, so these virtual machines could make more efficient use of the physical hardware and its costs, whilst also appearing to have higher individual performance than a simple cost-share would suggest. History: One of the forefathers of bare-metal provisioning is Cobbler, which appeared in the 1990s and used the Preboot Execution Environment (PXE) protocol. Since then, various cloud providers have built their own in-house stacks to offer variants of dedicated servers or bare-metal cloud offerings. In April 2015, the OpenStack Ironic component was launched as part of the Kilo release. History: In March 2020, Equinix acquired bare-metal cloud provider Packet for $335 million. In May 2020, Packet released part of its stack as Tinkerbell, and in June 2020, MetalSoft was launched to commercialize the stack behind Bigstep Cloud. Examples of BMaaS software: Examples of BMaaS software, both open-source and commercial: OpenStack Ironic (open source), Canonical MaaS (open source), MetalSoft (commercial), RackN DigitalRebar (commercial), Tinkerbell (open source), xCAT (open source), RackHD (open source), Cobbler (open source), Foreman (open source), Puppet Labs Razor (commercial). Companies offering BMaaS products: Equinix Metal (formerly Packet), Lumen, OVHCloud, Internap, Bigstep
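The pool-matching step described under the comparison with composable disaggregated infrastructure can be sketched in a few lines of Python. This is a minimal illustration, not the API of any particular BMaaS product: the Server fields, the best-fit selection policy, and the tenant tag standing in for network reconfiguration are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Server:
    name: str
    ram_gb: int
    cpu_cores: int
    disk_gb: int
    gpus: int = 0
    tenant: Optional[str] = None  # None means the server is in the free pool

def provision(pool: list[Server], tenant: str,
              ram_gb: int, cpu_cores: int, disk_gb: int, gpus: int = 0) -> Optional[Server]:
    """Pick a free server that satisfies the request, then 'join' it to the
    tenant's network segment (modelled here as a simple tag)."""
    candidates = [s for s in pool
                  if s.tenant is None
                  and s.ram_gb >= ram_gb
                  and s.cpu_cores >= cpu_cores
                  and s.disk_gb >= disk_gb
                  and s.gpus >= gpus]
    if not candidates:
        return None  # no matching hardware; a real system might queue or fail
    # Prefer the closest fit, to reduce stranded capacity in the pool.
    best = min(candidates, key=lambda s: (s.ram_gb, s.cpu_cores, s.disk_gb, s.gpus))
    best.tenant = tenant  # stands in for VLAN/switch reconfiguration
    return best

pool = [Server("r1", 64, 16, 960), Server("r2", 256, 64, 3840, gpus=2)]
print(provision(pool, "tenant-a", ram_gb=32, cpu_cores=8, disk_gb=500))  # picks r1
```

A real controller would also handle image provisioning, firmware, and cleanup on release; the sketch only shows why keeping a varied pool of off-the-shelf servers lets a scheduler approximate "composing" a machine without disaggregated hardware.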
**1929–1930 psittacosis pandemic** 1929–1930 psittacosis pandemic: The 1929–1930 psittacosis pandemic, also known as the psittacosis outbreak of 1929–1930 and the great parrot fever pandemic, was a series of simultaneous outbreaks of psittacosis (parrot fever) which, accelerated by the breeding and transportation of birds in crowded containers for the purpose of trade, was initially seen to have its origin in parrots from South America. It was soon found to have spread to humans from several species of birds in several countries worldwide between mid-1929 and early 1930. Diagnosed by its clinical features and its link to birds, it affected around 750 to 800 people globally, with a mortality of 15%. Its mode of transmission to humans, by mouth-to-beak contact or by inhaling dried bird secretions and droppings, was not known at the time. The cause, Chlamydia psittaci, which usually remains dormant in birds until activated by the stress of capture and confinement, was discovered after the pandemic. 1929–1930 psittacosis pandemic: Cases of psittacosis were reported in mid-1929, in Birmingham, United Kingdom, and linked to parrots from Buenos Aires, Argentina, where an ongoing outbreak of the disease had led to bird owners being cautioned to declare their sick parrots. The origin of the outbreak in the Argentine city of Córdoba was traced to an import of 5,000 parrots from Brazil. Although the Argentine parrot trade was stopped, a number of birds were illegally sold on to visitors at its seaports, with the consequence that psittacosis was transmitted to several countries. 1929–1930 psittacosis pandemic: In November 1929, reports of cases among an Argentine theatrical group in Córdoba made it into the local press. In January 1930, when cases of an atypical pneumonia in one family, together with the death of their parrot, appeared in Maryland, United States, a link was made to the story of the theatrical group, and "parrot fever" made headlines in the American press. Following further cases, bans on the parrot trade were implemented, and subsequently cases were reported in several other countries, including Germany, France and Australia. The origin was understood to have been the importation of green Amazon parrots from South America. Later, the principal source of the disease in the U.S. was found to be domestic lovebirds raised in Californian aviaries. 1929–1930 psittacosis pandemic: The impact of the outbreak on the U.S. Hygienic Laboratory, with 16 of its workers affected, including two deaths, led to the formation of the National Institute of Health. Background: In 1880, physician Jakob Ritter described a cluster of seven people with atypical pneumonia connected to his brother's household in Uster, Switzerland. The outbreak was at the time not linked to the sick exotic birds: 12 finches and parrots confined in the study of the house. Three of the seven affected people died, including Ritter's brother and the metal-worker who had visited the home to fix the bird cage. Ritter detailed the natural history of the disease and, noting its similar features to typhoid and typhus, called the disease "pneumotyphus" and proposed that the birds might be the vectors. Subsequently, further similar outbreaks with a coincidence of exposure to birds appeared in other parts of Europe, including Paris in the 1890s, where the disease killed one in three affected people. The outbreaks ended following bans on bird trading. Subsequently, greater efforts were made to find the cause of the disease, but without success.
The disease in birds was named psittacosis in 1895 by Antonin Morange. Prior to the 1929 outbreak of psittacosis in the United States, the last known cases were in 1917, found in captive birds in the basement of a department store in Pennsylvania. The causative pathogen, C. psittaci, was not discovered until the 1960s. Origin and global spread: There were multiple origins, involving several countries and several species of birds. Affected people typically experienced headache, poor sleep, fatigue and a cough following several days of fever. Some subsequently became delirious and semi-conscious, after which some died, while others recovered with a prolonged convalescent period. Initial outbreaks were linked to exotic birds from South America. The source of the Córdoba outbreak was traced to an import of 5,000 parrots from Brazil. The birds had been confined in unsanitary and crowded containers. Although the Argentine parrot trade was stopped, a number of birds were sold on to visitors to its seaports, and psittacosis, also known as parrot fever, was transmitted to several countries. Its mode of transmission to humans, by mouth-to-beak contact or by inhaling dried bird secretions and droppings, was not known at the time. Germany, the United Kingdom and the United States were the most severely affected, with more than 100 cases each. Implicated birds included green Amazon parrots, canaries, lovebirds and shell parakeets. By early 1930, the disease was reported in humans in several countries around the world, its spread accelerated by the popular hobby of domestic bird-keeping at the time. Many cases and clusters had links with sick parrots. Around 750 to 800 people were affected. The average mortality was 15%, with a total of more than 100 deaths. The majority of cases in the U.S. were found in 1931 to be linked to endemic psittacosis in California, associated with the increasingly popular trade of breeding lovebirds for sale, chiefly to housewives and widows. Africa: Algiers In Algiers, four deaths were attributed to the disease in the week ending 8 February 1930. The following week, three further cases were reported. Europe: When cases appeared in Amsterdam, the Netherlands Health Department asked that steamships calling at South American seaports refuse to take parrots on board. Europe: Germany Cases in Germany were reported, with some uncertainty, from July 1929, in Berlin, Hamburg, Liegnitz, Munich, Glauchau and Döbeln. The outbreak resulted in a ban on the importation of parrots. By the end of the pandemic in early 1930, Germany had the largest number of cases, with 215 affected, of whom 45 died. Parrot-owners were found abandoning their birds at the Berlin zoo, and in response the zoo closed its gates. Of 35 parakeets involved in the German cases, 30 had no disease. Europe: United Kingdom Cases were reported in Birmingham, United Kingdom, in mid-1929. In December 1929, a ship's carpenter attended the London Hospital with a typhoid-like illness. He had previously purchased two parrots from Buenos Aires, which had died en route to London. By March 1930, 100 suspected cases had been reported across the UK. One case was linked to a visit to a pub, where a sick parrot had been noted. Research into the cause was commenced by Samuel Bedson at the London Hospital. In the UK, the Parrots (Prohibition of Import) Regulations, 1930 were introduced following consideration by the permanent committee of the Office international d'hygiène publique. They prohibited the trade of parrots except for research.
North America: In 1929, around 500,000 canaries and nearly 50,000 parrots were imported to the United States from Brazil, Argentina, Colombia, Cuba, Trinidad, Salvador, Mexico and Japan. Most birds entered the U.S. via New York, except budgerigars, which entered via San Francisco and Los Angeles. North America: Early January 1930 In early January 1930, an outbreak of "mysterious pneumonias" in the United States came to media attention when cases in three members of one family were traced to parrots imported from South America the previous Christmas. Ten days before Christmas, Simon Martin, secretary of the Chamber of Commerce in Annapolis, Maryland, had bought a parrot in Baltimore for his wife, who subsequently, along with their daughter and son-in-law, became seriously ill. The new parrot's feathers had become dirty and ruffled by Christmas Eve, and on Christmas Day it died. The wife of the family physician made a link to a newspaper article about "parrot fever" in Buenos Aires. In consequence, Martin's physician sent a telegram to the United States Public Health Service (PHS) in Washington DC, requesting advice on parrot fever. The story came to the attention of Surgeon General Hugh S. Cumming, who received similar messages from Baltimore, New York, Ohio and California. The task of finding the cause of parrot fever was assigned to George W. McCoy, the director of the PHS's Hygienic Laboratory and a renowned bacteriologist who had discovered tularaemia, and to his deputy, Charlie Armstrong, neither of whom had ever heard of parrot fever. On 8 January 1930, The Washington Post reported that "parrot disease baffles experts", under headlines such as "Parrot Fever Hits Trio at Annapolis". On the same day, the outbreak made headlines in the Los Angeles Times with "two women and man in Annapolis believed to have 'parrot fever'". On 11 January, the same paper reported "Parrot Disease Fatal to Seven", and the Chicago Daily Tribune put "Baltimore woman dies" on its front page. By 15 January, 50 cases had been reported nationwide. The following day, the Baltimore Sun announced that "Woman's Case Brings Parrot Victims to 19". During this time, one of the first deaths was particularly alarming: the victim was a woman in Toledo, Ohio, who had been given three Cuban parrots by her husband. Cumming warned people to stay away from imported parrots, whilst one U.S. Navy admiral ordered sailors at sea to throw their parrots overboard. Some owners were encouraged by one health commissioner to kill their pet parrots, and some abandoned them on the streets. Reports soon followed from Baltimore, New York City and Los Angeles, involving other birds such as shell parakeets (Australian budgerigars). The director of the Bureau of Communicable Diseases, Daniel S. Hatfield, ordered the confiscation of all birds at Baltimore pet stores. North America: Late January 1930 Six major pet dealers in the U.S. stood to lose $5 million per year as a result of an executive order issued by President Herbert Hoover on 24 January prohibiting "the immediate importation of parrots into the United States, its possessions and dependencies from any foreign port", until research could find the cause and mode of transmission. This decision followed Armstrong's initial research, which showed that healthy parrots could be infected by sick ones and that some could become asymptomatic carriers. The following day, Armstrong's assistant, Henry "Shorty" Anderson, became ill.
North America: February 1930 Two of the 16 people who developed the illness from exposure at the Hygienic Laboratory died, including, on 8 February, Anderson. The following day, bacteriologist William Royal Stokes died, only weeks after commencing research on the parrot-dropping samples given to him by Armstrong. By this time, Armstrong was ill himself, but he survived. They had failed to isolate the causative infectious agent, and McCoy was subsequently forced to kill the birds and fumigate the Hygienic Laboratory. North America: Later 43 of the 74 foci in the U.S. were traced to contact with Amazon parrots. Links were traced to Japan, the Caribbean, Germany, Central America and South America. Between November 1929 and May 1930, the U.S. recorded 169 cases, of which 33 died. New York was the centre of the East Coast bird trade; however, the principal ports of entry for Australian budgerigars were San Francisco and Los Angeles. Later, it was discovered that the main source was domestic lovebirds raised in hundreds of independent Californian aviaries by breeders who were supplementing their incomes following the recent Wall Street Crash. The winter of 1929 also saw an influenza epidemic and fears of a recurrence of Spanish flu, which added to the general alarm. In this context, peddlers travelled door-to-door selling "lovebirds" to housewives and widows. As a result, most victims in the U.S. were women. The connections between the various outbreaks might not have become apparent had it not been for the press. Likewise, the "hysteria" and heightened public concern surrounding the pandemic might not have occurred had it not been for headlines such as "Killed By A Pet Parrot." The establishment of the National Institutes of Health is directly linked to the outbreak that occurred in Maryland. Its story was retold in Paul de Kruif's Men Against Death (1933). South America: The first reports of the disease were recorded in July 1929, in Córdoba, Argentina. During the summer and autumn of 1929, Córdoba and Tucumán in Argentina reported over 100 cases of a severe atypical pneumonia linked to a large shipment of birds from Brazil. One of the outbreaks occurred among an Argentine theatrical group in October 1929, after they had purchased an Amazon parrot in Buenos Aires. Two of the actors died from the illness. Florencio Parravicini, the main male actor, contracted the disease and, according to the Hearst press, recovered after suffering significantly for 17 days. Cases in Argentina followed a number of auctions that took place in several cities, with owners selling sick birds as quickly as possible. In response, the Argentine parrot trade was stopped, and pet owners were cautioned by the authorities to look out for sick birds and report them. However, dishonest traders continued to sell sick birds to visitors to its seaports. The cases were reported in an Argentinian journal in November 1929 and later picked up by the sensationalist American press. Countries affected: There were no reported cases in Brazil. The disease was reported in countries including Algeria, Argentina, Australia, France, Germany, the Netherlands, the United Kingdom and the United States. Birds involved: Meyer later demonstrated that psittacosis could be transmitted by around 50 species of birds. Birds implicated in the 1929–30 pandemic included: Amazon parrots (Amazona species), canaries (Serinus canaria), lovebirds (Agapornis species), shell parakeets (Australian budgerigars, Melopsittacus undulatus), talking parrots, grey parrots (Psittacus erithacus) and thrushes.
**Matroid minor** Matroid minor: In the mathematical theory of matroids, a minor of a matroid M is another matroid N that is obtained from M by a sequence of restriction and contraction operations. Matroid minors are closely related to graph minors, and the restriction and contraction operations by which they are formed correspond to edge deletion and edge contraction operations in graphs. The theory of matroid minors leads to structural decompositions of matroids, and characterizations of matroid families by forbidden minors, analogous to the corresponding theory in graphs. Definitions: If M is a matroid on the set E and S is a subset of E, then the restriction of M to S, written M|S, is the matroid on the set S whose independent sets are the independent sets of M that are contained in S. Its circuits are the circuits of M that are contained in S, and its rank function is that of M restricted to subsets of S. Definitions: If T is an independent subset of E, the contraction of M by T, written M/T, is the matroid on the underlying set E − T whose independent sets are the sets whose union with T is independent in M. This definition may be extended to arbitrary T by choosing a basis for T and defining a set to be independent in the contraction if its union with this basis remains independent in M. The rank function of the contraction is r′(A) = r(A ∪ T) − r(T). Definitions: A matroid N is a minor of a matroid M if it can be constructed from M by restriction and contraction operations. In terms of the geometric lattice formed by the flats of a matroid, taking a minor of a matroid corresponds to taking an interval of the lattice, the part of the lattice lying between a given lower bound and upper bound element. Forbidden matroid characterizations: Many important families of matroids are closed under the operation of taking minors: if a matroid M belongs to the family, then every minor of M also belongs to the family. In this case, the family may be characterized by its set of "forbidden matroids", the minor-minimal matroids that do not belong to the family. A matroid belongs to the family if and only if it does not have a forbidden matroid as a minor. Often, but not always, the set of forbidden matroids is finite, paralleling the Robertson–Seymour theorem, which states that the set of forbidden minors of a minor-closed graph family is always finite. Forbidden matroid characterizations: An example of this phenomenon is given by the regular matroids, matroids that are representable over all fields. Equivalently, a matroid is regular if it can be represented by a totally unimodular matrix (a matrix whose square submatrices all have determinants equal to 0, 1, or −1). Tutte (1958) proved that a matroid is regular if and only if it does not have one of three forbidden minors: the uniform matroid U2,4 (the four-point line), the Fano plane, or the dual matroid of the Fano plane. For this he used his difficult homotopy theorem; simpler proofs have since been found. Forbidden matroid characterizations: The graphic matroids, matroids whose independent sets are the forest subgraphs of a graph, have five forbidden minors: the three for the regular matroids, and the two duals of the graphic matroids of the graphs K5 and K3,3 that by Wagner's theorem are forbidden minors for the planar graphs. Forbidden matroid characterizations: The binary matroids, matroids representable over the two-element finite field, include both graphic and regular matroids.
Tutte again showed that these matroids have a forbidden minor characterization: they are the matroids that do not have the four-point line as a minor. Rota conjectured that, for any finite field, the matroids representable over that field have finitely many forbidden minors. A full proof of this conjecture has been announced by Geelen, Gerards, and Whittle; as of 2015 it has not appeared. However, the matroids that can be represented over the real numbers have infinitely many forbidden minors. Branchwidth: Branch-decompositions of matroids may be defined analogously to their definition for graphs. Branchwidth: A branch-decomposition of a matroid is a hierarchical clustering of the matroid elements, represented as an unrooted binary tree with the elements of the matroid at its leaves. Removing any edge of this tree partitions the matroid's elements into two disjoint subsets; such a partition is called an e-separation. If r denotes the rank function of the matroid, then the width of an e-separation (A, B) is defined as r(A) + r(B) − r(M) + 1. The width of a decomposition is the maximum width of any of its e-separations, and the branchwidth of a matroid is the minimum width of any of its branch-decompositions. Branchwidth: The branchwidth of a graph and the branchwidth of the corresponding graphic matroid may differ: for instance, the three-edge path graph and the three-edge star have different branchwidths, 2 and 1 respectively, but they both induce the same graphic matroid with branchwidth 1. However, for graphs that are not trees, the branchwidth of the graph is equal to the branchwidth of its associated graphic matroid. The branchwidth of a matroid always equals the branchwidth of its dual. Branchwidth is an important component of attempts to extend the theory of graph minors to matroids: although treewidth can also be generalized to matroids, and plays a bigger role than branchwidth in the theory of graph minors, branchwidth has more convenient properties in the matroid setting. Branchwidth: If a minor-closed family of matroids representable over a finite field does not include the graphic matroids of all planar graphs, then there is a constant bound on the branchwidth of the matroids in the family, generalizing similar results for minor-closed graph families. Well-quasi-ordering: The Robertson–Seymour theorem implies that every matroid property of graphic matroids characterized by a list of forbidden minors can be characterized by a finite list. Another way of saying the same thing is that the partial order on graphic matroids formed by the minor operation is a well-quasi-ordering. However, the example of the real-representable matroids, which have infinitely many forbidden minors, shows that the minor ordering is not a well-quasi-ordering on all matroids. Well-quasi-ordering: Robertson and Seymour conjectured that the matroids representable over any particular finite field are well-quasi-ordered. So far this has been proven only for the matroids of bounded branchwidth. Matroid decompositions: The graph structure theorem is an important tool in the theory of graph minors, according to which the graphs in any minor-closed family can be built up from simpler graphs by clique-sum operations. Some analogous results are also known in matroid theory. In particular, Seymour's decomposition theorem states that all regular matroids can be built up in a simple way as the clique-sum of graphic matroids, their duals, and one special 10-element matroid.
As a consequence, linear programs defined by totally unimodular matrices may be solved combinatorially by combining the solutions to a set of minimum spanning tree problems corresponding to the graphic and co-graphic parts of this decomposition. Algorithms and complexity: One of the important components of graph minor theory is the existence of an algorithm for testing whether a graph H is a minor of another graph G, taking an amount of time that is polynomial in the size of G for any fixed choice of H (and, more strongly, fixed-parameter tractable if the size of H is allowed to vary). By combining this result with the Robertson–Seymour theorem, it is possible to recognize the members of any minor-closed graph family in polynomial time. Correspondingly, in matroid theory, it would be desirable to develop efficient algorithms for recognizing whether a given fixed matroid is a minor of an input matroid. Unfortunately, such a strong result is not possible: in the matroid oracle model, the only minors that can be recognized in polynomial time are the uniform matroids with rank or corank one. However, if the problem is restricted to the matroids that are representable over some fixed finite field (and represented as a matrix over that field), then, as in the graph case, it is conjectured to be possible to recognize the matroids that contain any fixed minor in polynomial time.
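The restriction and contraction operations defined above can be checked concretely on a small example. The following sketch models a matroid by its rank function and applies the formulas from the Definitions section to the uniform matroid U2,4 (the four-point line mentioned above); the function names and the representation are ad hoc choices for the example, not standard library calls.

```python
from itertools import combinations

# Model a matroid by its ground set E and rank function r: 2^E -> N.
# Here: the uniform matroid U_{2,4}, in which every set of at most
# two elements is independent.
E = frozenset({1, 2, 3, 4})

def r(A):
    return min(len(frozenset(A) & E), 2)

def restrict(r, S):
    """Restriction M|S: the same rank function, evaluated inside S only."""
    S = frozenset(S)
    return lambda A: r(frozenset(A) & S)

def contract(r, T):
    """Contraction M/T: r'(A) = r(A ∪ T) − r(T), per the definition above."""
    T = frozenset(T)
    rT = r(T)
    return lambda A: r(frozenset(A) | T) - rT

print(restrict(r, {1, 2, 3})({1, 2, 3}))  # 2: the restriction still has rank 2

# Contract the single element {4}: every remaining nonempty set has rank 1.
rc = contract(r, {4})
independent = [set(A)
               for k in (1, 2, 3)
               for A in combinations({1, 2, 3}, k)
               if rc(A) == len(A)]
print(independent)  # [{1}, {2}, {3}]: only the singletons stay independent
```

Contracting one element of U2,4 yields U1,3: every remaining singleton is independent but every pair is dependent, which the final line verifies.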
**Neo-Darwinism** Neo-Darwinism: Neo-Darwinism is generally used to describe any integration of Charles Darwin's theory of evolution by natural selection with Gregor Mendel's theory of genetics. It mostly refers to evolutionary theory from either 1895 (for the combination of Darwin's and August Weismann's theories of evolution) or 1942 (the "modern synthesis"), but it can mean any new Darwinian- and Mendelian-based theory, such as the current evolutionary theory. Original use: Darwin's theory of evolution by natural selection, as published in 1859, provided a selection mechanism for evolution, but not a trait transfer mechanism. Lamarckism was still a very popular candidate for this. August Weismann and Alfred Russel Wallace rejected the Lamarckian idea of inheritance of acquired characteristics that Darwin had accepted and later expanded upon in his writings on heredity. The basis for the complete rejection of Lamarckism was Weismann's germ plasm theory. Weismann realised that the cells that produce the germ plasm, or gametes (such as sperm and eggs in animals), separate from the somatic cells that go on to make other body tissues at an early stage in development. Since he could see no obvious means of communication between the two, he asserted that the inheritance of acquired characteristics was therefore impossible, a conclusion now known as the Weismann barrier. It is, however, usually George Romanes who is credited with the first use of the word in a scientific context. Romanes used the term to describe the combination of natural selection and Weismann's germ plasm theory: that evolution occurs solely through natural selection, and not by the inheritance of acquired characteristics resulting from use or disuse, thus using the word to mean "Darwinism without Lamarckism". Following the development, from about 1918 to 1947, of the modern synthesis of evolutionary biology, the term neo-Darwinian started to be used to refer to that contemporary evolutionary theory. Current meaning: Biologists, however, have not limited their application of the term neo-Darwinism to the historical synthesis. For example, Ernst Mayr wrote in 1984 that: The term neo-Darwinism for the synthetic theory [of the early 20th century] is sometimes considered wrong, because the term neo-Darwinism was coined by Romanes in 1895 as a designation of Weismann's theory. Publications such as Encyclopædia Britannica use neo-Darwinism to refer to current-consensus evolutionary theory, not the version prevalent during the early 20th century. Similarly, Richard Dawkins and Stephen Jay Gould have used neo-Darwinism in their writings and lectures to denote the forms of evolutionary biology that were contemporary when they were writing.
**Multi-level marketing** Multi-level marketing: Multi-level marketing (MLM), also called network marketing or pyramid selling, is a controversial marketing strategy for the sale of products or services in which the revenue of the MLM company is derived from a non-salaried workforce selling the company's products or services, while the earnings of the participants are derived from a pyramid-shaped or binary compensation commission system. In multi-level marketing, the compensation plan usually pays out to participants from two potential revenue streams. The first is based on a sales commission from directly selling the product or service; the second is paid out from commissions based upon the wholesale purchases made by other sellers whom the participant has recruited to also sell product. In the organizational hierarchy of MLM companies, recruited participants (as well as those whom the recruit recruits) are referred to as one's downline distributors. MLM salespeople are therefore expected to sell products directly to end-user retail consumers by means of relationship referrals and word-of-mouth marketing, but, more importantly, they are incentivized to recruit others to join the company's distribution chain as fellow salespeople, so that they too can become downline distributors. According to a report that studied the business models of 350 MLM companies in the United States, published on the Federal Trade Commission's website, at least 99% of people who join MLM companies lose money. Nonetheless, MLM companies function because downline participants are encouraged to hold onto the belief that they can achieve large returns, while the statistical improbability of this is de-emphasized. MLM companies have been made illegal, or otherwise strictly regulated, in some jurisdictions as mere variations of the traditional pyramid scheme. Terminology: Multi-level marketing is also known as "pyramid selling", "network marketing", and "referral marketing". Business model: Setup Independent non-salaried participants, referred to as distributors (variously called "associates", "independent business owners", "independent agents", etc.), are authorized to distribute the company's products or services. They are awarded their own immediate retail profit from customers, plus commission from the company (not from downlines) through a multi-level marketing compensation plan, which is based upon the volume of products sold through their own sales efforts as well as those of their downline organization. Business model: Independent distributors develop their organizations either by building an active consumer network, who buy direct from the company, or by recruiting a downline of independent distributors who also build a consumer network base, thereby expanding the overall organization. The combined number of recruits from these cycles constitutes the sales force, which is referred to as the salesperson's "downline". This "downline" is the pyramid in MLM's multiple-level structure of compensation. Business model: Participants The overwhelming majority of MLM participants make either an insignificant net profit or none at all. A study of 27 MLM schemes found that, on average, 99.6% of participants lost money. Indeed, the largest proportion of participants must operate at a net loss (after expenses are deducted) so that the few individuals in the uppermost level of the MLM pyramid can derive their significant earnings.
Said earnings are then emphasized by the MLM company to all other participants to encourage their continued participation at a continuing financial loss. Business model: Companies Many MLM companies generate billions of dollars in annual revenue and hundreds of millions of dollars in annual profit. However, these profits accrue to the detriment of the majority of the company's constituent workforce (the MLM participants). Only some of the profits are then shared with individual participants at the top of the MLM distributorship pyramid. The earnings of those top few participants are emphasized and championed at company seminars and conferences, thus creating the illusion that participants in the MLM can become financially successful. This is then advertised by the MLM company to recruit more distributors with an unrealistic anticipation of earning margins which are in reality merely theoretical and statistically improbable. Although an MLM company holds out those few top individual participants as evidence of how participation in the MLM could lead to success, the MLM business model depends on the failure of the overwhelming majority of all other participants, through the injection of money from their own pockets, so that it can become the revenue and profit of the MLM company, of which the MLM company shares only a small proportion with a few individuals at the top of the MLM participant pyramid. Other than the few at the top, participants provide nothing more than their own financial loss, for the company's own profit and the profit of the top few individual participants. Business model: Financial independence The main sales pitch of MLM companies to their participants and prospective participants is not the MLM company's products or services. The products or services are largely peripheral to the MLM model. Rather, the true sales pitch and emphasis is on the confidence given to participants of potential financial independence through participation in the MLM, luring with phrases like "the lifestyle you deserve" or "independent distributor". Erik German's memoir My Father's Dream documents the author's father's failures through "get-rich-quick schemes" such as Amway. The memoir illustrates the multi-level marketing sales principle known as "selling the dream". Although the emphasis is always on the potential of success and the positive life change that "might" or "could" (not "will" or "can") result, disclosure statements include disclaimers that participants should not rely on the earning results of other participants in the highest levels of the MLM participant pyramid as an indication of what they should expect to earn. MLM companies rarely emphasize the extreme likelihood of failure, or the extreme likelihood of financial loss, from participation in MLM.
Given that the overwhelming majority of MLM participants cannot realistically make a net profit, let alone a significant net profit, and instead overwhelmingly operate at net losses, some sources have defined all MLM companies as a type of pyramid scheme, even where they have not been made illegal like traditional pyramid schemes through legislative statutes. MLM companies are designed to make profit for the owners/shareholders of the company and a few individual participants at the top levels of the MLM pyramid of participants. According to the U.S. Federal Trade Commission (FTC), some MLM companies already constitute illegal pyramid schemes even by the narrower existing legislation, exploiting members of the organization. Comparisons to pyramid schemes: Lawsuits Companies that use the MLM business model have been a frequent subject of criticism and lawsuits. Legal claims against MLM companies have included, among other things: their similarity to traditional illegal pyramid schemes; price fixing of products or services; collusion and racketeering in backroom deals, where secret compensation packages are created between the MLM company and a few individual participants, to the detriment of others; high initial entry costs (for a marketing kit and first products); emphasis on recruitment of others over actual sales (especially sales to non-participants); encouraging, if not requiring, members to purchase and use the company's products; exploitation of personal relationships as both sales and recruiting targets; complex and exaggerated compensation schemes; false product claims; the company or leading distributors making major money off participant-attended conventions, training events and materials, and advertising materials; and cult-like techniques which some groups use to enhance their members' enthusiasm and devotion. Direct selling versus network marketing: "Network marketing" and "multi-level marketing" (MLM) have been described by author Dominique Xardel as being synonymous, with it being a type of direct selling. Some sources emphasize that multi-level marketing is merely one form of direct selling, rather than being direct selling. Other terms that are sometimes used to describe multi-level marketing include "word-of-mouth marketing", "interactive distribution", and "relationship marketing". Critics have argued that the use of these and other different terms and "buzzwords" is an effort to draw distinctions between multi-level marketing and illegal Ponzi schemes, chain letters, and consumer fraud scams, where none meaningfully exist. The Direct Selling Association (DSA), a lobbying group for the MLM industry, reported that in 1990 only 25% of DSA members used the MLM business model. By 1999, this had grown to 77.3%. By 2009, 94.2% of DSA members were using MLM, accounting for 99.6% of sellers and 97.1% of sales. Companies such as Avon, Electrolux, Tupperware, and Kirby were all originally single-level marketing companies, using that traditional and uncontroversial direct selling business model (distinct from MLM) to sell their goods. However, they later introduced multi-level compensation plans, becoming MLM companies. The DSA has approximately 200 members, while it is estimated there are over 1,000 firms using multi-level marketing in the United States alone.
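The two revenue streams described in the business-model section above can be made concrete with a small simulation. The commission rates, sales figures, and tree shape below are entirely hypothetical and chosen only to illustrate the structure; real compensation plans vary widely and are usually far more complex.

```python
from dataclasses import dataclass, field

# Hypothetical rates for illustration only; real plans vary widely.
RETAIL_COMMISSION = 0.25   # share of a participant's own retail sales
OVERRIDE_RATE = 0.05       # share of each downline member's wholesale purchases

@dataclass
class Participant:
    name: str
    retail_sales: float = 0.0         # direct sales to end customers
    wholesale_purchases: float = 0.0  # product bought from the company
    recruits: list["Participant"] = field(default_factory=list)

def downline(p):
    """All participants below p in the pyramid, at any depth."""
    for r in p.recruits:
        yield r
        yield from downline(r)

def earnings(p):
    """Stream 1: commission on own retail sales.
    Stream 2: override on the wholesale purchases of the entire downline."""
    own = RETAIL_COMMISSION * p.retail_sales
    override = OVERRIDE_RATE * sum(d.wholesale_purchases for d in downline(p))
    return own + override

top = Participant("top", retail_sales=200, wholesale_purchases=500)
for i in range(3):
    top.recruits.append(
        Participant(f"recruit{i}", retail_sales=50, wholesale_purchases=500))

print(earnings(top))              # 125.0: 50 from retail + 75 from overrides
print(earnings(top.recruits[0]))  # 12.5: no downline, so no override income
```

Even in this tiny example, the participant at the top earns most of the commission income while recruits with no downline earn only their own retail margin; deeper pyramids amplify this concentration.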
History: The origin of multi-level marketing is often disputed, but multi-level-marketing-style businesses existed in the 1920s and the 1930s, such as the California Vitamin Company (later named Nutrilite) and the California Perfume Company (renamed "Avon Products"). Income levels: Several sources have commented on the income level of specific MLM companies or MLM companies in general: The Times: "The Government investigation claims to have revealed that just 10% of Amway's agents in Britain make any profit, with less than one in ten selling a single item of the group's products." Eric Scheibeler, a high level "Emerald" Amway member: "UK Justice Norris found in 2008 that out of an IBO [Independent Business Owners] population of 33,000, 'only about 90 made sufficient incomes to cover the costs of actively building their business.' That's a 99.7 percent loss rate for investors." Newsweek: based on Mona Vie's own 2007 income disclosure statement, "fewer than 1 percent qualified for commissions and of those, only 10 percent made more than $100 a week." Business Students Focus on Ethics: "In the USA, the average annual income from MLM for 90% of MLM members is no more than US $5,000, which is far from being a sufficient means of making a living (San Lian Life Weekly 1998)" USA Today has had several articles: "While earning potential varies by company and sales ability, DSA says the median annual income for those in direct sales is $2,400." In an October 15, 2010, article, it was stated that documents of an MLM called Fortune Hi-Tech Marketing reveal that 30 percent of its representatives make no money and that 54 percent of the remaining 70 percent only make $93 a month, before costs. Fortune was under investigation by the Attorneys General of Texas, Kentucky, North Dakota, and North Carolina, with Missouri, South Carolina, Illinois, and Florida following up complaints against the company. The FTC eventually stated that Fortune Hi-Tech Marketing was a pyramid scheme and that checks totaling more than $3.7 million were being mailed to the victims. Income levels: A February 10, 2011, article stated "It can be very difficult, if not impossible, for most individuals to make a lot of money through the direct sale of products to consumers. And big money is what recruiters often allude to in their pitches." Roland Whitsell, a former business professor who spent 40 years researching and teaching the pitfalls of multilevel marketing, said: "You'd be hard-pressed to find anyone making over $1.50 an hour, (t)he primary product is opportunity. The strongest, most powerful motivational force today is false hope." Based on the results of a 2018 poll conducted with 1,049 MLM sellers, the majority (60%) earned an average of less than $100 in sales over a five-year period, and 20% never made a single sale. The majority of sellers made less than 70 cents per hour. Nearly 32 percent of those polled acquired credit card debt to finance their MLM involvement. Legality and legitimacy: Bangladesh In 2015, the Government of Bangladesh banned all types of domestic and foreign MLM trade in the country. Legality and legitimacy: China Multi-level marketing (simplified Chinese: 传销; traditional Chinese: 傳銷; pinyin: chuán xiāo; lit. 'spread selling') was first introduced to mainland China by American, Taiwanese, and Japanese companies following the Chinese economic reform of 1978. This rise in multi-level marketing's popularity coincided with economic uncertainty and a new shift towards individual consumerism.
Multi-level marketing was banned on the mainland by the government in 1998, citing social, economic, and taxation issues. A further regulation, the "Prohibition of Chuanxiao" (MLM being a type of chuanxiao), was enacted in 2005; clause 3 of Chapter 2 of the regulation states that having downlines is illegal. O'Regan wrote, 'With this regulation China makes clear that while Direct Sales is permitted in the mainland, Multi-Level Marketing is not'. MLM companies have been made illegal in China as a mere variation of the traditional pyramid scheme. MLM companies have been trying to find ways around China's prohibitions, or have been developing other methods, such as direct sales, to take their products to China through retail operations. The Direct Sales Regulations limit direct selling to cosmetics, health food, sanitary products, bodybuilding equipment and kitchen utensils, and they require Chinese or foreign companies ("FIEs") who intend to engage in direct sale business in mainland China to apply for and obtain a direct selling license from the Ministry of Commerce ("MOFCOM"). As of 2016, 73 companies, both domestic and foreign, had obtained the direct selling license. Legality and legitimacy: Some multi-level marketing sellers have circumvented this ban by establishing addresses and bank accounts in Hong Kong, where the practice is legal, while selling and recruiting on the mainland. It was not until August 23, 2005, that the State Council promulgated rules that dealt specifically with direct sale operations: the Administration of Direct Sales (which entered into effect on December 1, 2005) and the Regulations for the Prohibition of Chuanxiao (which entered into effect on November 1, 2005). Where direct selling is allowed, it is only permitted under the most stringent requirements, in order to ensure the operations are not pyramid schemes, MLM, or fly-by-night operations. Legality and legitimacy: Saudi Arabia MLM is banned in Saudi Arabia by a religious fatwa imposed nationally; for this reason, MLM companies such as Amway, Mary Kay, Oriflame and Herbalife sell their products there through online selling instead of MLM. Legality and legitimacy: United States MLM businesses operate in all 50 U.S. states. Businesses may use terms such as "affiliate marketing" or "home-based business franchising". Some sources say that all MLM companies are essentially pyramid schemes, even if they are legal. Utah has been named the "unofficial world capital of multi-level marketing and direct sales companies" and is home to at least 15 major MLMs, more MLMs per capita than any other state. The U.S. Federal Trade Commission (FTC) states: "Steer clear of multilevel marketing plans that pay commissions for recruiting new distributors. They're actually illegal pyramid schemes. Why is pyramiding dangerous? Because plans that pay commissions for recruiting new distributors inevitably collapse when no new distributors can be recruited. And when a plan collapses, most people—except perhaps those at the very top of the pyramid—end up empty-handed." In a 2004 Staff Advisory letter to the Direct Selling Association, the FTC states: Much has been made of the personal, or internal, consumption issue in recent years. In fact, the amount of internal consumption in any multi-level compensation business does not determine whether or not the FTC will consider the plan a pyramid scheme.
The critical question for the FTC is whether the revenues that primarily support the commissions paid to all participants are generated from purchases of goods and services that are not simply incidental to the purchase of the right to participate in a money-making venture. Legality and legitimacy: The Federal Trade Commission warns: "Not all multilevel marketing plans are legitimate. Some are pyramid schemes. It's best not to get involved in plans where the money you make is based primarily on the number of distributors you recruit and your sales to them, rather than on your sales to people outside the plan who intend to use the products." Legality and legitimacy: In the 1979 case In re Amway Corp., the Federal Trade Commission indicated that multi-level marketing was not illegal per se in the United States. However, Amway was found guilty of price fixing (by effectively requiring "independent" distributors to sell at the same fixed price) and making exaggerated income claims. The FTC advises that multi-level marketing organizations with greater incentives for recruitment than product sales are to be viewed skeptically. The FTC also warns that the practice of getting commissions from recruiting new members is outlawed in most states as "pyramiding". Walter J. Carl stated in a 2004 Western Journal of Communication article that "MLM organizations have been described by some as cults (Butterfield, 1985), pyramid schemes (Fitzpatrick & Reynolds, 1997), or organizations rife with misleading, deceptive, and unethical behavior (Carter, 1999), such as the questionable use of evangelical discourse to promote the business (Höpfl & Maddrell, 1996), and the exploitation of personal relationships for financial gain (Fitzpatrick & Reynolds, 1997)". In China, volunteers working to rescue people from the schemes have been physically attacked. MLM companies are also criticized for being unable to fulfill their promises for the majority of participants due to basic conflicts with Western cultural norms. There are even claims that the success rate for breaking even, let alone making money, is far worse than for other types of businesses: "The vast majority of MLM companies are recruiting MLM companies, in which participants must recruit aggressively to profit. Based on available data from the companies themselves, the loss rate for recruiting MLM companies is approximately 99.9%; i.e., 99.9% of participants lose money after subtracting all expenses, including purchases from the company." (By comparison, skeptic Brian Dunning points out that "only 97.14% of Las Vegas gamblers lose money".) In part, this is because encouraging recruits to further "recruit people to compete with [them]" leads to "market saturation." It has also been claimed that "(b)y its very nature, MLM is completely devoid of any scientific foundations." Because of this encouragement of recruits to recruit their own competitors, some have even gone so far as to say that, at best, modern MLM companies are nothing more than legalized pyramid schemes, with one stating that "Multi-level marketing companies have become an accepted and legally sanctioned form of pyramid scheme in the United States" while another states that "Multi-Level Marketing, a form of Pyramid Scheme, is not necessarily fraudulent."
In October 2010, it was reported that multi-level marketing companies were being investigated by a number of state attorneys general amid allegations that salespeople were primarily paid for recruiting and that more recent recruits cannot earn anything near what early entrants do. Industry critic Robert L. FitzPatrick has called multi-level marketing "the Main Street bubble" that will eventually burst. Religious views: Islam Many Islamic jurists and religious bodies, including the Permanent Committee for Scholarly Research and Ifta of Saudi Arabia, have considered MLM trade to be prohibited (haram). They argue that MLM trade involves deceiving others into participating, and that the transaction bears resemblance to both riba and gharar.
**Molecular mass** Molecular mass: The molecular mass (m) is the mass of a given molecule, for which the unit dalton (Da) is used. Different molecules of the same compound may have different molecular masses because they contain different isotopes of an element. The related quantity relative molecular mass, as defined by IUPAC, is the ratio of the mass of a molecule to the atomic mass constant (which is equal to one dalton) and is unitless. The molecular mass and relative molecular mass are distinct from, but related to, the molar mass. The molar mass is defined as the mass of a given substance divided by the amount of substance and is expressed in g/mol. That makes the molar mass an average over many particles or molecules, and the molecular mass the mass of one specific particle or molecule. The molar mass is usually the more appropriate quantity when dealing with macroscopic (weighable) quantities of a substance. Molecular mass: The term molecular weight is most authoritatively synonymous with relative molecular mass; however, in common practice, use of this terminology is highly variable. When the molecular weight is given with the unit Da, it is frequently as a weighted average similar to the molar mass but with different units. In molecular biology, the mass of macromolecules is referred to as their molecular weight and is expressed in kDa, although the numerical value is often approximate and representative of an average. Molecular mass: The terms molecular mass, molecular weight, and molar mass are used interchangeably in less formal contexts where unit-correctness is not needed. Molecular mass is more commonly used when referring to the mass of a single or specific well-defined molecule, and less commonly than molecular weight when referring to a weighted average of a sample. Prior to the 2019 redefinition of SI base units, quantities expressed in daltons (Da) were by definition numerically equivalent to molar mass expressed in g/mol and were thus strictly numerically interchangeable. After the 20 May 2019 redefinition of units, this relationship is only nearly equivalent, although the difference is negligible for all practical purposes. Molecular mass: The molecular mass of small to medium-size molecules, measured by mass spectrometry, can be used to determine the composition of elements in the molecule. The molecular masses of macromolecules, such as proteins, can also be determined by mass spectrometry; however, methods based on viscosity and light scattering are also used to determine molecular mass when crystallographic or mass-spectrometric data are not available. Calculation: Molecular masses are calculated from the atomic masses of each nuclide present in the molecule, while relative molecular masses are calculated from the standard atomic weights of each element. The standard atomic weight takes into account the isotopic distribution of the element in a given sample (usually assumed to be "normal"). For example, water has a relative molecular mass of 18.0153(3), but individual water molecules have molecular masses which range between 18.010 564 6863(15) Da (¹H₂¹⁶O) and 22.027 7364(9) Da (²H₂¹⁸O). Calculation: Atomic and molecular masses are usually reported in daltons, a unit defined in terms of the mass of the isotope ¹²C (carbon-12). Relative atomic and molecular masses as defined are dimensionless. However, the name unified atomic mass unit (u) is still used in common practice.
For example, the relative molecular mass and molecular mass of methane, whose molecular formula is CH4, are calculated respectively as follows: the relative molecular mass is the sum of the standard atomic weights of the constituent atoms, 12.011 + 4 × 1.008 = 16.043, while the molecular mass of the most abundant isotopologue, ¹²C¹H₄, is the sum of the constituent isotopic masses, 12.000 + 4 × 1.00783 ≈ 16.031 Da. The uncertainty in a molecular mass reflects variance (error) in measurement, not the natural variance in isotopic abundances across the globe. In high-resolution mass spectrometry the mass isotopomers ¹²C¹H₄ and ¹³C¹H₄ are observed as distinct molecules, with molecular masses of approximately 16.031 Da and 17.035 Da, respectively. The intensity of the mass-spectrometry peaks is proportional to the isotopic abundances in the molecular species. ¹²C²H¹H₃ can also be observed, with a molecular mass of 17 Da. Determination: Mass spectrometry In mass spectrometry, the molecular mass of a small molecule is usually reported as the monoisotopic mass, that is, the mass of the molecule containing only the most common isotope of each element. Note that this also differs subtly from the molecular mass in that the choice of isotopes is defined, and thus it is a single specific molecular mass out of the many possibilities. The masses used to compute the monoisotopic molecular mass are found in a table of isotopic masses and are not found on a typical periodic table. The average molecular mass is often used for larger molecules, since molecules with many atoms are unlikely to be composed exclusively of the most abundant isotope of each element. A theoretical average molecular mass can be calculated using the standard atomic weights found on a typical periodic table, since there is likely to be a statistical distribution of atoms representing the isotopes throughout the molecule. The average molecular mass of a sample, however, usually differs substantially from this, since a single sample average is not the same as the average of many geographically distributed samples. Determination: Mass photometry Mass photometry (MP) is a rapid, in-solution, label-free method of obtaining the molecular mass of proteins, lipids, sugars and nucleic acids at the single-molecule level. The technique is based on interferometric scattered-light microscopy. Contrast from light scattered by a single binding event at the interface between the protein solution and a glass slide is detected; it is linearly proportional to the mass of the molecule. This technique can also measure sample homogeneity, detect protein oligomerisation states, characterise complex macromolecular assemblies (ribosomes, GroEL, AAV) and identify protein interactions such as protein–protein interactions. Mass photometry can measure molecular mass accurately over a wide range of masses (40 kDa – 5 MDa). Determination: Hydrodynamic methods To a first approximation, the basis for determination of molecular mass according to Mark–Houwink relations is the fact that the intrinsic viscosity of solutions (or suspensions) of macromolecules depends on the volumetric proportion of the dispersed particles in a particular solvent. Specifically, the hydrodynamic size, as related to molecular mass, depends on a conversion factor describing the shape of a particular molecule. This allows the apparent molecular mass to be described from a range of techniques sensitive to hydrodynamic effects, including DLS, SEC (also known as GPC when the eluent is an organic solvent), viscometry, and diffusion-ordered nuclear magnetic resonance spectroscopy (DOSY). The apparent hydrodynamic size can then be used to approximate molecular mass using a series of macromolecule-specific standards.
As this requires calibration, it is frequently described as a "relative" molecular mass determination method. Determination: Static light scattering It is also possible to determine absolute molecular mass directly from light scattering, traditionally using the Zimm method. This can be accomplished either via classical static light scattering or via multi-angle light scattering detectors. Molecular masses determined by this method do not require calibration, hence the term "absolute". The only external measurement required is the refractive index increment, which describes the change in refractive index with concentration.
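The distinction between the average (relative) molecular mass and the monoisotopic mass can be illustrated with a short calculation. The sketch below uses standard atomic weights and most-abundant-isotope masses from published tables; the function and table names are invented for the example.

```python
# Standard atomic weights (for the average/relative molecular mass) and
# most-abundant-isotope masses in Da (for the monoisotopic mass).
STANDARD_ATOMIC_WEIGHT = {"H": 1.008, "C": 12.011, "O": 15.999}
MONOISOTOPIC_MASS = {"H": 1.0078250319, "C": 12.0, "O": 15.9949146221}

def molecular_mass(formula: dict[str, int], table: dict[str, float]) -> float:
    """Sum the chosen mass table over the atoms in the formula."""
    return sum(table[el] * n for el, n in formula.items())

methane = {"C": 1, "H": 4}
water = {"H": 2, "O": 1}

print(round(molecular_mass(methane, STANDARD_ATOMIC_WEIGHT), 3))  # 16.043
print(round(molecular_mass(methane, MONOISOTOPIC_MASS), 4))       # 16.0313
print(round(molecular_mass(water, STANDARD_ATOMIC_WEIGHT), 3))    # 18.015
print(round(molecular_mass(water, MONOISOTOPIC_MASS), 4))         # 18.0106
```

The water values approximately reproduce the figures quoted in the Calculation section above (18.0153 for the relative molecular mass versus 18.0106 Da for the lightest isotopologue, ¹H₂¹⁶O).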
**Protector lock** Protector lock: The term protector lock has referred to two unrelated lock designs, one invented in the 1850s by Alfred Hobbs, the other in 1874 by Theodor Kromer. Hobbs's protector lock: The Protector lock (also called the "moveable lock") was an early 1850s lock design by the leading American locksmith Alfred Charles Hobbs, the first man to pick the six-levered Chubb detector lock, at the Crystal Palace Exhibition in 1851. The Chubb lock had been created with the intent of being a lock that could not be picked. Hobbs's protector lock: Before Hobbs and his revolutionary lock designs, locks were opened by means of a series of false keys, in a process that could take an extremely long time; if the series was not properly completed in the lock, and the combination not completely correct, the lock could not be defeated. This design was accepted as quite unbreakable until Hobbs was able to pick such locks, using very fine and careful manual dexterity, applying a certain level of pressure on the bolt while manipulating each lever in turn with a tiny pick inserted through the keyhole. In an attempt to create a better locking system, Hobbs patented the Protector lock, which, complex in design as it was, transferred the pressure applied to the lock's internal bolt and tumbler mechanisms to a fixed pin. Hobbs claimed that his design was impossible to defeat and superior to the locks then in use, but in 1854 one of Chubb's locksmiths was able to crack it with the help of special tools. The Protector lock was distinct from Hobbs's other major lock design of the time, which he called the American lock and which slightly preceded the Protector lock. The American lock was complicated to use and expensive to purchase, with the added disadvantage of requiring a very large key, but it did not differ greatly from the Protector lock as far as security was concerned. The Protector lock was described as much simpler to use. The one advantage of the American lock over the Protector lock was its potential for greater security, if certain internal parts of the lock and key were arranged in a particular way. Kromer's protector lock: The protector lock of Theodor Kromer was a high-security lock first patented in Prussia in 1874. Earlier competing designs included the American Alfred Charles Hobbs's protector lock (described above), the Chubb detector lock, and the Englishman Joseph Bramah's lock. Kromer's lock was designed for mass production and was highly successful, rapidly outselling Bramah's. It was produced for more than 125 years, with many further patented developments, and is regarded as one of the most secure locks ever made, remaining without a public demonstration of picking until October 2022. Kromer was a mechanic born in Neustadt in 1839. He founded the Kromer company together with his brother Carl in 1868. In design, his lock was a tumbler or wafer lock containing eleven wafers stacked in a central cylindrical core, slotted on each side such that the wafers project to one side or the other when locked. When the correct key is inserted and turned, the wafers are pushed to a position where they span the central core precisely, projecting from neither side, and at this point the core is free to rotate in its housing. The wafers vary in design: some are in one piece, others in two, such that the two halves must both be aligned. The lock was not only hard to pick; it was also difficult to tell what shape the key should have by mere inspection through the keyhole.
Kromer's protector lock: The keys for the Kromer protector lock were designed to be extraordinarily difficult to copy. They are asymmetric, double-bitted keys, but unlike a typical key, some parts of the bit are bevelled such that the bit is longer or shorter on the leading edge than the trailing edge (i.e. the outer, working edge of the bit is not flat or gently-rounded to match the circular path described by the key as it is turned). One cut in the key was also angled relative to the shaft. These various features are tested by the eleven wafers. Since the lock could be fitted to anything from a small strong-box to a huge bank vault, the keys were made in corresponding lengths, including foldable keys with sufficient length to pass through a thick vault door.
**Polarization in astronomy** Polarization in astronomy: Polarization is an important phenomenon in astronomy. Stars: The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields. Stars: Though the integrated thermal radiation of stars is not usually appreciably polarized at source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A type stars).[1] Sun: Both circular and linear polarization of sunlight have been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization), though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions, which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. As a result, polarization of the scattered photons is also modified, providing a diagnostic tool for understanding stellar magnetic fields. Other sources: Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers). The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization. Other sources: Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field in our galaxy as well as in radio galaxies via Faraday rotation. In some cases it can be difficult to determine how much of the Faraday rotation is in the external source and how much is local to our own galaxy, but in many cases it is possible to find another distant source nearby in the sky; thus, by comparing the candidate source and the reference source, the results can be untangled. Cosmic microwave background: The polarization of the cosmic microwave background (CMB) is also being used to study the physics of the very early universe. The CMB exhibits two components of polarization: B-mode (divergence-free, like a magnetic field) and E-mode (curl-free, gradient-only, like an electric field) polarization. The BICEP2 telescope located at the South Pole helped in the detection of B-mode polarization in the CMB. The polarization modes of the CMB may provide more information about the influence of gravitational waves on the development of the early universe. Cosmic microwave background: It has been suggested that astronomical sources of polarised light caused the chirality found in biological molecules on Earth.
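The rotation-measure relation below is a standard radio-astronomy formula quoted here as a supplement (it does not appear in the text above). It shows why comparing a candidate source with a nearby reference source lets the local and external Faraday contributions be separated: each contribution adds linearly to the rotation measure along its own path. With the electron density n_e in cm⁻³, the line-of-sight field B∥ in microgauss, and the path length dl in parsecs:

```latex
% Polarization angle as a function of wavelength under Faraday rotation
\chi(\lambda) = \chi_0 + \mathrm{RM}\,\lambda^{2},
\qquad
\mathrm{RM} \approx 0.81 \int n_e\, B_{\parallel}\, \mathrm{d}l \quad [\mathrm{rad\,m^{-2}}]
```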
**Monoxenous development** Monoxenous development: Monoxenous development, or monoxeny, characterizes a parasite whose development is restricted to a single host species. The etymology of the terms monoxeny / monoxenous derives from the two ancient Greek words μόνος (mónos), meaning "unique", and ξένος (xénos), meaning "foreign". In a monoxenous life cycle, the parasitic species may be strictly host specific (using only a single host species, such as gregarines) or not (e.g. Eimeria, Coccidia).
**Cross-wing** Cross-wing: A cross-wing is an addition to a house, at right angles to the original block of a house, usually with a gable. A cross-wing plan is an architectural plan reflecting this; cross-wing architecture describes the style. Cross-wing: James Stevens Curl, in A Dictionary of Architecture and Landscape Architecture, defines it as a "Wing attached to the hall-range of a medieval house, its axis at right angles to the hall-range, and often gabled." Cross-wing plans have been used in other eras. For example, during the settlement period in Utah in the late 1800s, original small hall-and-parlor plan houses, often built in vernacular Classical Revival style, were sometimes extended by the addition of a Victorian-style cross-wing.
**Roller shoe** Roller shoe: Roller shoes are shoes that have wheels protruding slightly from the heel, allowing the wearer to alternate between walking and rolling. A number of tricks can be done with them, including pop wheelies and spins. These shoes commonly include either one or two wheels. Speeds: Although uncommon, roller shoes can include a battery and other powered components. Depending on how fast the wearer runs, roller shoes typically reach speeds of about 5 to 25 mph (8 to 40 km/h).
**Mohawk–Hudson convergence** Mohawk–Hudson convergence: Mohawk–Hudson convergence (MHC) is a mesoscale meteorology phenomenon occurring over the Capital District region of upstate New York, United States. The small convergence zone forms within specific weather conditions sometimes found in the wake of extratropical cyclones shifting east of the area. Given air pressure decreasing with both longitude and latitude, as well as weak synoptic low-level flow, winds are channeled east along the Mohawk Valley and south through the Hudson Valley, converging over Albany. With sufficient moisture in the lower atmosphere, a localized area of precipitation may form where the valleys meet, extending for several miles around Albany. The process manifests primarily in the lowest 2,500 ft (760 m) of the atmosphere. MHC-induced precipitation occurs predominantly in the winter. It typically produces low clouds and light snowfall, often locally prolonging significant snow events by several hours. The strongest MHC events may yield snowfall rates approaching 1 in (2.5 cm) per hour. Occasionally, MHC contributes to shower and thunderstorm formation in the warm season. In early August 2008, two days of training thunderstorms over the Capital District were attributed to MHC; the result was locally heavy rain, amounting to over 1 in (25 mm). A relatively rare variation of MHC, termed "Southern Mohawk–Hudson convergence" (SMHC), occurs in the summer, when a southwesterly wind is present in advance of an approaching cold front. In that scenario, the Hudson and Mohawk valleys may direct the flow to become more southerly and westerly, respectively, yielding the formation of thunderstorms around Albany when conditions permit. As with MHC, SMHC is most pronounced in the absence of mechanisms for strong synoptic ascent over the region. Whereas the effects of the convergence zone are generally insignificant in the winter, SMHC presents more of a forecasting challenge when thunderstorms rapidly develop and threaten to impede travel at Albany International Airport. Thunderstorms associated with SMHC have the potential to become severe.
**OR2M3** OR2M3: Olfactory receptor 2M3 is a protein that in humans is encoded by the OR2M3 gene. Olfactory receptors interact with odorant molecules in the nose to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms. OR2M3 has a copper binding pocket. Ligands: 3-Mercapto-2-methylpentan-1-ol, a chemical associated with the characteristic smell of raw onions.
**PsychOpen** PsychOpen: PsychOpen is a European open-access publishing platform for psychology operated by the research support organization Leibniz Institute for Psychology Information (ZPID), which combines traditional scientific and Internet-based publishing. PsychOpen aims to foster the visibility of psychological research in Europe and beyond, and to ensure free access to research for scholars and professionals in the field. Mission: PsychOpen is free of charge and open to all areas of psychology and its related disciplines, including scholarly as well as professional publications. Publication types include research articles, clinical reports, monographs, etc., in English as well as other languages such as Portuguese and Bulgarian. All content is enriched with English metadata (title, keywords and abstracts) and is free of charge for authors, editors and readers. Publications: PsychOpen publishes the following international open-access journals (effective August 2013): Europe's Journal of Psychology Journal of Social and Political Psychology Interpersona: An International Journal on Personal Relationships Psychological Thought Psychology, Community & Health The European Journal of Counselling Psychology Social Psychological Bulletin Technical Infrastructure: The Leibniz Institute for Psychology Information provides the technical infrastructure. PsychOpen uses Open Journal Systems, open-source software specifically developed for the management of peer-reviewed academic journals. Memberships: PsychOpen is a member of CLOCKSS, CrossRef and OASPA, the Open Access Scholarly Publishers Association.
**Domengine Formation** Domengine Formation: The Domengine Formation is a geologic formation in California. It preserves fossils dating back to the Paleogene period.
**Wien approximation** Wien approximation: Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896. The equation accurately describes the short-wavelength (high-frequency) spectrum of thermal emission from objects, but it fails to fit the experimental data for long-wavelength (low-frequency) emission. Details: Wien derived his law from thermodynamic arguments, several years before Planck introduced the quantization of radiation. Wien's original paper did not contain the Planck constant. In this paper, Wien took the wavelength of black body radiation and combined it with the Maxwell–Boltzmann energy distribution for atoms. The exponential curve was created by the use of Euler's number e raised to a power inversely proportional to the product of wavelength and temperature. Fundamental constants were later introduced by Max Planck. The law may be written as

I(ν,T) = (2hν³/c²) e^(−hν/(kB T))

(note the simple exponential frequency dependence of this approximation) or, by introducing natural Planck units (h = c = kB = 1),

I(ν,T) = 2ν³ e^(−x), with x = ν/T,

where: I(ν,T) is the amount of energy per unit surface area per unit time per unit solid angle per unit frequency emitted at a frequency ν. Details: T is the temperature of the black body. x is the ratio of photon energy to thermal energy, hν/(kB T). h is the Planck constant. c is the speed of light. kB is the Boltzmann constant. This equation may also be written as

I(λ,T) = (2hc²/λ⁵) e^(−hc/(λ kB T)),

where I(λ,T) is the amount of energy per unit surface area per unit time per unit solid angle per unit wavelength emitted at a wavelength λ. The peak value of this curve, as determined by setting the derivative of the equation equal to zero and solving, occurs at a wavelength λmax and frequency νmax of

λmax = hc/(5 kB T) ≈ (2.88×10⁻³ m·K)/T and νmax = 3 kB T/h.

Relation to Planck's law: The Wien approximation was originally proposed as a description of the complete spectrum of thermal radiation, although it failed to accurately describe long-wavelength (low-frequency) emission. However, it was soon superseded by Planck's law, which accurately describes the full spectrum, derived by treating the radiation as a photon gas and accordingly applying Bose–Einstein in place of Maxwell–Boltzmann statistics. Planck's law may be given as

I(ν,T) = (2hν³/c²) · 1/(e^(hν/(kB T)) − 1).

The Wien approximation may be derived from Planck's law by assuming hν ≫ kB T. When this is true, then e^(hν/(kB T)) − 1 ≈ e^(hν/(kB T)), and so Planck's law approximately equals the Wien approximation at high frequencies. Other approximations of thermal radiation: The Rayleigh–Jeans law developed by Lord Rayleigh may be used to accurately describe the long-wavelength spectrum of thermal radiation but fails to describe the short-wavelength spectrum of thermal emission.
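As a numerical aside (a sketch added here, not part of the source article), the ratio of the Wien form to the Planck form is 1 − e^(−hν/(kB T)), so the two agree at high frequencies and diverge at low frequencies. The following script, using CODATA constant values, demonstrates this:

```python
# Sketch: compare Wien's approximation against Planck's law at one temperature.
import math

h  = 6.62607015e-34   # Planck constant, J*s
c  = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance I(nu, T)."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def wien(nu, T):
    """Wien approximation to the spectral radiance."""
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (kB * T))

T = 5000.0  # kelvin
for nu in (1e13, 1e14, 1e15):  # low to high frequency
    # ratio -> 1 as h*nu >> kB*T; far below 1 in the Rayleigh-Jeans regime
    print(f"nu = {nu:.0e} Hz   Wien/Planck = {wien(nu, T) / planck(nu, T):.4f}")
```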
**Lesson plan** Lesson plan: A lesson plan is a teacher's detailed description of the course of instruction or "learning trajectory" for a lesson. A daily lesson plan is developed by a teacher to guide class learning. Details will vary depending on the preference of the teacher, subject being covered, and the needs of the students. There may be requirements mandated by the school system regarding the plan. A lesson plan is the teacher's guide for running a particular lesson, and it includes the goal (what the students are supposed to learn), how the goal will be reached (the method, procedure) and a way of measuring how well the goal was reached (test, worksheet, homework etc.). Elements: While there are many formats for a lesson plan, most lesson plans contain some or all of these elements, typically in this order: Title of the lesson Time required to complete the lesson List of required materials List of objectives, which may be behavioral objectives (what the student can do at lesson completion) or knowledge objectives (what the student knows at lesson completion) The set (or lead-in, or bridge-in) that focuses students on the lesson's skills or concepts—these include showing pictures or models, asking leading questions, or reviewing previous lessons An instructional component that describes the sequence of events that make up the lesson, including the teacher's instructional input and, where appropriate, guided practice by students to consolidate new skills and ideas Independent practice that allows students to extend skills or knowledge on their own A summary, where the teacher wraps up the discussion and answers questions An evaluation component, a test for mastery of the instructed skills or concepts—such as a set of questions to answer or a set of instructions to follow A risk assessment where the lesson's risks and the steps taken to minimize them are documented An analysis component the teacher uses to reflect on the lesson itself—such as what worked and what needs improving A continuity component reviews and reflects on content from the previous lesson Herbartian approach: According to Johann Friedrich Herbart (1776–1841), there are eight lesson plan phases that are designed to provide "many opportunities for teachers to recognize and correct students' misconceptions while extending understanding for future lessons." These phases are: Introduction, Foundation, Brain Activation, Body of New Information, Clarification, Practice and Review, Independent Practice, and Closure. Herbartian approach: Preparation/Instruction: This pertains to preparing and motivating children for the lesson content by linking it to the previous knowledge of the student, by arousing the curiosity of the children and by making an appeal to their senses. This prepares the child's mind to receive new knowledge. "To know where the pupils are and where they should try to be are the two essentials of good teaching." Lessons may be started in the following manner: a. Two or three interesting but relevant questions b. Showing a picture/s, a chart or a model c. A situation Statement of Aim: Announcement of the focus of the lesson in a clear, concise statement such as "Today, we shall study the..." Presentation/Development: The actual lesson commences here. This step should involve a good deal of activity on the part of the students.
The teacher will make use of various devices, e.g., questions, illustrations, explanations, expositions, demonstrations and sensory aids. Information and knowledge can be given, explained, revealed or suggested. The following principles should be kept in mind. a. Principle of selection and division: The subject matter should be divided into different sections. The teacher should also decide how much he is to tell and how much the pupils are to find out for themselves. b. Principle of successive sequence: The teacher should ensure that the succeeding as well as preceding knowledge is clear to the students. c. Principle of absorption and integration: In the end, separation of the parts must be followed by their combination to promote understanding of the whole. Herbartian approach: Association/comparison: It is always desirable that new ideas or knowledge be associated with daily-life situations by citing suitable examples and by drawing comparisons with related concepts. This step is important when we are establishing principles or generalizing definitions. Generalizing: This concept is concerned with the systematizing of the knowledge learned. Comparison and contrast lead to generalization. An effort should be made to ensure that students draw the conclusions themselves. It should result in students' own thinking, reflection and experience. Application: It requires a good deal of mental activity to think and apply the principles learned to new situations. Knowledge, when it is put to use and verified, becomes clear and a part of the student's mental make-up. Herbartian approach: Recapitulation: In the last step of the lesson plan, the teacher tries to ascertain whether the students have understood or grasped the subject matter or not. This is used for assessing/evaluating the effectiveness of the lesson by asking students questions on the contents of the lesson or by giving short objectives to test the student's level of understanding; for example, to label different parts on a diagram, etc. Lesson plans and unit plans: A well-developed lesson plan reflects the interests and needs of students. It incorporates best practices for the educational field. The lesson plan correlates with the teacher's philosophy of education, which is what the teacher feels is the purpose of educating the students. Secondary English program lesson plans, for example, usually center around four topics. They are literary theme, elements of language and composition, literary history, and literary genre. A broad, thematic lesson plan is preferable, because it allows a teacher to create various research, writing, speaking, and reading assignments. It helps an instructor teach different literature genres and incorporate videotapes, films, and television programs. Also, it facilitates teaching literature and English together. Similarly, history lesson plans focus on content (historical accuracy and background information), analytic thinking, scaffolding, and the practicality of lesson structure and meeting of educational goals. School requirements and a teacher's personal tastes, in that order, determine the exact requirements for a lesson plan. Lesson plans and unit plans: Unit plans follow much the same format as a lesson plan, but cover an entire unit of work, which may span several days or weeks. Modern constructivist teaching styles may not require individual lesson plans.
The unit plan may include specific objectives and timelines, but lesson plans can be more fluid as they adapt to student needs and learning styles. Unit planning is the proper selection of learning activities which presents a complete picture. Unit planning is a systematic arrangement of subject matter. "A unit plan is one which involves a series of learning experiences that are linked to achieve the aims composed by methodology and contents," (Samford). "A unit is an organization of various activities, experiences and types of learning around a central problem or purpose developed cooperatively by a group of pupils under a teacher leadership involving planning, execution of plans and evaluation of results," (Dictionary of Education). Criteria of a unit plan: Needs, capabilities and interests of the learner should be considered. Prepared on the sound psychological knowledge of the learner. Provide a new learning experience; systematic but flexible. Sustain the attention of the learner till the end. Related to the social and physical environment of the learner. Development of the learner's personality. It is important to note that lesson planning is a thinking process, not the filling in of a lesson plan template. A lesson plan is envisaged as a blueprint, a guide map for action, a comprehensive chart of classroom teaching-learning activities, an elastic but systematic approach for the teaching of concepts, skills and attitudes. Setting an objective: The first thing for setting a lesson plan is to create an objective, that is, a statement of purpose for the whole lesson. An objective statement itself should answer what students will be able to do by the end of the lesson. The objective drives the whole lesson plan; it is the reason the lesson plan exists. The teacher should ensure that lesson plan goals are compatible with the developmental level of the students. The teacher ensures as well that their student achievement expectations are reasonable. Delivery of lesson plans: The following guidelines were set by the Canadian Council on Learning to enhance the effectiveness of the teaching process: At the start of teaching, provide the students with an overall picture of the material to be presented. When presenting material, use as many visual aids as possible and a variety of familiar examples. Organize the material so that it is presented in a logical manner and in meaningful units. Try to use terms and concepts that are already familiar to the students. Delivery of lesson plans: Maximize the similarity between the learning situation and the assessment situation and provide adequate training practice. Give students the chance to use their new skills immediately on their return home through assignments. Communicate the message about the importance of the lesson, increase their motivation level, and control sidelining behaviors by planning rewards for students who successfully complete and integrate the new content. To sustain learning performance, the assessments must be fair and attainable. Delivery of lesson plans: Motivation affects teaching outcomes independently of any increase in cognitive ability. Learning motivation is affected by individual characteristics like conscientiousness and by the learning climate. Therefore, it is important to try to provide as many realistic assignments as possible.
Students learn best at their own pace and when correct responses are immediately reinforced, perhaps with a quick “Well done.” For many Generation Z students, the use of technology can motivate learning. Simulations, games, virtual worlds, and online networking are already revolutionizing how students learn and how learning experiences are designed and delivered. Learners who are immersed in deep experiential learning in highly visual and interactive environments become intellectually engaged in the experience. Delivery of lesson plans: Research shows that it is important to create a perceived need for learning (Why should I learn, the realistic relatable objective) in the minds of students. Only then can students perceive the "how and what to learn" conveyed by the educator. Also, provide ample information that will help to set the students' expectations about the events and consequences of actions that are likely to occur in the learning environment. For example, students learning to become adept at differential equations may face stressful situations, high loads of study, and a difficult environment. Studies suggest that the negative impact of such conditions can be reduced by letting students know ahead of time what might occur and equipping them with skills to manage. Lesson plans and classroom management: Creating a reliable lesson plan is an important part of classroom management. Doing so requires the ability to incorporate effective strategies into the classroom, the students and the overall environment. There are many different types of lesson plans and ways of creating them. Teachers can encourage critical thinking in a group setting by creating plans that include the students participating collectively. Visual strategies are another component tied into lesson plans that help with classroom management. These visual strategies help a wide variety of students to increase their learning structure and possibly their overall comprehension of the material or of the lesson plan itself. These strategies also give students with disabilities the option to learn in a possibly more efficient way. Teachers need to recognize the wide range of strategies that can be used to maintain classroom management and students' attention. They should find the best strategies to incorporate in their lesson planning for their specific grade, student type, teaching style, etc. and utilize them to their advantage. The classroom tends to flow better when the teacher has a proper lesson planned, as it provides structure for the students. Being able to utilize class time efficiently comes with creating lesson plans at their core. Assignments: Assignments are either in-class or take-home tasks to be completed for the next class period. These tasks are important because they help ensure that the instruction provides the students with a goal, the power to get there, and the interest to be engaged in rigorous academic contexts as they acquire content and skills necessary to be able to participate in academic coursework. Experts cite that, in order to be effective and achieve objectives, the development of these assignment tasks must take into consideration the perceptions of the students, because they are different from those of the teacher. This challenge can be addressed by providing examples instead of abstract concepts or instructions.
Another strategy involves the development of tasks that are specifically related to the learners' needs, interests, and age ranges. There are also experts who cite the importance of teaching learners about assignment planning. This is said to facilitate the students' engagement and interest in their assignment. Some strategies include brainstorming about the assignment process and the creation of a learning environment wherein students feel engaged and willing to reflect on their prior learning and to discuss specific or new topics. There are several assignment types, so the instructor must decide whether class assignments are whole-class, small groups, workshops, independent work, peer learning, or contractual: Whole-class—the teacher lectures to the class as a whole and has the class collectively participate in classroom discussions. Assignments: Small groups—students work on assignments in groups of three or four. Workshops—students perform various tasks simultaneously. Workshop activities must be tailored to the lesson plan. Independent work—students complete assignments individually. Peer learning—students work together, face to face, so they can learn from one another. Assignments: Contractual work—teacher and student establish an agreement that the student must perform a certain amount of work by a deadline. These assignment categories (e.g. peer learning, independent, small groups) can also be used to guide the instructor's choice of assessment measures that can provide information about student and class comprehension of the material. As discussed by Biggs (1999), there are additional questions an instructor can consider when choosing which type of assignment would provide the most benefit to students. These include: What level of learning do the students need to attain before choosing assignments with varying difficulty levels? What is the amount of time the instructor wants the students to use to complete the assignment? How much time and effort does the instructor have to provide student grading and feedback? What is the purpose of the assignment? (e.g. to track student learning; to provide students with time to practice concepts; to practice incidental skills such as group process or independent research) How does the assignment fit with the rest of the lesson plan? Does the assignment test content knowledge or does it require application in a new context? Does the lesson plan fit a particular framework? For example, a Common Core lesson plan.
**Honor cords** Honor cords: An honor cord is a token consisting of twisted cords with tassels on either end awarded to members of honor societies or for various academic and non-academic achievements, awards, or honors. Usually, cords come in pairs with a knot in the middle to hold them together. Sometimes sashes, stoles, or medallions are given in place of cords. They are most often worn at academic ceremonies and functions. With cap and gown, and (sometimes) the hood, high school or university degree candidates have worn these cords at the discretion of the educational institution, but they are not usually worn with academic regalia after the academic year in which the honor was awarded. Unlike hoods and stoles, by tradition more than one cord may be worn at the same time. Honor cords: At some universities, pairs of honor cords, in the school colors, indicate honors graduates: one pair for cum laude, two pairs for magna cum laude, and three pairs for summa cum laude. These are in addition to any cords for membership in an honor society. List of collegiate honor societies and the color of their cords: (Mostly taken from The Association of College Honor Societies)
**Weyl's theorem on complete reducibility** Weyl's theorem on complete reducibility: In algebra, Weyl's theorem on complete reducibility is a fundamental result in the theory of Lie algebra representations (specifically in the representation theory of semisimple Lie algebras). Let g be a semisimple Lie algebra over a field of characteristic zero. The theorem states that every finite-dimensional module over g is semisimple as a module (i.e., a direct sum of simple modules). The enveloping algebra is semisimple: Weyl's theorem implies (in fact is equivalent to) the statement that the enveloping algebra of a finite-dimensional representation is a semisimple ring, in the following way. The enveloping algebra is semisimple: Given a finite-dimensional Lie algebra representation π: g → gl(V), let A ⊂ End(V) be the associative subalgebra of the endomorphism algebra of V generated by π(g). The ring A is called the enveloping algebra of π. If π is semisimple, then A is semisimple. (Proof: Since A is a finite-dimensional algebra, it is an Artinian ring; in particular, the Jacobson radical J is nilpotent. If V is simple, then JV ⊊ V (as J is nilpotent) implies that JV = 0. In general, J kills each simple submodule of V; in particular, J kills V and so J is zero.) Conversely, if A is semisimple, then V is a semisimple A-module; i.e., semisimple as a g-module. (Note that a module over a semisimple ring is semisimple, since a module is a quotient of a free module and "semisimple" is preserved under the free and quotient constructions.) Application: preservation of Jordan decomposition: The claim, roughly, is that the Jordan decomposition is preserved under any finite-dimensional representation: if x = s + n is the decomposition of x in g, then (i) π(s) is the semisimple part and (ii) π(n) is the nilpotent part of π(x). Proof: First we prove the special case of (i) and (ii) when π is the inclusion; i.e., g is a subalgebra of gln = gl(V). Let x = S + N be the Jordan decomposition of the endomorphism x, where S, N are semisimple and nilpotent endomorphisms in gln. Now, ad gln(x) also has a Jordan decomposition, which can be shown (see Jordan–Chevalley decomposition#Lie algebras) to respect the above Jordan decomposition; i.e., ad gln(S) and ad gln(N) are the semisimple and nilpotent parts of ad gln(x). Since ad gln(S) and ad gln(N) are polynomials in ad gln(x), we see that ad gln(S), ad gln(N): g → g. Thus, they are derivations of g. Since g is semisimple, we can find elements s, n in g such that [y, S] = [y, s] for all y ∈ g, and similarly for n. Now, let A be the enveloping algebra of g; i.e., the subalgebra of the endomorphism algebra of V generated by g. As noted above, A has zero Jacobson radical. Since [y, N − n] = 0, we see that N − n is a nilpotent element in the center of A. But, in general, a central nilpotent belongs to the Jacobson radical; hence N = n, and thus also S = s. This proves the special case. Application: preservation of Jordan decomposition: In general, π(x) is semisimple (resp. nilpotent) when ad(x) is semisimple (resp. nilpotent). This immediately gives (i) and (ii). ◻ Proofs: Analytic proof Weyl's original proof (for complex semisimple Lie algebras) was analytic in nature: it famously used the unitarian trick. Specifically, one can show that every complex semisimple Lie algebra g is the complexification of the Lie algebra of a simply connected compact Lie group K. (If, for example, g = sl(n; C), then K = SU(n).) Given a representation π of g on a vector space V, one can first restrict π to the Lie algebra k of K. Then, since K is simply connected, there is an associated representation Π of K. Integration over K produces an inner product on V for which Π is unitary.
Complete reducibility of Π is then immediate, and elementary arguments show that the original representation π of g is also completely reducible. Proofs: Algebraic proof 1 Let (π, V) be a finite-dimensional representation of a Lie algebra g over a field of characteristic zero. The theorem is an easy consequence of Whitehead's lemma, which says that the map V → Der(g, V), v ↦ (x ↦ x⋅v), is surjective, where a linear map f: g → V is a derivation if f([x, y]) = x⋅f(y) − y⋅f(x). The proof is essentially due to Whitehead. Let W ⊂ V be a subrepresentation. Consider the vector subspace LW ⊂ End(V) that consists of all linear maps t: V → V such that t(V) ⊂ W and t(W) = 0. It has the structure of a g-module given by: for x ∈ g, t ∈ LW, x⋅t = [π(x), t]. Now, pick some projection p: V → V onto W and consider f: g → LW given by f(x) = [p, π(x)]. Since f is a derivation, by Whitehead's lemma we can write f(x) = x⋅t for some t ∈ LW. We then have [π(x), p + t] = 0 for all x ∈ g; that is to say, p + t is g-linear. Also, as t kills W, p + t is an idempotent such that (p + t)(V) = W. The kernel of p + t is then a complementary representation to W. ◻ See also Weibel's homological algebra book. Proofs: Algebraic proof 2 Whitehead's lemma is typically proved by means of the quadratic Casimir element of the universal enveloping algebra, and there is also a proof of the theorem that uses the Casimir element directly instead of Whitehead's lemma. Proofs: Since the quadratic Casimir element C is in the center of the universal enveloping algebra, Schur's lemma tells us that C acts as a multiple cλ of the identity in the irreducible representation of g with highest weight λ. A key point is to establish that cλ is nonzero whenever the representation is nontrivial. This can be done by a general argument or by the explicit formula for cλ. Consider a very special case of the theorem on complete reducibility: the case where a representation V contains a nontrivial, irreducible, invariant subspace W of codimension one. Let CV denote the action of C on V. Since V is not irreducible, CV is not necessarily a multiple of the identity, but it is a self-intertwining operator for V. Then the restriction of CV to W is a nonzero multiple of the identity. But since the quotient V/W is a one-dimensional (and therefore trivial) representation of g, the action of C on the quotient is trivial. It then easily follows that CV must have a nonzero kernel, and the kernel is an invariant subspace, since CV is a self-intertwiner. The kernel is then a one-dimensional invariant subspace, whose intersection with W is zero. Thus, ker(CV) is an invariant complement to W, so that V decomposes as a direct sum of irreducible subspaces: V = W ⊕ ker(CV). Although this establishes only a very special case of the desired result, this step is actually the critical one in the general argument. Proofs: Algebraic proof 3 The theorem can be deduced from the theory of Verma modules, which characterizes a simple module as a quotient of a Verma module by a maximal submodule. This approach has the advantage that it can be used to weaken the finite-dimensionality assumptions (on the algebra and the representation). Proofs: Let V be a finite-dimensional representation of a finite-dimensional semisimple Lie algebra g over an algebraically closed field of characteristic zero. Let b = h ⊕ n+ ⊂ g be the Borel subalgebra determined by a choice of a Cartan subalgebra and positive roots. Let V0 = {v ∈ V | n+(v) = 0}. Then V0 is an h-module and thus has the h-weight space decomposition: V0 = ⨁λ∈L Vλ0, where L ⊂ h∗.
For each λ ∈ L, pick 0 ≠ vλ ∈ Vλ0; let Vλ ⊂ V be the g-submodule generated by vλ, and V′ ⊂ V the g-submodule generated by V0. We claim: V = V′. Suppose V ≠ V′. By Lie's theorem, there exists a b-weight vector in V/V′; thus, we can find an h-weight vector v of weight μ, not lying in V′, such that 0 ≠ ei(v) ∈ V′ for some ei among the Chevalley generators. Now, ei(v) has weight μ + αi. Since L is partially ordered, there is a λ ∈ L such that λ ≥ μ + αi; i.e., λ > μ. But this is a contradiction, since λ, μ are both primitive weights (it is known that primitive weights are incomparable). Similarly, each Vλ is simple as a g-module. Indeed, if it is not simple, then, for some μ < λ, Vμ0 contains some nonzero vector that is not a highest-weight vector; again a contradiction. ◻
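To make the Casimir argument in the second algebraic proof concrete, here is the standard formula for the Casimir element and its eigenvalue, stated as a hedged sketch (the normalization via the Killing form B is an assumption of this presentation, not fixed by the text above):

```latex
C = \sum_{i} x_i \, x^i \in U(\mathfrak{g}),
\qquad B(x_i, x^j) = \delta_i^{\,j},
\qquad
c_\lambda = \langle \lambda,\, \lambda + 2\rho \rangle,
```

where {xᵢ} and {xⁱ} are bases of g dual with respect to the Killing form and ρ is half the sum of the positive roots. For a dominant integral weight λ ≠ 0, ⟨λ, λ⟩ > 0 and ⟨λ, ρ⟩ ≥ 0, so cλ > 0; this is exactly the nonvanishing statement the proof needs.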
**Project Starline** Project Starline: Project Starline is an experimental video communication method currently in development by Google that allows the user to see a 3D model of the person they are communicating with. Google announced the product at its 2021 I/O developer conference, saying that it will allow users to "talk naturally, gesture and make eye contact" by utilizing machine learning, spatial audio, computer vision and real-time compression to create the 3D effect without the user wearing typical virtual reality goggles. The goal is to make the user feel as if they are in the same room with the other user. Development: Project Starline had been in development for more than five years prior to the official announcement on May 18, 2021. The technology is currently only available in a small number of Google's offices, but the company plans to begin collaborating with certain partners in the next year, particularly partners in the healthcare and media industries. In November 2021, the project was reorganized under a new division called Google Labs (unrelated to the defunct service of the same name) along with Area 120 and Google's AR and VR efforts. Google will begin testing the technology with corporations such as Salesforce and T-Mobile beginning in late 2022. Implementation: The current implementation of Project Starline is a booth that the user sits in, facing a 65 in (170 cm) "light field display," surrounded by depth sensors, cameras, and lights. Light field technology is a photography technique that captures the direction of light as well as its intensity and color to enable more effective 3D imaging. The user can then view another user on the display in 3D and vice versa. Google says it plans to "make this technology more affordable and accessible." Reception: Jay Peters of The Verge was impressed by a demo of Project Starline, comparing it to "real life science fiction".
**Mir (lenses)** Mir (lenses): The Mir (Russian: Мир) series of lenses are Russian camera lenses made by various manufacturers in the former Soviet Union. Mir-1: The first variant was the Mir-1, which won the Grand Prix at Expo '58 in Brussels. Its construction was derived from the Flektogon lens by Carl Zeiss. The first element in this lens is in fact a dispersing meniscus to reduce vignetting when photographing with an open diaphragm. Mir-1: Its focal length is 37.5mm and its maximum aperture is f/2.8. There are several versions of the lens (Mir-1, Mir-1V, Mir-1A...). Most of them are made for Zenit SLRs with the M39 lens mount or M42 lens mount. The only different version is the Mir-1A: Soviet lenses with the A suffix have interchangeable lens mounts, so the user can choose which lens mount they want to use. This lens supports multiple mounts, including the Nikon F mount. Mir-1: The lens has six elements in four groups. Mir-3: Mir-3 is a wide-angle lens for Kiev medium format cameras (the Kiev 6x and Kiev 8x families). It was produced in the former Soviet Union from 1973 to 1984, when it was replaced by the Mir-38. The lens was available in two versions: the Mir-3B has the Pentacon Six mount and is meant to be used with Kiev 6x medium format cameras, while the Mir-3V has the Hasselblad mount compatible with the Kiev 8x family of cameras. Its focal length is 65mm and its maximum aperture is f/3.5. Mir-4: Very little is known about this lens. It is probably a prototype. Its focal length is 29mm and its maximum aperture is f/3.5. Mir-5: This is a very rare prototype lens. It was never commercially produced. It was meant to be used with the Narcissus camera. This lens has a focal length of 28mm and a maximum aperture of f/2.0. Mir-6: This is a very rare prototype lens as well. It was never commercially produced. It was also meant to be used with the Narcissus camera. It has very similar specifications to the Mir-5: it is also a 28mm lens, but it has a smaller maximum aperture of f/2.8. Mir-10: Mir-10 is a wide-angle lens made for SLR cameras produced in the Soviet Union. It exists in three versions: Mir-10 (the experimental version), Mir-10M (the version with the M42 lens mount), and Mir-10A (lenses with the -A suffix have interchangeable mounts). This lens has a focal length of 28mm and a maximum aperture of f/3.5. Mir-11: Mir-11 is a wide-angle lens designed for 16 mm cinema cameras. There are two variants: the Mir-11 with a Krasnogorsk bayonet mount, and the more common Mir-11M with an M32 mount, specifically made for the Kiev 16U. It has a focal length of 12.5mm and a maximum aperture of f/2.0.
**Demersal zone** Demersal zone: The demersal zone is the part of the sea or ocean (or deep lake) consisting of the part of the water column near to (and significantly affected by) the seabed and the benthos. The demersal zone is just above the benthic zone and forms a layer of the larger profundal zone. Being just above the ocean floor, the demersal zone is variable in depth and can be part of the photic zone, where light can penetrate and photosynthetic organisms grow, or the aphotic zone, which begins between depths of roughly 200 and 1,000 m (700 and 3,300 ft) and extends to the ocean depths, where no light penetrates. Fish: The distinction between demersal species of fish and pelagic species is not always clear cut. The Atlantic cod (Gadus morhua) is a typical demersal fish, but can also be found in the open water column, and the Atlantic herring (Clupea harengus) is predominantly a pelagic species but forms large aggregations near the seabed when it spawns on banks of gravel. Two types of fish inhabit the demersal zone: those that are heavier than water and rest on the seabed, and those that have neutral buoyancy and remain just above the substrate. In many species of fish, neutral buoyancy is maintained by a gas-filled swim bladder which can be expanded or contracted as the circumstances require. A disadvantage of this method is that adjustments need to be made constantly as the water pressure varies when the fish swims higher and lower in the water column. An alternative buoyancy aid is the use of lipids, which are less dense than water—squalene, commonly found in shark livers, has a specific gravity of just 0.86. In the velvet belly lanternshark (Etmopterus spinax), a benthopelagic species, 17% of the bodyweight is liver, of which 70% is lipids. Benthic rays and skates have smaller livers with lower concentrations of lipids; they are therefore denser than water and do not swim continuously, intermittently resting on the seabed. Some fish have no buoyancy aids but use their pectoral fins, which are so angled as to give lift as they swim. The disadvantage of this is that, if they stop swimming, the fish sink, and they cannot hover or swim backwards. Demersal fish have various feeding strategies; many feed on zooplankton or organisms or algae on the seabed; some of these feed on epifauna (invertebrates on top of the seafloor), while others specialise on infauna (invertebrates that burrow beneath the seafloor). Others are scavengers, eating the dead remains of plants or animals, while still others are predators. Invertebrates: Zooplankton are animals that drift with the current, but many have some limited means of locomotion and have some control over the depths at which they drift. They use gas-filled sacs or accumulations of substances with low densities to provide buoyancy, or they may have structures that slow down any passive descent. Where the adult, benthic organism is limited to life in a certain range of depths, their larvae need to optimise their chances of settling on a suitable substrate. Cuttlefish are able to adjust their buoyancy using their cuttlebones, lightweight rigid structures with cavities filled with gas, which have a specific gravity of about 0.6. This enables them to swim at varying depths. Another invertebrate that feeds on the seabed and has swimming abilities is the nautilus, which stores gas in its chambers and adjusts its buoyancy by use of osmosis, pumping water in and out.
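The cited squalene figure invites a back-of-envelope check. The sketch below uses the 0.86 specific gravity from the text, plus assumed illustrative densities for seawater and lean fish tissue, to estimate how much lipid it takes to offset a gram of denser-than-seawater tissue:

```python
# Sketch with assumed densities; only the squalene figure comes from the text.
rho_seawater = 1.026  # g/cm^3, typical surface seawater (assumption)
rho_squalene = 0.86   # g/cm^3, specific gravity cited in the text
rho_tissue   = 1.06   # g/cm^3, assumed average for lean fish tissue

def net_lift_per_gram(rho):
    """Buoyant lift minus weight, in gram-force, for 1 g of material."""
    return rho_seawater / rho - 1.0

lift_lipid  = net_lift_per_gram(rho_squalene)   # positive: floats
sink_tissue = net_lift_per_gram(rho_tissue)     # negative: sinks

# grams of lipid needed to offset each gram of lean tissue (~0.17 here,
# i.e. roughly 1 g of lipid per 6 g of tissue under these assumptions)
print(abs(sink_tissue) / lift_lipid)
```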
**Technology scouting** Technology scouting: Technology scouting is an element of technology management in which (1) emerging technologies are identified, (2) technology-related information is channeled into an organization, and (3) the acquisition of technologies is supported. It is a starting point of a long-term and interactive matching process between external technologies and the internal requirements of an existing organization, for strategic purposes. This matching may also be aided by technology roadmapping. Technology scouting is also known to be part of competitive intelligence, which firms apply as a tool of competitive strategy. It can also be regarded as a method of technology forecasting, or in the broader context an element of corporate foresight. Technology scouting may also be applied as an element of an open innovation approach. Technology scouting is seen as an essential element of a modern technology management system. The technology scout is either an employee of the company or an external consultant who engages in boundary-spanning processes to tap into novel knowledge and span internal boundaries. They may be assigned part-time or full-time to the scouting task. The desired characteristics of a technology scout are similar to the characteristics associated with the technological gatekeeper. These characteristics include being a lateral thinker, knowledgeable in science and technology, respected inside the company, cross-disciplinary oriented, and having an imaginative personality. Technology scouts would also often play a vital role in a formalised technology foresight process. Scientific journals: Technological Forecasting and Social Change; Futures; Futures & Foresight Science; Foresight; Journal of Futures Studies.
**ShredIt** ShredIt: ShredIt is designed to securely erase files in a variety of ways, using various overwriting patterns. Originally released in 1998, ShredIt is capable of erasing files on Mac OS 7 through Mac OS 10.8 and later, as well as Microsoft Windows 95 through Windows 7 and later, and iOS (sublicensed by Burningthumb Software). Versions of ShredIt are available for 10.6 and later through the macOS App Store; earlier and alternate versions are available through the Mireth website. Features: Safeplace; shredding by file, by folder or optical media; overwriting standards: DoD 5220 Clear & DoD 5220 Sanitize, DoE Secure Deletion, Gutmann 35-way overwrite; CD-RW erasure.
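A minimal sketch of the general multi-pass overwriting idea follows. This is an illustration, not ShredIt's actual implementation; the pattern choice and pass count are assumptions, and on SSDs, wear-leveled media, or journaling filesystems an in-place overwrite may not reach every copy of the data:

```python
# Sketch: overwrite a file in place with fixed patterns, then random data.
import os

def overwrite_file(path, passes=3):
    """Overwrite `path` with up to three passes, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in (b"\x00", b"\xff", None)[:passes]:
            f.seek(0)
            if pattern is None:
                f.write(os.urandom(size))   # final pass: random bytes
            else:
                f.write(pattern * size)     # fixed-pattern pass
            f.flush()
            os.fsync(f.fileno())            # force the pass onto disk
    os.remove(path)
```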
**Kir (cocktail)** Kir (cocktail): The Kir is a French cocktail made with a measure of crème de cassis (blackcurrant liqueur) topped up with white wine. In France it is usually drunk as an apéritif before a meal or snack. It was originally made with Bourgogne Aligoté, a white wine of Burgundy, but today various white wines are used throughout France, according to the region and the barkeeper. Many prefer a white Chardonnay-based Burgundy, such as Chablis. Kir (cocktail): It used to be called blanc-cassis, but it is now named after Félix Kir (1876–1968), mayor of Dijon in Burgundy. Kir was a pioneer of the twinning movement in the aftermath of the Second World War, and popularized the drink by offering it at receptions to visiting delegations. Besides treating his international guests well, he was also promoting two economic products of the region. Kir allowed one of Dijon's producers of crème de cassis to use his name, then extended the right to their competitors as well. According to Rolland (2004), the reinvention of blanc-cassis (post 1945) was necessitated by the German Army's confiscation of all the local red Burgundy during the war. Faced with an excess of white wine, Kir renovated a drink that used to be made primarily with the red. Kir (cocktail): Another explanation that has been offered is that Mayor Kir revived it during a year in which the ordinary white wine of the region was inferior and the crème de cassis helped to disguise the fact. Kir (cocktail): Following the commercial development of crème de cassis in 1841, the cocktail became a popular regional café drink, but has since become inextricably linked internationally with the name of Mayor Kir. When ordering a Kir, waiters in France sometimes ask whether the customer wants it made with crème de cassis (blackcurrant), de mûre (blackberry), de pêche (peach), or framboise (raspberry). Kir (cocktail): The International Bartenders Association gives a recipe using 1/10 crème de cassis, but French sources typically specify more; 19th-century recipes for blanc-cassis recommended 1/3 crème de cassis, which modern tastes find cloyingly sweet, and modern sources typically about 1/5. Replacing the crème de cassis with blackcurrant syrup is discouraged. Variations: Besides the basic Kir, a number of variations exist: Cidre royal – made with cider instead of wine, with a measure of calvados added Communard, or cardinal – made with red wine instead of white Hibiscus royal – made with sparkling wine, peach liqueur, raspberry liqueur, and an edible hibiscus flower Kir Berrichon – from the Berry region of France. Made with red wine and blackberry liqueur (crème de mûre) Kir bianco – made with sweet white Vermouth instead of wine. Variations: Kir Breton – made with Breton cider instead of wine. Kir impérial – made with raspberry liqueur (such as Chambord) instead of cassis, and champagne Kir Normand – made with Normandy cider instead of wine. Kir pamplemousse – made with red grapefruit liqueur and sparkling white wine Kir pêche – made with peach liqueur Kir pétillant – made with sparkling wine Kir royal – made with Champagne Pink Russian – made with milk instead of wine Tarantino – made with lager or light ale ("kir-beer")
**JBIG** JBIG: JBIG is an early lossless image compression standard from the Joint Bi-level Image Experts Group, standardized as ISO/IEC standard 11544 and as ITU-T recommendation T.82 in March 1993. It is widely implemented in fax machines. Now that the newer bi-level image compression standard JBIG2 has been released, JBIG is also known as JBIG1. JBIG was designed for compression of binary images, particularly for faxes, but can also be used on other images. In most situations JBIG offers between a 20% and 50% increase in compression efficiency over Fax Group 4 compression, and in some situations, it offers a 30-fold improvement. JBIG: JBIG is based on a form of arithmetic coding developed by IBM (known as the Q-coder) that also uses a relatively minor refinement developed by Mitsubishi, resulting in what became known as the QM-coder. It bases the probability estimates for each encoded bit on the values of the previous bits and the values in previous lines of the picture. JBIG also supports progressive transmission, which generally incurs a small overhead in bit rate (around 5%). Patents: Doubts about patent licence requirements for JBIG1 implementations by IBM, Mitsubishi and AT&T prevented the codec from being widely implemented in open-source software. For example, as of 2012, none of the commonly used web browsers supported it. Since 2012, there are now no more JBIG1 patents in force – the last ones to expire were Mitsubishi's patents in Canada and Australia (on 25 February 2011) and in the United States (on 4 April 2012).
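To illustrate the context-modeling idea described above (probability estimates driven by previous bits and previous lines), here is a sketch using a hypothetical three-pixel template and a simple count-based estimator; real JBIG uses a roughly ten-pixel template feeding the adaptive QM arithmetic coder, so this is an assumption-laden simplification, not the standard's algorithm:

```python
# Sketch: context-based adaptive probability estimation for a bi-level image.
from collections import defaultdict

def context(image, x, y):
    """Context index from the left neighbor and two pixels on the line above."""
    def px(i, j):
        in_bounds = 0 <= i < len(image[0]) and 0 <= j < len(image)
        return image[j][i] if in_bounds else 0  # treat out-of-bounds as white
    return (px(x - 1, y) << 2) | (px(x - 1, y - 1) << 1) | px(x, y - 1)

def estimate(image):
    """Per-pixel P(bit = 1 | context) under an adaptive count model."""
    counts = defaultdict(lambda: [1, 1])  # Laplace prior: one 0 and one 1
    probs = []
    for y, row in enumerate(image):
        for x, bit in enumerate(row):
            c = context(image, x, y)
            zeros, ones = counts[c]
            probs.append(ones / (zeros + ones))  # estimate before updating
            counts[c][bit] += 1                  # adapt to the observed bit
    return probs

if __name__ == "__main__":
    img = [[0, 0, 1, 1],
           [0, 0, 1, 1],
           [0, 0, 1, 1]]
    print(estimate(img))  # probabilities sharpen as each context recurs
```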
**InnoDB** InnoDB: InnoDB is a storage engine for the database management systems MySQL and MariaDB. Since the release of MySQL 5.5.5 in 2010, it has replaced MyISAM as MySQL's default table type. It provides the standard ACID-compliant transaction features, along with foreign key support (declarative referential integrity). It is included as standard in most binaries distributed by MySQL AB, the exception being some OEM versions. Description: InnoDB became a product of Oracle Corporation after its acquisition of the Finland-based company Innobase in October 2005. The software is dual licensed; it is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB in proprietary software. InnoDB supports: both SQL and XA transactions; tablespaces; foreign keys; full-text search indexes, since MySQL 5.6 (February 2013) and MariaDB 10.0; spatial operations, following the OpenGIS standard; virtual columns, in MariaDB.
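A short sketch of what InnoDB's ACID transactions look like from client code, assuming a reachable MySQL server, hypothetical credentials, and the mysql-connector-python package (none of which come from the text above):

```python
# Sketch: an atomic two-row transfer on an InnoDB table.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="demo",
                               password="demo", database="demo")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS accounts ("
            " id INT PRIMARY KEY, balance DECIMAL(10,2)) ENGINE=InnoDB")
try:
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    conn.commit()      # durability: both updates become permanent together
except mysql.connector.Error:
    conn.rollback()    # atomicity: neither update survives a failure
finally:
    cur.close()
    conn.close()
```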
**Information–action ratio** Information–action ratio: The information–action ratio was a concept coined by cultural critic Neil Postman (1931–2003) in his work Amusing Ourselves to Death. In short, Postman meant to indicate the relationship between a piece of information and what action, if any, a consumer of that information might reasonably be expected to take once learning it. Information–action ratio: In a speech to the German Informatics Society (Gesellschaft für Informatik) on October 11, 1990 in Stuttgart, sponsored by IBM-Germany, Neil Postman said the following: "The tie between information and action has been severed. Information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one's status. It comes indiscriminately, directed at no one in particular, disconnected from usefulness; we are glutted with information, drowning in information, have no control over it, don't know what to do with it." In Amusing Ourselves to Death, Postman frames the information–action ratio in the context of the telegraph's invention. Prior to the telegraph, Postman says, people received information relevant to their lives, creating a high correlation between information and action: "The information-action ratio was sufficiently close so that most people had a sense of being able to control some of the contingencies in their lives" (p. 69). Information–action ratio: The telegraph allowed bits of information to travel long distances, and so Postman claims "the local and the timeless ... lost their central position in newspapers, eclipsed by the dazzle of distance and speed ... Wars, crimes, crashes, fires, floods—much of it the social and political equivalent of Adelaide's whooping coughs—became the content of what people called 'the news of the day'" (pp. 66–67). Information–action ratio: A high information–action ratio, therefore, refers to the helplessness people confront when faced with decontextualized information. Someone may know Adelaide has the whooping cough, but what could anyone do about it? Postman said that this kind of access to decontextualized information "made the relationship between information and action both abstract and remote." Information consumers were "faced with the problem of a diminished social and political potency." Cultural references: The term was referenced in Arctic Monkeys' song "Four Out of Five" from the band's 2018 album Tranquility Base Hotel & Casino, in which the Information Action Ratio is the name of a fictional taqueria at the moon-based hotel of the album's title.
**FAM155A** FAM155A: Family with sequence similarity 155, member A is a protein that in humans is encoded by the FAM155A gene.
**Sales comparison approach** Sales comparison approach: The sales comparison approach (SCA) relies on the assumption that a matrix of attributes or significant features of a property drive its value. For example, in the case of a single-family residence, such attributes might be floor area, views, location, number of bathrooms, lot size, and the age and condition of the property. Economic Basis: The sales comparison approach is based upon the principles of supply and demand, as well as upon the principle of substitution. Supply and demand indicates value through typical market behavior of both buyers and sellers. Substitution indicates that a purchaser would not purchase an improved property for any value higher than it could be replaced for on a site with equivalent utility, assuming no undue delays in construction. Examples of Methods: In practice, the most common SCA method used by estate agents and real estate appraisers is the sales adjustment grid. It uses a small number of recently sold properties in the immediate vicinity of the subject property to estimate the value of its attributes. Adjustments to the comparables may be determined by trend analysis, matched-pairs analysis, or simple surveys of the market. Examples of Methods: More advanced researchers and appraisers commonly employ statistical techniques based on multiple regression methods, which generally compare a larger number of more geographically dispersed property transactions to determine the significance and magnitude of the impact of different attributes on property value. Research has shown that the sales adjustment grid and the multiple regression model are theoretically the same, with the former applying more heuristic methods and the latter using statistical techniques. Spatial autocorrelation plagues these statistical techniques, since high-priced properties tend to cluster together and therefore one property price is not independent of its neighbor. Given property inflation and price cycles, both comparison techniques can become unreliable if the time interval between transactions sampled is excessive. The other factor undermining a simplistic use of the SCA is the evolving nature of city neighborhoods, though in reality urban evolution occurs gradually enough to minimize its impact on this approach to value. Examples of Methods: In more complex situations, such as litigation or contaminated property appraisal, appraisers develop SCA adjustments using widely accepted advanced techniques, such as repeat sales models (to measure house price appreciation over time), survey research (e.g., contingent valuation), case studies (to develop adjustments in complex situations), or other statistically based techniques.
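As a toy illustration of the multiple-regression variant of the SCA, the following Python sketch regresses the sale prices of a handful of comparables on their attributes and prices a hypothetical subject property. All figures are invented for illustration and are not market data.

```python
import numpy as np

# Comparable sales: [floor area (m^2), bathrooms, lot size (m^2), age (years)]
X = np.array([
    [120, 2, 400, 10],
    [150, 3, 500, 5],
    [100, 1, 350, 20],
    [135, 2, 450, 8],
    [160, 3, 600, 3],
], dtype=float)
prices = np.array([300_000, 390_000, 240_000, 330_000, 420_000], dtype=float)

# Add an intercept column and solve the least-squares system.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, prices, rcond=None)

# The fitted coefficients play the role of adjustment factors in a sales
# adjustment grid: e.g. coef[1] is the implied price per square metre.
subject = np.array([1, 140, 2, 480, 7], dtype=float)
print("estimated value:", subject @ coef)
```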
**Polymicrogyria** Polymicrogyria: Polymicrogyria (PMG) is a condition that affects the development of the human brain, in which multiple small gyri (microgyri) create excessive folding of the brain, leading to an abnormally thick cortex. This abnormality can affect either one region of the brain or multiple regions. The time of onset has yet to be identified; however, it has been found to occur before birth, in either the earlier or later stages of brain development. Early stages include impaired proliferation and migration of neuroblasts, while later stages show disordered post-migration development. The symptoms experienced differ depending on what part of the brain is affected. There is no specific treatment for the condition itself, but there are medications that can control symptoms such as seizures, delayed development, or weakened muscles. Syndromes: Significant technological advances have been made within the past few decades that have allowed more extensive studies of the syndromes arising from conditions such as polymicrogyria. Research, imaging, and analysis have shown that the distribution of polymicrogyria does not always appear to be random, which revealed different types of polymicrogyria. A summary of clinical manifestations of each syndrome can be found below, in the section labelled "Signs and symptoms". The main patterns of polymicrogyria are: perisylvian (61%), generalised (13%), frontal (5%), and parasagittal parieto-occipital (3%); 11% is associated with grey matter heterotopia (grey matter located in the white matter instead of its usual location in the cerebral cortex). Syndromes: Bilateral frontal polymicrogyria (BFP) BFP appears to be a symmetrical polymicrogyria that extends anteriorly from the frontal poles to the posterior precentral gyrus, and inferiorly to the frontal operculum. Patients who had polymicrogyria distribution similar to this also experienced similar symptoms, including delayed motor and language development, spastic hemiparesis or quadriparesis, and forms of mild intellectual disability. Syndromes: Bilateral frontoparietal polymicrogyria (BFPP) BFPP was one of the first discovered forms of polymicrogyria to have a gene identified as linked to the syndrome; this gene is called GPR56. Symmetrical distribution is also evident in this form but, more distinctly, patients with BFPP were found to have atrophy of the cerebellum and brain stem, as well as bilateral white matter abnormalities. BFPP is characterized by esotropia, global developmental delay, pyramidal signs, cerebellar signs, and seizures. Esotropia is also known as dysconjugate gaze, and is a common feature of severe static encephalopathy. This differentiates BFPP from the other bilateral polymicrogyria syndromes. Syndromes: Bilateral perisylvian polymicrogyria (BPP) BPP is similar to the other types of polymicrogyria in that it is usually symmetrical, but BPP can vary among patients. BPP is characterized by its location: the cerebral cortex deep in the sylvian fissures is thickened and abnormally infolded, and the sylvian fissures extend more posteriorly up to the parietal lobes and are more vertically oriented. BPP has been classified into a grading system consisting of four different grades that describe the variations in severity, moving from most severe (Grade 1) to least severe (Grade 4).
Although BFPP was the first form of polymicrogyria to be discovered, BPP was the first form to be described and is also the most common form of polymicrogyria. The clinical characterizations of BPP "include pseudobulbar palsy with diplegia of the facial, pharyngeal and masticatory muscles (facio-pharyngo-glosso-masticatory paresis), pyramidal signs, and seizures." These can result in drooling, feeding issues, restricted tongue movement, and dysarthria. Disorders in language development have also been associated with BPP, but the extent of language disorder depends on the severity of cortical damage. Patients who have BPP can also have pyramidal signs that vary in severity, and can be either unilateral or bilateral. The sodium channel gene SCN3A has been implicated in BPP. Syndromes: Bilateral parasagittal parieto-occipital polymicrogyria (BPOP) BPOP is located in the parasagittal and mesial regions of the parieto-occipital cortex. This form has been associated with IQ scores that range from average intelligence to mild intellectual disability, seizures, and cognitive slowing. The age of seizure onset has been found to occur anywhere from 20 months to 15 years, and in most cases the seizures were intractable (meaning hard to control). Syndromes: Bilateral generalised polymicrogyria (BGP) BGP is most severe in the perisylvian regions, but occurs in a generalised distribution. Associated factors include a reduced volume of white matter and ventriculomegaly. BGP tends to show excessively folded and fused gyri of an abnormally thin cerebral cortex, and an absence of the normal six-layered structure. The abnormally thin cortex is a key factor that distinguishes this form of polymicrogyria from the others, which are characterized by an abnormally thick cortex. Most of the patients have cognitive and motor delay, spastic hemi- or quadriparesis, and seizures in varying degrees. The seizures also vary in age of onset, type, and severity. There have been pseudobulbar signs reported with BGP, which are also seen in patients with BPP. This association leads to the belief that there is overlap between patients with BGP and patients with grade 1 BPP. Syndromes: Unilateral polymicrogyria The region in which unilateral polymicrogyria occurs has been generalized into different cortical areas. Features associated with this form of polymicrogyria are similar to the other forms and include spastic hemiparesis, intellectual disability of variable degree, and seizures. The features depend on the exact area and extent to which polymicrogyria has affected the cortex. Patients who have unilateral polymicrogyria have been reported to also have electrical status epilepticus during sleep (ESES), and all had seizures. Signs and symptoms: The diagnosis of PMG is merely descriptive; it is not a disease in itself, nor does it describe the underlying cause of the brain malformation. Polymicrogyria may be just one piece of a syndrome of developmental abnormalities, because children born with it may have a wide spectrum of other problems, including global developmental disabilities, mild to severe intellectual disabilities, motor dysfunctions (including speech and swallowing problems), respiratory problems, and seizures. Though it is difficult to make a predictable prognosis for children with the diagnosis of PMG, there are some generalized clinical findings according to the areas of the brain that are affected.
Signs and symptoms: Bilateral frontal polymicrogyria (BFP) – Cognitive and motor delay, spastic quadriparesis, epilepsy Bilateral frontoparietal polymicrogyria (BFPP) – Severe cognitive and motor delay, seizures, dysconjugate gaze, cerebellar dysfunction Bilateral perisylvian polymicrogyria (BPP) – Pseudobulbar signs, cognitive impairment, epilepsy, some with arthrogryposis or lower motor neuron disease Bilateral parasagittal parieto-occipital polymicrogyria (BPOP) – Partial seizures, some with intellectual developmental disorder Bilateral generalized polymicrogyria (BGP) – Cognitive and motor delay of variable severity, seizures. Rates of symptoms in PMG include 78% for epilepsy, 70% for global developmental delay, 51% for spasticity, 50% for microcephaly, 45% for dysmorphic features (e.g., abnormal facies or hand, feet, or digital anomalies), and 5% for macrocephaly. In the BPP subtype of PMG, up to 75% may have mild to moderate intellectual disability. Cause: The cause of polymicrogyria is unclear. It is generally agreed that PMG occurs during late neuronal migration (when the majority of neurons have arrived at the cerebral cortex from their starting points around the ventricular system of the brain) or during early cortical organization in fetal development. Evidence for both genetic and non-genetic causes exists. Chromosomal abnormalities have been identified in PMG, such as the 22q11.2 deletion (characterised by bilateral perisylvian PMG, heart defects, facial dysmorphism, and microcephaly) and the 1p36 deletion (bilateral perisylvian PMG, intellectual disability, dysmorphic facial features, and microcephaly). Apart from that, mutations in more than 30 genes have been associated with PMG. Common genes associated with PMG are TUBA1A and PIK3R2. Associations with the gene WDR62 (diffuse or asymmetric PMG) and with SCN3A have also been identified, as well as with other ion channel genes such as the KCN, CACNA, GRIN, and GABAR families. Other genes implicated are: GPR56 (bilateral frontoparietal PMG), TUBB2B (anterior predominant PMG), NDE1 (diffuse PMG), AKT3 (bilateral perisylvian PMG), and PIK3CA (bilateral perisylvian PMG). Non-genetic causes include defects in placental oxygenation and association with congenital infections, particularly cytomegalovirus, syphilis, and varicella zoster virus. Pathology: Polymicrogyria is a disorder of neuronal migration, resulting in structurally abnormal cerebral hemispheres. The Greek roots of the name describe its salient feature: many [poly] small [micro] gyri (convolutions in the surface of the brain). It is also characterized by shallow sulci, a slightly thicker cortex, neuronal heterotopia and enlarged ventricles. When many of these small folds are packed tightly together, PMG may resemble pachygyria (a few "thick folds" - a mild form of lissencephaly). The pathogenesis of polymicrogyria is still being researched, though it is known to be heterogeneous. It results from both genetic and destructive events. While polymicrogyria is associated with genetic mutations, none of these are the sole cause of this abnormality. The cortical development of mammals requires specific cell functions that all involve microtubules, whether in mitosis (specifically cell division), cell migration, or neurite growth. Mutations that affect the role of microtubules and are studied as possible contributors, but not sole causes, of polymicrogyria include those in TUBA1A and TUBB2B.
TUBB2B mutations are known to contribute to polymicrogyria either with or without congenital fibrosis of the external ocular muscles, as well as to the bilateral perisylvian form. Pathology: The gene GPR56 is a member of the adhesion G protein-coupled receptor family and is directly related to causing bilateral frontoparietal polymicrogyria (BFPP). Other genes in the G protein-coupled receptor family also have effects in this condition, such as on outer brain development, but not enough is known about them to carry out all the research properly, so the main focus starts with the specific GPR56 gene within this category. This malformation of the brain is a result of numerous small gyri taking over a surface of the brain that should otherwise be normally convoluted. The gene is currently under study to help identify causes of and contribute to knowledge about this condition, providing insight into the mechanisms of normal cortical development and the regional patterning of the cerebral cortex using magnetic resonance imaging (MRI). Myelination defects, specifically, have been found in polymicrogyria due to mutation of this gene: GPR56 appears to be important for myelination, because a mutation in this gene results in reduced white matter volume and signal changes, as shown on MRIs. While the cellular role of GPR56 in myelination remains unclear, this information will be used to further other studies of this gene. Other genes that have been associated with this condition are GRIN1 and GRIN2B. Diagnosis: The effects of PMG can be either focal or widespread. Although both can have physiological effects on the patient, it is hard to determine PMG as the direct cause because it can be associated with other brain malformations. Most commonly, PMG is associated with Aicardi and Warburg micro syndromes. These syndromes both have frontoparietal polymicrogyria among their anomalies. To ensure proper diagnosis, doctors can thus examine a patient through neuroimaging or neuropathological techniques. Diagnosis: Neuroimaging techniques Pathologically, PMG is defined as "an abnormally thick cortex formed by the piling upon each other of many small gyri with a fused surface." To view these microscopic characteristics, magnetic resonance imaging (MRI) is used. First, physicians must distinguish between polymicrogyria and pachygyria. Pachygyria leads to the development of broad and flat regions in the cortical area, whereas the effect of PMG is the formation of multiple small gyri. On a computed tomography (CT) scan, these both appear similar in that the cerebral cortex appears thickened. However, MRI with a T1-weighted inversion recovery will illustrate the gray-white junction that characterizes patients with PMG. An MRI is also usually preferred over the CT scan because it has sub-millimeter resolution. The resolution displays the multiple folds within the cortical area, which is consistent with the neuropathology of an affected patient. Diagnosis: Neuropathological techniques Gross examination exposes a pattern of many small gyri clumped together, which causes an irregularity in the brain surface. The cerebral cortex, which in normal patients is six cell layers thick, is also thinned. As mentioned prior, the MRI of an affected patient shows what appears to be a thickening of the cerebral cortex because of the tiny folds that aggregate, causing a denser appearance.
However, gross analysis shows that an affected patient can have anywhere from one to all six of these layers missing. Treatment: The PMG malformation cannot be reversed, but the symptoms can be treated. The removal of affected areas through hemispherectomy has been used in some cases to reduce the amount of seizure activity, though few patients are candidates for surgery. The global developmental delay that affects 94% of patients can also be mitigated in some with occupational, physical, and speech therapies. It is important to realize that PMG affects each patient differently, so treatment options and mitigation techniques will vary. Many services are available to help; most children's hospitals can direct caregivers to the information and assistance they need. Epidemiology: The incidence of PMG and its different forms is unknown. However, the frequency of cortical dysplasia in general has been estimated at 1 in 2,500 newborns. PMG is one of the best-known and most common malformations of cortical development, accounting for 20% of all cases. In the largest series of PMG cases, the bilateral perisylvian pattern was the most common topological pattern (52% of cases), followed by the unilateral perisylvian pattern (9% of cases). History: Limited information was known about cerebral disorders until the development of modern technologies. Brain imaging and genetic sequencing have greatly increased the information known about polymicrogyria within the past decade. Understanding of the development, classification, and localization of the disorder has greatly improved. For instance, localization of specific cortex regions affected by the disease was determined, allowing clinical symptoms of patients to be linked with the localized cortex areas affected. A gene identified as a contributor to bilateral frontoparietal polymicrogyria was GPR56.
**Surimi** Surimi: Surimi (Japanese: 擂り身 / すり身, "ground meat") is a paste made from fish or other meat. The term can also refer to a number of East Asian foods that use that paste as their primary ingredient. It is available in many shapes, forms, and textures, and is often used to mimic the texture and color of the meat of lobster, crab, grilled Japanese eel or shellfish. Surimi: The most common surimi product in the Western market is imitation crab meat. Such a product often is sold as krab, imitation crab and mock crab in the United States, and as seafood sticks, crab sticks, fish sticks, seafood highlighter or seafood extender in Commonwealth nations. In Britain, the product is sometimes known as seafood sticks to avoid breaking Trading Standards rules on false advertising. History: Fish pastes have long been a popular food in East Asia. In China, the food is used to make fish balls (魚蛋/魚丸) and ingredients in a thick soup known as "geng" (羹), common in Fujian cuisine. In Japan, the earliest surimi production was in 1115, for making kamaboko. Alaska pollock, native to the seas around Japan, played an important role in the development of processed surimi due to its high protein biomass. Satsumaage, chikuwa, and hanpen were other major surimi foods prior to 1960. After World War II, machines were used to process surimi, but it was always sold fresh, since freezing had a negative effect on the finished product by denaturing the gel-forming capability of the surimi. Between 1945 and 1950, record catches of pollock in Hokkaido (primarily for harvesting the roe) resulted in large quantities of fish meat, so the Hokkaido Fisheries Research Station established a team to make better use of the excess. The team, led by K. Nishiya, discovered that the addition of salt during processing prevented the spongy texture that resulted after freezing, and also began using salted surimi in the manufacture of fish sausages. In 1969, Nishitani Yōsuke further discovered that the use of sucrose, or other carbohydrates such as sorbitol, acted as a cryoprotectant by stabilizing the actomyosin in the surimi without denaturing the fish protein the way salt does. Surimi industrial technology developed by Japan in the early 1960s promoted the growth of the surimi industry. In 1963, the government of Hokkaido applied for a patent on the surimi processing technology, and companies such as Nippon Suisan and Maruha-Nichiro implemented at-sea frozen fish processing in the mid-1960s. After a peak of surimi consumption in 1975, consumption in Japan began to decline as the preference for other meats (beef, pork) rose and lower-quality products on the market influenced consumer opinion of surimi overall. Although the quality standards for fish in Japanese surimi products were quite high, consumer perception generally attributed surimi to by-catch and lower-quality fish. When the Magnuson–Stevens Fishery Conservation and Management Act was enacted in 1976, the United States became involved in the surimi industry through joint ventures with Japanese fish processors. Imitation crab products were developed in Japan between 1973 and 1975 and, although not as popular in Japan, opened the door to international surimi consumption. Further developments in using different types of fish have been made since the 1980s. The first US surimi processing plant was built in 1984 on Kodiak Island, and Canada's first in 1995, aided by Japanese technicians. In the early 1990s and the late 2000s, the price of surimi skyrocketed.
This impacted many small Japanese kamaboko companies, causing many to go bankrupt due to the cost of materials as well as younger generations' diminishing habit of eating kamaboko daily. As the price rose, the surimi industry sought methods to minimize waste. The decanter technique, developed in the mid-1990s, further improved the recovery of fish meat during the washing process. Two to three million tons of fish from around the world, amounting to 2–3 percent of the world fisheries' supply, are used for the production of surimi and surimi-based products. The United States and Japan are major producers of surimi and surimi-based products. Thailand has become an important producer, and China's role as a producer is increasing. Many newcomers to the surimi industry have emerged, including Lithuania, Vietnam, Chile, the Faroe Islands, France, and Malaysia. Production: Lean meat from fish or land animals is first separated or minced. The meat then is rinsed numerous times to eliminate undesirable odors. The result is beaten and pulverized to form a gelatinous paste. Depending on the desired texture and flavor of the surimi product, the gelatinous paste is mixed with differing proportions of additives such as binders (starch, egg white, soy protein, transglutaminases), salt, vegetable oil, humectants, cryoprotectants (sorbitol, sugar), seasonings, and flavor enhancers such as monosodium glutamate (MSG). Production: If the surimi is to be packed and frozen, food-grade cryoprotectants are added as preservatives while the meat paste is being mixed. Under most circumstances, surimi is processed immediately into a formed and cured product. Production: Fish surimi Typically the resulting paste, depending on the type of fish and whether it was rinsed in the production process, is tasteless and must be flavored artificially. According to the United States Department of Agriculture National Nutrient Database, fish surimi contains about 76% water, 15% protein, 6.85% carbohydrate, and 0.9% fat. In North America and Europe, surimi also alludes to fish-based products manufactured using this process. A generic term for fish-based surimi in Japanese is "fish-puréed products" (魚肉練り製品 gyoniku neri seihin). The gelling quality of whitefish makes it ideal for surimi production, but other fish, including dark-meat species, have been incorporated as technological advances solved gelling issues. Production: The fish used to make surimi include Alaska pollock (Gadus chalcogrammus), Atlantic cod (Gadus morhua), big-head pennah croaker (Pennahia macrocephalus), bigeyes (Priacanthus arenatus), golden threadfin bream (Nemipterus virgatus), milkfish (Chanos chanos), Pacific whiting (Merluccius productus), various shark species, swordfish (Xiphias gladius), tilapia (Oreochromis mossambicus and Oreochromis niloticus niloticus), and black bass (smallmouth bass, Micropterus dolomieu; largemouth bass, Micropterus salmoides; and Florida black bass, Micropterus floridanus). Meat surimi Although seen less commonly in Japanese and Western markets, pork surimi (肉漿) is a common product found in a wide array of Chinese foods. The process of making pork surimi is similar to making fish surimi, except that leaner cuts of meat are used and rinsing is omitted. Pork surimi is made into pork balls (Chinese: gòngwán; 貢丸) which, when cooked, have a texture similar to fish balls, but are much firmer and denser.
Production: Pork surimi also is mixed with flour and water to make a type of dumpling wrapper called "yànpí" (燕皮 or 肉燕皮) that has the same firm and bouncy texture as cooked surimi. Production: Beef surimi also can be shaped into a ball form to make "beef balls" (牛肉丸). When beef surimi is mixed with chopped beef tendons and formed into balls, "beef tendon balls" (牛筋丸) are produced. Both of these products commonly are used in Chinese hot pot as well as served in Vietnamese phở. Bakso, made from beef surimi, is a popular food found in Indonesia. Production: The surimi process also is used to make turkey products: turkey burgers, turkey sausage, turkey pastrami, turkey franks, turkey loaf and turkey salami. Production: Chemistry of curing The curing of the fish paste is caused by the polymerization of myosin when heated. The species of fish is the most important factor affecting this curing process. Many pelagic fish with higher fat contents lack the needed type of heat-curing myosin and are not used for surimi. Certain kinds of fish, such as the Pacific whiting, cannot form firm surimi without additives such as egg white or potato starch. Before the outbreak of bovine spongiform encephalopathy (BSE, mad cow disease), it was an industrial practice to add bovine blood plasma to the fish paste to help its curing or gel-forming. Today some manufacturers may use a transglutaminase to improve the texture of surimi. Although illegal, the practice of adding borax to fish balls and surimi to heighten their bouncy texture and whiten the product is widespread in Asia. Uses and products: Surimi is a useful ingredient for producing various kinds of processed foods. It allows a manufacturer to imitate the texture and taste of a more expensive product, such as lobster tail, using a relatively low-cost material. Surimi is an inexpensive source of protein. In Asian cultures, surimi is eaten as a food in its own right and seldom used to imitate other foods. In Japan, fish cakes (kamaboko) and fish sausages, as well as other extruded fish products, are commonly sold as cured surimi. In Chinese cuisine, fish surimi, often called "fish paste", is used directly as stuffing or made into balls. Balls made from lean beef (牛肉丸, lit. "beef ball") and pork surimi often are seen in Chinese cuisine. Fried, steamed, and boiled surimi products also are found commonly in Southeast Asian cuisine. Uses and products: In the West, surimi products usually are imitation seafood products, such as crab, abalone, shrimp, calamari, and scallop. Several companies do produce surimi sausages, luncheon meats, hams, and burgers; some examples include Salmolux salmon burgers and SeaPak surimi ham, salami, and rolls. A patent was issued for a process of making even higher-quality proteins from fish, such as in the making of imitation steak from surimi. Surimi is also used to make kosher imitation shrimp and crabmeat, using only kosher fish such as pollock. There is also a surimi salad, which consists of imitation crab meat mixed with mayonnaise and vegetables. List of foods made from surimi: A-gei - a stuffed tofu; Chikuwa - a Japanese grilled surimi; Crab stick - also known as imitation crab; Kamaboko - shaped into loaves and steamed, served sliced; Gyoniku soseji - fish sausages; Hanpen - made from a mix of yam and surimi, cut into triangles; Tsukune (tsumire) - often skewered and cooked a variety of ways; Fish ball/bakso ikan; Narutomaki - a type of kamaboko; Yong tau foo - stuffed tofu.
List of foods made from surimi: Satsuma-age - deep fried surimi patties; Ngo hiang - wrapped in tofu skin and fried; Pempek - shaped into logs and deep fried; Gefilte fish - formed into round patties and poached; Keropok lekor - a Malaysian hawker food of fish paste formed into finger-like shapes, deep fried, and eaten with sweet chilli sauce.
**Apache JServ Protocol** Apache JServ Protocol: The Apache JServ Protocol (AJP) is a binary protocol that can proxy inbound requests from a web server through to an application server that sits behind the web server. AJP is a highly trusted protocol and should never be exposed to untrusted clients, which could use it to gain access to sensitive information or execute code on the application server. It also supports some monitoring, in that the web server can ping the application server. Web implementors typically use AJP in a load-balanced deployment where one or more front-end web servers feed requests into one or more application servers. Sessions are redirected to the correct application server using a routing mechanism wherein each application server instance gets a name (called a route). In this scenario the web server functions as a reverse proxy for the application server. Lastly, AJP supports request attributes which, when populated with environment-specific settings in the reverse proxy, provide for secure communication between the reverse proxy and the application server. AJP runs in Apache HTTP Server 1.x using the mod_jk plugin, and in Apache 2.x using the mod_proxy_ajp, mod_proxy and proxy balancer modules together. Other web server implementations exist for lighttpd 1.4.59, nginx, Grizzly 2.1, and Internet Information Services. Web container application servers supporting AJP include Apache Tomcat, WildFly (formerly JBoss AS), and GlassFish. History: Alexei Kosut originally developed the Apache JServ Protocol in July 1997, but the version 1.0 specification was not published until July 29, 1998. He also wrote the first implementations of it in the same month, with the releases of the Apache JServ servlet engine 0.9 and Apache mod_jserv 0.9a (released on July 30, 1997). The specification was updated to version 1.1 on September 9, 1998. Also in 1998, a revamped protocol was created and published in specification versions 2 and 2.1; however, it was never adopted. History: In 1999, Sun Microsystems donated their JavaServer Web Development Kit (JSWDK; codenamed Tomcat) reference implementation to the Apache Software Foundation. This became Apache Tomcat version 3.0, the successor to JSWDK 2.1, and derailed further development of the Apache JServ servlet engine and AJP towards support of Java servlet API version 2.1. The current specification remains at version 1.3; however, there is a published extension proposal as well as an archived experimental 1.4 proposal.
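As an illustration of the ping facility mentioned above, here is a minimal Python sketch that sends an AJP13 CPing packet and checks for the CPong reply. It assumes a container such as Tomcat with an AJP connector listening on localhost port 8009 and no AJP secret configured; the host, port, and timeout are illustrative.

```python
import socket

HOST, PORT = "localhost", 8009  # assumed AJP connector address

# Client-to-container AJP13 packets start with the magic bytes 0x12 0x34,
# followed by a two-byte payload length; payload type 10 is CPing.
cping = bytes([0x12, 0x34, 0x00, 0x01, 0x0A])

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(cping)
    reply = sock.recv(5)

# Container-to-client packets start with the bytes "AB" (0x41 0x42);
# payload type 9 is CPong, the positive reply to CPing.
if reply == bytes([0x41, 0x42, 0x00, 0x01, 0x09]):
    print("application server answered CPong: AJP connector is alive")
else:
    print("unexpected reply:", reply.hex())
```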
**Anprolene** Anprolene: Anprolene is a registered trade name for ethylene oxide that belongs to Andersen Sterilizers. Anprolene: Harold Willids Andersen invented Anprolene in 1967; it used plastic bags and small ampoules, and hence substantially less ethylene oxide (EtO), than traditional chamber-type sterilizers, which employ tanks of EtO. The "gas diffusion" method of using ethylene oxide was particularly useful to Andersen, whose invention of the first double-lumen nasogastric tube was being used by his colleagues at Bellevue Hospital in New York, NY, where he was chief resident. At that time, the single-lumen Levin tube was supplied clean and packaged, but not sterile. Andersen recognized the need, and the Andersen Tube was packaged and sterilized with ethylene oxide. The US EPA registered Anprolene in 1968. Another gas diffusion method, Sterijet, was invented and used to sterilize Andersen Tubes and other medical devices. Anprolene: On February 8, 2013, Anprolene and Sterijet were recognized by the FDA Office of Compliance as pre-amendment devices, a reference to the Medical Device Amendments of 1976. Plastic, latex, rubber, and the like are "porous" to ethylene oxide, so the gas diffuses through a series of bags containing a specific quantity of it. The bag containing the items for sterilization concentrates the gas for long enough, which is why this is called a unit-dose gas diffusion method. Each sterilization cycle uses less than 18 g of 100% EtO; economic value is gained because, unlike with a traditional EtO chamber-type sterilizer that relies on tanks containing pounds of EtO, every corner of a chamber need not be filled. Andersen's unit-dose gas diffusion method is widely used where small quantities of goods require sterilization. The ethylene oxide flexible chamber technology is also called EO-FCT. The human, veterinary and industrial markets are beneficiaries of EO-FCT.
**Redmi Pad** Redmi Pad: Redmi Pad is an Android tablet computer designed, marketed and manufactured by Xiaomi. The tablet was announced on October 4, 2022, and released on October 5, 2022. Design: The tablet has a glass front and an aluminum unibody. The design of the camera bump is similar to that of the Redmi K50 and K50 Pro. On the bottom side there are a USB-C port and two speakers. On the top side there are two speakers and a power button. On the right side there are a volume rocker, two microphones, and a microSD tray. The Redmi Pad is sold in three colours: Graphite Gray, Moonlight Silver, and Mint Green.
**Belly landing** Belly landing: A belly landing or gear-up landing occurs when an aircraft lands without its landing gear fully extended and uses its underside, or belly, as its primary landing device. Normally the term gear-up landing refers to incidents in which the pilot forgets to extend the landing gear, while belly landing refers to incidents where a mechanical malfunction prevents the pilot from extending the landing gear. Belly landing: During a belly landing, there is normally extensive damage to the airplane. Belly landings carry the risk that the aircraft may flip over, disintegrate, or catch fire if it lands too fast or too hard. Extreme precision is needed to ensure that the plane lands as straight and level as possible while maintaining enough airspeed to maintain control. Strong crosswinds, low visibility, damage to the airplane, or unresponsive instruments or controls greatly increase the danger of performing a belly landing. Nevertheless, belly landings are among the most common types of aircraft accidents, and they are normally not fatal if executed carefully. Causes and prevention: Pilot error The most common cause of gear-up landings is the pilot simply forgetting to extend the landing gear before touchdown. On any retractable-gear aircraft, lowering the landing gear is part of the pilot's landing checklist, which also includes items such as setting the flaps, propeller and mixture controls for landing. Pilots who ritually perform such checklists before landing are less likely to land gear-up. However, some pilots neglect these checklists and perform the tasks by memory, increasing the chances of forgetting to lower the landing gear. Even careful pilots are at risk, because they may be distracted and forget to perform the checklist, or be interrupted in the middle of it by other duties such as collision avoidance or another emergency. In one such incident, the B-17 Dutchess' Daughter had landed normally when the copilot inadvertently flipped the landing gear switch to retract; the gear collapsed near the end of the landing roll. All aircraft with retractable landing gear are required to have a way to indicate the status of the landing gear, normally a set of lights that change colors from red to amber to green depending on whether the gear are up, in transit, or down. However, a distracted pilot may forget to look at these lights. This has led aircraft designers to build extra safety systems into the aircraft to reduce the possibility of human error. In small aircraft this most commonly takes the form of a warning light and horn which operate when any of the landing gear is not locked down and any of the engine throttles are retarded below a cruise power setting. However, the horn has been useless in situations where the pilot was unfamiliar with the aircraft and did not know what the horn was meant to indicate. Pilots have sometimes confused the landing gear warning horn with the stall warning horn. In other cases, pilots in older aircraft cannot hear the horn because they are wearing a modern noise-canceling headset. Causes and prevention: In larger aircraft, the warning system usually excludes the engine power setting and instead warns the pilot when the flaps are set for landing but the landing gear is not. An alternative system uses the ground proximity warning system or radar altimeter to engage a warning when the airplane is close to the ground and descending with the gear not down.
Most airliners incorporate a voice message system which eliminates the ambiguity of a horn or buzzer and instead gives the pilot a clear verbal indication: "GEAR NOT DOWN". In addition, large aircraft are designed to be operated by two pilots working as a team. One flies the aircraft and handles communications and collision avoidance, while the other operates the aircraft systems. This provides a sort of human redundancy which reduces the workload placed on any one crew member, and allows one crew member to check the work of the other. The combination of advanced warning systems and effective crew training has made gear-up landing accidents in large aircraft extremely rare. Causes and prevention: In some cases, the pilot may be warned of an unsafe gear condition by the aircraft's flying characteristics. Very sleek, high-performance airplanes will often be difficult to slow to a safe landing speed without the aerodynamic drag of the extended landing gear. Causes and prevention: Mechanical failure Mechanical failure is another cause of belly landings. Most landing gear are operated by electric motors or hydraulic actuators. Multiple redundancies are usually provided to prevent a single failure from failing the entire landing gear extension process. Whether electrically or hydraulically operated, the landing gear can usually be powered from multiple sources. In case the power system fails, an emergency extension system is always available. This may take the form of a manually operated crank or pump, or a mechanical free-fall mechanism which disengages the uplocks and allows the landing gear to fall and lock due to gravity and/or airflow. Causes and prevention: In cases where only one landing gear leg fails to extend, the pilot may choose to retract all the gear and perform a belly landing because he or she may believe it to be easier to control the aircraft during rollout with no gear at all than with one gear missing. Some aircraft, like the A-10 Thunderbolt II, are specifically designed to make belly landings safer. In the A-10's case, the retracted main wheels protrude out of their nacelles, so the plane virtually rolls on belly landings. Unmanned belly landings: There are examples of aircraft making comparatively successful belly landings after being abandoned by their crew in flight. Unmanned belly landings: A German Junkers Ju 88 bomber, after an attack on Soviet shipping in April 1942, was abandoned by its crew and came down on a hillside at Garddevarre in Finnmark in the far north of Norway. It was recovered in 1988 and is currently displayed at the Norsk Luftfartsmuseum, the Norwegian Aviation Museum at Bodø Airport. On 27 September 1956, a Bell X-2 experimental aircraft, after establishing an airspeed record of Mach 3.2, landed unmanned in the desert after a series of stalls and glides. It was only superficially damaged. The pilot had used his escape system at about 40,000 ft after losing control of the aircraft; he was killed when his capsule hit the desert. Possibly the best known is a United States Air Force Convair F-106 Delta Dart, tail number 58-0787. In February 1970, the aircraft entered a flat spin over Montana. After the pilot ejected, the aircraft's spin stabilised, and the Delta Dart proceeded to fly for several miles until it came down in a field near Big Sandy, Montana. The aircraft (later nicknamed the Cornfield Bomber) was repaired and returned to service.
After the F-106 was withdrawn from service, the aircraft was presented to the National Museum of the United States Air Force. Examples: On 29 September 1940, during the 1940 Brocklesby mid-air collision, two Avro Ansons became wedged together after colliding, one on top of the other. Both of the upper aircraft's engines had been knocked out in the collision, but those of the one below continued to turn at full power. The pilot of the lower Anson was injured and bailed out, but the pilot of the upper Anson, Leonard Graham Fuller, found that he was able to control the piggybacking pair of aircraft with his ailerons and flaps. He managed to travel 8 kilometres (5 mi) after the collision before making a successful emergency belly landing in a large paddock 6 kilometres (4 mi) south-west of Brocklesby, New South Wales, Australia. Examples: On 4 July 2000, Malév Flight 262, a Tupolev Tu-154, accidentally performed a gear-up touchdown during landing and skidded on the runway, but was able to take off and land normally after a go-around. No injuries were reported. On 9 April 2006, a Canadair CL-215 water bomber sold by Buffalo Airways to the Turkish government belly landed on the runway at İzmir Adnan Menderes Airport when the Turkish pilots did not put the landing gear down. The hull was damaged in the crash and there were no injuries, but Buffalo had to fly in new drop doors to replace the ones damaged in the crash. On 8 May 2006, a United States Air Force B-1 Lancer strategic bomber landed on the atoll of Diego Garcia in the Indian Ocean without lowering its undercarriage. A fire ensued but was extinguished, with only minor personnel injuries. The pilots had reportedly switched off the warning system that would have warned them of the oversight, and overlooked the red warning light on the instrument panel throughout the landing. The aircraft, after nearly $8 million in repairs, was returned to service the following year. Examples: On 1 November 2011, LOT Polish Airlines Flight 016, a Boeing 767 commanded by Captain Tadeusz Wrona, declared an emergency after a loss of landing gear en route from Newark Liberty International Airport to Warsaw Chopin Airport. The aircraft involved was the newest 767 airframe in the fleet. It made a belly landing in Warsaw with a small fire, but all passengers and crew were evacuated with no injuries. The airport was closed for over a day afterwards.
**Racing helmet** Racing helmet: A racing helmet is a form of protective headgear worn by racing car and rally drivers. Motor racing has long been known to be an exceptionally risky sport: sudden deceleration forces on the head can easily occur if a racing car loses control at the very high speeds of competitive motor racing or on the rough terrain experienced in rallying. A risk nearly unique to motor racing is the possibility of drastically severe burns from fuel igniting when the fuel lines or fuel tank of the vehicle are jolted sufficiently to dislodge or breach them in a situation in which the driver cannot escape from his car soon enough. This happened to world champion driver Niki Lauda at the 1976 German Grand Prix race at the Nürburgring, in a crash from which he barely escaped alive. It is known that the percentage of racing accidents resulting in hospitalisation in motor racing, at around 25 percent, is higher than in any other major international sport, and that the average period in hospital is the longest. A recent Australian study also suggests motor racing may have the highest rate of actual injury among major sports. However, a study conducted between 1996 and 2000 by Fuji Toranomon Orthopaedic Hospital in Shizuoka suggests that only a small proportion of these injuries are actually to the head or surrounding areas. History: As in gridiron football, cloth or leather helmets, with goggles to protect drivers' eyes from dust, were used by many pre-World War I racing drivers, and as early as 1914 the Auto Cycle Union made helmets compulsory for drivers of its racing vehicles. However, these helmets did nothing to prevent massive head injuries or burns during the numerous crashes encountered even when races were moved onto private tracks. In the period following the war, concern about head injuries in motor racing continued to grow much faster than efforts to design safer helmets. Some racing drivers in the 1920s and 1930s were known to wear football or fire-fighting helmets, as these offered better protection than the standard racing headgear of the time. Despite the fact that hard-shell helmets were used in motorcycle racing during the 1930s, it was not until the 1950s that a hard-shell helmet specifically designed for motor racing emerged, and soon after the first hard-shell helmets were developed, Formula One made helmets of this type compulsory for all drivers. NASCAR, however, did not make full-face helmets compulsory until after the death of Dale Earnhardt in 2001. History: Although helmets had been mandatory in other races beforehand, the new technology greatly improved safety and allowed the use of higher speeds. Bell Sports developed the first mass-produced auto racing helmet in 1954. By the end of the 1950s, full-face crash helmets were regarded as essential equipment for drivers in all forms of motor racing, and the Snell Memorial Foundation developed the first auto racing helmet standards in 1959. Since that time, the alternative standard to that of Snell has been that of the Fédération Internationale de l'Automobile, whose standards are used in Grand Prix racing. The Safety Helmet Council of America also developed a race-car helmet standard in the 1970s, but currently provides certification only to the Department of Transportation standard, which does not cover racing car helmets. History: Since racing helmets became general standard equipment, there have been many improvements made to their design to cope with the increases in power and speed of racing cars.
The most recent of these has been the development of flexible "tethers" so that a head inside the helmet cannot snap forward or to the side during a wreck. Construction: In most respects, auto racing helmets are not dissimilar to motorcycle helmets in construction, since they have similar requirements of protecting against extremely high-speed collisions. Modern racing helmets have an outer shell of carbon fiber, an inner shell of thick polystyrene, and padding which must be in contact with the wearer's head. There are, however, several major differences that make the two types of helmets non-interchangeable: Motorcycle helmets do not need fire protection because at high speed the rider will fly far from a burning motorcycle. In contrast, auto racing helmets must have fire protection, since a driver is not likely to be able to escape if a car catches fire. Auto racing helmets can have a narrower field of view for greater head protection than is possible in a motorcycle helmet, especially when the driver is following a clearly defined track. For this reason, many auto racing helmets are illegal to use on a motorcycle over public roads. Construction: Auto racing helmets must be tested for sharp collisions with a roll bar, which a motorcyclist is not likely to encounter. The fireproof material used in racing helmets is found in the inner lining and is known as Nomex, having been first introduced to racing helmets in 1967. In the 1960s and 1970s, as crash fireproofing developed, concern was raised that in ordinary use racing helmets offered very little ventilation, because they have more complete head coverage than any other type of helmet. Practical solutions to this problem were not developed until the 1980s, when thermoelectric cooling was developed; however, most customers and governing bodies have preferred alternative, less sophisticated means of improving ventilation over the past thirty years. Personalised designs: Many drivers – especially in open-topped cars such as Formula One cars – will have bright, vivid designs to help distinguish them from other drivers. These designs will traditionally remain with the driver throughout their career, although in 2015 the FIA introduced rules limiting F1 drivers to one design per season to curb a recent trend towards regularly changing the design. This rule was removed from the FIA regulations for the 2020 Formula One season following criticism from F1 drivers and fans.
**Cantor–Zassenhaus algorithm** Cantor–Zassenhaus algorithm: In computational algebra, the Cantor–Zassenhaus algorithm is a method for factoring polynomials over finite fields (also called Galois fields). The algorithm consists mainly of exponentiation and polynomial GCD computations. It was invented by David G. Cantor and Hans Zassenhaus in 1981. It is arguably the dominant algorithm for solving the problem, having replaced the earlier Berlekamp's algorithm of 1967. It is currently implemented in many computer algebra systems. Overview: Background The Cantor–Zassenhaus algorithm takes as input a square-free polynomial $f(x)$ (i.e. one with no repeated factors) of degree $n$ with coefficients in a finite field $\mathbb{F}_q$ whose irreducible polynomial factors are all of equal degree (algorithms exist for efficiently factoring arbitrary polynomials into a product of polynomials satisfying these conditions; for instance, $f(x)/\gcd(f(x), f'(x))$ is a square-free polynomial with the same factors as $f(x)$, so the Cantor–Zassenhaus algorithm can be used to factor arbitrary polynomials). It gives as output a polynomial $g(x)$ with coefficients in the same field such that $g(x)$ divides $f(x)$. The algorithm may then be applied recursively to these and subsequent divisors, until we find the decomposition of $f(x)$ into powers of irreducible polynomials (recalling that the ring of polynomials over any field is a unique factorisation domain). Overview: All possible factors of $f(x)$ are contained within the factor ring $R = \mathbb{F}_q[x] / \langle f(x) \rangle$. If we suppose that $f(x)$ has irreducible factors $p_1(x), p_2(x), \ldots, p_s(x)$, all of degree $d$, then this factor ring is isomorphic to the direct product of factor rings $S = \prod_{i=1}^{s} \mathbb{F}_q[x] / \langle p_i(x) \rangle$. The isomorphism from $R$ to $S$, say $\phi$, maps a polynomial $g(x) \in R$ to the $s$-tuple of its reductions modulo each of the $p_i(x)$, i.e. if

$$g(x) \equiv g_1(x) \pmod{p_1(x)}, \quad \ldots, \quad g(x) \equiv g_s(x) \pmod{p_s(x)},$$

then $\phi(g(x) + \langle f(x) \rangle) = (g_1(x) + \langle p_1(x) \rangle, \ldots, g_s(x) + \langle p_s(x) \rangle)$. It is important to note the following at this point, as it shall be of critical importance later in the algorithm: since each $p_i(x)$ is irreducible, each of the factor rings in this direct product is in fact a field, of order $q^d$. Core result The core result underlying the Cantor–Zassenhaus algorithm is the following: if $a(x) \in R$ is a polynomial satisfying $a(x) \neq 0, \pm 1$ and $a_i(x) \in \{0, -1, 1\}$ for $i = 1, 2, \ldots, s$, where $a_i(x)$ is the reduction of $a(x)$ modulo $p_i(x)$ as before, and if any two of the following three sets are non-empty:

$$A = \{ i \mid a_i(x) = 0 \}, \quad B = \{ i \mid a_i(x) = -1 \}, \quad C = \{ i \mid a_i(x) = 1 \},$$

then there exist the following non-trivial factors of $f(x)$:

$$\gcd(f(x), a(x)) = \prod_{i \in A} p_i(x), \quad \gcd(f(x), a(x) + 1) = \prod_{i \in B} p_i(x), \quad \gcd(f(x), a(x) - 1) = \prod_{i \in C} p_i(x).$$

Overview: Algorithm The Cantor–Zassenhaus algorithm computes polynomials of the same type as $a(x)$ above using the isomorphism discussed in the Background section. It proceeds as follows, in the case where the field $\mathbb{F}_q$ is of odd characteristic (the process can be generalised to characteristic-2 fields in a fairly straightforward way). Select a random polynomial $b(x) \in R$ such that $b(x) \neq 0, \pm 1$. Set $m = (q^d - 1)/2$ and compute $b(x)^m$. Since $\phi$ is an isomorphism, we have (using our now-established notation)

$$\phi(b(x)^m) = (b_1(x)^m + \langle p_1(x) \rangle, \ldots, b_s(x)^m + \langle p_s(x) \rangle).$$

Overview: Now, each $b_i(x) + \langle p_i(x) \rangle$ is an element of a field of order $q^d$, as noted earlier. The multiplicative group of this field has order $q^d - 1$ and so, unless $b_i(x) = 0$, we have $b_i(x)^{q^d - 1} = 1$ and hence $b_i(x)^m = \pm 1$ for each $i$. If $b_i(x) = 0$, then of course $b_i(x)^m = 0$. Hence $b(x)^m$ is a polynomial of the same type as $a(x)$ above.
Further, since $b(x) \neq 0, \pm 1$, at least two of the sets $A$, $B$ and $C$ are non-empty, and by computing the above GCDs we may obtain non-trivial factors. Since the ring of polynomials over a field is a Euclidean domain, we may compute these GCDs using the Euclidean algorithm. Applications: One important application of the Cantor–Zassenhaus algorithm is in computing discrete logarithms over finite fields of prime-power order. Computing discrete logarithms is an important problem in public key cryptography. For a field of prime-power order, the fastest known method is the index calculus method, which involves the factorisation of field elements. If we represent the prime-power order field in the usual way – that is, as polynomials over the prime order base field, reduced modulo an irreducible polynomial of appropriate degree – then this is simply polynomial factorisation, as provided by the Cantor–Zassenhaus algorithm. Implementation in computer algebra systems: The Cantor–Zassenhaus algorithm is implemented in the PARI/GP computer algebra system as the factorcantor() function.
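The following is a minimal sketch of the equal-degree splitting step described above, written in plain Python for a small odd prime field. The prime, the sample polynomial, and the helper names are illustrative; a production implementation would use an optimised computer algebra library rather than this naive polynomial arithmetic.

```python
import random

p = 13  # an illustrative small odd prime; we work over F_p
# Polynomials are lists of coefficients in F_p, lowest degree first.

def trim(a):
    """Drop leading zero coefficients; the zero polynomial becomes [0]."""
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a or [0]

def polymul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % p
    return trim(r)

def polymod(a, m):
    """Remainder of a divided by m over F_p (pow(..., -1, p) needs Python 3.8+)."""
    a = a[:]
    while len(a) >= len(m) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        c = a[-1] * pow(m[-1], -1, p) % p
        s = len(a) - len(m)
        for i, y in enumerate(m):
            a[s + i] = (a[s + i] - c * y) % p
        a.pop()
    return trim(a)

def polypow(b, e, m):
    """Compute b^e mod m by square-and-multiply."""
    r = [1]
    while e:
        if e & 1:
            r = polymod(polymul(r, b), m)
        b = polymod(polymul(b, b), m)
        e >>= 1
    return r

def polygcd(a, b):
    while any(b):
        a, b = b, polymod(a, b)
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]  # normalise to a monic polynomial

def split(f, d):
    """One Cantor–Zassenhaus splitting step: f must be square-free with at
    least two irreducible factors, all of degree d; returns one factor."""
    while True:
        b = trim([random.randrange(p) for _ in range(len(f) - 1)])
        g = polygcd(f, b)               # lucky case: b shares a factor with f
        if 1 < len(g) < len(f):
            return g
        c = polypow(b, (p**d - 1) // 2, f)   # b^m with m = (q^d - 1)/2
        c[0] = (c[0] - 1) % p                # c <- b^m - 1
        c = trim(c)
        if not any(c):
            continue                         # b^m = 1 in R: no split, retry
        g = polygcd(f, c)                    # gcd(f, b^m - 1)
        if 1 < len(g) < len(f):
            return g

# x^2 + 1 factors over F_13 as (x - 5)(x - 8): two factors of degree d = 1.
print(split([1, 0, 1], 1))  # prints one monic factor, e.g. [5, 1] = x + 5
```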
**Ursula Röthlisberger** Ursula Röthlisberger: Ursula Röthlisberger is a professor of computational chemistry at École Polytechnique Fédérale de Lausanne. She works on density functional theory using mixed quantum mechanical/molecular mechanical (QM/MM) methods. She is an associate editor of the American Chemical Society Journal of Chemical Theory and Computation and a fellow of the American Association for the Advancement of Science. Early life and education: Röthlisberger was born in 1964 in Solothurn. She studied physical chemistry at the University of Bern, earning her diploma under the supervision of Ernst Schumacher in 1988. She joined IBM Research – Zurich as a doctoral student with Wanda Andreoni and worked at IBM Zurich as a postdoc until 1992. Röthlisberger then moved to the University of Pennsylvania to work with Michael L. Klein. In 1995 she moved to Germany and joined the group of Michele Parrinello at the Max Planck Institute for Solid State Research. Together they used the Car–Parrinello method to study nanoscale clusters of silicon. Research and career: Röthlisberger was appointed assistant professor at ETH Zurich in 1996. She was the first woman to win the Swiss Federal Institute of Technology at Zurich Ruzicka Prize, in 2001. She joined École Polytechnique Fédérale de Lausanne as an associate professor in 2002 and was made full professor in 2009. In 2005 she was the first woman to be awarded the World Association of Theoretical and Computational Chemists Dirac Medal. Röthlisberger works on density functional theory, extending the Car–Parrinello method to include QM/MM simulations in a code called CPMD. QM/MM systems treat the electronically active part of a molecular structure as a quantum mechanical system, whereas the rest of the molecule is treated classically using molecular mechanics. She uses her hybrid Car–Parrinello systems to study enzymatic reactions and to design biomimetic compounds. Röthlisberger has also expanded QM/MM to include ground-to-excited-state transitions, making it possible to predict photoinduced charge separation and electron transfer. She also works on ab initio simulations of biological systems, and has added the Van der Waals interactions of macromolecules to density functional theory. She has used her simulations for several different applications, including the design of new materials for photovoltaics and exploring the operational mechanisms of chemotherapy. In 2017 she demonstrated that taking Auranofin whilst on RAPTA-T enhances the activity of the anti-cancer drug. She teaches classes in Monte Carlo simulations and molecular dynamics. Research and career: Advocacy and engagement Röthlisberger supports young women scientists and is involved with mentoring of early career researchers. She contributed to the book A Journey into Time in Powers of Ten. She is involved with scientific art, which is regularly used on the covers of the journals in which she publishes. Awards and honours: 2001 Swiss Federal Institute of Technology at Zurich Ruzicka Prize 2004 World Association of Theoretical and Computational Chemists Dirac Medal 2015 European Chemical Society (EuChemS) Lecture Award 2015 International Academy of Quantum Molecular Science Member 2016 The Swiss Foundation for the Doron Prize 2018 American Association for the Advancement of Science Fellow
**BRAP (gene)** BRAP (gene): BRCA1 associated protein is a protein that in humans is encoded by the BRAP gene. Function: The protein encoded by this gene was identified by its ability to bind to the nuclear localization signal of BRCA1 and other proteins. It is a cytoplasmic protein which may regulate nuclear targeting by retaining proteins with a nuclear localization signal in the cytoplasm. [provided by RefSeq, Jul 2008].
**Broadseam** Broadseam: Broadseam is a term particular to the making of a sail. The panels that make up the sections of a sail are cut with curves on the connecting edges, or seams. This method adds a three-dimensional shape to what would otherwise be a flat triangular or quadrilateral piece of fabric. Since a sail is a type of airfoil, this method of sailmaking adds significantly to the amount of draft a sail can have.
**Peak calling** Peak calling: Peak calling is a computational method used to identify areas in a genome that have been enriched with aligned reads as a consequence of performing a ChIP-sequencing or MeDIP-seq experiment. These areas are those where a protein interacts with DNA. When the protein is a transcription factor, the enriched area is its transcription factor binding site (TFBS). Popular software programs include MACS. Wilbanks and colleagues published a survey of ChIP-seq peak callers, and Bailey et al. describe practical guidelines for peak calling in ChIP-seq data. Peak calling: Peak calling may also be conducted on transcriptome/exome data, as well as on RNA epigenome sequencing data from MeRIP-seq or m6A-seq, for the detection of post-transcriptional RNA modification sites with software programs such as exomePeak. Peak calling: Many peak calling tools are optimised for only certain kinds of assays, such as transcription-factor ChIP-seq only or DNase-seq only. However, a newer generation of peak callers such as DFilter is based on a generalised optimal theory of detection and has been shown to work for nearly all kinds of tag-profile signals from next-generation sequencing data. Such tools also make more complex analyses possible, such as combining multiple ChIP-seq signals to detect regulatory sites. In the context of ChIP-exo, this process is known as 'peak-pair calling'. Differential peak calling is about identifying significant differences between two ChIP-seq signals. One can distinguish between one-stage and two-stage differential peak callers. Two-stage differential peak callers work in two phases: first, they call peaks on the individual ChIP-seq signals, and second, they combine the individual signals and apply statistical tests to estimate differential peaks. DBChIP and MAnorm are examples of two-stage differential peak callers. Peak calling: One-stage differential peak callers segment the two ChIP-seq signals and identify differential peaks in a single step. They take advantage of signal segmentation approaches such as Hidden Markov Models. Examples of one-stage differential peak callers are ChIPDiff, ODIN, and THOR. Differential peak calling can also be applied in the context of analyzing RNA-binding protein binding sites.
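To make the basic idea concrete, here is a deliberately simplified peak-calling sketch in Python. It is not MACS or any published caller: the bin size, the flat Poisson background model, the significance threshold, and the example counts are all assumptions chosen for illustration.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam): a one-sided enrichment p-value."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

def call_peaks(bin_counts, alpha=1e-3):
    """Flag bins whose read counts are improbable under a flat background,
    then merge runs of adjacent enriched bins into candidate peaks."""
    lam = sum(bin_counts) / len(bin_counts)   # genome-wide background rate
    enriched = [i for i, c in enumerate(bin_counts)
                if poisson_sf(c, lam) < alpha]
    peaks, start, prev = [], None, None
    for i in enriched:
        if start is None:
            start = prev = i
        elif i == prev + 1:
            prev = i
        else:
            peaks.append((start, prev))
            start = prev = i
    if start is not None:
        peaks.append((start, prev))
    return peaks

# Hypothetical aligned-read counts per 200 bp bin around one binding site
counts = [3, 2, 4, 3, 25, 31, 28, 3, 2, 4, 3, 3]
print(call_peaks(counts))   # -> [(4, 6)], i.e. bins 4-6 form one peak
```

Real callers add, among other things, a local background estimated from a control sample, fragment-shift modelling, and multiple-testing correction.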
**Greater trochanteric pain syndrome** Greater trochanteric pain syndrome: Greater trochanteric pain syndrome (GTPS), a form of bursitis, is inflammation of the trochanteric bursa, a part of the hip. Greater trochanteric pain syndrome: This bursa is at the top, outer side of the femur, between the insertion of the gluteus medius and gluteus minimus muscles into the greater trochanter of the femur and the femoral shaft. It has the function, in common with other bursae, of working as a shock absorber and as a lubricant for the movement of the muscles adjacent to it.Occasionally, this bursa can become inflamed and clinically painful and tender. This condition can be a manifestation of an injury (often resulting from a twisting motion or from overuse), but sometimes arises for no obviously definable cause. The symptoms are pain in the hip region on walking, and tenderness over the upper part of the femur, which may result in the inability to lie in comfort on the affected side.More often the lateral hip pain is caused by disease of the gluteal tendons that secondarily inflames the bursa. This is most common in middle-aged women and is associated with a chronic and debilitating pain which does not respond to conservative treatment. Other causes of trochanteric bursitis include uneven leg length, iliotibial band syndrome, and weakness of the hip abductor muscles.Greater trochanteric pain syndrome can remain incorrectly diagnosed for years, because it shares the same pattern of pain with many other musculoskeletal conditions. Thus people with this condition may be labeled malingerers, or may undergo many ineffective treatments due to misdiagnosis. It may also coexist with low back pain, arthritis, and obesity. Signs and symptoms: The primary symptom is hip pain, especially hip pain on the outer (lateral) side of the joint. This pain may appear when the affected person is walking or lying down on that side. Diagnosis: A doctor may begin the diagnosis by asking the patient to stand on one leg and then the other, while observing the effect on the position of the hips. Palpating the hip and leg may reveal the location of the pain, and range-of-motion tests can help to identify its source.X-rays, ultrasound and magnetic resonance imaging may reveal tears or swelling. But often these imaging tests do not reveal any obvious abnormality in patients with documented GTPS. Prevention: Because wear on the hip joint traces to the structures that support it (the posture of the legs, and ultimately, the feet), proper fitting shoes with adequate support are important to preventing GTPS. For someone who has flat feet, wearing proper orthotic inserts and replacing them as often as recommended are also important preventive measures.Strength in the core and legs is also important to posture, so physical training also helps to prevent GTPS. But it is equally important to avoid exercises that damage the hip. Treatment: Conservative treatments have a 90% success rate and can include any or a combination of the following: pain relief medication, NSAIDs, physiotherapy, shockwave therapy (SWT) and corticosteroid injection. Surgery is usually for cases that are non-respondent to conservative treatments and is often a combination of bursectomy, iliotibial band (ITB) release, trochanteric reduction osteotomy or gluteal tendon repair. 
A 2011 review found that traditional nonoperative treatment helped most patients, that low-energy SWT was a good alternative, and that surgery was effective in refractory cases and superior to corticosteroid therapy and physical therapy. There are numerous case reports in which surgery has relieved GTPS, but its effectiveness is not documented in clinical trials as of 2009. The primary treatment is rest. This does not mean bed rest or immobilizing the area, but avoiding actions which result in aggravation of the pain. Icing the joint may help. A non-steroidal anti-inflammatory drug may relieve pain and reduce the inflammation. If these are ineffective, the definitive treatment is steroid injection into the inflamed area. Treatment: Physical therapy to strengthen the hip muscles and stretch the iliotibial band can relieve tension in the hip and reduce friction. The use of point ultrasound may be helpful, and is undergoing clinical trials. In extreme cases, where the pain does not improve after physical therapy, cortisone shots, and anti-inflammatory medication, the inflamed bursa can be removed surgically. The procedure is known as a bursectomy. Tears in the muscles may also be repaired, and loose material from arthritic degeneration of the hip removed. At the time of bursal surgery, a very close examination of the gluteal tendons will reveal sometimes subtle and sometimes very obvious degeneration and detachment of the gluteal tendons. If this detachment is not repaired, removal of the bursa alone will make little or no difference to the symptoms. Because the bursa is not required, the main complication of surgery is a potential reaction to the anaesthetic. The surgery can be performed arthroscopically and, consequently, on an outpatient basis. Patients often have to use crutches for a few days following surgery, and up to a few weeks for more involved procedures.
**Stuffed cookie** Stuffed cookie: A stuffed cookie, also known as a stuffed biscuit, is a type of cookie. Many types of fillings are used, such as Nutella, caramel, and peanut butter. List of stuffed cookies/biscuits: Nutella Stuffed Cookies Nutella stuffed chocolate cookies Nutella stuffed chocolate chip cookies Red velvet Nutella stuffed cookies Nutella stuffed oatmeal hazelnut chocolate chip cookies Dark chocolate chocolate chip Nutella stuffed chocolate cookies Caramel stuffed chocolate chip cookies Biscoff stuffed cookies Caramel stuffed apple cider cookies Apple stuffed cookies Chocolate stuffed cookies Custard-stuffed cookies Orange stuffed cookies Date stuffed cookies Lemon stuffed cookies Cherry stuffed cookies Fig stuffed cookies Fig and walnut stuffed cookies Coconut stuffed cookies Apricot stuffed cookies Quince stuffed cookies Almond stuffed cookies Pistachio-stuffed cookies Hazelnut stuffed cookies Walnut stuffed cookies Peanut-stuffed cookies Poppy-stuffed cookies White chocolate stuffed cookies White chocolate stuffed cocoa cookies Raspberry stuffed cookies Murabbalı mecidiye
**IIG meteorite** IIG meteorite: IIG meteorites are a group of iron meteorites. The group currently has six members. They are hexahedrites with large amounts of schreibersite. The meteoric iron is composed of kamacite. Naming and history: Iron meteorites are designated with a Roman numeral and one or two letters. Classification is based on diagrams in which the nickel content of meteoric iron is plotted against trace elements. Clusters in these diagrams are assigned a row (Roman numeral) and a letter in alphabetical order. IIG meteorites are therefore from the second row, cluster G. The Bellsbank, La Primitiva and Tombigbee meteorites were iron meteorites that were found to have chemical and structural similarities in 1967. Further descriptions were made in 1973, and in 1974 it was proposed that the three meteorites should be grouped into the "Bellsbank Trio" grouplet. The five-specimen requirement for group status was met with the addition of the Twannberg meteorite in 1984 and the Guanaco meteorite in 2000. Description: IIG meteorites are hexahedrites. The meteoric iron has a low concentration of nickel (4.1 to 4.9%) and is exclusively kamacite. IIGs contain large amounts of phosphorus in the form of schreibersite and very low concentrations of sulfur. Parent body: Trace elements of IIAB and IIG meteorites are offset, which has been interpreted as the two groups forming on separate planetesimals. Another explanation for the offset is melt immiscibility. This process would have taken place while the planetesimal was cooling off. First, meteoric iron crystallized into a network of cavities and channels. Eventually crystallization cut off the channels and left cavities of trapped melt. When the remaining melt reached the eutectic point, the cavities crystallized into a mixture of schreibersite and meteoric iron. The matrix of this process would form the IIAB meteorites, while the cavities would form the IIG meteorites. Specimens: The IIG group currently has six meteorites assigned to it: Bellsbank, La Primitiva, Tombigbee, Twannberg, Guanaco, and Auburn.
**Hedge (finance)** Hedge (finance): A hedge is an investment position intended to offset potential losses or gains that may be incurred by a companion investment. A hedge can be constructed from many types of financial instruments, including stocks, exchange-traded funds, insurance, forward contracts, swaps, options, gambles, many types of over-the-counter and derivative products, and futures contracts. Public futures markets were established in the 19th century to allow transparent, standardized, and efficient hedging of agricultural commodity prices; they have since expanded to include futures contracts for hedging the values of energy, precious metals, foreign currency, and interest rate fluctuations. Etymology: Hedging is the practice of taking a position in one market to offset and balance against the risk adopted by assuming a position in a contrary or opposing market or investment. The word hedge is from Old English hecg, originally any fence, living or artificial. The first known use of the word as a verb meaning 'dodge, evade' dates from the 1590s; that of 'insure oneself against loss,' as in a bet, is from the 1670s. Hedge-investment duality: Optimal hedging and optimal investments are intimately connected. It can be shown that one person's optimal investment is another's optimal hedge (and vice versa). This follows from a geometric structure formed by probabilistic representations of market views and risk scenarios. In practice, the hedge-investment duality is related to the widely used notion of risk recycling. Examples: Agricultural commodity price hedging A typical hedger might be a commercial farmer. The market values of wheat and other crops fluctuate constantly as supply and demand for them vary, with occasional large moves in either direction. Based on current prices and forecast levels at harvest time, the farmer might decide that planting wheat is a good idea one season, but the price of wheat might change over time. Once the farmer plants wheat, he is committed to it for an entire growing season. If the actual price of wheat rises greatly between planting and harvest, the farmer stands to make a lot of unexpected money, but if the actual price drops by harvest time, he is going to lose the invested money. Examples: Due to the uncertainty of future supply and demand fluctuations, and the price risk imposed on the farmer, the farmer in this example may use different financial transactions to reduce, or hedge, their risk. One such transaction is the use of forward contracts. Forward contracts are mutual agreements to deliver a certain amount of a commodity at a certain date for a specified price and each contract is unique to the buyer and seller. For this example, the farmer can sell a number of forward contracts equivalent to the amount of wheat he expects to harvest and essentially lock in the current price of wheat. Once the forward contracts expire, the farmer will harvest the wheat and deliver it to the buyer at the price agreed to in the forward contract. Therefore, the farmer has reduced his risks to fluctuations in the market of wheat because he has already guaranteed a certain number of bushels for a certain price. However, there are still many risks associated with this type of hedge. For example, if the farmer has a low yield year and he harvests less than the amount specified in the forward contracts, he must purchase the bushels elsewhere in order to fill the contract. 
This becomes even more of a problem when the lower yields affect the entire wheat industry and the price of wheat increases due to supply and demand pressures. Also, while the farmer hedged all of the risks of a price decrease away by locking in the price with a forward contract, he also gives up the right to the benefits of a price increase. Another risk associated with the forward contract is the risk of default or renegotiation. The forward contract locks in a certain amount and price at a certain future date. Because of that, there is always the possibility that the buyer will not pay the amount required at the end of the contract or that the buyer will try to renegotiate the contract before it expires. Futures contracts are another way our farmer can hedge his risk without a few of the risks that forward contracts have. Futures contracts are similar to forward contracts except that they are more standardized (i.e. each contract is the same quantity and date for everyone). These contracts trade on exchanges and are guaranteed through clearing houses. Clearing houses ensure that every contract is honored, and they take the opposite side of every contract. Futures contracts typically are more liquid than forward contracts and move with the market. Because of this, the farmer can minimize the risk he faces in the future through the selling of futures contracts. Futures contracts also differ from forward contracts in that delivery never needs to happen. The exchanges and clearing houses allow the buyer or seller to leave the contract early and cash out. So, tying back into the farmer selling his wheat at a future date, he will sell short futures contracts for the amount that he predicts to harvest to protect against a price decrease. The current (spot) price of wheat and the price of the futures contracts for wheat converge as time gets closer to the delivery date, so in order to make money on the hedge, the farmer must close out his position before then. On the chance that prices decrease in the future, the farmer will make a profit on his short position in the futures market, which offsets any decrease in revenues from the spot market for wheat. On the other hand, if prices increase, the farmer will generate a loss on the futures market, which is offset by an increase in revenues on the spot market for wheat. Instead of agreeing to sell his wheat to one person on a set date, the farmer will just buy and sell futures on an exchange and then sell his wheat wherever he wants once he harvests it. Examples: Hedging a stock price A common hedging technique used in the financial industry is the long/short equity technique. Examples: A stock trader believes that the stock price of Company A will rise over the next month, due to the company's new and efficient method of producing widgets. They want to buy Company A shares to profit from their expected price increase, as they believe that shares are currently underpriced. But Company A is part of a highly volatile widget industry. So there is a risk of a future event that affects stock prices across the whole industry, including the stock of Company A along with all other companies. Examples: Since the trader is interested in the specific company, rather than the entire industry, they want to hedge out the industry-related risk by short selling an equal value of shares from Company A's direct, yet weaker, competitor, Company B.
The first day the trader's portfolio is: Long 1,000 shares of Company A at $1 each Short 500 shares of Company B at $2 eachThe trader has sold short the same value of shares (the value, number of shares × price, is $1000 in both cases). If the trader was able to short sell an asset whose price had a mathematically defined relation with Company A's stock price (for example a put option on Company A shares), the trade might be essentially riskless. In this case, the risk would be limited to the put option's premium. Examples: On the second day, a favorable news story about the widgets industry is published and the value of all widgets stock goes up. Company A, however, because it is a stronger company, increases by 10%, while Company B increases by just 5%: Long 1,000 shares of Company A at $1.10 each: $100 gain Short 500 shares of Company B at $2.10 each: $50 loss (in a short position, the investor loses money when the price goes up)The trader might regret the hedge on day two, since it reduced the profits on the Company A position. But on the third day, an unfavorable news story is published about the health effects of widgets, and all widgets stocks crash: 50% is wiped off the value of the widgets industry in the course of a few hours. Nevertheless, since Company A is the better company, it suffers less than Company B: Value of long position (Company A): Day 1: $1,000 Day 2: $1,100 Day 3: $550 => ($1,000 − $550) = $450 lossValue of short position (Company B): Day 1: −$1,000 Day 2: −$1,050 Day 3: −$525 => ($1,000 − $525) = $475 profitWithout the hedge, the trader would have lost $450. But the hedge – the short sale of Company B – nets a profit of $25 during a dramatic market collapse. Examples: Stock/futures hedging The introduction of stock market index futures has provided a second means of hedging risk on a single stock by selling short the market, as opposed to another single or selection of stocks. Futures are generally highly fungible and cover a wide variety of potential investments, which makes them easier to use than trying to find another stock which somehow represents the opposite of a selected investment. Futures hedging is widely used as part of the traditional long/short play. Examples: Hedging employee stock options Employee stock options (ESOs) are securities issued by the company mainly to its own executives and employees. These securities are more volatile than stocks. An efficient way to lower the ESO risk is to sell exchange traded calls and, to a lesser degree, to buy puts. Companies discourage hedging the ESOs but there is no prohibition against it. Examples: Hedging fuel consumption Airlines use futures contracts and derivatives to hedge their exposure to the price of jet fuel. They know that they must purchase jet fuel for as long as they want to stay in business, and fuel prices are notoriously volatile. By using crude oil futures contracts to hedge their fuel requirements (and engaging in similar but more complex derivatives transactions), Southwest Airlines was able to save a large amount of money when buying fuel as compared to rival airlines when fuel prices in the U.S. rose dramatically after the 2003 Iraq war and Hurricane Katrina. Examples: Hedging emotions As an emotion regulation strategy, people can bet against a desired outcome. A New England Patriots fan, for example, could bet their opponents to win to reduce the negative emotions felt if the team loses a game. 
Some scientific wagers, such as Hawking's 1974 "insurance policy" bet, fall into this category. People typically do not bet against desired outcomes that are important to their identity, due to the negative signal about their identity that making such a gamble entails. Betting against your team or political candidate, for example, may signal to you that you are not as committed to them as you thought you were. Types of hedging: Hedging can be used in many different ways, including foreign exchange trading. The stock example above is a "classic" sort of hedge, known in the industry as a pairs trade due to the trading on a pair of related securities. As investors became more sophisticated, along with the mathematical tools used to calculate values (known as models), the types of hedges have increased greatly. Types of hedging: Examples of hedging include: Forward exchange contract for currencies Commodity future contracts for hedging physical positions Currency future contracts Money Market Operations for currencies Forward Exchange Contract for interest Money Market Operations for interest Future contracts for interest Covered Calls on equities Short Straddles on equities or indexes Bets on elections or sporting events Hedging strategies: A hedging strategy usually refers to the general risk management policy of a firm that trades both financially and physically, describing how the firm minimizes its risks. As the term hedging indicates, this risk mitigation is usually done using financial instruments, but a hedging strategy as used by commodity traders like large energy companies usually refers to a business model (including both financial and physical deals). Hedging strategies: In order to show the difference between these strategies, consider the fictional company BlackIsGreen Ltd, which trades coal by buying the commodity at the wholesale market and selling it to households, mostly in winter. Back-to-back hedging Back-to-back (B2B) is a strategy where any open position is immediately closed, e.g. by buying the respective commodity on the spot market. Hedging strategies: This technique is often applied in the commodity market when the customers’ price is directly calculable from visible forward energy prices at the point of customer sign-up. If BlackIsGreen decides to have a B2B strategy, they would buy the exact amount of coal at the very moment when the household customer comes into their shop and signs the contract. This strategy minimizes many commodity risks, but has the drawback that it carries a large volume and liquidity risk, as BlackIsGreen does not know whether it can find enough coal on the wholesale market to fulfill the need of the households. Hedging strategies: Tracker hedging Tracker hedging is a pre-purchase approach, where the open position is decreased the closer the maturity date comes. Hedging strategies: If BlackIsGreen knows that most of the consumers demand coal in winter to heat their houses, a strategy driven by a tracker would mean that BlackIsGreen buys, for example, half of the expected coal volume in summer, another quarter in autumn and the remaining volume in winter. The closer the winter comes, the better the weather forecasts are, and therefore the estimate of how much coal will be demanded by the households in the coming winter. Hedging strategies: Retail customers’ price will be influenced by long-term wholesale price trends. A certain hedging corridor around the pre-defined tracker-curve is allowed, and the fraction of the open positions decreases as the maturity date comes closer.
Hedging strategies: Delta hedging Delta-hedging mitigates the financial risk of an option by hedging against price changes in its underlying. It is so called because Delta is the first derivative of the option's value with respect to the underlying instrument's price. This is performed in practice by buying a derivative with an inverse price movement. It is also a type of market neutral strategy. Hedging strategies: Only if BlackIsGreen chooses to perform delta-hedging as its strategy do actual financial instruments come into play for hedging (in the usual, stricter meaning). Risk reversal Risk reversal means simultaneously buying a call option and selling a put option. This has the effect of simulating being long on a stock or commodity position. Natural hedges: Many hedges do not involve exotic financial instruments or derivatives, such as the married put. A natural hedge is an investment that reduces the undesired risk by matching cash flows (i.e. revenues and expenses). For example, an exporter to the United States faces a risk of changes in the value of the U.S. dollar and chooses to open a production facility in that market to match its expected sales revenue to its cost structure. Natural hedges: Another example is a company that opens a subsidiary in another country and borrows in the foreign currency to finance its operations, even though the foreign interest rate may be more expensive than in its home country: by matching the debt payments to expected revenues in the foreign currency, the parent company has reduced its foreign currency exposure. Similarly, an oil producer may expect to receive its revenues in U.S. dollars, but faces costs in a different currency; it would be applying a natural hedge if it agreed to, for example, pay bonuses to employees in U.S. dollars. Natural hedges: One common means of hedging against risk is the purchase of insurance to protect against financial loss due to accidental property damage or loss, personal injury, or loss of life. Categories of hedgeable risk: There are varying types of financial risk that can be protected against with a hedge. Those types of risks include: Commodity risk: the risk that arises from potential movements in the value of commodity contracts, which include agricultural products, metals, and energy products. Corporates exposed on the "procurement side" of the value chain require protection against rising commodity prices where these cannot be "passed on to the customer"; on the sales side, corporates look towards hedging against a decline in price. Both may hedge using commodity derivatives where available. Credit risk: the risk that money owing will not be paid by an obligor. Since credit risk is the natural business of banks, but an unwanted risk for commercial traders, an early market developed between banks and traders that involved selling obligations at a discounted rate. The contemporary practice in commerce settings is to purchase trade credit insurance; in an (investment) banking context, these risks can be hedged through credit derivatives. In the latter, analysts use models such as Jarrow–Turnbull and Merton / KMV to estimate the probability of default, and/or (portfolio-wide) will use a transition matrix of bond credit ratings to estimate the probability and impact of a "credit migration". See Fixed income analysis. Currency risk: the risk that a financial instrument or business transaction will be affected unfavorably by a change in exchange rates.
Foreign exchange risk hedging is used both by investors to deflect the risks they encounter when investing abroad, and by non-financial actors in the global economy for whom multi-currency activities are a "necessary evil" rather than a desired state of exposure. Interest rate risk: the risk that the value of an interest-bearing liability, such as a loan or a bond, will worsen due to an interest rate increase (see Bond valuation § Present value approach). Interest rate risks can be hedged using interest rate derivatives such as interest rate swaps; sensitivities here are measured using duration and convexity for bonds, and DV01 and key rate durations generally. At the portfolio level, cash-flow risks are typically managed via immunization or cashflow matching, while valuation risk is hedged through bond index futures or options. Equity risk: the risk that one's investments will depreciate because of stock market dynamics causing one to lose money. Volatility risk: the threat that an exchange rate movement poses to an investor's portfolio in a foreign currency. Volume risk: the risk that a customer demands more or less of a product than expected. Hedging equity and equity futures: Equity in a portfolio can be hedged by taking an opposite position in futures. To protect stock picking against systematic market risk, futures are shorted when equity is purchased, or long futures positions are taken when stock is shorted. One way to hedge is the market neutral approach. In this approach, an equivalent dollar amount in the stock trade is taken in futures – for example, by buying 10,000 GBP worth of Vodafone and shorting 10,000 GBP worth of FTSE futures (the index in which Vodafone trades). Another way to hedge is to be beta neutral. Beta is the historical correlation between a stock and an index. If the beta of a Vodafone stock is 2, then for a 10,000 GBP long position in Vodafone an investor would hedge with a 20,000 GBP equivalent short position in the FTSE futures. Futures contracts and forward contracts are means of hedging against the risk of adverse market movements. These originally developed out of commodity markets in the 19th century, but over the last fifty years a large global market developed in products to hedge financial market risk. Hedging equity and equity futures: Futures hedging Investors who primarily trade in futures may hedge their futures against synthetic futures. A synthetic in this case is a synthetic future comprising a call and a put position. Long synthetic futures means long call and short put at the same expiry price. To hedge against a long futures trade, a short position in synthetics can be established, and vice versa. Hedging equity and equity futures: Stack hedging is a strategy which involves buying various futures contracts that are concentrated in nearby delivery months to increase the liquidity position. It is generally used by investors to ensure the surety of their earnings for a longer period of time. Hedging equity and equity futures: Contract for difference A contract for difference (CFD) is a two-way hedge or swap contract that allows the seller and purchaser to fix the price of a volatile commodity. Consider a deal between an electricity producer and an electricity retailer, both of whom trade through an electricity market pool.
If the producer and the retailer agree to a strike price of $50 per MWh, for 1 MWh in a trading period, and if the actual pool price is $70, then the producer gets $70 from the pool but has to rebate $20 (the "difference" between the strike price and the pool price) to the retailer. Hedging equity and equity futures: Conversely, the retailer pays the difference to the producer if the pool price is lower than the agreed upon contractual strike price. In effect, the pool volatility is nullified and the parties pay and receive $50 per MWh. However, the party who pays the difference is "out of the money" because without the hedge they would have received the benefit of the pool price. Related concepts: Forwards: A contract specifying future delivery of an amount of an item, at a price decided now. The delivery is obligatory, not optional. Forward rate agreement (FRA): A contract specifying an interest rate amount to be settled at a pre-determined interest rate on the date of the contract. Option (finance): similar to a forward contract, but optional. Call option: A contract that gives the owner the right, but not the obligation, to buy an item in the future, at a price decided now. Put option: A contract that gives the owner the right, but not the obligation, to sell an item in the future, at a price decided now. Related concepts: Non-deliverable forwards (NDF): A strictly risk-transfer financial product similar to a forward rate agreement, but used only where monetary policy restrictions on the currency in question limit the free flow and conversion of capital. As the name suggests, NDFs are not delivered but settled in a reference currency, usually USD or EUR, where the parties exchange the gain or loss that the NDF instrument yields, and if the buyer of the controlled currency truly needs that hard currency, he can take the reference payout and go to the government in question and convert the USD or EUR payout. The insurance effect is the same; it's just that the supply of insured currency is restricted and controlled by government. See capital control. Related concepts: Interest rate parity and Covered interest arbitrage: The simple concept that two similar investments in two different currencies ought to yield the same return. If the two similar investments are not at face value offering the same interest rate return, the difference should conceptually be made up by changes in the exchange rate over the life of the investment. IRP basically provides the math to calculate a projected or implied forward rate of exchange. This calculated rate is not and cannot be considered a prediction or forecast, but rather is the arbitrage-free calculation for what the exchange rate is implied to be in order for it to be impossible to make a free profit by converting money to one currency, investing it for a period, then converting back and making more money than if a person had invested in the same opportunity in the original currency. Related concepts: Hedge fund: A fund which may engage in hedged transactions or hedged investment strategies.
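As a closing illustration, the arithmetic behind two of the examples above (the Company A/Company B pairs trade and the beta-neutral sizing rule) can be checked with a few lines of Python; the figures are the article's toy numbers, and the function names are our own.

```python
def pnl(shares, entry_price, price):
    """Profit/loss of a position in dollars; negative share counts denote shorts."""
    return shares * (price - entry_price)

# Day-1 portfolio from the long/short example
positions = [
    (1000, 1.00),   # long 1,000 shares of Company A at $1
    (-500, 2.00),   # short 500 shares of Company B at $2
]

scenarios = {
    "day 2 (industry rallies)": (1.10, 2.10),
    "day 3 (industry crashes)": (0.55, 1.05),
}
for label, prices in scenarios.items():
    total = sum(pnl(s, e, p) for (s, e), p in zip(positions, prices))
    print(f"{label}: net P&L = {total:+.2f}")
# -> day 2: +50.00 (the hedge drags on the rally)
# -> day 3: +25.00 (the hedge turns a $450 loss into a small profit)

# Beta-neutral sizing from the equity/futures section:
# hedge notional = beta x position notional
beta, position_gbp = 2.0, 10_000
print(f"FTSE futures short required: GBP {beta * position_gbp:,.0f}")
```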
**Genomics Digital Lab** Genomics Digital Lab: Genomics Digital Lab (GDL) is a browser-based series of educational games, simulations, and animations created by Spongelab Interactive. It is designed to teach high school students about biology, including photosynthesis, respiration, transcription, and translation. Genomics Digital Lab was released in 2009 and is available for purchase at home or school, or as a free 7-day trial. Game play: The user takes on the mission to save a dying plant. This begins in the introductory level, where the user must identify the correct light, gas, and liquid conditions to make a plant thrive and survive. Game play: In the intermediate level, the user enters each of the organelles (chloroplast, mitochondrion, and nucleus), where they must pass a variety of challenges. Each lesson has 3-4 levels, each starting with a tutorial, then getting progressively harder. In Lesson 2, the user is taken to the chloroplast, where their goal is to make sugar using CO2 and sunlight by playing the Light Reaction and Calvin Cycle games. In Lesson 3, the user is in the mitochondrion, where they have to convert the sugar into energy by playing the Glycolysis, Citric Acid Cycle, and Electron Transport Chain games. Finally, in Lesson 4, the user is in the nucleus, where they use the energy to build proteins by playing the Transcription and Translation games. Once a user completes all 8 games, they become a Master of the Cell. Game play: The Advanced Level consists of 5 text-based cases on real-world science topics. Scoring: In each game, points are awarded in the biological currency relevant to that game. Both the current score and the overall high score (as a number of sugars, ATP, or proteins) are recorded. Additional educational features: Plant Anatomy Explorer - includes whole plant anatomy, leaf structure, leaf tissue structure, plant cells, and major cell organelles Particle Map - a glossary of all the molecules, indexed by their organelle location, with a brief description Notepad - an online notebook where students can record their observations Available in English and French Compatible with digital whiteboards Supportive features for teachers: Genomics Digital Lab was designed to be easily used in the classroom and provides teachers with lesson plans, curriculum alignments, quizzes, and worksheets. Teachers can also monitor their students’ progress online, in real time. Reception: Genomics Digital Lab won first place in the Interactive Media category in both the 2008 and 2009 National Science Foundation International Science and Engineering Visualization Challenge [1], [2]. It was also featured in the September 26, 2008 issue of Science [3] and the February 19, 2010 issue of Science [4]. In 2009, Genomics Digital Lab was awarded a World Summit Award for best e-content in the e-Science and Technology category [5], as well as a Parents' Choice Award [6] and an Adobe Max Honorable Mention in the Education category [7]. In the news: Tran, Lisa (2009-10-19). "Playing the Biology". Teach Magazine. Archived from the original on 2010-04-25. Retrieved 2009-11-20. Hutchison, Bill (2009-10-03). "webMANIA - Teaching to a video game generation". CTV News. Retrieved 2009-11-20. Rainford, Lisa (2009-09-21). "Gaming company experiments at making science fun". Inside Toronto. Archived from the original on 2011-07-19. Retrieved 2009-11-20. Bernard, Sophie (2009-07-29). "La biologie est un jeu d'enfant, avec le Labo Digital de Génomique". Le Lien Multimédia. Retrieved 2009-07-29.
Cowan, Danny (2009-07-07). "Genomics Digital Lab Uses Gaming to Teach Biology". Serious Games Source. Retrieved 2009-07-23. Proudfoot, Shannon (2009-05-03). "Video games evolving as serious educational tools". Calgary Herald. Retrieved 2009-07-23. McNeely, Sean (2008-10-27). "New interactive video game gets students excited about plant biology". Backbone Magazine. Archived from the original on 2011-07-20. Retrieved 2009-07-23. Wurster, Paul (2008-07-02). "Have Some Fun in NECC Playgrounds" (PDF). L&L Daily Leader. Retrieved 2009-07-28.
**Vesiculodeferential artery** Vesiculodeferential artery: The vesiculodeferential artery, also known as the middle vesical artery, is an artery that supplies blood to the seminal vesicles. Structure: The vesiculodeferential artery arises from the superior vesical artery, which is a branch of the umbilical artery. Function: The vesiculodeferential artery supplies oxygenated blood to the seminal vesicles. History: The vesiculodeferential artery is also known as the middle vesical artery.
**Salt well** Salt well: A salt well (or brine well) is used to mine salt from caverns or deposits. Water is used as a solvent to dissolve the salt or halite deposits so that the salt can be extracted by pipe to an evaporation process, which results in either a brine or a dry product for sale or local use. In the United States during the 19th century, salt wells were a significant source of income for operators and the government. Locating underground salt deposits was usually based on the locations of existing salt springs. In mountainous areas, a similar technique called sink works (from German Sinkwerk) is used. History: The Chinese have been using brine wells and a form of salt solution mining as part of their civilization for more than 2,000 years. The first recorded salt well in China was dug in the Sichuan province around 2,250 years ago. This was the first time that ancient water well technology was applied successfully to the exploitation of salt, and it marked the beginning of Sichuan's salt drilling industry. Shaft wells were sunk as early as 220 BC in the Sichuan and Yunnan Provinces. By 1035 AD, Chinese in the Sichuan area were using percussion drilling to recover deep brines, a technique that would not be introduced to the West for another 600 to 800 years. Medieval and early modern European travelers to China between 1400 and 1700 AD reported salt and natural gas production from dense networks of brine wells. Archaeological evidence of the Song dynasty salt drilling tools used is kept and displayed in the Zigong Salt Industry Museum. Many of the wells were sunk deeper than 450 m, and at least one well was more than 1,000 m deep. The medieval Venetian traveler to China, Marco Polo, reported an annual production in a single province of more than 30,000 tonnes of brine during his time there. According to Salt: A World History, a Qing dynasty well, also in Zigong, "continued down to 3,300 feet (1,000 m), making it at the time the deepest drilled well in the world."