**2002 San Diego Padres season** The 2002 San Diego Padres season was the 34th season in franchise history. Offseason: December 27, 2001: Alan Embree was signed as a free agent with the San Diego Padres. February 24, 2002: Trenidad Hubbard was signed as a free agent with the San Diego Padres. March 16, 2002: Mark Sweeney was signed as a free agent with the San Diego Padres. Regular season: Opening Day starters: D'Angelo Jiménez – 2B; Mark Kotsay – CF; Ron Gant – LF; Phil Nevin – 1B; Bubba Trammell – RF; Deivi Cruz – SS; Wiki González – C; Ramon Vazquez – 3B; Kevin Jarvis – SP. Notable transactions: June 26, 2002: Alan Embree was traded by the San Diego Padres with Andy Shibilo (minors) to the Boston Red Sox for Dan Giese and Brad Baker (minors). July 15, 2002: Mark Sweeney was released by the San Diego Padres. July 31, 2002: Jason Bay was traded by the New York Mets with Josh Reynolds (minors) and Bobby Jones to the San Diego Padres for Steve Reed and Jason Middlebrook. August 13, 2002: Mark Sweeney was signed as a free agent with the San Diego Padres. August 16, 2002: Mark Sweeney was released by the San Diego Padres. September 4, 2002: Trenidad Hubbard was released by the San Diego Padres. Player stats: Batting note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in. Pitching note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts. Award winners: 2002 Major League Baseball All-Star Game – Trevor Hoffman.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**238P/Read** 238P/Read (P/2005 U1) is a main-belt comet discovered on 24 October 2005 by astronomer Michael T. Read using the Spacewatch 36-inch telescope at Kitt Peak National Observatory. It has an orbit within the asteroid belt and has displayed the coma of a traditional comet. It fits the definition of an Encke-type comet (T_Jupiter > 3; a < a_Jupiter). Description: Before it was discovered, 238P came to perihelion on 27 July 2005. When it was discovered on 24 October 2005, it showed vigorous cometary activity, which continued until 27 December 2005; outgassing likely began at least two months before discovery. The activity of 238P is much stronger than that of 133P/Elst-Pizarro and 176P/LINEAR, which may indicate that the impact assumed to have triggered 238P's activity occurred very recently. Observations of 238P while it was inactive in 2007 suggest that it has a small nucleus only about 0.6 km in diameter. It came to perihelion on 10 March 2011, 22 October 2016, and 5 June 2022, and will next come to perihelion on 24 January 2028. 238P/Read was the target of a mission proposal in NASA's Discovery Program in the 2010s called Proteus, but it was not selected for further development. The Discovery Program's founding mission was to an asteroid, but a near-Earth asteroid rather than a main-belt one; a mission to a main-belt asteroid was proposed in the 1990s (see also Deep Impact (spacecraft)). The comet was observed by the James Webb Space Telescope during the 2022 perihelion, and its coma was found spectroscopically to be composed of water vapor, due to water sublimation, lacking a significant CO2 component.
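The Encke-type criterion mentioned above (Tisserand parameter with respect to Jupiter greater than 3, with a semi-major axis inside Jupiter's) can be checked numerically. A minimal sketch using the standard Tisserand formula; the orbital elements for 238P below are approximate round numbers used for illustration, not authoritative values:

```python
import math

def tisserand_jupiter(a, e, i_deg, a_jupiter=5.204):
    """Tisserand parameter with respect to Jupiter.

    a: semi-major axis (AU); e: eccentricity; i_deg: inclination (degrees).
    T_J = a_J/a + 2*cos(i)*sqrt((a/a_J)*(1 - e^2))
    """
    i = math.radians(i_deg)
    return (a_jupiter / a
            + 2 * math.cos(i) * math.sqrt((a / a_jupiter) * (1 - e ** 2)))

# Approximate, illustrative elements for 238P/Read: a ~ 3.17 AU, e ~ 0.25, i ~ 1.3 deg
t_j = tisserand_jupiter(3.17, 0.25, 1.3)
print(t_j > 3.0)  # consistent with an Encke-type (asteroid-like) orbit
```

Values of T_Jupiter above 3 characterize asteroid-like orbits decoupled from Jupiter, which is why an active body with such an orbit stands out as a main-belt comet.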
**Polypyrimidine tract-binding protein** Polypyrimidine tract-binding protein, also known as PTB or hnRNP I, is an RNA-binding protein. PTB functions mainly as a splicing regulator, although it is also involved in alternative 3' end processing, mRNA stability and RNA localization. Two 2020 studies showed that depleting PTB mRNA in astrocytes can convert these astrocytes into functional neurons. These studies also show that such a treatment can be applied to the substantia nigra in mouse models of Parkinson's disease to convert astrocytes into dopaminergic neurons and, as a consequence, restore motor function in these mice.
**2011 Bulgaria foot-and-mouth disease outbreak** The 2011 Bulgaria foot-and-mouth disease outbreak was an outbreak of foot-and-mouth disease (FMD) in Southeastern Bulgaria in 2011. FMD was first confirmed on 5 January 2011 in a wild boar that had been shot on 30 December 2010. This animal is believed to have crossed the Bulgarian-Turkish border near the village of Kosti, Burgas Province, in the Strandzha Mountains. A necropsy revealed foot-and-mouth disease. Following this, 37 infected animals were discovered in the village of Kosti, and all susceptible livestock there were culled. Burgas Province and seven other neighbouring provinces declared a quarantine. On 14 January a new outbreak was suspected in the neighbouring village of Rezovo, believed to have been carried by a Turkish cattle herd; on 17 January the presence of the disease was confirmed. The Bulgarian authorities ordered the culling of all susceptible livestock in Rezovo. Owners in the two villages were promised compensation for their losses. The mayor of Tsarevo Municipality, Petko Arnaudov, proposed constructing a wire fence along the Turkish border to prevent further movement of diseased animals into Bulgaria; the proposal was accepted by the Ministry of Agriculture and Forestry. The authorities also ordered disinfection of all vehicles crossing from Turkey, where a major outbreak was occurring. On 31 January a new outbreak was discovered in the village of Gramatikovo, Malko Tarnovo Municipality, which lies within the 10 km prevention zone around the first two outbreaks. Blood tests found 13 animals infected with the disease. The authorities ordered the culling of all susceptible livestock in the village, 149 animals in total: 1 cow, 38 sheep and 110 goats. Again, compensation for the losses was promised. On 25 March two new outbreaks were discovered in the villages of Granichar and Kirovo. The authorities ordered the culling of 173 infected animals. The last new case was detected in April 2011, and Bulgaria was declared FMD-free in July 2011. Prior to this outbreak, Bulgaria had not had a case of FMD since 1996.
**Huaguang Zhang** Huaguang Zhang is an engineer at Northeastern University in Shenyang, China. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2015 for his contributions to the stability analysis of recurrent neural networks and the intelligent control of nonlinear systems.
**CODY Assessment** The CODY Assessment (Computer-aided Dyscalculia test and training) is a diagnostic screener for elementary school children from 2nd to 4th grade used to detect math weakness or dyscalculia. It also generates a detailed report evaluating each child's mathematical skills. It was developed in 2013 as part of the CODY Project, which partnered psychologists at the University of Münster with technology experts at Kaasa health, a German software company. Application: The CODY Assessment is part of the mathematical training software Meister Cody ‒ Talasia. Children take the assessment, which creates a detailed report evaluating their math skills, when they begin the program and again 30 days later. Additionally, the CODY Project used the assessment in its research with several elementary schools to evaluate the mathematical skills of children before and after various instructional and intervention methods. Set-up: The CODY Assessment takes approximately 30–40 minutes and covers four areas: core markers (dot enumeration and magnitude comparison), number processing, calculation, and working memory skills. It comprises several subtests, which evaluate both mathematical and cognitive skills: Reaction Time Test; Dot Enumeration; Magnitude Comparison (symbolic and mixed); Transcoding; Calculation; Number Sets; Number Line; Matrix Span; Missing Number. The subtests were inspired by the scientific findings of Brian Butterworth, who developed the theoretical background for computer-based screening tests for detecting dyscalculia. Validation: The University of Münster validated the CODY Assessment. The validity and reliability of the test procedure were extensively evaluated with a sample of more than 600 elementary school children from the second to fourth grade. The specificity of the CODY Assessment is 81% and the sensitivity is 76%. The Ratz index is 0.68, which indicates a good level of classification accuracy.
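Sensitivity and specificity are standard screening statistics computed from a confusion matrix. A minimal sketch of how they are derived; the counts below are hypothetical numbers chosen only to reproduce the reported percentages, not figures from the CODY validation study:

```python
def screening_stats(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: 76 of 100 affected children correctly flagged,
# 405 of 500 unaffected children correctly passed.
sens, spec = screening_stats(tp=76, fn=24, tn=405, fp=95)
print(sens, spec)  # 0.76 0.81
```

Sensitivity measures how many true cases the screener catches; specificity measures how many unaffected children it correctly leaves unflagged.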
**Deadheading (employee)** Deadheading is the practice of carrying, free of charge, a transport company's own staff on a normal passenger trip so that they can be in the right place to begin their duties. In United States railway usage, the term may also be used for movement of train crews to or from a train using another means of vehicular transportation, as passenger train service is infrequent or nonexistent in many areas. Notable deadheaders: A day prior to the Lion Air Flight 610 crash in October 2018, a deadheading pilot on the 737 (PK-LQP)'s final successful flight reportedly saved the plane from the same malfunctioning flight control system that caused the crash the next day, killing 189 people. He is said to have been in the cockpit jump seat when the malfunction occurred; he identified the problem and advised the active crew on how to address it. One of the four survivors of Japan Airlines Flight 123 in August 1985 was a deadheading flight attendant, Yumi Ochiai. She helped administer oxygen to passengers after the plane suffered explosive decompression, and she survived because she was wedged between several seats during the crash, which protected her from serious injury. In July 1989, United Airlines Flight 232, a McDonnell Douglas DC-10, lost all hydraulic systems and flight controls, an event considered so improbable that no backup flight controls were provided and no emergency procedures had been established for pilots. Dennis Edward Fitch, a deadheading DC-10 flight instructor who had investigated how to fly the airliner after a total hydraulic failure in the wake of the JAL 123 crash, helped the flight crew guide the aircraft to a semi-controlled emergency landing. In April 1994, on FedEx Flight 705, employee Auburn Calloway attempted to hijack the McDonnell Douglas DC-10 on which he was deadheading, intending to crash it as part of an insurance fraud scheme, but he was repelled by the combined efforts of the plane's crew. Confidence trickster Frank Abagnale impersonated a pilot and supposedly deadheaded on more than 250 flights in the mid-1960s.
**TestComplete** TestComplete is a functional automated testing platform developed by SmartBear Software. TestComplete gives testers the ability to create automated tests for Microsoft Windows, Web, Android, and iOS applications. Tests can be recorded, scripted or manually created with keyword-driven operations and used for automated playback and error logging. TestComplete contains three modules: Desktop, Web and Mobile. Each module contains functionality for creating automated tests on that specified platform. TestComplete is used for testing many different application types including Web, Windows, Android, iOS, WPF, HTML5, Flash, Flex, Silverlight, .NET, VCL and Java. It automates functional testing and back-end testing such as database testing. Overview: Uses. TestComplete is used to create and automate many different software test types. Record-and-playback test creation records a tester performing a manual test and allows it to be played back and maintained repeatedly as an automated test. Recorded tests can be modified later by testers to create new tests or enhance existing tests with more use cases. Overview: Main features. Keyword Testing: TestComplete has a built-in keyword-driven test editor that consists of keyword operations corresponding to automated testing actions. Scripted Testing: TestComplete has a built-in code editor that helps testers write scripts manually, along with a set of plug-ins that assist with script creation. Test Record and Playback: TestComplete records the key actions necessary to replay the test and discards all unneeded actions. Distributed Testing: TestComplete can run several automated tests across separate workstations or virtual machines. Access to Methods and Properties of Internal Objects: TestComplete reads the names of the visible elements and many internal elements of Delphi, C++Builder, .NET, WPF, Java and Visual Basic applications and allows test scripts to access these values for verification or use in tests. Bug Tracking Integration: TestComplete includes issue-tracking templates that can be used to create or modify items stored in issue-tracking systems. TestComplete currently supports Microsoft Visual Studio 2005, 2008 and 2010 Team System, Bugzilla, Jira and AutomatedQA AQdevTeam. Data-Driven Testing: Data-driven testing with TestComplete means using a single test to verify many different test cases by driving the test with input and expected values from an external data source, instead of using the same hard-coded values each time the test runs. COM-based Open Architecture: TestComplete's engine is based on an open, COM-based API. It is source-language independent, and can read debugger information and use it at runtime through the TestComplete Debug Info Agent. Test Visualizer: TestComplete automatically captures screenshots during test recording and playback, enabling quick comparisons between expected and actual screens during a test run. Overview: Extensions and SDK. Everything visible in TestComplete — panels, project items, specific scripting objects, and others — is implemented as a plug-in. These plug-ins are included in the product and installed along with the other TestComplete modules. Users can also create their own plug-ins to extend TestComplete with functionality for their own needs. For example, custom or third-party plug-ins can provide: support for custom controls; custom keyword test operations; new scripting objects; custom checkpoints; commands for test result processing; panels; project items; and menu and toolbar items. Supported scripting languages: JavaScript, Python, VBScript, JScript, C++Script (a JScript-based dialect specific to TestComplete, deprecated in version 12), C#Script (a JScript-based dialect specific to TestComplete, deprecated in version 12), DelphiScript and VB. Supported applications: all 32-bit and 64-bit Windows applications, with extended support (access to internal objects, methods and properties) for the following: .NET (C#, VB.NET, JScript.NET, VCL.NET, C#Builder, Python .NET, Perl .NET, etc.); WPF; Java (AWT, SWT, Swing, WFC); Android; iOS; Xamarin (with the implementation of the Falafel Software bridge); Sybase PowerBuilder, Microsoft FoxPro, Microsoft Access, Microsoft InfoPath; web browsers (Internet Explorer, Firefox, Google Chrome, Opera, Safari, Chromium); Visual C++; Visual Basic; Visual FoxPro; Delphi; C++Builder; Adobe Flash; Adobe Flex; Adobe AIR; Microsoft Silverlight; HTML5; PhoneGap. Awards: The World of Software Development - Dr. Dobb's Jolt Awards: 2005, 2007, 2008, 2010, 2013, 2014. ATI Automation Honors: 2010, 2014 (Overall subcategory; Java subcategory). asp.netPRO Readers' Choice Awards: 2004, 2005, 2006, 2007, 2009. Windows IT Pro Editors' Best and Community Choice Awards: 2009. Delphi Informant Readers Choice Awards, Best in the Testing/QA Tool category: 2003, 2004.
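The data-driven testing pattern described above (one test body driven by many externally supplied input/expected pairs) can be sketched generically. This is a plain Python illustration of the pattern, not TestComplete's actual API; in TestComplete the data source would typically be an Excel sheet, CSV file, or database table:

```python
import csv
import io

# External data source; kept inline here for the sketch.
DATA = """a,b,expected
2,3,5
10,-4,6
0,0,0
"""

def add(a, b):
    """The unit under test (a stand-in for any tested operation)."""
    return a + b

def run_data_driven_test():
    """Run one test body once per data row; collect failing rows."""
    failures = []
    for row in csv.DictReader(io.StringIO(DATA)):
        result = add(int(row["a"]), int(row["b"]))
        if result != int(row["expected"]):
            failures.append(row)
    return failures

print(run_data_driven_test())  # [] -> every data row passed
```

The point of the pattern is that adding a test case means adding a data row, not writing new test code.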
**OD600** OD600 (also written as O.D. 600, D600, or o.d. 600) is an abbreviation indicating the optical density of a sample measured at a wavelength of 600 nm. It is commonly used in spectrophotometry for estimating the concentration of bacteria or other cells in a liquid, as light at 600 nm does little to damage the cells or hinder their growth. Since the optical density in OD600 measurements results from light scattering rather than absorption, cell size and shape, as well as dead cells and debris, can contribute to the measured value. Different cell types at the same density (e.g., cells/mL) may therefore show different OD600 values, even when measured on the same instrument. Measuring the concentration can indicate the growth stage of a cultured cell population, i.e. whether it is in the lag phase, log phase, or stationary phase. This is done by measuring the sample's absorbance at 600 nm with a spectrophotometer; a growth curve can then be constructed by taking absorbance measurements as a function of time. As a general rule, cells should be harvested towards the end of the log phase, using the optical density of the samples to determine when this point has been reached. Cells are routinely grown until the absorbance at 600 nm (the OD600) reaches approximately 0.4 prior to induction or harvesting. OD600 is preferable to UV spectroscopy when measuring the growth of a cell population over time because, at this wavelength, the cells are not killed as they would be under too much UV radiation. UV radiation has also been shown to cause small to medium-sized mutations in bacteria, potentially altering or destroying genes of interest. OD600 is a type of turbidity measurement.
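In practice, the OD600 reading is often converted to an approximate cell density with a linear calibration factor. A minimal sketch, assuming the commonly quoted rule of thumb of roughly 8×10^8 E. coli cells/mL per OD600 unit; the actual factor depends on the strain, instrument, and cuvette geometry and should be calibrated locally:

```python
# Rough E. coli rule of thumb; strain- and instrument-dependent (assumption).
CELLS_PER_ML_PER_OD = 8e8

def od600_to_cells_per_ml(od600):
    """Linear estimate of cell density from OD600.

    Valid only in the low-OD range (roughly OD < 1), where scattering
    is approximately proportional to cell count; dense cultures must
    be diluted before measurement.
    """
    return od600 * CELLS_PER_ML_PER_OD

print(od600_to_cells_per_ml(0.4))  # ~3.2e8 cells/mL at the typical induction point
```

This also illustrates why OD600 alone cannot distinguish live cells from dead cells or debris: anything that scatters light contributes to the reading.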
**Superior transverse scapular ligament** The superior transverse ligament (transverse or suprascapular ligament) converts the suprascapular notch into a foramen or opening. It is a thin, flat fascicle, narrower at the middle than at the extremities, attached by one end to the base of the coracoid process and by the other to the medial end of the scapular notch. The suprascapular nerve always runs through the foramen, while the suprascapular vessels cross over the ligament in most cases. The suprascapular ligament can become completely or partially ossified. The ligament has also been found to split, forming a doubled space within the suprascapular notch.
**Scleraxis** The scleraxis protein is a member of the basic helix-loop-helix (bHLH) superfamily of transcription factors. Currently, two genes (SCXA and SCXB) have been identified that code for identical scleraxis proteins. Function: It is thought that early scleraxis-expressing progenitor cells lead to the eventual formation of tendon tissue and other muscle attachments. Scleraxis is involved in mesoderm formation and is expressed in the syndetome (a collection of embryonic tissue that develops into tendon and blood vessels) of developing somites (primitive segments or compartments of embryos). Inducing scleraxis expression: The syndetome location within the somite is determined by FGF secreted from the center of the myotome (a collection of embryonic tissue that develops into skeletal muscle); the FGF then induces the adjacent anterior and posterior sclerotome (a collection of embryonic tissue that develops into the axial skeleton) to adopt a tendon cell fate. This places future scleraxis-expressing cells between the two tissue types they will ultimately join. Scleraxis expression is seen throughout the entire sclerotome (rather than just the sclerotome directly anterior and posterior to the myotome) when FGF8 is overexpressed, demonstrating that all sclerotome cells are capable of expressing scleraxis in response to FGF signaling. While the FGF interaction has been shown to be necessary for scleraxis expression, it is still unclear whether the FGF signaling pathway induces scleraxis expression in the syndetome directly or indirectly through a secondary signaling pathway. Most likely, the syndetomal cells, by reading the FGF concentration gradient coming from the myotome, can precisely determine their location and begin expressing scleraxis. Much of embryonic development follows this model of inducing specific cell fates through the reading of surrounding signaling-molecule concentration gradients. Background: bHLH transcription factors have been shown to have a wide array of functions in developmental processes. More precisely, they have critical roles in the control of cellular differentiation, proliferation and the regulation of oncogenesis. To date, 242 eukaryotic proteins belonging to the HLH superfamily have been reported. They have varied expression patterns in all eukaryotes from yeast to humans. Structurally, bHLH proteins are characterised by a "highly conserved domain containing a stretch of basic amino acids adjacent to two amphipathic α-helices separated by a loop". These helices have important functional properties, forming part of the DNA-binding and transcription-activating domains. With respect to scleraxis, the bHLH region spans amino acid residues 78 to 131. A proline-rich region is also predicted to lie between residues 161–170. A stretch of basic residues, which aids in DNA binding, is found closer to the N-terminal end of scleraxis. HLH proteins that lack this basic domain have been shown to negatively regulate the activities of bHLH proteins and are called inhibitors of differentiation (Id). Basic HLH proteins normally function as dimers and bind to a specific hexanucleotide DNA sequence (CANNTG) known as an E-box, thus switching on the expression of various genes involved in cellular development and survival.
**CST11** Cystatin-11 is a protein that in humans is encoded by the CST11 gene. The cystatin superfamily encompasses proteins that contain multiple cystatin-like sequences. Some of the members are active cysteine protease inhibitors, while others have lost or perhaps never acquired this inhibitory activity. There are three inhibitory families in the superfamily: the type 1 cystatins (stefins), the type 2 cystatins, and the kininogens. The type 2 cystatin proteins are a class of cysteine proteinase inhibitors found in a variety of human fluids and secretions. The cystatin locus on chromosome 20 contains the majority of the type 2 cystatin genes and pseudogenes. This gene is located in the cystatin locus and encodes an epididymal-specific protein whose specific function has not been determined. Alternative splicing yields two variants encoding distinct isoforms.
**Reciprocal inhibition** Reciprocal inhibition describes the relaxation of muscles on one side of a joint to accommodate contraction on the other side. In some allied health disciplines, this is known as reflexive antagonism. The central nervous system sends a message to the agonist muscle to contract, while the motor neurons supplying the antagonist muscle are inhibited, reducing its tension and causing it to relax. Mechanics: Joints are controlled by two opposing sets of muscles, extensors and flexors, which work in synchrony for smooth movement. When a muscle spindle is stretched, the stretch reflex is activated, and the opposing muscle group must be inhibited to prevent it from working against the contraction of the homonymous muscle. This inhibition is accomplished by the actions of an inhibitory interneuron in the spinal cord. The afferent of the muscle spindle bifurcates in the spinal cord. One branch innervates the alpha motor neuron that causes the homonymous muscle to contract, producing the reflex. The other branch innervates the inhibitory interneuron, which in turn innervates the alpha motor neuron that synapses onto the opposing muscle. Because the interneuron is inhibitory, it prevents the opposing alpha motor neuron from firing, thereby reducing the contraction of the opposing muscle. Without this reciprocal inhibition, both groups of muscles might contract simultaneously and work against each other. If opposing muscles were to contract at the same time, a muscle tear could occur. This may happen during physical activities such as running, during which opposing muscles engage and disengage sequentially to produce coordinated movement. Reciprocal inhibition facilitates ease of movement and is a safeguard against injury. However, if a "misfiring" of motor neurons causes simultaneous contraction of opposing muscles, a tear can occur.
For example, if the quadriceps femoris and hamstrings contract simultaneously at high intensity, the stronger muscle group (typically the quadriceps) overpowers the weaker one (the hamstrings). This can result in a common muscular injury known as a pulled hamstring, more accurately called a muscle strain. Duration: The phenomenon is fleeting, incomplete, and weak. For example, when the triceps brachii is stimulated, the biceps is reflexively inhibited. The incompleteness of the effect is related to postural and functional tone. Also, some reflexes in vivo are polysynaptic, with entire muscle groups responding to noxious stimuli. Application in physical therapy: Reciprocal inhibition is the original notion behind indirect muscle energy techniques. While this notion is now understood to be incomplete, the clinical mechanism of reflexive antagonism continues to be useful in physical therapy. Muscle energy techniques that use reflexive antagonism, such as rapid deafferentation techniques, are guideline-based techniques and protocols that exploit reflexive pathways and reciprocal inhibition as a means of switching off inflammation, pain, and protective spasm for entire synergistic muscle groups or for individual muscles and soft tissue structures.
**GPS disciplined oscillator** A GPS clock, or GPS disciplined oscillator (GPSDO), is a combination of a GPS receiver and a high-quality, stable oscillator, such as a quartz or rubidium oscillator, whose output is controlled to agree with the signals broadcast by GPS or other GNSS satellites. GPSDOs work well as a source of timing because the satellite time signals must be accurate in order to provide positional accuracy for GPS navigation; these signals are accurate to nanoseconds and provide a good reference for timing applications. Applications: GPSDOs serve as an indispensable source of timing in a range of applications, and some technology applications would not be practical without them. GPSDOs are used as the basis for Coordinated Universal Time (UTC) around the world. UTC is the officially accepted standard for time and frequency, controlled by the International Bureau of Weights and Measures (BIPM); timing centers around the world use GPS to align their own time scales to UTC. GPS-based standards are used to provide synchronization to wireless base stations and serve well in standards laboratories as an alternative to cesium-based references. GPSDOs can also be used to synchronize multiple RF receivers, allowing RF phase-coherent operation among the receivers in applications such as passive radar and ionosondes. Operation: A GPSDO works by disciplining, or steering, a high-quality quartz or rubidium oscillator, locking its output to a GPS signal via a tracking loop. The disciplining mechanism works in a similar way to a phase-locked loop (PLL), but in most GPSDOs the loop filter is replaced with a microcontroller that uses software to compensate not only for the phase and frequency changes of the local oscillator, but also for the "learned" effects of aging, temperature, and other environmental parameters. One of the keys to the usefulness of a GPSDO as a timing reference is the way it combines the stability characteristics of the GPS signal and of the oscillator controlled by the tracking loop. GPS receivers have excellent long-term stability (as characterized by their Allan deviation) at averaging times greater than several hours; however, their short-term stability is degraded by limitations of the internal resolution of the one pulse per second (1PPS) reference timing circuits, signal propagation effects such as multipath interference, atmospheric conditions, and other impairments. On the other hand, a quality oven-controlled oscillator has better short-term stability but is susceptible to thermal, aging, and other long-term effects. A GPSDO aims to use the best of both sources, combining the short-term stability of the oscillator with the long-term stability of the GPS signals to give a reference source with excellent overall stability characteristics. GPSDOs typically phase-align the internal flywheel oscillator to the GPS signal by using dividers to generate a 1PPS signal from the reference oscillator, phase-comparing this 1PPS signal to the GPS-generated 1PPS signal, and using the phase differences to steer the local oscillator frequency in small adjustments via the tracking loop. This differentiates GPSDOs from their cousins, numerically controlled oscillators (NCOs). Rather than disciplining an oscillator via frequency adjustments, NCOs typically use a free-running, low-cost crystal oscillator and adjust the output phase digitally, lengthening or shortening it many times per second in large phase steps, ensuring that on average the number of phase transitions per second is aligned to the GPS receiver reference source. This guarantees frequency accuracy at the expense of high phase noise and jitter, a degradation that true GPSDOs do not suffer. When the GPS signal becomes unavailable, the GPSDO enters a state of holdover, in which it tries to maintain accurate timing using only the internal oscillator; sophisticated algorithms compensate for the aging and temperature sensitivity of the oscillator while the GPSDO is in holdover. The use of Selective Availability (SA) prior to May 2000 restricted the accuracy of GPS signals available for civilian use and in turn limited the accuracy of GPSDO-derived timing; the turning off of SA resulted in a significant increase in the accuracy that GPSDOs can offer. GPSDOs are capable of generating frequency accuracies and stabilities on the order of parts per billion even for entry-level, low-cost units, and parts per trillion for more advanced units, within minutes after power-on, and are thus among the highest-accuracy physically derived reference standards available.
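The 1PPS phase-comparison steering described above can be sketched as a simple software PI (proportional-integral) loop, standing in for the microcontroller that replaces the analog loop filter. This is a toy simulation with invented constants, not any vendor's firmware: the local oscillator has an unknown fractional frequency error, the 1PPS phase error is sampled once per second, and the controller's integral term drives that error to zero:

```python
def simulate_gpsdo(freq_error=1e-7, kp=0.5, ki=0.05, seconds=300):
    """Toy GPSDO disciplining loop.

    freq_error: the oscillator's constant fractional frequency offset
                (the disturbance the loop must cancel).
    kp, ki:     PI gains (illustrative values, assumed stable).
    Returns the magnitude of the final 1PPS phase error, in seconds.
    """
    phase = 0.0      # accumulated 1PPS phase error vs. GPS, in seconds
    integral = 0.0   # integral of the phase error
    for _ in range(seconds):
        steer = -(kp * phase + ki * integral)  # control output (EFC analogue)
        phase += freq_error + steer            # phase integrates net freq error
        integral += phase
    return abs(phase)

print(simulate_gpsdo() < 1e-12)  # True: the loop steers the phase error to ~zero
```

The integral term is what lets the loop cancel a constant frequency offset completely; in a real GPSDO the same software also models aging and temperature so that holdover performance degrades gracefully when GPS is lost.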
**Pyrimidine dimer** Pyrimidine dimer: Pyrimidine dimers are molecular lesions formed from thymine or cytosine bases in DNA via photochemical reactions, commonly associated with direct DNA damage. Ultraviolet light (UV; particularly UVC) induces the formation of covalent linkages between consecutive bases along the nucleotide chain in the vicinity of their carbon–carbon double bonds. The photo-coupled dimers are fluorescent. The dimerization reaction can also occur among pyrimidine bases in dsRNA (double-stranded RNA)—uracil or cytosine. Two common UV products are cyclobutane pyrimidine dimers (CPDs) and 6–4 photoproducts. These premutagenic lesions alter the structure of the DNA helix and cause non-canonical base pairing. Specifically, adjacent thymines or cytosines in DNA will form a cyclobutane ring when joined together and cause a distortion in the DNA. This distortion prevents replication or transcription machinery beyond the site of the dimerization. Up to 50–100 such reactions per second might occur in a skin cell during exposure to sunlight, but are usually corrected within seconds by photolyase reactivation or nucleotide excision repair. In humans, the most common form of DNA repair is nucleotide excision repair (NER). In contrast, organisms such as bacteria can counterintuitively harvest energy from the sun to fix DNA damage from pyrimidine dimers via photolyase activity. If these lesions are not fixed, polymerase machinery may misread or add in the incorrect nucleotide to the strand. If the damage to the DNA is overwhelming, mutations can arise within the genome of an organism and may lead to the production of cancer cells. Uncorrected lesions can inhibit polymerases, cause misreading during transcription or replication, or lead to arrest of replication. It causes sunburn and it triggers the production of melanin. Pyrimidine dimers are the primary cause of melanomas in humans. 
Types of dimers: A cyclobutane pyrimidine dimer (CPD) contains a four-membered ring arising from the coupling of the two double-bonded carbons of each of the pyrimidines. Such dimers interfere with base pairing during DNA replication, leading to mutations. A 6–4 photoproduct (6–4 pyrimidine–pyrimidone or 6–4 pyrimidine–pyrimidinone) is an alternate dimer consisting of a single covalent bond between the carbon at the 6 position of one ring and the carbon at the 4 position of the ring on the next base. This type of conversion occurs at one third the frequency of CPDs but is more mutagenic. A third type of lesion is a Dewar pyrimidinone, formed by reversible isomerization of the 6–4 photoproduct upon further exposure to light. Mutagenesis: Translesion polymerases frequently introduce mutations at pyrimidine dimers, both in prokaryotes (SOS mutagenesis) and in eukaryotes. Although thymine–thymine CPDs (thymine dimers) are the most frequent lesions caused by UV light, translesion polymerases are biased toward insertion of adenines, so TT dimers are often replicated correctly. On the other hand, any cytosine involved in a CPD is prone to deamination, inducing a C to T transition. DNA repair: Pyrimidine dimers introduce local conformational changes in the DNA structure, which allow recognition of the lesion by repair enzymes. In most organisms (excluding placental mammals such as humans) they can be repaired by photoreactivation, a repair process in which photolyase enzymes reverse CPDs via photochemical reactions. Some photolyases can also repair 6–4 photoproducts of UV-induced DNA damage. Photolyase enzymes utilize flavin adenine dinucleotide (FAD) as a cofactor in the repair process. The UV dose that reduces a population of wild-type yeast cells to 37% survival is equivalent (assuming a Poisson distribution of hits) to the UV dose that causes an average of one lethal hit per cell.
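The 37% figure is the Poisson zero-hit probability: if lethal hits are distributed with mean m per cell, the surviving (zero-hit) fraction is P(0) = e^(-m), which at m = 1 is e^(-1) ≈ 0.37. A minimal numerical check:

```python
import math

# Surviving fraction = probability of zero lethal hits under a Poisson
# distribution with the given mean number of lethal hits per cell.
def surviving_fraction(mean_lethal_hits):
    return math.exp(-mean_lethal_hits)

print(round(surviving_fraction(1.0), 3))  # 0.368
```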
The number of pyrimidine dimers induced per haploid genome at this dose was measured as 27,000. A mutant yeast strain defective in the three pathways by which pyrimidine dimers were known to be repaired in yeast was also tested for UV sensitivity. In this case only one or, at most, two unrepaired pyrimidine dimers per haploid genome were lethal to the cell. These findings indicate that the repair of pyrimidine dimers in wild-type yeast is highly efficient. DNA repair: Nucleotide excision repair, sometimes termed "dark reactivation", is a more general mechanism for repair of lesions and is the most common form of DNA repair for pyrimidine dimers in humans. In this process, cellular machinery locates the dimerized nucleotides and excises the lesion. Once the CPD is removed, the resulting gap in the DNA strand is filled in by synthesizing nucleotides using the undamaged complementary strand as a template. Xeroderma pigmentosum (XP) is a rare genetic disease in humans in which genes encoding NER proteins are mutated, reducing the ability to repair the pyrimidine dimers that form as a result of UV damage. Individuals with XP are also at a much higher risk of cancer than others, with a greater than 5,000-fold increased risk of developing skin cancers. Common features and symptoms of XP include skin discoloration and the formation of multiple tumors following UV exposure. A few organisms have other ways to perform repairs: spore photoproduct lyase, found in spore-forming bacteria, returns thymine dimers to their original state; deoxyribodipyrimidine endonucleosidase, found in bacteriophage T4, is a base excision repair enzyme specific for pyrimidine dimers that is also able to cut open the resulting AP site. DNA repair: Another type of repair mechanism, conserved in humans and other organisms, is translesion synthesis.
Typically, the lesion associated with the pyrimidine dimer blocks cellular machinery from synthesizing past the damaged site. In translesion synthesis, however, the CPD is bypassed by translesion polymerases, and replication and/or transcription machinery can continue past the lesion. One specific translesion DNA polymerase, DNA polymerase η, is deficient in individuals with the variant form of xeroderma pigmentosum (XP-V). Effect of topical sunscreen and effect of absorbed sunscreen: Direct DNA damage is reduced by sunscreen, which also reduces the risk of developing a sunburn. When the sunscreen is at the surface of the skin, it filters the UV rays, attenuating their intensity. Even when sunscreen molecules have penetrated into the skin, they protect against direct DNA damage, because the UV light is absorbed by the sunscreen and not by the DNA. Sunscreen primarily works by absorbing UV light from the sun through organic compounds such as oxybenzone or avobenzone. These compounds absorb UV energy and transition into higher-energy states; when they return to lower-energy states, the initial UV energy is released as heat. This absorption reduces the risk of DNA damage and the formation of pyrimidine dimers. UVA light makes up 95% of the UV light that reaches Earth, whereas UVB light makes up only about 5%. UVB is the form of UV light responsible for tanning and burning. Sunscreens work to protect against both UVA and UVB rays. Overall, sunburns exemplify DNA damage caused by UV rays; this damage can take the form of free radical species as well as dimerization of adjacent nucleotides.
**Epidural abscess** Epidural abscess: An epidural abscess is a collection of pus and infectious material located in the epidural space, superficial to the dura mater that surrounds the central nervous system. Because of its location adjacent to the brain or spinal cord, an epidural abscess has the potential to cause weakness, pain, and paralysis. Types: Spinal epidural abscess A spinal epidural abscess (SEA) is a collection of pus or inflammatory granulation tissue between the dura mater and the vertebral column. The annual incidence of SEAs is currently estimated at 2.5–3 per 10,000 hospital admissions and is rising, owing to factors such as an aging population, increased use of invasive spinal instrumentation, and a growing number of patients with risk factors such as diabetes and intravenous drug use. SEAs are more common in posterior than anterior areas, and the most common location is the thoracolumbar area, where the epidural space is larger and contains more fat tissue. SEAs are more common in males and can occur at any age, although prevalence is highest during the fifth through seventh decades of life. Combined treatment (emergency surgery plus antibiotics) is the preferred treatment for spinal epidural abscess: existing pus is removed (and tested for microorganisms to select the most appropriate antibiotic) and pressure is relieved from the spinal cord and nerve roots. Antibiotic therapy should start after pus has been obtained for microbiological investigation. Types: Cranial epidural abscess A cranial epidural abscess involves pus and granulation tissue accumulating between the dura mater and the cranial bone. These typically arise (along with osteomyelitis of a cranial bone) from infections of the ear or paranasal sinuses; rarely, they are caused by distant infection or an infected cerebral venous sinus thrombosis. Staphylococcus aureus is the most common pathogen.
Symptoms include pain at the forehead or ear, pus draining from the ear or sinuses, tenderness overlying the infectious site, fever, neck stiffness, and in rare cases focal seizures. Treatment requires a combination of antibiotics and surgical removal of infected bone.
**MMB-2201** MMB-2201: MMB-2201 (also known as 5F-MMB-PICA, 5F-AMB-PICA, and I-AMB) is a potent indole-3-carboxamide-based synthetic cannabinoid which has been sold as a designer drug and as an active ingredient in synthetic cannabis blends. It was first reported in Russia and Belarus in January 2014, but has since been sold in a number of other countries. In the United States, MMB-2201 was identified in Drug Enforcement Administration drug seizures for the first time in 2018. MMB-2201 is the indole core analogue of 5F-AMB. Synthetic cannabinoid compounds with an indole-3-carboxamide or indazole-3-carboxamide core bearing an N-1-methoxycarbonyl group with an attached isopropyl or tert-butyl substituent have proved to be much more dangerous than older synthetic cannabinoid compounds, and have been linked to many deaths in Russia, Japan, Europe and the United States. Legality: MMB-2201 is illegal in Russia, Belarus and Sweden.
**Heart murmur** Heart murmur: Heart murmurs are unique heart sounds produced when blood flows across a heart valve or blood vessel. They occur when turbulent (non-smooth) blood flow creates a sound loud enough to hear with a stethoscope. Murmurs differ from normal heart sounds in characteristics such as pitch, duration and timing. The principal way health care providers examine the heart on physical exam is auscultation; another clinical technique is palpation, which can detect by touch when such turbulence causes the vibrations called a cardiac thrill. Heart murmur: A murmur is a sign found during the cardiac exam. Murmurs are of various types and are important in the detection of cardiac and valvular pathologies (i.e. they can be a sign of heart diseases or defects). There are two types of murmur. A functional murmur is a benign heart murmur that is primarily due to physiologic conditions outside the heart. The other type of heart murmur is due to a structural defect in the heart itself. Defects may be due to narrowing of one or more valves (stenosis), backflow of blood through a leaky valve (regurgitation), or the presence of abnormal passages through which blood flows in or near the heart. Most murmurs are normal variants that can present at various ages, relating to changes of the body with age such as chest size, blood pressure, and pliability or rigidity of structures. Heart murmurs are frequently categorized by timing as systolic, diastolic, or continuous murmurs. Systolic and diastolic murmurs make sound during systole or diastole respectively, while continuous murmurs create sound throughout both parts of the heartbeat and are therefore not placed into either category. Diagnostic approach and diagnosis: Classification Murmurs have seven main characteristics.
These include timing, shape, location, radiation, intensity, pitch and quality. Timing refers to whether the murmur is a systolic, diastolic, or continuous murmur. Shape refers to the intensity over time: murmurs can be crescendo (increasing in intensity over time), decrescendo (decreasing in intensity over time) or crescendo-decrescendo (a progressive increase in intensity, a peak, then a progressive decrease). Crescendo-decrescendo murmurs resemble a diamond or kite shape. Location refers to where the heart murmur is usually heard best. There are four places on the anterior chest wall to listen for heart murmurs, each roughly corresponding to a specific part of the heart; health care providers listen to these areas with a stethoscope. Position for auscultation: The patient most often lies on their back (supine) with the head of the bed at about a 30-degree upward angle, and the health care provider usually stands to the right of the person being examined. Positional changes that may be used include: Left lateral decubitus (lying on the left side), which decreases the distance from the chest wall to the apex of the heart; this helps in examining the point of maximal impulse and in hearing extra heart sounds (S3 or S4). Sitting upright. Seated, leaning forward and holding the breath after exhalation, which decreases the distance from the chest wall to the left ventricular outflow tract; this helps in detecting an aortic regurgitation murmur. Radiation refers to where the sound of the murmur travels. The rule of thumb is that the sound radiates in the direction of the blood flow.
Intensity refers to the loudness of the murmur, graded according to the Levine scale from 1 to 6 (grade 1 barely audible, grade 4 and above accompanied by a palpable thrill, grade 6 audible with the stethoscope off the chest). Pitch may be low, medium or high, depending on whether auscultation is best with the bell or the diaphragm of a stethoscope. Quality refers to unusual characteristics of a murmur, for example blowing, harsh, rumbling or musical. Diagnostic approach and diagnosis: Interventions that change murmur sounds Inhalation leads to an increase in intrathoracic negative pressure, which increases the capacity of the pulmonary circulation and thereby prolongs ejection time; this affects the closure of the pulmonary valve. This finding is also called Carvallo's maneuver, which in studies had a sensitivity of 100% and a specificity of 80% to 88% in detecting murmurs originating in the right heart. A positive Carvallo's sign describes the increase in intensity of a tricuspid regurgitation murmur heard with inspiration. Abrupt standing. Squatting, by increasing afterload and increasing preload: squatting raises systemic vascular resistance, and hence afterload. With HOCM (hypertrophic obstructive cardiomyopathy), an increase in afterload holds the obstruction in a more open configuration, decreasing the loudness of the murmur. The handgrip maneuver likewise increases afterload and decreases the loudness of the HOCM murmur. Valsalva maneuver: the Valsalva maneuver has utility in detecting HOCM; according to one study, it has a sensitivity of 65% and a specificity of 96%. The Valsalva maneuver, like standing, decreases venous return to the heart and thus left ventricular filling. With HOCM, the outflow obstruction increases as preload decreases, which increases the loudness of the murmur.
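The maneuver physiology described above can be tabulated. The following toy lookup encodes only the relationships stated in this article (it is an illustration, not clinical guidance):

```python
# Each maneuver maps to its main hemodynamic effect and the change it
# produces in the HOCM murmur, as described in the text above.
MANEUVERS = {
    "squatting":       {"effect": "increases afterload and preload",  "hocm_murmur": "softer"},
    "handgrip":        {"effect": "increases afterload",              "hocm_murmur": "softer"},
    "valsalva":        {"effect": "decreases venous return (preload)", "hocm_murmur": "louder"},
    "abrupt standing": {"effect": "decreases venous return (preload)", "hocm_murmur": "louder"},
}

for name, info in MANEUVERS.items():
    print(f"{name}: {info['effect']} -> HOCM murmur {info['hocm_murmur']}")
```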
Diagnostic approach and diagnosis: Post-ectopic potentiation. Inhaled amyl nitrite, a vasodilator that diminishes systolic murmurs in left-to-right shunts in ventricular septal defects; it also reveals right-to-left shunts in the setting of pulmonic stenosis and a ventricular septal defect. Methoxamine. Positioning of the patient: in the lateral decubitus position (lying on the left side), murmurs in the mitral valve area are more pronounced. Anatomic sources Systolic Aortic valve stenosis is a crescendo-decrescendo systolic murmur, best heard at the right upper sternal border (aortic area), sometimes radiating to the carotid arteries. In mild aortic stenosis the crescendo-decrescendo is early peaking, whereas in severe aortic stenosis it is late-peaking; in severe cases, obliteration of the S2 heart sound may occur. Stenosis of a bicuspid aortic valve sounds like the murmur of aortic valve stenosis, but a systolic ejection click may be heard after S1 in calcified bicuspid aortic valves. Symptoms tend to present between 40 and 70 years of age. Mitral regurgitation is a holosystolic murmur, best heard at the apex, which may radiate to the axilla or precordium. When associated with mitral valve prolapse, a systolic click may be heard; in this scenario, the Valsalva maneuver decreases left ventricular preload and moves the murmur onset closer to S1, while isometric handgrip increases left ventricular afterload and increases murmur intensity. In acute severe mitral regurgitation, a holosystolic murmur may not be heard. Pulmonary valve stenosis is a crescendo-decrescendo systolic murmur, best heard at the left upper sternal border. It is associated with a systolic ejection click that increases with inspiration, a finding that results from increased venous return to the right side of the heart.
Pulmonary stenosis sometimes radiates to the left clavicle. Tricuspid valve regurgitation is a holosystolic murmur. It presents at the left lower sternal border with radiation to the left upper sternal border. One may see prominent v and c waves in the JVP (jugular venous pressure). The murmur will increase with inspiration. Hypertrophic obstructive cardiomyopathy (or hypertrophic subaortic stenosis) will be a systolic crescendo-decrescendo murmur. One can best hear it at the left lower sternal border. Valsalva maneuver will increase the intensity of the murmur. Going from squatting to standing will also increase the intensity of the murmur. Atrial septal defect will present with a systolic crescendo-decrescendo murmur. It is best heard at the left upper sternal border. This is the result of an increased volume going through the pulmonary valve. It has association with a fixed, split S2 and a right ventricular heave. Diagnostic approach and diagnosis: Ventricular septal defect (VSD) will present as a holosystolic murmur. One can hear it at the left lower sternal border. It has association with a palpable thrill, and increases with isometric handgrip. A right to left shunt (Eisenmenger syndrome) may develop with uncorrected VSDs. This is due to worsening pulmonary hypertension. Pulmonary hypertension will increase the murmur intensity and may present with cyanosis. Diagnostic approach and diagnosis: Flow murmur presents at the right upper sternal border. It may present in certain conditions, such as anemia, hyperthyroidism, fever, and pregnancy. Diagnostic approach and diagnosis: Diastolic Aortic valve regurgitation will present as a diastolic decrescendo murmur. One can hear it at the left lower sternal border. One may also hear it at the right lower sternal border (when associated with a dilated aorta). Other possible exam findings are bounding carotid and peripheral pulses. These are also known as Corrigan's pulse or Watson's water hammer pulse. 
Another possible finding is a widened pulse pressure. Diagnostic approach and diagnosis: Mitral stenosis presents as a diastolic low-pitched decrescendo murmur, best heard at the cardiac apex in the left lateral decubitus position. Mitral stenosis may have an opening snap, and increasing severity shortens the time between S2 (A2) and the opening snap: in severe mitral stenosis, the opening snap occurs earlier after A2. Tricuspid valve stenosis presents as a diastolic decrescendo murmur heard at the left lower sternal border; signs of right heart failure may be seen on exam. Pulmonary valve regurgitation presents as a diastolic decrescendo murmur heard at the left lower sternal border. A palpable S2 in the second left intercostal space correlates with pulmonary hypertension due to mitral stenosis. The cooing dove murmur is a cardiac murmur with a musical, high-pitched quality, associated with aortic valve regurgitation (or mitral regurgitation before rupture of chordae); it is a diastolic murmur heard over the mid-precordium. Continuous and combined systolic/diastolic Patent ductus arteriosus may present as a continuous murmur radiating to the back. Severe coarctation of the aorta can present with a continuous murmur: the systolic component, due to the stenosis, may be heard at the left infraclavicular region and the back, while the diastolic component, due to blood flow through collateral vessels, may be heard over the chest wall. Acute severe aortic regurgitation may present with a three-phase murmur: a midsystolic murmur followed by S2, then a parasternal early diastolic and mid-diastolic murmur (Austin Flint murmur). The exact cause of the Austin Flint murmur is unknown; one hypothesis is that it arises from the severe aortic regurgitation itself, whose jet vibrates the anterior mitral valve leaflet.
This causes collision with the mitral inflow during diastole, narrowing the mitral valve orifice and increasing mitral inflow velocity, so that the jet impinges on the myocardial wall. A ruptured aortic sinus (sinus of Valsalva) may present as a continuous murmur. This is an uncommon cause of continuous murmur, heard at the aortic area and along the left sternal border. Types and disease associations: Continuous machinery murmur, at the left upper sternal border Classic for a patent ductus arteriosus (PDA). Signs in infants associated with serious cases of PDA are poor feeding, failure to thrive and respiratory distress. Other examination findings may include widened pulse pressures and bounding pulses. A machinery murmur is also known as a Gibson murmur. Types and disease associations: Systolic murmur loudest below the left scapula Classic for coarctation of the aorta, a narrowing of the aorta. This can occur in Turner syndrome (gonadal dysgenesis), a chromosomal disorder in which one X chromosome is completely or partially absent. Other exam findings of coarctation of the aorta include radio-femoral delay, in which the femoral pulse arrives later than the radial pulse; the pulses in the lower extremity may be weaker than those of the upper extremity. Another finding is differing blood pressure in the upper and lower extremities: higher blood pressure in the arms and lower blood pressure in the legs. Types and disease associations: Harsh holosystolic (pansystolic) murmur at the left lower sternal border Classic for a ventricular septal defect (VSD). This may lead to the development of the delayed-onset cyanotic heart disease known as Eisenmenger syndrome, a reversal of the left-to-right heart shunt resulting from hypertrophy of the right ventricle over time, which causes a right-to-left heart shunt.
The VSD allows deoxygenated blood to flow from the right to left side of the heart. This blood bypasses the lungs. The lack of oxygenation in the pulmonary circulation results in cyanosis. Types and disease associations: Widely split fixed S2 and systolic ejection murmur at the left upper sternal border Classic for a patent foramen ovale (PFO) or atrial septal defect (ASD). A PFO is lack of closure of the foramen ovale. At first, this produces a left-to-right heart shunt. This does not produce cyanosis, but causes pulmonary hypertension. Longstanding uncorrected atrial septal defects can also result in Eisenmenger syndrome. Eisenmenger syndrome can result in cyanosis. Management: A medical provider (e.g. doctor) may order tests for further evaluation of a heart murmur. The echocardiogram is a common test used. This is also known as an "echo" or ultrasound of the heart. It shows the heart structures and blood flow through the heart. Further testing is usually done when symptoms that may be of concern are present. Management: The need for treatment depends on the diagnosis and severity. In some cases, the condition causing the heart murmur may prompt monitoring. Sometimes, heart murmurs disappear on their own. This happens when the cause of the heart murmur is no longer present. Monitoring will help determine how the condition changes. It may stay the same, worsen, or improve. In other cases, the condition causing the heart murmur may not prompt any further tests. Management: Treatment ranges from medication to surgeries.
**Bioinspiration & Biomimetics** Bioinspiration & Biomimetics: Bioinspiration & Biomimetics is a peer-reviewed journal that publishes research involving the study and distillation of principles and functions found in biological systems that have been developed through evolution. It was published quarterly from 2006 to 2014 and became bimonthly in 2015. The editor-in-chief is Robert J. Full at the University of California, Berkeley, USA. Abstracting and indexing: This journal is indexed by the following databases: Science Citation Index, Materials Science Citation Index, Journal Citation Reports/Science Edition, Medline/PubMed, Scopus, Inspec, Chemical Abstracts Service, Current Awareness in Biological Sciences, EMBiology, NASA Astrophysics Data System, and VINITI Abstracts Journal (Referativnyi Zhurnal).
**Meta-circular evaluator** Meta-circular evaluator: In computing, a meta-circular evaluator (MCE) or meta-circular interpreter (MCI) is an interpreter which defines each feature of the interpreted language using a similar facility of the interpreter's host language. For example, a lambda application may be interpreted using function application. Meta-circular evaluation is most prominent in the context of Lisp. A self-interpreter is a meta-circular interpreter where the interpreted language is nearly identical to the host language; the two terms are often used synonymously. History: The dissertation of Corrado Böhm describes the design of a self-hosting compiler. Due to the difficulty of compiling higher-order functions, many languages were instead defined via interpreters, most prominently Lisp. The term itself was coined by John C. Reynolds, and popularized through its use in the book Structure and Interpretation of Computer Programs. Self-interpreters: A self-interpreter is a meta-circular interpreter where the host language is also the language being interpreted. A self-interpreter displays a universal function for the language in question, and can be helpful in learning certain aspects of the language. However, a self-interpreter provides a circular, vacuous definition of most language constructs and thus offers little insight into the interpreted language's semantics, for example its evaluation strategy. Addressing these issues produces the more general notion of a "definitional interpreter". From self-interpreter to abstract machine This part is based on Section 3.2.4 of Danvy's thesis. Here is the core of a self-evaluator for the λ calculus.
The abstract syntax of the λ calculus is implemented in OCaml by representing variables with their de Bruijn index, i.e., with their lexical offset (starting from 0). The evaluator uses an environment. Values (of type value) conflate expressible values (the result of evaluating an expression in an environment) and denotable values (the values denoted by variables in the environment), a terminology that is due to Christopher Strachey. Environments are represented as lists of denotable values. The core evaluator has three clauses: it maps a variable (represented with a de Bruijn index) into the value in the current environment at this index; it maps a syntactic function into a semantic function (applying a semantic function to an argument reduces to evaluating the body of the corresponding syntactic function in its lexical environment, extended with the argument); and it maps a syntactic application into a semantic application. This evaluator is compositional in that each of its recursive calls is made over a proper sub-part of the given term. It is also higher-order, since the domain of values is a function space. In "Definitional Interpreters", Reynolds answered the question as to whether such a self-interpreter is well defined. He answered in the negative, because the evaluation strategy of the defined language (the source language) is determined by the evaluation strategy of the defining language (the meta-language): if the meta-language follows call by value (as OCaml does), the source language follows call by value; if the meta-language follows call by name (as Algol 60 does), the source language follows call by name; and if the meta-language follows call by need (as Haskell does), the source language follows call by need. In "Definitional Interpreters", Reynolds made a self-interpreter well defined by making it independent of the evaluation strategy of its defining language.
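As an illustration, the evaluator just described can be transcribed into Python (the `Var`/`Lam`/`App` class names are hypothetical, not from the original OCaml): de Bruijn-indexed terms, environments as lists of denotable values, and the three clauses:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Var:          # variable, as a de Bruijn index (lexical offset from 0)
    index: int

@dataclass
class Lam:          # abstraction; its body refers to the argument as Var(0)
    body: Any

@dataclass
class App:          # application
    fun: Any
    arg: Any

def evaluate(term, env):
    """The three clauses: lookup, syntactic->semantic function, application."""
    if isinstance(term, Var):    # variable -> value at that index in env
        return env[term.index]
    if isinstance(term, Lam):    # syntactic function -> semantic function
        return lambda v: evaluate(term.body, [v] + env)
    if isinstance(term, App):    # syntactic application -> semantic application
        return evaluate(term.fun, env)(evaluate(term.arg, env))
    raise TypeError(term)

# K = λx. λy. x returns its first argument:
k = Lam(Lam(Var(1)))
print(evaluate(App(App(k, Var(0)), Var(1)), ["first", "second"]))  # first
```

Exactly as Reynolds observed, this evaluator inherits the meta-language's strategy: Python evaluates arguments before `evaluate(term.fun, env)(...)` applies the semantic function, so the defined language is call by value.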
He fixed the evaluation strategy by transforming the self-interpreter into Continuation-Passing Style, which is evaluation-strategy independent, as later captured in Gordon Plotkin's Independence Theorems. Furthermore, because logical relations had yet to be discovered, Reynolds made the resulting continuation-passing evaluator first order by (1) closure-converting it and (2) defunctionalizing the continuation. He pointed out the "machine-like quality" of the resulting interpreter, which is the origin of the CEK machine since Reynolds's CPS transformation was for call by value. For call by name, these transformations map the self-interpreter to an early instance of the Krivine machine. The SECD machine and many other abstract machines can be inter-derived this way. It is remarkable that the three most famous abstract machines for the λ calculus functionally correspond to the same self-interpreter. Self-interpreters: Self-interpretation in total programming languages Total functional programming languages that are strongly normalizing cannot be Turing complete, otherwise one could solve the halting problem by seeing if the program type-checks. That means that there are computable functions that cannot be defined in the total language. In particular it is impossible to define a self-interpreter in a total programming language, for example in any of the typed lambda calculi such as the simply typed lambda calculus, Jean-Yves Girard's System F, or Thierry Coquand's calculus of constructions. Here, by "self-interpreter" we mean a program that takes a source term representation in some plain format (such as a string of characters) and returns a representation of the corresponding normalized term. This impossibility result does not hold for other definitions of "self-interpreter". Self-interpreters: For example, some authors have referred to functions of type πτ→τ as self-interpreters, where πτ is the type of representations of τ -typed terms. 
To avoid confusion, we will refer to these functions as self-recognizers. Brown and Palsberg showed that self-recognizers could be defined in several strongly normalizing languages, including System F and System Fω. This turned out to be possible because the types of encoded terms are reflected in the types of their representations, which prevents constructing a diagonal argument. Self-interpreters: In their paper, Brown and Palsberg claim to disprove the "conventional wisdom" that self-interpretation is impossible (and they refer to Wikipedia as an example of the conventional wisdom), but what they actually disprove is the impossibility of self-recognizers, a distinct concept. In their follow-up work, they switch to the more specific "self-recognizer" terminology used here, notably distinguishing these from "self-evaluators", of type πτ→πτ. They also recognize that implementing self-evaluation seems harder than self-recognition, and leave the implementation of the former in a strongly normalizing language as an open problem. Uses: In combination with an existing language implementation, meta-circular interpreters provide a baseline system from which to extend a language, either upwards by adding more features or downwards by compiling away features rather than interpreting them. They are also useful for writing tools that are tightly integrated with the programming language, such as sophisticated debuggers. A language designed with a meta-circular implementation in mind is often more suited for building languages in general, even ones completely different from the host language. Examples: Many languages have one or more meta-circular implementations; below is a partial list. Some languages with a meta-circular implementation designed from the bottom up, in grouped chronological order: Lisp, 1958; Scheme, 1975; Pico, 1997; ActorScript, 2009?;
Clojure, 2007 Forth, 1968 PostScript, 1982 Prolog, 1972 TeX, based on virgin TeX, 1978 Smalltalk, 1980 Rebol, 1997 Red, 2011 Factor, 2003. Some languages with a meta-circular implementation via third parties: Java via Jikes RVM, Squawk, Maxine or GraalVM's Espresso Scala via Metascala JavaScript via Narcissus or JS-Interpreter Oz via Glinda Python via PyPy Ruby via Rubinius Lua via Metalua
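The derivation sketched earlier (CPS-transform the evaluator, closure-convert it, defunctionalize the continuation) yields a first-order, machine-like evaluator; for call by value the result is essentially the CEK machine. Below is a minimal Python sketch with tuple-encoded terms; the encoding and all names are illustrative, not Reynolds's original definitional interpreter:

```python
# CEK-style machine for the untyped lambda calculus, call by value.
# Terms are tuples: ("var", x) | ("lam", x, body) | ("app", f, a).
# Continuations are defunctionalized into tagged tuples, mirroring
# Reynolds's construction: ("halt",) | ("arg", term, env, k) | ("fun", clo, k).

def run(term):
    state = ("eval", term, {}, ("halt",))
    while True:
        if state[0] == "eval":                  # C, E, K: dispatch on the control term
            _, t, env, k = state
            if t[0] == "var":
                state = ("apply", k, env[t[1]])
            elif t[0] == "lam":                 # closure conversion: capture the environment
                state = ("apply", k, ("clo", t[1], t[2], env))
            else:                               # application: evaluate the operator first
                state = ("eval", t[1], env, ("arg", t[2], env, k))
        else:                                   # apply a defunctionalized continuation
            _, k, v = state
            if k[0] == "halt":
                return v
            if k[0] == "arg":                   # operator done; now evaluate the operand
                _, arg, env, k2 = k
                state = ("eval", arg, env, ("fun", v, k2))
            else:                               # ("fun", clo, k2): enter the closure body
                _, (_, x, body, cenv), k2 = k
                state = ("eval", body, {**cenv, x: v}, k2)

identity = ("lam", "x", ("var", "x"))
result = run(("app", identity, ("lam", "y", ("var", "y"))))
```

The loop never recurses: all control context lives in the defunctionalized continuation, which is what gives the interpreter its "machine-like quality".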
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SEPT8** SEPT8: Septin-8 is a protein that in humans is encoded by the SEPT8 gene. Function: SEPT8 is a member of the highly conserved septin family. Septins are 40- to 60-kD GTPases that assemble as filamentous scaffolds. They are involved in the organization of submembranous structures, in neuronal polarity, and in vesicle trafficking. Interactions: SEPT8 has been shown to interact with PFTK1 and SEPT5.
**Scherbius Drive** Scherbius Drive: The Static Scherbius Drive provides speed control of a wound-rotor motor below synchronous speed. A portion of the rotor's AC power is converted into DC by a diode bridge. This drive can transfer power in both the positive and the negative direction of the injected voltage.
**Monohydrocalcite** Monohydrocalcite: Monohydrocalcite is a mineral that is a hydrous form of calcium carbonate, CaCO3·H2O. It was formerly also known by the name hydrocalcite, which is now discredited by the IMA. It is a trigonal mineral which is white when pure. Monohydrocalcite is not a common rock-forming mineral, but is frequently associated with other calcium and magnesium carbonate minerals, such as calcite, aragonite, lansfordite, and nesquehonite. Monohydrocalcite: Monohydrocalcite has been observed in air conditioning systems, and in moonmilk deposits in caves, both probably formed from spray of carbonate rich fluids. It is well known in Robe on the Limestone Coast of South Australia as a component of beach sands of Lake Fellmongery and Lake Butler, where it is believed to be formed from algal spume. Other lacustrine deposits include Lake Issyk-Kul, Kyrgyzstan, Lake Kivu, Democratic Republic of the Congo, and Solar Lake, Sinai. Monohydrocalcite: It has been reported as a significant component of the decomposition of ikaite in the towers of the Ikka Fjord, West Greenland. It is also noted for its bizarre occurrences, which include inside the otoliths of the tiger shark, the bladder of a guinea pig, the calcareous corpuscles of a cestode parasite, and the final stages of decomposition of the putrefying flesh of the giant saguaro cactus. These occurrences suggest a biochemical origin is possible. Formation of monohydrocalcite: Monohydrocalcite forms via a Mg-rich amorphous calcium carbonate (ACC) precursor. This Mg-rich ACC forms rapidly (seconds) and then transforms to monohydrocalcite via dissolution and reprecipitation, with monohydrocalcite forming via a nucleation-controlled reaction like spherulitic growth. Formation of monohydrocalcite: Recent studies have highlighted the importance of Mg in the formation process of monohydrocalcite. The presence of Mg in solution is known to inhibit the formation of vaterite and calcite. 
However, the hydrated nature of monohydrocalcite means that full dehydration of Mg is not required before incorporation of this ion into this mineral and therefore it will more likely form than the anhydrous calcium carbonate phases.
**Theiler's encephalomyelitis virus** Theiler's encephalomyelitis virus: Theiler's murine encephalomyelitis virus (TMEV) is a single-stranded RNA murine cardiovirus from the family Picornaviridae. It has been used as a mouse model for studying virally induced paralysis, as well as encephalomyelitis comparable to multiple sclerosis. Depending on the mouse and viral strain, viral pathogenesis can range from negligible to chronic or acute encephalomyelitis. Discovery: The virus was discovered by virologist Max Theiler in 1937 while working at the Rockefeller Institute. Theiler discovered the encephalomyelitis virus during research on poliovirus-like paralysis symptoms in mice. That year Theiler had completed work on developing a vaccine for yellow fever, for which he is best known; in 1951 he received the Nobel Prize for that achievement. Strains: The several different strains of TMEV are characterized by their pathology as well as genetic sequencing and proteomics. The two major groups are listed below; there are several other strains in the same group as DA (such as BeAn). Strains: GDVII TMEV GDVII virus is characterized by acute encephalomyelitis in susceptible mice, with a high mortality rate and no viral persistence after viral clearance by the immune system. No demyelination occurs in surviving mice. The GDVII L protein is specific in that it down-regulates the anti-viral response by inhibition of Interferon Regulatory Factor 3 (IRF3) after it is activated by hyperphosphorylation, but before it is able to enhance Interferon-β transcription by binding to the gene's promoter. Strains: DA The TMEV DA strain, in contrast, is characterized by chronic encephalomyelitis in susceptible mice. Infection initiates in astrocytes and microglia, but persists in macrophages. This strain has been used as an accepted model for human multiple sclerosis and epilepsy.
The DA strain has also been shown to inhibit IRF-3 phosphorylation, by inhibiting an unknown intermediate step after RIG-I/MDA5 activation of IKKε and TBK1 kinases. The L protein has been shown to be critical in this process, although the mechanism is unknown. The DA strain of TMEV also encodes an L* protein that is likely involved in viral persistence in macrophages. This protein's influence on the murine immune system, therefore, could be beneficial in understanding immune-mediated demyelination in diseases such as multiple sclerosis. Analogies with multiple sclerosis/pathology: Multiple sclerosis is a chronic disease that results in demyelination of the axons in the brain and spinal cord, which often leads to severe neurological problems and eventually paralysis. The symptoms of MS are largely immune mediated, but the mechanism of the immune system's initiation in this disease is unknown. It is likely that both genetic and environmental factors play a large role in the initiation and progression of the disease. There are a number of animal models for MS. A common one is experimental autoimmune encephalomyelitis (EAE); the TMEV model, in contrast, is induced by injection of the virus and is thus distinct from EAE. One hypothesis for the initiation is that an infection stimulates the innate immune system, specifically perivascular microglia. This allows the entrance of T cells, and microglia present viral epitopes, along with myelin epitopes, to T cells, which are then activated to "attack" the myelin. This is the proposed course of disease in TMEV infection in mice. Many bacteria and viruses infect humans without pathology in normal individuals. If certain individuals are genetically predisposed to immunological intolerance of these commensal organisms, pathology can occur. The Saffold virus, a human virus discovered in 2007, has been shown to have high prevalence in humans (>90%).
It may be an important link between the study of mouse TMEV-induced encephalomyelitis and human multiple sclerosis. The majority of mouse strains are not susceptible to the pathology associated with TMEV infection. As SJL/J mice are notoriously susceptible, the majority of studies exploring factors that could lead to MS utilize this strain. Max Theiler also used the SJL/J strain to study the progression of a polio-like disease in mice.
**Spermatic cord** Spermatic cord: The spermatic cord is the cord-like structure in males formed by the vas deferens (ductus deferens) and surrounding tissue that runs from the deep inguinal ring down to each testicle. Its serosal covering, the tunica vaginalis, is an extension of the peritoneum that passes through the transversalis fascia. Each testicle develops in the lower thoracic and upper lumbar region and migrates into the scrotum. During its descent it carries along with it the vas deferens, its vessels, nerves etc. There is one on each side. Structure: The spermatic cord is ensheathed in three layers of tissue: external spermatic fascia, an extension of the innominate fascia that overlies the aponeurosis of the external oblique muscle. cremasteric muscle and fascia, formed from a continuation of the internal oblique muscle and its fascia. internal spermatic fascia, continuous with the transversalis fascia. The normal diameter of the spermatic cord is about 16 mm (range 11 to 22 mm). It is located behind the tunica vaginalis. Contents Blood vessels Testicular artery. Artery to the ductus deferens. Cremasteric artery. Nerves Nerve to cremaster (genital branch of the genitofemoral nerve) Testicular nerves (sympathetic nerves). The ilioinguinal nerve is not actually located inside the spermatic cord, but runs outside it in the inguinal canal. Other contents Vas deferens. Pampiniform plexus. Lymphatic vessels. The tunica vaginalis is located in front of the spermatic cord, outside it. Clinical significance: The spermatic cord is sensitive to torsion, in which the testicle rotates within its sac and blocks its own blood supply. Testicular torsion may result in irreversible damage to the testicle within hours. A collection of serous fluid in the spermatic cord is named 'funiculocele'. The contents of the abdominal cavity may protrude into the inguinal canal, producing an indirect inguinal hernia. Varicose veins of the spermatic cord are referred to as varicocele.
Though often asymptomatic, about one in four people with varicocele have negatively affected fertility.
**Hierarchical clustering** Hierarchical clustering: In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis that seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two categories: Agglomerative: This is a "bottom-up" approach: Each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. Hierarchical clustering: Divisive: This is a "top-down" approach: All observations start in one cluster, and splits are performed recursively as one moves down the hierarchy. In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram. Hierarchical clustering: Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances. On the other hand, except for the special case of single-linkage distance, none of the algorithms (except exhaustive search in O(2ⁿ)) can be guaranteed to find the optimum solution. Complexity: The standard algorithm for hierarchical agglomerative clustering (HAC) has a time complexity of O(n³) and requires Ω(n²) memory, which makes it too slow for even medium data sets. However, for some special cases, optimal efficient agglomerative methods (of complexity O(n²)) are known: SLINK for single-linkage and CLINK for complete-linkage clustering. With a heap, the runtime of the general case can be reduced to O(n² log n), an improvement on the aforementioned bound of O(n³), at the cost of further increasing the memory requirements. In many cases, the memory overheads of this approach are too large to make it practically usable. Complexity: Divisive clustering with an exhaustive search is O(2ⁿ), but it is common to use faster heuristics to choose splits, such as k-means.
Cluster Linkage: In order to decide which clusters should be combined (for agglomerative), or where a cluster should be split (for divisive), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by use of an appropriate distance d, such as the Euclidean distance, between single observations of the data set, and a linkage criterion, which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets. The choice of metric as well as linkage can have a major impact on the result of the clustering, where the lower-level metric determines which objects are most similar, whereas the linkage criterion influences the shape of the clusters. For example, complete linkage tends to produce more spherical clusters than single linkage. Cluster Linkage: The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations. Cluster Linkage: Some commonly used linkage criteria between two sets of observations A and B and a distance d are: Some of these can only be recomputed recursively (WPGMA, WPGMC); for many, a recursive computation with the Lance–Williams equations is more efficient, while for others (Mini-Max, Hausdorff, Medoid) the distances have to be computed with the slower full formula. Other linkage criteria include: The probability that candidate clusters spawn from the same distribution function (V-linkage). Cluster Linkage: The product of in-degree and out-degree on a k-nearest-neighbour graph (graph degree linkage). The increment of some cluster descriptor (i.e., a quantity defined for measuring the quality of a cluster) after merging two clusters. Agglomerative clustering example: For example, suppose this data is to be clustered, and the Euclidean distance is the distance metric.
Agglomerative clustering example: The hierarchical clustering dendrogram would be: Cutting the tree at a given height will give a partitioning clustering at a selected precision. In this example, cutting after the second row (from the top) of the dendrogram will yield clusters {a} {b c} {d e} {f}. Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering, with a smaller number but larger clusters. Agglomerative clustering example: This method builds the hierarchy from the individual elements by progressively merging clusters. In our example, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge in a cluster. Usually, we want to take the two closest elements, according to the chosen distance. Agglomerative clustering example: Optionally, one can also construct a distance matrix at this stage, where the number in the i-th row j-th column is the distance between the i-th and j-th elements. Then, as clustering progresses, rows and columns are merged as the clusters are merged and the distances updated. This is a common way to implement this type of clustering, and has the benefit of caching distances between clusters. A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below). Agglomerative clustering example: Suppose we have merged the two closest elements b and c, we now have the following clusters {a}, {b, c}, {d}, {e} and {f}, and want to merge them further. To do that, we need to take the distance between {a} and {b c}, and therefore define the distance between two clusters. Usually the distance between two clusters A and B is one of the following: The maximum distance between elements of each cluster (also called complete-linkage clustering): max {d(x,y):x∈A,y∈B}. 
The minimum distance between elements of each cluster (also called single-linkage clustering): min {d(x,y) : x∈A, y∈B}. The mean distance between elements of each cluster (also called average linkage clustering, used e.g. in UPGMA): (1/(|A|·|B|)) Σ_{x∈A} Σ_{y∈B} d(x,y). The sum of all intra-cluster variance. Agglomerative clustering example: The increase in variance for the cluster being merged (Ward's method) The probability that candidate clusters spawn from the same distribution function (V-linkage). In case of tied minimum distances, a pair is randomly chosen, thus being able to generate several structurally different dendrograms. Alternatively, all tied pairs may be joined at the same time, generating a unique dendrogram. One can always decide to stop clustering when there is a sufficiently small number of clusters (number criterion). Some linkages may also guarantee that agglomeration occurs at a greater distance between clusters than the previous agglomeration, and then one can stop clustering when the clusters are too far apart to be merged (distance criterion). However, this is not the case for, e.g., centroid linkage, where so-called reversals (inversions, departures from ultrametricity) may occur. Divisive clustering: The basic principle of divisive clustering was published as the DIANA (DIvisive ANAlysis clustering) algorithm. Initially, all data is in the same cluster, and the largest cluster is split until every object is separate. Because there exist O(2ⁿ) ways of splitting each cluster, heuristics are needed. DIANA chooses the object with the maximum average dissimilarity and then moves all objects to this cluster that are more similar to the new cluster than to the remainder. Divisive clustering: Informally, DIANA is not so much a process of "dividing" as it is of "hollowing out": each iteration, an existing cluster (e.g. the initial cluster of the entire dataset) is chosen to form a new cluster inside of it.
Objects progressively move to this nested cluster, and hollow out the existing cluster. Eventually, all that's left inside a cluster is nested clusters that grew there, without it owning any loose objects by itself. Divisive clustering: Formally, DIANA operates in the following steps: Let C₀ = {1…n} be the set of all n object indices and C = {C₀} the set of all formed clusters so far. Divisive clustering: Iterate the following until |C| = n: Find the current cluster with 2 or more objects that has the largest diameter: C∗ = argmax_{C∈C} max_{i₁,i₂∈C} δ(i₁,i₂). Find the object in this cluster with the most dissimilarity to the rest of the cluster: i∗ = argmax_{i∈C∗} (1/(|C∗|−1)) Σ_{j∈C∗∖{i}} δ(i,j). Pop i∗ from its old cluster C∗ and put it into a new splinter group C_new = {i∗}. As long as C∗ isn't empty, keep migrating objects from C∗ to add them to C_new. To choose which objects to migrate, don't just consider dissimilarity to C∗, but also adjust for dissimilarity to the splinter group: let i∗ = argmax_{i∈C∗} D(i), where we define D(i) = (1/(|C∗|−1)) Σ_{j∈C∗∖{i}} δ(i,j) − (1/|C_new|) Σ_{j∈C_new} δ(i,j); then either stop iterating when D(i∗) < 0, or migrate i∗. Add C_new to C. Intuitively, D(i) above measures how strongly an object wants to leave its current cluster, but it is attenuated when the object wouldn't fit in the splinter group either. Such objects will likely start their own splinter group eventually. Divisive clustering: The dendrogram of DIANA can be constructed by letting the splinter group C_new be a child of the hollowed-out cluster C∗ each time. This constructs a tree with C₀ as its root and n unique single-object clusters as its leaves. Software: Open source implementations ALGLIB implements several hierarchical clustering algorithms (single-link, complete-link, Ward) in C++ and C# with O(n²) memory and O(n³) run time.
ELKI includes multiple hierarchical clustering algorithms, various linkage strategies and also includes the efficient SLINK, CLINK and Anderberg algorithms, flexible cluster extraction from dendrograms and various other cluster analysis algorithms. Julia has an implementation inside the Clustering.jl package. Octave, the GNU analog to MATLAB implements hierarchical clustering in function "linkage". Orange, a data mining software suite, includes hierarchical clustering with interactive dendrogram visualisation. R has built-in functions and packages that provide functions for hierarchical clustering. SciPy implements hierarchical clustering in Python, including the efficient SLINK algorithm. scikit-learn also implements hierarchical clustering in Python. Weka includes hierarchical cluster analysis. Commercial implementations MATLAB includes hierarchical cluster analysis. SAS includes hierarchical cluster analysis in PROC CLUSTER. Mathematica includes a Hierarchical Clustering Package. NCSS includes hierarchical cluster analysis. SPSS includes hierarchical cluster analysis. Qlucore Omics Explorer includes hierarchical cluster analysis. Stata includes hierarchical cluster analysis. CrimeStat includes a nearest neighbor hierarchical cluster algorithm with a graphical output for a Geographic Information System.
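As an illustration of the agglomerative procedure described in the example above, here is a naive pure-Python sketch of single-linkage agglomeration with a number criterion; the function name and sample data are illustrative, and the implementation has the O(n³)-per-pass cost discussed in the Complexity section rather than SLINK's efficiency:

```python
def agglomerate(points, k,
                dist=lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5):
    """Naive single-linkage agglomerative clustering.

    Each observation starts in its own cluster; the two closest clusters
    (minimum pairwise distance, i.e. single linkage) are merged repeatedly
    until only k clusters remain (the number criterion).
    """
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair of clusters
    return clusters

data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (10.0, 0.0)]
clusters = agglomerate(data, 3)
```

Swapping `min` for `max` in the inner loop gives complete linkage; averaging the pairwise distances gives UPGMA-style average linkage.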
**Kaya identity** Kaya identity: The Kaya identity is a mathematical identity stating that the total emission level of the greenhouse gas carbon dioxide can be expressed as the product of four factors: human population, GDP per capita, energy intensity (per unit of GDP), and carbon intensity (emissions per unit of energy consumed). It is a concrete form of the more general I = PAT equation relating factors that determine the level of human impact on climate. Although the terms in the Kaya identity would in theory cancel out, it is useful in practice to calculate emissions in terms of more readily available data, namely population, GDP per capita, energy per unit GDP, and emissions per unit energy. It furthermore highlights the elements of the global economy on which one could act to reduce emissions, notably the energy intensity per unit GDP and the emissions per unit energy. Overview: The Kaya identity was developed by Japanese energy economist Yoichi Kaya. It is the subject of his book Environment, Energy, and Economy: strategies for sustainability, co-authored with Keiichi Yokobori as the output of the Conference on Global Environment, Energy, and Economic Development (1993 : Tokyo, Japan). It is a mathematically more consistent variation of Paul R. Ehrlich & John Holdren's I = PAT formula that describes the factors of environmental impact. Overview: The Kaya identity is expressed in the form: F = P · (G/P) · (E/G) · (F/E), where: F is global CO2 emissions from human sources P is global population G is world GDP E is global energy consumption. And: G/P is the GDP per capita E/G is the energy intensity of the GDP F/E is the emission intensity of energy Use in IPCC reports: The Kaya identity plays a core role in the development of future emissions scenarios in the IPCC Special Report on Emissions Scenarios. The scenarios set out a range of assumed conditions for future development of each of the four inputs.
Population growth projections are available independently from demographic research; GDP per capita trends are available from economic statistics and econometrics; similarly for energy intensity and emission levels. The projected carbon emissions can drive carbon cycle and climate models to predict future CO2 concentration and global warming. Other uses: Bill Gates used a form of the Identity, without attribution, at a TED Talk called Innovating to zero!. Writing in ThinkProgress, Joseph J. Romm disputed the validity of Gates' arguments, as well as clarifying the key idea behind the identity.
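The identity F = P · (G/P) · (E/G) · (F/E) is easy to check numerically. The figures below are round illustrative values chosen for the sketch, not official statistics:

```python
# Kaya identity: F = P * (G/P) * (E/G) * (F/E)
# Illustrative round numbers (not official statistics):
P = 8.0e9           # population (persons)
gdp_pc = 12_000.0   # G/P: GDP per capita ($/person)
e_int = 5.0e6       # E/G: energy intensity (J/$)
c_int = 6.0e-11     # F/E: emission intensity (tCO2/J)

F = P * gdp_pc * e_int * c_int   # total CO2 emissions (tonnes)

# The intermediate totals cancel out, as the identity promises:
G = P * gdp_pc   # world GDP
E = G * e_int    # global energy consumption
```

With these inputs F comes out near 29 Gt CO2, the right order of magnitude for global emissions; changing any one factor scales F proportionally, which is what makes the decomposition useful for scenario analysis.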
**Missing energy** Missing energy: In experimental particle physics, missing energy refers to energy that is not detected in a particle detector, but is expected due to the laws of conservation of energy and conservation of momentum. Missing energy is carried by particles that do not interact with the electromagnetic or strong forces and thus are not easily detectable, most notably neutrinos. In general, missing energy is used to infer the presence of non-detectable particles and is expected to be a signature of many theories of physics beyond the Standard Model. The concept of missing energy is commonly applied in hadron colliders. The initial momentum of the colliding partons along the beam axis is not known — the energy of each hadron is split, and constantly exchanged, between its constituents — so the amount of total missing energy cannot be determined. However, the initial energy in particles traveling transverse to the beam axis is zero, so any net momentum in the transverse direction indicates missing transverse energy, also called missing ET or MET. Missing energy: Accurate measurements of missing energy are difficult, as they require full, accurate energy reconstruction of all particles produced in an interaction. Mismeasurement of particle energies can make it appear as if there is missing energy carried away by other particles when, in fact, no such particles were created.
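The transverse-momentum bookkeeping described above can be sketched as follows: the missing transverse momentum is the negative vector sum of the visible transverse momenta, and MET is its magnitude. The particle list and values here are purely illustrative:

```python
import math

def missing_et(particles):
    """MET: magnitude of the negative vector sum of the visible
    transverse momenta (px, py), in GeV.  Momentum conservation in the
    transverse plane means the visible momenta should sum to zero; any
    imbalance is attributed to undetected particles such as neutrinos."""
    px = -sum(p[0] for p in particles)
    py = -sum(p[1] for p in particles)
    return math.hypot(px, py)

# Two back-to-back 30 GeV particles balance each other; the lone 40 GeV
# particle leaves a 40 GeV transverse imbalance opposite to it.
met = missing_et([(30.0, 0.0), (-30.0, 0.0), (0.0, 40.0)])
```

A real reconstruction sums over all calorimeter deposits and tracks with calibration corrections; this sketch only shows the vector arithmetic.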
**Californidine** Californidine: Californidine is an alkaloid with the molecular formula C20H20NO4+. It has been isolated from extracts of the California poppy (Eschscholzia californica), from which it gets its name, and from other plants of the genus Eschscholzia. Pharmaceutical use: Because of its sedative, anxiolytic, and analgesic effects, the herb California poppy (Amapola de California, Eschscholzia californica, Pavot d'Amérique, Pavot d'Or, Pavot de Californie, Poppy California, Yellow Poppy) is currently sold in pharmacies in many countries. Horticulturalist Alys Fowler wrote in 2022 that the California poppy "makes the most wonderful tea. You can use aerial parts: flowers, stems, leaves, fresh or dried. It is a gentle tea that can reduce anxiety and aid sleep. It contains none of the alkaloids associated with opium poppies."
**CUMYL-NBMINACA** CUMYL-NBMINACA: CUMYL-NBMINACA (SGT-152, Cumyl-BC[2.2.1]HpMINACA) is a synthetic cannabinoid compound first reported in a 2013 patent, but not identified as a designer drug until 2021, being identified by a forensic laboratory in Germany in February 2021.
**Dimeback** Dimeback: In American football, a dimeback is a cornerback or safety who serves as the sixth defensive back (fourth cornerback, third safety; and in some rare cases, a fourth safety) on defense. The third cornerback or safety on defense is known as a nickelback. The dimeback position is essentially relegated to backup cornerbacks and safeties who do not play starting cornerback or safety positions. Dimebacks are usually fast players because they must be able to keep up on passing plays with 3+ wide receivers. Dimeback: Dimebacks are brought into the game when the defense uses a dime formation, which uses six defensive backs rather than four or five. Usually, a dimeback replaces a linebacker in order to gain better pass defense, although some teams may substitute the extra defensive back for a defensive lineman in their dime formation.
**Groin attack** Groin attack: A groin attack is a deliberate strike to the groin area of one's opponent. The technique can be quickly debilitating due to the sensitivity of the groin area and genitalia and is sometimes used as a self-defense technique. The technique is often banned in sports. Groin attacks have been popularized as a comedic device in various forms of media. In sports: An attack to the groin in sports is considered a "low blow," not only in the literal sense but as the origin of the metaphor as well. In a playful attack, or an attack in the framework of a sport, a low blow is seen as unfair or improper and is often considered dishonorable. In sports: Strikes to the groin have been forbidden in boxing as far back as the Marquess of Queensberry Rules, and they are almost universally forbidden in martial arts competitions, including kickboxing and mixed martial arts. UFC rules dictate that a groin strike is a foul in both male and female matches, with the competitor who has received such a strike given up to five minutes to recover. The rules require male competitors to wear groin protection but prohibit female competitors from doing so. Groin attacks were allowed until the 1980s in international Muay Thai boxing and are still permitted in Thailand itself, though male boxers wear cups to lessen the impact. Direct strikes to the groin are generally considered illegal in professional wrestling as well, and unofficial rings may consider it shameful. However, in certain "hardcore" matches the rules are relaxed, and such attacks are allowed by mutual consent. In self-defense: Groin attacks are sometimes used as a self-defense technique. The attack can allow a combatant to temporarily disable an assailant, making it easy for them to escape. When an opponent is at close range, a knee strike to the groin is easy to execute and difficult to defend against.
It is often, but not always, effective. Some martial arts include instruction in kappo, healing techniques to recover from incapacitating attacks including groin attacks. In BDSM: Groin attacks are viewed as erotic in the context of some sexual activities, including cock and ball torture and pussy torture. In popular culture: Groin attacks on men are the most widely known and have been popularized as a comedic device in popular culture. In media, groin attacks are sometimes depicted as causing men to speak in a falsetto or soprano register, experience strabismus, or groan and moan. Groin attacks on men are also the subject of an Internet meme where they are commonly called "nutshots." They have been featured in practical joke videos uploaded to websites such as YouTube. The meme sometimes also involves an accidental and comedic injury to the groin, usually as a result of falling or being struck by an object. In popular culture: Groin attacks on women are depicted less often in media and are often depicted as having the same effect as a hit anywhere else (or occasionally no effect at all). They are sometimes called "cunt-punts." Effects: The testicles lack anatomical protection and are highly sensitive to impact. The pain resulting from impact to the testicles lessens as it travels through the spermatic plexus into the abdomen, at which point it is less a sharp pain and more of an ache, unlike the stabbing sensation induced in women. In extreme cases, a blow to the testicle can cause one or both of the testicles to rupture, potentially resulting in sterility. Effects: The clitoris is highly sensitive to impact, as it has more nociceptive pain nerve endings than the testicles, making injuries especially painful. In females, however, this type of injury is rare, as the clitoris is unlikely to be struck, due to its much smaller size and location.
Such injuries are nevertheless a known occurrence in combat sports and have been recorded numerous times in athletic competition. A sufficiently powerful blow to the groin could potentially fracture the pubic bone, resulting in further physical disability.
**F6 disk** F6 disk: F6 disk is a colloquial name for a floppy disk containing a device driver that enables Windows Setup to install Microsoft Windows on storage devices based on SCSI, SATA, or RAID technologies. All versions of the Windows NT family prior to Windows Vista required F6 disks. Starting with Windows Vista, Windows Setup supports loading third-party drivers from USB drives and CD-ROMs. Usage: An F6 disk is named after the manner in which it is used. During the installation process, Windows Setup must load device drivers for the storage system on which Windows will be installed. Microsoft ships Windows with device drivers that support popular storage hardware. However, newer storage technologies inevitably appear after the release of each version of Windows, requiring newer drivers. To use these drivers, Windows Setup prompts its user to press the F6 key shortly after the setup process starts. Hardware manufacturers often provided their device drivers on CD-ROMs. Prior to Windows Vista, however, Windows Setup only supported reading storage drivers from the root directory of a floppy disk. Thus, users had to copy said drivers from their CD-ROMs to an F6 disk. Starting with Windows Vista, Windows Setup runs on a copy of Windows Preinstallation Environment. Thus, it can read device drivers from CD-ROMs and USB flash drives. Alternative: An alternative approach is to slipstream the required drivers into the Windows installation source. Prior to Windows Vista, doing so required third-party software such as nLite. After Windows Vista, Microsoft's DISM utility supports customizing a Windows installation source.
**Homothetic vector field** Homothetic vector field: In physics, a homothetic vector field (sometimes homothetic collineation or homothety) is a projective vector field X which satisfies the condition L_X g_ab = 2c g_ab, where L_X denotes the Lie derivative along X, g_ab is the metric tensor, and c is a real constant. For c = 0 the condition reduces to that of a Killing vector field. Homothetic vector fields find application in the study of singularities in general relativity. They can also be used to generate new solutions of the Einstein field equations by similarity reduction.
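As an illustrative check of the defining condition, the sketch below numerically verifies that the radial field X^a = c x^a is homothetic for the flat Euclidean metric in two dimensions. The field, metric, and the value of c are arbitrary choices made for this demonstration, not part of the source text:

```python
import numpy as np

c = 0.7                        # homothety constant (arbitrary choice for this check)

def g(p):
    """Flat Euclidean metric, g_ab = delta_ab (independent of position)."""
    return np.eye(2)

def X(p):
    """Candidate homothetic field X^a = c * x^a."""
    return c * p

def lie_g(p, h=1e-6):
    """Numerical (L_X g)_ab = X^c d_c g_ab + g_cb d_a X^c + g_ac d_b X^c."""
    n = len(p)
    dg = np.zeros((n, n, n))   # dg[c, a, b] = partial_c g_ab
    dX = np.zeros((n, n))      # dX[a, c]   = partial_a X^c
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        dg[i] = (g(p + e) - g(p - e)) / (2 * h)   # central differences
        dX[i] = (X(p + e) - X(p - e)) / (2 * h)
    return (np.einsum('c,cab->ab', X(p), dg)
            + np.einsum('ac,cb->ab', dX, g(p))
            + np.einsum('bc,ac->ab', dX, g(p)))

print(lie_g(np.array([1.3, -0.4])))   # approx 2*c*g_ab = diag(1.4, 1.4)
```

The three einsum terms are the standard coordinate expression of the Lie derivative of a (0,2) tensor; for this linear field and constant metric the result equals 2c g_ab up to floating-point error.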
**Methanol reformer** Methanol reformer: A methanol reformer is a device used in chemical engineering, especially in the area of fuel cell technology, which can produce pure hydrogen gas and carbon dioxide by reacting a methanol and water (steam) mixture: CH3OH + H2O → CO2 + 3 H2, ΔH(298 K) = +49.2 kJ/mol. Methanol is transformed into hydrogen and carbon dioxide by pressure and heat and interaction with a catalyst. Technology: A mixture of water and methanol with a molar ratio (water:methanol) of 1.0 - 1.5 is pressurized to approximately 20 bar, vaporized and heated to a temperature of 250 - 360 °C. The hydrogen that is created is separated through the use of pressure swing adsorption or a hydrogen-permeable membrane made of polymer or a palladium alloy. There are two basic methods of conducting this process. In the first, the water-methanol mixture is introduced into a tube-shaped reactor where it makes contact with the catalyst. Hydrogen is then separated from the other reactants and products in a later chamber, either by pressure swing adsorption (PSA), or through use of a membrane which the majority of the hydrogen passes through. This method is typically used for larger, non-mobile units. Technology: The other process features an integrated reaction chamber and separation membrane, a membrane reactor. In this relatively new approach, the reaction chamber contains high-temperature, hydrogen-permeable membranes that can be formed of refractory metals, palladium alloys, or a PdAg-coated ceramic. The hydrogen is thereby separated out of the reaction chamber as the reaction proceeds. This purifies the hydrogen and, as the reaction continues, increases both the reaction rate and the amount of hydrogen extracted. With either design, not all of the hydrogen is removed from the product gases (raffinate). Since the remaining gas mixture still contains a significant amount of chemical energy, it is often mixed with air and burned to provide heat for the endothermic reforming reaction.
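To make the stoichiometry concrete, here is a rough, idealized back-of-the-envelope estimate (assuming complete conversion and ignoring raffinate recycling and process losses) of hydrogen yield and reaction heat demand per kilogram of methanol, based on CH3OH + H2O → CO2 + 3 H2 with ΔH ≈ +49.2 kJ per mol of methanol:

```python
# Idealized yield estimate for methanol steam reforming (sketch, not a design calc).
M_CH3OH = 32.04        # g/mol, methanol
M_H2 = 2.016           # g/mol, hydrogen
DH_KJ_PER_MOL = 49.2   # kJ per mol methanol, endothermic

mol_meoh = 1000.0 / M_CH3OH                 # mol of methanol in 1 kg
kg_h2 = mol_meoh * 3 * M_H2 / 1000.0        # 3 mol H2 produced per mol methanol
mj_heat = mol_meoh * DH_KJ_PER_MOL / 1000.0 # reaction heat demand, MJ

print(f"per kg methanol: {kg_h2:.3f} kg H2, {mj_heat:.2f} MJ reaction heat")
```

This gives roughly 0.19 kg of hydrogen and about 1.5 MJ of reaction heat per kilogram of methanol, which is why burning the hydrogen-depleted raffinate is an attractive way to supply the endothermic duty.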
Advantages and disadvantages: Methanol reformers are used as a component of stationary fuel cell systems or hydrogen fuel cell-powered vehicles (see Reformed methanol fuel cell). A prototype car, the NECAR 5, was introduced by DaimlerChrysler in the year 2000. The primary advantage of a vehicle with a reformer is that it does not need a pressurized tank to store hydrogen fuel; instead, methanol is stored as a liquid. The logistic implications of this are great: pressurized hydrogen is difficult to store and produce. This could also help ease the public's concern over the danger of hydrogen and thereby make fuel cell-powered vehicles more attractive. However, methanol, like gasoline, is toxic and, of course, flammable. The cost of the PdAg membrane and its susceptibility to damage by temperature changes remain obstacles to adoption. Advantages and disadvantages: While hydrogen power produces energy without CO2, a methanol reformer creates the gas as a byproduct. Methanol prepared from natural gas and used in an efficient fuel cell, however, releases less CO2 into the atmosphere than gasoline, in a net analysis.
**ZNF444** ZNF444: Zinc finger protein 444 is a protein that in humans is encoded by the ZNF444 gene. Function: This gene encodes a zinc finger protein which activates transcription of a scavenger receptor gene involved in the degradation of acetylated low-density lipoprotein (Ac-LDL) (Adachi H, Tsujimoto M (2002). "Characterization of the human gene encoding the scavenger receptor expressed by endothelial cell and its regulation by a novel transcription factor, endothelial zinc finger protein-2". J Biol Chem 277(27): 24014–21. doi:10.1074/jbc.M201854200. PMID 11978792). This gene is located in a cluster of zinc finger genes on chromosome 19 at q13.4. A pseudogene of this gene is located on chromosome 15. Multiple transcript variants encoding different isoforms have been found for this gene.
**Voltmeter** Voltmeter: A voltmeter is an instrument used for measuring electric potential difference between two points in an electric circuit. It is connected in parallel. It usually has a high resistance so that it takes negligible current from the circuit. Analog voltmeters move a pointer across a scale in proportion to the voltage measured and can be built from a galvanometer and series resistor. Meters using amplifiers can measure tiny voltages of microvolts or less. Digital voltmeters give a numerical display of voltage by use of an analog-to-digital converter. Voltmeter: Voltmeters are made in a wide range of styles, some separately powered (e.g. by battery), and others powered by the measured voltage source itself. Instruments permanently mounted in a panel are used to monitor generators or other fixed apparatus. Portable instruments, usually equipped to also measure current and resistance in the form of a multimeter are standard test instruments used in electrical and electronics work. Any measurement that can be converted to a voltage can be displayed on a meter that is suitably calibrated; for example, pressure, temperature, flow or level in a chemical process plant. Voltmeter: General-purpose analog voltmeters may have an accuracy of a few percent of full scale and are used with voltages from a fraction of a volt to several thousand volts. Digital meters can be made with high accuracy, typically better than 1%. Specially calibrated test instruments have higher accuracies, with laboratory instruments capable of measuring to accuracies of a few parts per million. Part of the problem of making an accurate voltmeter is that of calibration to check its accuracy. In laboratories, the Weston cell is used as a standard voltage for precision work. Precision voltage references are available based on electronic circuits. 
Schematic symbol: In circuit diagrams, a voltmeter is represented by the letter V in a circle, with two emerging lines representing the two points of measurement. Analog voltmeter: A moving coil galvanometer can be used as a voltmeter by inserting a resistor in series with the instrument. The galvanometer has a coil of fine wire suspended in a strong magnetic field. When an electric current is applied, the interaction of the magnetic field of the coil and of the stationary magnet creates a torque, tending to make the coil rotate. The torque is proportional to the current through the coil. The coil rotates, compressing a spring that opposes the rotation. The deflection of the coil is thus proportional to the current, which in turn is proportional to the applied voltage, which is indicated by a pointer on a scale. One of the design objectives of the instrument is to disturb the circuit as little as possible and so the instrument should draw a minimum of current to operate. This is achieved by using a sensitive galvanometer in series with a high resistance, and then the entire instrument is connected in parallel with the circuit examined. Analog voltmeter: The sensitivity of such a meter can be expressed as "ohms per volt", the number of ohms resistance in the meter circuit divided by the full scale measured value. For example, a meter with a sensitivity of 1000 ohms per volt would draw 1 milliampere at full scale voltage; if the full scale was 200 volts, the resistance at the instrument's terminals would be 200000 ohms and at full scale, the meter would draw 1 milliampere from the circuit under test. For multi-range instruments, the input resistance varies as the instrument is switched to different ranges. Analog voltmeter: Moving-coil instruments with a permanent-magnet field respond only to direct current. Measurement of AC voltage requires a rectifier in the circuit so that the coil deflects in only one direction. 
Some moving-coil instruments are also made with the zero position in the middle of the scale instead of at one end; these are useful if the voltage reverses its polarity. Analog voltmeter: Voltmeters operating on the electrostatic principle use the mutual repulsion between two charged plates to deflect a pointer attached to a spring. Meters of this type draw negligible current but are sensitive to voltages over about 100 volts and work with either alternating or direct current. Amplified voltmeter: The sensitivity and input resistance of a voltmeter can be increased if the current required to deflect the meter pointer is supplied by an amplifier and power supply instead of by the circuit under test. The electronic amplifier between input and meter gives two benefits; a rugged moving coil instrument can be used, since its sensitivity need not be high, and the input resistance can be made high, reducing the current drawn from the circuit under test. Amplified voltmeters often have an input resistance of 1, 10, or 20 megohms which is independent of the range selected. A once-popular form of this instrument used a vacuum tube in the amplifier circuit and so was called the vacuum tube voltmeter (VTVM). These were almost always powered by the local AC line current and so were not particularly portable. Today these circuits use a solid-state amplifier using field-effect transistors, hence FET-VM, and appear in handheld digital multimeters as well as in bench and laboratory instruments. These largely replaced non-amplified multimeters except in the least expensive price ranges. Amplified voltmeter: Most VTVMs and FET-VMs handle DC voltage, AC voltage, and resistance measurements; modern FET-VMs add current measurements and often other functions as well. A specialized form of the VTVM or FET-VM is the AC voltmeter. These instruments are optimized for measuring AC voltage. They have much wider bandwidth and better sensitivity than a typical multifunction device. 
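The "ohms per volt" arithmetic described for analog meters above can be sketched in a few lines; the function name and the second example range are illustrative choices, not from the source:

```python
# Loading calculation for an analog voltmeter from its ohms-per-volt sensitivity.
def meter_loading(ohms_per_volt, full_scale_volts):
    resistance = ohms_per_volt * full_scale_volts  # total meter resistance on that range
    current = 1.0 / ohms_per_volt                  # current drawn at full scale, amps
    return resistance, current

# The 1000 ohm/V meter on a 200 V range from the text:
r, i = meter_loading(1000, 200)
print(r, i)   # 200000 ohms, 0.001 A (1 mA)
```

A higher-sensitivity 20000 ohm/V multimeter on the same range would present 4 megohms and draw only 50 microamps, which is why sensitivity matters when measuring high-impedance circuits.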
Digital voltmeter: A digital voltmeter (DVM) measures an unknown input voltage by converting the voltage to a digital value and then displays the voltage in numeric form. DVMs are usually designed around a special type of analog-to-digital converter called an integrating converter. Digital voltmeter: DVM measurement accuracy is affected by many factors, including temperature, input impedance, and DVM power supply voltage variations. Less expensive DVMs often have input resistance on the order of 10 MΩ. Precision DVMs can have input resistances of 1 GΩ or higher for the lower voltage ranges (e.g. less than 20 V). To ensure that a DVM's accuracy is within the manufacturer's specified tolerances, it must be periodically calibrated against a voltage standard such as the Weston cell. Digital voltmeter: The first digital voltmeter was invented and produced by Andrew Kay of Non-Linear Systems (and later founder of Kaypro) in 1954. Simple AC voltmeters use a rectifier connected to a DC measurement circuit, which responds to the average value of the waveform. The meter can be calibrated to display the root mean square value of the waveform, assuming a fixed relation between the average value of the rectified waveform and the RMS value. If the waveform departs significantly from the sine wave assumed in the calibration, the meter will be inaccurate, though for simple wave shapes the reading can be corrected by multiplying by a constant factor. Early "true RMS" circuits used a thermal converter that responded only to the RMS value of the waveform. Modern instruments calculate the RMS value by electronically calculating the square of the input value, taking the average, and then calculating the square root of the value. This allows accurate RMS measurements for a variety of waveforms.
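The calibration assumption behind average-responding AC meters can be illustrated with a short simulation. The waveforms and sample count below are arbitrary choices; the scale factor pi/(2*sqrt(2)) ≈ 1.1107 is the standard sine-wave form factor such a meter applies to the rectified average:

```python
import math

N = 200000
ts = [(k + 0.5) / N for k in range(N)]   # one period, midpoint-sampled

def meter_vs_true_rms(wave):
    """Return (true RMS, reading of a sine-calibrated average-responding meter)."""
    s = [wave(t) for t in ts]
    true_rms = math.sqrt(sum(v * v for v in s) / N)
    # rectify, average, then apply the sine-wave calibration factor
    displayed = (sum(abs(v) for v in s) / N) * math.pi / (2 * math.sqrt(2))
    return true_rms, displayed

sine = lambda t: math.sin(2 * math.pi * t)
square = lambda t: 1.0 if t < 0.5 else -1.0

print(meter_vs_true_rms(sine))    # both about 0.707: exact for sine waves
print(meter_vs_true_rms(square))  # true RMS 1.0, meter reads about 1.111 (~11% high)
```

For the square wave the displayed value is the constant-factor error the text mentions; dividing the reading by 1.1107 recovers the true RMS for that wave shape.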
**Electronic Dream Plant** Electronic Dream Plant: Electronic Dream Plant (EDP) was a small British synthesizer manufacturer, active during the late 1970s and early 1980s. At the time, their products were not particularly successful commercially. In later years, products like the "WASP" became prized by collectors for their unique sound, and later synthesizer companies have successfully copied some of their design elements. Background: The company was formed in 1977 by musician Adrian Wagner and electronics designer Chris Huggett. The pair wanted to design an inexpensive instrument that would trade off cosmetic appeal and user interface against cost and sound quality. They realised that transistor-transistor logic (TTL) could provide a stable sound source, and managed to persuade Argent's Keyboards, a London music shop, to provide £10,000 to put the synth into production. Products: Wasp Launched in 1978, the "Wasp" was EDP's best-received product. It was named after its black-and-yellow colour scheme. For cost reasons, it did not have a mechanical keyboard; instead, it used flat conductive copper plates, hidden under a silk-screened vinyl sticker. This alienated some players, who thought the instrument lacked the expression possible with a real keyboard. It was also difficult to use in live settings, and according to Gerald Casale of Devo, the synth would play itself when exposed to sweat. Despite these flaws, the Wasp was fairly advanced technologically. It was one of the first commercially available synthesisers to adopt digital technology, which at the time was just beginning to become standard. It also utilised a proprietary system for connecting several Wasp synthesisers together, predating MIDI by several years. The Wasp had a tiny internal speaker, which gave the instrument a distinctive sound, though it could also be amplified externally. Eurythmics' David A.
Stewart recorded the Wasp by miking its internal speaker. Architecturally, the Wasp is a dual digital oscillator synth, with dual envelopes and a single, switchable (low/band/high-pass) CMOS-based filter. Products: Wasp Deluxe The last Wasp revision was named the Deluxe. It offered virtually the same circuitry (but on a redesigned PCB) as the other two Wasps, with the addition of a conventional moving keyboard. The Deluxe also featured an external audio input to its filter, and mix controls for the oscillators and external input levels. Rumour has it that around 80 Deluxes were produced before the demise of EDP. Products: The Deluxe commands an increased price with collectors, presumably because of its further increased rarity and its improved playability. Products: Spider The Spider was a 252-step digital sequencer (most analogue sequencers at the time had 8 or 16 steps). The unit was designed by Huggett and Oxford University engineering technician Steve Evans. It was built in the same style as the standard Wasp, outputting both LINK (to drive EDP products) and CV/gate information for use with standard analogue synths. It was programmable in real time or by individual step entry. Products: Gnat The Gnat was a single-oscillator version of the Wasp. Anthony Harrison-Griffin, an independent product designer, was responsible for the design and build of the Gnat, drawing on the basic design and colour scheme of the already well-established Wasp. Products: Caterpillar The Caterpillar was a 3-octave master keyboard which could control up to four Wasps or Gnats using EDP's proprietary digital control system. It was released in 1980, after Huggett had left the company, though he contributed to the design as a freelancer. The unit suffered because each Wasp typically had a distinctive sound owing to the variable quality of the components used to build it. Products: Keytar A "heavily modded Wasp that was built into a guitar form" which, although prototyped, never went into production.
Legacy: The company suffered financial losses during 1979, leading to Huggett leaving the company early the following year. It filed for bankruptcy after debts to component suppliers increased beyond a sustainable level. Wagner formed a follow-up company, Electronic Dream Plant (Oxford), which produced the Gnat. He was ousted from the company in 1981 and formed Wasp Synthesizers, which produced a very limited run of Special versions of the Wasp and Gnat. This breakaway company lasted less than a year and its products are even rarer. Legacy: Wasp Special This 'wooden' Wasp used the same membrane keyboard as the standard version, but with a new black and gold colour scheme, and the loss of the internal batteries and speaker. An internal mains transformer was added. Gnat Special Again, as with the Wasp Special, a wooden case, a different colour scheme, and the loss of the internal speaker. Legacy: EDP closed in 1982. Huggett went on to co-form another British company, OSC, with Paul Wiffen, producing the OSCar synthesizer in collaboration with Anthony Harrison-Griffin, an independent product designer responsible for the unique look and build of the OSCar. In 1992, Novation developed a new synth with Huggett based on the legacy of the Wasp and Gnat, the Bass Station. This was expanded and redeveloped in the 2010s as the Bass Station 2, a widely available monosynth competing with instruments such as the Arturia MiniBrute and the Korg MS-20 mini. The Bass Station 2 shares key features with the EDP Wasp synthesizer. Later Novation products like the PEAK continued to embrace the philosophy of the digital "Oxford Oscillator" pioneered by EDP. Legacy: In 2017, the Jasper DIY became the first real Wasp clone with the 'touch' keyboard and similar electronic architecture. In 2019, Behringer marketed a modernized hardware clone of the WASP, sans keyboard, at a very competitive price point.
Related companies: Oxford Synthesiser Company (OSCar); Groove Electronics (Stinger, essentially two Wasp voices in a MIDI-controlled case, and the MIDI2CV MIDI-to-Link converter; note: contrary to popular belief, the Stinger was not made from two butchered Wasp boards); Kenton Electronics (MIDI-to-Link daughter board for their range of MIDI-CV units); Novation, co-developers with Chris Huggett since the 1990s.
**Pentax DA 14mm lens** Pentax DA 14mm lens: The Pentax smc DA 14mm F2.8 ED (IF) is an interchangeable camera lens announced by Pentax on February 2, 2004.
**Drama Desk Award for Outstanding Lighting Design for a Musical** Drama Desk Award for Outstanding Lighting Design for a Musical: The Drama Desk Award for Outstanding Lighting Design for a Musical is an annual award presented by the Drama Desk in recognition of achievements in the theatre among Broadway, Off-Broadway and Off-Off-Broadway productions. See also: Laurence Olivier Award for Best Lighting Design; Tony Award for Best Lighting Design
**DVD-R DL** DVD-R DL: DVD-R DL (DL stands for Dual Layer), also called DVD-R9, is a derivative of the DVD-R format standard. DVD-R DL discs hold 8.5 GB by utilizing two recordable dye layers, each capable of storing a little less than the 4.7 gigabytes (GB) of a single-layer disc, almost doubling the total disc capacity. Discs can be read in many DVD devices (older units are less compatible) and can only be written using DVD-R DL compatible recorders. It is part of optical disc recording technologies for digital recording to optical disc. Capacities: Compatibility DVD-R DL has compatibility issues with legacy DVD-ROM drives, known as pickup head overrun. To avoid this issue, the two layers of the disc need to be equally recorded, but this conflicts with the sequential nature of DVD recording. The DVD Forum, under Pioneer's lead, therefore developed a technology known as Layer Jump Recording (LJR), which incrementally records smaller sections of each layer to maintain compatibility with DVD-ROM drives. DVD-R DL media has been discontinued by most manufacturers; DVD+R DL now dominates the market for dual-layered media. Dual layer recording: Dual layer recording allows DVD-R and DVD+R discs to store significantly more data, up to 8.5 GB per disc, compared with 4.7 GB for single-layer discs. DVD-R DL was developed for the DVD Forum by Pioneer Corporation; DVD+R DL (formally known as Double Layer) was developed for the DVD+RW Alliance by Philips and Mitsubishi Kagaku Media (MKM). A dual layer disc differs from its usual DVD counterpart by employing a second physical layer within the disc itself. A drive with dual layer capability accesses the second layer by shining the laser through the first, semi-transparent layer. The layer change can exhibit a noticeable pause in some DVD players, up to several seconds.
This caused more than a few viewers to worry that their dual layer discs were damaged or defective, with the end result that studios began listing a standard message explaining the dual layer pausing effect on all dual layer disc packaging. Dual layer recording: The stacked, shine-through arrangement of layers does come with a small increase in error rate due to the reduced reflectivity of the written layers, and a similar small risk of crosstalk interference. One of the techniques employed to help compensate for these reliability shortcomings is a 10% increase in the minimum mark (digital 0 or 1) length on the disc, with a corresponding 10% increase in rotation speed and 10% reduction in gross recordable capacity, accounting for the lower capacity of a single-sided dual-layer DVD at 8.5 billion bytes, versus a double-sided, single-layer DVD at 9.4 billion (for 12 cm discs). Detail differences in formatting and file structure mean the "usable" data area capacity does not change by exactly this much, but for all intents and purposes a DVD-R DL has effectively 20/11ths the capacity of a DVD-R SL, and the same holds for +R, commercially pressed, and 8 cm discs. Dual layer recording: DVD recordable discs supporting this technology are backward compatible with some existing DVD players and DVD-ROM drives. Many current DVD recorders support dual-layer technology, and the price is now comparable to that of single-layer drives, though the blank media remains more expensive. The recording speeds reached by dual-layer media are still well below those of single-layer media. Dual layer recording: There are two modes for dual layer orientation. With parallel track path (PTP), used on DVD-ROM, both layers start at the inside diameter (ID) and end at the outside diameter (OD) with the lead-out. With opposite track path (OTP), used on DVD-Video, the lower layer starts at the ID and the upper layer starts at the OD, where the first layer ends. The two layers share one lead-in and one lead-out.
Only blank disks and drives that support the latter mode are currently available. Recordable DVD capacity comparison: For comparison, the table below shows storage capacities of the four most common DVD recordable media, excluding DVD-RAM. (SL) stands for standard single-layer discs, while DL denotes the dual-layer variants. See articles on the formats in question for information on compatibility issues.
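The capacity arithmetic quoted above (marks 10% longer per layer, hence a 20/11 capacity ratio between dual-layer and single-layer discs) can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# Each DL layer uses marks 10% longer than single-layer, so it stores
# 1/1.1 = 10/11 of an SL layer's data.
SL = Fraction(47, 10)             # 4.7 GB single-layer capacity
per_layer = SL * Fraction(10, 11) # capacity of one dual-layer layer
DL = 2 * per_layer                # two such layers per side

print(float(DL))                  # about 8.545 GB, marketed as "8.5 GB"
print(DL / SL)                    # 20/11, the ratio quoted in the text
```

Using Fraction avoids floating-point rounding, so the 20/11 ratio comes out exact rather than approximately 1.818.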
**XHTML+SMIL** XHTML+SMIL: XHTML+SMIL is a W3C Note that describes an integration of SMIL semantics with XHTML and CSS. It is based generally upon the HTML+TIME submission. The language is also known as HTML+SMIL.The XHTML+SMIL language profile shares many modules with the standard SMIL language profiles, including the core modules of timing, media objects, linking, animation, transitions and content control. Where the other SMIL profiles use a language-specific layout model, XHTML+SMIL leverages the HTML flow layout and CSS positioning model familiar to many web authors. The semantics of integrating SMIL animation with the CSS model were also adopted in SVG. XHTML+SMIL: XHTML+SMIL was issued as a W3C Note rather than a recommendation as there was only one implementation of the language profile (in MSIE).
**OXGR1** OXGR1: 2-Oxoglutarate receptor 1 (OXGR1), also known as cysteinyl leukotriene receptor E (CysLTE) and GPR99, is a protein that in humans is encoded by the OXGR1 (also termed GPR99) gene. The gene product has recently been nominated as a receptor not only for 2-oxoglutarate (see alpha-ketoglutaric acid) but also for the three cysteinyl leukotrienes (CysLTs), particularly leukotriene E4 (LTE4) and, to far lesser extents, LTC4 and LTD4. Recent studies implicate GPR99 as a cellular receptor which is activated by LTE4, thereby causing these cells to contribute to mediating various allergic and hypersensitivity responses. History: In 2001, a gene projected to code for a G protein-like receptor protein was reported; the gene's apparent protein product was classified as an orphan receptor (i.e., a receptor whose activating ligand and function were unknown) and named GPR80. The projected amino acid sequence of the protein encoded by the GPR80 gene bore similarities to a purinergic receptor, P2Y1, and therefore might, like P2Y1, be a receptor for purine compounds. Shortly thereafter, a second report found this same gene, indicated that it coded for a G protein receptor with its amino acid sequence similarities closest to purinergic receptors GPR91 and P2Y1, and named the gene and its protein GPR99. While the latter report found that a large series of purinergic nucleotides, other nucleotides, and derivatives of these compounds did not activate GPR99-bearing cells, a third report in 2004 found that GPR99-bearing cells bound and responded to two purines, adenosine and adenosine monophosphate; it nominated GPR99 as a true purinergic receptor and renamed it P2Y15. However, a review of these studies in the same year by members of the International Union of Pharmacology (IUPHAR) Subcommittee for P2Y receptor nomenclature and classification decided that GPR80/GPR99 is not a P2Y receptor for adenosine, AMP or other nucleotides.
Again in 2004, another report found that GPR99-bearing cells responded to alpha-ketoglutarate. This report was accepted by IUPHAR, and the gene and its protein were renamed OXGR1. Finally, in 2013, GPR99-bearing cells were found to bind and respond to CysLTs. The latter finding, while attracting further studies and of potential clinical importance, has not yet led to a renaming of GPR99 or its protein product. Gene and product: GPR99 (OXGR1) is localized to human chromosome 13 at position 13q32.2; it codes for a cellular G protein-coupled receptor linked primarily to G protein heterotrimers containing the Gq subunit; when bound to one of its activating ligands, the GPR99 protein stimulates cellular pathways (see Gq alpha subunit#Function) that lead to cell function. Activating ligands: GPR99 appears to be the receptor for alpha-ketoglutarate (AKG) and CysLTs. CysLTs and AKG have the following relative potencies in binding to GPR99-bearing cells: LTE4 >> LTC4 = LTD4 > AKG; LTE4 is able to stimulate responses in these cells at concentrations as low as picomoles/liter. Inhibiting ligands: GPR99 is inhibited by montelukast, a well-known and clinically useful inhibitor of cysteinyl leukotriene receptor 1 (CysLTR1); this drug binds to CysLTR1, thereby blocking the binding and action of LTD4, LTC4, and LTE4. It is presumed to act similarly to block the actions of these cysteinyl leukotrienes on GPR99. It is not known if other CysLTR1 inhibitors (see Cysteinyl leukotriene receptor 1#Clinical significance) can mimic montelukast in blocking GPR99. Expression: Based on their content of GPR99 mRNA, GPR99 is expressed in human kidney, placenta, fetal brain, and tissues involved in allergic and hypersensitivity reactions such as the lung trachea, salivary glands, eosinophils, mast cells derived from umbilical cord blood, and nasal mucosa, particularly the vascular smooth muscle of the latter tissue. In mice, Gpr99 mRNA is expressed in kidneys, testes, and smooth muscle.
Function: GPR99 binds and is activated by LTE4 at concentrations far lower than the other major CysLT receptors, cysteinyl leukotriene receptor 1 (CysLTR1) and cysteinyl leukotriene receptor 2 (CysLTR2), both of which appear to be physiological receptors for LTD4 and LTC4 but not LTE4 (see Cysteinyl leukotriene receptor 1#Function). This suggests that the actions of LTE4 are mediated, at least to a large extent, by GPR99. Several findings support this notion: a) pretreatment of guinea pig trachea and human bronchial smooth muscle with LTE4, but not LTC4 or LTD4, enhances their contraction responses to histamine; b) LTE4 is as potent as LTC4 and LTD4 in eliciting vascular leakage when injected into the skin of guinea pigs and humans; c) inhalation of LTE4, but not LTD4, by asthmatic subjects caused the accumulation of eosinophils and basophils in their bronchial mucosa; d) mice engineered to lack Cysltr1 and Cysltr2 receptors exhibited edema responses to the intradermal injection of LTC4, LTD4, and LTE4, but only LTE4 proved more potent (by a factor of 64-fold) in these mice than in wild-type mice; and e) mice engineered to lack all three Cysltr1, Cysltr2, and Gpr99 receptors showed no dermal edema responses to the injection of LTC4, LTD4, or LTE4. Mice deficient in Gpr99 (i.e. Oxgr1-/- gene knockout mice) develop spontaneous otitis media (82% penetrance) with many characteristics of the human disease; while the underlying cause of this development is unclear, the Oxgr1-/- mouse is proposed to be a good model to study and relate to human ear pathology. GPR99 also appears to be involved in the adaptive regulation of bicarbonate (HCO3-) secretion and salt (NaCl) reabsorption in mouse kidneys undergoing acid-base stress: the kidneys of GPR99 gene knockout mice did not respond to alpha-ketoglutaric acid by upregulating bicarbonate/NaCl exchange and exhibited a reduced ability to maintain acid-base balance.
Clinical significance: Montelukast is in use to treat various conditions including asthma, exercise-induced bronchoconstriction, allergic rhinitis, primary dysmenorrhoea (i.e. dysmenorrhoea not associated with known causes; see dysmenorrhea#Causes), and urticaria. It has been presumed that this drug's beneficial effects in these diseases are due to its well-known ability to act as a receptor antagonist for the cysteinyl leukotriene receptor 1 (CysLTR1), i.e. it binds to but does not activate this receptor, thereby interfering with the provocative actions of LTD4, LTC4, and LTE4 by blocking their binding to CysLTR1 (the drug does not block the cysteinyl leukotriene receptor 2) (see cysteinyl leukotriene receptor 1#Clinical significance). The more recently discovered ability of this drug to block the stimulation of GPR99 by LTE4 and LTD4 in GPR99-bearing cells suggests that montelukast's beneficial effects in these conditions might reflect its ability to block not only CysLTR1 but also GPR99.
**Hatsune Miku: Project DIVA Extend** Hatsune Miku: Project DIVA Extend: Hatsune Miku: Project DIVA Extend (初音ミク -Project DIVA- extend) is a 2011 rhythm game created by Sega and Crypton Future Media for the PlayStation Portable. The game is an expansion to the 2010 video game Hatsune Miku: Project DIVA 2nd, and was first released on November 10, 2011 in Japan with no international release. Like the original, the game primarily makes use of Vocaloids, a series of singing synthesizer software, and songs created using these Vocaloids, most notably those of the virtual diva Hatsune Miku. Rock band Gacharic Spin served as motion capture models. Gameplay: As it is an expansion to Hatsune Miku: Project DIVA 2nd, the game features exactly the same gameplay. The only differences between the two games are the selection of modules and songs available. Gameplay: Clearing every song in the game on Normal difficulty will unlock special voiceover versions of the opening movies of Hatsune Miku: Project DIVA 2nd and Hatsune Miku: Project DIVA Extend, featuring the voices of Saki Fujita as Hatsune Miku, Asami Shimoda as the Kagamine twins, Rin and Len, and Yū Asakawa as Megurine Luka, all in their original human voices. Song list: There are a total of 50 songs available in Hatsune Miku: Project DIVA Extend; 45 songs (14 new and 31 old) are obtained normally by playing through the game, and 5 songs are only available through Edit Mode. Songs with a light blue background are returning songs with charts ported over from Hatsune Miku: Project DIVA 2nd. Songs with a gray background can only be played in Diva Room and Edit Mode. Dreamy Theater Extend: Similar to Project DIVA and Project DIVA 2nd, a companion game for the PlayStation 3, Hatsune Miku: Project DIVA Dreamy Theater Extend, was released digitally via the Japanese PlayStation Store on September 13, 2012.
Like its predecessors, the game features updated high-definition visuals over its respective PSP game while offering the same content and PlayStation Trophies support, and requires the player to connect the PSP (with Project DIVA Extend) to the PS3 via USB to access the content in the game. In addition, the game supports stereoscopic 3D for the first time in the series. Dreamy Theater Extend: Live Concert Mode song list Dreamy Theater Extend features a new mode called Live Concert Mode, which allows players to watch music videos of eleven songs being performed at a virtual reconstruction of the stage of Tokyo Dome City Hall, where the Hatsune Miku Concert: Saigo no Miku no Hi Kanshasai (初音ミクコンサート 最後のミクの日感謝祭) concert was held in real life on March 9, 2012 as the second part of Miku no Hi Dai Kanshasai (ミクの日大感謝祭); the characters also perform exactly as they did in the concert. While watching a video, the camera can be controlled to change viewing angles using the analogue sticks and shoulder buttons.
**Junctional diversity** Junctional diversity: Junctional diversity describes the DNA sequence variations introduced by the imprecise joining of gene segments during the process of V(D)J recombination. This process of V(D)J recombination has a vital role in the vertebrate immune system, as it is able to generate the huge repertoire of different T-cell receptor (TCR) and immunoglobulin molecules required for pathogen antigen recognition by T cells and B cells, respectively. Process: Junctional diversity arises during somatic recombination, or V(D)J recombination, in which the different variable gene segments (those involved in antigen recognition) of TCRs and immunoglobulins are rearranged and unused segments removed. This introduces double-strand breaks between the required segments. These ends form hairpin loops and must be joined together to form a single strand (summarised in diagram, right). This joining is a very inaccurate process that results in the variable addition or subtraction of nucleotides and thus generates junctional diversity. Generation of junctional diversity starts as the recombination activating gene proteins RAG1 and RAG2, along with DNA repair proteins such as Artemis, carry out single-stranded cleavage of the hairpin loops and the addition of a series of palindromic 'P' nucleotides. Subsequently, the enzyme terminal deoxynucleotidyl transferase (TdT) adds further random 'N' nucleotides. The newly synthesised strands anneal to one another, but mismatches are common. Exonucleases remove these unpaired nucleotides, and the gaps are filled by the DNA synthesis and repair machinery. Exonucleases may also shorten the junction, though this process is still poorly understood. Junctional diversity is liable to cause frame-shift mutations and thus the production of non-functional proteins; there is therefore considerable waste involved in this process.
**Microsoft Search Server** Microsoft Search Server: Microsoft Search Server (MSS) was an enterprise search platform from Microsoft, based on the search capabilities of Microsoft Office SharePoint Server. MSS shared its architectural underpinnings with the Windows Search platform for both the querying engine and the indexer. Microsoft Search Server was once known as SharePoint Server for Search. Microsoft Search Server was made available as Search Server 2008, which was released in the first half of 2008. In 2010, Search Server 2010 (http://www.microsoft.com/enterprisesearch/searchserverexpress/en/us/technical-resources.aspx) became available, including a free version named Search Server 2010 Express. The express edition offered the same feature set as the commercial edition, including no limitation on the number of files indexed; however, it was limited to a stand-alone installation and could not be scaled out to a cluster. A release candidate of Search Server Express 2008 was made available on November 7, 2007 and was scheduled to release to manufacturing (RTM) in sync with Search Server 2008. Microsoft Search Server: A more detailed comparison of the feature differences between Search Server 2008, Search Server 2010, and Search Server 2010 Express can be found at http://www.microsoft.com/enterprisesearch/searchserverexpress/en/us/compare.aspx Overview: MSS provided a search center interface to present the UI for querying. The interface was available as a web application, accessed using a browser. The query could either be a simple query or use advanced operators as defined by the AQS syntax. The matched files were listed along with a snippet from each file, with the search terms highlighted, sorted by relevance. The relevance-determination algorithm was developed by Microsoft Research and Windows Live Search. MSS also showed definitions of the search terms, where applicable, as well as suggesting corrections for misspelled terms.
Duplicate results were collapsed together. Alerts could be set for specific queries, whereby the user was informed of changes to the results of a query via email or RSS. The search center UI used the ASP.NET web part infrastructure and could be customized using either Microsoft Visual Studio or Microsoft Office SharePoint Designer. Custom actions could be defined on a per-filetype basis as well. MSS could index any data source as long as an indexing connector for the data source was provided. The indexing connector included protocol handlers, metadata handlers and iFilters to enumerate the data items in the source and extract metadata from them. If a file type in the source had a corresponding iFilter, it was also used to extract the text of the file for full-text indexing. The handlers and iFilters MSS used were the same as those used by SharePoint, Microsoft SQL Server and Windows Search. The data sources to be indexed were identified by their URIs and had to be configured prior to indexing. The indexer updated the search index as soon as an item was indexed (continuous propagation), so that items could be queried against even before the indexing crawl was complete. MSS could also federate searches to other search services (including SharePoint and web search servers) that supported the OpenSearch protocol. Federated locations could be serialized to a .fld file. The administration UI, which was also presented as a web application, could be used to review statistics such as most frequent queries, top destination hits and click-through rates, as well as to fine-tune relevancy settings, indexing policies (including inclusion and exclusion filters) and schedules, and to set up a cluster of servers. It could also be used to back up either the configuration state or the search indices. ACLs could also be defined to limit the search results according to the rights of the user initiating the query.
**SystemC AMS** SystemC AMS: SystemC AMS is an extension to SystemC for analog, mixed-signal and RF functionality. The SystemC AMS 2.0 standard was released on April 6, 2016 as IEEE Std 1666.1-2016. Language features: ToDo: description MoC - Model of Computation A model of computation (MoC) is a set of rules defining the behavior and interaction between SystemC AMS primitive modules. SystemC AMS defines the following models of computation: timed data flow (TDF), linear signal flow (LSF) and electrical linear networks (ELN). TDF - Timed Data Flow In the timed data flow (TDF) model, components exchange analogue values with each other on a periodic basis at a chosen sampling rate, such as every 10 microseconds. By the sampling theorem, this would be sufficient to convey signals of up to 50 kHz bandwidth without aliasing artefacts. A TDF model defines a method called processing() that is invoked at the appropriate rate as simulation time advances. A so-called cluster of models shares a static schedule of when they should communicate. This sets the relative ordering of the calls to the processing() methods of each TDF instance in the cluster. The periodic behaviour of TDF allows it to operate independently of the main SystemC event-driven kernel used for digital logic. Language features: ELN - Electrical Linear Networks The SystemC electrical linear networks (ELN) library provides a set of standard electrical components that enable SPICE-like simulations to be run. The three basic components, resistors, capacitors and inductors, are of course available. Further voltage-controlled variants, such as a transconductance amplifier (voltage-controlled current generator), enable most FET and other semiconductor models to be readily created. Language features: Current flowing in ELN networks of resistors can be solved with a suitable simultaneous equation solver. These are called the nodal equations.
Where time-varying components, such as capacitors and inductors, are included, Euler's method is typically used to model them. Euler's method is a simple approach to solving finite-difference time-domain (FDTD) problems. For instance, to simulate a capacitor charging through a resistor, a timestep delta_t is selected that is typically about one percent of the time constant, and a forward-difference iteration is executed. Language features: The per-step error in Euler's method decreases quadratically with smaller time steps (the accumulated error decreases only linearly), but an overly small time step results in a slow simulation for a complex finite-element simulation. This is not a problem in many situations where part of a complex SoC or plant controller is run alongside a plant model that has just a few state variables, such as a car transmission system, because there are orders of magnitude of difference between the time constants (e.g. a 100 MHz clock versus a 1 ms shortest inertial time constant). Language features: Simulating the analogue subsystem inside the RTL simulator then makes sense. Moreover, most plant control situations use closed-loop negative feedback, with the controller being just as good at managing a slightly errored plant model as the real model. Language features: Under the ELN formalism, the SystemC initialisation and simulation cycles are extended to support solving nodal flow equations. The nodal equations are generally solved iteratively rather than using direct methods such as Gaussian elimination or methods based on matrix inverses. Iterative methods tend to have greater stability and are fast when the state has only advanced slightly from the previous time step. When the kernel de-queues a time-advancing event from the event queue, the simulation time is advanced. The analogue part of the simulator maintains a time quantum beyond which the nodal equations need to be re-computed. This quantum is dynamically adjusted depending on the behaviour of the equations.
If the equations are "bendy", meaning that linear extrapolation using Euler's method over the quantum would lead to too much error, the time step is reduced; otherwise it can be gradually enlarged at each step. Overall, two forms of iteration are needed: the first is iteration within a time step to solve the nodal equations to sufficient accuracy; the second is between time steps. In a simple implementation, once simulation time has advanced beyond the Euler quantum, the analogue sub-system is re-solved. If the extrapolation errors are too great, the simulator must go back to the last time step and simulate forward again using a smaller analogue quantum. This mechanism is also the basis of SPICE simulations. Language features: Each analogue variable that is the argument to a "cross", or other analogue sensitivity, is then examined to see if new digital-domain work has been triggered. If so, new events are injected on the discrete event queue for the current simulation time. Language features: LSF - Linear Signal Flow The SystemC linear signal flow (LSF) library provides a set of primitive analogue operators, such as adders and differentiators, that enable all basic structures found in differential equations to be constructed in a self-documenting and executable form. The advantage of constructing the system from a standard operator library is that "reflection" is possible: other code can analyse the structure and perform analytic differentiation, summation, integration and other forms of analysis, such as sensitivity analysis to determine a good time step. Language features: This would not be possible for an implementation using ad-hoc coding. In general programming, reflection refers to a program being able to read its own source code.
Ports: TDF in/outport definitions: sca_tdf::sca_in<PortType> and sca_tdf::sca_out<PortType>. TDF converter in/outport definitions: sca_tdf::sc_in<PortType> (DE → TDF inport) and sca_tdf::sc_out<PortType> (TDF → DE outport). ELN terminal definition: sca_eln::sca_terminal. Nodes: sca_eln::sca_node (ELN node) and sca_eln::sca_node_ref (ELN reference node). Cluster: ToDo: description Tracing Example code: TDF (Timed Data Flow) 1st-order low-pass model, with linear transfer function H(s) = 1 / (1 + s/(2·π·f_cut)). ToDo: description ELN (Electrical Linear Networks) 1st-order low-pass netlist. LSF (Linear Signal Flow) netlist. History: A SystemC AMS study group was founded in 2002 to develop and maintain analog and mixed-signal extensions to SystemC, and to initiate an OSCI (Open SystemC Initiative) SystemC AMS working group. The study group made initial investigations and specified and implemented a SystemC extension to demonstrate the feasibility of the approach. In 2006, a SystemC AMS working group was founded which continued the work of the study group inside OSCI; it now continues work on SystemC AMS within the Accellera Systems Initiative, resulting in the AMS 1.0 standard in 2010. After the release of the Accellera SystemC AMS 2.0 standard in 2013, the standard was transferred to the IEEE Standards Association in 2014 for further industry adoption and maintenance. The SystemC AMS standard was released on April 6, 2016 as IEEE Std 1666.1-2016. COSEDA Technologies provides COSIDE, the first commercially available design environment based on the SystemC AMS standard. History: SystemC AMS Standard IEEE 1666.1-2016 SystemC AMS Proof-of-Concept Download
**Paribus** Paribus: Paribus is an American company and creator of the price-tracking app of the same name, which syncs with a user's email account to scan for receipts and negotiates with online companies to refund the difference if there is a price drop shortly after a purchase. History: Paribus was founded in 2014 by Eric Glyman and Karim Atiyeh. The company is based in Brooklyn, New York. The name is derived from the Latin phrase ceteris paribus, meaning "all other things being equal." Glyman built Paribus to simplify the process of receiving a refund following a price drop; such refunds can be complicated to track and often go unclaimed. He and fellow Harvard University alumnus Atiyeh conceived of the idea and started working on the concept in the summer of 2013. After launching in beta in September 2014, the app launched publicly at TechCrunch Disrupt New York on May 5, 2015. Paribus released its iOS app on August 6, 2015, and its Android app on April 28, 2016. In October 2015, Paribus announced that it had raised $2.1 million in seed funding, following its participation in the Y Combinator summer program and Startup Battlefield at TechCrunch Disrupt NY. The funding round was led by General Catalyst Partners, and also included Greylock Partners, Foundation Capital, Soma Capital and Mick Johnson, Facebook's former director of product. In October 2016, it was announced that Paribus had been acquired by Capital One. Since then, Paribus has continued to launch new products to help save users time and money and has reportedly found more than $20,000,000 in savings for its over 3,000,000 users. Software: Paribus connects to a user's email account to scan messages for receipts from e-commerce retailers. The app tracks the user's purchases and, if an item goes on sale shortly after the purchase, Paribus contacts customer service departments in the user's name to file a price-adjustment claim and request a refund of the difference.
It is also able to detect coupons or promo codes that could have been applied to a purchase and have the coupon redeemed retroactively. The app is free; since the acquisition by Capital One closed, Paribus users keep 100% of the difference. It is available on the iPhone, iPad, iPod Touch, and on Android smartphones and tablets. At its launch, the service worked with 18 major retailers, including Amazon.com, Best Buy, Walmart, Target, Macy's and Newegg. This list had grown to 29 retailers in the United States by December 2017. The company states that the average user saves between $60 and $100 per year. As of October 2016, it had over 700,000 users.
**Polar exploration** Polar exploration: Polar exploration is the process of exploration of the polar regions of Earth – the Arctic region and Antarctica – particularly with the goal of reaching the North Pole and South Pole, respectively. Historically, this was accomplished by explorers making often arduous travels on foot or by sled in these regions, known as a polar expedition. More recently, exploration has been accomplished with technology, particularly with satellite imagery. Polar exploration: From 600 BC to 300 BC, Greek philosophers theorized that the planet was a spherical Earth with North and South polar regions. By 150 AD, Ptolemy had published Geographia, which notes a hypothetical Terra Australis Incognita. However, due to harsh weather conditions, the poles themselves would not be reached for centuries after that. When they finally were reached, the two achievements came only a few years apart. Polar exploration: There are two claims, both disputed, about who was the first person to reach the geographic North Pole. Frederick Cook, accompanied by two Inuit men, Ahwelah and Etukishook, claimed to have reached the Pole on April 21, 1908, although this claim is generally doubted. On April 6, 1909, Robert Peary claimed to be the first person in recorded history to reach the North Pole, accompanied by his employee Matthew Henson and four Inuit men, Ootah, Seegloo, Egingway, and Ooqueah. Norwegian explorer Roald Amundsen had planned to reach the North Pole by means of an extended drift in an icebound ship. He obtained the use of Fridtjof Nansen's polar exploration ship Fram, and undertook extensive fundraising. Preparations for this expedition were disrupted when Cook and Peary each claimed to have reached the North Pole. Amundsen then changed his plan and began to prepare for a conquest of the geographic South Pole; uncertain of the extent to which the public and his backers would support him, he kept this revised objective secret.
When he set out in June 1910, he led even his crew to believe they were embarking on an Arctic drift, and revealed their true Antarctic destination only when Fram was leaving their last port of call, Madeira. Polar exploration: Amundsen's South Pole expedition, with Amundsen and four others, arrived at the pole on 14 December 1911, five weeks ahead of a British party led by Robert Falcon Scott as part of the Terra Nova expedition. Amundsen and his team returned safely to their base, and later learned that Scott and his four companions had died on their return journey.
**Animal-borne bomb attacks** Animal-borne bomb attacks: Animal-borne bomb attacks are the use of animals as delivery systems for explosives. The explosives are strapped to a pack animal such as a horse, mule or donkey, and may be set off while the animal is in a crowd. Projects involving bat bombs, dog bombs, and pigeon bombs have also been studied. Incidents: Afghanistan In 2009, Taliban insurgents strapped an improvised explosive device to a donkey and let the donkey loose a short way from a camp of the British Armed Forces in Helmand Province. In April 2013, in Kabul, a bomb attached to a donkey blew up in front of a police security post, killing a policeman and wounding three civilians. A government spokesman claimed insurgents were challenging the competence of the Afghan government prior to the 2014 withdrawal of the U.S. military. Incidents: Iraq On 21 November 2003, eight rockets were fired from donkey carts at the Iraqi oil ministry and two hotels in downtown Baghdad, injuring one man and causing some damage. Incidents: In 2004, a donkey in Ramadi was loaded with explosives and set off towards a US-run checkpoint. It exploded before it was able to injure or kill anyone. The incident, along with a number of similar incidents involving dogs, fueled fears of terrorist practices of using living animals as weapons, a change from an older practice of using the bodies of dead animals to hold explosives. The use of improvised explosive devices concealed in animal carcasses was also a common practice among the Iraqi insurgency. Incidents: Lebanon Malia Sufangi, a young Lebanese woman, was caught in the Security Zone in November 1985 with an explosive device mounted on a donkey, with which she had failed to carry out an attack. She claimed that she had been recruited and dispatched by Syrian Brigadier-General Ghazi Kanaan, who supplied the explosives and instructions on how the attack was to be carried out from his headquarters in the town of Anjer in the Bekaa Valley.
Incidents: United States In 1862, during the New Mexico Campaign of the American Civil War, a Confederate force approached the ford at Valverde, six miles north of Fort Craig, hoping to cut Union communications between the fort and their headquarters in Santa Fe. About midnight, Union Captain James Craydon tried to blow up a few rebel picket posts by sending mules loaded with barrels of fused gunpowder into the Confederate lines, but the faithful old army mules insisted on wandering back toward the Union camp before blowing to bits. Although the only casualties were two mules, the explosions stampeded a herd of Confederate beef cattle and horses into the Union lines, depriving the Confederate troops of some much-needed provisions and horses. In the Wall Street bombing of 1920, an incident thought to be related to the 1919 United States anarchist bombings, anarchists used a bomb carried by a horse-drawn cart. Incidents: West Bank and Gaza Strip June 25, 1995 – At approximately 11 a.m., a Palestinian rode a booby-trapped donkey cart to an Israeli army base west of Khan Yunis in the Gaza Strip and detonated it. The Palestinian and the donkey were killed, but no soldiers were wounded; three soldiers were treated for minor shock. Hamas claimed responsibility for the attack. Incidents: June 17, 2001 – A Palestinian man rode a bomb-laden donkey cart up to an Israeli position in the southern Gaza Strip and set off a small explosion. Israeli soldiers destroyed the cart, and no soldiers were wounded. The Palestinian man was captured by the soldiers. Incidents: January 26, 2003 – Palestinian fighters strapped a bomb to a donkey and then exploded it remotely on the road between Jerusalem and Gush Etzion. No humans were injured in the attack. PETA director Ingrid Newkirk wrote to PLO Chairman Yasser Arafat asking him to keep animals out of the conflict. PETA was criticized for not objecting to the killing of humans in that context.
Incidents: June 8, 2009 – Palestinian gunmen approached the Karni crossing between the Gaza Strip and Israel with several trucks and at least five horses loaded with explosive devices and mines. The gunmen fired on IDF troops who observed them, and at least four gunmen were killed in the ensuing battle. A previously unknown organization called "the army of Allah's supporters" (Jund Ansar Allah) claimed responsibility for the foiled attack. The IDF estimated that the gunmen had planned to kidnap an Israeli soldier. Incidents: May 25, 2010 – A small Syrian-backed militant group in the Gaza Strip blew up a donkey cart laden with explosives close to the border with Israel. According to a spokesman for the group, more than 200 kilograms of dynamite were heaped on the animal-drawn cart. The explosives were detonated several dozen meters from the border fence with Israel. The animal was killed in the blast, but no human injuries or damage were reported. Incidents: July 19, 2014 – Hamas militants attempted to attack Israeli troops in Gaza with a bomb-laden donkey. IDF forces operating in the Rafah area near the Gaza-Egypt border spotted the donkey suspiciously approaching their position and were forced to open fire at it, causing the explosives to detonate. Military: During World War II the U.S. investigated the use of "bat bombs", or bats carrying small incendiary bombs; these proved largely ineffective. During the same war, Project Pigeon (later Project Orcon, for "organic control") was American behaviorist B. F. Skinner's attempt to develop a pigeon-guided missile. The project was barely funded and was cancelled on 8 October 1944. At the same time the Soviet Union developed the "anti-tank dog" for use against German tanks.
The anti-tank dog project mostly failed, as the dogs would be spooked by the noises and gunfire, and would run under Russian tanks because they had been trained with diesel tanks, as opposed to the German tanks, which ran on petrol. The Imperial Japanese Army used dogs and other animals strapped with bombs to run into American lines during the battles of Iwo Jima and Okinawa. More recently, Iran purchased several dolphins, some of which were former Soviet military dolphins, along with other sea mammals and birds, in what some have alleged to be an attempt by Iran to develop kamikaze dolphins, intended to seek out and destroy submarines and enemy warships. However, the animals are today on display at the Kish Dolphin Park on Iran's resort island of Kish in the Persian Gulf. During the Cold War, the Soviet Navy trained dolphins to attach underwater explosives and beacons to ships and submarines at Object 825 GTS at Balaklava, Crimea.
**Ivo Babuška** Ivo Babuška: Ivo M. Babuška (22 March 1926 – 12 April 2023) was a Czech-American mathematician, noted for his studies of the finite element method and the proof of the Babuška–Lax–Milgram theorem in partial differential equations. One of the celebrated results in finite elements is the so-called Ladyzenskaja–Babuška–Brezzi (LBB) condition (also referred to in some literature as Banach–Nečas–Babuška (BNB)), which provides sufficient conditions for a stable mixed formulation. The LBB condition has guided mathematicians and engineers to develop state-of-the-art formulations for many technologically important problems like Darcy flow, Stokes flow, incompressible Navier–Stokes, and nearly incompressible elasticity. Babuška is also well known for his work on adaptive methods and the p- and hp-versions of the finite element method. He also developed the mathematical framework for the partition of unity methods. Ivo Babuška: Babuška was elected as a member of the National Academy of Engineering in 2005 for contributions to the theory and implementation of finite element methods for computer-based engineering analysis and design. Biography: Ivo Babuška was born on 22 March 1926, in Prague, the son of architect Milan Babuška (who designed the National Technical Museum in Prague) and his wife Marie. He studied civil engineering at the Czech Technical University in Prague, where he received the Dipl. Ing. in 1949. In 1951 he received the degree Dr. Tech.; his doctoral dissertation was supervised by Eduard Čech and Vladimir Knichal. From 1949 he worked at the Mathematical Institute of the Czechoslovak Academy of Sciences, where he later headed the Department of Partial Differential Equations. In 1955, he received a CSc. (= Ph.D.) in mathematics and in 1960 a DSc. in mathematics.
He was married to Renata, and they had two children, a daughter, Lenka, and a son, Vit. Babuška fled Communist Czechoslovakia in 1968 with barely more than he could carry, following a conference in Western Europe, and emigrated to the United States. After many years as a professor at the University of Maryland, he eventually moved to the University of Texas at Austin, where he spent many years at the Oden Institute for Computational Engineering and Sciences. He moved to New Mexico in 2020, following retirement in 2018 at the age of 92. Biography: Babuška died on 12 April 2023, at the age of 97. Work: Babuška worked in the fields of mathematics, applied mathematics, numerical methods, finite element methods, and computational mechanics. In 1968, he became a professor at the University of Maryland, College Park in the mathematics department, which is part of the University of Maryland College of Computer, Mathematical, and Natural Sciences. He retired in 1996 as a Distinguished University Professor. In 1989 he co-founded the company ESRD, Inc., which developed the StressCheck finite element software, putting into practice much of Babuška's research and contributions to the finite element method. After his time at the University of Maryland, he moved to the Institute for Computational Engineering and Sciences at the University of Texas at Austin, where he held the Robert B. Trull Chair in Engineering. Babuška published more than 300 papers in refereed journals, more than 70 papers in conference proceedings, and several books. He was an invited speaker at many major international conferences and a member of numerous editorial boards for scientific journals. In 2018 he retired as Professor Emeritus. Among his more than 30 doctoral students are Christoph Schwab and Michael Vogelius.
Honors: Babuška received many honors for his work, including five doctorates honoris causa, membership of the European Academy of Sciences (2003), Fellowship of SIAM and ICAM, the Czechoslovak State Prize for Mathematics, the Leroy P. Steele Prize (2012), the Birkhoff Prize (1994), the Humboldt Award of the Federal Republic of Germany, the Neuron Prize of the Czech Republic, Honorary Foreign Membership of the Czech Learned Society and the Bolzano Medal. In 2003, asteroid 36060 Babuška was named in his honor by the International Astronomical Union. In 2005, Babuška was awarded the Honorary Medal "De Scientia Et Humanitate Optime Meritis" and was elected to the National Academy of Engineering; in 2016 he received the ICAM Congress (Newton-Gauss) Medal. He was also a member of the Academy of Medicine, Engineering, and Sciences of Texas.
**Excursion** Excursion: An excursion is a trip by a group of people, usually made for leisure, education, or physical purposes. It is often an adjunct to a longer journey or visit to a place, sometimes for other (typically work-related) purposes. Public transportation companies issue reduced price excursion tickets to attract business of this type. Often these tickets are restricted to off-peak days or times for the destination concerned. Short excursions for education or for observations of natural phenomena are called field trips. One-day educational field studies are often made by classes as extracurricular exercises, e.g. to visit a natural or geographical feature. The term is also used for short military movements into foreign territory, without a formal announcement of war.
**Abarelix** Abarelix: Abarelix, sold under the brand name Plenaxis, is an injectable gonadotropin-releasing hormone antagonist (GnRH antagonist) which is marketed in Germany and the Netherlands. It is primarily used in oncology to reduce the amount of testosterone made in patients with advanced symptomatic prostate cancer for which no other treatment options are available. It was originally marketed by Praecis Pharmaceuticals as Plenaxis, and is now marketed by Speciality European Pharma in Germany after receiving a marketing authorization in 2005. The drug was introduced in the United States in 2003, but was discontinued there in May 2005 due to poor sales and a higher-than-expected incidence of severe allergic reactions. It remains marketed in Germany and the Netherlands, however.
**Probarbital** Probarbital: Probarbital (trade names Ipral, Vasalgin) is a barbiturate derivative invented in the 1920s. It has sedative, hypnotic and anticonvulsant properties.
**Circeaster** Circeaster: Circeaster is a genus of abyssal sea stars in the family Goniasteridae. Habitat and distribution: These sea stars have a flattened and broad pentagonal central disc, with five tapering arms. The marginal plates are thick and well delimited. They live between 320 and 3,000 meters deep, in the three main oceanic basins. Species list: According to the World Register of Marine Species: Circeaster americanus (A.H. Clark, 1916) Circeaster arandae Mah, 2006 Circeaster helenae Mah, 2006 Circeaster kristinae Mah, 2006 Circeaster loisetteae Mah, 2006 Circeaster magdalenae Koehler, 1909 Circeaster marcelli Koehler, 1909 Circeaster pullus Mah, 2006 Circeaster sandrae Mah, 2006
**Quantifier shift** Quantifier shift: A quantifier shift is a logical fallacy in which the quantifiers of a statement are erroneously transposed during the rewriting process. The change in the logical nature of the statement may not be obvious when it is stated in a natural language like English. Definition: The fallacious deduction is that: For every A, there is a B, such that C. Therefore, there is a B, such that for every A, C. ∀x∃yRxy⊢∃y∀xRxy However, the inverse switching: ∃y∀xRxy⊢∀x∃yRxy is logically valid. Examples: 1. Every person has a woman that is their mother. Therefore, there is a woman that is the mother of every person. ∀x∃y(Px→(Wy∧M(yx)))⊢∃y∀x(Px→(Wy∧M(yx))) It is fallacious to conclude that there is one woman who is the mother of all people. However, if the major premise ("every person has a woman that is their mother") is assumed to be true, then it is valid to conclude that there is some woman who is any given person's mother. 2. Everybody has something to believe in. Therefore, there is something that everybody believes in. ∀x∃yBxy⊢∃y∀xBxy It is fallacious to conclude that there is some particular concept to which everyone subscribes. It is valid to conclude that each person believes in some concept. But it is entirely possible that each person believes in a unique concept. 3. Every natural number n has a successor m=n+1, the smallest of all natural numbers that are greater than n. Therefore, there is a natural number m that is a successor to all natural numbers. ∀n∃mSnm⊢∃m∀nSnm It is fallacious to conclude that there is a single natural number that is the successor of every natural number.
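Both directions above can be checked mechanically on finite domains. The sketch below (helper names are my own, not from the article) encodes example 2, where person i believes only in concept i, and then brute-forces every relation on a two-element domain to confirm that ∃y∀xRxy always entails ∀x∃yRxy while the shifted direction fails:

```python
from itertools import product

def forall_exists(R, xs, ys):
    """∀x ∃y R(x, y): every x is related to at least one y."""
    return all(any(R(x, y) for y in ys) for x in xs)

def exists_forall(R, xs, ys):
    """∃y ∀x R(x, y): a single y is related to every x."""
    return any(all(R(x, y) for x in xs) for y in ys)

# Example 2 from the text: person i believes only in concept i.
people = concepts = range(3)
believes = lambda x, y: x == y
assert forall_exists(believes, people, concepts)      # the premise holds
assert not exists_forall(believes, people, concepts)  # the shifted conclusion fails

# The inverse switching is valid: over all 16 relations on a
# two-element domain, ∃y∀x never holds without ∀x∃y holding too.
D = range(2)
for bits in product([False, True], repeat=4):
    table = {(x, y): bits[2 * x + y] for x in D for y in D}
    if exists_forall(lambda x, y: table[(x, y)], D, D):
        assert forall_exists(lambda x, y: table[(x, y)], D, D)
```

The exhaustive loop is only a finite-domain sanity check, not a proof of the general entailment, but it makes the asymmetry between the two quantifier orders easy to see.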
**Chemtrail conspiracy theory** Chemtrail conspiracy theory: The chemtrail conspiracy theory is the erroneous belief that long-lasting condensation trails left in the sky by high-flying aircraft are actually "chemtrails" consisting of chemical or biological agents, sprayed for nefarious purposes undisclosed to the general public. Believers in this conspiracy theory say that while normal contrails dissipate relatively quickly, contrails that linger must contain additional substances. Those who subscribe to the theory speculate that the purpose of the chemical release may be solar radiation management, weather modification, psychological manipulation, human population control, biological or chemical warfare, or testing of biological or chemical agents on a population, and that the trails are causing respiratory illnesses and other health problems. The claim has been dismissed by the scientific community. There is no evidence that purported chemtrails differ from normal water-based contrails routinely left by high-flying aircraft under certain atmospheric conditions. Although proponents have tried to prove that chemical spraying occurs, their analyses have been flawed or based on misconceptions. Because of the persistence of the conspiracy theory and questions about government involvement, scientists and government agencies around the world have repeatedly explained that the supposed chemtrails are in fact normal contrails. The term chemtrail blends the words chemical and trail, just as contrail blends condensation and trail. History: Chemtrail conspiracy theories began to circulate after the United States Air Force (USAF) published a 1996 report about weather modification. Following the report, in the late 1990s the USAF was accused of "spraying the U.S. population with mysterious substances" from aircraft "generating unusual contrail patterns."
The theories were posted on Internet forums by people including Richard Finke and William Thomas and were among many conspiracy theories popularized by late-night radio host Art Bell, starting in 1999. As the chemtrail conspiracy theory spread, federal officials were flooded with angry calls and letters. A multi-agency response attempting to dispel the rumors was published in 2000 by the Environmental Protection Agency (EPA), the Federal Aviation Administration (FAA), the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA). Many chemtrail believers interpreted agency fact sheets as further evidence of the existence of a government cover-up. The EPA refreshed its posting in 2015. In the early 2000s the USAF released an undated fact sheet that stated the conspiracy theories were a hoax fueled in part by citations to a 1996 strategy paper drafted within their Air University titled Weather as a Force Multiplier: Owning the Weather in 2025. The paper was presented in response to a military directive to outline a future strategic weather modification system for the purpose of maintaining the United States' military dominance in the year 2025, and identified as "fictional representations of future situations/scenarios." The USAF further clarified in 2005 that the paper "does not reflect current military policy, practice, or capability", and that it is "not conducting any weather modification experiments or programs and has no plans to do so in the future." Additionally, the USAF states that the "'chemtrail' hoax has been investigated and refuted by many established and accredited universities, scientific organizations, and major media publications." The conspiracy theories are seldom covered by the mainstream media, and when they are, they are usually cast as an example of anti-government paranoia.
For example, in 2013, when it was made public that the CIA, NASA, and NOAA intended to provide funds to the National Academy of Sciences to conduct research into methods to counteract global warming with geoengineering, an article in the International Business Times anticipated that "the idea of any government agency looking at ways to control, or manipulate, the weather will be met with scrutiny and fears of a malign conspiracies" [sic], and mentioned chemtrail conspiracy theories as an example. Description: Proponents of the chemtrail conspiracy theory find support for their theories in their interpretations of sky phenomena, videos posted to the Internet, and reports about government programs; they also have certain beliefs about the goals of the alleged conspiracy and the effects of its alleged efforts and generally take certain actions based on those beliefs. Description: Interpretation of evidence Proponents of the chemtrail conspiracy theory say that chemtrails can be distinguished from contrails by their long duration, asserting that the chemtrails are those trails left by aircraft that persist for as much as a half-day or transform into cirrus-like clouds. The proponents claim that after 1995, contrails had a different chemical composition and lasted a lot longer in the sky; proponents fail to acknowledge evidence of long-lasting contrails shown in World War II–era photographs. Proponents characterize contrails as streams that persist for hours and that, with their criss-cross, grid-like, or parallel stripe patterns, eventually blend to form large clouds. Proponents view the presence of visible color spectra in the streams, unusual concentrations of sky tracks in a single area, or lingering tracks left by unmarked or military airplanes flying atypical altitudes or locations as markers of chemtrails. Photographs of barrels installed in the passenger space of an aircraft for flight test purposes have been claimed to show aerosol dispersion systems.
The real purpose of the barrels is to simulate the weight of passengers or cargo. The barrels are filled with water, and the water can be pumped from barrel to barrel in order to test different centers of gravity while the aircraft is in flight. Former CIA employee and whistleblower Edward Snowden, interviewed on The Joe Rogan Experience, stated that he had searched through all the secret information of the US government for evidence about (aliens and) chemtrails. According to a CNN report about the webcast he said: "In case you were wondering: ... Chemtrails are not a thing", and: "I had ridiculous access to the networks of the NSA, the CIA, the military, all these groups. I couldn't find anything". Description: Jim Marrs has cited a 2007 Louisiana television station report as evidence for chemtrails. In the report, the air underneath a crosshatch of supposed chemtrails was measured and apparently found to contain unsafe levels of barium: at 6.8 parts per million, three times the US nationally recommended limit. A subsequent analysis of the footage showed, however, that the equipment had been misused, and the reading exaggerated by a factor of 100—the true level of barium measured was both usual and safe. In May 2014, a video that went viral showed a commercial passenger airplane landing on a foggy night, which was described as emitting chemtrails. Discovery News pointed out that passengers sitting behind the wings would clearly see anything being sprayed, which would defeat any intent to be secretive, and that the purported chemical emission was normal air disruption caused by the wings, visible due to the fog. In October 2014, Englishman Chris Bovey filmed a video of a plane jettisoning fuel on a flight from Buenos Aires to London, which had to dump fuel to lighten its load for an emergency landing in São Paulo. The clip went viral on Facebook, with over three million views and more than 52,000 shares, cited as evidence of chemtrails.
He later disclosed that the video post was done as a prank, and consequently, he was subjected to some vitriolic abuse and threats from several conspiracy believers. In some accounts, the chemicals are described as barium and aluminum salts, polymer fibers, thorium, or silicon carbide. Chemtrail believers interpret the existence of cloud seeding programs and research into climate engineering as evidence for the conspiracy. Description: Beliefs Various versions of the chemtrail conspiracy theory have been propagated via the Internet and radio programs. There are websites dedicated to the conspiracy theory, and it is particularly favored by far-right groups because it fits well with a deep suspicion of the government. A 2014 review of 20 chemtrail websites found that believers appeal to science in some of their arguments but do not believe what academic or government-employed scientists say; scientists and federal agencies have consistently denied that chemtrails exist, explaining the sky tracks are simply persistent contrails. The review also found that believers generally hold that chemtrails are evidence of a global conspiracy; they allege various goals which include profit (for example, manipulating futures prices, or making people sick to benefit drug companies), population control, or weapons testing (use of weather as a weapon, or testing bioweapons). One of these ideas is that clouds are being seeded with electrically conductive materials as part of a massive electromagnetic superweapons program based around the High Frequency Active Auroral Research Program (HAARP). Believers say chemtrails are toxic; the 2014 review found that they generally hold that every person is under attack and often express fear, anxiety, sadness, and anger about this. A 2011 study of people from the US, Canada, and the UK found that 2.6% of the sample believed entirely in the conspiracy theory, and 14% believed it partially.
An analysis of responses given to the 2016 Cooperative Congressional Election Study showed that 9% of the 36,000 respondents believed it was "completely true" that "...the government has a secret program that uses airplanes to put harmful chemicals into the air..." while a further 19% believed this was "somewhat true". Chemtrail conspiracy theorists often describe their experience as being akin to a religious conversion experience. When they "wake up" and become "aware" of chemtrails, the experience motivates them to advocacy of various forms. For example, they often attend events and conferences on geoengineering, and have sent threats to academics working in the geoengineering field. Some chemtrail believers adopt the notions of Wilhelm Reich (1897–1957) who devised a "cloudbuster" device from pipework. Reich claimed this device would influence weather and remove harmful energy from the atmosphere. Some chemtrail believers have built cloudbusters filled with crystals and metal filings, which are pointed at the sky in an attempt to clear it of chemtrails. Chemtrail believers sometimes gather samples and have them tested, rather than rely on reports from government or academic laboratories, but their experiments are usually flawed; for example, collecting samples in jars with metal lids contaminates the sample and is not done in scientific testing. Description: Incidents In 2001, in response to requests from constituents, US Congressman Dennis Kucinich introduced (but did not author) H.R. 2977 (107th), the Space Preservation Act of 2001 that would have permanently prohibited the basing of weapons in space, listing chemtrails as one of a number of "exotic weapons" that would be banned. Proponents have interpreted this explicit reference to chemtrails as official government acknowledgement of their existence. Skeptics note that the bill in question also mentions "extraterrestrial weapons" and "environmental, climate, or tectonic weapons".
The bill received an unfavorable evaluation from the United States Department of Defense and died in committee, with no mention of chemtrails appearing in the text of any of the three subsequent failed attempts by Kucinich to enact a Space Preservation Act. Description: In 2003, in a response to a petition by concerned Canadian citizens regarding "chemicals used in aerial sprayings are adversely affecting the health of Canadians", the Government House Leader responded by stating, "There is no substantiated evidence, scientific or otherwise, to support the allegation that there is high altitude spraying conducted in Canadian airspace. The term 'chemtrails' is a popularised expression, and there is no scientific evidence to support their existence." The House leader went on to say that "it is our belief that the petitioners are seeing regular airplane condensation trails or contrails." In the United Kingdom, in 2005 Elliot Morley, a Minister of State for the Department for Environment, Food and Rural Affairs was asked by David Drew, the Labour Party Member of Parliament for Stroud, "what research [the] Department has undertaken into the polluting effects of chemtrails for aircraft", and responded that "the Department is not researching into chemtrails from aircraft as they are not scientifically recognised phenomena", and that work was being conducted to understand "how contrails are formed and what effects they have on the atmosphere." During the 2011–2017 California drought, some local politicians in Shasta County reacted credulously to conspiracy theories suggesting that the unusual weather conditions had been caused by weather-modifying chemtrails. Contrails: Contrails, or condensation trails, are "streaks of condensed water vapor created in the air by an airplane or rocket at high altitudes". Fossil fuel combustion (as in piston and jet engines) produces carbon dioxide and water vapor. At high altitudes, the air is very cold.
Hot humid air from the engine exhaust mixes with the colder surrounding air, causing the water vapor to condense into droplets or ice crystals that form visible clouds. The rate at which contrails dissipate is entirely dependent on weather conditions. If the atmosphere is near saturation, the contrail may exist for some time. Conversely, if the atmosphere is dry, the contrail will dissipate quickly. Contrails: It is well established by atmospheric scientists that contrails can persist for hours, and that it is normal for them to spread out into cirrus sheets. The different-sized ice crystals in contrails descend at different rates, which spreads the contrail vertically. Then the differential in wind speeds between altitudes (wind shear) results in the horizontal spreading of the contrail. This mechanism is similar to the formation of cirrus uncinus clouds. Contrails between 25,000 and 40,000 feet (7,600 and 12,200 m) can often merge into an "almost solid" interlaced sheet. Contrails can have a lateral spread of several kilometers, and given sufficient air traffic, it is possible for contrails to create an entirely overcast sky that increases the ice budget of individual contrails and persists for hours. Contrails: Experts on atmospheric phenomena say that the characteristics attributed to chemtrails are simply features of contrails responding to diverse conditions in terms of sunlight, temperature, horizontal and vertical wind shear, and humidity levels present at the aircraft's altitude. In the US, the gridlike nature of the National Airspace System's flight lanes tends to cause crosshatched contrails, and in general it is hard to discern from the ground whether overlapping contrails are at similar altitudes or not. 
The jointly published fact sheet produced by NASA, the EPA, the FAA, and NOAA in 2000 in response to alarms over chemtrails details the science of contrail formation, and outlines both the known and potential impacts contrails have on temperature and climate. The USAF produced a fact sheet that described these contrail phenomena as observed and analyzed since at least 1953. It also rebutted chemtrail theories more directly by identifying the theories as a hoax and disproving the existence of chemtrails. Patrick Minnis, an atmospheric scientist with NASA's Langley Research Center in Hampton, Virginia, has said that logic does not dissuade most chemtrail proponents: "If you try to pin these people down and refute things, it's, 'Well, you're just part of the conspiracy'", he said. Analysis of the use of commercial aircraft tracks for climate engineering has shown them to be generally unsuitable. Astronomer Bob Berman has characterized the chemtrail conspiracy theory as a classic example of failure to apply Occam's razor, writing in 2009 that instead of adopting the long-established "simple solution" that the trails consist of frozen water vapour, "the conspiracy web sites think the phenomenon started only a decade ago and involves an evil scheme in which 40,000 commercial pilots and air traffic controllers are in on the plot to poison their own children." A 2016 survey of 77 atmospheric scientists concluded that "76 out of 77 (98.7%) of scientists that took part in this study said they had not encountered evidence of a [secret large-scale atmospheric program] (SLAP), and that the data cited as evidence could be explained through other factors, such as typical contrail formation and poor data sampling instructions presented on SLAP websites."
**Math circle** Math circle: A math circle is a learning space where participants engage in the depths and intricacies of mathematical thinking, propagate the culture of doing mathematics, and create knowledge. To reach these goals, participants partake in problem-solving, mathematical modeling, the practice of art, and philosophical discourse. Some circles involve competition, while others do not. Characteristics: Math circles can have a variety of styles. Some are very informal, with the learning proceeding through games, stories, or hands-on activities. Others are more traditional enrichment classes but without formal examinations. Some have a strong emphasis on preparing for Olympiad competitions; some avoid competition as much as possible. Models can use any combination of these techniques, depending on the audience, the mathematician, and the environment of the circle. Athletes have sports teams through which to deepen their involvement with sports; math circles can play a similar role for kids who like to think. Two features all math circles have in common are (1) that they are composed of students who want to be there: they either like math or want to like math, and (2) that they give students a social context in which to enjoy mathematics. History: Mathematical enrichment activities in the United States have been around since sometime before 1977, in the form of residential summer programs, math contests, and local school-based programs. The concept of a math circle, on the other hand, with its emphasis on convening professional mathematicians and secondary school students regularly to solve problems, appeared in the U.S. in 1994 with Robert and Ellen Kaplan at Harvard University. This form of mathematical outreach made its way to the U.S. most directly from the former Soviet Union and present-day Russia and Bulgaria. They first appeared in the Soviet Union during the 1930s; they have existed in Bulgaria since sometime before 1907.
The tradition arrived in the U.S. with émigrés who had received their inspiration from math circles as teenagers. Many of them successfully climbed the academic ladder to secure positions within universities, and a few pioneers among them decided to initiate math circles within their communities to preserve the tradition which had been so pivotal in their own formation as mathematicians. These days, math circles frequently partner with other mathematical education organizations, such as CYFEMAT: The International Network of Math Circles and Festivals, the Julia Robinson Mathematics Festival, and the Mandelbrot Competition. Content choices: Decisions about content are difficult for newly forming math circles and clubs, or for parents seeking groups for their children. Content choices: Project-based clubs may spend a few meetings building origami, developing a math trail in their town, or programming a math-like computer game together. Math-rich projects may be artistic, exploratory, applied to sciences, executable (software-based), business-oriented, or directed at fundamental contributions to local communities. Museums, cultural and business clubs, tech groups, online networks, artists/musicians/actors active in the community, and other individual professionals can make math projects especially real and meaningful. Increasingly, math clubs invite remote participation of active people (authors, community leaders, professionals) through webinars and teleconferencing software. Content choices: Problem-solving circles get together to pose and solve interesting, deep, meaningful math problems. Problems considered "good" are easy to pose, challenging to solve, require connections among several concepts and techniques, and lead to significant math ideas. Best problem-solving practices include meta-cognition (managing memory and attention), grouping problems by type and conceptual connections (e.g.
"river crossing problems"), moving between more general and abstract problems and particular, simpler examples, and collaboration with other club members, with current online communities, and with past mathematicians through the media they contributed to the culture. Content choices: 'Guided exploration circles use self-discovery and the Socratic method to probe deep questions. Robert & Ellen Kaplan, in their book Out of the Labyrinth: Setting Mathematics Free, make a case for this format describing the non-profit Cambridge/Boston Math Circle they founded in 1994 at the Harvard University. The book describes the classroom, organizational and practical issues the Kaplans faced in founding their Math Circle. The meetings encourage a free discussion of ideas; while the content is mathematically rigorous, the atmosphere is friendly and relaxed. The philosophy of the teachers is, "What you have been obliged to discover by yourself leaves a path in your mind which you can use again when the need arises" (G. C. Lichtenberg). Children are encouraged to ask exploratory questions. Are there numbers between numbers? What's geometry like with no parallel lines? Can you tile a square with squares all of the different sizes? Research mathematicians and connecting students with them can be a focus of math circles. Students in these circles appreciate and start to attain a very special way of thinking in research mathematics, such as generalizing problems, continue asking deeper questions, seeing similarities across different examples and so on.Topic-centered clubs follow math themes such as clock arithmetic, fractals, or linearity. Club members write and read essays, pose and solve problems, create and study definitions, build interesting example spaces, and investigate applications of their current topic. There are lists of time-tested, classic math club topics, especially rich in connections and accessible to a wide range of abilities. 
The plus of using a classic topic is the variety of resources available from the past; however, bringing a relatively obscure or new topic to the attention of the club and the global community is very rewarding, as well. Content choices: Applied math clubs center on a field other than mathematics, such as math for thespians, computer programming math, or musical math. Such clubs need strong leadership both for the math parts and for the other field part. Such clubs can meet at an artists' studio, at a game design company, at a theater or another authentic professional setting. More examples of fruitful applied math pathways include history, storytelling, art, inventing and tinkering, toy and game design, robotics, origami, and natural sciences. Most circles and clubs mix some features of the above types. For example, the Metroplex Math Circle has a combination of problem-solving and research, and the New York Math Circle is some combination of a problem-solving circle and a topic-centered club, with vestiges of a research circle. Content choices: One can expect problem-solving groups to attract kids already strong in math and confident in their math abilities. On the other hand, math anxious kids will be more likely to try project-based or applied clubs. Topic-centered clubs typically work with kids who can all work at about the same level. The decision about the type of the club strongly depends on your target audience. Competition decisions: Math competitions involve comparing speed, depth, or accuracy of math work among several people or groups. Traditionally, European competitions are more depth-oriented, and Asian and North American competitions are more speed-oriented, especially for younger children. The vast majority of math competitions involve solving closed ended (known answers) problems, however, there are also essay, project and software competitions.
As with all tests requiring limited time, the problems focus more on the empirical accuracy and foundations of mathematics work rather than an extension of basic knowledge. More often than not, competition differs entirely from curricular mathematics in requiring creativity in elementary applications—so that although there may be closed answers, it takes significant extension of mathematical creativity in order to successfully achieve the ends. Competition decisions: For people like Robert and Ellen Kaplan, competition carries with it a negative connotation and corollary of greed for victory rather than an appreciation of mathematics. However, those who run math circles centering mostly on competition rather than seminars and lessons attest that this is a large assumption. Rather, participants grow in their appreciation of math via math competitions such as the AMC, AIME, USAMO, and ARML. Competition decisions: Some math circles are completely devoted to preparing teams or individuals for particular competitions. The biggest plus of the competition framework for a circle organizer is the ready-made set of well-defined goals. The competition provides a time and task management structure, and easily defined progress tracking. This is also the biggest minus of competition-based mathematics, because defining goals and dealing with complexity and chaos are important in all real-world endeavors. Competitive math circles attract students who are already strong and confident in mathematics, but also welcome those who wish to engage in the mathematics competitive world. Beyond the age of ten or so, they also attract significantly more males than females, and in some countries, their racial composition is disproportionate to the country's demographic. 
Competition decisions: Collaborative math clubs are more suitable for kids who are anxious about mathematics, need "math therapy" because of painful past experiences, or want to have more casual and artistic relationships with mathematics. A playgroup or a coop that does several activities together, including a math club, usually chooses collaborative or hybrid models that are more likely to accommodate all members already in the group. Competition decisions: Most math circles and clubs combine some competitive and some collaborative activities. For example, many math circles, while largely centering on competitions, host seasonal tournaments and infuse their competition seminars with fun mathematical lessons.
**MIR4761** MIR4761: MicroRNA 4761 is a microRNA that in humans is encoded by the MIR4761 gene. Function: microRNAs (miRNAs) are short (20-24 nt) non-coding RNAs that are involved in post-transcriptional regulation of gene expression in multicellular organisms by affecting both the stability and translation of mRNAs. miRNAs are transcribed by RNA polymerase II as part of capped and polyadenylated primary transcripts (pri-miRNAs) that can be either protein-coding or non-coding. The primary transcript is cleaved by the Drosha ribonuclease III enzyme to produce an approximately 70-nt stem-loop precursor miRNA (pre-miRNA), which is further cleaved by the cytoplasmic Dicer ribonuclease to generate the mature miRNA and antisense miRNA star (miRNA*) products. The mature miRNA is incorporated into an RNA-induced silencing complex (RISC), which recognizes target mRNAs through imperfect base pairing with the miRNA and most commonly results in translational inhibition or destabilization of the target mRNA. The RefSeq represents the predicted microRNA stem-loop.
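The target-recognition step described above often hinges on the miRNA "seed" (nucleotides 2-8 of the mature miRNA) pairing with a complementary site in the target mRNA. The sketch below is a deliberately simplified illustration of that idea, not a real target-prediction algorithm: it looks only for perfect Watson-Crick matches to the seed, ignores wobble pairing and site context, and uses the well-known let-7a sequence with a made-up target, since the mature MIR4761 sequence is not given in the text.

```python
# Simplified miRNA seed matching: find sites in an mRNA that are
# exactly complementary to the miRNA seed (positions 2-8, 1-based).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, mrna):
    """Return start offsets in mrna matching the reverse complement of the seed."""
    seed = mirna[1:8]                                      # nucleotides 2-8
    site = "".join(COMPLEMENT[nt] for nt in reversed(seed))  # reverse complement
    return [i for i in range(len(mrna) - len(site) + 1)
            if mrna[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"     # let-7a, used here only as an example
utr   = "AAACUACCUCAGGGCUACCUCA"     # hypothetical 3'UTR fragment
sites = seed_sites(mirna, utr)       # finds two seed-complementary sites
```

Real prediction tools add conservation, thermodynamic stability, and tolerance for imperfect pairing on top of this basic seed match.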
**Critical system** Critical system: A critical system is a system which must be highly reliable and retain this reliability as it evolves without incurring prohibitive costs. There are four types of critical systems: safety critical, mission critical, business critical, and security critical. Description: For such systems, trusted methods and techniques must be used for development. Consequently, critical systems are usually developed using well-tested techniques rather than newer techniques that have not been subject to extensive practical experience. Developers of critical systems are naturally conservative, preferring to use older techniques whose strengths and weaknesses are understood, rather than new techniques which may appear to be better, but whose long-term problems are unknown. Expensive software engineering techniques that are not cost-effective for non-critical systems may sometimes be used for critical systems development. For example, formal mathematical methods of software development have been successfully used for safety and security critical systems. One reason why these formal methods are used is that it helps reduce the amount of testing required. For critical systems, the costs of verification and validation are usually very high—more than 50% of the total system development costs. Classification: A critical system is distinguished by the consequences associated with system or function failure. Likewise, critical systems are further distinguished between fail-operational and fail safe systems, according to the tolerance they must exhibit to failures: Fail-operational — typically required to operate not only in nominal conditions (expected), but also in degraded situations when some parts are not working properly. For example, airplanes are fail-operational because they must be able to fly even if some components fail. Classification: Fail-safe — must safely shut down in case of single or multiple failures.
Trains are fail-safe systems because stopping a train is typically sufficient to put it into a safe state. Safety critical: Safety-critical systems deal with scenarios that may lead to loss of life, serious personal injury, or damage to the natural environment. Examples of safety-critical systems are the control system for a chemical manufacturing plant, an aircraft, the controller of an unmanned metro train system, the controller of a nuclear plant, etc. Mission critical: Mission-critical systems are made to avoid the inability to complete the overall system or project objectives, or one of the goals for which the system was designed. Examples of mission-critical systems are a navigational system for a spacecraft, software controlling the baggage handling system of an airport, etc. Business critical: Business-critical systems are built to avoid significant tangible or intangible economic costs, e.g., loss of business or damage to reputation, often due to the interruption of service caused by the system being unusable. Examples of business-critical systems are the customer accounting system in a bank, a stock-trading system, the ERP system of a company, an Internet search engine, etc. Security critical: Security-critical systems deal with the loss of sensitive data through theft or accidental loss.
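The distinction between the two failure tolerances can be sketched in code. The sketch below is purely illustrative (the controller classes, modes and redundancy rule are invented for this example, not taken from any real rail or avionics system): a fail-safe controller drives itself to the safe state on any fault, while a fail-operational one keeps operating on redundant units and degrades gracefully.

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    DEGRADED = auto()   # still operating, with reduced margin
    SAFE = auto()       # shut down into a safe state

class FailSafeTrainBrake:
    """Fail-safe: any detected fault forces the safe state (the train stops)."""
    def __init__(self):
        self.mode = Mode.NOMINAL

    def report(self, faults: int) -> Mode:
        if faults > 0:
            self.mode = Mode.SAFE   # stopping is sufficient to be safe
        return self.mode

class FailOperationalAutopilot:
    """Fail-operational: redundant units keep the system flying despite faults."""
    def __init__(self, redundant_units: int = 3):
        self.working = redundant_units

    def report(self, faults: int) -> Mode:
        self.working -= faults
        if self.working >= 2:
            return Mode.NOMINAL     # full redundancy margin remains
        if self.working == 1:
            return Mode.DEGRADED    # keep operating on the last unit
        return Mode.SAFE            # nothing left: fall back to the safest action

# One fault stops the train, but the autopilot only degrades step by step.
brake_mode = FailSafeTrainBrake().report(1)
autopilot = FailOperationalAutopilot(redundant_units=3)
autopilot_modes = [autopilot.report(1) for _ in range(3)]
```

The design point is that the fail-safe controller needs no redundancy at all, because its safe state is always reachable, whereas the fail-operational one buys continued service with redundant hardware.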
**Data technology** Data technology: Data technology (sometimes shortened to DataTech or DT) is the technology connected to areas such as martech or adtech. The data technology sector includes solutions for data management, and products or services that are based on data generated by both humans and machines. DataTech is an emerging industry that uses Artificial Intelligence, Big Data analysis and Machine Learning algorithms to improve business activities in various sectors, such as digital marketing or business analysis (e.g. predictive analytics). Key areas: Data technology has been used to manage big data sets, build solutions for data management, and integrate data from various sources to discover new business or analytical insights in the collected information. The growing global volume of generated data (forecast to reach 163 zettabytes in 2025) drives spending on technologies that help control data assets. The big data market is expected to reach $156.72 billion by 2026, and global spending on data, including data technologies, in digital marketing reached $26.0 billion in 2019. Data technologies are developed to help manage data generated by humans or by machines, which are forecast to number 200 billion by 2020. Data technologies aim to manage growing data streams, extract valuable insights from data, and find ways to integrate the most important data sources for companies and organizations. The key areas for the DataTech sector are therefore: Data Management Technologies - technologies and platforms for managing growing sets of data, such as data generated by customers (1st-, 2nd- and 3rd-party data). Common platforms for managing data are the Data Management Platform and the Customer Data Platform. Data Integration - services that match the data from two or more sources to get more information about stored data.
If a company collects user data in a Customer-relationship management system, it can enrich it with data from external sources to create a 360-degree customer view (by integrating data, the company will know, e.g., the interests, demography and intentions of the users in its databases). Data Consulting - services based on analysing customer data and discovering insights in big data sets, using Machine Learning algorithms to find useful information in chaotic data. Technologies for the AdTech sector - products and services that support the digital marketing environment, including SSPs, Demand-side platforms and services used for targeting the right group in online campaigns. Building a strategic data ecosystem - services that allow an organization to build a data ecosystem by identifying and choosing the right data sources, integrating data and preparing adequate analytical algorithms to discover new insights about customers. Internet of Things - products and services that help store and manage data generated by machines.
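The data-integration step described here, matching CRM records against an external source on a shared key to approximate a richer customer view, can be sketched in a few lines. Every field name and value below is a hypothetical example, not a real platform's schema:

```python
def enrich(crm_rows, external_by_id, key="customer_id"):
    """Return CRM rows enriched with any matching external attributes.

    Rows without a match are passed through unchanged, so the first-party
    data stays complete even when the external source only covers part of it.
    """
    enriched = []
    for row in crm_rows:
        merged = dict(row)                              # do not mutate the input
        merged.update(external_by_id.get(row[key], {}))
        enriched.append(merged)
    return enriched

# Hypothetical first-party CRM data and an external (2nd/3rd-party) source.
crm = [
    {"customer_id": 1, "name": "Ada"},
    {"customer_id": 2, "name": "Grace"},
]
external = {1: {"interests": ["cycling"], "age_band": "25-34"}}

view = enrich(crm, external)
```

Real integration platforms add identity resolution, deduplication and consent handling on top of this basic keyed join.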
**Trinification** Trinification: In physics, the trinification model is a Grand Unified Theory proposed by Alvaro De Rújula, Howard Georgi and Sheldon Glashow in 1984. Details: It states that the gauge group is either SU(3)_C × SU(3)_L × SU(3)_R or [SU(3)_C × SU(3)_L × SU(3)_R]/Z_3, and that the fermions form three families, each consisting of the representations Q = (3, 3̄, 1), Q^c = (3̄, 1, 3), and L = (1, 3, 3̄). The L includes a hypothetical right-handed neutrino, which may account for observed neutrino masses (see neutrino oscillations), and a similar sterile "flavon." There is also a (1, 3, 3̄) and perhaps also a (1, 3̄, 3) scalar field, called the Higgs field, which acquires a vacuum expectation value. This results in a spontaneous symmetry breaking from SU(3)_L × SU(3)_R to [SU(2) × U(1)]/Z_2. The fermions branch (see restricted representation) as
(3, 3̄, 1) → (3, 2)_{1/6} ⊕ (3, 1)_{−1/3},
(3̄, 1, 3) → 2(3̄, 1)_{1/3} ⊕ (3̄, 1)_{−2/3},
(1, 3, 3̄) → 2(1, 2)_{−1/2} ⊕ (1, 2)_{1/2} ⊕ 2(1, 1)_0 ⊕ (1, 1)_1,
and the gauge bosons as
(8, 1, 1) → (8, 1)_0,
(1, 8, 1) → (1, 3)_0 ⊕ (1, 2)_{1/2} ⊕ (1, 2)_{−1/2} ⊕ (1, 1)_0,
(1, 1, 8) → 4(1, 1)_0 ⊕ 2(1, 1)_1 ⊕ 2(1, 1)_{−1}.
Note that there are two Majorana neutrinos per generation (which is consistent with neutrino oscillations). Also, each generation has a pair of triplets (3, 1)_{−1/3} and (3̄, 1)_{1/3}, and doublets (1, 2)_{1/2} and (1, 2)_{−1/2}, which decouple at the GUT breaking scale due to the couplings (1, 3, 3̄)_H (3, 3̄, 1)(3̄, 1, 3) and (1, 3, 3̄)_H (1, 3, 3̄)(1, 3, 3̄). Note that calling representations things like (3, 3̄, 1) and (8, 1, 1) is purely a physicist's convention, not a mathematician's (where representations are labelled either by Young tableaux or by Dynkin diagrams with numbers on their vertices), but it is standard among GUT theorists. Details: Since the homotopy group
π_2( (SU(3) × SU(3)) / ([SU(2) × U(1)]/Z_2) ) = Z,
this model predicts 't Hooft–Polyakov magnetic monopoles. Trinification is a maximal subalgebra of E6, whose matter representation 27 has exactly the same representation content and unifies the (3, 3̄, 1) ⊕ (3̄, 1, 3) ⊕ (1, 3, 3̄) fields. E6 adds 54 gauge bosons: 30 it shares with SO(10), and the other 24 complete its 16 ⊕ 16̄.
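As a consistency check on the boson counting in the last sentence (an added back-of-the-envelope argument, not from the source), the dimensions work out as follows, using the standard decomposition of the E6 adjoint under SO(10) × U(1):

```latex
% E6 relative to trinification:
\dim E_6 = 78, \qquad \dim SU(3)^3 = 3 \times 8 = 24, \qquad 78 - 24 = 54.
% E6 adjoint under SO(10) x U(1):
78 = 45 \oplus 16 \oplus \overline{16} \oplus 1.
% Trinification and SO(10) share SU(3)_C x SU(2)_L x SU(2)_R x U(1),
% with 8 + 3 + 3 + 1 = 15 generators, so SO(10) supplies
45 - 15 = 30
% of the extra bosons, and the remaining
54 - 30 = 24
% lie in the 16 \oplus \overline{16} (plus the singlet).
```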
**Creatorrhea** Creatorrhea: Creatorrhea is the abnormal excretion of muscle fibres in the feces. The term is formed from the prefix creat- or creato-, from Greek kreas meaning "flesh", and the suffix -rrhea (or -rrhoea), from Greek -rrhoia meaning "flow", hence discharge. Digestion of food is achieved through a mixture of mechanical and chemical processes. Failure to produce or secrete the chemicals known as digestive enzymes can lead to a failure to digest or break down specific components of ingested food. Where there is a failure to produce, release, or convert trypsinogen (an inactive enzyme precursor, or zymogen), muscle fibres are not properly digested and are therefore released in the feces. Trypsinogen is produced in the pancreas and released into the alimentary canal, where it is converted to the active enzyme trypsin. Inflammation of the pancreas (pancreatitis) can therefore precipitate this condition, as can cystic fibrosis, which also affects the production of digestive enzymes.
**FFADO** FFADO: Free FireWire Audio Drivers (FFADO) is a project to provide open-source drivers for FireWire sound interfaces on Linux. The project began as FreeBoB, a driver specifically for FireWire audio devices based on technology made by BridgeCo, which use an interface named BeBoB. The current version allows such devices to be accessed via the JACK Audio Connection Kit (JACK). Following a presentation of a paper at the 2007 Linux Audio Conference outlining the future of the project, it was announced on March 26, 2007 that the project would be renamed FFADO, as the drivers were being rewritten to include support for other FireWire audio chipsets.
**ATT&CK** ATT&CK: The Adversarial Tactics, Techniques, and Common Knowledge framework, or MITRE ATT&CK, is a guideline for classifying and describing cyberattacks and intrusions. It was created by the Mitre Corporation and released in 2013. The framework consists of 14 tactic categories, each representing a "technical objective" of an adversary; examples include privilege escalation and command and control. These categories are then broken down further into specific techniques and sub-techniques. The framework is an alternative to the Cyber Kill Chain developed by Lockheed Martin.
**Diffuse series** Diffuse series: The diffuse series is a series of spectral lines in the atomic emission spectrum caused when electrons jump between the lowest p orbital and d orbitals of an atom. The total orbital angular momentum changes between 1 and 2. The spectral lines include some in visible light and may extend into the ultraviolet or near infrared. The lines get closer and closer together as the frequency increases, never exceeding the series limit. The diffuse series was important in the development of the understanding of electron shells and subshells in atoms, and it has given the letter d to the d atomic orbital or subshell. The diffuse series has wave numbers given by a Rydberg-type formula with m = 2, 3, 4, ... The series is caused by transitions from the lowest P state to higher-energy D orbitals. One terminology to identify the lines is 1P-mD, but note that 1P just means the lowest P state in the valence shell of an atom; the modern designation would start at 2P, and is larger for higher-atomic-numbered atoms. The terms can have different designations: mD for single-line systems, mδ for doublets and md for triplets. Since the electron in the D state is not the lowest energy level for the alkali atom (the S is), the diffuse series does not show up as absorption in a cool gas; it does, however, show up as emission lines. The Rydberg correction is largest for the S term, as its electron penetrates the inner core of electrons more. The limit for the series corresponds to electron emission, where the electron has so much energy that it escapes the atom. In alkali metals the P terms are split into 2P3/2 and 2P1/2. This causes the spectral lines to be doublets, with a constant spacing between the two parts of the double line. This splitting is called fine structure. The splitting is larger for atoms with higher atomic number, and it decreases towards the series limit. Another splitting occurs on the redder line of the doublet.
This is because of splitting in the D level into nd 2D3/2 and nd 2D5/2. The splitting of the D level is smaller than that of the P level, and it reduces as the series limit is approached. History: The diffuse series used to be called the first subordinate series, with the sharp series being the second subordinate, both being subordinate to (less intense than) the principal series. Laws for alkali metals: The diffuse series limit is the same as the sharp series limit; in the late 1800s these two were termed supplementary series. Spectral lines of the diffuse series are split into three lines in what is called fine structure. These lines cause the overall line to look diffuse. The reason this happens is that both the P and D levels are split into two closely spaced energies: P is split into P1/2 and P3/2, and D is split into D3/2 and D5/2. Only three of the four conceivable transitions can take place, because the angular momentum change cannot have a magnitude greater than one. In 1896 Arthur Schuster stated his law: "If we subtract the frequency of the fundamental vibration from the convergence frequency of the principal series, we obtain the convergence frequency of the supplementary series". But in the next issue of the journal he realised that Rydberg had published the idea a few months earlier. Rydberg–Schuster law: Using wave numbers, the difference between the diffuse and sharp series limits and the principal series limit is the same as the first transition in the principal series. This difference is the lowest P level. Runge's law: Using wave numbers, the difference between the diffuse series limit and the fundamental series limit is the same as the first transition in the diffuse series. This difference is the lowest D level energy. Lithium: Lithium has a diffuse series with lines averaged around 6103.53, 4603.0, 4132.3, 3915.0 and 3794.7 Å. Sodium: The sodium diffuse series has wave numbers given by a Rydberg-type formula with n = 3, 4, 5, 6, ...
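The series formulas referred to above were elided in the text. What follows is a hedged reconstruction in standard quantum-defect (Rydberg) notation rather than the article's original typography; R is the Rydberg constant, and the defect symbols δp, δd, δs for the P, D and S terms are assumptions of this sketch:

```latex
% Sodium diffuse series (3P -> nD transitions):
\tilde{\nu}_{\mathrm{diffuse}} = \frac{R}{(3-\delta_p)^2} - \frac{R}{(n-\delta_d)^2},
  \qquad n = 3, 4, 5, 6, \dots
% Sodium sharp series (3P -> nS transitions):
\tilde{\nu}_{\mathrm{sharp}} = \frac{R}{(3-\delta_p)^2} - \frac{R}{(n-\delta_s)^2},
  \qquad n = 4, 5, 6, \dots
% As n -> infinity both series tend to the common limit R/(3-\delta_p)^2,
% consistent with the statement that the diffuse and sharp limits coincide.
```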
The sharp series has wave numbers given by a similar formula with n = 4, 5, 6, ...; as n tends to infinity, the diffuse and sharp series end up with the same limit. Potassium. Alkaline earths: A diffuse series of triplet lines is designated by the series letter d and the formula 1p-md. The diffuse series of singlet lines has the series letter D and the formula 1P-mD. Helium: Helium is in the same category as the alkaline earths with respect to spectroscopy, as it has two electrons in the S subshell, as they do. Helium has a diffuse series of doublet lines with wavelengths 5876, 4472 and 4026 Å. Ionised helium, termed He II, has a spectrum very similar to hydrogen but shifted to shorter wavelengths; it has a diffuse series as well, with wavelengths at 6678, 4922 and 4388 Å. Magnesium: Magnesium has a diffuse series of triplets and a sharp series of singlets. Calcium: Calcium has a diffuse series of triplets and a sharp series of singlets. Strontium: With strontium vapour, the most prominent lines are from the diffuse series. Barium: Barium has a diffuse series running from infrared to ultraviolet, with wavelengths at 25515.7, 23255.3, 22313.4; 5818.91, 5800.30, 5777.70; 4493.66, 4489.00; 4087.31, 4084.87; 3898.58, 3894.34; 3789.72, 3788.18; 3721.17 and 3720.85 Å. History: At Cambridge University, George Liveing and James Dewar set out to systematically measure the spectra of elements from groups I, II and III in visible light and the longer-wave ultraviolet that would transmit through air. They noticed that the lines for sodium alternated between sharp and diffuse, and they were the first to use the term "diffuse" for such lines. They classified alkali metal spectral lines into sharp and diffuse categories. In 1890 the lines that also appeared in the absorption spectrum were termed the principal series.
Rydberg continued the use of sharp and diffuse for the other lines, whereas Kayser and Runge preferred the term first subordinate series for the diffuse series. Arno Bergmann found a fourth series in the infrared in 1907, and this became known as the Bergmann series or fundamental series. Heinrich Kayser, Carl Runge and Johannes Rydberg found mathematical relations between the wave numbers of the emission lines of the alkali metals. Friedrich Hund introduced the s, p, d, f notation for subshells in atoms; others followed this use in the 1930s, and the terminology has remained to this day.
**Cartan–Karlhede algorithm** Cartan–Karlhede algorithm: The Cartan–Karlhede algorithm is a procedure for completely classifying and comparing Riemannian manifolds. Given two Riemannian manifolds of the same dimension, it is not always obvious whether they are locally isometric. Élie Cartan, using his exterior calculus with his method of moving frames, showed that it is always possible to compare the manifolds. Carl Brans developed the method further, and the first practical implementation was presented by Anders Karlhede in 1980. The main strategy of the algorithm is to take covariant derivatives of the Riemann tensor. Cartan showed that in n dimensions at most n(n+1)/2 differentiations suffice. If the Riemann tensor and its derivatives of one manifold are algebraically compatible with those of the other, then the two manifolds are isometric. The Cartan–Karlhede algorithm therefore acts as a kind of generalization of the Petrov classification. The potentially large number of derivatives can be computationally prohibitive. The algorithm was implemented in an early symbolic computation engine, SHEEP, but the size of the computations proved too challenging for early computer systems to handle. For most problems considered, far fewer derivatives than the maximum are actually required, and the algorithm is more manageable on modern computers. On the other hand, no publicly available version exists in more modern software. Physical applications: The Cartan–Karlhede algorithm has important applications in general relativity. One reason for this is that the simpler notion of curvature invariants fails to distinguish spacetimes as well as it distinguishes Riemannian manifolds.
This difference in behavior is ultimately due to the fact that spacetimes have isotropy subgroups which are subgroups of the Lorentz group SO+(1,3), a noncompact Lie group, while four-dimensional Riemannian manifolds (i.e., with positive definite metric tensor) have isotropy groups which are subgroups of the compact Lie group SO(4). In 4 dimensions, Karlhede's improvement to Cartan's program reduces the maximal number of covariant derivatives of the Riemann tensor needed to compare metrics to 7. In the worst case, this requires 3156 independent tensor components. There are known models of spacetime requiring all 7 covariant derivatives. For certain special families of spacetime models, however, far fewer often suffice. It is now known, for example, that at most one differentiation is required to compare any two null dust solutions, at most two differentiations are required to compare any two Petrov D vacuum solutions, and at most three differentiations are required to compare any two perfect fluid solutions.
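The raw ingredient the algorithm repeatedly differentiates, the Riemann tensor, can be computed symbolically. Below is a minimal illustrative sketch (not the Cartan–Karlhede algorithm itself, and not SHEEP) that computes the Christoffel symbols, the Riemann tensor and the Ricci scalar for a 2-sphere using sympy; a real implementation would work in four dimensions, in a frame, and would also take the tensor's covariant derivatives.

```python
import sympy as sp

# Coordinates and the metric of a 2-sphere of radius a (toy example).
theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
g = sp.Matrix([[a**2, 0], [0, a**2 * sp.sin(theta)**2]])
ginv = g.inv()
n = 2

# Christoffel symbols: Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
Gamma = [[[sp.simplify(sum(
    ginv[k, l] * (sp.diff(g[l, j], x[i]) + sp.diff(g[l, i], x[j]) - sp.diff(g[i, j], x[l]))
    for l in range(n)) / 2)
    for j in range(n)] for i in range(n)] for k in range(n)]

def riemann(r, s, i, j):
    """R^r_{sij} = d_i Gamma^r_{js} - d_j Gamma^r_{is}
                   + Gamma^r_{ik} Gamma^k_{js} - Gamma^r_{jk} Gamma^k_{is}."""
    expr = sp.diff(Gamma[r][j][s], x[i]) - sp.diff(Gamma[r][i][s], x[j])
    expr += sum(Gamma[r][i][k] * Gamma[k][j][s] - Gamma[r][j][k] * Gamma[k][i][s]
                for k in range(n))
    return sp.simplify(expr)

# Ricci tensor R_{sj} = R^i_{sij} and the Ricci scalar, a curvature invariant.
ricci = sp.Matrix(n, n, lambda s, j: sum(riemann(i, s, i, j) for i in range(n)))
ricci_scalar = sp.simplify(sum(ginv[s, j] * ricci[s, j]
                               for s in range(n) for j in range(n)))
# For the 2-sphere the Ricci scalar is 2/a**2.
```

For this maximally symmetric example a single invariant already characterizes the geometry; the point of the Cartan–Karlhede algorithm is that for Lorentzian metrics such scalar invariants are not enough, so the tensor components themselves, in a suitably fixed frame, must be compared.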
**1,3-Dimethylbutylamine** 1,3-Dimethylbutylamine: 1,3-Dimethylbutylamine (1,3-DMBA, dimethylbutylamine, DMBA, 4-amino-2-methylpentane, or AMP) is a stimulant drug structurally related to methylhexanamine, in which a butyl group replaces the pentyl group. The compound is an aliphatic amine. The hydrochloride and citrate salts of DMBA have been identified as unapproved ingredients in some over-the-counter dietary supplements, in which it is used in an apparent attempt to circumvent bans on methylhexanamine. The U.S. Food and Drug Administration (FDA) considers any dietary supplement containing DMBA to be "adulterated". Despite the FDA's opposition, DMBA continues to be sold in the US. There are no known human safety studies on DMBA, and its health effects are entirely unknown.
**Autoimmune/inflammatory syndrome induced by adjuvants** Autoimmune/inflammatory syndrome induced by adjuvants: Autoimmune/inflammatory syndrome induced by adjuvants (ASIA), or Shoenfeld's syndrome, is a hypothesised autoimmune disorder proposed by Israeli immunologist Yehuda Shoenfeld in 2011. According to Shoenfeld, the syndrome is triggered by exposure to adjuvants and encompasses "post-vaccination symptoms", macrophagic myofasciitis, Gulf War syndrome, sick building syndrome, and siliconosis. Shoenfeld alleges that the syndrome is caused by adjuvants such as silicone, tetramethylpentadecane, pristane, and aluminum. However, causality is difficult to prove because ASIA only occurs in a small fraction of patients exposed to these adjuvants; additionally, proponents of the theory allege that the disorder can manifest anywhere from 2 days to 23 years after exposure. Shoenfeld has also named Sjögren's syndrome as potentially being another facet of ASIA. Apart from the theoretical concept of ASIA, however, there is a lack of reproducible evidence for any causal relationship between adjuvants and autoimmune conditions. A study of 18,000 people showed that there is no merit to the theory of autoimmune/inflammatory syndrome induced by adjuvants.
**Language-for-specific-purposes dictionary** Language-for-specific-purposes dictionary: A language-for-specific-purposes dictionary (LSP dictionary) is a reference work which defines the specialised vocabulary used by experts within a particular field, for example, architecture. The discipline that deals with these dictionaries is specialised lexicography. Medical dictionaries are well-known examples of the type. Users: As described in Bergenholtz/Tarp 1995, LSP dictionaries are often made for users who are already specialists within a subject field (experts), but may also be made for semi-experts and laypeople. In contrast to LSP dictionaries, LGP (language for general purposes) dictionaries are made to be used by an average user. LSP dictionaries may have one or more functions. They may have communicative functions, such as helping users to understand, translate and produce texts, and they may also have cognitive functions, such as helping users to develop knowledge in general or about a specific topic, for example the birthday of a famous person or the inflectional paradigm of a specific verb. Different types: According to Sandro Nielsen, LSP dictionaries may cover one language (monolingual) or two languages (bilingual), and occasionally more (multilingual). An LSP dictionary that attempts to cover as much of the vocabulary in a subject field as possible is classified by Nielsen as a maximizing dictionary, and one that attempts to cover a limited number of terms within a subject field is a minimizing dictionary. Nielsen 1994 also distinguishes between the following types: an LSP dictionary that covers more than one subject field is called a multi-field dictionary, one that covers a single subject field (e.g. a dictionary of law) is called a single-field dictionary, and one that covers part of a subject field (e.g. a dictionary of contract law) is called a sub-field dictionary.
Usage dictionaries: A common form of LSP dictionary is a usage dictionary for a particular field or genre, such as journalism, providing advice on words and phrases to prefer, and distinctions between easily confused usages. Probably the best known of these for news style writing is the AP Stylebook. Many such works also have elements of a style guide, though most style guides are not in dictionary format but are arranged as a series of rules in sections, and are more concerned with grammar and punctuation. Some usage dictionaries are intended for a general rather than specialized audience, and are therefore more comprehensive; two major ones are Fowler's Dictionary of Modern English Usage and Garner's Modern English Usage.
**Unit record equipment** Unit record equipment: Starting at the end of the nineteenth century, well before the advent of electronic computers, data processing was performed using electromechanical machines collectively referred to as unit record equipment, electric accounting machines (EAM) or tabulating machines. Unit record machines came to be as ubiquitous in industry and government in the first two-thirds of the twentieth century as computers became in the last third. They allowed large-volume, sophisticated data-processing tasks to be accomplished before electronic computers were invented and while they were still in their infancy. This data processing was accomplished by processing punched cards through various unit record machines in a carefully choreographed progression. This progression, or flow, from machine to machine was often planned and documented with detailed flowcharts that used standardized symbols for documents and the various machine functions. All but the earliest machines had high-speed mechanical feeders to process cards at rates from around 100 to 2,000 per minute, sensing punched holes with mechanical, electrical, or, later, optical sensors. The operation of many machines was directed by the use of a removable plugboard, control panel, or connection box. Initially all machines were manual or electromechanical; the first use of an electronic component was in 1937, when a photocell was used in a Social Security bill-feed machine. Electronic components were used on other machines beginning in the late 1940s. The term unit record equipment also refers to peripheral equipment attached to computers that reads or writes unit records, e.g., card readers, card punches, printers, and MICR readers. IBM was the largest supplier of unit record equipment, and this article largely reflects IBM practice and terminology.
History: Beginnings. In the 1880s Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine-readable media had been for lists of instructions (not data) to drive programmed machines such as Jacquard looms and mechanized musical instruments. "After some initial trials with paper tape, he settled on punched cards [...]". To process these punched cards, sometimes referred to as "Hollerith cards", he invented the keypunch, sorter, and tabulator unit record machines. These inventions were the foundation of the data processing industry. The tabulator used electromechanical relays to increment mechanical counters. Hollerith's method was used in the 1890 census. The company he founded in 1896, the Tabulating Machine Company (TMC), was one of four companies amalgamated in 1911 to form a fifth company, the Computing-Tabulating-Recording Company, later renamed IBM. Following the 1900 census a permanent Census Bureau was formed. The bureau's contract disputes with Hollerith led to the formation of the Census Machine Shop, where James Powers and others developed new machines for part of the 1910 census processing. Powers left the Census Bureau in 1911, with rights to patents for the machines he developed, and formed the Powers Accounting Machine Company. In 1927 Powers' company was acquired by Remington Rand. In 1919 Fredrik Rosing Bull, after examining Hollerith's machines, began developing unit record machines for his employer. Bull's patents were sold in 1931, constituting the basis for Groupe Bull. These companies, and others, manufactured and marketed a variety of general-purpose unit record machines for creating, sorting, and tabulating punched cards, even after the development of computers in the 1950s. Punched card technology had quickly developed into a powerful tool for business data processing.
Timeline:
1884: Herman Hollerith files a patent application titled "Art of Compiling Statistics"; granted U.S. Patent 395,782 on January 8, 1889.
1886: First use of a tabulating machine, in Baltimore's Department of Health.
1887: Hollerith files a patent application for an integrating tabulator (granted in 1890).
1889: First recorded use of an integrating tabulator, in the Office of the Surgeon General of the Army.
1890–1895: U.S. Census tabulations (Superintendents Robert P. Porter, 1889–1893, and Carroll D. Wright, 1893–1897) are done using equipment supplied by Hollerith.
1896: The Tabulating Machine Company founded by Hollerith; the trade name for its products is Hollerith.
1901: Hollerith Automatic Horizontal Sorter.
1904: Porter, having returned to England, forms The Tabulator Limited (UK) to market Hollerith's machines.
1905: Hollerith reincorporates the Tabulating Machine Company as The Tabulating Machine Company.
1906: Hollerith Type 1 Tabulator, the first tabulator with an automatic card feed and control panel.
1909: The Tabulator Limited renamed the British Tabulating Machine Company (BTM).
1910: Tabulators built by the Census Machine Shop print results.
1910: Willy Heidinger, an acquaintance of Hollerith, licenses The Tabulating Machine Company's patents, creating Dehomag in Germany.
1911: Computing-Tabulating-Recording Company (CTR), a holding company, formed by the amalgamation of The Tabulating Machine Company and three other companies.
1911: James Powers forms the Powers Tabulating Machine Company, later renamed the Powers Accounting Machine Company. Powers had been employed by the Census Bureau to work on tabulating machine development and was given the right to patent his inventions there. The machines he developed sensed card punches mechanically, as opposed to Hollerith's electric sensing.
1912: The first Powers horizontal sorting machine.
1914: Thomas J. Watson hired by CTR.
1914: The Tabulating Machine Company produces 2 million punched cards per day.
1914: The first Powers printing tabulator.
1915: Powers Tabulating Machine Company establishes European operations through the Accounting and Tabulating Machine Company of Great Britain Limited.
1919: Fredrik Rosing Bull, after studying Hollerith's machines, constructs a prototype 'ordering, recording and adding machine' (tabulator) of his own design. About a dozen machines were produced during the following several years for his employer.
1920s: Early in the decade, punched cards begin to be used as bank checks.
1920: BTM begins manufacturing its own machines, rather than simply marketing Hollerith equipment.
1920: The Tabulating Machine Company's first printing tabulator, the Hollerith Type 3.
1921: Powers-Samas develops the first commercial alphabetic punched card representation.
1922: Powers develops an alphabetic printer.
1923: Powers develops a tabulator that accumulates and prints both sub- and grand totals (rolling totals).
1923: CTR acquires 90% ownership of Dehomag, thus acquiring the patents developed by it.
1924: Computing-Tabulating-Recording Company (CTR) renamed International Business Machines (IBM). There would be no IBM-labeled products until 1933.
1925: The Tabulating Machine Company's first horizontal card sorter, the Hollerith Type 80, processes 400 cards/min.
1927: Remington Typewriter Company and Rand Kardex combine to form Remington Rand. Within a year, Remington Rand acquires the Powers Accounting Machine Company.
1928: The Tabulating Machine Company's first tabulator that could subtract, the Hollerith Type IV. The company begins its collaboration with Benjamin Wood, Wallace John Eckert and the Statistical Bureau at Columbia University, and introduces its 80-column card. Comrie uses punched card machines to calculate the motions of the moon, a project in which 20,000,000 holes are punched into 500,000 cards and which continues into 1929.
It is the first use of punched cards in a purely scientific application.
1929: The Accounting and Tabulating Machine Company of Great Britain Limited renamed Powers-Samas Accounting Machine Limited (Samas, in full Societe Anonyme des Machines a Statistiques, had been the Powers sales agency in France, formed in 1922). The informal reference "Acc and Tab" would persist.
1930: The Remington Rand 90-column card, offering "more storage capacity [and] alphabetic capability".
1931: H.W. Egli - Bull founded to capitalize on the punched card technology patents of Fredrik Rosing Bull. The Tabulator model T30 is introduced.
1931: The Tabulating Machine Company's first punched card machine that could multiply, the 600 Multiplying Punch. Its first alphabetic accounting machine, the Alphabetic Tabulator Model B (not yet a complete alphabet), is quickly followed by the full-alphabet ATC.
1931: The term "Super Computing Machine" is used by the New York World newspaper to describe the Columbia Difference Tabulator, a one-of-a-kind special-purpose tabulator-based machine made for the Columbia Statistical Bureau, a machine so massive it was nicknamed "Packard". The Packard attracted users from across the country: "the Carnegie Foundation, Yale, Pittsburgh, Chicago, Ohio State, Harvard, California and Princeton."
1933: Compagnie des Machines Bull is the new name of the reorganized H.W. Egli - Bull.
1933: The Tabulating Machine Company name disappears as subsidiary companies are merged into IBM, and the Hollerith trade name is replaced by IBM. IBM introduces removable control panels.
1933: Dehomag's BK tabulator (developed independently of IBM) announced.
1934: IBM renames its tabulators Electric Accounting Machines.
1935: BTM Rolling Total Tabulator introduced.
1937: Leslie Comrie establishes the Scientific Computing Service Limited, the first for-profit calculating agency.
1937: The first collator, the IBM 077 Collator. The first use of an electronic component in an IBM product was a photocell in a Social Security bill-feed machine. By 1937 IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day. 1938: Powers-Samas multiplying punch introduced. 1941: Introduction of Bull Type A unit record machines based on the 80-column card. History: 1943: "IBM had about 10,000 tabulators on rental [...] 601 multipliers numbered about 2000 [...] keypunch[es] 24,500". 1946: The first IBM punched card machine that could divide, the IBM 602, was introduced. Unreliable, it "was upgraded to the 602-A (a '602 that worked') [...] by 1948". The IBM 603 Electronic Multiplier was introduced, "the first electronic calculator ever placed into production". History: 1948: The IBM 604 Electronic Punch. "No other calculator of comparable size or cost could match its capability". 1949: The IBM 024 Card Punch, 026 Printing Card Punch, 082 Sorter, 403 Accounting Machine, 407 Accounting Machine, and Card Programmed Calculator (CPC) introduced. 1952: Bull Gamma 3 introduced. An electronic calculator with delay-line memory, programmed by a connection panel, that was connected to a tabulator or card reader-punch. The Gamma 3 had greater capacity, greater speed, and lower rentals than competitive products. 1952: Remington Rand 409 Calculator (aka. UNIVAC 60, UNIVAC 120) introduced. History: 1952: Underwood Corp acquires the American assets of Powers-Samas. By the 1950s punched cards and unit record machines had become ubiquitous in academia, industry and government. The warning often printed on cards that were to be individually handled, "Do not fold, spindle or mutilate", coined by Charles A.
Philips, became a motto for the post-World War II era (even though many people had no idea what spindle meant). With the development of computers, punched cards found new uses as their principal input media. Punched cards were used not only for data, but for a new application - computer programs; see Computer programming in the punched card era. Unit record machines therefore remained in computer installations in a supporting role for keypunching, reproducing card decks, and printing. History: 1955: IBM produces 72.5 million punched cards per day. History: 1957: The IBM 608, a transistor version of the 1948 IBM 604. First commercial all-transistor calculator. 1958: The "Series 50", basic accounting machines, was announced. These were modified machines, with reduced speed and/or function, offered for rental at reduced rates. The name "Series 50" relates to a similar marketing effort, the "Model 50", seen in the IBM 1940 product booklet. An alternate report identifies the modified machines as "Type 5050" introduced in 1959 and notes that Remington-Rand introduced similar products. History: 1959: BTM is merged with rival Powers-Samas to form International Computers and Tabulators (ICT). History: 1959: The IBM 1401, internally known in IBM for a time as "SPACE" for "Stored Program Accounting and Calculating Equipment" and developed in part as a response to the Bull Gamma 3, outperforms three IBM 407s and a 604, while having a much lower rental. That functionality, combined with the availability of tape drives, accelerated the decline in unit record equipment usage. History: 1960: The IBM 609 Calculator, an improved 608 with core memory. This would be IBM's last punched card calculator. Many organizations were loath to alter systems that were working, so production unit record installations remained in operation long after computers offered faster and more cost-effective solutions.
Specialized uses of punched cards, including toll collection, microform aperture cards, and punched card voting, kept unit record equipment in use into the twenty-first century. Another reason was the cost or availability of equipment: for example, in 1965 an IBM 1620 computer did not have a printer as standard equipment, so it was normal in such installations to punch printed output onto cards (using two cards per line if required), print those cards on an IBM 407 accounting machine, and then throw the cards away. History: 1968: International Computers and Tabulators (ICT) is merged with English Electric Computers, forming International Computers Limited (ICL). History: 1969: The IBM System/3, renting for less than $1,000 a month, the ancestor of IBM's midrange computer product line, aka. minicomputers, was aimed at new customers and organizations that still used IBM 1400 series computers or unit record equipment. It featured a new, smaller, punched card with a 96-column format. Instead of the rectangular punches in the classic IBM card, the new cards had tiny (1 mm), circular holes much like paper tape. By July 1974 more than 25,000 System/3s had been installed. History: 1971: The IBM 129 Card Data Recorder (keypunch and auxiliary on-line card reader/punch) is the last, or among the last, 80-column card unit record product announcements (other than card readers and card punches attached to computers). 1975: Cardamation founded, a U.S. company that supplied punched card equipment and supplies until 2011. Endings: 1976: The IBM 407 Accounting Machine was withdrawn from marketing. 1978: IBM's Rochester plant made its last shipment of the IBM 082, 084, 085, 087, 514, and 548 machines. The System/3 was succeeded by the System/38. 1980: The last reconditioning of an IBM 519 Document Originating Punch. 1984: The IBM 029 Card Punch, announced in 1964, was withdrawn from marketing. IBM closed its last punched card manufacturing plant.
2010: A group from the Computer History Museum reported that an IBM 402 Accounting Machine and related punched card equipment was still in operation at a filter manufacturing company in Conroe, Texas. The punched card system was still in use as of 2013. 2011: The owner of Cardamation, Robert G. Swartz, dies, and the company, perhaps the last supplier of punched card equipment, ceases operation. 2015: Punched cards for time clocks and some other applications were still available; one supplier was the California Tab Card Company. As of 2018, the web site was no longer in service. Punched cards: The basic unit of data storage was the punched card. The IBM 80-column card was introduced in 1928. The Remington Rand card, with 45 columns in each of two tiers (thus 90 columns), was introduced in 1930. Powers-Samas punched cards included one with 130 columns. Columns on different punched cards vary from 5 to 12 punch positions. Punched cards: The method used to store data on punched cards is vendor-specific. In general, each column represents a single digit, letter or special character. Sequential card columns allocated for a specific use, such as names, addresses, multi-digit numbers, etc., are known as a field. An employee number might occupy 5 columns; hourly pay rate, 3 columns; hours worked in a given week, 2 columns; department number, 3 columns; project charge code, 6 columns; and so on. Keypunching: Original data were usually punched into cards by workers, often women, known as keypunch operators, under the control of a program card (called a drum card because it was installed on a rotating drum in the machine), which could automatically skip or duplicate predefined card columns, enforce numeric-only entry, and, later, right-justify a number entered. Their work was often checked by a second operator using a verifier machine, also under the control of a drum card.
The verifier operator re-keyed the source data and the machine compared what was keyed to what had been punched on the original card. Sorting: An activity in many unit record shops was sorting card decks into the order necessary for the next processing step. Sorters, like the IBM 80 series Card Sorters, sorted input cards into one of 13 pockets depending on the holes punched in a selected column and the sorter's settings. The 13th pocket was for blanks and rejects. Cards were sorted on one card column at a time; sorting on, for example, a five-digit zip code required that the card deck be processed five times. Sorting an input card deck into ascending sequence on a multiple-column field, such as an employee number, was done by a radix sort, a bucket sort, or a combination of the two methods. Sorting: Sorters were also used to separate decks of interspersed master and detail cards, either by a significant hole punch or by the cards' corner cut. More advanced functionality was available in the IBM 101 Electronic Statistical Machine, which could sort, count, accumulate totals, print summaries, and send calculated results (counts and totals) to an attached IBM 524 Duplicating Summary Punch. Tabulating: Reports and summary data were generated by accounting or tabulating machines. The original tabulators only counted the presence of a hole at a location on a card. Simple logic, like ANDs and ORs, could be done using relays. Tabulating: Later tabulators, such as those in IBM's 300 series, directed by a control panel, could do both addition and subtraction of selected fields to one or more counters and print each card on its own line. At some signal, say a following card with a different customer number, totals could be printed for the just-completed customer number. Tabulators became complex: the IBM 405 contained 55,000 parts (2,400 different) and 75 miles of wire; a Remington Rand machine circa 1941 contained 40,000 parts.
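The column-at-a-time radix sort described in the Sorting section above can be sketched in a few lines of Python. This is an illustrative model only: the 80-column string records and the 5-column employee-number field starting at column 0 are hypothetical, and the 13th pocket for blanks and rejects is reduced to a simple reject list.

```python
def sort_on_column(deck, col):
    """One pass of a card sorter: distribute cards into pockets 0-9
    (plus a reject pocket) on a single column, then restack in order."""
    pockets = {d: [] for d in "0123456789"}
    rejects = []  # stands in for the 13th pocket: blanks and non-numeric punches
    for card in deck:
        ch = card[col]
        if ch in pockets:
            pockets[ch].append(card)
        else:
            rejects.append(card)
    # Restacking pockets 0..9 in order preserves the ordering established
    # by earlier passes, which is what makes the radix sort work.
    return [card for d in "0123456789" for card in pockets[d]] + rejects

def radix_sort_field(deck, first_col, width):
    """Sort a deck on a multi-column field by sorting the least
    significant column first, as operators did on an IBM 80-series sorter."""
    for col in range(first_col + width - 1, first_col - 1, -1):
        deck = sort_on_column(deck, col)
    return deck

# Three 80-column cards with a 5-column employee number in columns 0-4.
deck = [f"{n:05d}".ljust(80) for n in (31415, 27, 2718)]
sorted_deck = radix_sort_field(deck, 0, 5)  # employee numbers now ascending
```

A five-column field requires five passes through the machine, exactly as the text describes for a five-digit zip code.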
Calculating: In 1931, IBM introduced the model 600 multiplying punch. The ability to divide became commercially available after World War II. The earliest of these calculating punches were electromechanical. Later models employed vacuum tube logic. Electronic modules developed for these units were used in early computers, such as the IBM 650. The Bull Gamma 3 calculator could be attached to tabulating machines, unlike the stand-alone IBM calculators. Card punching: Card punching operations included: Gang punching - producing a large number of identically punched cards, for example for inventory tickets. Reproducing - reproducing a card deck in its entirety or just selected fields. A payroll master card deck might be reproduced at the end of a pay period with the hours worked and net pay fields blank and ready for the next pay period's data. Programs in the form of card decks were reproduced for backup. Summary punching - punching new cards with details and totals from an attached tabulating machine. Mark sense reading - detecting electrographic lead pencil marks on ovals printed on the card and punching the corresponding data values into the card. Singly or in combination, these operations were provided in a variety of machines. The IBM 519 Document-Originating Machine could perform all of the above operations. The IBM 549 Ticket Converter read data from Kimball tags, copying that data to punched cards. With the development of computers, punched cards were also produced by computer output devices. Collating: IBM collators had two input hoppers and four output pockets. These machines could merge or match card decks based on the control panel's wiring. The Remington Rand Interfiling Reproducing Punch Type 310-1 was designed to merge two separate files into a single file. It could also punch additional information into those cards and select desired cards. Collators performed operations comparable to a database join.
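The collator's match-merge described above is comparable to a database join. A minimal sketch, assuming both decks are already sorted on the same key field (the 3-column key positions and card contents below are hypothetical), might route cards to four output pockets like this:

```python
def match_merge(masters, details, key):
    """Route cards from two sorted input hoppers to four pockets:
    matched masters, matched details, unmatched masters, unmatched
    details - comparable to a full outer join on the key field."""
    pockets = {"matched_masters": [], "matched_details": [],
               "unmatched_masters": [], "unmatched_details": []}
    i = j = 0
    while i < len(masters) and j < len(details):
        km, kd = key(masters[i]), key(details[j])
        if km == kd:
            pockets["matched_masters"].append(masters[i])
            i += 1
            # All consecutive details with this key match the same master card.
            while j < len(details) and key(details[j]) == km:
                pockets["matched_details"].append(details[j])
                j += 1
        elif km < kd:
            pockets["unmatched_masters"].append(masters[i])
            i += 1
        else:
            pockets["unmatched_details"].append(details[j])
            j += 1
    # Whatever remains in either hopper has no match in the other.
    pockets["unmatched_masters"] += masters[i:]
    pockets["unmatched_details"] += details[j:]
    return pockets

masters = ["001 ALICE", "002 BOB", "004 DORA"]
details = ["001 40HRS", "003 10HRS", "004 35HRS"]
out = match_merge(masters, details, key=lambda card: card[:3])
```

A single linear pass suffices only because both decks were sorted first, which is why sorting was such a central activity in unit record shops.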
Interpreting: An interpreter prints characters on a punched card equivalent to the values of all or selected columns. The columns to be printed can be selected and even reordered, based on the machine's control panel wiring. Later models could print on one of several rows on the card. Unlike keypunches, which print values directly above each column, interpreters generally use a font that is a little wider than a column and can only print up to 60 characters per row. Typical models include the IBM 550 Numeric Interpreter, the IBM 557 Alphabetic Interpreter, and the Remington Rand Type 312 Alphabetic Interpreter. Filing: Batches of punched cards were often stored in tub files, where individual cards could be pulled to meet the requirements of a particular application. Transmission of punched card data: Electrical transmission of punched card data was invented in the early 1930s. The device was called an Electrical Remote Control of Office Machines and was assigned to IBM. The inventors were Joseph C. Bolt of Boston and Curt I. Johnson of Worcester, Mass., assignors to the Tabulating Machine Co., Endicott, NY. The Distance Control Device received a US patent on August 9, 1932: U.S. Patent 1,870,230. Letters from IBM mention a filing in Canada on 9/15/1931. Processing punched tape: The IBM 046 Tape-to-Card Punch and the IBM 047 Tape-to-Card Printing Punch (which was almost identical, but with the addition of a printing mechanism) read data from punched paper tape and punched that data into cards. The IBM 063 Card-Controlled Tape Punch read punched cards, punching that data into paper tape. Control panel wiring and connection boxes: The operation of Hollerith/BTM/IBM/Bull tabulators and many other types of unit record equipment was directed by a control panel. Operation of Powers-Samas/Remington Rand unit record equipment was directed by a connection box. Control panels had a rectangular array of holes called hubs which were organized into groups.
Wires with metal ferrules at each end were placed in the hubs to make connections. The output from some card column positions might be connected to a tabulating machine's counter, for example. A shop would typically have a separate control panel for each task a machine was used for. Paper handling equipment: For many applications, the volume of fan-fold paper produced by tabulators required other machines, not considered to be unit record machines, to ease paper handling. A decollator separated multi-part fan-fold paper into individual stacks of one-part fan-fold and removed the carbon paper. A burster separated one-part fan-fold paper into individual sheets. For some uses it was desirable to remove the tractor-feed holes on either side of the fan-fold paper. In these cases the form's edge strips were perforated and the burster removed them as well.
**Rubber science** Rubber science: Rubber science is a science fiction term describing a quasi-scientific explanation for an aspect of a science fiction setting. Rubber science explanations are fictional but convincing enough to avoid upsetting the suspension of disbelief. Rubber science is a feature of most genres of science fiction, with the exception of hard science fiction. It is also frequently invoked in comic books. The term was coined by Norman Spinrad in an essay entitled "Rubber Sciences", in Reginald Bretnor's anthology The Craft of Science Fiction. Usage: Rubber science was Spinrad's term for "pseudo-science ... made up by the writer with literary care that it not be discontinuous with the reader's realm of the possible." The term and concept have subsequently been adopted by science fiction writers to describe science based on "speculation, extrapolation, fabrication or invention." For example, Star Trek: Voyager script consultant Andre Bormanis used "the so-called rubber science or the very speculative, consistent with reality" when he was unable to find scientific explanations "based in fairly well-established real science". Some science fiction authors have used the term disparagingly. Bill Ransom associates rubber science with science fiction of the 1940s-1950s, an era marked by "lots of cool gadgets," before "the genre became more character driven" under the influence of writers such as Frank Herbert and Samuel Delany, focusing on humans rather than technology solving dilemmas. Lucius Shepard, responding to a negative review by George Turner, decried the suggestion that he "haul a gob of rubber science out of the vat in order to justify and explain [his] physics". Ann C.
Crispin considered Star Trek's rubber science to be a forgivable flaw. Reviewers have used the term to praise deft or plausible scientific explanations, and to criticise underdeveloped or distracting worldbuilding; for instance, a Washington Post review criticized Orson Scott Card's novel Xenocide for its "chapter-long dialogues about rubber science".
**Desiccation** Desiccation: Desiccation (from Latin de- 'thoroughly', and siccare 'to dry') is the state of extreme dryness, or the process of extreme drying. A desiccant is a hygroscopic (attracts and holds water) substance that induces or sustains such a state in its local vicinity in a moderately sealed container. Industry: Desiccation is widely employed in the oil and gas industry. Oil and gas are obtained in a hydrated state, but the water content leads to corrosion or is incompatible with downstream processing. Removal of water is achieved by cryogenic condensation, absorption into glycols, and adsorption onto desiccants such as silica gel. Laboratory: A desiccator is a heavy glass or plastic container, now somewhat antiquated, used in practical chemistry for drying or keeping small amounts of materials very dry. The material is placed on a shelf, and a drying agent or desiccant, such as dry silica gel or anhydrous sodium hydroxide, is placed below the shelf. Laboratory: Often some sort of humidity indicator is included in the desiccator to show, by color changes, the level of humidity. These indicators are in the form of indicator plugs or indicator cards. The active chemical is cobalt chloride (CoCl2). Anhydrous cobalt chloride is blue. When it bonds with two water molecules (CoCl2•2H2O), it turns purple. Further hydration results in the pink hexaaquacobalt(II) chloride complex [Co(H2O)6]2+. Biology and ecology: In biology and ecology, desiccation refers to the drying out of a living organism, such as when aquatic animals are taken out of water, slugs are exposed to salt, or when plants are exposed to sunlight or drought. Ecologists frequently study and assess various organisms' susceptibility to desiccation. For example, in one study the investigators found that the Caenorhabditis elegans dauer is a true anhydrobiote that can withstand extreme desiccation and that the basis of this ability is founded in the metabolism of trehalose.
Biology and ecology: DNA damage and repair: Several bacterial species have been shown to accumulate DNA damage upon desiccation. Deinococcus radiodurans is extremely resistant to ionizing radiation. The functions necessary to survive ionizing radiation are also necessary to survive prolonged desiccation. Radiation resistance is considered to be an incidental consequence of the organism's evolutionary adaptation to dehydration, a common physiological stress in nature. The chromosomal DNA from desiccated D. radiodurans revealed increased DNA double-strand breaks. DNA double-strand breaks are repaired principally by a RecA-dependent recombination process that requires the presence of two genome copies. By this process D. radiodurans can survive thousands of double-strand breaks per cell. Mycobacterium smegmatis mutant strains that are deficient in the ability to repair double-strand breaks by the non-homologous end joining (NHEJ) pathway are more sensitive to prolonged desiccation during stationary phase than wild-type strains. NHEJ appears to be the preferred pathway for repairing double-strand breaks caused by desiccation during stationary phase. NHEJ can repair double-strand breaks even when only one chromosome is present in a cell. Biology and ecology: Upon exposure to extreme dryness, Bacillus subtilis endospores acquire DNA double-strand breaks and DNA-protein crosslinks. Broadcasting: In broadcast engineering, a desiccator may be used to pressurize the feedline of a high-power transmitter. Because it carries a large amount of energy from the transmitter to the antenna, the feedline must have low dielectric losses. Because it must also be lightweight so as not to overload the radio tower, air is often used as the dielectric. Since moisture can condense in these lines, desiccated air or nitrogen gas is pumped in. This pressure also keeps water or other dampness from coming in the line at any point along its length.
**KRAS** KRAS: KRAS (Kirsten rat sarcoma virus) is a gene that provides instructions for making a protein called K-Ras, a part of the RAS/MAPK pathway. The protein relays signals from outside the cell to the cell's nucleus. These signals instruct the cell to grow and divide (proliferate) or to mature and take on specialized functions (differentiate). It is called KRAS because it was first identified as a viral oncogene in the Kirsten RAt Sarcoma virus. The oncogene identified was derived from a cellular genome, so KRAS, when found in a cellular genome, is called a proto-oncogene. KRAS: The K-Ras protein is a GTPase, a class of enzymes which convert the nucleotide guanosine triphosphate (GTP) into guanosine diphosphate (GDP). In this way the K-Ras protein acts like a switch that is turned on and off by the GTP and GDP molecules. To transmit signals, it must be turned on by attaching (binding) to a molecule of GTP. The K-Ras protein is turned off (inactivated) when it converts the GTP to GDP. When the protein is bound to GDP, it does not relay signals to the nucleus. KRAS: The gene product of KRAS, the K-Ras protein, was first found as a p21 GTPase. Like other members of the ras subfamily of GTPases, the K-Ras protein is an early player in many signal transduction pathways. K-Ras is usually tethered to cell membranes because of the presence of an isoprene group on its C-terminus. There are two protein products of the KRAS gene in mammalian cells that result from the use of alternative exon 4 (exon 4A and 4B respectively): K-Ras4A and K-Ras4B. These proteins have different structures in their C-terminal region and use different mechanisms to localize to cellular membranes, including the plasma membrane. Function: KRAS acts as a molecular on/off switch, using protein dynamics. Once it is allosterically activated, it recruits and activates proteins necessary for the propagation of growth factors, as well as other cell signaling receptors like c-Raf and PI 3-kinase. 
KRAS upregulates the GLUT1 glucose transporter, thereby contributing to the Warburg effect in cancer cells. KRAS binds to GTP in its active state. It also possesses an intrinsic enzymatic activity which cleaves the terminal phosphate of the nucleotide, converting it to GDP. Upon conversion of GTP to GDP, KRAS is deactivated. The rate of conversion is usually slow, but can be increased dramatically by an accessory protein of the GTPase-activating protein (GAP) class, for example RasGAP. In turn, KRAS can bind to proteins of the guanine nucleotide exchange factor (GEF) class (such as SOS1), which force the release of the bound nucleotide (GDP). Subsequently, KRAS binds GTP present in the cytosol and the GEF is released from Ras-GTP. Function: Other members of the Ras family include HRAS and NRAS. These proteins are all regulated in the same manner and appear to differ in their sites of action within the cell. Clinical significance when mutated: This proto-oncogene is a Kirsten ras oncogene homolog from the mammalian Ras gene family. A single amino acid substitution, and in particular a single nucleotide substitution, is responsible for an activating mutation. The transforming protein that results is implicated in various malignancies, including lung adenocarcinoma, mucinous adenoma, ductal carcinoma of the pancreas and colorectal cancer. Several germline KRAS mutations have been found to be associated with Noonan syndrome and cardio-facio-cutaneous syndrome. Somatic KRAS mutations are found at high rates in leukemias, colorectal cancer, pancreatic cancer and lung cancer. Clinical significance when mutated: Colorectal cancer: The impact of KRAS mutations is heavily dependent on the order of mutations. Primary KRAS mutations generally lead to a self-limiting hyperplastic or borderline lesion, but if they occur after a previous APC mutation they often progress to cancer.
KRAS mutations are more commonly observed in cecal cancers than in colorectal cancers located anywhere else from the ascending colon to the rectum. As of 2006, KRAS mutation was predictive of a very poor response to panitumumab (Vectibix) and cetuximab (Erbitux) therapy in colorectal cancer. As of 2008, the most reliable way to predict whether a colorectal cancer patient would respond to one of the EGFR-inhibiting drugs was to test for certain "activating" mutations in the gene that encodes KRAS, which occur in 30%–50% of colorectal cancers. Studies show patients whose tumors express the mutated version of the KRAS gene will not respond to cetuximab or panitumumab. As of 2009, although presence of the wild-type (or normal) KRAS gene does not guarantee that these drugs will work, a number of large studies had shown that cetuximab had efficacy in mCRC patients with KRAS wild-type tumors. In the Phase III CRYSTAL study, published in 2009, patients with the wild-type KRAS gene treated with Erbitux plus chemotherapy showed a response rate of up to 59% compared to those treated with chemotherapy alone. Patients with the KRAS wild-type gene also showed a 32% decreased risk of disease progression compared to patients receiving chemotherapy alone. As of 2012 it was known that emergence of KRAS mutations was a frequent driver of acquired resistance to cetuximab anti-EGFR therapy in colorectal cancers. The emergence of KRAS mutant clones can be detected non-invasively months before radiographic progression. This suggests that early initiation of a MEK inhibitor may be a rational strategy for delaying or reversing drug resistance. Clinical significance when mutated: KRAS amplification: The KRAS gene can also be amplified in colorectal cancer, and tumors harboring this genetic lesion are not responsive to EGFR inhibitors.
Although KRAS amplification is infrequent in colorectal cancer, as of 2013 it was hypothesized to be responsible for precluding response to anti-EGFR treatment in some patients. As of 2015, amplification of wild-type KRAS has also been observed in ovarian, gastric, uterine, and lung cancers. Clinical significance when mutated: Lung cancer: Whether a patient is positive or negative for a mutation in the epidermal growth factor receptor (EGFR) predicts how that patient will respond to certain EGFR antagonists such as erlotinib (Tarceva) or gefitinib (Iressa). Patients who harbor an EGFR mutation have a 60% response rate to erlotinib. However, mutations of KRAS and EGFR are generally mutually exclusive. Lung cancer patients who are positive for a KRAS mutation (and whose EGFR status is wild type) have a low response rate to erlotinib or gefitinib, estimated at 5% or less. Different types of data, including mutation status and gene expression, did not have significant prognostic power. No correlation to survival was observed in 72% of all studies with KRAS sequencing performed in non-small cell lung cancer (NSCLC). However, KRAS mutations can not only affect the gene itself and the expression of the corresponding protein, but can also influence the expression of other downstream genes involved in crucial pathways regulating cell growth, differentiation and apoptosis. The different expression of these genes in KRAS-mutant tumors might have a more prominent role in affecting patients' clinical outcomes. A 2008 paper published in Cancer Research concluded that the in vivo administration of the compound oncrasin-1 "suppressed the growth of K-ras mutant human lung tumor xenografts by >70% and prolonged the survival of nude mice bearing these tumors, without causing detectable toxicity", and that the "results indicate that oncrasin-1 or its active analogues could be a novel class of anticancer agents which effectively kill K-Ras mutant cancer cells."
KRAS testing: In July 2009, the US Food and Drug Administration (FDA) updated the labels of two anti-EGFR monoclonal antibody drugs indicated for treatment of metastatic colorectal cancer, panitumumab (Vectibix) and cetuximab (Erbitux), to include information about KRAS mutations. In 2012, the FDA cleared a genetic test by QIAGEN named the therascreen KRAS test, designed to detect the presence of seven mutations in the KRAS gene in colorectal cancer cells. This test aids physicians in identifying patients with metastatic colorectal cancer for treatment with Erbitux. The presence of KRAS mutations in colorectal cancer tissue indicates that the patient may not benefit from treatment with Erbitux. If the test result indicates that the KRAS mutations are absent in the colorectal cancer cells, then the patient may be considered for treatment with Erbitux. As a therapeutic target: As of 2014, driver mutations in KRAS were known to underlie the pathogenesis of up to 20% of human cancers. Hence KRAS is an attractive drug target, but as of 2018 the lack of obvious binding sites had hindered pharmaceutical development. One potential drug interaction site is where GTP/GDP binds, but due to the extraordinarily high affinity of GTP/GDP for this site, it appeared unlikely as of 2018 that drug-like small molecule inhibitors could compete with GTP/GDP binding. Other than where GTP/GDP binds, there are no obvious high-affinity binding sites for small molecules. As a therapeutic target: G12C mutation: One fairly frequent driver mutation is KRAS G12C, which is adjacent to a shallow binding site. As of 2019, this allowed the development of electrophilic KRAS inhibitors that can form irreversible covalent bonds with the nucleophilic sulfur atom of Cys-12 and hence selectively target KRAS G12C and leave wild-type KRAS untouched. In 2021, the U.S.
FDA approved one KRAS G12C mutant covalent inhibitor, sotorasib (AMG 510, Amgen), for the treatment of non-small cell lung cancer (NSCLC), the first KRAS inhibitor to reach the market and enter clinical use. A second is adagrasib (MRTX-849, Mirati Therapeutics), while JNJ-74699157 (also known as ARS-3248, Wellspring Biosciences/Janssen) has received an investigational new drug (IND) approval to start clinical trials. An antisense oligonucleotide (ASO) targeting KRAS, AZD4785 (AstraZeneca/Ionis Therapeutics), completed a phase I study but in 2019 was discontinued from further development because of insufficient knockdown of the target. As a therapeutic target: G12D mutation: The most common KRAS mutation is G12D, which is estimated to be present in up to 37% of pancreatic cancers and over 12% of colorectal cancers. Normally amino acid position 12 of the KRAS protein is occupied by glycine, but in G12D it is occupied by aspartic acid. As of 2023, there are no commercial drug candidates targeting the KRAS G12D mutation in the clinical phase of development. As of 2021, there were a number of drug candidates in preclinical stages of development targeting the KRAS G12D mutation. Mirati Therapeutics has stated it was seeking investigational new drug (IND) approval in H1 2021 to start clinical trials. As of 2022, Revolution Medicines was exploring a small molecule therapy and reported anti-tumor activity in KRAS G12D mutant tumor models. In 2021, the first clinical trial of a gene therapy targeting KRAS G12D was recruiting patients, sponsored by the National Cancer Institute. In June 2022, a case report was published about a 71-year-old woman with metastatic pancreatic cancer after extensive treatment (Whipple surgery, radiation and multiple-agent chemotherapy) who received a single infusion of her blood with engineered T cells carrying two genes encoding T cell receptors, directed to both the G12D mutation and an HLA allele (HLA-C*08:02). Her tumor regressed persistently.
However, another similarly treated patient died from the cancer. Interactions: KRAS has been shown to interact with C-Raf, PIK3CG, RALGDS, RASSF2, and calmodulin.
**Interactive film** Interactive film: An interactive film is a video game or other interactive media that has characteristics of a cinematic film. In the video game industry, the term refers to a movie game, a video game that presents its gameplay in a cinematic, scripted manner, often through the use of full-motion video of either animated or live-action footage. In the film industry, the term "interactive film" refers to interactive cinema, a film where one or more viewers can interact with the film and influence the events that unfold in the film. Design: This genre came about with the invention of laserdiscs and laserdisc players, the first nonlinear or random access video play devices. The fact that a laserdisc player could jump to and play any chapter instantaneously (rather than proceed in a linear path from start to finish like videotape) meant that games with branching plotlines could be constructed from out-of-order video chapters, in much the same way as Choose Your Own Adventure books are constructed from out-of-order pages. Design: Thus, interactive movies were animated or filmed with real actors like movies (or in some later cases, rendered with 3D models) and followed a main storyline. Alternative scenes were filmed to be triggered after wrong (or alternate allowable) actions of the player (such as 'Game Over' scenes). Design: A popular example of a commercial interactive movie was the 1983 arcade game Dragon's Lair, featuring an animated full motion video (FMV) by ex-Disney animator Don Bluth, where the player controlled some of the moves of the main character. When in danger, the player had to decide which move, action, or combination to choose. If they chose the wrong move, they would see a 'lose a life' scene, until they found the correct one, which would allow them to see the rest of the story.
There was only one possible successful storyline in Dragon's Lair; the only activity the user had was to choose or guess the move the designers intended them to make. Despite the lack of choice, Dragon's Lair was very popular. Design: The hardware for these games consisted of a laserdisc player linked to a processor configured with interface software that assigned a jump-to-chapter function to each of the controller buttons at each decision point. Much as a Choose Your Own Adventure book might say "If you turn left, go to page 7. If you turn right, go to page 8", the controller for Dragon's Lair or Cliff Hanger was programmed to go to the next chapter in the successful story if a player activated the correct control, or to go to the death chapter if they activated the wrong one. Because laserdisc players of the day were not robust enough to handle the wear and tear of constant arcade use, they required frequent replacement. The laserdiscs that contained the footage were ordinary laserdiscs with nothing special about them save for the order of their chapters and, if removed from the arcade console, would play their video on standard, non-interactive laserdisc players. Design: Later advances in technology allowed interactive movies to overlay multiple fields of FMV, called "vites", in much the same way as polygonal models and sprites are overlaid on top of backgrounds in traditional video game graphics. Origins: The earliest rudimentary examples of mechanical interactive cinematic games date back to the early 20th century, with "cinematic shooting gallery" games in the United Kingdom. They were similar to shooting gallery carnival games, except that players shot at a cinema screen displaying film footage of targets. When a player shot the screen at the right time, it would trigger a mechanism that temporarily paused the film and registered a point.
The first successful example of such a game was Life Targets, released in the UK in 1912. Cinematic shooting gallery games enjoyed short-lived popularity in several parts of Britain during the 1910s, and often had safari animals as targets, with footage recorded from British imperial colonies. Cinematic shooting gallery games declined some time after the 1910s. Capitol Projector's 1954 arcade electro-mechanical game machine Auto Test was a driving test simulation that used a film reel video projector to display pre-recorded driving footage, awarding the player points for making correct decisions as the footage played. It was not intended to be cinematic or a racing game, but was a driving simulation designed for educational purposes. An early example of interactive cinema was the 1967 film Kinoautomat, which was written and directed by Radúz Činčera. This movie was first screened at Expo '67 in Montreal. The film was produced before the invention of the laserdisc or similar technology, so a live moderator appeared on stage at certain points to ask the audience to choose between two scenes; the chosen scene would play following an audience vote. Origins: An early example of an interactive movie game was Nintendo's Wild Gunman, a 1974 electro-mechanical arcade game that used film reel projection to display live-action full-motion video (FMV) footage of Wild West gunslingers. In the 1970s, Kasco (Kansei Seiki Seisakusho) released The Driver, a hit electro-mechanical arcade game with live-action FMV, projecting car footage filmed by Toei. In 1975, Nintendo's EVR Race was a horse race betting arcade game that used Electronic Video Recording (EVR) technology to play back video footage of horse races from videotape. EVR Race was Japan's highest-grossing medal game for three years in a row, from 1976 to 1978.
Another horse race betting game, Electro-Sport's Quarter Horse (1982), was the first arcade game to utilize a laserdisc player, though it was only used to play back pre-recorded non-interactive video footage of horse races, with gameplay limited to the player placing bets before the race. An early attempt to combine random access video with computer games was Rollercoaster, written in BASIC for the Apple II by David Lubar for David H. Ahl, editor of Creative Computing. This was a text adventure that could trigger a laserdisc player to play portions of the 1977 American feature film Rollercoaster. The program was conceived and written in 1981, and it was published in the January 1982 issue of Creative Computing along with an article by Lubar detailing its creation, an article by Ahl claiming that Rollercoaster was the first video/computer game hybrid and proposing a theory of video/computer interactivity, and other articles reviewing hardware necessary to run the game and do further experiments. Specialized hardware formats: LaserDisc games A LaserDisc video game is a video game that uses pre-recorded video (either live-action or animation) played from a LaserDisc, either as the entirety of the graphics or as part of the graphics. The first major arcade laserdisc video game was Sega's Astron Belt, a third-person space combat rail shooter featuring live-action full-motion video footage (largely borrowed from a Japanese science fiction film) over which the player/enemy ships and laser fire are superimposed. Developed in 1982, it was unveiled at the September 1982 Amusement Machine Show (AM Show) in Tokyo and the November 1982 AMOA show in Chicago, and was then released in Japan in March 1983.
However, its release in the United States was delayed due to several hardware and software bugs, by which time other laserdisc games had beaten it to public release there. The next laserdisc game to be announced was Data East's video game adaptation of the Japanese anime film Genma Taisen (1983), introduced in March 1983, with the game, Bega's Battle, released internationally in June 1983. It introduced a new approach to video game storytelling: using brief full-motion video cutscenes to develop a story between the game's shooting stages; years later, this would become the standard approach to video game storytelling. Bega's Battle also featured a branching storyline. In the United States, the game that popularized the genre was Dragon's Lair, animated by Don Bluth and released by Cinematronics. Released in June 1983, it was the first laserdisc game released in the US. It contained animated scenes, much like a cartoon. The scenes would be played back, and at certain points during playback the player would have to press a specific direction on the joystick or the button to advance the game to the next scene, like a quick time event. For instance, a scene begins with the hero, a knight named Dirk, falling through a hole in a drawbridge and being attacked by tentacles. If the player presses the button at this point, Dirk fends off the tentacles with his sword and pulls himself back up out of the hole. If the player fails to press the sword button at the right time, or instead presses a direction on the joystick, Dirk is attacked by the tentacles and crushed. Each unsuccessful move, however, would produce a few moments of black screen, when the LaserDisc switched to the scene showing the death of the character, which interrupted the continuous flow of gameplay found in other video game graphic systems of the time; this was a common criticism from players and critics. Specialized hardware formats: There were generally two styles of laserdisc games that emerged.
Those that followed the lead of Astron Belt integrated pre-recorded laserdisc video with real-time computer graphics and gameplay, making them more like traditional interactive video games. Those that followed the lead of Dragon's Lair integrated animated cartoon laserdisc video with quick time events, making them more like interactive cartoons. The latter style of laserdisc games was generally more successful than the former. Specialized hardware formats: Real-time gameplay Among those that followed the lead of Astron Belt, combining pre-recorded video with real-time computer graphics and gameplay, several were introduced at Tokyo's AM Show in September 1983, with Astron Belt's successor Star Blazer unanimously hailed as the "strongest" laserdisc game of the show. Other games at the show included Funai's Interstellar, a forward-scrolling third-person rail shooter that used pre-rendered 3D computer graphics for the laserdisc video backgrounds and real-time 2D computer graphics for the ships. Cube Quest, introduced at the same AM Show in Tokyo, was a vertical scrolling shooter that used pre-rendered computer animation for the laserdisc video backgrounds and real-time 3D computer graphics for the ships. Later that year, Gottlieb's M.A.C.H. 3 was a vertical scrolling shooter game that combined live-action laserdisc video backgrounds with 2D computer graphics for the ships. Specialized hardware formats: The Firefox (1984) arcade game included a Philips LaserDisc player to combine live action video and sound from the Firefox film with computer generated graphics and sound. The game used a special CAV LaserDisc containing multiple storylines stored in very short, interleaved segments on the disc. The laserdisc player would seek the short distance to the next segment of a storyline during the vertical retrace interval by adjusting the tracking mirror, allowing perfectly continuous video even as the machine switched storylines under control of the game's computer.
This method of seeking was noted for being extremely strenuous on the laserdisc player and frequently led to the machines breaking, slightly hindering the appeal of LaserDisc arcade games. Specialized hardware formats: In the 1990s, American Laser Games produced a wide variety of live-action light gun LaserDisc video games, which played much like the early LaserDisc games, but used a light gun instead of a joystick to affect the action. Specialized hardware formats: Quick-time events Among those that followed the lead of Dragon's Lair, advancing pre-recorded video with quick time events, was its successor Space Ace, another Don Bluth animated game released by Cinematronics later the same year. It featured "branching paths" in which there were multiple "correct moves" at certain points in the animation, and the move the player chose would affect the order of later scenes. The success of Dragon's Lair spawned a number of sequels and similar laserdisc cartoon games incorporating quick time events. However, original animation production was expensive. To cut costs, several companies simply hacked together scenes from Japanese anime that were obscure to American audiences of the day. One such example was Stern's Cliff Hanger (1983), which used footage from the Lupin III movies Castle of Cagliostro (directed by Hayao Miyazaki) and Mystery of Mamo, both originally animated by TMS Entertainment. Anime-based laserdisc games helped expose many Americans in the 1980s to Japanese anime, particularly Cliff Hanger, which introduced many Americans to Lupin III and Hayao Miyazaki before any Lupin or Miyazaki anime productions had officially been released theatrically or on home video in the United States. In 1984, Super Don Quix-ote, Esh's Aurunmilla and Ninja Hayate overlaid crude computer graphics on top of the animation to indicate the correct input to the player for quick time event scenes, which the 1985 games Time Gal and Road Blaster also featured.
Time Gal also added a time-stopping feature, where specific moments in the game involve Reika stopping time; during these moments, players are presented with a list of three options and have seven seconds to choose the one which will save the character. Another example of an arcade LaserDisc game using a similar style would be Badlands. Specialized hardware formats: Decline After laserdisc arcade fever had peaked in 1983, the laserdisc arcade market declined in 1984. While there were some laserdisc arcade hits that year, such as Space Ace and Cobra Command, they were not able to achieve the same level of mainstream success as the laserdisc games of the previous year. Following the end of the golden age of arcade video games, there were high expectations for laserdisc games to revive the arcade industry, but laserdisc games failed to live up to those expectations. Instead, the arcade market was being reinvigorated by sports video games such as Karate Champ, Track & Field, Punch-Out and several Nintendo VS. System titles. Specialized hardware formats: VHS and CD-ROM In 1987, the game Night Trap, featuring full-motion video, was created for Hasbro's Control-Vision video game system (originally codenamed "NEMO"), which used VHS tapes. When Hasbro discontinued production of Control-Vision, the footage was placed into archive until it was purchased in 1991 by the founders of Digital Pictures. Digital Pictures ported Night Trap to the Sega CD platform, releasing it in 1992. Specialized hardware formats: In 1988, Epyx announced three VCR games including one based on its video game California Games. They combined videotape footage with a board game. Meanwhile, Digital Pictures started to produce a variety of interactive movies for home consoles.
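The branch-point logic described above for Dragon's Lair, Cliff Hanger, and Time Gal amounts to a small jump table consulted at each decision point: one correct input advances to the next success chapter, anything else seeks to a death chapter, and Time Gal-style stops add a short option list with a time limit. The sketch below illustrates that logic in Python; all chapter names, inputs, and point identifiers are hypothetical illustrations, not data from any actual game.

```python
# Hypothetical sketch of laserdisc decision-point logic. Each decision
# point maps the single correct input to a "success" chapter; any other
# input jumps to that point's "death" chapter, as described for
# Dragon's Lair and Cliff Hanger. All names here are illustrative.

DECISION_POINTS = {
    # point id: (correct input, success chapter, death chapter)
    "drawbridge": ("button", "ch_escape_tentacles", "ch_death_crushed"),
    "corridor":   ("left",   "ch_dodge_blade",      "ch_death_sliced"),
}

def next_chapter(point_id, player_input):
    """Return the laserdisc chapter the controller jumps to."""
    correct, success, death = DECISION_POINTS[point_id]
    return success if player_input == correct else death

def timed_choice(options, choice_index, elapsed_seconds, limit=7.0):
    """Time Gal-style stop: pick one of the options within the limit,
    or fall through to a timeout death chapter."""
    if choice_index is None or elapsed_seconds > limit:
        return "ch_death_timeout"
    return options[choice_index]
```

On the original hardware the "jump" was a seek command sent to the laserdisc player, which is why a wrong input produced the brief black screen mentioned above while the disc sought the death chapter.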
Specialized hardware formats: When CD-ROM drives arrived in home consoles such as the Sega CD as well as home computers, games with live action and full motion video featuring actors were considered cutting-edge, and some interactive movies were made. Some notable adventure games from this era are Under a Killing Moon, The Pandora Directive (both part of the Tex Murphy series), The Beast Within: A Gabriel Knight Mystery, Voyeur, Star Trek: Klingon, Star Trek: Borg, Ripper, Snatcher, Black Dahlia, The X-Files Game, Phantasmagoria, Bad Day on the Midway and The Dark Eye. Others in the action genre are Brain Dead 13 and Star Wars: Rebel Assault. Specialized hardware formats: Due to the limitations of memory and disk space, as well as the lengthy timeframes and high costs of production, not many variations and alternative scenes for possible player moves were filmed, so the games tended not to allow much freedom and variety of gameplay. Thus, interactive movie games were not usually very replayable after being completed once. Specialized hardware formats: DVD games A DVD game (sometimes called DVDi, "DVD interactive") is a standalone game that can be played on a set-top DVD player. The game takes advantage of technology built into the DVD format to create an interactive gaming environment compatible with most DVD players without requiring additional hardware. DVD TV games were first developed in the late 1990s. They were poorly received and poorly understood as an entertainment medium. However, DVD-based game consoles like the PlayStation 2 popularized DVD-based gaming and also functioned as DVD video players. In addition, the format has been used to port some video games to the DVD format, allowing them to be played with a standard DVD player rather than requiring a PC. Examples include Dragon's Lair and Who Shot Johnny Rock?.
The PC/console game Tomb Raider: The Angel of Darkness was released in 2006 as a DVD game entitled Lara Croft Tomb Raider: The Action Adventure. Specialized hardware formats: Japanese games such as visual novels and eroge that were originally made for PC are commonly ported to DVDPG (a term that stands for DVD Players Game). Instead of standard save methods, DVDPGs use password save systems. Similar game types include BDPG (Blu-ray Disc Players Game) and UMDPG (Universal Media Disc Players Game). From the time of its original introduction, the DVD format specification has included the ability to use an ordinary DVD player to play interactive games, such as Dragon's Lair (which was reissued on DVD), the Scene It? series and other DVD games, or games that are included as bonus material on movie DVDs. Aftermath Media (founded by Rob Landeros of Trilobyte) released the interactive movies Tender Loving Care and Point of View (P.O.V) for the DVD platform. Such games have appeared on DVDs aimed at younger target audiences, such as the special features discs of the Harry Potter film series. Specialized hardware formats: Live interactive movies The world's first live interactive movie was My One Demand, filmed and premiered on 25 June 2015. Created by Blast Theory, the film was streamed live to the TIFF Lightbox on three successive nights. The cast of eight included Julian Richings and Clare Coulter. Audiences in the cinema used mobile phones to answer questions from the narrator, played by Maggie Huculak, and their answers were included in the voiceover as well as in the closing credits. Modern developments: Later video games used this approach with fully animated computer-generated scenes, including various adventure games such as the Sound Novel series by Chunsoft, the Shenmue series by Sega, Shadow of Memories by Konami, Time Travelers by Level-5, and Fahrenheit by Quantic Dream.
During many scenes, the player has limited control of the character and chooses certain actions to progress the story. Other scenes are quick time event action sequences, requiring the player to hit appropriate buttons at the right time to succeed. Some of these games, such as the Sound Novel series, Shadow of Memories, Time Travelers, Until Dawn, Heavy Rain, Beyond: Two Souls and Detroit: Become Human, have numerous branching storylines that result from what actions the player takes or fails to complete properly, which can include the death of major characters or failure to solve the mystery. Cast members' work during the 1990s on interactive movies' chroma key sets was different from traditional filmmaking: they performed the multiple possible actions a player could choose in a game, usually looked into the camera to react to the player, and usually did not react to others on the set. Such products were popular during the early 1990s as CD-ROMs and Laserdiscs made their way into living rooms, providing an alternative to the low-capacity cartridges of most consoles. As the first CD-based consoles capable of displaying smooth, textured 3D graphics appeared, the full-FMV game vanished from mainstream circles around 1995, although it remained an option for PC adventure games for a couple more years. One of the last titles released was the 1998 PC and PlayStation adventure The X-Files: The Game, spread across 7 CDs. That same year, Tex Murphy: Overseer became the first game developed specifically for DVD-ROM and one of the last "interactive movies" to make heavy use of live-action FMV. In 2014, the Tex Murphy series continued with a new FMV game, Tesla Effect: A Tex Murphy Adventure. Modern developments: With advances in computer technology, interactive films waned as more developers used fully digitized characters and scenes. This format was popularized by Telltale Games, achieving success in The Walking Dead series of adventure games.
These have sometimes been called interactive movies, as while the player can make choices that affect the game's overall narrative, they do not have direct control over characters, making the experience comparable to watching a sequence of cut scenes. This idea was further realized in Telltale's The Walking Dead series, where player actions can drastically change future games: different characters may be alive at the end depending on choices made by the player in the first season, and those choices carry over to affect The Walking Dead: Season Two. Other examples of episodic adventure games include Telltale's The Wolf Among Us series and the Life Is Strange series, created by Dontnod Entertainment. Modern developments: David Cage: video games referred to as interactive films At its release, Heavy Rain (a 2010 video game by Quantic Dream) received very positive reviews and won several gaming, film, and television awards. What is most striking, however, is the unanimity of critics in defining it as an interactive film more than a video game. This definition is certainly inspired by the phenomenon, typical of the 1990s, of films available on home video or computer that presented the viewer with a series of pre-recorded sequences, at the end of which it was possible to make choices that directly influenced the direction of the story. Cage himself defined Heavy Rain as an interactive film, and, in fact, the goal of the video game coincides with that of the films just mentioned: to combine the interactive potential of the video game with the expressive richness of cinema. However, unlike its predecessors, Cage chose not to work with live action, but to use only synthetic images, avoiding, at least in part, the effect of estrangement typical of interactive films in the passage from moments of exploration to sequences of narrative exposition.
From interactive films on DVD, Cage borrows two different elements for his video games: the use of quick time events (QTEs) and the freedom of choice left to the player to determine the development of the plot. In the gameplay of Heavy Rain, however, QTEs are not used solely for the purpose of succeeding at certain actions but also as a vehicle for the countless narrative choices placed before the player. In the first case, the player tests their reflexes by pressing the keys that appear on the screen; in the second case, up to four different keys can appear, each of which represents a choice that will affect the narrative of the video game. Non-interactive phases are difficult to distinguish from interactive ones, as what appears to be a simple cutscene can often hide several QTEs. Regarding identification with the main characters, Heavy Rain removes every element of challenge typical of graphic adventures to ensure that the player can be fully focused on the story. In Heavy Rain there is also no game over: depending on the player's actions and choices, the video game shifts to different storylines, culminating in one of the many endings planned for the story. Identification with the characters comes not only from the type of actions the player is asked to perform but also from how, at the game design level, the player is required to complete QTEs that aim to make them feel the physical effort of the avatar. In an interview, director Cage stated that the game was designed to focus on physical immersion by letting the player control the animation of the character with the right analog stick. The idea behind this is to put the player further into the same physical space as the character.
Although these mechanics were undoubtedly innovative, interaction remains a very small part of the experience offered by David Cage's titles; the relationship between gameplay and cutscenes in Cage's works is blurred by what could be described as the insertion of the former into the latter, creating interactive cutscenes. Another example comes from Quantum Break, published by Remedy in 2016. Between the game's acts, episodes from a TV show filmed in live action are displayed to the player: the scenes in these episodes change according to the decisions the player has made and the objects they have interacted with. The characters' appearances are maintained between the live-action sequences and the 3D computer-generated ones, thanks to the use of motion capture. Modern developments: Interactive films in the internet era With the advent of YouTube annotations in 2008, a series of five Interactive Adventures were created by Chad, Matt & Rob that utilized the annotations to tell interactive stories that allowed the user to guide the narrative. The series included The Time Machine, The Murder, The Birthday Party, The Teleporter, and The Treasure Hunt. Annotations were removed from YouTube in 2019, leaving many of these videos no longer interactive. In the 2010s, streaming services like Netflix started to grow in popularity and sophistication. By 2016, Netflix had started experimenting with interactive works aimed at children, including an animated version of Puss in Boots and an adaptation of Telltale's Minecraft: Story Mode. Netflix's first major interactive film with live-action scenes was Black Mirror: Bandersnatch, a film in the Black Mirror anthology series released in December 2018.
Netflix worked with Black Mirror's creator Charlie Brooker to develop a narrative that took advantage of the interactive format, while developing their own tools to improve caching of scenes and management of the film's progression for use on future projects. In 2022, Netflix released another interactive short, Cat Burglar, an interactive trivia cartoon in which the viewer plays a cat burglar named Rowdy, who tries to steal a valuable artwork from a museum protected by a security guard dog named Peanut and must answer questions correctly in order to progress through the story. Reception: Although interactive movies had a filmic quality that sprite-based games could not duplicate at the time, they were a niche market; the limited amount of direct interactivity put off many gamers. The popularity of FMV games declined during 1995, as real-time 3D graphics gained increasing attention. The negative response to FMV-based games was so common that it was even acknowledged in game marketing; a print advertisement for the interactive movie Psychic Detective stated, "Yeah, we know full-motion video games in the past sucked." Cost was also an issue, as live-action video with decent production values is expensive to film, while video shot on a low budget damages the overall image of the game. Ground Zero: Texas cost Sega around US$3 million, about the same as a low-budget movie would cost in 1994. Reception: Though not as crucial an issue as the limited interactivity, another issue that drew criticism was the quality of the video itself. While the video was often relatively smooth, it was not actually full motion, as it did not run at 24 frames per second or higher. In addition, the hardware it was displayed on, particularly in the case of the Sega CD, had a limited color palette (of which a maximum of 64 colors were displayable simultaneously), resulting in notably inferior image quality due to the requirement of dithering.
Game designer Chris Crawford disparages the concept of interactive movies, except those aimed at elementary-school-age children, in his book Chris Crawford on Game Design. He writes that since the player must process what is known and explore the options, choosing a path at a branch-point is every bit as demanding as making a decision in a conventional game, but with much less reward since the result can only be one of a small number of branches. Reception: Defenders of the genre have argued that, by allowing the player to interact with real people rather than animated characters, interactive full-motion video can produce emotional and visceral reactions that are not possible with either movies or traditional video games. Other uses: Some studios hybridized ordinary computer game play with interactive movie play; the earliest examples of this were the entries in the Origin Systems Wing Commander series starting with Wing Commander III: Heart of the Tiger. Between combat missions, Wing Commander III featured cutscenes with live actors; the game offered limited storyline branching based on whether missions were won or lost and on choices made at decision points during the cutscenes (Wing Commander IV: The Price of Freedom, with some of the same actors, was similar). Other uses: Other games like BioForge would, perhaps erroneously, use the term for a game that has rich action and plot of cinematic proportions—but, in terms of gameplay, has no relation to FMV movies. The term is an ambiguous one since many video games follow a storyline similar to the way movies would.
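As an aside on the DVDPG password save systems mentioned earlier: because a set-top DVD player has no writable storage, the player's progress must be encoded into a short password that is written down and re-entered later. The sketch below shows one plausible way such a scheme could work, packing a scene index and a few story flags into an integer rendered in a small alphabet; the alphabet, field layout, and field sizes are illustrative assumptions, not taken from any real DVDPG title.

```python
# Hypothetical password save scheme for a set-top disc player game:
# pack a scene index (0-1023) and up to 6 boolean story flags into one
# integer, then render it in a 31-character alphabet that avoids
# look-alike characters. Purely an illustrative assumption.

ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"  # no I, L, O, 0, 1

def encode_password(scene, flags):
    """Pack scene and flags into a short base-31 password string."""
    value = scene  # low 10 bits hold the scene index
    for i, flag in enumerate(flags):
        if flag:
            value |= 1 << (10 + i)  # one bit per story flag
    digits = []
    while value:
        value, rem = divmod(value, len(ALPHABET))
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits)) or ALPHABET[0]

def decode_password(password):
    """Recover (scene, flags) from a password produced above."""
    value = 0
    for ch in password:
        value = value * len(ALPHABET) + ALPHABET.index(ch)
    scene = value & 0x3FF
    flags = [bool(value & (1 << (10 + i))) for i in range(6)]
    return scene, flags
```

A real DVD title would implement the equivalent lookup with the format's menu and register commands rather than general-purpose code, but the round trip idea is the same: the password is the entire save file.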
**Cleavage (geology)** Cleavage (geology): Cleavage, in structural geology and petrology, describes a type of planar rock feature that develops as a result of deformation and metamorphism. The degree of deformation and metamorphism along with rock type determines the kind of cleavage feature that develops. Generally, these structures are formed in fine grained rocks composed of minerals affected by pressure solution. Cleavage is a type of rock foliation, a fabric element that describes the way planar features develop in a rock. Foliation is separated into two groups: primary and secondary. Primary deals with igneous and sedimentary rocks, while secondary deals with rocks that undergo metamorphism as a result of deformation. Cleavage is a type of secondary foliation associated with fine grained rocks. For coarser grained rocks, schistosity is used to describe secondary foliation. Cleavage (geology): There are a variety of definitions for cleavage, which may cause confusion and debate. The terminology used in this article is based largely on Passchier and Trouw (2005). They state that cleavage is a type of secondary foliation in fine grained rocks characterized by planar fabric elements that form in a preferred orientation. Some authors choose to use cleavage when describing any form of secondary foliation. Types of cleavage: The presence of fabric elements such as preferred orientation of platy or elongate minerals, compositional layering, grain size variations, etc. determines what type of cleavage forms. Cleavage is categorized as either continuous or spaced. Types of cleavage: Continuous cleavage Continuous or penetrative cleavage describes fine grained rocks consisting of platy minerals evenly distributed in a preferred orientation. The type of continuous cleavage that forms depends on the minerals present.
Undeformed platy minerals such as micas and amphiboles align in a preferred orientation, and minerals such as quartz or calcite deform into a grain-shape preferred orientation. Continuous cleavage is scale dependent, so a rock with a continuous cleavage on a microscopic level could show signs of spaced cleavage when observed on a macroscopic level. Types of cleavage: Slaty cleavage Since the nature of cleavage is dependent on scale, slaty cleavage is defined as having 0.01 mm or less of space between layers. Slaty cleavage often occurs after diagenesis and is the first cleavage feature to form after deformation begins. The tectonic strain must be sufficient to allow a new strong foliation, i.e. slaty cleavage, to form. Types of cleavage: Spaced cleavage Spaced cleavage occurs in rocks with minerals that are not evenly distributed, and as a result the rock forms discontinuous layers or lenses of different types of minerals. Spaced cleavage contains two types of domains: cleavage domains and microlithons. Cleavage domains are planar boundaries subparallel to the trend of the domain, and microlithons are bounded by the cleavage domains. Spaced cleavages can be categorized based on whether the grains inside the microlithons are randomly oriented or contain microfolds from a previous foliation fabric. Other descriptions for spaced cleavages include the spacing size, the shape and percentage of cleavage domains, and the transition between cleavage domains and microlithons. Types of cleavage: Crenulation cleavage Crenulation cleavage contains microlithons that were warped by a previous foliation. Folding occurs when there are multiple phases of deformation; the later phase causes symmetric or asymmetric microfolds that deform previous foliations. The type of crenulation cleavage pattern that forms depends on lithology and the degree of deformation and metamorphism.
Types of cleavage: Disjunctive cleavage Disjunctive cleavage describes a type of spaced cleavage where the microlithons are not deformed into microfolds, and formation is independent of any previous foliation present in the rock. A common outdated term for disjunctive cleavage is "fracture cleavage"; it is recommended that this term be avoided because of the tendency to misinterpret the formation of a cleavage feature. Types of cleavage: Transposition cleavage Transposition cleavage forms when an older cleavage foliation is erased and replaced by a younger foliation due to stronger deformation; it is evidence for multiple deformation events. Formation: The development of cleavage foliation involves a combination of various mechanisms dependent on the rock's composition, tectonic processes, and metamorphic conditions. The magnitude and orientation of stress, coupled with pressure and temperature conditions, determine how a mineral is deformed. Cleavages form approximately parallel to the X-Y plane of tectonic strain and are categorized based on the type of strain. The mechanisms currently believed to control cleavage formation are rotation of mineral grains, solution transfer, dynamic recrystallization, and static recrystallization. Formation: Mechanical rotation of grains During ductile deformation, mineral grains with a high aspect ratio are likely to rotate so that their mean orientation is in the same direction as the XY plane of finite strain. Mineral grains may fold if oriented perpendicular to the shortening direction. Formation: Solution transfer Cleavage foliations may result from stress-induced solution transfer: the redistribution of inequant mineral grains by pressure solution and recrystallization. This also helps to increase rotation of elongate and tabular mineral grains. Mica grains undergoing solution transfer will align in a preferred orientation.
If the mineral grains affected by pressure solution are deformed through crystal-plastic processes, the grains will be extended along the XY-plane of finite strain. This process shapes grains into a preferred orientation. Formation: Dynamic recrystallization Dynamic recrystallization occurs when a rock undergoes metamorphic conditions and re-equilibration of a mineral's chemical composition. This happens when there is a decrease in the free energy stored in deformed grains. Deformed micas can store a sufficient amount of strain energy to allow recrystallization to occur. This process allows oriented regrowth of both old and new minerals into the damaged crystal lattice during cleavage development. Formation: Static recrystallization This process occurs either after deformation or in the absence of dynamic deformation. Depending on the intensity of heat during recrystallization, the foliation will either be strengthened or weakened. If the heat is too intense, the foliation will be weakened by the nucleation and growth of new, randomly oriented crystals, and the rock will become a hornfels. If minimal heat is applied to a rock with a preexisting foliation and without a change in mineral assemblage, the cleavage will be strengthened by growth of micas parallel to the foliation. Relationship to folds: Cleavages display a measurable geometric relationship with the axial plane of folds developed during deformation and are referred to as axial planar foliations. The foliations are symmetrically arranged with respect to the axial plane, depending on the composition and competency of the rock. For example, when mixed sandstone and mudstone sequences are folded during very-low- to low-grade metamorphism, cleavage forms parallel to the fold axial plane, particularly in the clay-rich parts of the sequence. In folded alternations of sandstone and mudstone the cleavage has a fan-like arrangement, divergent in the mudstone layers and convergent in the sandstones.
This is thought to be because the folding is controlled by buckling of the stronger sandstone beds with the weaker mudstones deforming to fill the intervening gaps. The result is a feature referred to as foliation fanning. Engineering considerations: In geotechnical engineering a cleavage plane forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of rock masses in, for example, tunnel, foundation, or slope construction.
**Niobium diselenide** Niobium diselenide: Niobium diselenide or niobium(IV) selenide is a layered transition metal dichalcogenide with formula NbSe2. Niobium diselenide is a lubricant, and a superconductor at temperatures below 7.2 K that exhibits a charge density wave (CDW). NbSe2 crystallizes in several related forms, and can be mechanically exfoliated into monatomic layers, similar to other transition metal dichalcogenide monolayers. Monolayer NbSe2 exhibits very different properties from the bulk material, such as Ising superconductivity, a quantum metallic state, and strong enhancement of the CDW. Synthesis: Niobium diselenide crystals and thin films can be grown by chemical vapor deposition (CVD). Niobium oxide, selenium, and NaCl powders are heated to different temperatures in the range 300–800 °C at ambient pressure in a furnace that allows maintaining a temperature gradient along its axis. The powders are placed in different locations in the furnace, and a mixture of argon and hydrogen is used as the carrier gas. The NbSe2 thickness can be accurately controlled by varying the temperature of the selenium powder. NbSe2 monolayers can also be exfoliated from the bulk or deposited by molecular beam epitaxy. Structure: Niobium diselenide exists in several forms, including 1H, 2H, 4H and 3R, where H stands for hexagonal, R for rhombohedral, and the numbers 1, 2, etc., refer to the number of Se-Nb-Se layers in a unit cell. The Se-Nb-Se layers are bonded together by relatively weak van der Waals forces and can be exfoliated into 1H monolayers. They can be offset in a variety of ways to make different crystal structures, the most stable being 2H. Properties: Superconductor NbSe2 is a superconductor with a critical temperature TC = 7.2 K. The critical temperature drops when the NbSe2 layers are intercalated by other atoms, or when the sample thickness decreases, with TC being ~1 K in a monolayer. Recent studies show infrared photodetection in NbSe2 devices.
Charge density wave Along with the CDW, the lattice develops a periodic distortion around 26 K. Its period is three times that of the crystal lattice, so that there is a 3-by-3 superlattice. There is also a Cooper-pair density wave that is correlated with the charge-density wave but out of phase by 2π⁄3. Friction NbSe2 sheets develop higher friction when very thin. Intercalation: Because the layers in NbSe2 are only weakly bonded together, different substances can penetrate between the layers to form well defined intercalation compounds. Compounds with helium, rubidium, transition metals, and post-transition metals have been made. Extra niobium atoms, up to one third extra, can be added between the layers. Metal atoms from the first transition series can also intercalate between the layers, up to a 1:3 ratio. Intercalating two atoms of helium per formula unit increases the layer separation to 2.9 and the Se-Se distance to 3.52. Intercalation: Rubidium When rubidium is intercalated, the NbSe2 layers separate to accommodate it. Each individual layer is also compressed slightly. The Nb-Se distance stays the same, but the Nb-Nb distance in the layer increases. The Se-Se distance on top and bottom of the layer decreases, and the Nb-Se-Nb angle increases. Extra electron density transfers from the Rb atoms to the niobium layer. Intercalation: Vanadium Vanadium can enter the 2H NbSe2 structure to the limit of 1% by substituting for Nb. Between 11% and 20% it forms a 4Hb structure with V in octahedral coordination between layers. Over 30% it forms a 1T structure. The Fermi energy is shifted into the d band. Iron When doped with iron at levels greater than 8%, NbSe2 can undergo a spin-glass transition at low temperatures. Hydrogen Hydrogen can be intercalated into NbSe2 under high pressure and high temperature. Up to 0.9 atoms of hydrogen per formula unit can be included while retaining the same structure. Over this ratio the structure changes to that of MoS2.
At this transition the crystallographic c-axis increases and the paramagnetic susceptibility drops to zero. The hydrogen content can reach a 5.2 molar ratio at 50.5 atmospheres. Magnesium When magnesium is intercalated, the electron s-states do not overlap with the selenium, and it has only a small effect in reducing the superconducting critical temperature. Potential applications: Bemol Incorporated manufactured niobium diselenide in the United States for use as a conducting lubricant in vacuum, as it has a wide temperature stability range, very low outgassing, and lower resistance than graphite. NbSe2 was used in motor brushes, or embedded in silver to make a self-lubricating surface.
**Godzilla (2014 video game)** Godzilla (2014 video game): Godzilla is a 2014 video game developed by Natsume Atari and published by Bandai Namco Games for the PlayStation 3 and PlayStation 4 based on the Japanese monster Godzilla franchise by Toho. It was first released on December 18, 2014, in Japan only for the PlayStation 3. It was released on July 14, 2015, in North America and on July 17, 2015, in Europe. The Western PlayStation 4 version is based on the upgraded Japanese release called Godzilla VS, released on July 14, 2015, containing more content such as additional monsters. Gameplay: God of Destruction mode Destruction mode consists of the player controlling Godzilla as he attacks certain stages (10 areas manually selected at the end of each round, from 25 levels), and is similar to Godzilla Generations. In order to clear the stage, the player must destroy all of the G-Energy Generators in the map, while also being attacked by G-Force and occasionally a boss (bosses must also be defeated to complete an area). Some of these stages are timed and the player must destroy all of the Generators before the timer expires. As Godzilla destroys objects such as buildings, G-Energy Generators, and military vehicles, he will increase in size. Godzilla begins the campaign at 50 meters in height, and can reach an almost limitless size. Bosses that Godzilla faces will be leveled at the appropriate height based on Godzilla's current size, although sometimes they are larger on the harder levels. But if Godzilla dies and the level is retried, the boss becomes smaller in size. In order to complete Destruction Mode and reach the game's true final boss, the player must exceed 100 meters in height by the last stage. After the credits roll the player will begin the final stage as Burning Godzilla and be attacked by the Super X3 and several DAG-MB96 Maser Cannons (also, enemies firing freeze missiles). 
After this, the Legendary Godzilla will appear as the game's true final boss and must be defeated before the timer expires and Godzilla reaches meltdown. After Legendary Godzilla is defeated, the game's final cutscene will be triggered. If playing through God of Destruction mode as the Legendary Godzilla, the incarnations switch places and Burning Godzilla is the final boss. In the PS4 edition, all the characters are playable, so players can choose any monster of their liking. Gameplay: King of the Monsters mode "King of the Monsters" is a game mode where the player plays through six stages, each with a different monster to fight. The monsters increase in strength the further the player progresses. The weaker monsters will appear in the first two waves (such as Mothra and Jet Jaguar), slightly harder monsters in the third and fourth, and the most powerful monsters in the final two stages (such as King Ghidorah, Gigan and Kiryu). The Heisei Godzilla, Burning Godzilla, the Hollywood (Legendary) Godzilla, and other kaiju are all playable in this mode. Gameplay: Evolution mode As the player defeats certain enemies and destroys certain structures in Destruction Mode, new abilities will be unlocked and can be applied to Godzilla in this mode. Godzilla can receive new attacks, including the "victory dance" from Invasion of Astro-Monster, as well as atomic breath upgrades, including the ability to use atomic breath to fly as in Godzilla vs. Hedorah, as well as fire Minilla's smoke rings or use a white misty atomic breath based on that used by the original Godzilla in 1954. Gameplay: Diorama mode Throughout the game, the player will unlock monster models and other objects that can be placed in an environment and viewed from various angles and used to take screen-shots, allowing the player to recreate battles from the films or the game, or to create fantasy battles. 
Kaiju Field Guide The player will also unlock biographies, which appear here, for various monsters from the Godzilla series beyond just those featured in the game. These bios include pictures of the monsters from films they appear in, as well as information about the monsters' attributes and film appearances. Online multiplayer mode Exclusive to the PS4 version, the game features an online multiplayer mode in which two to three players can battle one another with a selection of kaiju also exclusive to the PS4. Characters: Playable kaiju Godzilla (1989, 1964, 1995, and 2014) Anguirus (Showa) Rodan (Showa) Mothra (Heisei) King Ghidorah (Heisei) Hedorah (Showa) Mechagodzilla (1974 and 1975) Biollante Battra (Larva and Imago) SpaceGodzilla Destoroyah Gigan (Modified 2004) Super Mechagodzilla Mecha-King Ghidorah Type-3 Kiryu Jet Jaguar Development: Godzilla was revealed in Japan on June 26, 2014, with a trailer uploaded to YouTube by Bandai Namco Japan. In mid-to-late August, Japan's Famitsu magazine revealed the game would be released on December 18, 2014, in Japan, and on August 29, 2014, Bandai Namco released a second trailer for the game. On November 18, 2014, the third trailer was released by Bandai Namco. On December 5, the game's Japanese demo was released to the Japanese PlayStation Network. On December 5, 2014, the English release of the game was revealed at The Game Awards for PlayStation 3 and PlayStation 4, and was scheduled to be released on July 14, 2015, in North America and July 17, 2015, in Europe. All kaiju who previously appeared in the PlayStation 3 release are included, with the addition of SpaceGodzilla, Mecha-King Ghidorah, Rodan, Anguirus, Mechagodzilla 1974, Godzilla 1964 and Battra (Larva and Imago) as PlayStation 4 exclusives. The game was delisted from PlayStation Store in late 2017, presumably in December.
Reception: Upon release in the West, Godzilla was met with a negative response, with an average critic score of 38 out of 100 on Metacritic; many reviewers criticized its outdated graphical presentation and level design, while also noting awkward movement controls and repetition. Jim Sterling stated it "has the look and feel of a small budget game" rather than a "major 'AAA' release", while Jordan Devore, reviewing for Destructoid, called it "a letdown" given the premise. Several online personalities also noted the lack of local co-op gameplay, the most widely cited complaint being that a one-on-one fighting game should incorporate the most basic function of the genre, with the online mode instead being a preferred extra feature. However, some critics did note the faithful recreation of the monsters themselves and the amount of content for long-time Godzilla fans, with Jon Ryan, reviewing for IGN, noting that while the overall game had a "lack of substantial gameplay", "the spirit of the old-school monster movie is where Bandai Namco absolutely nails it."
**Steak au poivre** Steak au poivre: Steak au poivre (French pronunciation: [stɛk‿o pwavʁ], Quebec French pronunciation: [stei̯k‿o pwɑːvʁ]) or pepper steak is a French dish that consists of a steak, traditionally a filet mignon, coated with coarsely cracked peppercorns. The peppercorns form a crust on the steak when cooked and provide a pungent counterpoint to the beef. Steak au poivre may be found in traditional French restaurants in most urban areas. Preparation: The peppercorn crust is made by placing the steak in a bed of cracked black (or mixed) peppercorns. Typically, the steak is seared in a hot skillet with a small amount of butter and oil. The steak is seared at a high temperature to cook the outside quickly and form the crust while leaving the interior rare to medium rare. The steak is left to rest for several minutes before serving. Steak au poivre is often served with a pan peppercorn sauce consisting of reduced cognac, heavy cream, and the fond from the bottom of the pan, often including other ingredients such as butter, shallots, and/or Dijon mustard. Common side dishes to steak au poivre are various forms of mashed potatoes and pommes frites (small fried shoestring potatoes).
**Segmental arch** Segmental arch: A segmental arch is a type of arch with a circular arc of less than 180 degrees. It is sometimes also called a scheme arch. The segmental arch is one of the strongest arches because it is able to resist thrust. To prevent failure, a segmental arch must have a rise equal to at least one-eighth the width of the span. Segmental arches with a rise less than one-eighth of the span width must have a permanent support or frame beneath the arch to prevent failure. As far as is known, the ancient Romans were the first to develop the segmental arch. The closed-spandrel Pont-Saint-Martin bridge in the Aosta Valley in Italy dates to 25 BC. The first open-spandrel segmental arch bridge is the Anji Bridge over the Xiao River in Hebei Province in China, which was built in 610 AD. In the 20th century, segmental arches were most commonly used in residential construction over doorways, fireplaces, and windows.
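The one-eighth rule of thumb above can be expressed as a quick check. This is only an illustrative sketch (the function name is hypothetical), not a substitute for engineering guidance:

```python
def needs_support(span: float, rise: float) -> bool:
    """Return True if a segmental arch needs a permanent support
    beneath it, per the rule of thumb that the rise must be at
    least one-eighth of the span width (same units for both)."""
    return rise < span / 8

# A 2.4 m opening with a 0.2 m rise: 0.2 < 0.3, so support is needed.
print(needs_support(2.4, 0.2))   # True
print(needs_support(2.4, 0.35))  # False
```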
**RTV silicone** RTV silicone: RTV silicone (room-temperature-vulcanizing silicone) is a type of silicone rubber that cures at room temperature. It is available as a one-component product, or mixed from two components (a base and a curative). Manufacturers provide it in a range of hardnesses from very soft to medium, usually from 15 to 40 Shore A. RTV silicones can be cured with a catalyst consisting of either platinum or a tin compound such as dibutyltin dilaurate. Applications include low-temperature over-molding, making molds for reproducing, and lens applications for some optically clear grades. It is also used widely in the automotive industry as an adhesive/sealant, for example to create gaskets in place. Chemistry: RTV silicones are made from a mixture of silicone polymers, fillers, and organoreactive silane catalysts. Silicones are built on the Si-O bond, but can have a wide variety of side chains. The silicone polymers are often made by reacting dimethyl dichlorosilane with water. Acetoxy-based cure systems, which release acetic acid, provide a fast cure time, while oxide and nitride fillers can provide better thermal conductivity. Tack-free times are typically on the order of minutes, with cure times on the order of hours. One-component silicone One-part silicones make use of moisture in the atmosphere to cure from the outside towards the center. The time to cure decreases with an increase in temperature, humidity, and surface-area-to-volume ratio. Two-component silicone Two-part silicones use moisture in the second component as well as a cross-linker such as an active alkoxy to cure the silicone in a process called condensation curing. Two-part silicones can also be platinum-catalyzed in an "addition" reaction. Other reactive species that facilitate cross-linking include acetoxy, amine, octoate, and ketoxime. Applications: To produce the material, a user mixes silicone rubber with the curing agent or vulcanizing agent. Usually, the mixing ratio is a few percent.
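As a rough sketch of the weighing arithmetic involved in mixing, assuming an example catalyst percentage (the 10% figure below is purely illustrative; actual ratios come from the product datasheet):

```python
def catalyst_mass(base_mass_g: float, ratio_percent: float) -> float:
    """Mass of curing agent (in grams) to weigh out for a given mass
    of base, where ratio_percent is the catalyst percentage by weight."""
    return base_mass_g * ratio_percent / 100

# e.g. 500 g of base at an assumed 10% catalyst ratio:
print(catalyst_mass(500, 10))  # 50.0
```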
For RTV silicone to reproduce surface textures, the original must be clean. Vacuum de-airing removes entrained air bubbles from the mixed silicone and catalyst to ensure optimal tensile strength, which in turn affects the number of reproductions a mold can produce. In casting and mold-making, RTV silicone rubber reproduces fine details and is suitable for a variety of industrial and art-related applications including prototypes, furniture, sculpture, and architectural elements. RTV silicone rubber can be used to cast materials including wax, gypsum, low-melt alloys/metals, and urethane, epoxy, or polyester resins (without using a release agent). A more recent innovation is the ability to 3D print RTV silicones. RTV silicones' industrial applications include aviation, aerospace, consumer electronics, and microelectronics. Some aviation and aerospace product applications are cockpit instruments, engine electronics potting, and engine gasketing. RTV silicones are used for their ability to withstand mechanical and thermal stress. Features: Easy operation Low viscosity and good flowability Low shrinkage Favorable tension No deformation Favorable hardness High-temperature resistance, acid and alkali resistance, and ageing resistance Advantages and disadvantages: RTV silicone rubber has excellent release properties compared to other mold rubbers, which is especially an advantage when doing production casting of resins (polyurethane, polyester, and epoxy). No release agent is required, obviating post-production cleanup. Silicones also exhibit good chemical resistance and high-temperature resistance (205 °C, 400 °F and higher). For this reason, silicone molds are suitable for casting low-melt metals and alloys (e.g. zinc, tin, pewter, and Wood's metal). RTV silicone rubbers are, however, generally expensive, especially platinum-cure grades.
They are also sensitive to substances (sulfur-containing modelling clay such as Plastilina, for example) that may prevent the silicone from curing (referred to as cure inhibition). Silicones are usually very thick (high viscosity), and must be vacuum degassed prior to pouring, to minimize bubble entrapment. If making a brush-on rubber mold, the curing time factor between coats is long (longer than urethanes or polysulfides, shorter than latex). Silicone components (A+B) must be mixed accurately by weight (scale required) or they do not work. Tin catalyst silicone shrinks somewhat and does not have a long shelf life. Acetoxysilane-based RTV releases acetic acid during the curing process, and this can attack solder joints, detaching the solder from the copper wire.
**IBM Lotus Expeditor** IBM Lotus Expeditor: IBM Lotus Expeditor is a software framework by IBM's Lotus Software division for the construction, integration, and deployment of "managed client applications", which are client applications that are deployed, configured, and managed onto a desktop, usually by a remote server. The goal is to allow developers to create applications that take advantage of running on a local client, while having the same ease of maintenance as web-based applications. Description: There are several parts to Expeditor: Lotus Expeditor Client for Desktop is used for running client applications on Microsoft Windows, Mac OS X and Linux. These applications can be written using a combination of OSGi, Java EE, and Eclipse Rich Client Platform (RCP) technologies, running on a Java virtual machine. Lotus Expeditor Client for Devices is a configuration of the platform for Microsoft Windows Mobile devices and the Nokia E90 Communicator. This configuration of the platform includes the Eclipse embedded Rich Client Platform (eRCP) running on a Java ME virtual machine. Description: Lotus Expeditor Server is used to deploy, configure and maintain applications running on Lotus Expeditor Clients. It runs on top of the IBM DB2 database management system and the Java EE-based IBM WebSphere Application Server. Expeditor Server is not necessary for Expeditor Client applications to run. Client applications can run standalone, and optionally exploit the services of the Lotus Expeditor Server for data synchronization, transactional messaging and automated, remote, application management. In addition, Lotus Expeditor Toolkit is for developers to create Expeditor applications and customized Expeditor runtimes. It runs on top of the Eclipse integrated development environment. Description: Lotus Software uses Expeditor in many of its own products, including Notes (from version 8), Sametime (from version 7.5), and Symphony.
History: Lotus Expeditor has its roots in IBM's Pervasive Computing (PvC) initiatives, which were associated with the pursuit of ubiquitous computing. Early forms of Lotus Expeditor were first outlined publicly in 2001 in an article on IBM's Pervasive Computing Device Architecture. This architecture served as the basis for IBM PvC embedded software deliveries in many areas, including automotive telematics, industrial control, residential gateways, desktop screenphones, and handheld mobile devices. History: In 2003, the core of the PvC Device Architecture, the OSGi Service Platform, was used in a refactoring of the Eclipse runtime to incubate what became Eclipse 3.0. This incubator project was referred to as Equinox. Eclipse 3.0 was released in 2004 as a refactored runtime (Rich Client Platform or RCP) and an integrated development environment (IDE) that exploited RCP. Later in 2004, IBM announced Workplace Client Technology (WCT) for creating managed client applications targeted at desktops. WCT was an application of the PvC Device Architecture to desktops, which then included the RCP technologies. WCT also came with document editors that could read word processing documents, spreadsheets, and presentations in OpenDocument format. Later that year, IBM rebranded the PvC Device Architecture as a platform called Workplace Client Technology, Micro Edition (WCTME). IBM took the existing Workplace Client Technology and renamed it Workplace Client Technology, Rich Client Edition (later Rich Edition or WCTRE). History: IBM created a configuration of the WCTME platform, called Workplace Client Technology, Micro Edition—Enterprise Offering (WCTME-EO), as the first generally available product to support the construction and deployment of desktop applications for Workplace.
WCT Micro Edition—Enterprise Offering had a smaller footprint than WCT Rich Edition by focusing only on the integration of line-of-business applications and, correspondingly, not including the document editors. The names of the technologies continued to evolve in the next couple of years. History: WCT Rich Edition became known as the Workplace Managed Client. History: WCT Micro Edition—Enterprise Offering was briefly renamed Workplace Managed Client for WebSphere before it was released as WebSphere Everyplace Deployment for Windows and Linux. (WebSphere Everyplace Deployment referred to both client and server technologies.) In 2006, IBM started to de-emphasize the Workplace brand in favor of its existing Lotus and WebSphere brands. As part of this effort, it created the Expeditor brand within Lotus: WebSphere Everyplace Deployment became Lotus Expeditor. History: In particular, WebSphere Everyplace Deployment for Windows and Linux became Lotus Expeditor Client for Desktop. Workplace Client Technology, Micro Edition became Lotus Expeditor Client for Devices. The server components from WebSphere Everyplace Deployment products that dealt with managing desktop and mobile applications became Lotus Expeditor Server. Some of the technology in Workplace Managed Client, such as its document editors, was incorporated into Lotus Notes 8 and Lotus Symphony.
**Chordboard** Chordboard: The chordboard is a software-based electronic musical instrument played by a keyboard controller. One implementation is a set of four MIDI keyboards arranged vertically. The patent for this musical technology, obtained by inventor Grant Johnson in 1995, specifically identifies the seven chords that exist for each key signature, and how these key signatures can be selected at any time while playing the instrument to achieve a key signature change, and thus an instant change in chords. In every key signature there are seven chords, and each of these chords is identified on the chordboard as follows (signified by the Nashville Number System): the I chord is major tonality, referred to as the ROOT CHORD; the ii chord is minor tonality, the SUPERTONIC CHORD; the iii chord is minor tonality, the MEDIANT CHORD; the IV chord is major tonality, the SUB-DOMINANT CHORD; the V chord is major tonality, the DOMINANT CHORD; the vi chord is minor tonality, the SUB-MEDIANT CHORD; and the vii chord is diminished tonality, the LEADING TONE CHORD. Pressing a key on any one of the chordboard's four keyboards sounds a single note within the chord zone. Each key plays only one note. To play a chord, the musician plays two or more keys, similar to a piano. History: The chordboard originally started as a stand-alone, self-contained prototype unit created by Grant Johnson (inventor/musician), but evolved into a software-based musical instrument that could work with a variety of hardware devices, as music technology has continually evolved centered around the DAW. With Digital Audio Workstations (DAWs) readily accessible to the consumer masses thanks to much lower-cost options (some freely available, e.g. Audacity), the chordboard can be configured using any type of MIDI keyboard connected via USB to a Mac or PC.
The Chordboard evolved into a set of vertically stacked keyboards dubbed the Chordboard STAC, consisting of four MIDI controller keyboards (61 keys each), in which the black keys were disabled in software and only the white keys were mapped to the various harmonic notes within the chord, depending on genre. For instance, in a JAZZ genre, the STAC would utilize chord voicings compatible with larger chords such as the 9th, 11th, and 13th. The STAC allowed for key changes and genre changes on the fly via a programmable section of keys on the top keyboard. All seven chords within any selected key signature were represented by the STAC (see photo). Functionality: Large symphonic chord voicings can be played on the chordboard, with 12 notes available for each of the seven chord zones (84 active notes total). Each white key on the MIDI keyboard represents an individual note within a chord zone and is mapped to a note within a harmonic chord-voicing pattern programmed for each chord, according to major-minor tonality and a particular voicing style (such as classic, jazz, or oriental). Because of the chord-voicing mapping that takes place within the software, the entry-level learning curve is short. In contrast, because of the millions of possible chord voicings among the seven chords in a key signature, more advanced playing technique can develop over a much longer period of time. Functionality: The chordboard interface is made up of seven active chords, with the option to change chord values based on every possible key signature (24 unique key signatures, 12 major and 12 minor, in the keys of C, Db, D, Eb, E, F, Gb, G, Ab, A, Bb, and B). Each chord value can change instantly, depending on which key signature the player selects from the top keyboard in the stack of four MIDI keyboards. For example, in the key of C, the I chord is C major, the ii chord is D minor, and so on up the scale.
The chordboard allows the player to select key signature changes on the fly, as well as chord style (genre) changes, making a vast variety of chord voicings available for each chord. Functionality: The Chordboard STAC's functionality is explained visually in the video "HOW to Play the Chordboard STAC". Sample songs created with the STAC include "Music in my Soul", "Meso Kingman", "Ballad of Bordeaux", and "STACSOUNDS AFRICANA" (© 2009)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lens lantern** Lens lantern: A lens lantern is a small, self-contained lamp structure which may be used to serve as a lighthouse light. Unlike a regular Fresnel lens, the lantern requires no housing to protect it from the weather; its glass sides refract and magnify the light in the same fashion as a lens would. Lens lanterns were popular alternatives to lighthouses in the nineteenth century: they required less care, were cheaper to erect, and could be placed fairly easily.
**Derp (hacker group)** Derp (hacker group): Austin Thompson, known as DerpTrolling, is a hacker who was active from 2011 to 2014. He largely used Twitter to coordinate distributed denial-of-service (DDoS) attacks on various high-traffic websites. In December 2013 he managed to bring down large gaming services such as League of Legends in an attempt to troll the popular livestreamer PhantomL0rd. Public reaction to his presence has been generally negative, largely owing to the unclear nature of his motives. Attacks: Initially, Derp sent a few tweets from the Twitter account "DerpTrolling" indicating that he was going to bring down the popular online game League of Legends. His first attack, however, was on a game called Quake Live. Hours afterwards, many of the League of Legends game server regions in North America, Europe, and Oceania, as well as the website and Internet forums, were taken down. To bring down the game servers, he used an indirect attack on Riot Games' internet service provider, Internap. He revealed that he had been targeting a popular livestreamer who goes by the name of PhantomL0rd on the streaming website Twitch. Reddit summarized the report by saying that he had planned to use DDoS attacks to flood traffic on various high-profile gaming websites associated with PhantomL0rd, including League of Legends and Blizzard Entertainment's Battle.net. According to The Escapist, the hacker also threatened to take down Dota 2 if PhantomL0rd were to lose his game, a threat the hacker carried out. However, he only crashed PhantomL0rd's game, while other Dota 2 games ran normally. Attacks: When PhantomL0rd asked the hacker why he was attacking these sites, he responded that it was "for the lulz" and also partially out of dislike for "money-hungry companies." He also persuaded PhantomL0rd into playing Club Penguin while simultaneously managing to take down Electronic Arts' website EA.com.
PhantomL0rd's personal information was leaked during the attack and released onto multiple gaming websites, in a process often referred to as doxing. This led to many fake pizza orders arriving at his house, as well as a police raid on his home after a false report of a hostage situation. According to PhantomL0rd, at least six police officers searched his house, only realizing later that the call was fake. The hacker claimed to have additionally attacked several other Internet games and websites, including World of Tanks, the North Korean news network KCNA, RuneScape, Eve Online, a Westboro Baptist Church website, the website and online servers of Minecraft, and many others. A day after the attacks, Riot Games issued a statement confirming that its League of Legends services had indeed been attacked by the hacker, though the services had since been brought back online. Aftermath and reaction: The news website LatinoPost criticized the attack as "frivolous" and done "just for attention," unlike the actions of so-called hacktivist groups. VentureBeat noted that PhantomL0rd's stream was still drawing over one hundred thousand viewers and that the attention was "still good for his traffic." PlayStation LifeStyle stated its belief that the then-current problems with the PlayStation Network had more to do with the "influx of new PS4 owners and increased holiday online activity" than with any damage the hacker attempted on the network. Mike Futter, editor of the popular gaming news website Game Informer, also blamed the Twitch streaming service and PhantomL0rd for not shutting down the stream immediately despite having received several warnings throughout, arguing that this was tantamount to being accomplices to the crime. PhantomL0rd (James Varga) defended himself by saying that he was merely trying to maintain a business, and that if he had not complied, DerpTrolling would have targeted another streamer.
**Cyclododecahexaene** Cyclododecahexaene: Cyclododecahexaene or [12]annulene (C12H12) is a member of the series of annulenes, of some interest in organic chemistry for the study of aromaticity. Cyclododecahexaene is non-aromatic due to the lack of planarity of its structure. The dianion, on the other hand, with 14 π-electrons, is Hückel-aromatic and more stable. According to in silico experiments, the tri-trans isomer is expected to be the most stable, followed by the 1,7-ditrans and all-cis isomers (+1 kcal/mol) and by the 1,5-ditrans isomer (+5 kcal/mol). Cyclododecahexaene: The first [12]annulene, with sym-tri-trans configuration, was synthesized in 1970 from a tricyclic precursor by photolysis at low temperatures. On heating, the compound rearranges to a bicyclic [6.4.0] isomer. Reducing the compound at low temperatures allowed analysis of the dianion by proton NMR, with the inner protons resonating at −4.5 ppm relative to TMS, evidence of an aromatic diamagnetic ring current. Cyclododecahexaene: In one study, the 1,7-ditrans isomer was generated at low temperatures in THF by dehydrohalogenation of a hexabromocyclododecane with potassium tert-butoxide. Reduction of this compound at low temperature with caesium metal leads first to the radical anion and then to the dianion. The chemical shift for the internal protons in this compound, at +0.2 ppm, is much more modest than in the tri-trans isomer. Cyclododecahexaene: Heating the radical-anion solution to room temperature leads to loss of one equivalent of hydrogen and formation of the heptalene radical anion.
**Service robot** Service robot: Service robots assist human beings, typically by performing jobs that are dirty, dull, distant, dangerous or repetitive. They are typically autonomous and/or operated by a built-in control system, with manual override options. Service robot: The term "service robot" does not have a strict technical definition. The International Organization for Standardization defines a “service robot” as a robot “that performs useful tasks for humans or equipment excluding industrial automation applications”. (The first industrial robot arm, "Unimate," was developed by Joseph F. Engelberger, known as the "father of the robot arm," based on a design by George Devol.) According to ISO 8373, robots require “a degree of autonomy”, which is the “ability to perform intended tasks based on current state and sensing, without human intervention”. For service robots this ranges from partial autonomy, including human-robot interaction, to full autonomy, without active human intervention. The International Federation of Robotics (IFR) statistics for service robots therefore include systems based on some degree of human-robot interaction, or even full tele-operation, as well as fully autonomous systems. Service robot: Service robots are categorized according to personal or professional use. They have many forms and structures as well as application areas. Types: The possible applications of robots assisting in human chores are widespread. At present there are a few main categories that these robots fall into. Types: Industrial Industrial service robots can be used to carry out simple tasks, such as examining welding, as well as more complex, harsh-environment tasks, such as aiding in the dismantling of nuclear power stations.
Industrial robots have been defined by the International Federation of Robotics as "an automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications". Types: Frontline Service Robots Frontline service robots are system-based, autonomous and adaptable interfaces that interact, communicate and deliver service to an organization’s customers. Types: Domestic Domestic robots perform tasks that humans regularly perform in non-industrial environments such as people's homes: cleaning floors, mowing the lawn and maintaining pools. People with disabilities, as well as older people, may soon be able to use service robots to help them live independently. It is also possible to use certain robots as assistants or butlers. Types: Scientific Robotic systems perform many functions, such as repetitive tasks in research. These range from the multiple repetitive tasks performed by gene samplers and sequencers, to systems which can almost replace the scientist in designing and running experiments, analysing data and even forming hypotheses. Types: Autonomous scientific robots perform tasks which humans would find difficult or impossible, from the deep sea to outer space. The Woods Hole Sentry can descend to 4,500 metres and allows a higher payload, as it does not need a support ship or the oxygen and other facilities demanded by human-piloted vessels. Robots in space include the Mars rovers, which could carry out sampling and photography in the harsh Martian environment. Types: Event Robots Event robots are starting to be used, within the realm of service robots, to engage with clients and event attendees; the "Eva" photography robot is one example of a robot used at events to engage attendees.
**Metaprogramming** Metaprogramming: Metaprogramming is a programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyze or transform other programs, and even modify itself while running. In some cases, this allows programmers to minimize the number of lines of code to express a solution, in turn reducing development time. It also gives programs greater flexibility to efficiently handle new situations without recompilation. Metaprogramming: Metaprogramming can be used to move computations from run time to compile time, to generate code using compile-time computations, and to enable self-modifying code. The ability of a programming language to be its own metalanguage is called reflection. Reflection is a valuable language feature for facilitating metaprogramming. Metaprogramming was popular in the 1970s and 1980s in list-processing languages such as LISP. LISP hardware machines were popular in the 1980s and enabled applications that could process code; they were frequently used for artificial intelligence applications. Approaches: Metaprogramming enables developers to write programs and develop code that falls under the generic programming paradigm. Having the programming language itself as a first-class data type (as in Lisp, Prolog, SNOBOL, or Rebol) is also very useful; this is known as homoiconicity. Generic programming invokes a metaprogramming facility within a language by allowing one to write code without concern for specifying data types, since they can be supplied as parameters when used. Approaches: Metaprogramming usually works in one of three ways. The first approach is to expose the internals of the run-time engine to the programming code through application programming interfaces (APIs), like that for the .NET IL emitter.
The second approach is dynamic execution of expressions that contain programming commands, often composed from strings, but sometimes from other methods using arguments or context, as in JavaScript. Thus, "programs can write programs." Although both approaches can be used in the same language, most languages tend to lean toward one or the other. Approaches: The third approach is to step outside the language entirely. General-purpose program transformation systems such as compilers, which accept language descriptions and carry out arbitrary transformations on those languages, are direct implementations of general metaprogramming. This allows metaprogramming to be applied to virtually any target language without regard to whether that target language has any metaprogramming abilities of its own. One can see this at work with Scheme and how it allows tackling some limitations faced in C by using constructs that were part of the Scheme language itself to extend C. Lisp is probably the quintessential language with metaprogramming facilities, both because of its historical precedence and because of the simplicity and power of its metaprogramming. In Lisp metaprogramming, the unquote operator (typically a comma) introduces code that is evaluated at program definition time rather than at run time; see self-evaluating forms and quoting in Lisp. The metaprogramming language is thus identical to the host programming language, and existing Lisp routines can be directly reused for metaprogramming, if desired. This approach has been implemented in other languages by incorporating an interpreter in the program, which works directly with the program's data. There are implementations of this kind for some common high-level languages, such as RemObjects’ Pascal Script for Object Pascal.
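The second approach can be sketched in a few lines of Python, which also mirrors the generative shell-script example the article discusses under Usages: a metaprogram composes the text of a 993-line program (a header plus 992 print statements) as a string, then executes it. This is a sketch of the idea, not the article's original POSIX shell script, and the variable names are illustrative.

```python
# Sketch of approach two ("programs can write programs"): compose program
# text as a string at run time, then execute it. The generated program has
# 993 lines - a header comment plus 992 print statements - and prints the
# numbers 1-992, mirroring the shell-script example described in Usages.

lines = ["# generated program"] + [f"print({i})" for i in range(1, 993)]
source = "\n".join(lines) + "\n"
assert len(source.splitlines()) == 993

namespace = {}
exec(source, namespace)  # runs the generated program: prints 1 through 992
```

The same string could just as easily be written to a file and run as a standalone program, which is the generative-programming variant of the idea.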
Usages: Code generation A simple example of a metaprogram is a POSIX shell script in the style of generative programming: a script that writes out a new 993-line program which prints the numbers 1–992. This is only an illustration of how to use code to write more code; it is not the most efficient way to print a list of numbers. Nonetheless, a programmer can write and execute such a metaprogram in less than a minute, and will have generated over 1000 lines of code in that amount of time. Usages: A quine is a special kind of metaprogram that produces its own source code as its output. Quines are generally of recreational or theoretical interest only. Not all metaprogramming involves generative programming. If programs are modifiable at runtime or if incremental compilation is available (such as in C#, Forth, Frink, Groovy, JavaScript, Lisp, Elixir, Lua, Nim, Perl, PHP, Python, REBOL, Ruby, Rust, SAS, Smalltalk, and Tcl), then techniques can be used to perform metaprogramming without actually generating source code. One style of generative approach is to employ domain-specific languages (DSLs). A fairly common example of using DSLs involves generative metaprogramming: lex and yacc, two tools used to generate lexical analyzers and parsers, let the user describe the language using regular expressions and context-free grammars, and embed the complex algorithms required to efficiently parse the language. Code instrumentation One usage of metaprogramming is to instrument programs in order to do dynamic program analysis. Challenges: Some argue that there is a sharp learning curve to making complete use of metaprogramming features. Since metaprogramming gives more flexibility and configurability at runtime, misuse or incorrect use of metaprogramming can result in unwarranted and unexpected errors that can be extremely difficult to debug for an average developer.
It can introduce risks in the system and make it more vulnerable if not used with care. Common problems arising from misuse of metaprogramming include the compiler's inability to identify missing configuration parameters, and invalid or incorrect data resulting in unknown exceptions or incorrect results. Because of this, some believe that only highly skilled developers should work on developing features which exercise metaprogramming in a language or platform, and that average developers must learn how to use these features as part of convention. Uses in programming languages: Macro systems Common Lisp and most Lisp dialects. Uses in programming languages: Scheme hygienic macros MacroML Racket (programming language) Template Haskell Scala macros Clojure macros Nim Rust Haxe Julia Elixir Macro assemblers The IBM/360 and derivatives had powerful macro assembler facilities that were often used to generate complete assembly language programs or sections of programs (for different operating systems, for instance). Macros provided with the CICS transaction processing system generated COBOL statements as a pre-processing step. Uses in programming languages: Other assemblers, such as MASM, also support macros. Metaclasses Metaclasses are provided by the following programming languages: Common Lisp Python Nil Groovy Ruby Smalltalk Lua Template metaprogramming C "X Macros" C++ Templates D Common Lisp, Scheme and most Lisp dialects by using the quasiquote ("backquote") operator. Nim Staged metaprogramming MetaML MetaOCaml Scala natively or using the Lightweight Modular Staging Framework Terra Dependent types Usage of dependent types allows proving that generated code is never invalid. However, this approach is bleeding-edge and is rarely found outside of research programming languages. Implementations: The list of notable metaprogramming systems is maintained at List of Program Transformation Systems.
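As a sketch of the metaclass mechanism mentioned above for Python: a metaclass runs at class-definition time and can rewrite or register the class being created. The registry pattern below is a common illustration; all names are invented for this example, not taken from any particular library.

```python
# Minimal metaclass sketch: the metaclass intercepts class creation and
# auto-registers every concrete subclass in a lookup table. Names here
# are illustrative.

registry = {}

class AutoRegister(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # skip the abstract root class itself
            registry[name] = cls
        return cls

class Plugin(metaclass=AutoRegister):
    pass

class CsvPlugin(Plugin):  # defining the class is enough to register it
    pass

print(sorted(registry))  # ['CsvPlugin']
```

Merely defining `CsvPlugin` registers it, with no explicit registration call: the program's class definitions have themselves become data that other code inspects.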
**Steinitz exchange lemma** Steinitz exchange lemma: The Steinitz exchange lemma is a basic theorem in linear algebra used, for example, to show that any two bases for a finite-dimensional vector space have the same number of elements. The result is named after the German mathematician Ernst Steinitz. It is often called the Steinitz–Mac Lane exchange lemma, also recognizing Saunders Mac Lane's generalization of Steinitz's lemma to matroids. Statement: Let $U$ and $W$ be finite subsets of a vector space $V$. If $U$ is a set of linearly independent vectors and $W$ spans $V$, then: 1. $|U| \le |W|$; 2. there is a set $W' \subseteq W$ with $|W'| = |W| - |U|$ such that $U \cup W'$ spans $V$. Proof: Suppose $U = \{u_1, \dots, u_m\}$ and $W = \{w_1, \dots, w_n\}$. We wish to show that $m \le n$, and that after rearranging the $w_j$ if necessary, the set $\{u_1, \dots, u_m, w_{m+1}, \dots, w_n\}$ spans $V$. We proceed by induction on $m$. For the base case, suppose $m$ is zero. In this case, the claim holds because there are no vectors $u_i$, and the set $\{w_1, \dots, w_n\}$ spans $V$ by hypothesis. Proof: For the inductive step, assume the proposition is true for $m - 1$. By the inductive hypothesis we may reorder the $w_i$ so that $\{u_1, \dots, u_{m-1}, w_m, \dots, w_n\}$ spans $V$. Since $u_m \in V$, there exist coefficients $\mu_1, \dots, \mu_n$ such that $u_m = \sum_{j=1}^{m-1} \mu_j u_j + \sum_{j=m}^{n} \mu_j w_j$. At least one of $\mu_m, \dots, \mu_n$ must be non-zero, since otherwise this equality would contradict the linear independence of $\{u_1, \dots, u_m\}$; it follows that $m \le n$. By reordering the $w_m, \dots, w_n$ if necessary, we may assume that $\mu_m$ is non-zero. Therefore, we have $w_m = \frac{1}{\mu_m}\left(u_m - \sum_{j=1}^{m-1} \mu_j u_j - \sum_{j=m+1}^{n} \mu_j w_j\right)$. In other words, $w_m$ is in the span of $\{u_1, \dots, u_m, w_{m+1}, \dots, w_n\}$. Since this span contains each of the vectors $u_1, \dots, u_{m-1}, w_m, w_{m+1}, \dots, w_n$, by the inductive hypothesis it contains $V$. Applications: The Steinitz exchange lemma is a basic result in computational mathematics, especially in linear algebra and in combinatorial algorithms.
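As an illustration of the exchange step (an added example, not part of the original statement), take $V = \mathbb{R}^2$ with $U = \{u_1\}$, $u_1 = (1,1)$, and $W = \{w_1, w_2\}$ the standard basis:

```latex
u_1 = 1 \cdot w_1 + 1 \cdot w_2 , \qquad \mu_1 = \mu_2 = 1 \neq 0 ,
\quad\text{so}\quad
w_1 = \frac{1}{\mu_1}\,\bigl( u_1 - \mu_2 w_2 \bigr) = u_1 - w_2 .
```

Hence $\{u_1, w_2\}$ spans $\mathbb{R}^2$, and $W' = \{w_2\}$ satisfies $|W'| = |W| - |U| = 1$, as the lemma asserts.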
**Epanalepsis** Epanalepsis: Epanalepsis (from the Greek ἐπανάληψις, epanálēpsis "repetition, resumption, taking up again") is the repetition of the initial part of a clause or sentence at the end of that same clause or sentence. The beginning and the end of a sentence are two positions of emphasis, so special attention is placed on the phrase by repeating it in both places. Nested double-epanalepses are antimetaboles. Examples: The king is dead; long live the king! History is ours and people make history. — Salvador Allende They bowed down to him rather, because he was all of these things, and then again he was all of these things because the town bowed down. —Zora Neale Hurston, Their Eyes Were Watching God Beloved is mine; she is Beloved. — Toni Morrison, Beloved Blow winds and crack your cheeks! Rage, blow! — Shakespeare, King Lear, 3.2.1 Once more unto the breach, dear friends, once more; — Shakespeare, Henry V, 3.1.1 Last things first; the slow haul to forgive them ... a telling figure out of rhetoric, | epanalepsis, the same word first and last. — Geoffrey Hill, The Triumph of Love, Section X Nice to see you, to see you, nice. — Bruce Forsyth (As a phrase repeated but inverted, this is also an example of antimetabole.)
**Skeptics in the Pub** Skeptics in the Pub: Skeptics in the Pub (abbreviated SITP) is an informal social event designed to promote fellowship and social networking among skeptics, critical thinkers, freethinkers, rationalists and other like-minded individuals. It provides an opportunity for skeptics to talk, share ideas and have fun in a casual atmosphere, and to discuss whatever topical issues come to mind, while promoting skepticism, science, and rationality. Format: "Skeptics in the Pub" is not a protected term; anyone can set one up. There is also no formal procedure for organising an event; organisers can fill in activities as they see fit. There are, however, some common approaches across the world to hosting such events that make them more successful. The usual format of meetings includes an invited speaker who gives a talk on a specific topic, followed by a question-and-answer session. Other meet-ups are informal socials with no fixed agenda. The groups usually meet once a month at a public venue, most often a local pub. By 2012 there were more than 100 different "SitP" groups running around the world. History: London The earliest and longest-running event is the award-winning London meeting, established by Australian philosophy professor Scott Campbell in 1999. Campbell based the idea on Philosophy in the Pub and Science in the Pub, two groups which had been running in Australia for some time. History: The inaugural speaker was Wendy M. Grossman, the editor and founder of The Skeptic magazine, in February 1999; this first talk attracted 30 attendees. The London group claims to be the "World's largest regular pub meeting," with 200 to 400 people in attendance at each meeting. Campbell ran the London group for three years while there on a teaching sabbatical, and was succeeded after his return to Australia by two sci-fi fans and skeptics, Robert Newman and Marc LaChappelle.
Nick Pullar, who made a television appearance as "Convener of Skeptics in the Pub" on the BBC spoof show Shirley Ghostman, then led the group from 2003 to 2008. History: As of 2011, the London group was co-convened by Sid Rodrigues, who has co-organised events in several other cities around the world. This group has conducted experiments on the paranormal as part of James Randi's One Million Dollar Paranormal Challenge and co-organised An Evening with James Randi & Friends. Speakers at London Skeptics in the Pub have included Simon Singh, Victor Stenger, Jon Ronson, Phil Plait, David Colquhoun, Richard J. Evans, S. Fred Singer, Ben Goldacre, David Nutt, and Mark Stevenson. History: Around the world The ease of use of the internet, via social networking sites and content management systems, has led to more than 100 active chapters around the world, including more than 30 in the US and more than 40 in the UK. In 2009, D. J. Grothe described the rise of Skeptics in the Pub across cities in North America and elsewhere as a prominent example of "Skepticism 2.0". SITPs were often founded outside the realm of existing skeptical organisations (mostly centred around magazines), with some successful meetings growing to become fully-fledged membership organisations. "Skeptics in the Pub" would later serve as the template for other skeptical, rationalist, and atheist meet-ups around the globe, including the James Randi Educational Foundation's "The Amazing Meeting", Drinking Skeptically, The Brights, and the British Humanist Association social gatherings. Since 2010 Edinburgh Skeptics in the Pub has extended the Skeptics in the Pub concept over the whole Edinburgh International Festival Fringe, under the banner Skeptics on the Fringe, and since 2012 has done the same at the Edinburgh International Science Festival under the title At The Fringe of Reason.
The Merseyside Skeptics Society and Greater Manchester Skeptics (forming North West Skeptical Events Ltd) hosted three two-day conferences, QED, in February 2011, March 2012 and April 2013. Glasgow Skeptics has also hosted two one-day conferences, as of July 2011.
**Free-radical addition** Free-radical addition: In organic chemistry, free-radical addition is an addition reaction which involves free radicals. The addition may occur between a radical and a non-radical, or between two radicals. The basic steps, with examples, of free-radical addition (also known as the radical chain mechanism) are: Initiation by a radical initiator: a radical is created from a non-radical precursor. Free-radical addition: Chain propagation: a radical reacts with a non-radical to produce a new radical species. Chain termination: two radicals react with each other to create a non-radical species. Free-radical reactions depend on a reagent having a (relatively) weak bond, allowing it to homolyse to form radicals (often with heat or light). Reagents without such a weak bond would likely react via a different mechanism. An example of an addition reaction involving aryl radicals is the Meerwein arylation. Addition of mineral acid to an alkene: To illustrate, consider the alkoxy-radical-catalyzed, anti-Markovnikov addition of hydrogen bromide to an alkene. In this reaction, a catalytic amount of organic peroxide is needed to abstract the acidic proton from HBr and generate the bromine radical; however, a full molar equivalent of alkene and acid is required for completion. Note that the radical will be on the more substituted carbon. Free-radical addition does not occur with HCl or HI: in both cases the reactions are extremely endothermic and are not chemically favored. Self-terminating oxidative radical cyclizations: In one specific type of radical addition, called self-terminating oxidative radical cyclization, alkynes are oxidized to ketones through intramolecular radical cyclization in which the radical species are inorganic rather than carbon-based.
This type of reaction is self-terminating because propagation is not possible and the initiator is used in stoichiometric amounts. As an example, a nitrate radical generated by photolysis of ammonium cerium(IV) nitrate (CAN) reacts with an alkyne to generate first a very reactive vinyl radical and then, via 1,5-hydrogen atom transfer (HAT) and 5-exo-trig ring closure, a ketyl radical. The ketyl then dislodges a nitrite radical, which is not reactive enough for propagation, and thus the ketone is formed. Self-terminating oxidative radical cyclizations: The radical species is in effect a single-oxygen-atom synthon. Other inorganic radicals that show this type of reactivity are sulfate radical anions (from ammonium persulfate) and hydroxyl radicals.
**Practical Color Coordinate System** Practical Color Coordinate System: The Practical Color Coordinate System (PCCS) is a discrete color space indexed by hue and tone. It was developed by the Japan Color Research Institute.
**Roland SP-808** Roland SP-808: The Roland SP-808 GrooveSampler and SP-808EX/E-Mix Studio are both discontinued workstations which function as digital samplers, synthesizers, and music sequencers. They are part of the long line of Roland Corporation's and Boss Corporation's Groove Gear, which includes the more popular and successful Boss SP-303 and Roland SP-404. Background: An early installment in the SP lineage, the SP-808 GrooveSampler was originally released in 1998. In 2000, the sampler was updated, redesigned, and released as the SP-808EX, with the additional name "e-Mix Studio." Despite the upgrade, both versions of the SP-808 offer some features absent from the succeeding SP installments while lacking others. Features: Groovesampler The original Roland SP-808 GrooveSampler can play up to four stereo samples simultaneously, at sample rates of 44.1 and 32 kHz. The maximum sample time is 25 minutes of stereo at 44.1 kHz. A predecessor to the more popular SPs, the sampler itself can hold over 1,000 samples, while 100MB Zip disks can store up to 1,024 samples, roughly amounting to 64 minutes. Unlike some of the succeeding SP installments, the sampler has no USB or CompactFlash card option. Furthermore, audio samples can only be stored, read, and transferred directly from the Zip drive rather than internal RAM. In an effort to maximize storage space on Zip disks, Roland decided against the use of the AIFF and WAV audio formats. A D-Beam controller is also included. Features: E-Mix Studio An upgrade from the GrooveSampler, the SP-808EX E-Mix Studio includes a virtual monophonic synthesizer for use with the step sequencer and D-Beam controller. Vocal effects, guitar multi-effects, a 10-band vocoder, a Voice Transformer, a Mic Simulator, and a number of DJ-oriented groove effects were added as well. A larger 250MB Zip drive replaces the original 100MB Zip.
Sampling and recording time was extended to 61 stereo minutes. Expansion options include the OP-1 interface (6 analog outs, 2 digital I/Os, SCSI) and the OP-2 interface (XLR I/Os, digital I/O, SCSI). In regard to storing and transporting audio, the method is the same as on the GrooveSampler. Notable users: Despite receiving little popularity in comparison to the later SP-303 and SP-404 installments, Slipknot member 133 is known to have utilized the sampler for a number of years. DJ and music producer Rekha Malhotra is known for utilizing the SP-808 as well.
**Organ shoes** Organ shoes: Organ shoes are shoes worn by organists, designed to facilitate playing of the organ pedal keyboard. Since organ shoes are worn only at the organ, the use of special footwear also avoids picking up grit or grime that could scar or stain the pedal keys. Description: Organ shoes are typically as narrow as comfortably possible, to prevent accidental playing of more than one pedal key at a time. They usually have both leather soles and leather heels that are glued into place (rather than stitched), which allow the organist to slide the feet along and across the pedals easily. The soles should be thin enough to feel the pedal key surfaces reasonably easily, but sufficiently stiff for solid and secure playing contact with the pedal keys. Organ shoes typically have a slightly higher heel of about one inch, to ease playing with the heel and to allow non-adjacent notes to be played at once by one foot.
**Ziegelschiefer Formation** Ziegelschiefer Formation: The Ziegelschiefer Formation is a geologic formation in Germany. It preserves fossils dating back to the Carboniferous period.
**Hard NRG** Hard NRG: Hard NRG, NRG, nu-NRG, filthy hard house, or more recently just filth, is an electronic dance music genre similar in structure (with regard to sequencing and programming) to the UK hard house form, taking influences from German hard trance. The main difference is in the musical/thematic content of each style. Where UK hard house has uplifting, playfully fun and tough elements, NRG is ominous, dark, aggressive and relentless, with distressed, menacing and gritty sounds at a slightly faster BPM (155–165 average) than UK hard house (150–155 average). In regard to the mechanics of the scene, many of the labels have shifted from purely vinyl releases to CD single releases and digital downloads. Record labels which produce the genre include Vicious Circle, Flashpoint, Tidy Trax, Kaktai, Tonka Trax, Tinrib Digital, Noir Records and Noir Digital. History: 1980s–early 1990s: Roots in UK hard house and EU techno Acid house of the late 1980s was the 'happier' (playful/fun) side of dance music, exemplified by Italian piano-house (Italo disco/hi-NRG of the early 1990s), and this began to progress into a scene of its own. Throughout the first half of the 1990s, house music more akin to the soulful, disco-influenced dance music of the 1980s continued to flourish. By the mid-1990s, uplifting house music in this vein was in abundance, and producers in the UK such as The Sharp Boys were providing their own interpretation of the sound. They upped the BPMs a little, chopped up the disco samples into bite-size loops, chucked in a load of filters and created music that was pure dance-floor business. This was the sound that provided the basis for the origins of UK hard house. At the time, it was exclusive to the gay scene in the UK and for a while known as "Hand Bag House" or "Hard Bag".
Hard house as a style was epitomized in the early days by producers such as Paul Janes, The Tidy Boys, Pete Wardman, Steve Thomas, Ian M, Alan Thompson, Captain Tinrib, DJ Ziad and Tony De Vit. Some of the above-mentioned names were heavily involved in the club night 'Trade', which is widely regarded as the home and birthplace of UK hard house, with Tony De Vit as the 'godfather' of UK hard house. History: In the early 1990s, producers like Joey Beltram had transposed the sound of U.S. techno to Belgium and added their own twist to it. This new brand of techno was darker, harder, and generally nastier than anything that had preceded it. The techno that had emerged from Detroit in the US had the seemingly paradoxical quality of somehow being soulful while at the same time being 100% electronic. The Belgian techno sound ripped out this soul and replaced it with something altogether more sinister. It was this style of music that gave birth to the sound of the "Hoover", a gritty sound produced by the Roland Alpha Juno 2 synthesizer and so called because of its apparent similarity to the noises made by vacuum cleaners. History: Late 1990s–2000s: From nu-NRG to hard NRG By 1996–97, there was a steady flow of UK-based hard house that threw away the fun and uplifting parts to incorporate the "Hoover" and other gritty, menacing-sounding elements at a slightly higher tempo than conventional hard house; thus, the style effectively became known as "nu-NRG" when Blu Peter coined the phrase in a magazine interview. Doug Osbourne (Sourmash/Razor's Edge/Illuminatae), Gordon Matthewman (DJ Edge/Illuminatae), Jon Bell (Captain Tinrib), Jon Vaughan (Jon The Dentist/Hyperspace), John Truelove (Lectrolux), Quentin Franglen (Baby Doc/Hyperspace), Owen Swinard and Dom Sweeten (OD404), Paul King, Karim, John Newell (RR Fierce), Ben Keen (BK) and Nick Sentience all had a heavy hand in shaping this sound in the UK specifically.
Outside the UK, producers such as DJ Misjah (Dyewitness), Ramon Zenker (E-Trax/Phenomania/Exit EEE), Yoji Biomehanika, Commander Tom and Nuclear Hyde all dabbled with the sound from time to time. History: The late 1990s and early 2000s saw NRG expand a little further when the sound became even fiercer, darker and much more serious than nu-NRG. DJ Kristian then coined the phrase "hard NRG", while Jon Bell (Captain Tinrib/Dr. Base/Fierce Base), John Newell (RR Fierce/Rim 'N Chop/Fierce Base), Karim Lamouri (Karim/Rim 'N Chop), Chris Payne (Casper) and Barmak Hatamian (The Alien Thing/Max and Amino) were instrumental in the development of hard NRG. History: 2010s–present At present, nu-NRG and hard NRG are known simply as NRG throughout the scene, as it has become extremely difficult to draw a line of distinction between the two styles.
**Postnikov square** Postnikov square: In algebraic topology, a Postnikov square is a certain cohomology operation from a first cohomology group H¹ to a third cohomology group H³, introduced by Postnikov (1949). Eilenberg (1952) described a generalization taking classes in Hᵗ to H²ᵗ⁺¹.
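In symbols, the degrees involved can be sketched as follows (coefficient groups are omitted here, since they depend on the particular construction):

```latex
P \colon H^{1}(X) \longrightarrow H^{3}(X),
\qquad\text{and, more generally,}\qquad
P \colon H^{t}(X) \longrightarrow H^{2t+1}(X).
```

The first map is Postnikov's original operation; the second is the shape of Eilenberg's generalization, which recovers the original case at t = 1.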
**Housebreaker (business)** Housebreaker (business): A housebreaker is an organisation that specialises in the disposition of large, old residential buildings. Housebreaker (business): From the late 19th century, peaking in the mid-20th, many large country houses, manors, stately homes, and castles in the United Kingdom became impractical to maintain: initially due to the repeal of the Corn Laws and the late 19th-century agricultural depression, later because of cultural changes following the First World War and then requisitioning during the Second World War. Often, they were sold to housebreakers such as Crowthers of London or Charles Brand of Dundee for disposal of their contents and demolition. Housebreaker (business): Typically, after an initial 'walk-round sale' or auction was carried out, fixtures, fittings, and occasionally whole rooms were sold off to museums or for re-installation in other properties. The main buildings were then un-roofed or demolished (see Destruction of country houses in 20th-century Britain). From 1969, the destruction of houses of architectural or historical significance was prohibited by law and the job of the housebreakers ended. An estimated 1,800 buildings were disposed of by housebreakers before this time.
**Dock3** Dock3: Dedicator of cytokinesis protein 3 (Dock3), also known as MOCA (modifier of cell adhesion) and PBP (presenilin-binding protein), is a large (~180 kDa) protein encoded in humans by the DOCK3 gene, involved in intracellular signalling networks. It is a member of the DOCK-B subfamily of the DOCK family of guanine nucleotide exchange factors (GEFs), which function as activators of small G-proteins. Dock3 specifically activates the small G protein Rac. Discovery: Dock3 was originally discovered in a screen for proteins that bind presenilin (a transmembrane protein which is mutated in early-onset Alzheimer's disease). Dock3 is specifically expressed in neurons (primarily in the cerebral cortex and hippocampus). Structure and function: Dock3 is part of a large class of proteins (GEFs) which contribute to cellular signalling events by activating small G proteins. In their resting state, G proteins are bound to guanosine diphosphate (GDP), and their activation requires the dissociation of GDP and binding of guanosine triphosphate (GTP). GEFs activate G proteins by promoting this nucleotide exchange. Dock3 exhibits the same domain arrangement as Dock180 (a member of the DOCK-A subfamily and the archetypal member of the DOCK family), and these proteins share a considerable (40%) degree of sequence similarity. Regulation: Since Dock3 shares the same domain arrangement as Dock180, it is predicted to have a similar array of binding partners, although this has yet to be demonstrated. It contains an N-terminal SH3 domain, which in Dock180 binds ELMO (a family of adaptor proteins which mediate recruitment and efficient GEF activity of Dock180), and a C-terminal proline-rich region which, in Dock180, binds the adaptor protein CRK. Downstream signalling: Dock3 GEF activity is directed specifically at Rac1.
Dock3 has not been shown to interact with Rac3, another Rac protein which is expressed in neuronal cells, and this may be because Rac3 is primarily located in the perinuclear region. In fact, Rac1 and Rac3 appear to have distinct and antagonistic roles in these cells. Dock3-mediated Rac1 activation promotes reorganisation of the cytoskeleton in SH-SY5Y neuroblastoma cells and primary cortical neurones as well as morphological changes in fibroblasts. It has also been shown to regulate neurite outgrowth and cell-cell adhesion in B103 and PC12 cells. In neurological disorders: The first indication that Dock3 might be involved in neurological disorders came when Dock3 was shown to bind to presenilin, a transmembrane enzyme involved in the generation of beta amyloid (Aβ), accumulation of which is an important step in the development of Alzheimer's disease. Dock3 has been shown to undergo redistribution and association with neurofibrillary tangles in brain samples from patients with Alzheimer's disease. A mutation in Dock3 was also identified in a family displaying a phenotype resembling attention-deficit hyperactivity disorder (ADHD).