**SURF1** SURF1: Surfeit locus protein 1 (SURF1) is a protein that in humans is encoded by the SURF1 gene. The protein encoded by SURF1 is a component of the mitochondrial translation regulation assembly intermediate of cytochrome c oxidase complex (MITRAC complex), which is involved in the regulation of cytochrome c oxidase assembly. Defects in this gene are a cause of Leigh syndrome, a severe neurological disorder that is commonly associated with systemic cytochrome c oxidase (complex IV) deficiency, and Charcot-Marie-Tooth disease 4K (CMT4K). Structure: SURF1 is located on the q arm of chromosome 9 in position 34.2 and has 9 exons. The SURF1 gene produces a 33.3 kDa protein composed of 300 amino acids. The protein is a member of the SURF1 family, which includes the related yeast protein SHY1 and rickettsial protein RP733. The gene is located in the surfeit gene cluster, a group of very tightly linked genes that do not share sequence similarity, where it shares a bidirectional promoter with SURF2 on the opposite strand. SURF1 is a multi-pass protein that contains two transmembrane regions, one 19 amino acids in length from positions 61-79 and the other 17 amino acids in length from positions 274–290. Function: This gene encodes a protein localized to the inner mitochondrial membrane and thought to be involved in the biogenesis of the cytochrome c oxidase complex. SURF1 is a multi-pass membrane protein component of the mitochondrial translation regulation assembly intermediate of cytochrome c oxidase complex (MITRAC complex). The MITRAC complex regulates cytochrome c oxidase assembly by acting as a central assembly intermediate, receiving subunits imported to the inner mitochondrial membrane and regulating COX1 mRNA translation. Clinical significance: Mutations in SURF1 have been associated with mitochondrial complex IV (cytochrome c oxidase) deficiency with clinical manifestations of Leigh syndrome and Charcot-Marie-Tooth disease 4K (CMT4K). Clinical significance: Mitochondrial complex IV deficiency Mitochondrial complex IV deficiency is a disorder of the mitochondrial respiratory chain with heterogeneous clinical manifestations, ranging from isolated myopathy to severe multisystem disease affecting several tissues and organs. Features include hypertrophic cardiomyopathy, hepatomegaly and liver dysfunction, hypotonia, muscle weakness, exercise intolerance, developmental delay, delayed motor development and mental retardation. Some affected individuals manifest a fatal hypertrophic cardiomyopathy resulting in neonatal death. A subset of patients manifest Leigh syndrome. In patients presenting with pathogenic mutations resulting in dysfunctioning SURF1, cytochrome c oxidase activity is likely to be diminished in one or more types of tissues. Clinical significance: Leigh syndrome Leigh syndrome is an early-onset progressive neurodegenerative disorder characterized by the presence of focal, bilateral lesions in one or more areas of the central nervous system including the brainstem, thalamus, basal ganglia, cerebellum and spinal cord. Clinical features depend on which areas of the central nervous system are involved and include subacute onset of psychomotor retardation, hypotonia, ataxia, weakness, vision loss, eye movement abnormalities, seizures, and dysphagia. There have been over 30 different mutations in SURF1 that have been associated with Leigh syndrome. 
These mutations, which comprise at least 10 missense or nonsense, 8 splice site, and 12 insertion or deletion mutations, are believed to produce dysfunctional SURF1, resulting in Leigh syndrome and cytochrome c oxidase deficiency. The most common mutation is believed to be 312_321del 311_312insAT. Clinical significance: Charcot-Marie-Tooth disease 4K (CMT4K) Charcot-Marie-Tooth disease 4K (CMT4K) is an autosomal recessive, demyelinating form of Charcot-Marie-Tooth disease, a disorder of the peripheral nervous system, characterized by progressive weakness and atrophy, initially of the peroneal muscles and later of the distal muscles of the arms. Charcot-Marie-Tooth disease is classified in two main groups on the basis of electrophysiologic properties and histopathology: primary peripheral demyelinating neuropathies (designated CMT1 when they are dominantly inherited) and primary peripheral axonal neuropathies (CMT2). Demyelinating neuropathies are characterized by severely reduced nerve conduction velocities (less than 38 m/sec), segmental demyelination and remyelination with onion bulb formations on nerve biopsy, slowly progressive distal muscle atrophy and weakness, absent deep tendon reflexes, and hollow feet. By convention, autosomal recessive forms of demyelinating Charcot-Marie-Tooth disease are designated CMT4. CMT4K patients manifest upper and lower limb involvement. Some affected individuals have nystagmus, polyneuropathy, putaminal and periaqueductal lesions, and late-onset cerebellar ataxia. This disease, when associated with mutations in SURF1, has been found to be linked to cytochrome c oxidase deficiency. Variants associated with CMT4K have included a homozygous splice site mutation, c.107-2A>G, a missense mutation, c.574C>T, and a deletion, c.799_800del. Interactions: SURF1 has been shown to have 11 binary protein-protein interactions including 8 co-complex interactions. SURF1 interacts with COA3 as part of the mitochondrial translation regulation assembly intermediate of cytochrome c oxidase complex (MITRAC complex). PTGES3, SLC25A5, COX6C, COX14, and COA1 have also been found to interact with SURF1.
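As a small consistency check on the Structure section above, the following sketch (illustrative only; the variable names and tuple layout are invented for this example and are not taken from any bioinformatics library) encodes the quoted transmembrane spans and confirms the stated segment lengths.

```python
# Illustrative sketch: encode the SURF1 structural facts quoted above and
# verify the stated lengths of the two transmembrane (TM) segments.
# Residue positions are 1-based and inclusive, as in the text.

PROTEIN_LENGTH_AA = 300     # stated length of the SURF1 protein
APPROX_MASS_KDA = 33.3      # stated molecular mass

tm_segments = {
    "TM1": (61, 79),        # stated as 19 amino acids long
    "TM2": (274, 290),      # stated as 17 amino acids long
}

for name, (start, end) in tm_segments.items():
    length = end - start + 1            # inclusive span
    assert 1 <= start <= end <= PROTEIN_LENGTH_AA
    print(f"{name}: residues {start}-{end} -> {length} aa")
# Prints 19 aa for TM1 and 17 aa for TM2, matching the figures in the text.
```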
**Mesoscale convective complex** Mesoscale convective complex: A mesoscale convective complex (MCC) is a unique kind of mesoscale convective system which is defined by characteristics observed in infrared satellite imagery. They are long-lived, often form nocturnally, and commonly contain heavy rainfall, wind, hail, lightning, and possibly tornadoes. Size: A mesoscale convective complex has either an area of cloud top of 100,000 km2 or greater with temperature less than or equal to −32 °C, or an area of cloud top of 50,000 km2 with temperature less than or equal to −52 °C. Size definitions must be met for 6 hours or greater. Its maximum extent is defined as when cloud shield reaches maximum area. Its eccentricity (minor axis/major axis) is greater than or equal to 0.7 at maximum extent. Development: MCCs commonly develop from the merging of thunderstorms into a squall line which eventually meet the MCC criteria. Furthermore, some MCC formation can be tracked from the plains in Colorado back to the Rocky Mountains. These are called "orogenic" complexes. The characteristics of the meteorological environment that MCCs form in are strong warm air advection into the formation environment by a southerly low-level jet stream (wind maximum), strong moisture advection which increases the relative humidity of the formation environment, convergence of air near the surface, and divergence of air aloft. These conditions are most prominent in the region ahead of an upper level trough. The systems begin in the afternoon as scattered thunderstorms which organize overnight in the presence of wind shear (wind speed and direction changes with height). The probability for severe weather is highest in the early stages of formation, during the afternoon. The MCC persists at its mature and strongest stage overnight and into the early morning in which the rainfall is characterized as stratiform rainfall (rather than convective rainfall which occurs with thunderstorms). Dissipation of the MCC commonly occurs during the morning hours. After dissipation, a remnant mid-level circulation known as a mesoscale convective vortex can initiate another round of thunderstorms later in the day. Structure: The structure of an MCC can be separated into three layers. The low-levels of the MCC near the surface, the mid-levels in the middle of the troposphere, and the upper-levels in the upper-troposphere. Near the surface, the MCC exhibits high pressure, with an outflow boundary, or mesoscale cold front, at its leading edge. This high pressure is caused by the cooling of the air from the evaporation of rainfall (commonly referred to as a cold pool). In the mid-levels (mid-troposphere), the MCC exhibits a cyclonic (counterclockwise in the Northern Hemisphere) rotating low pressure which is warm compared to the surrounding environment (referred to as a warm core). This mid-level circulation is referred to as a Mesoscale Convective Vortex. The upper-levels contain an anti-cyclonic (clockwise in the Northern Hemisphere) rotating high pressure which is a sign of divergence of air. This high pressure is colder relative to its surrounding environment. This divergence at upper-levels and convergence of air at the surface along the cool pool's outflow boundary results in rising motion which aids maintenance of the MCC. Effects and climatology: MCCs produce heavy rainfall which can lead to flooding and other hydrological impacts. 
MCCs are found in the United States during the spring and summer months (warm season), the Indian monsoon region, the West Pacific and throughout Africa and South America. In particular, the heavy rainfall from MCCs accounts for a significant portion of the precipitation during the warm season in the United States. As the warm season progresses, the favorable regions for MCC formation shift from the southern plains of the United States northward. By July and August, the north-central states become the most favorable. The mid-level low pressure areas of MCCs have also been tracked to the origin of some tropical cyclones, and on rare occasions, tropical cyclones can generate MCCs. Notable MCCs: One of the most notable MCCs occurred overnight on 19 July 1977, in western Pennsylvania. The MCC resulted in heavy rainfall which led to the disastrous flooding of Johnstown, Pennsylvania. The complex was tracked 96 hours back to South Dakota and produced copious amounts of rain throughout the northern United States before producing up to 12 inches (300 mm) of rain in Johnstown. Notable MCCs: A second notable MCC brought destructive straight-line winds to southern Ontario, Upstate New York, Vermont, Massachusetts, Connecticut, and Rhode Island on the morning of 15 July 1995. The MCC produced winds in excess of 160 km/h (100 mph) and was responsible for seven deaths, widespread destruction of forests in the Adirondack and Berkshire Mountains, and over $500 million in property damage. The formation of large MCCs over the same general area for a large percentage of the nights in April to July 1993, and their tendency to persist well into the next day, was a large part of the cause of the flooding in much of the central United States that year. External links and sources: Forecasting MCCs (Hydrometeorological Prediction Center); MCC description (Pennsylvania State University); MCC description (University of Illinois); MCC page (UCAR); NOAA Glossary.
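The thresholds in the Size section above amount to a simple classification rule. The sketch below expresses that rule directly as stated in this article; the function and parameter names are invented for illustration and are not part of any meteorological software package.

```python
# Minimal sketch of the MCC size, duration, and shape criteria quoted above.
# Inputs describe the cold cloud shield observed in infrared satellite imagery.

def meets_mcc_criteria(area_minus32_km2: float,
                       area_minus52_km2: float,
                       hours_size_criteria_met: float,
                       eccentricity_at_max_extent: float) -> bool:
    """Return True if the cloud shield satisfies the MCC definition given above.

    area_minus32_km2: area (km^2) with cloud-top temperature <= -32 degC
    area_minus52_km2: area (km^2) with cloud-top temperature <= -52 degC
    hours_size_criteria_met: how long the size criterion is continuously met
    eccentricity_at_max_extent: minor axis / major axis at maximum extent
    """
    size_ok = area_minus32_km2 >= 100_000 or area_minus52_km2 >= 50_000
    duration_ok = hours_size_criteria_met >= 6
    shape_ok = eccentricity_at_max_extent >= 0.7
    return size_ok and duration_ok and shape_ok

# A large, long-lived, nearly circular shield qualifies; an elongated one does not.
print(meets_mcc_criteria(120_000, 60_000, 8, 0.8))   # True
print(meets_mcc_criteria(120_000, 60_000, 8, 0.5))   # False
```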
**Lookout** Lookout: A lookout or look-out is a person in charge of the observation of hazards. The term originally comes from a naval background, where lookouts would watch for other ships, land, and various dangers. The term has now passed into wider parlance. Naval application: Lookouts have traditionally been placed high on masts, in crow's nests and tops. Naval application: The International Regulations for Preventing Collisions at Sea (1972) says in part: Every vessel must at all times keep a proper look-out by sight (day shape or lights by eyes or visual aids), hearing (sound signal or Marine VHF radio) and all available means (e.g. Radar, ARPA, AIS, GMDSS...) in order to judge if risk of collision exists. Lookouts report anything they see and/or hear. When reporting contacts, lookouts give information such as the bearing of the object, which way the object is headed, target angles and position angles, and what the contact is. Lookouts should be thoroughly familiar with the various types of distress signals they may encounter at sea. Criminal definition: By analogy, the term "lookout" is also used to describe a person who accompanies criminals during the commission of a crime, and warns them of the impending approach of hazards: that is, police or eyewitnesses. Although lookouts typically do not actually participate in the crime, they can nonetheless be charged with aiding and abetting or with conspiracy, or as accomplices. Railway use: A lookout may be used when performing engineering works on an operational railway. They will be responsible for ensuring that all staff are cleared of the track in advance of an approaching train.
**Lignostilbene alpha,beta-dioxygenase** Lignostilbene alpha,beta-dioxygenase: In enzymology, a lignostilbene alpha,beta-dioxygenase (EC 1.13.11.43) is an enzyme that catalyzes the chemical reaction 1,2-bis(4-hydroxy-3-methoxyphenyl)ethylene + O2 ⇌ 2 vanillin. Thus, the two substrates of this enzyme are 1,2-bis(4-hydroxy-3-methoxyphenyl)ethylene and O2, whereas its product is vanillin. This enzyme belongs to the family of oxidoreductases, specifically those acting on single donors with O2 as oxidant and incorporation of two atoms of oxygen into the substrate (oxygenases). The oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is 1,2-bis(4-hydroxy-3-methoxyphenyl)ethylene:oxygen oxidoreductase (alpha,beta-bond-cleaving). It employs one cofactor, iron.
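As a quick check that the reaction above is atom-balanced, the sketch below counts atoms on each side. The molecular formulas (C16H16O4 for the lignostilbene substrate, C8H8O3 for vanillin) are inferred from the compound names rather than stated in the article.

```python
# Sketch: verify that substrate + O2 -> 2 vanillin is atom-balanced.
# Formulas are assumptions inferred from the compound names:
#   1,2-bis(4-hydroxy-3-methoxyphenyl)ethylene : C16H16O4
#   vanillin (4-hydroxy-3-methoxybenzaldehyde) : C8H8O3

from collections import Counter

substrate = Counter({"C": 16, "H": 16, "O": 4})
dioxygen = Counter({"O": 2})
vanillin = Counter({"C": 8, "H": 8, "O": 3})

reactants = substrate + dioxygen
products = vanillin + vanillin          # two vanillin molecules

print("reactant atoms:", dict(reactants))   # {'C': 16, 'H': 16, 'O': 6}
print("product atoms: ", dict(products))    # {'C': 16, 'H': 16, 'O': 6}
print("balanced:", reactants == products)   # True
```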
**Pilot error** Pilot error: Pilot error generally refers to an accident in which an action or decision made by the pilot was the cause or a contributing factor that led to the accident, but also includes the pilot's failure to make a correct decision or take proper action. Errors are intentional actions that fail to achieve their intended outcomes. The Chicago Convention defines the term "accident" as "an occurrence associated with the operation of an aircraft [...] in which [...] a person is fatally or seriously injured [...] except when the injuries are [...] inflicted by other persons." Hence the definition of "pilot error" does not include deliberate crashing (and such crashes are not classified as accidents). Pilot error: The causes of pilot error include psychological and physiological human limitations. Various forms of threat and error management have been incorporated into pilot training programs to teach crew members how to deal with impending situations that arise throughout the course of a flight. Accounting for the way human factors influence the actions of pilots is now considered standard practice by accident investigators when examining the chain of events that led to an accident. Description: Modern accident investigators avoid the words "pilot error", as the scope of their work is to determine the cause of an accident, rather than to apportion blame. Furthermore, any attempt to incriminate the pilots does not consider that they are part of a broader system, which in turn may be accountable for their fatigue, work pressure, or lack of training. The International Civil Aviation Organization (ICAO), and its member states, therefore adopted James Reason's model of causation in 1993 in an effort to better understand the role of human factors in aviation accidents. Pilot error is nevertheless a major cause of air accidents. In 2004, it was identified as the primary reason for 78.6% of disastrous general aviation (GA) accidents, and as the major cause of 75.5% of GA accidents in the United States. There are multiple factors that can cause pilot error; mistakes in the decision-making process can be due to habitual tendencies, biases, and a breakdown in the processing of the information coming in. For aircraft pilots, in extreme circumstances these errors are highly likely to result in fatalities. Causes of pilot error: Pilots work in complex environments and are routinely exposed to high amounts of situational stress in the workplace, inducing pilot error which may result in a threat to flight safety. While aircraft accidents are infrequent, they are highly visible and often involve significant numbers of fatalities. For this reason, research on causal factors and methodologies of mitigating risk associated with pilot error is exhaustive. Pilot error results from physiological and psychological limitations inherent in humans. "Causes of error include fatigue, workload, and fear as well as cognitive overload, poor interpersonal communications, imperfect information processing, and flawed decision making." Throughout the course of every flight, crews are intrinsically subjected to a variety of external threats and commit a range of errors that have the potential to negatively impact the safety of the aircraft. Causes of pilot error: Threats The term "threat" is defined as any event "external to flight crew's influence which can increase the operational complexity of a flight." Threats may further be broken down into environmental threats and airline threats.
Environmental threats are ultimately out of the hands of crew members and the airline, as they hold no influence on "adverse weather conditions, air traffic control shortcomings, bird strikes, and high terrain." Conversely, airline threats are not manageable by the flight crew, but may be controlled by the airline's management. These threats include "aircraft malfunctions, cabin interruptions, operational pressure, ground/ramp errors/events, cabin events and interruptions, ground maintenance errors, and inadequacies of manuals and charts." Errors The term "error" is defined as any action or inaction leading to deviation from team or organizational intentions. Error stems from physiological and psychological human limitations such as illness, medication, stress, alcohol/drug abuse, fatigue, emotion, etc. Error is inevitable in humans and is primarily related to operational and behavioral mishaps. Errors can vary from incorrect altimeter setting and deviations from flight course, to more severe errors such as exceeding maximum structural speeds or forgetting to put down landing or takeoff flaps. Causes of pilot error: Decision making Reasons for negative reporting of accidents include staff being too busy, confusing data entry forms, lack of training and less education, lack of feedback to staff on reported data and punitive organizational cultures. Wiegmann and Shappell invented three cognitive models to analyze approximately 4,000 pilot factors associated with more than 2,000 U.S. Navy aviation mishaps. Although the three cognitive models have slight differences in the types of errors, all three lead to the same conclusion: errors in judgment. The three steps are decision-making, goal-setting, and strategy-selection errors, all of which were highly related to primary accidents. For example, on 28 December 2014, AirAsia Flight 8501, which was carrying seven crew members and 155 passengers, crashed into the Java Sea due to several fatal mistakes made by the captain in the poor weather conditions. In this case, the captain chose to exceed the maximum climb rate for a commercial aircraft, which caused a critical stall from which he was unable to recover. Threat and error management (TEM): TEM involves the effective detection and response to internal or external factors that have the potential to degrade the safety of an aircraft's operations. Methods of teaching TEM stress replicability, or reliability of performance across recurring situations. TEM aims to prepare crews with the "coordinative and cognitive ability to handle both routine and unforeseen surprises and anomalies." The desired outcome of TEM training is the development of 'resilience'. Resilience, in this context, is the ability to recognize and act adaptively to disruptions which may be encountered during flight operations. TEM training occurs in various forms, with varying levels of success. Some of these training methods include data collection using the line operations safety audit (LOSA), implementation of crew resource management (CRM), cockpit task management (CTM), and the integrated use of checklists in both commercial and general aviation. Some other resources built into most modern aircraft that help minimize risk and manage threat and error are airborne collision and avoidance systems (ACAS) and ground proximity warning systems (GPWS). 
With the consolidation of onboard computer systems and the implementation of proper pilot training, airlines and crew members look to mitigate the inherent risks associated with human factors. Threat and error management (TEM): Line operations safety audit (LOSA) LOSA is a structured observational program designed to collect data for the development and improvement of countermeasures to operational errors. Through the audit process, trained observers are able to collect information regarding the normal procedures, protocols, and decision-making processes flight crews undertake when faced with threats and errors during normal operation. This data-driven analysis of threat and error management is useful for examining pilot behavior in relation to situational analysis. It provides a basis for further implementation of safety procedures or training to help mitigate errors and risks. Observers on flights which are being audited typically observe the following: potential threats to safety; how the threats are addressed by the crew members; the errors the threats generate; how crew members manage these errors (action or inaction); and specific behaviors known to be associated with aviation accidents and incidents. LOSA was developed to assist crew resource management practices in reducing human error in complex flight operations. LOSA produces beneficial data that reveals how many errors or threats are encountered per flight, the number of errors which could have resulted in a serious threat to safety, and the correctness of crew action or inaction. This data has proven to be useful in the development of CRM techniques and the identification of what issues need to be addressed in training. Threat and error management (TEM): Crew resource management (CRM) CRM is the "effective use of all available resources by individuals and crews to safely and effectively accomplish a mission or task, as well as identifying and managing the conditions that lead to error." CRM training has been integrated into and made mandatory for most pilot training programs, and has become the accepted standard for developing human factors skills for air crews and airlines. Although there is no universal CRM program, airlines usually customize their training to best suit the needs of the organization. The principles of each program are usually closely aligned. According to the U.S. Navy, there are seven critical CRM skills: decision making – the use of logic and judgement to make decisions based on available information; assertiveness – willingness to participate and state a given position until convinced by facts that another option is more correct; mission analysis – ability to develop short- and long-term contingency plans; communication – clear and accurate sending and receiving of information, instructions, commands, and useful feedback; leadership – ability to direct and coordinate the activities of pilots and crew members; adaptability/flexibility – ability to alter a course of action due to changing situations or the availability of new information; and situational awareness – ability to perceive the environment within time and space, and comprehend its meaning. These seven skills comprise the critical foundation for effective aircrew coordination. With the development and use of these core skills, flight crews "highlight the importance of identifying human factors and team dynamics to reduce human errors that lead to aviation mishaps."
Application and effectiveness of CRM Since the implementation of CRM circa 1979, following the need for increased research on resource management by NASA, the aviation industry has seen tremendous evolution in the application of CRM training procedures. The application of CRM has developed through a series of generations: the first generation emphasized individual psychology and testing, where corrections could be made to behavior; the second generation featured a shift in focus to cockpit group dynamics; the third generation brought a diversification of scope and an emphasis on training crews in how they must function both in and out of the cockpit; the fourth generation integrated CRM procedures into training, allowing organizations to tailor training to their needs; and the fifth (current) generation acknowledges that human error is inevitable and provides information to improve safety standards. Today, CRM is implemented through pilot and crew training sessions, simulations, and interactions with senior-ranked personnel and flight instructors, such as briefing and debriefing flights. Although it is difficult to measure the success of CRM programs, studies have been conclusive that there is a correlation between CRM programs and better risk management. Threat and error management (TEM): Cockpit task management (CTM) Cockpit task management (CTM) is the "management level activity pilots perform as they initiate, monitor, prioritize, and terminate cockpit tasks." A 'task' is defined as a process performed to achieve a goal (e.g., fly to a waypoint, descend to a desired altitude). CTM training focuses on teaching crew members how to handle concurrent tasks which compete for their attention. This includes the following processes: task initiation – when appropriate conditions exist; task monitoring – assessment of task progress and status; task prioritization – relative to the importance and urgency for safety; resource allocation – assignment of human and machine resources to tasks which need completion; task interruption – suspension of lower-priority tasks so that resources can be allocated to higher-priority tasks; task resumption – continuing previously interrupted tasks; and task termination – the completion or incompletion of tasks. The need for CTM training is a result of the limited capacity of human attention and working memory. Crew members may devote more mental or physical resources to a particular task which demands priority or concerns the immediate safety of the aircraft. CTM has been integrated into pilot training and goes hand in hand with CRM. Some aircraft operating systems have made progress in aiding CTM by combining instrument gauges into one screen. An example of this is a digital attitude indicator, which simultaneously shows the pilot the heading, airspeed, descent or ascent rate, and a plethora of other pertinent information. Implementations such as these allow crews to gather multiple sources of information quickly and accurately, which frees up mental capacity to be focused on other, more prominent tasks. Threat and error management (TEM): Checklists The use of checklists before, during and after flights has established a strong presence in all types of aviation as a means of managing error and reducing the possibility of risk. Checklists are highly regulated and consist of protocols and procedures for the majority of the actions required during a flight.
The objectives of checklists include "memory recall, standardization and regulation of processes or methodologies." The use of checklists in aviation has become an industry standard practice, and the completion of checklists from memory is considered a violation of protocol and pilot error. Studies have shown that increased errors in judgement and cognitive function of the brain, along with changes in memory function are a few of the effects of stress and fatigue. Both of these are inevitable human factors encountered in the commercial aviation industry. The use of checklists in emergency situations also contributes to troubleshooting and reverse examining the chain of events which may have led to the particular incident or crash. Apart from checklists issued by regulatory bodies such as the FAA or ICAO, or checklists made by aircraft manufacturers, pilots also have personal qualitative checklists aimed to ensure their fitness and ability to fly the aircraft. An example is the IM SAFE checklist (illness, medication, stress, alcohol, fatigue/food, emotion) and a number of other qualitative assessments which pilots may perform before or during a flight to ensure the safety of the aircraft and passengers. These checklists, along with a number of other redundancies integrated into most modern aircraft operation systems, ensure the pilot remains vigilant, and in turn, aims to reduce the risk of pilot error. Notable examples: One of the most famous examples of an aircraft disaster that was attributed to pilot error was the night-time crash of Eastern Air Lines Flight 401 near Miami, Florida on 29 December 1972. The captain, first officer, and flight engineer had become fixated on a faulty landing gear light and had failed to realize that one of the crew had accidentally bumped the flight controls, altering the autopilot settings from level flight to a slow descent. Told by ATC to hold over a sparsely populated area away from the airport while they dealt with the problem (with, as a result, very few lights visible on the ground to act as an external reference), the distracted flight crew did not notice the plane losing height and the aircraft eventually struck the ground in the Everglades, killing 101 of the 176 passengers and crew. The subsequent National Transportation Safety Board (NTSB) report on the incident blamed the flight crew for failing to monitor the aircraft's instruments properly. Details of the incident are now frequently used as a case study in training exercises by aircrews and air traffic controllers. Notable examples: During 2004 in the United States, pilot error was listed as the primary cause of 78.6% of fatal general aviation accidents, and as the primary cause of 75.5% of general aviation accidents overall. For scheduled air transport, pilot error typically accounts for just over half of worldwide accidents with a known cause. 28 July 1945 – A United States Army Air Forces B-25 bomber bound for Newark Airport crashed into the 79th floor of the Empire State Building after the pilot became lost in a heavy fog bank over Manhattan. All three crewmen were killed as well as eleven office workers in the building. Notable examples: 24 December 1958 – BOAC Bristol Britannia 312, registration G-AOVD, crashed as a result of a controlled flight into terrain (CFIT), near Winkton, England, while on a test flight. The crash was caused by a combination of bad weather and a failure on the part of both pilots to read the altimeter correctly. 
The first officer and two other people survived the crash. Notable examples: 3 January 1961 – Aero Flight 311 crashed near Kvevlax, Finland. All twenty-five occupants were killed in the accident, which was the deadliest in Finnish history. An investigation later determined that both pilots were intoxicated during the flight, and may have been interrupted by a passenger at the time of the crash. 28 February 1966 – American astronauts Elliot See and Charles Bassett were killed when their T-38 Talon crashed into a building at Lambert–St. Louis International Airport during bad weather. A NASA investigation concluded that See had been flying too low on his landing approach. 5 May 1972 - Alitalia Flight 112 crashed into Mount Longa after the flight crew did not adhere to approach procedures established by ATC. All 115 occupants perished. This is the worst single-aircraft disaster in Italian history. 29 December 1972 – Eastern Air Lines Flight 401 crashed into the Florida Everglades after the flight crew failed to notice the deactivation of the plane's autopilot, having been distracted by their own attempts to solve a problem with the landing gear. Out of 176 occupants, 75 survived the crash. 27 March 1977 – The Tenerife airport disaster: a senior KLM pilot failed to hear, understand or follow instructions from the control tower, causing two Boeing 747s to collide on the runway at Tenerife. A total of 583 people were killed in the deadliest aviation accident in history. Notable examples: 28 December 1978 – United Airlines Flight 173: a flight simulator instructor captain allowed his Douglas DC-8 to run out of fuel while investigating a landing gear problem. United Airlines subsequently changed their policy to disallow "simulator instructor time" in calculating a pilot's "total flight time". It was thought that a contributory factor to the accident is that an instructor can control the amount of fuel in simulator training so that it never runs out. Notable examples: 13 January 1982 – Air Florida Flight 90, a Boeing 737-200 with 79 passengers and crew, crashed into the 14th Street Bridge and careened into the Potomac River shortly after taking off from Washington National Airport, killing 75 passengers and crew, and four motorists on the bridge. The NTSB report blamed the flight crew for not properly employing the plane's de-icing system. Notable examples: 19 February 1985 – The crew of China Airlines Flight 006 lost control of their Boeing 747SP over the Pacific Ocean, after the No. 4 engine flamed out. The aircraft descended 30,000 feet in two-and-a-half minutes before control was regained. There were no fatalities but there were several injuries, and the aircraft was badly damaged. Notable examples: 16 August 1987 – The crew of Northwest Airlines Flight 255 omitted their taxi checklist and failed to deploy the aircraft's flaps and slats. Subsequently, the McDonnell Douglas MD-82 did not gain enough lift on takeoff and crashed into the ground, killing all but one of the 155 people on board, as well as two people on the ground. The sole survivor was a four-year-old girl named Cecelia Cichan, who was seriously injured. Notable examples: 28 August 1988 – The Ramstein airshow disaster: a member of an Italian aerobatic team misjudged a maneuver, causing a mid-air collision. Three pilots and 67 spectators on the ground were killed. 31 August 1988 – Delta Air Lines Flight 1141 crashed on takeoff after the crew forgot to deploy the flaps for increased lift. 
Of the 108 passengers and crew on board, fourteen were killed. Notable examples: 8 January 1989 – In the Kegworth air disaster, a fan blade broke off in the left engine of a new Boeing 737-400, but the pilots mistakenly shut down the right engine. The left engine eventually failed completely and the crew were unable to restart the right engine before the aircraft crashed. Instrumentation on the 737-400 was different from earlier models, but no flight simulator for the new model was available in Britain. Notable examples: 3 September 1989 – The crew of Varig Flight 254 made a series of mistakes so that their Boeing 737 ran out of fuel hundreds of miles off-course above the Amazon jungle. Thirteen died in the ensuing crash landing. 21 October 1989 – Tan-Sahsa Flight 414 crashed into a hill near Toncontin International Airport in Tegucigalpa, Honduras, because of a bad landing procedure by the pilot, killing 131 of the 146 passengers and crew. Notable examples: 14 February 1990 – Indian Airlines Flight 605 crashed into a golf course short of the runway near Hindustan Airport, India. The flight crew failed to pull up after radio callouts of how close they were into the ground. The plane struck a golf course and an embankment, bursting into flames. Of the 146 occupants on the plane, 92 died, including both flight crew. 54 occupants survived the crash. Notable examples: 24 November 1992 – China Southern Airlines Flight 3943 departed Guangzhou on a 55-minute flight to Guilin. During the descent towards Guilin, at an altitude of 7,000 feet (2,100 m), the captain attempted to level off the plane by raising the nose and the plane's auto-throttle was engaged for descent. However, the crew failed to notice that the number 2 power lever was at idle, which led to an asymmetrical power condition. The plane crashed on descent to Guilin Airport, killing all 141 on board. Notable examples: 23 March 1994 – Aeroflot Flight 593, an Airbus A310-300, crashed on its way to Hong Kong. The captain, Yaroslav Kudrinsky, invited his two children into the cockpit, and permitted them to sit at the controls, against airline regulations. His sixteen-year-old son, Eldar Kudrinsky, accidentally disconnected the autopilot, causing the plane to bank to the right before diving. The co-pilot brought up the plane too far, causing it to stall and start a flat spin. The pilots eventually recovered the plane, but it crashed into a forest, killing all 75 people on board. Notable examples: 24 June 1994 – B-52 crashes in Fairchild Air Force Base. The crash was largely attributed to the personality and behavior of Lt Col Arthur "Bud" Holland, the pilot in command, and delayed reactions to the earlier incidents involving this pilot. After past histories, Lt Col Mark McGeehan, a USAF squadron commander, refused to allow any of his squadron members to fly with Holland unless he (McGeehan) was also on the aircraft. This crash is now used in military and civilian aviation environments as a case study in teaching crew resource management. Notable examples: 30 June 1994 – Airbus Industrie Flight 129, a certification test flight of the Airbus A330-300, crashed at Toulouse-Blagnac Airport. While simulating an engine-out emergency just after takeoff with an extreme center of gravity location, the pilots chose improper manual settings which rendered the autopilot incapable of keeping the plane in the air, and by the time the captain regained manual control, it was too late. 
The aircraft was destroyed, killing the flight crew, a test engineer, and four passengers. The investigative board concluded that the captain was overworked from earlier flight testing that day, and was unable to devote sufficient time to the preflight briefing. As a result, Airbus had to revise the engine-out emergency procedures. Notable examples: 2 July 1994 – USAir Flight 1016 crashed into a residential house due to spatial disorientation. 37 passengers were killed and the airplane was destroyed. Notable examples: 20 December 1995 – American Airlines Flight 965, a Boeing 757-200 with 155 passengers and eight crew members, departed Miami approximately two hours behind schedule at 1835 Eastern Standard Time (EST). The investigators believe that the pilot's unfamiliarity with the modern technology installed in the Boeing 757-200 may have played a role. The pilots did not know their location in relation to a radio beacon in Tulua. The aircraft was equipped to provide that information electronically, but according to sources familiar with the investigation, the pilot apparently did not know how to access the information. The captain input the wrong coordinates, and the aircraft crashed into the mountains, killing 159 of the 163 people on board. Notable examples: 8 May 1997 – China Southern Airlines Flight 3456 crashed into the runway at Shenzhen Huangtian Airport during the crew's second go-around attempt, killing 35 of the 74 people on board. The crew had unknowingly violated landing procedures, due to heavy weather. 6 August 1997 – Korean Air Flight 801, a Boeing 747-300, crashed into Nimitz Hill, three miles from Guam International Airport, killing 228 of the 254 people on board. The captain's failure to properly conduct a non-precision approach contributed to the accident. The NTSB said pilot fatigue was a possible factor. Notable examples: 26 September 1997 – Garuda Indonesia Flight 152, an Airbus A300, crashed into a ravine, killing all 234 people on board. The NTSC concluded that the crash was caused when the pilots turned the aircraft in the wrong direction, along with ATC error. Low visibility and failure of the GPWS to activate were cited as contributing factors to the accident. Notable examples: 12 October 1997 – Singer John Denver died when his newly-acquired Rutan Long-EZ home-built aircraft crashed into the Pacific Ocean off Pacific Grove, California. The NTSB indicated that Denver lost control of the aircraft while attempting to manipulate the fuel selector handle, which had been placed in an inaccessible position by the aircraft's builder. The NTSB cited Denver's unfamiliarity with the aircraft's design as a cause of the crash. Notable examples: 16 February 1998 – China Airlines Flight 676 was attempting to land at Chiang Kai-Shek International Airport but had to initiate a go-around due to the bad weather conditions. However, the pilots accidentally disengaged the autopilot and did not notice for 11 seconds. When they did notice, the Airbus A300 had entered a stall. The aircraft crashed into a highway and residential area, and exploded, killing all 196 people on board, as well as seven people on the ground. Notable examples: 16 July 1999 – John F. Kennedy, Jr. died when his plane, a Piper Saratoga, crashed into the Atlantic Ocean off the coast of Martha's Vineyard, Massachusetts. The NTSB officially declared that the crash was caused by "the pilot's failure to maintain control of his airplane during a descent over water at night, which was a result of spatial disorientation".
Kennedy did not hold a certification for IFR flight, but did continue to fly after weather conditions obscured visual landmarks. Notable examples: 31 August 1999 – Lineas Aéreas Privadas Argentinas (LAPA) flight 3142 crashed after an attempted take-off with the flaps retracted, killing 63 of the 100 occupants on the plane as well as two people on the ground. 31 October 2000 – Singapore Airlines Flight 006 was a Boeing 747-412 that took off from the wrong runway at the then Chiang Kai-Shek International Airport. It collided with construction equipment on the runway, bursting into flames and killing 83 of its 179 occupants. 12 November 2001 – American Airlines Flight 587 encountered heavy turbulence and the co-pilot over-applied the rudder pedal, turning the Airbus A300 from side to side. The excessive stress caused the rudder to fail. The A300 spun and hit a residential area, crushing five houses and killing 265 people. Contributing factors included wake turbulence and pilot training. 24 November 2001 – Crossair Flight 3597 crashed into a forest on approach to runway 28 at Zurich Airport. This was caused by Captain Lutz descending below the minimum safe altitude of 2400 feet on approach to the runway. 15 April 2002 – Air China Flight 129, a Boeing 767-200, crashed near Busan, South Korea killing 128 of the 166 people on board. The pilot and co-pilot had been flying too low. 25 October 2002 – Eight people, including U.S. Senator Paul Wellstone, were killed in a crash near Eveleth, Minnesota. The NTSB concluded that "the flight crew did not monitor and maintain minimum speed. Notable examples: 3 January 2004 – Flash Airlines Flight 604 dived into the Red Sea shortly after takeoff, killing all 148 people on board. The captain had been experiencing vertigo and had not noticed that his control column was slanted to the right. The Boeing 737 banked until it was no longer able to stay in the air. However, the investigation report was disputed. Notable examples: 26 February 2004 – A Beech 200 carrying Macedonian President Boris Trajkovski crashed, killing the president and eight other passengers. The crash investigation ruled that the accident was caused by "procedural mistakes by the crew" during the landing approach. 14 August 2005 – The pilots of Helios Airways Flight 522 lost consciousness, most likely due to hypoxia caused by failure to switch the cabin pressurization to "Auto" during the pre-flight preparations. The Boeing 737-300 crashed after running out of fuel, killing all on board. Notable examples: 16 August 2005 – The crew of West Caribbean Airways Flight 708 unknowingly (and dangerously) decreased the speed of the McDonnell Douglas MD-82, causing it to enter a stall. The situation was incorrectly handled by the crew, with the captain believing that the engines had flamed out, while the first officer, who was aware of the stall, attempted to correct him. The aircraft crashed into the ground near Machiques, Venezuela, killing all 160 people on board. Notable examples: 3 May 2006 – Armavia Flight 967 lost control and crashed into the Black Sea while approaching Sochi-Adler Airport in Russia, killing all 113 people on board. The pilots were fatigued and flying under stressful conditions. Their stress levels were pushed over the limit, causing them to lose their situational awareness. 
Notable examples: 27 August 2006 – Comair Flight 5191 failed to become airborne and crashed at Blue Grass Airport, after the flight crew mistakenly attempted to take off from a secondary runway that was much shorter than the intended takeoff runway. All but one of the 50 people on board the plane died, including the 47 passengers. The sole survivor was the flight's first officer, James Polhinke. Notable examples: 1 January 2007 – The crew of Adam Air Flight 574 were preoccupied with a malfunction of the inertial reference system, which diverted their attention from the flight instruments, allowing the increasing descent and bank angle to go unnoticed. Appearing to have become spatially disoriented, the pilots did not detect and appropriately arrest the descent soon enough to prevent loss of control. This caused the aircraft to break up in mid air and crash into the water, killing all 102 people on board. Notable examples: 7 March 2007 – Garuda Indonesia Flight 200: poor Crew Resource Management and the failure to extend the flaps led the aircraft to land at an "unimaginable" speed and run off the end of the runway after landing. Of the 140 occupants, 22 were killed. 17 July 2007 – TAM Airlines Flight 3054: the thrust reverser on the right engine of the Airbus A320 was jammed. Although both crew members were aware, the captain used an outdated braking procedure, and the aircraft overshot the runway and crashed into a building, killing all 187 people on board, as well as 12 people on the ground. 20 August 2008 – The crew of Spanair Flight 5022 failed to deploy the MD-82's flaps and slats. The flight crashed after takeoff, killing 154 out of the 172 passengers and crew on board. Notable examples: 12 February 2009 – Colgan Air Flight 3407 (flying as Continental Connection) entered a stall and crashed into a house in Clarence Center, New York, due to lack of situational awareness of air speed by the captain and first officer and the captain's improper reaction to the plane's stick-shaker stall warning system. All 49 people on board the plane died, as well as one person inside the house. Notable examples: 1 June 2009 – Air France Flight 447 entered a stall and crashed into the Atlantic Ocean following pitot tube failures and improper control inputs by the first officer. All 216 passengers and twelve crew members died. Notable examples: 10 April 2010 – 2010 Polish Air Force Tu-154 crash: during a descent towards Russia's Smolensk North Airport, the flight crew of the Polish presidential jet ignored automatic warnings and attempted a risky landing in heavy fog. The Tupolev Tu-154M descended too low and crashed into a nearby forest; all of the occupants were killed, including Polish president Lech Kaczynski, his wife Maria Kaczynska, and numerous government and military officials. Notable examples: 12 May 2010 – Afriqiyah Airways Flight 771 The aircraft crashed about 1,200 meters (1,300 yd; 3,900 ft) short of Runway 09, outside the perimeter of Tripoli International Airport, killing all but one of the 104 people on board. The sole survivor was a 9-year-old boy named Ruben Van Assouw. On 28 February 2013, the Libyan Civil Aviation Authority announced that the crash was caused by pilot error. Factors that contributed to the crash were lacking/insufficient crew resource management, sensory illusions, and the first officer's inputs to the aircraft side stick; fatigue could also have played a role in the accident. 
The final report cited the following causes: the pilots' lack of a common action plan during the approach; the final approach being continued below the Minimum Decision Altitude without ground visual reference being acquired; the inappropriate application of flight control inputs during the go-around and after the Terrain Awareness and Warning System had been activated; and the flight crew's failure to monitor and control the flight path. Notable examples: 22 May 2010 – Air India Express Flight 812 overshot the runway at Mangalore Airport, killing 158 people. The plane touched down 610 meters (670 yd) from the usual touchdown point after a steep descent. CVR recordings showed that the captain had been sleeping and had woken up just minutes before the landing. His lack of alertness made the plane land very quickly and steeply and it ran off the end of the tabletop runway. Notable examples: 28 July 2010 – The captain of Airblue Flight 202 became confused with the heading knob and thought that he had carried out the correct action to turn the plane. However, due to his failure to pull the heading knob, the turn was not executed. The Airbus A321 went astray and slammed into the Margalla Hills, killing all 152 people on board. Notable examples: 20 June 2011 – RusAir Flight 9605 crashed onto a motorway while on its final approach to Petrozavodsk Airport in western Russia, after the intoxicated navigator encouraged the captain to land in heavy fog. Only five of the 52 people on board the plane survived the crash. 6 July 2013 – Asiana Airlines Flight 214's tail struck the seawall short of runway 28L at San Francisco International Airport. Of the 307 passengers and crew, three people died and 187 were injured when the aircraft slid down the runway. Investigators said the accident was caused by lower than normal approach speed and incorrect approach path during landing. Notable examples: 23 July 2014 – TransAsia Airways Flight 222 brushed trees and crashed into six houses in a residential area in Xixi Village, Penghu Island, Taiwan. Of the 58 people on board the flight, only ten people survived the crash. The captain was overconfident with his skill and intentionally descended and rolled the plane to the left. Crew members did not realize that they were at a dangerously low altitude and the plane was about to impact terrain until two seconds before the crash. Notable examples: 28 December 2014 – Indonesia AirAsia Flight 8501 crashed into the Java Sea as a result of an aerodynamic stall due to pilot error. The aircraft was climbed at a rate far beyond its operational limits. All 155 passengers and 7 crew members on board were killed. Notable examples: 6 February 2015 – TransAsia Airways Flight 235: one of the ATR 72's engines experienced a flameout. As airplanes are able to fly on one engine alone, the pilot then shut down one of the engines. However, he accidentally shut off the engine that was functioning correctly and left the plane powerless, at which point he unsuccessfully tried to restart both engines. The plane then clipped a bridge and plummeted into the Keelung river as the pilot tried to avoid city terrain, killing 43 of the 58 on board.
**Rose Guns Days** Rose Guns Days: Rose Guns Days (ローズガンズデイズ, Rōzu Ganzu Deizu) is a four-part Japanese dōjin visual novel series produced by 07th Expansion and playable on Windows PCs. The first game in the series, Season 1, was released on August 11, 2012, and the fourth game, Last Season, was released on December 31, 2013. There have been six manga adaptations based on Rose Guns Days published by Kodansha and Square Enix. Gameplay: As a visual novel, the gameplay in Rose Guns Days is spent on reading the story's narrative and dialogue. The text is accompanied by character sprites over background art made from altering real-world photographs. Throughout gameplay, the player encounters a quick time event minigame of a fist fight battle sequence that resembles a fighting game. The outcome of these minigames does not influence the progression of the story, and the player even has the option to skip them. The minigame has two parts: attack and defense. After the player successfully attacks three times, the player can then perform an overkill attack whose power is determined by the choice of one of six cards. How well the player performs in the minigames determines the total score and emblems awarded, and as the score increases, so does the player's rank, which increases the difficulty of the minigames. Development and release: Rose Guns Days is 07th Expansion's fourth visual novel series. The scenario is written entirely by Ryukishi07, who also provides some of the character designs, which are divided between three additional artists: Jirō Suzuki, Sōichirō and Yaeko Ninagawa. The music of Rose Guns Days is provided by various music artists including both professionals and dōjin artists including: Dai, Luck Ganriki, Rokugen Alice, M. Zakky and Pre-holder. The first game in the series, titled Season 1, was released on August 11, 2012 at Comiket 82 and is playable on Windows PCs. Season 2 was released on December 31, 2012 at Comiket 83. Season 3 was released on August 10, 2013 at Comiket 84. The fourth and final game in the series, Last Season, was released on December 31, 2013 at Comiket 85. A version of Season 1 playable on iOS devices was released on November 9, 2012, followed by a version playable on Android devices released on December 13, 2012. All four seasons were also distributed by MangaGamer between February 7, 2014 and April 25, 2015 for explicit use with the English translation patch. Related media: Manga A manga adaptation of Rose Guns Days Season 1, illustrated by Sōichirō, was serialized between the September 2012 and March 2014 issues of Square Enix's Gangan Joker magazine. Four tankōbon volumes for Season 1 were released between December 22, 2012 and April 22, 2014. A manga adaptation of Rose Guns Days Season 2, illustrated by Nana Natsunishi, was serialized between the February 2013 and April 2014 issues of Square Enix's GFantasy magazine. Three volumes for Season 2 were released between August 22, 2013 and April 22, 2014. A manga adaptation of Rose Guns Days Season 3, illustrated by Yō Ōmura, began serialization in Square Enix's Gangan Online magazine on September 19, 2013. The first volume for Season 3 was released on April 22, 2014. A manga adaptation of Rose Guns Days Last Season, illustrated by Mitsunori Zaki, began serialization in the May 2014 issue of Square Enix's Big Gangan magazine. 
Yen Press licensed the manga for release in North America. A spin-off manga, titled Rose Guns Days Aishū no Cross Knife (Rose Guns Days 哀愁のクロスナイフ) and illustrated by Yūji Takagi, was serialized in Big Gangan between December 25, 2012 and October 25, 2013. Two volumes for Aishū no Cross Knife were released between August 22 and December 21, 2013. A prologue manga, titled Rose Guns Days Fukushū wa Ōgon no Kaori (Rose Guns Days 復讐は黄金の香り) and illustrated by Mei Renjōji, was serialized between the June 2013 and June 2014 issues of Kodansha's Monthly Shōnen Sirius magazine. The first volume of Fukushū wa Ōgon no Kaori was released on November 8, 2013 and the second and last on July 9, 2014. Related media: Music The opening theme to Rose Guns Days is "Ai wa Omerta" (愛はオメルタ) by Rojak feat. Mayumi. A soundtrack titled Rose Guns Days Sound Tracks 1 was released on August 11, 2012, and a soundtrack titled Rose Guns Days Sound Tracks 2 was released on December 30, 2012.
**Properties of water** Properties of water: Water (H2O) is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide). Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows water to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity. Properties of water: Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both H+ and OH− ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities, or approximately, the concentrations of H+ and OH− is a constant, so their respective concentrations are inversely proportional to each other.
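A small worked example of that inverse relationship (the ion product Kw ≈ 1.0 × 10⁻¹⁴ at 25 °C is a standard textbook value assumed here; it is not quoted in the text above):

```python
# Self-ionization of water: [H+][OH-] is approximately constant, so the two
# concentrations are inversely proportional. Kw = 1.0e-14 (mol/L)^2 at 25 degC
# is a standard value assumed for illustration.

import math

KW = 1.0e-14

def hydroxide_from_hydronium(h_plus: float) -> float:
    """Given [H+] in mol/L, return the corresponding [OH-] in mol/L."""
    return KW / h_plus

for h in (1e-3, 1e-7, 1e-11):          # acidic, neutral, basic solutions
    oh = hydroxide_from_hydronium(h)
    print(f"[H+]={h:.0e} M, [OH-]={oh:.0e} M, "
          f"pH={-math.log10(h):.0f}, pOH={-math.log10(oh):.0f}")
# At 25 degC the pH and pOH in each line sum to 14.
```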
A likely example of naturally occurring supercritical water is in the hottest parts of deep water hydrothermal vents, in which water is heated to the critical temperature by volcanic plumes and the critical pressure is caused by the weight of the ocean at the extreme depths where the vents are located. This pressure is reached at a depth of about 2200 meters: much less than the mean depth of the ocean (3800 meters). Physical properties: Heat capacity and heats of vaporization and fusion Water has a very high specific heat capacity of 4184 J/(kg·K) at 20 °C (4182 J/(kg·K) at 25 °C)—the second-highest among all the heteroatomic species (after ammonia), as well as a high heat of vaporization (40.65 kJ/mol or 2257 kJ/kg at the normal boiling point), both of which are a result of the extensive hydrogen bonding between its molecules. These two unusual properties allow water to moderate Earth's climate by buffering large fluctuations in temperature. Most of the additional energy stored in the climate system since 1970 has accumulated in the oceans. The specific enthalpy of fusion (more commonly known as latent heat) of water is 333.55 kJ/kg at 0 °C: the same amount of energy is required to melt ice as to warm ice from −160 °C up to its melting point or to heat the same amount of water by about 80 °C. Of common substances, only that of ammonia is higher. This property confers resistance to melting on the ice of glaciers and drift ice. Before and since the advent of mechanical refrigeration, ice was and still is in common use for retarding food spoilage. Physical properties: The specific heat capacity of ice at −10 °C is 2030 J/(kg·K) and the heat capacity of steam at 100 °C is 2080 J/(kg·K). Physical properties: Density of water and ice The density of water is about 1 gram per cubic centimetre (62 lb/cu ft): this relationship was originally used to define the gram. The density varies with temperature, but not linearly: as the temperature increases, the density rises to a peak at 3.98 °C (39.16 °F) and then decreases; the initial increase is unusual because most liquids undergo thermal expansion so that the density only decreases as a function of temperature. The increase observed for water from 0 °C (32 °F) to 3.98 °C (39.16 °F) and for a few other liquids is described as negative thermal expansion. Regular, hexagonal ice is also less dense than liquid water—upon freezing, the density of water decreases by about 9%. These peculiar effects are due to the highly directional bonding of water molecules via the hydrogen bonds: ice and liquid water at low temperature have comparatively low-density, low-energy open lattice structures. The breaking of hydrogen bonds on melting with increasing temperature in the range 0–4 °C allows for a denser molecular packing in which some of the lattice cavities are filled by water molecules. Above 4 °C, however, thermal expansion becomes the dominant effect, and water near the boiling point (100 °C) is about 4% less dense than water at 4 °C (39 °F). Under increasing pressure, ice undergoes a number of transitions to other polymorphs with higher density than liquid water, such as ice II, ice III, high-density amorphous ice (HDA), and very-high-density amorphous ice (VHDA). Physical properties: The unusual density curve and lower density of ice than of water is essential for much of the life on earth—if water were most dense at the freezing point, then in winter the cooling at the surface would lead to convective mixing.
Once 0 °C is reached, the water body would freeze from the bottom up, and all life in it would be killed. Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer. As it is, the inversion of the density curve leads to a stable layering for surface temperatures below 4 °C, and with the layer of ice that floats on top insulating the water below, even Lake Baikal in central Siberia, for example, freezes only to about 1 m thickness in winter. In general, for deep enough lakes, the temperature at the bottom stays constant at about 4 °C (39 °F) throughout the year. Physical properties: Density of saltwater and ice The density of saltwater depends on the dissolved salt content as well as the temperature. Ice still floats in the oceans; otherwise, they would freeze from the bottom up. However, the salt content of oceans lowers the freezing point by about 1.9 °C (due to freezing-point depression of a solvent containing a solute) and lowers the temperature of the density maximum of water to the former freezing point at 0 °C. This is why, in ocean water, the downward convection of colder water is not blocked by an expansion of water as it becomes colder near the freezing point. The oceans' cold water near the freezing point continues to sink. So creatures that live at the bottom of cold oceans like the Arctic Ocean generally live in water 4 °C colder than at the bottom of frozen-over fresh water lakes and rivers. Physical properties: As the surface of saltwater begins to freeze (at −1.9 °C for normal salinity seawater, 3.5%) the ice that forms is essentially salt-free, with about the same density as freshwater ice. This ice floats on the surface, and the salt that is "frozen out" adds to the salinity and density of the seawater just below it, in a process known as brine rejection. This denser saltwater sinks by convection and the replacing seawater is subject to the same process. This produces essentially freshwater ice at −1.9 °C on the surface. The increased density of the seawater beneath the forming ice causes it to sink towards the bottom. On a large scale, the process of brine rejection and sinking cold salty water results in ocean currents forming to transport such water away from the Poles, leading to a global system of currents called the thermohaline circulation. Physical properties: Miscibility and condensation Water is miscible with many liquids, including ethanol in all proportions. Water and most oils are immiscible, usually forming layers according to increasing density from the top. This can be predicted by comparing the polarity. Water, being a relatively polar compound, will tend to be miscible with liquids of high polarity such as ethanol and acetone, whereas compounds with low polarity will tend to be immiscible and poorly soluble, as with hydrocarbons. Physical properties: As a gas, water vapor is completely miscible with air. On the other hand, the maximum water vapor pressure that is thermodynamically stable with the liquid (or solid) at a given temperature is relatively low compared with total atmospheric pressure. For example, if the vapor's partial pressure is 2% of atmospheric pressure and the air is cooled from 25 °C, starting at about 22 °C, water will start to condense, defining the dew point, and creating fog or dew. The reverse process accounts for the fog burning off in the morning.
If the humidity is increased at room temperature, for example, by running a hot shower or a bath, and the temperature stays about the same, the vapor soon reaches the pressure for phase change and then condenses out as minute water droplets, commonly referred to as steam. Physical properties: A gas is saturated, or has 100% relative humidity, when the vapor pressure of water in the air is at equilibrium with the vapor pressure due to (liquid) water; water (or ice, if cool enough) will fail to lose mass through evaporation when exposed to saturated air. Because the amount of water vapor in the air is small, relative humidity, the ratio of the partial pressure due to the water vapor to the saturated partial vapor pressure, is much more useful. Vapor pressure above 100% relative humidity is called supersaturated and can occur if the air is rapidly cooled, for example, by rising suddenly in an updraft. Physical properties: Compressibility The compressibility of water is a function of pressure and temperature. At 0 °C, at the limit of zero pressure, the compressibility is 5.1×10−10 Pa−1. At the zero-pressure limit, the compressibility reaches a minimum of 4.4×10−10 Pa−1 around 45 °C before increasing again with increasing temperature. As the pressure is increased, the compressibility decreases, being 3.9×10−10 Pa−1 at 0 °C and 100 megapascals (1,000 bar). The bulk modulus of water is about 2.2 GPa. The low compressibility of non-gases, and of water in particular, leads to their often being assumed to be incompressible. The low compressibility of water means that even in the deep oceans at 4 km depth, where pressures are 40 MPa, there is only a 1.8% decrease in volume. The bulk modulus of water ice ranges from 11.3 GPa at 0 K down to 8.6 GPa at 273 K. The large change in the compressibility of ice as a function of temperature is the result of its relatively large thermal expansion coefficient compared to other common solids. Physical properties: Triple point The temperature and pressure at which ordinary solid, liquid, and gaseous water coexist in equilibrium is a triple point of water. From 1954 to 2019, this point was used to define the base unit of temperature, the kelvin; since 2019, the kelvin has been defined using the Boltzmann constant rather than the triple point of water. Due to the existence of many polymorphs (forms) of ice, water has other triple points, which have either three polymorphs of ice or two polymorphs of ice and liquid in equilibrium. Gustav Heinrich Johann Apollon Tammann in Göttingen produced data on several other triple points in the early 20th century. Kamb and others documented further triple points in the 1960s. Physical properties: Melting point The melting point of ice is 0 °C (32 °F; 273 K) at standard pressure; however, pure liquid water can be supercooled well below that temperature without freezing if the liquid is not mechanically disturbed. It can remain in a fluid state down to its homogeneous nucleation point of about 231 K (−42 °C; −44 °F). The melting point of ordinary hexagonal ice falls slightly under moderately high pressures, by 0.0073 °C (0.0131 °F)/atm or about 0.5 °C (0.90 °F)/70 atm, as the stabilization energy of hydrogen bonding is exceeded by intermolecular repulsion, but as ice transforms into its polymorphs (see crystalline states of ice) above 209.9 MPa (2,072 atm), the melting point increases markedly with pressure, reaching 355 K (82 °C) at 2.216 GPa (21,870 atm) (the triple point of ice VII).
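The compressibility figures quoted above can be cross-checked with a short calculation. The following is a minimal Python sketch (not part of the original text); it assumes a constant compressibility equal to the quoted minimum value of 4.4×10−10 Pa−1 and a representative seawater density, and it reproduces the roughly 1.8% volume decrease cited for a depth of 4 km.

```python
# Rough estimate of how much water is compressed at the bottom of a 4 km ocean.
# Assumes a constant isothermal compressibility (the quoted minimum value);
# in reality the compressibility falls slightly with pressure.

kappa = 4.4e-10   # isothermal compressibility of water, 1/Pa (value quoted above)
rho = 1025        # approximate seawater density, kg/m^3 (an assumption)
g = 9.81          # gravitational acceleration, m/s^2
depth = 4000      # m

pressure = rho * g * depth          # hydrostatic pressure, ~4.0e7 Pa (~40 MPa)
dV_over_V = kappa * pressure        # linearised fractional volume change

print(f"pressure at {depth} m: {pressure / 1e6:.0f} MPa")
print(f"fractional volume decrease: {dV_over_V * 100:.1f} %")   # about 1.8 %
```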
Physical properties: Electrical properties Electrical conductivity Pure water containing no exogenous ions is an excellent electronic insulator, but not even "deionized" water is completely free of ions. Water undergoes autoionization in the liquid state when two water molecules form one hydroxide anion (OH−) and one hydronium cation (H3O+). Because of autoionization, at ambient temperatures pure liquid water has a similar intrinsic charge carrier concentration to the semiconductor germanium and an intrinsic charge carrier concentration three orders of magnitude greater than the semiconductor silicon; hence, based on charge carrier concentration, water cannot be considered to be a completely dielectric material or electrical insulator but rather a limited conductor of ionic charge. Because water is such a good solvent, it almost always has some solute dissolved in it, often a salt. If water has even a tiny amount of such an impurity, then the ions can carry charges back and forth, allowing the water to conduct electricity far more readily. Physical properties: It is known that the theoretical maximum electrical resistivity for water is approximately 18.2 MΩ·cm (182 kΩ·m) at 25 °C. This figure agrees well with what is typically seen on reverse osmosis, ultra-filtered and deionized ultra-pure water systems used, for instance, in semiconductor manufacturing plants. A salt or acid contaminant level exceeding even 100 parts per trillion (ppt) in otherwise ultra-pure water begins to noticeably lower its resistivity by up to several kΩ·m. In pure water, sensitive equipment can detect a very slight electrical conductivity of 0.05501 ± 0.0001 μS/cm at 25.00 °C. Water can also be electrolyzed into oxygen and hydrogen gases but in the absence of dissolved ions this is a very slow process, as very little current is conducted. In ice, the primary charge carriers are protons (see proton conductor). Ice was previously thought to have a small but measurable conductivity of 1×10−10 S/cm, but this conductivity is now thought to be almost entirely from surface defects, and without those, ice is an insulator with an immeasurably small conductivity. Physical properties: Polarity and hydrogen bonding An important feature of water is its polar nature. The water molecule has a bent geometry, with the two hydrogens bonded to the oxygen vertex. The oxygen atom also has two lone pairs of electrons. One effect usually ascribed to the lone pairs is that the H–O–H gas-phase bend angle is 104.48°, which is smaller than the typical tetrahedral angle of 109.47°. The lone pairs are closer to the oxygen atom than the electrons sigma bonded to the hydrogens, so they require more space. The increased repulsion of the lone pairs forces the O–H bonds closer to each other. Another consequence of its structure is that water is a polar molecule. Due to the difference in electronegativity, a bond dipole moment points from each H to the O, making the oxygen partially negative and each hydrogen partially positive. A large molecular dipole points from a region between the two hydrogen atoms to the oxygen atom. The charge differences cause water molecules to aggregate (the relatively positive areas being attracted to the relatively negative areas).
This attraction, hydrogen bonding, explains many of the properties of water, such as its solvent properties. Although hydrogen bonding is a relatively weak attraction compared to the covalent bonds within the water molecule itself, it is responsible for several of water's physical properties. These properties include its relatively high melting and boiling point temperatures: more energy is required to break the hydrogen bonds between water molecules. In contrast, hydrogen sulfide (H2S) has much weaker hydrogen bonding due to sulfur's lower electronegativity. H2S is a gas at room temperature, despite hydrogen sulfide having nearly twice the molar mass of water. The extra bonding between water molecules also gives liquid water a large specific heat capacity. This high heat capacity makes water a good heat storage medium (coolant) and heat shield. Physical properties: Cohesion and adhesion Water molecules stay close to each other (cohesion), due to the collective action of hydrogen bonds between water molecules. These hydrogen bonds are constantly breaking, with new bonds being formed with different water molecules; but at any given time in a sample of liquid water, a large portion of the molecules are held together by such bonds. Water also has high adhesion properties because of its polar nature. On clean, smooth glass the water may form a thin film because the molecular forces between glass and water molecules (adhesive forces) are stronger than the cohesive forces. In biological cells and organelles, water is in contact with membrane and protein surfaces that are hydrophilic; that is, surfaces that have a strong attraction to water. Irving Langmuir observed a strong repulsive force between hydrophilic surfaces. To dehydrate hydrophilic surfaces—to remove the strongly held layers of water of hydration—requires doing substantial work against these forces, called hydration forces. These forces are very large but decrease rapidly over a nanometer or less. They are important in biology, particularly when cells are dehydrated by exposure to dry atmospheres or to extracellular freezing. Physical properties: Surface tension Water has an unusually high surface tension of 71.99 mN/m at 25 °C, which is caused by the strength of the hydrogen bonding between water molecules. This allows insects to walk on water. Physical properties: Capillary action Because water has strong cohesive and adhesive forces, it exhibits capillary action. Strong cohesion from hydrogen bonding and adhesion allows trees to transport water more than 100 m upward. Physical properties: Water as a solvent Water is an excellent solvent due to its high dielectric constant. Substances that mix well and dissolve in water are known as hydrophilic ("water-loving") substances, while those that do not mix well with water are known as hydrophobic ("water-fearing") substances. The ability of a substance to dissolve in water is determined by whether or not the substance can match or better the strong attractive forces that water molecules generate between other water molecules. If a substance has properties that do not allow it to overcome these strong intermolecular forces, the molecules are precipitated out from the water. Contrary to the common misconception, water and hydrophobic substances do not "repel", and the hydration of a hydrophobic surface is energetically, but not entropically, favorable. Physical properties: When an ionic or polar compound enters water, it is surrounded by water molecules (hydration).
The relatively small size of water molecules (~ 3 angstroms) allows many water molecules to surround one molecule of solute. The partially negative dipole ends of the water are attracted to positively charged components of the solute, and vice versa for the positive dipole ends. Physical properties: In general, ionic and polar substances such as acids, alcohols, and salts are relatively soluble in water, and nonpolar substances such as fats and oils are not. Nonpolar molecules stay together in water because it is energetically more favorable for the water molecules to hydrogen bond to each other than to engage in van der Waals interactions with non-polar molecules. Physical properties: An example of an ionic solute is table salt; the sodium chloride, NaCl, separates into Na+ cations and Cl− anions, each being surrounded by water molecules. The ions are then easily transported away from their crystalline lattice into solution. An example of a nonionic solute is table sugar. The water dipoles make hydrogen bonds with the polar regions of the sugar molecule (OH groups) and allow it to be carried away into solution. Physical properties: Quantum tunneling The quantum tunneling dynamics in water was reported as early as 1992. At that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers. On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds. Later in the same year, the discovery of the quantum tunneling of water molecules was reported. Physical properties: Electromagnetic absorption Water is relatively transparent to visible light, near ultraviolet light, and far-red light, but it absorbs most ultraviolet light, infrared light, and microwaves. Most photoreceptors and photosynthetic pigments utilize the portion of the light spectrum that is transmitted well through water. Microwave ovens take advantage of water's opacity to microwave radiation to heat the water inside of foods. Water's light blue color is caused by weak absorption in the red part of the visible spectrum. Structure: A single water molecule can participate in a maximum of four hydrogen bonds because it can accept two bonds using the lone pairs on oxygen and donate two hydrogen atoms. Other molecules like hydrogen fluoride, ammonia, and methanol can also form hydrogen bonds. However, they do not show anomalous thermodynamic, kinetic, or structural properties like those observed in water because none of them can form four hydrogen bonds: either they cannot donate or accept hydrogen atoms, or there are steric effects in bulky residues. In water, intermolecular tetrahedral structures form due to the four hydrogen bonds, thereby forming an open structure and a three-dimensional bonding network, resulting in the anomalous decrease in density when cooled below 4 °C. This repeated, constantly reorganizing unit defines a three-dimensional network extending throughout the liquid. This view is based upon neutron scattering studies and computer simulations, and it makes sense in the light of the unambiguously tetrahedral arrangement of water molecules in ice structures. Structure: However, there is an alternative theory for the structure of water. 
In 2004, a controversial paper from Stockholm University suggested that water molecules in the liquid state typically bind not to four but only two others, thus forming chains and rings. The term "string theory of water" (which is not to be confused with the string theory of physics) was coined. These observations were based upon X-ray absorption spectroscopy that probed the local environment of individual oxygen atoms. Structure: Molecular structure The repulsive effects of the two lone pairs on the oxygen atom cause water to have a bent, not linear, molecular structure, allowing it to be polar. The hydrogen–oxygen–hydrogen angle is 104.45°, which is less than the 109.47° for ideal sp3 hybridization. The valence bond theory explanation is that the oxygen atom's lone pairs are physically larger and therefore take up more space than the oxygen atom's bonds to the hydrogen atoms. The molecular orbital theory explanation (Bent's rule) is that lowering the energy of the oxygen atom's nonbonding hybrid orbitals (by assigning them more s character and less p character) and correspondingly raising the energy of the oxygen atom's hybrid orbitals bonded to the hydrogen atoms (by assigning them more p character and less s character) has the net effect of lowering the energy of the occupied molecular orbitals, because the energy of the oxygen atom's nonbonding hybrid orbitals contributes completely to the energy of the oxygen atom's lone pairs while the energy of the oxygen atom's other two hybrid orbitals contributes only partially to the energy of the bonding orbitals (the remainder of the contribution coming from the hydrogen atoms' 1s orbitals). Chemical properties: Self-ionization In liquid water there is some self-ionization giving hydronium ions and hydroxide ions. Chemical properties: 2 H2O ⇌ H3O+ + OH−. The equilibrium constant for this reaction, known as the ionic product of water, Kw = [H3O+][OH−], has a value of about 10−14 at 25 °C. At neutral pH, the concentration of the hydroxide ion (OH−) equals that of the (solvated) hydrogen ion (H+), with a value close to 10−7 mol L−1 at 25 °C. See data page for values at other temperatures. Chemical properties: The thermodynamic equilibrium constant is a quotient of thermodynamic activities of all products and reactants including water: Keq = (aH3O+ · aOH−) / (aH2O)². However, for dilute solutions, the activity of a solute such as H3O+ or OH− is approximated by its concentration, and the activity of the solvent H2O is approximated by 1, so that we obtain the simple ionic product Keq ≈ Kw = [H3O+][OH−]. Geochemistry: The action of water on rock over long periods of time typically leads to weathering and water erosion, physical processes that convert solid rocks and minerals into soil and sediment, but under some conditions chemical reactions with water occur as well, resulting in metasomatism or mineral hydration, a type of chemical alteration of a rock which produces clay minerals. It also occurs when Portland cement hardens. Chemical properties: Water ice can form clathrate compounds, known as clathrate hydrates, with a variety of small molecules that can be embedded in its spacious crystal lattice. The most notable of these is methane clathrate, 4 CH4·23H2O, naturally found in large quantities on the ocean floor. Acidity in nature: Rain is generally mildly acidic, with a pH between 5.2 and 5.8 if no acid stronger than carbon dioxide is present.
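As a small numerical illustration of the ionic product quoted above, the sketch below (plain Python, not part of the original text; the pH of 5.6 is simply a representative value within the 5.2–5.8 range given for unpolluted rain) shows how [H3O+] and [OH−] remain inversely proportional through Kw.

```python
import math

Kw = 1e-14    # ionic product of water at 25 degC (quoted above)
pH = 5.6      # representative pH of rain acidified only by dissolved CO2 (an assumption)

h3o = 10 ** (-pH)      # [H3O+] in mol/L
oh = Kw / h3o          # [OH-] follows from Kw = [H3O+][OH-]

print(f"[H3O+] = {h3o:.2e} mol/L")                   # ~2.5e-6
print(f"[OH-]  = {oh:.2e} mol/L")                    # ~4.0e-9
print(f"pH + pOH = {pH + (-math.log10(oh)):.1f}")    # 14.0 at 25 degC
```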
If high amounts of nitrogen and sulfur oxides are present in the air, they too will dissolve into the cloud and raindrops, producing acid rain. Isotopologues: Several isotopes of both hydrogen and oxygen exist, giving rise to several known isotopologues of water. Vienna Standard Mean Ocean Water is the current international standard for water isotopes. Naturally occurring water is almost completely composed of the neutron-less hydrogen isotope protium. Only 155 ppm include deuterium (2H or D), a hydrogen isotope with one neutron, and fewer than 20 parts per quintillion include tritium (3H or T), which has two neutrons. Oxygen also has three stable isotopes, with 16O present in 99.76%, 17O in 0.04%, and 18O in 0.2% of water molecules. Deuterium oxide, D2O, is also known as heavy water because of its higher density. It is used in nuclear reactors as a neutron moderator. Tritium is radioactive, decaying with a half-life of 4500 days; THO exists in nature only in minute quantities, being produced primarily via cosmic ray-induced nuclear reactions in the atmosphere. Water with one protium and one deuterium atom, HDO, occurs naturally in ordinary water in low concentrations (~0.03%), and D2O occurs in far lower amounts (0.000003%); any such molecules are temporary, as the atoms recombine. Isotopologues: The most notable physical differences between H2O and D2O, other than the simple difference in specific mass, involve properties that are affected by hydrogen bonding, such as freezing and boiling, and other kinetic effects. This is because the nucleus of deuterium is twice as heavy as protium, and this causes noticeable differences in bonding energies. The difference in boiling points allows the isotopologues to be separated. The self-diffusion coefficient of H2O at 25 °C is 23% higher than the value of D2O. Because water molecules exchange hydrogen atoms with one another, hydrogen deuterium oxide (DOH) is much more common in low-purity heavy water than pure dideuterium monoxide D2O. Isotopologues: Consumption of pure isolated D2O may affect biochemical processes—ingestion of large amounts impairs kidney and central nervous system function. Small quantities can be consumed without any ill-effects; humans are generally unaware of taste differences, but sometimes report a burning sensation or sweet flavor. Very large amounts of heavy water must be consumed for any toxicity to become apparent. Rats, however, are able to avoid heavy water by smell, and it is toxic to many animals. Light water refers to deuterium-depleted water (DDW), water in which the deuterium content has been reduced below the standard 155 ppm level. Occurrence: Water is the most abundant substance on Earth's surface and also the third most abundant molecule in the universe, after H2 and CO. About 0.02% of the Earth's mass is water, and 97.39% of the global water volume of 1.38×10^9 km3 is found in the oceans. Water is far more prevalent in the outer Solar System, beyond a point called the frost line, where the Sun's radiation is too weak to vaporize solid and liquid water (as well as other elements and chemical compounds with relatively low melting points, such as methane and ammonia). In the inner Solar System, planets, asteroids, and moons formed almost entirely of metals and silicates. Water has since been delivered to the inner Solar System via an as-yet unknown mechanism, theorized to be the impacts of asteroids or comets carrying water from the outer Solar System, where bodies contain much more water ice.
The difference between planetary bodies located inside and outside the frost line can be stark. Earth's mass is about 0.02% water, while Tethys, a moon of Saturn, is almost entirely made of water. Reactions: Acid-base reactions Water is amphoteric: it has the ability to act as either an acid or a base in chemical reactions. According to the Brønsted-Lowry definition, an acid is a proton (H+) donor and a base is a proton acceptor. When reacting with a stronger acid, water acts as a base; when reacting with a stronger base, it acts as an acid. For instance, water receives an H+ ion from HCl when hydrochloric acid dissociates: HCl (acid) + H2O (base) ⇌ H3O+ + Cl−. In the reaction with ammonia, NH3, water donates an H+ ion, and is thus acting as an acid: NH3 (base) + H2O (acid) ⇌ NH4+ + OH−. Because the oxygen atom in water has two lone pairs, water often acts as a Lewis base, or electron-pair donor, in reactions with Lewis acids, although it can also react with Lewis bases, forming hydrogen bonds between the electron pair donors and the hydrogen atoms of water. HSAB theory describes water as both a weak hard acid and a weak hard base, meaning that it reacts preferentially with other hard species: H+ (Lewis acid) + H2O (Lewis base) → H3O+; Fe3+ (Lewis acid) + H2O (Lewis base) → Fe(H2O)63+; Cl− (Lewis base) + H2O (Lewis acid) → Cl(H2O)6−. When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt, producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their basic pH: Na2CO3 + H2O ⇌ NaOH + NaHCO3. Reactions: Ligand chemistry Water's Lewis base character makes it a common ligand in transition metal complexes, examples of which range from metal aquo complexes such as Fe(H2O)62+ to perrhenic acid, which contains two water molecules coordinated to a rhenium center. In solid hydrates, water can be either a ligand or simply lodged in the framework, or both. Thus, FeSO4·7H2O consists of [Fe(H2O)6]2+ centers and one "lattice water". Water is typically a monodentate ligand, i.e., it forms only one bond with the central atom. Reactions: Organic chemistry As a hard base, water reacts readily with organic carbocations; for example, in a hydration reaction, a hydroxyl group (OH−) and an acidic proton are added to the two carbon atoms bonded together in the carbon-carbon double bond, resulting in an alcohol. When the addition of water to an organic molecule cleaves the molecule in two, hydrolysis is said to occur. Notable examples of hydrolysis are the saponification of fats and the digestion of proteins and polysaccharides. Water can also be a leaving group in SN2 substitution and E2 elimination reactions; the latter is then known as a dehydration reaction. Reactions: Water in redox reactions Water contains hydrogen in the oxidation state +1 and oxygen in the oxidation state −2. It oxidizes chemicals such as hydrides, alkali metals, and some alkaline earth metals. One example of an alkali metal reacting with water is: 2 Na + 2 H2O → H2 + 2 Na+ + 2 OH−. Some other reactive metals, such as aluminium and beryllium, are oxidized by water as well, but their oxides adhere to the metal and form a passive protective layer. Note that the rusting of iron is a reaction between iron and oxygen that is dissolved in water, not between iron and water. Reactions: Water can be oxidized to emit oxygen gas, but very few oxidants react with water even if their reduction potential is greater than the potential of O2/H2O. Almost all such reactions require a catalyst.
An example of the oxidation of water is: 4 AgF2 + 2 H2O → 4 AgF + 4 HF + O2. Electrolysis: Water can be split into its constituent elements, hydrogen and oxygen, by passing an electric current through it. This process is called electrolysis. The cathode half reaction is: 2 H+ + 2 e− → H2. The anode half reaction is: 2 H2O → O2 + 4 H+ + 4 e−. The gases produced bubble to the surface, where they can be collected or, if that is the intention, ignited with a flame above the water. The required potential for the electrolysis of pure water is 1.23 V at 25 °C. The operating potential is actually 1.48 V or higher in practical electrolysis. History: Henry Cavendish showed that water was composed of oxygen and hydrogen in 1781. The first decomposition of water into hydrogen and oxygen, by electrolysis, was done in 1800 by the English chemists William Nicholson and Anthony Carlisle. In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is composed of two parts hydrogen and one part oxygen. Gilbert Newton Lewis isolated the first sample of pure heavy water in 1933. The properties of water have historically been used to define various temperature scales. Notably, the Kelvin, Celsius, Rankine, and Fahrenheit scales were, or currently are, defined by the freezing and boiling points of water. The less common scales of Delisle, Newton, Réaumur, and Rømer were defined similarly. The triple point of water is a more commonly used standard point today. Nomenclature: The accepted IUPAC name of water is oxidane or simply water, or its equivalent in different languages, although there are other systematic names which can be used to describe the molecule. Oxidane is only intended to be used as the name of the mononuclear parent hydride used for naming derivatives of water by substituent nomenclature. These derivatives commonly have other recommended names. For example, the name hydroxyl is recommended over oxidanyl for the –OH group. The name oxane is explicitly mentioned by the IUPAC as being unsuitable for this purpose, since it is already the name of a cyclic ether also known as tetrahydropyran. The simplest systematic name of water is hydrogen oxide. This is analogous to related compounds such as hydrogen peroxide, hydrogen sulfide, and deuterium oxide (heavy water). Using chemical nomenclature for type I ionic binary compounds, water would take the name hydrogen monoxide, but this is not among the names published by the International Union of Pure and Applied Chemistry (IUPAC). Another name is dihydrogen monoxide, which is a rarely used name of water, and mostly used in the dihydrogen monoxide parody. Nomenclature: Other systematic names for water include hydroxic acid, hydroxylic acid, and hydrogen hydroxide, using acid and base names. None of these exotic names are used widely. The polarized form of the water molecule, H+OH−, is also called hydron hydroxide by IUPAC nomenclature. Water substance is a term used for hydrogen oxide (H2O) when one does not wish to specify whether one is speaking of liquid water, steam, some form of ice, or a component in a mixture or mineral.
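Returning briefly to the electrolysis potentials quoted earlier in this article, a short calculation converts them into energy requirements. This is a minimal Python sketch (not from the original text); the only inputs beyond the quoted voltages are the standard Faraday constant and the molar mass of hydrogen.

```python
# Electrical energy needed to produce hydrogen by electrolysis of water.
# Two electrons are transferred per H2 molecule produced.

F = 96485.0        # Faraday constant, C/mol
n = 2              # electrons per molecule of H2
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

for volts, label in [(1.23, "thermodynamic minimum"), (1.48, "practical operating potential")]:
    joules_per_mol = n * F * volts                 # J per mole of H2
    kwh_per_kg = joules_per_mol / M_H2 / 3.6e6     # kWh per kg of H2
    print(f"{label} ({volts} V): {joules_per_mol / 1000:.0f} kJ/mol H2, "
          f"about {kwh_per_kg:.0f} kWh/kg H2")
# 1.23 V -> ~237 kJ/mol (~33 kWh/kg); 1.48 V -> ~286 kJ/mol (~39 kWh/kg)
```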
**Dual-use technology** Dual-use technology: In politics, diplomacy and export control, dual-use items refer to goods, software and technology that can be used for both civilian and military applications. More generally speaking, dual-use can also refer to any goods or technology which can satisfy more than one goal at any given time. Thus, expensive technologies that would otherwise benefit only civilian commercial interests can also be used to serve military purposes if they are not otherwise engaged, such as the Global Positioning System. Dual-use technology: The "dual-use dilemma" was first noted with the discovery of the process for synthesizing and mass-producing ammonia, which revolutionized agriculture with modern fertilizers but also led to the creation of chemical weapons during World War I. The dilemma has long been known in chemistry and physics, and has led to international conventions and treaties, including the Chemical Weapons Convention and the Treaty on the Non-Proliferation of Nuclear Weapons. Drone: UAVs are considered a challenge for the military. No-drone zones are areas where drones or unmanned aircraft systems (UAS) cannot be operated. Missile: During the Cold War, the United States and the Soviet Union spent billions of dollars developing rocket technology, originally as weapons, which could carry humans into space (and even eventually to the moon). The development of this peaceful rocket technology paralleled the development of intercontinental ballistic missile technology, and was a way of demonstrating to the other side the potential of one's own rockets. Missile: Those who seek to develop ballistic missiles may claim that their rockets are for peaceful purposes; for example, for commercial satellite launching or scientific purposes. However, even genuinely peaceful rockets may be converted into weapons and provide the technological basis to do so. Missile: Within peaceful rocket programs, different peaceful applications can be seen as having parallel military roles. For example, the return of scientific payloads safely to earth from orbit would indicate re-entry vehicle capability, and demonstrating the ability to launch multiple satellites with a single launch vehicle can be seen in a military context as having the potential to deploy multiple independently targetable reentry vehicles. Nuclear: Dual-use nuclear technology refers to the possibility of military use of civilian nuclear power technology. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that several stages of the nuclear fuel cycle allow diversion of nuclear materials for nuclear weapons. When this happens, a nuclear power program can become a route leading to the atomic bomb or a public annex to a secret bomb program. The crisis over Iran's nuclear activities is a case in point. Many UN and US agencies warn that building more nuclear reactors unavoidably increases nuclear proliferation risks. A fundamental goal for American and global security is to minimize the proliferation risks associated with the expansion of nuclear power. If this development is "poorly managed or efforts to contain risks are unsuccessful, the nuclear future will be dangerous".
For nuclear power programs to be developed and managed safely and securely, it is important that countries have domestic "good governance" characteristics that will encourage proper nuclear operations and management: These characteristics include low degrees of corruption (to avoid officials selling materials and technology for their own personal gain, as occurred with the A.Q. Khan smuggling network in Pakistan), high degrees of political stability (defined by the World Bank as the "likelihood that the government will be destabilized or overthrown by unconstitutional or violent means, including politically-motivated violence and terrorism"), high governmental effectiveness scores (a World Bank aggregate measure of "the quality of the civil service and the degree of its independence from political pressures [and] the quality of policy formulation and implementation"), and a strong degree of regulatory competence. Artificial intelligence: As more advances are made towards artificial intelligence (AI), it garners more and more attention for its capability as a dual-use technology and the security risks it may pose. Artificial intelligence can be applied within many different fields and can be easily integrated throughout current technology's cyberspace. With the use of AI, technology has become capable of running multiple algorithms that could solve difficult problems, from detecting anomalies in samples during MRI scans to providing surveillance of an entire country's residents. Within China's mass surveillance system, the government uses AI to distinguish citizens with less than satisfactory records among crowds. Every new invention or application made with AI comes with its own set of positive and negative effects. Some claim that, as potential uses for AI grow in number, nations need to start regulating it as a dual-use technology. Chemical: The modern history of chemical weapons can be traced back to the chemical industries of the belligerent nations of World War I, especially that of Germany. Many industrial chemical processes produce toxic intermediary stages, final products, and by-products, and any nation with a chemical industry has the potential to create weaponised chemical agents. Chlorine is a chemical agent found within several household items such as bleach and provides various benefits with its wide array of applications. However, its gaseous form can also be used as a chemical weapon. Biological: That the July 2007 terrorist attacks in central London and at Glasgow airport may have involved National Health Service medical professionals was a wake-up call that screening people with access to pathogens may be necessary. The challenge remains to maintain security without impairing the contributions to progress afforded by research. Reports from the project on building a sustainable culture in dual-use bioethics suggest that, as a result of perceived changes in both science and security over the past decade, several states and multilateral bodies have underlined the importance of making life scientists aware of concerns over dual-use and the legal obligations underpinning the prevention of biological weapons. One of the key mechanisms that have been identified to achieve this is through the education of life science students, with the objective of building what has been termed a "culture of responsibility".
Biological: At the 2008 Meeting of States Parties to the Biological and Toxin Weapons Convention (BTWC), it was agreed by consensus that: States Parties recognized the importance of ensuring that those working in the biological sciences are aware of their obligations under the convention and relevant national legislation and guidelines...States Parties noted that formal requirements for seminars, modules or courses, including possible mandatory components, in relevant scientific and engineering training programmes and continuing professional education could assist in raising awareness and in implementing the convention. The World Health Organization in 2010 developed a "guidance document" for what it called "Dual Use Research of Concern" (DURC) in the life sciences, regarding "research that is intended [to] benefit, but which might easily be misapplied to do harm". Along with several similar stipulations from other states and regional organisations, biosecurity education has become more important. Unfortunately, both the policy and academic literature show that life scientists across the globe are frequently uninformed or underinformed about biosecurity, dual-use, the BTWC and national legislation outlawing biological weapons. Moreover, despite numerous declarations by states and multilateral organisations, the extent to which statements at the international level have trickled down to multifaceted activity at the level of scientists remains limited. The US federal government (USG) developed at least two policy documents in light of the WHO guidance document on DURC. In March 2012, the "United States Government Policy for Oversight of Life Sciences Dual Use Research of Concern" was published, in order to establish regular review by oversight bodies of USG-funded DURC. In September 2014, the "United States Government Policy for Institutional Oversight of Life Sciences Dual Use Research of Concern" was published, in order to identify DURC and mitigate it "at the institutional level", such as at universities. Night vision and thermal imaging: Night-vision devices with extraordinary performance characteristics (high gain, specific spectral sensitivity, fine resolution, low noise) are heavily export-restricted by the few states capable of producing them, mainly to limit their proliferation to enemy combatants, but also to slow the inevitable reverse-engineering undertaken by other world powers. These precision components, such as the image intensifiers used in night vision goggles and the focal plane arrays found in surveillance satellites and thermal cameras, have numerous civil applications which include nature photography, medical imaging, firefighting, and population control of predator species. Night vision and thermal imaging: Night scenes of wild elephants and rhinos in the BBC nature documentary series Africa were shot on a Lunax Starlight HD camera (a custom-built digital cinema rig encompassing a Generation 3 image intensifier), and recolored digitally. In the United States, civilians are free to buy and sell American-made night vision and thermal systems, such as those manufactured by defense contractors Harris, L3 Insight, and FLIR Systems, with very few restrictions.
However, American night vision owners may not bring the equipment out of the country, sell it internationally, or even invite non-citizens to examine the technology, per the International Traffic in Arms Regulations. Export of American image intensifiers is selectively permitted under license by the United States Department of Commerce and the State Department. Contributing factors in acquiring a license include diplomatic relations with the destination country, the number of pieces to be sold, and the relative quality of the equipment itself, expressed using a Figure of Merit (FOM) score calculated from several key performance characteristics. Night vision and thermal imaging: Competing international manufacturers (European defense contractor Photonis Group, Japanese scientific instrument giant Hamamatsu Photonics, and Russian state-financed laboratory JSC Katod) have entered the American market through licensed importers. In spite of their foreign origin, re-export of these components outside of the United States is restricted similarly to domestic components. A 2012 assessment of the sector by the Department of Commerce and Bureau of Industry and Security made the case for relaxing export controls in light of the narrowing performance gap and increased competition internationally, and a review period undertaken by the Directorate of Defense Trade Controls in 2015 introduced much more granular performance definitions. Other technologies: In addition to obvious and headline-grabbing dual-use technologies there are some less obvious ones, in that many erstwhile peaceful technologies can be used in weapons. One example from the First and Second World Wars is the role of German toy manufacturers: Germany was one of the leading nations in the production of wind-up toys, and the ability to produce large numbers of small and reliable clockwork motors was converted into the ability to produce shell and bomb fuzes. During its early stages of release, the PlayStation 2 was considered to be a dual-use technology. The gaming console had to receive special import regulations before being shipped to the U.S. and European markets. This was due to the capability of the console and its GPU to process high-quality images at high speed, a trait shared with missile guidance systems. Other technologies: HoloLens 2 In early 2019, Microsoft announced the HoloLens 2, smart glasses that allow consumers to experience augmented reality within the real world. However, it was revealed that Microsoft had made a $479 million deal with the U.S. government. Under this contract, Microsoft would create and supply the U.S. Army with a separate version of the HoloLens smart glasses called the Integrated Visual Augmentation System (IVAS). The IVAS would be used to train soldiers, as well as field medics, by providing battlefield experience within a virtual environment. This version of the HoloLens gives soldiers a virtual map of their current environment, friendly units' locations, and much more. An anonymous Microsoft employee published an open letter demanding that Microsoft terminate the IVAS contract. Microsoft president Brad Smith had previously made a public blog post outlining the company's stance on "how technology companies should work with the government, and specifically whether companies should supply digital technology to the military." Control: Most industrial countries have export controls on certain types of designated dual-use technologies, and they are required by a number of treaties as well.
These controls restrict the export of certain commodities and technologies without the permission of the government. In the context of sanctions regimes, dual-use can be construed broadly because there are few things which do not have the potential for both military and civilian uses. Control: United States The principal agency for investigating violations of dual-use export controls in the United States is the Bureau of Industry and Security (BIS) Office of Export Enforcement (OEE). Interagency coordination of export control cases is conducted through the Export Enforcement Coordination Center (E2C2). The dual-use controls that the BIS OEE enforces are set out in the Export Administration Regulations; defense articles fall instead under the International Traffic in Arms Regulations, administered by the State Department. Control: Canada The Canadian legislation to govern the trade in dual-use technology is known as the Export and Import Permits Act. Control: European Union The European Union governs dual-use technology through the Control List of Dual Use Items. Control: International regimes There are several international arrangements among countries which seek to harmonize lists of dual-use (and military) technologies to control. These include the Nuclear Suppliers Group, the Australia Group, which looks at chemical and biological technologies, the Missile Technology Control Regime, which covers delivery systems for weapons of mass destruction, and the Wassenaar Arrangement, which covers conventional arms and dual-use technologies.
**Quantum secret sharing** Quantum secret sharing: Quantum secret sharing (QSS) is a quantum cryptographic scheme for secure communication that extends beyond simple quantum key distribution. It modifies the classical secret sharing (CSS) scheme by using quantum information and the no-cloning theorem to attain the ultimate security for communications. The method of secret sharing consists of a sender who wishes to share a secret with a number of receiver parties in such a way that the secret is fully revealed only if a large enough portion of the receivers work together. However, if not enough receivers work together to reveal the secret, the secret remains completely unknown. Quantum secret sharing: The classical scheme was independently proposed by Adi Shamir and George Blakley in 1979. In 1998, Mark Hillery, Vladimír Bužek, and André Berthiaume extended the theory to make use of quantum states for establishing a secure key that could be used to transmit the secret via classical data. In the years following, more work was done to extend the theory to transmitting quantum information as the secret, rather than just using quantum states for establishing the cryptographic key. QSS has been proposed for use in quantum money as well as for joint checking accounts, quantum networking, and distributed quantum computing, among other applications. Protocol: The simplest case: GHZ states This example follows the original scheme laid out by Hillery et al. in 1998, which makes use of Greenberger–Horne–Zeilinger (GHZ) states. A similar scheme was developed shortly thereafter which used two-particle entangled states instead of three-particle states. In both cases, the protocol is essentially an extension of quantum key distribution to two receivers instead of just one. Protocol: Following the typical language, let the sender be denoted as Alice and the two receivers as Bob and Charlie. Alice's objective is to send each receiver a "share" of her secret key (really just a quantum state) in such a way that: Neither Bob's nor Charlie's share contains any information about Alice's original message, and therefore neither can extract the secret on their own. Protocol: The secret can only be extracted if Bob and Charlie work together, in which case the secret is fully revealed. Protocol: The presence of either an outside eavesdropper or a dishonest receiver (either Bob or Charlie) can be detected without the secret being revealed. Alice initiates the protocol by sharing with each of Bob and Charlie one particle from a GHZ triplet in the (standard) Z-basis, holding onto the third particle herself: |Ψ⟩GHZ = (|000⟩ + |111⟩)/√2, where |0⟩ and |1⟩ are orthogonal modes in an arbitrary Hilbert space. Protocol: After each participant measures their particle in the X- or Y-basis (chosen at random), they share (via a classical, public channel) which basis they used to make the measurement, but not the result itself. Upon combining their measurement results, Bob and Charlie can deduce what Alice measured 50% of the time. Repeating this process many times, and using a small fraction to verify that no malicious actors are present, the three participants can establish a joint key for communicating securely. Consider the following for a clear example of how this will work.
Protocol: Let us define the x and y eigenstates in the following, standard way: |+x⟩ = (|0⟩ + |1⟩)/√2, |−x⟩ = (|0⟩ − |1⟩)/√2, |+y⟩ = (|0⟩ + i|1⟩)/√2, |−y⟩ = (|0⟩ − i|1⟩)/√2. The GHZ state can then be rewritten as |Ψ⟩GHZ = 1/(2√2) [(|+x⟩a|+x⟩b + |−x⟩a|−x⟩b)(|0⟩c + |1⟩c) + (|+x⟩a|−x⟩b + |−x⟩a|+x⟩b)(|0⟩c − |1⟩c)], where (a, b, c) denote the particles for (Alice, Bob, Charlie) and Alice's and Bob's states have been written in the X-basis. Using this form, it is evident that there exists a correlation between Alice's and Bob's measurements and Charlie's single-particle state: if Alice and Bob have correlated results then Charlie has the state (|0⟩c + |1⟩c)/√2, and if Alice and Bob have anticorrelated results then Charlie has the state (|0⟩c − |1⟩c)/√2. It is clear from these correlations that, by knowing the measurement bases of Alice and Bob, Charlie can use his own measurement result to deduce whether Alice and Bob had the same or opposite results. Note however that to make this deduction, Charlie must choose the correct measurement basis for measuring his own particle. Since he chooses between two noncommuting bases at random, only half of the time will he be able to extract useful information. The other half of the time the results must be discarded. Additionally, Charlie has no way of determining who measured what, only whether the results of Alice and Bob were correlated or anticorrelated. Thus the only way for Charlie to figure out Alice's measurement is by working together with Bob and sharing their results. In doing so, they can extract Alice's results for every measurement and use this information to create a cryptographic key that only they know. Protocol: ((k,n)) threshold scheme The simple case described above can be extended similarly to that done in CSS by Shamir and Blakley via a thresholding scheme. In the ((k,n)) threshold scheme (double parentheses denoting a quantum scheme), Alice splits her secret key (quantum state) into n shares such that any k ≤ n shares are required to extract the full information, but k−1 or fewer shares cannot extract any information about Alice's key. Protocol: The number of users needed to extract the secret is bounded by n/2 < k ≤ n. Consider the case n ≥ 2k: if a ((k,n)) threshold scheme were applied to two disjoint sets of k shares out of the n, then two independent copies of Alice's secret could be reconstructed. This of course would violate the no-cloning theorem and is why n must be less than 2k. Protocol: As long as a ((k,n)) threshold scheme exists, a ((k,n-1)) threshold scheme can be constructed by simply discarding one share. This method can be repeated until k = n. Protocol: The following outlines a simple ((2,3)) threshold scheme, and more complicated schemes can be imagined by increasing the number of shares Alice splits her original state into: Consider Alice beginning with the single qutrit state |Ψ⟩a = α|0⟩ + β|1⟩ + γ|2⟩, and then mapping it onto three qutrits, (1/√3)[α(|000⟩ + |111⟩ + |222⟩) + β(|012⟩ + |120⟩ + |201⟩) + γ(|021⟩ + |102⟩ + |210⟩)], and sharing one qutrit with each of the 3 receivers. It is evident that a single share does not give any information about Alice's original state, since each share is in the maximally mixed state. However, two shares could be used to reconstruct Alice's original state. Assume the first two shares are given. Add the first share to the second (modulo three) and then add the new value of the second share to the first. The resulting state is (α|0⟩ + β|1⟩ + γ|2⟩) ⊗ (|00⟩ + |12⟩ + |21⟩)/√3, where the first qutrit is exactly Alice's original state.
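This reconstruction can be verified with a short simulation. The following is a minimal Python/NumPy sketch (not from the original text; the amplitude values are arbitrary illustrative choices): it builds the three-qutrit encoded state, applies the two mod-3 additions to the first two shares, and checks both that the first qutrit then carries Alice's original amplitudes and that a single share on its own is maximally mixed.

```python
import numpy as np

# Arbitrary normalised amplitudes for Alice's secret qutrit alpha|0> + beta|1> + gamma|2>.
secret = np.array([0.6, 0.48 + 0.36j, 0.52j])
secret /= np.linalg.norm(secret)

# Encoding used in the ((2,3)) scheme above: one qutrit goes to each receiver.
terms = {0: [(0, 0, 0), (1, 1, 1), (2, 2, 2)],
         1: [(0, 1, 2), (1, 2, 0), (2, 0, 1)],
         2: [(0, 2, 1), (1, 0, 2), (2, 1, 0)]}
state = np.zeros((3, 3, 3), dtype=complex)
for digit, kets in terms.items():
    for (a, b, c) in kets:
        state[a, b, c] = secret[digit] / np.sqrt(3)

# Reconstruction from the first two shares:
# (i) add share 1 to share 2 (mod 3), then (ii) add the new share 2 to share 1 (mod 3).
recon = np.zeros_like(state)
for a in range(3):
    for b in range(3):
        for c in range(3):
            b2 = (b + a) % 3
            a2 = (a + b2) % 3
            recon[a2, b2, c] = state[a, b, c]

# The reduced state of the first qutrit should now be exactly |secret><secret|.
rho_first = np.einsum('abc,dbc->ad', recon, recon.conj())
fidelity = np.real(secret.conj() @ rho_first @ secret)
print("fidelity of the recovered qutrit with the secret:", round(float(fidelity), 6))  # 1.0

# A single share alone carries no information: its reduced state is maximally mixed.
rho_single = np.einsum('abc,abd->cd', state, state.conj())   # Charlie's share by itself
print("single-share reduced state:\n", np.round(rho_single, 3))  # identity / 3
```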
Via this method, the sender's original state can be reconstructed at one of the receivers' particles, but it is crucial that no measurements be made during this reconstruction process or any superposition within the quantum state will collapse. Security: The security of QSS relies upon the no-cloning theorem to protect against possible eavesdroppers as well as dishonest users. This section adopts the two-particle entanglement protocol very briefly mentioned above. Security: Eavesdropping QSS promises security against eavesdropping in the exact same way as quantum key distribution. Consider an eavesdropper, Eve, who is assumed to be capable of perfectly discriminating and creating the quantum states used in the QSS protocol. Eve's objective is to intercept one of the receivers' (say Bob's) shares, measure it, then recreate the state and send it on to whomever the share was initially intended for. The issue with this method is that Eve needs to randomly choose a basis to measure in, and half of the time she will choose the wrong basis. When she chooses the correct basis, she will get the correct measurement result with certainty and can recreate the state she measured and send it off to Bob without her presence being detected. However, when she chooses the wrong basis, she will end up sending one of the two states from the incorrect basis. Bob will measure the state she sent him, and half of the time this will be the correct detection, but only because the state from the wrong basis is an equal superposition of the two states in the correct basis. Thus, half of the time that Eve measures in the wrong basis and therefore sends the incorrect state, Bob will measure the wrong state. This intervention on Eve's part causes an error in the protocol on an extra 25% of trials. Therefore, with enough measurements, it becomes nearly impossible to miss protocol errors occurring with a 75% probability instead of the 50% probability predicted by the theory, signaling that there is an eavesdropper within the communication channel. Security: More complex eavesdropping strategies can be performed using ancilla states, but the eavesdropper will still be detectable in a similar manner. Security: Dishonest participant Now, consider the case where one of the participants of the protocol (say Bob) is acting as a malicious user by trying to obtain the secret without the other participants being aware. Analyzing the possibilities, one finds that choosing the proper order in which Bob and Charlie release their measurement bases and results when testing for eavesdropping ensures the detection of any cheating that may be occurring. The proper order turns out to be: receiver 1 releases measurement results; receiver 2 releases measurement results; receiver 2 releases measurement basis; receiver 1 releases measurement basis. This ordering prevents receiver 2 from knowing which basis to share for tricking the other participants, because receiver 2 does not yet know what basis receiver 1 is going to announce was used. Similarly, since receiver 1 must release their results first, they cannot control whether the measurements should be correlated or anticorrelated for the valid combination of bases used. In this way, acting dishonestly will introduce errors in the eavesdropper testing phase whether the dishonest participant is receiver 1 or receiver 2.
Thus, the ordering of releasing the data must be carefully chosen so as to prevent any dishonest user from acquiring the secret without being noticed by the other participants. Experimental realization: This section follows the first experimental demonstration of QSS in 2001, which was made possible by advances in the techniques of quantum optics. The original idea for QSS using GHZ states was more challenging to implement because of the difficulties in producing three-particle correlations via either down-conversion processes with χ^(3) nonlinearities or three-photon positronium annihilation, both of which are rare events. Instead, the original experiment was performed via the two-particle scheme using a standard χ^(2) spontaneous parametric down-conversion (SPDC) process, with the third correlated photon being the pump photon. Experimental realization: The experimental setup works as follows: Alice: A pulsed laser emitted at time t0 enters an interferometer with unequal path lengths such that the pump is split into two distinct temporal pulses with equal amplitude. One arm of the interferometer contains a phase shifter to control the phase difference of the two arms, denoted α. The pump pulses are focused onto a nonlinear crystal where some of the pump photons are down-converted into photon pairs via SPDC. The SPDC pairs are then split, with one photon being sent to Bob and the other to Charlie. Experimental realization: Bob and Charlie: Both receivers have interferometers identical to the one used by Alice, such that the exact same time difference between the two arms is achieved, and each has a phase shifter, denoted β for Bob and γ for Charlie. The different possible trajectories through each interferometer lead to three distinct time differences between when Alice's pump pulse is emitted and when Bob's and Charlie's SPDC photons are detected (tB and tC, respectively), as well as three time differences between the detections at each of Bob's and Charlie's detectors. Using the notation |X⟩i, |Y⟩j, where X and Y are either 'S' for the short path or 'L' for the long path, and i and j are one of 'A', 'B', or 'C' labelling a participant's interferometer, one can describe the path taken for any combination of two participants. Notice that |S⟩A,|L⟩j and |L⟩A,|S⟩j, where j is either 'B' or 'C', are indistinguishable processes, as the time difference between the two processes is exactly the same. The same is true for |S⟩B,|S⟩C and |L⟩B,|L⟩C. Experimental realization: Describing these indistinguishable processes mathematically, |ψ⟩ = (1/√2)(|L⟩A|S⟩B|S⟩C + e^(i(α+β+γ))|S⟩A|L⟩B|L⟩C), which can be thought of as a "pseudo-GHZ state", where the difference from a true GHZ state is that the three photons do not exist simultaneously. Nonetheless, the triple "coincidences" can be described by exactly the same probability function as for the true GHZ state, varying as cos(α+β+γ), implying that QSS will work just the same for this two-particle source. Experimental realization: By setting the phases α, β, and γ to either 0 or π/2 in much the same way as two-photon Bell tests, it can be shown that this setup violates a Bell-type inequality for three particles, S3 = |E(α′+β+γ) + E(α+β′+γ) + E(α+β+γ′) − E(α′+β′+γ′)| ≤ 2, where E(α+β+γ) is the expectation value for a coincidence measurement with phase shifter settings (α, β, γ). For this experiment the Bell-type inequality was violated, with a measured value of 3.69, suggesting that this setup exhibits quantum nonlocality.
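As a numerical illustration of the inequality just given, assuming (as stated above) that the triple coincidences vary as E(α+β+γ) = cos(α+β+γ), a brute-force scan over symmetric phase settings shows that the quantum prediction for S3 can reach 4, comfortably above the local-realistic bound of 2 and consistent with the measured value of 3.69:

```python
import numpy as np

def S3(a, ap, b, bp, c, cp):
    """Three-particle Bell quantity, assuming E(x) = cos(x) for the coincidence correlation."""
    E = np.cos
    return abs(E(ap + b + c) + E(a + bp + c) + E(a + b + cp) - E(ap + bp + cp))

# Scan symmetric settings (all unprimed phases equal to x, all primed equal to y).
phases = np.linspace(0, 2 * np.pi, 361)
best = max(S3(x, y, x, y, x, y) for x in phases for y in phases)
print(best)   # ~4.0, i.e. the quantum prediction exceeds the local bound of 2
```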
Experimental realization: This seminal experiment showed that the quantum correlations from this setup are indeed described by the probability function P(i, j, k). The simplicity of the SPDC source allowed for coincidences at much higher rates than traditional three-photon entanglement sources, making QSS more practical. This was the first experiment to prove the feasibility of a QSS protocol.
**Functor represented by a scheme** Functor represented by a scheme: In algebraic geometry, a functor represented by a scheme X is a set-valued contravariant functor on the category of schemes such that the value of the functor at each scheme S is (up to natural bijections) the set of all morphisms S→X. The scheme X is then said to represent the functor, and the morphisms S→X are said to classify the geometric objects over S given by the functor. The best known example is the Hilbert scheme of a scheme X (over some fixed base scheme), which, when it exists, represents a functor sending a scheme S to a flat family of closed subschemes of X×S. In some applications, it may not be possible to find a scheme that represents a given functor. This led to the notion of a stack, which is not quite a functor but can still be treated as if it were a geometric space. (A Hilbert scheme is a scheme, but not a stack because, very roughly speaking, deformation theory is simpler for closed schemes.) Some moduli problems are solved by giving formal solutions (as opposed to polynomial algebraic solutions) and in that case, the resulting functor is represented by a formal scheme. Such a formal scheme is then said to be algebraizable if there is another scheme that can represent the same functor, up to some isomorphisms. Motivation: The notion is an analog of a classifying space in algebraic topology. In algebraic topology, the basic fact is that each principal G-bundle over a space S is (up to natural isomorphisms) the pullback of a universal bundle EG→BG along some map from S to BG. In other words, to give a principal G-bundle over a space S is the same as to give a map (called a classifying map) from a space S to the classifying space BG of G. Motivation: A similar phenomenon in algebraic geometry is given by a linear system: to give a morphism from a projective variety to a projective space is (up to base loci) to give a linear system on the projective variety. Yoneda's lemma says that a scheme X determines and is determined by its points. Functor of points: Let X be a scheme. Its functor of points is the functor Hom(−,X) : (Affine schemes)op ⟶ Sets sending an affine scheme Y to the set of scheme maps Y→X. A scheme is determined up to isomorphism by its functor of points. This is a stronger version of the Yoneda lemma, which says that X is determined by the functor Hom(−,X): Schemesop → Sets. Functor of points: Conversely, a functor F:(Affine schemes)op → Sets is the functor of points of some scheme if and only if F is a sheaf with respect to the Zariski topology on (Affine schemes), and F admits an open cover by affine schemes. Examples: Points as characters Let X be a scheme over the base ring B. If x is a set-theoretic point of X, then the residue field of x is the residue field of the local ring OX,x (i.e., the quotient by the maximal ideal). For example, if X is an affine scheme Spec(A) and x is a prime ideal p, then the residue field of x is the function field of the closed subscheme Spec(A/p). For simplicity, suppose X = Spec(A). Then the inclusion of a set-theoretic point x into X corresponds to the ring homomorphism A→k(x) (which is A→Ap→k(p) if x=p). Points as sections: By the universal property of the fiber product, each R-point of a scheme X determines a morphism of R-schemes Spec(R) → X ×B Spec(R); i.e., a section of the projection X ×B Spec(R) → Spec(R). If S is a subset of X(R), then one writes |S|⊂XR for the set of the images of the sections determined by elements in S.
Examples: Spec of the ring of dual numbers Let D = Spec(k[t]/(t²)), the Spec of the ring of dual numbers over a field k, and let X be a scheme over k. Then each morphism D→X amounts to a tangent vector to X at the point that is the image of the closed point under the map. In other words, X(D) is the set of tangent vectors to X. Universal object: Let F be the functor represented by a scheme X. Under the isomorphism F(X) ≅ Mor(X,X), there is a unique element of F(X) that corresponds to the identity map 1X:X→X. It is called the universal object or the universal family (when the objects that are being classified are families).
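To see why a morphism from D picks out a tangent vector, here is a short worked derivation in the affine case (a standard argument, written out with X = Spec(A) for simplicity):

```latex
\[
\begin{aligned}
&\text{A $k$-morphism } D \to X=\operatorname{Spec}(A) \text{ is a $k$-algebra map }
\varphi\colon A \to k[t]/(t^{2}),\qquad \varphi(a)=f(a)+t\,\delta(a).\\
&\text{Expanding } \varphi(ab)=\varphi(a)\varphi(b) \text{ modulo } t^{2}\text{ gives}\qquad
f(ab)=f(a)f(b),\qquad \delta(ab)=f(a)\,\delta(b)+f(b)\,\delta(a),\\
&\text{so } f\colon A\to k \text{ is a $k$-point } x \text{ of } X \text{ and } \delta
\text{ is a $k$-derivation at } x,\ \text{i.e. a tangent vector to } X \text{ at } x.
\end{aligned}
\]
```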
**Hexagonal architecture (software)** Hexagonal architecture (software): The hexagonal architecture, or ports and adapters architecture, is an architectural pattern used in software design. It aims at creating loosely coupled application components that can be easily connected to their software environment by means of ports and adapters. This makes components exchangeable at any level and facilitates test automation. Origin: The hexagonal architecture was invented by Alistair Cockburn in an attempt to avoid known structural pitfalls in object-oriented software design, such as undesired dependencies between layers and contamination of user interface code with business logic, and was published in 2005. The term "hexagonal" comes from the graphical convention that shows the application component as a hexagonal cell. The purpose was not to suggest that there would be six borders/ports, but to leave enough space to represent the different interfaces needed between the component and the external world. Principle: The hexagonal architecture divides a system into several loosely-coupled interchangeable components, such as the application core, the database, the user interface, test scripts and interfaces with other systems. This approach is an alternative to the traditional layered architecture. Each component is connected to the others through a number of exposed "ports". Communication through these ports follows a given protocol depending on their purpose. Ports and protocols define an abstract API that can be implemented by any suitable technical means (e.g. method invocation in an object-oriented language, remote procedure calls, or Web services). Principle: The granularity of the ports and their number are not constrained: a single port could in some cases be sufficient (e.g. in the case of a simple service consumer); typically, there are ports for event sources (user interface, automatic feeding), notifications (outgoing notifications), database (in order to interface the component with any suitable DBMS), and administration (for controlling the component); in an extreme case, there could be a different port for every use case, if needed. Adapters are the glue between components and the outside world. They tailor the exchanges between the external world and the ports that represent the requirements of the inside of the application component. There can be several adapters for one port; for example, data can be provided by a user through a GUI or a command-line interface, by an automated data source, or by test scripts (a minimal code sketch of a port and its adapters is given at the end of this article). Criticism: The term "hexagonal" implies that there are 6 parts to the concept, whereas there are only 4 key areas. The term's usage comes from the graphical convention that shows the application component as a hexagonal cell. The purpose was not to suggest that there would be six borders/ports, but to leave enough space to represent the different interfaces needed between the component and the external world. According to Martin Fowler, the hexagonal architecture has the benefit of using similarities between the presentation layer and the data source layer to create symmetric components made of a core surrounded by interfaces, but with the drawback of hiding the inherent asymmetry between a service provider and a service consumer that would better be represented as layers. Evolution: According to some authors, the hexagonal architecture is at the origin of the microservices architecture.
Variants: The onion architecture proposed by Jeffrey Palermo in 2008 is similar to the hexagonal architecture: it also externalizes the infrastructure with interfaces to ensure loose coupling between the application and the database. It further decomposes the application core into several concentric rings using inversion of control. The clean architecture proposed by Robert C. Martin in 2012 combines the principles of the hexagonal architecture, the onion architecture and several other variants; it provides additional levels of detail of the component, which are presented as concentric rings. It isolates adapters and interfaces (user interface, databases, external systems, devices) in the outer rings of the architecture and leaves the inner rings for use cases and entities. The clean architecture uses the principle of dependency inversion with the strict rule that dependencies shall only point from an outer ring to an inner ring and never the contrary.
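As a minimal illustration of the ports-and-adapters idea described in the Principle section, here is a hedged Python sketch (the class and method names are purely illustrative and not taken from any framework): the application core depends only on an abstract port, and adapters supply the concrete technology on either side.

```python
from abc import ABC, abstractmethod

class OrderRepositoryPort(ABC):                    # a driven (outgoing) port
    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...

class InMemoryOrderAdapter(OrderRepositoryPort):   # adapter used by test scripts
    def __init__(self) -> None:
        self.orders: dict[str, float] = {}
    def save(self, order_id: str, total: float) -> None:
        self.orders[order_id] = total

class PlaceOrder:                                  # application core: no I/O details
    def __init__(self, repository: OrderRepositoryPort) -> None:
        self.repository = repository
    def __call__(self, order_id: str, total: float) -> None:
        if total <= 0:
            raise ValueError("total must be positive")   # business rule lives here
        self.repository.save(order_id, total)

# A driving adapter (CLI, web controller, or test script) injects the port
# implementation, so the core never depends on a concrete database or UI.
use_case = PlaceOrder(InMemoryOrderAdapter())
use_case("A-1", 42.0)
```

Swapping the in-memory adapter for, say, a database-backed one changes nothing in the core, which is the loose coupling and testability the pattern aims for.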
**Mystery of the Snow Pearls** Mystery of the Snow Pearls: Mystery of the Snow Pearls (ISBN 0-88038-196-5) is a 1985 adventure module for the Dungeons & Dragons roleplaying game. Its associated code is CM5 and the TSR product number is TSR 9154. Synopsis: Mystery of the Snow Pearls is a solo adventure scenario for one player character, who will need to answer the riddles of an evil mage to get back the magic pearl that keeps the character's village safe; the adventure can also be adapted for use with a party of player characters. The player character is a Companion level elf, responsible for safeguarding one of the four magical pearls that protect the land of Tarylon. Milgo, an evil wizard with a sense of humour, challenges the elf to find and return the lost item. Without it, the entire region is threatened. Synopsis: This adventure includes a piece of colored film known as a "Magic Viewer" that allows the players to read the hidden results of their choices in the text. This includes encounters, puzzles, and traps. Publication history: Mystery of the Snow Pearls was written by Anne Gray McCready, with a cover by Larry Elmore, and was published by TSR in 1985 as a 32-page booklet with an outer folder; it also includes a large map and colored film.
**Transgressive segregation** Transgressive segregation: In genetics, transgressive segregation is the formation of extreme phenotypes, or transgressive phenotypes, observed in segregated hybrid populations compared to phenotypes observed in the parental lines. The appearance of these transgressive (extreme) phenotypes can be either positive or negative in terms of fitness. If both parents' favorable alleles come together, the result is a hybrid with a higher fitness than either of the two parents. The hybrid species will show more genetic variation and variation in gene expression than their parents. As a result, the hybrid species will have some traits that are transgressive (extreme) in nature. Transgressive segregation can allow a hybrid species to populate different environments/niches in which the parent species do not reside, or to compete in the existing environment with the parental species. Causes: Genetic There are many causes for transgressive segregation in hybrids. One cause can be recombination of additive alleles. Recombination results in new pairs of alleles at two or more loci. These different pairs of alleles can give rise to new phenotypes if gene expression has been changed at these loci. Another cause can be an elevated mutation rate. When mutation rates are high, it is more probable that a mutation will occur and cause an extreme phenotypic change. Reduced developmental stability is another cause of transgressive segregation. Developmental stability refers to the capability of a genotype to go through a constant development of a phenotype in a certain environmental setting. If there is a disturbance due to genetic or environmental factors, the genotype will be more sensitive to phenotypic changes. Another cause arises from the interaction between two alleles of two different genes, also known as the epistatic effect. Epistasis is the event in which one allele at a locus prevents an allele at another locus from expressing its product, as if masking its effect. Therefore, epistasis can be related to gene overdominance caused by heterozygosity at specific loci.[2] What this means is that the heterozygote (hybrid), when compared to the homozygote (parent), is better adapted and therefore shows more transgressive, extreme phenotypes. All of these causes lead to the appearance of these extreme phenotypes and create a hybrid species that will deviate away from the parent species' niche and eventually create an individual "hybrid" species. Causes: Environmental Beyond the genetic factors alone, environmental factors can cause those genetic factors to take effect. Environmental factors that cause transgressive segregation can be influenced by human activity and climate change. Both human activity and climate change have the capability to force species of a specific genome to interact with other species with different genomes. Causes: For example, if a bridge is built that connects two isolated areas to one another, a gene flow door would open. This open door will increase the interactions between different species with different genomes, which can create hybrid species that potentially show transgressive phenotypes. Human activity can open the gene flow door by pursuing harmful actions such as cutting down forests and pollution. Climate change, on the other hand, can open the gene flow door by breaking climate and environmental barriers that were present before.
This convergence between species can give rise to a hybrid species that will have more phenotypic variation when compared to the parent species. This increase in phenotypic variation creates the potential for transgressive segregation to occur. Examples of transgressive segregation: In Kenya, a fungal disease called Septoria tritici blotch (STB) diminishes yield in the wheat crop. The parent species of wheat had little resistance toward STB, but the hybrid species, due to transgressive segregation, showed a higher resistance toward STB and therefore a higher fitness. Higher resistance to STB can be created by crossing together lines that carry effective resistance genes. As a result, out of 36 crosses there were 31 that showed a higher mean fitness than the control (parental) value. These 31 crosses indicate a higher resistance to STB. The crosses used were from other high-yielding commercial wheats, which is advantageous because there is a lower chance of deleterious (unwanted) traits appearing and therefore an increase in beneficial traits. Transgressive segregation has been found to be useful for creating resistance toward this organism in order to increase the yield of the wheat crop. Rieseberg used sunflowers to show the transgressive segregation of parental traits. Helianthus annuus and Helianthus petiolaris are the two parent groups for the hybrids. Ultimately there were three hybrid sunflower species. When compared to the fitness of the parents, the hybrids showed a higher tolerance in areas in which the parent species would not be able to survive, i.e. salt marshes, sand dunes, and deserts. Transgressive segregation allowed these hybrids to survive in areas where the parents could not. Therefore, the hybrids populated areas where the parent species did not. This is due to hybrid species showing more gene expression (phenotypes) than their parents and also having some genes that are transgressive (extreme) in nature. Testing for transgressive segregation: There are many ways to test whether transgressive segregation occurred within a population. One common way to test for transgressive segregation is to use Dunnett's test. This test looks at whether the hybrid species' performance differed from the control group by checking whether or not the mean of the control group (parent species) differs significantly from the means of the other groups. If there is a difference, that is an indication of transgressive segregation. Another commonly used approach is the use of quantitative trait loci (QTL) to assess transgressive segregation. Alleles with QTL effects opposed (either by overdominance or underdominance) to the parental QTL indicate that transgressive segregation occurred. Alleles with QTL effects the same as the predicted parental QTL indicate that there was no transgressive segregation. Importance: Transgressive segregation creates an opportunity for new hybrid species to arise that are more fit than their ancestors. As seen with the STB in Kenya and Rieseberg's sunflowers, transgressive segregation can be used to create a species that is more adaptable and resistant in areas where there is environmental stress. Transgressive segregation can be seen as analogous to genetic engineering in that the goal of each is to create an organism that is more fit than the last.
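As a hedged illustration of the Dunnett-style comparison described in the testing section, the sketch below uses synthetic trait measurements and scipy.stats.dunnett (available in SciPy 1.11 or later); the group names and numbers are invented purely for the example:

```python
import numpy as np
from scipy import stats   # assumes SciPy >= 1.11, which provides stats.dunnett

rng = np.random.default_rng(seed=1)
parent  = rng.normal(10.0, 1.0, size=30)   # control group: the parental line
hybrid1 = rng.normal(10.2, 1.0, size=30)   # hybrid line similar to the parent
hybrid2 = rng.normal(13.5, 1.0, size=30)   # candidate transgressive hybrid line

# Dunnett's test compares each hybrid group against the single control group
# while accounting for the multiple comparisons.
result = stats.dunnett(hybrid1, hybrid2, control=parent)
for name, p in zip(["hybrid1", "hybrid2"], result.pvalue):
    verdict = "differs from parent" if p < 0.05 else "no evidence of difference"
    print(f"{name}: {verdict} (p = {p:.3g})")
```

A significant difference flags a group worth examining further; in practice the hybrid mean would also be compared against the range spanned by both parental lines before calling it transgressive.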
**Invincible error** Invincible error: Vincible ignorance is, in Catholic moral theology, ignorance that a person could remove by applying reasonable diligence in the given set of circumstances. It contrasts with invincible ignorance, which a person is either entirely incapable of removing, or could only remove by supererogatory efforts (i.e., efforts above and beyond normal duty). The first Pope to use the term invincible ignorance officially seems to have been Pope Pius IX in the allocution Singulari Quadam (9 December 1854) and the encyclicals Singulari Quidem (17 March 1856) and Quanto Conficiamur Moerore (10 August 1863). The term, however, is far older than that. Aquinas, for instance, uses it in his Summa Theologica (written 1265–1274), and discussion of the concept can be found as far back as Origen (3rd century). Doctrine of vincible ignorance: It is culpable to remain willfully ignorant of matters that one is obligated to know. While invincible ignorance eliminates culpability, vincible ignorance at most mitigates it, and may even aggravate guilt. The guilt of an action performed in vincible ignorance ought to be measured by the degree of diligence or negligence shown in performing the act. An individual is morally responsible for their ignorance and for the acts resulting from it. If some, though insufficient, diligence was shown in dispelling the ignorance, it is termed merely vincible; it may diminish culpability to the point of rendering a sin venial. When little or no effort is made to remove ignorance, the ignorance is termed crass or supine; it removes little or no guilt. Deliberately fostered ignorance is termed affected or studied; it can increase guilt. Ignorance may be: Of law, when one is unaware of the existence of the law itself, or at least unaware that a particular case falls under its provisions. Doctrine of vincible ignorance: Of fact, when not the relation of something to the law but the thing itself or some circumstance is unknown. Of penalty, when a person is not cognizant that a sanction has been attached to a particular crime. This is especially to be considered when there is a question of more serious punishment. Doctrine of invincible ignorance: "Invincible ignorance excuses from all culpability. An action committed in ignorance of the law prohibiting it, or of the facts of the case, is not a voluntary act." On the other hand, it is culpable to remain willfully ignorant of matters that one is obligated to know (vincible ignorance). In this case the individual is morally responsible for their ignorance, and for the acts resulting from it. The guilt associated with an offense committed in ignorance is less than it would have been if the act were committed in full knowledge, because in that case the offense is less voluntary. Protestant view: Protestants diverged from Catholic doctrine in this area during the Reformation. Martin Luther believed that invincible ignorance was only a valid excuse for offenses against human law. In his view, humans are ignorant of divine law because of original sin, for which all bear guilt. John Calvin agreed that ignorance of God's law is always vincible.
**Total enclosure fetishism** Total enclosure fetishism: Total enclosure fetishism is a form of sexual fetishism whereby a person becomes aroused by having the entire body enclosed in a certain way. Total enclosure is often accompanied by some element of bondage. Examples: Some total enclosure activities include: In rubber fetishism, rubber suits, gas masks and similar garments and accessories are used for total enclosure. Vacuum beds rigidly enclose the entire body under a rubber sheet with a small breathing tube. Sleepsacks and body bags are also used as a less rigid enclosure alternative to vacuum beds, although some are made in inflatable form to increase pressure on the occupant's body. Examples: In spandex fetishism, zentai suits are used for total enclosure in skintight fabric from head to toe. In the case of zentai, the wearer breathes through the loose-woven fabric itself, the garment is not as tight as a rubber or PVC garment would be, and the costume generally comes off with a zipper that can be operated by the wearer. Examples: Being sealed within a giant stuffed animal or murrsuit (sexual fursuit). Although these activities are often regarded as claustrophobic, total enclosure fetishists like to practice them, sometimes combining them with bondage to intensify feelings of helplessness. Risks: As with all activities involving bondage or potential risk to breathing, this is a risky activity. Maintaining an airway, preventing positional asphyxia, and ensuring that the enclosed person has a means of escape at all times are of paramount importance if these activities are not to result in death. See the articles on bondage and erotic asphyxiation for some discussion of the risks involved.
**Tide** Tide: Tides are the rise and fall of sea levels caused by the combined effects of the gravitational forces exerted by the Moon (and, to a much lesser extent, the Sun) and by the Earth and Moon orbiting one another. Tide: Tide tables can be used for any given locale to find the predicted times and amplitude (or "tidal range"). The predictions are influenced by many factors including the alignment of the Sun and Moon, the phase and amplitude of the tide (pattern of tides in the deep ocean), the amphidromic systems of the oceans, and the shape of the coastline and near-shore bathymetry (see Timing). They are, however, only predictions; the actual time and height of the tide is affected by wind and atmospheric pressure. Many shorelines experience semi-diurnal tides—two nearly equal high and low tides each day. Other locations have a diurnal tide—one high and low tide each day. A "mixed tide"—two uneven magnitude tides a day—is a third regular category. Tides vary on timescales ranging from hours to years due to a number of factors, which determine the lunitidal interval. To make accurate records, tide gauges at fixed stations measure water level over time. Gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference (or datum) level usually called mean sea level. While tides are usually the largest source of short-term sea-level fluctuations, sea levels are also subject to change from thermal expansion, wind, and barometric pressure changes, resulting in storm surges, especially in shallow seas and near coasts. Tide: Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field that varies in time and space is present. For example, the shape of the solid part of the Earth is affected slightly by Earth tide, though this is not as easily seen as the water tidal movements. Characteristics: Tide changes proceed via the two main stages: The water stops falling, reaching a local minimum called low tide. The water stops rising, reaching a local maximum called high tide. In some regions, there are two additional possible stages: Sea level rises over several hours, covering the intertidal zone; flood tide. Characteristics: Sea level falls over several hours, revealing the intertidal zone; ebb tide. Oscillating currents produced by tides are known as tidal streams or tidal currents. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water, but there are locations where the moments of slack tide differ significantly from those of high and low water. Tides are commonly semi-diurnal (two high waters and two low waters each day), or diurnal (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the higher high water and the lower high water in tide tables. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the Equator. Characteristics: Reference levels The following reference tide levels can be defined, from the highest level to the lowest: Highest astronomical tide (HAT) – The highest tide which can be predicted to occur. Note that meteorological conditions may add extra height to the HAT.
Mean high water springs (MHWS) – The average of the two high tides on the days of spring tides. Mean high water neaps (MHWN) – The average of the two high tides on the days of neap tides. Mean sea level (MSL) – This is the average sea level. The MSL is constant for any location over a long period. Mean low water neaps (MLWN) – The average of the two low tides on the days of neap tides. Mean low water springs (MLWS) – The average of the two low tides on the days of spring tides. Lowest astronomical tide (LAT) – The lowest tide which can be predicted to occur. Tidal constituents: Tidal constituents are the net result of multiple influences impacting tidal changes over certain periods of time. Primary constituents include the Earth's rotation, the position of the Moon and Sun relative to the Earth, the Moon's altitude (elevation) above the Earth's Equator, and bathymetry. Variations with periods of less than half a day are called harmonic constituents. Conversely, cycles of days, months, or years are referred to as long period constituents. Tidal constituents: Tidal forces affect the entire earth, but the movement of solid Earth occurs by mere centimeters. In contrast, the atmosphere is much more fluid and compressible so its surface moves by kilometers, in the sense of the contour level of a particular low pressure in the outer atmosphere. Tidal constituents: Principal lunar semi-diurnal constituent In most locations, the largest constituent is the principal lunar semi-diurnal, also known as the M2 tidal constituent. Its period is about 12 hours and 25.2 minutes, exactly half a tidal lunar day, which is the average time separating one lunar zenith from the next, and thus is the time required for the Earth to rotate once relative to the Moon. Simple tide clocks track this constituent. The lunar day is longer than the Earth day because the Moon orbits in the same direction the Earth spins. This is analogous to the minute hand on a watch crossing the hour hand at 12:00 and then again at about 1:05½ (not at 1:00). Tidal constituents: The Moon orbits the Earth in the same direction as the Earth rotates on its axis, so it takes slightly more than a day—about 24 hours and 50 minutes—for the Moon to return to the same location in the sky. During this time, it has passed overhead (culmination) once and underfoot once (at an hour angle of 00:00 and 12:00 respectively), so in many places the period of strongest tidal forcing is the above-mentioned, about 12 hours and 25 minutes. The moment of highest tide is not necessarily when the Moon is nearest to zenith or nadir, but the period of the forcing still determines the time between high tides. Tidal constituents: Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to "stretch" the Earth slightly along the line connecting the two bodies. The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally (see equilibrium tide).
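The period figures quoted above can be recovered from two standard constants, the sidereal day and the sidereal month (values assumed here, not taken from the text):

```python
# Standard textbook values (assumed): sidereal day and sidereal month.
sidereal_day_h = 23.9345
sidereal_month_h = 27.3217 * 24

# The Moon and the Earth's rotation advance in the same direction, so the Earth
# must "catch up" with the Moon: 1/T_lunar_day = 1/T_day - 1/T_month.
lunar_day_h = 1 / (1 / sidereal_day_h - 1 / sidereal_month_h)
m2_period_h = lunar_day_h / 2        # M2 period is half a tidal lunar day

def hm(hours):
    return f"{int(hours)} h {(hours % 1) * 60:.1f} min"

print(hm(lunar_day_h))   # ~24 h 50.5 min, the tidal lunar day
print(hm(m2_period_h))   # ~12 h 25.2 min, the M2 period quoted above
```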
Tidal constituents: As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to "catch up" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height. Tidal constituents: When there are two high tides each day with different heights (and two low tides also of different heights), the pattern is called a mixed semi-diurnal tide. Tidal constituents: Range variation: springs and neaps The semi-diurnal range (the difference in height between high and low waters over about half a day) varies in a two-week cycle. Approximately twice a month, around new moon and full moon when the Sun, Moon, and Earth form a line (a configuration known as a syzygy), the tidal force due to the Sun reinforces that due to the Moon. The tide's range is then at its maximum; this is called the spring tide. It is not named after the season, but, like that word, derives from the meaning "jump, burst forth, rise", as in a natural spring. Spring tides are sometimes referred to as syzygy tides. When the Moon is at first quarter or third quarter, the Sun and Moon are separated by 90° when viewed from the Earth, and the solar tidal force partially cancels the Moon's tidal force. At these points in the lunar cycle, the tide's range is at its minimum; this is called the neap tide, or neaps. "Neap" is an Anglo-Saxon word meaning "without the power", as in forðganges nip (forth-going without-the-power). Tidal constituents: Neap tides are sometimes referred to as quadrature tides. Spring tides result in high waters that are higher than average, low waters that are lower than average, "slack water" time that is shorter than average, and stronger tidal currents than average. Neaps result in less extreme tidal conditions. There is about a seven-day interval between springs and neaps. Tidal constituents: Lunar distance The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. Six or eight times a year perigee coincides with either a new or full moon, causing perigean spring tides with the largest tidal range. The difference between the height of a tide at perigean spring tide and the spring tide when the moon is at apogee depends on location but can be as much as a foot. Tidal constituents: Other constituents These include solar gravitational effects, the obliquity (tilt) of the Earth's Equator and rotational axis, the inclination of the plane of the lunar orbit and the elliptical shape of the Earth's orbit of the Sun. A compound tide (or overtide) results from the shallow-water interaction of its two parent waves. Tidal constituents: Phase and amplitude Because the M2 tidal constituent dominates in most locations, the stage or phase of a tide, denoted by the time in hours after high water, is a useful concept. Tidal stage is also measured in degrees, with 360° per tidal cycle. Lines of constant tidal phase are called cotidal lines, which are analogous to contour lines of constant altitude on topographical maps, and when plotted form a cotidal map or cotidal chart. High water is reached simultaneously along the cotidal lines extending from the coast out into the ocean, and cotidal lines (and hence tidal phases) advance along the coast.
Semi-diurnal and long phase constituents are measured from high water, diurnal from maximum flood tide. This and the discussion that follows is precisely true only for a single tidal constituent. Tidal constituents: For an ocean in the shape of a circular basin enclosed by a coastline, the cotidal lines point radially inward and must eventually meet at a common point, the amphidromic point. The amphidromic point is at once cotidal with high and low waters, which is satisfied by zero tidal motion. (The rare exception occurs when the tide encircles an island, as it does around New Zealand, Iceland and Madagascar.) Tidal motion generally lessens moving away from continental coasts, so that crossing the cotidal lines are contours of constant amplitude (half the distance between high and low water) which decrease to zero at the amphidromic point. For a semi-diurnal tide the amphidromic point can be thought of roughly like the center of a clock face, with the hour hand pointing in the direction of the high water cotidal line, which is directly opposite the low water cotidal line. High water rotates about the amphidromic point once every 12 hours in the direction of rising cotidal lines, and away from ebbing cotidal lines. This rotation, caused by the Coriolis effect, is generally clockwise in the southern hemisphere and counterclockwise in the northern hemisphere. The difference of cotidal phase from the phase of a reference tide is the epoch. The reference tide is the hypothetical constituent "equilibrium tide" on a landless Earth measured at 0° longitude, the Greenwich meridian.In the North Atlantic, because the cotidal lines circulate counterclockwise around the amphidromic point, the high tide passes New York Harbor approximately an hour ahead of Norfolk Harbor. South of Cape Hatteras the tidal forces are more complex, and cannot be predicted reliably based on the North Atlantic cotidal lines. History: History of tidal theory Investigation into tidal physics was important in the early development of celestial mechanics, with the existence of two daily tides being explained by the Moon's gravity. Later the daily tides were explained more precisely by the interaction of the Moon's and the Sun's gravity. History: Seleucus of Seleucia theorized around 150 BC that tides were caused by the Moon. The influence of the Moon on bodies of water was also mentioned in Ptolemy's Tetrabiblos.In De temporum ratione (The Reckoning of Time) of 725 Bede linked semidurnal tides and the phenomenon of varying tidal heights to the Moon and its phases. Bede starts by noting that the tides rise and fall 4/5 of an hour later each day, just as the Moon rises and sets 4/5 of an hour later. He goes on to emphasise that in two lunar months (59 days) the Moon circles the Earth 57 times and there are 114 tides. Bede then observes that the height of tides varies over the month. Increasing tides are called malinae and decreasing tides ledones and that the month is divided into four parts of seven or eight days with alternating malinae and ledones. In the same passage he also notes the effect of winds to hold back tides. Bede also records that the time of tides varies from place to place. To the north of Bede's location (Monkwearmouth) the tides are earlier, to the south later. 
He explains that the tide "deserts these shores in order to be able all the more to be able to flood other [shores] when it arrives there" noting that "the Moon which signals the rise of tide here, signals its retreat in other regions far from this quarter of the heavens".Medieval understanding of the tides was primarily based on works of Muslim astronomers, which became available through Latin translation starting from the 12th century. Abu Ma'shar al-Balkhi (d. circa 886), in his Introductorium in astronomiam, taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji (d. circa 1204) contributed the notion that the tides were caused by the general circulation of the heavens.Simon Stevin, in his 1608 De spiegheling der Ebbenvloet (The theory of ebb and flood), dismissed a large number of misconceptions that still existed about ebb and flood. Stevin pleaded for the idea that the attraction of the Moon was responsible for the tides and spoke in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made.In 1609 Johannes Kepler also correctly suggested that the gravitation of the Moon caused the tides, which he based upon ancient observations and correlations. History: Galileo Galilei in his 1632 Dialogue Concerning the Two Chief World Systems, whose working title was Dialogue on the Tides, gave an explanation of the tides. The resulting theory, however, was incorrect as he attributed the tides to the sloshing of water caused by the Earth's movement around the Sun. He hoped to provide mechanical proof of the Earth's movement. The value of his tidal theory is disputed. Galileo rejected Kepler's explanation of the tides. History: Isaac Newton (1642–1727) was the first person to explain tides as the product of the gravitational attraction of astronomical masses. His explanation of the tides (and many other phenomena) was published in the Principia (1687) and used his theory of universal gravitation to explain the lunar and solar attractions as the origin of the tide-generating forces. History: Newton and others before Pierre-Simon Laplace worked the problem from the perspective of a static system (equilibrium theory), that provided an approximation that described the tides that would occur in a non-inertial ocean evenly covering the whole Earth. The tide-generating force (or its corresponding potential) is still relevant to tidal theory, but as an intermediate quantity (forcing function) rather than as a final result; theory must also consider the Earth's accumulated dynamic tidal response to the applied forces, which response is influenced by ocean depth, the Earth's rotation, and other factors.In 1740, the Académie Royale des Sciences in Paris offered a prize for the best theoretical essay on tides. Daniel Bernoulli, Leonhard Euler, Colin Maclaurin and Antoine Cavalleri shared the prize.Maclaurin used Newton's theory to show that a smooth sphere covered by a sufficiently deep ocean under the tidal force of a single deforming body is a prolate spheroid (essentially a three-dimensional oval) with major axis directed toward the deforming body. Maclaurin was the first to write about the Earth's rotational effects on motion. Euler realized that the tidal force's horizontal component (more than the vertical) drives the tide. In 1744 Jean le Rond d'Alembert studied tidal equations for the atmosphere which did not include rotation. 
History: In 1770 James Cook's barque HMS Endeavour grounded on the Great Barrier Reef. Attempts were made to refloat her on the following tide which failed, but the tide after that lifted her clear with ease. Whilst she was being repaired in the mouth of the Endeavour River Cook observed the tides over a period of seven weeks. At neap tides both tides in a day were similar, but at springs the tides rose 7 feet (2.1 m) in the morning but 9 feet (2.7 m) in the evening.Pierre-Simon Laplace formulated a system of partial differential equations relating the ocean's horizontal flow to its surface height, the first major dynamic theory for water tides. The Laplace tidal equations are still in use today. William Thomson, 1st Baron Kelvin, rewrote Laplace's equations in terms of vorticity which allowed for solutions describing tidally driven coastally trapped waves, known as Kelvin waves.Others including Kelvin and Henri Poincaré further developed Laplace's theory. Based on these developments and the lunar theory of E W Brown describing the motions of the Moon, Arthur Thomas Doodson developed and published in 1921 the first modern development of the tide-generating potential in harmonic form: Doodson distinguished 388 tidal frequencies. Some of his methods remain in use. History: History of tidal observation From ancient times, tidal observation and discussion has increased in sophistication, first marking the daily recurrence, then tides' relationship to the Sun and moon. Pytheas travelled to the British Isles about 325 BC and seems to be the first to have related spring tides to the phase of the moon. History: In the 2nd century BC, the Hellenistic astronomer Seleucus of Seleucia correctly described the phenomenon of tides in order to support his heliocentric theory. He correctly theorized that tides were caused by the moon, although he believed that the interaction was mediated by the pneuma. He noted that tides varied in time and strength in different parts of the world. According to Strabo (1.1.9), Seleucus was the first to link tides to the lunar attraction, and that the height of the tides depends on the moon's position relative to the Sun.The Naturalis Historia of Pliny the Elder collates many tidal observations, e.g., the spring tides are a few days after (or before) new and full moon and are highest around the equinoxes, though Pliny noted many relationships now regarded as fanciful. In his Geography, Strabo described tides in the Persian Gulf having their greatest range when the moon was furthest from the plane of the Equator. All this despite the relatively small amplitude of Mediterranean basin tides. (The strong currents through the Euripus Strait and the Strait of Messina puzzled Aristotle.) Philostratus discussed tides in Book Five of The Life of Apollonius of Tyana. Philostratus mentions the moon, but attributes tides to "spirits". In Europe around 730 AD, the Venerable Bede described how the rising tide on one coast of the British Isles coincided with the fall on the other and described the time progression of high water along the Northumbrian coast. History: The first tide table in China was recorded in 1056 AD primarily for visitors wishing to see the famous tidal bore in the Qiantang River. The first known British tide table is thought to be that of John Wallingford, who died Abbot of St. 
Albans in 1213, based on high water occurring 48 minutes later each day, and three hours earlier at the Thames mouth than upriver at London. In 1614 Claude d'Abbeville published the work "Histoire de la mission de pères capucins en l'Isle de Maragnan et terres circonvoisines", where he exposed that the Tupinambá people already had an understanding of the relation between the Moon and the tides before Europe. William Thomson (Lord Kelvin) led the first systematic harmonic analysis of tidal records starting in 1867. The main result was the building of a tide-predicting machine using a system of pulleys to add together six harmonic time functions. It was "programmed" by resetting gears and chains to adjust phasing and amplitudes. Similar machines were used until the 1960s. The first known sea-level record of an entire spring–neap cycle was made in 1831 on the Navy Dock in the Thames Estuary. Many large ports had automatic tide gauge stations by 1850. History: John Lubbock was one of the first to map co-tidal lines, for Great Britain, Ireland and adjacent coasts, in 1840. William Whewell expanded this work ending with a nearly global chart in 1836. In order to make these maps consistent, he hypothesized the existence of a region with no tidal rise or fall where co-tidal lines meet in the mid-ocean. The existence of such an amphidromic point, as they are now known, was confirmed in 1840 by Captain William Hewett, RN, from careful soundings in the North Sea. Physics: Forces The tidal force produced by a massive object (Moon, hereafter) on a small particle located on or in an extensive body (Earth, hereafter) is the vector difference between the gravitational force exerted by the Moon on the particle, and the gravitational force that would be exerted on the particle if it were located at the Earth's center of mass. Physics: Whereas the gravitational force exerted by a celestial body on Earth varies inversely as the square of its distance to the Earth, the maximal tidal force varies inversely as, approximately, the cube of this distance. If the tidal force caused by each body were instead equal to its full gravitational force (which is not the case due to the free fall of the whole Earth, not only the oceans, towards these bodies) a different pattern of tidal forces would be observed, e.g. with a much stronger influence from the Sun than from the Moon: The solar gravitational force on the Earth is on average 179 times stronger than the lunar, but because the Sun is on average 389 times farther from the Earth, its field gradient is weaker. The overall proportionality is tidal force ∝ M/d³ ∝ ρ(r/d)³, where M is the mass of the heavenly body, d is its distance, ρ is its average density, and r is its radius. The ratio r/d is related to the angle subtended by the object in the sky. Since the Sun and the Moon have practically the same diameter in the sky, the tidal force of the Sun is less than that of the Moon because its average density is much less, and it is only 46% as large as the lunar; thus during a spring tide, the Moon contributes 69% while the Sun contributes 31%. More precisely, the lunar tidal acceleration (along the Moon–Earth axis, at the Earth's surface) is about 1.1×10⁻⁷ g, while the solar tidal acceleration (along the Sun–Earth axis, at the Earth's surface) is about 0.52×10⁻⁷ g, where g is the gravitational acceleration at the Earth's surface. The effects of the other planets vary as their distances from Earth vary.
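The figures quoted above can be checked directly from the leading-order tidal-acceleration formula 2GMr/d³, using standard values for the masses and mean distances (assumed constants, not taken from the text):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
R_earth = 6.371e6      # Earth's radius, m
g = 9.81               # surface gravitational acceleration, m s^-2

def tidal_acceleration(M, d):
    """Leading-order tidal acceleration at Earth's surface: 2 G M R / d^3."""
    return 2 * G * M * R_earth / d**3

a_moon = tidal_acceleration(7.342e22, 3.844e8)    # Moon: mass (kg), mean distance (m)
a_sun  = tidal_acceleration(1.989e30, 1.496e11)   # Sun

print(a_moon / g)      # ~1.1e-7 g, as quoted above
print(a_sun / g)       # ~0.52e-7 g
print(a_sun / a_moon)  # ~0.46: the Sun's tide-raising force is about 46% of the Moon's
```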
When Venus is closest to Earth, its effect is 0.000113 times the solar effect. At other times, Jupiter or Mars may have the most effect. Physics: The ocean's surface is approximated by a surface referred to as the geoid, which takes into consideration the gravitational force exerted by the earth as well as centrifugal force due to rotation. Now consider the effect of massive external bodies such as the Moon and Sun. These bodies have strong gravitational fields that diminish with distance and cause the ocean's surface to deviate from the geoid. They establish a new equilibrium ocean surface which bulges toward the moon on one side and away from the moon on the other side. The earth's rotation relative to this shape causes the daily tidal cycle. The ocean surface tends toward this equilibrium shape, which is constantly changing, and never quite attains it. When the ocean surface is not aligned with it, it's as though the surface is sloping, and water accelerates in the down-slope direction. Physics: Equilibrium The equilibrium tide is the idealized tide assuming a landless Earth. It would produce a tidal bulge in the ocean, elongated towards the attracting body (Moon or Sun). It is not caused by the vertical pull nearest or farthest from the body, which is very weak; rather, it is caused by the tangent or "tractive" tidal force, which is strongest at about 45 degrees from the body, resulting in a horizontal tidal current. Laplace's tidal equations Ocean depths are much smaller than their horizontal extent. Thus, the response to tidal forcing can be modelled using the Laplace tidal equations which incorporate the following features: The vertical (or radial) velocity is negligible, and there is no vertical shear—this is a sheet flow. The forcing is only horizontal (tangential). The Coriolis effect appears as an inertial force (fictitious) acting laterally to the direction of flow and proportional to velocity. The surface height's rate of change is proportional to the negative divergence of velocity multiplied by the depth. As the horizontal velocity stretches or compresses the ocean as a sheet, the volume thins or thickens, respectively.The boundary conditions dictate no flow across the coastline and free slip at the bottom. The Coriolis effect (inertial force) steers flows moving towards the Equator to the west and flows moving away from the Equator toward the east, allowing coastally trapped waves. Finally, a dissipation term can be added which is an analog to viscosity. Physics: Amplitude and cycle time The theoretical amplitude of oceanic tides caused by the Moon is about 54 centimetres (21 in) at the highest point, which corresponds to the amplitude that would be reached if the ocean possessed a uniform depth, there were no landmasses, and the Earth were rotating in step with the Moon's orbit. The Sun similarly causes tides, of which the theoretical amplitude is about 25 centimetres (9.8 in) (46% of that of the Moon) with a cycle time of 12 hours. At spring tide the two effects add to each other to a theoretical level of 79 centimetres (31 in), while at neap tide the theoretical level is reduced to 29 centimetres (11 in). Since the orbits of the Earth about the Sun, and the Moon about the Earth, are elliptical, tidal amplitudes change somewhat as a result of the varying Earth–Sun and Earth–Moon distances. This causes a variation in the tidal force and theoretical amplitude of about ±18% for the Moon and ±5% for the Sun. 
If both the Sun and Moon were at their closest positions and aligned at new moon, the theoretical amplitude would reach 93 centimetres (37 in). Physics: Real amplitudes differ considerably, not only because of depth variations and continental obstacles, but also because wave propagation across the ocean has a natural period of the same order of magnitude as the rotation period: if there were no land masses, it would take about 30 hours for a long wavelength surface wave to propagate along the Equator halfway around the Earth (by comparison, the Earth's lithosphere has a natural period of about 57 minutes). Earth tides, which raise and lower the bottom of the ocean, and the tide's own gravitational self attraction are both significant and further complicate the ocean's response to tidal forces. Physics: Dissipation Earth's tidal oscillations introduce dissipation at an average rate of about 3.75 terawatts. About 98% of this dissipation is by marine tidal movement. Dissipation arises as basin-scale tidal flows drive smaller-scale flows which experience turbulent dissipation. This tidal drag creates torque on the moon that gradually transfers angular momentum to its orbit, and a gradual increase in Earth–moon separation. The equal and opposite torque on the Earth correspondingly decreases its rotational velocity. Thus, over geologic time, the moon recedes from the Earth, at about 3.8 centimetres (1.5 in)/year, lengthening the terrestrial day.Day length has increased by about 2 hours in the last 600 million years. Assuming (as a crude approximation) that the deceleration rate has been constant, this would imply that 70 million years ago, day length was on the order of 1% shorter with about 4 more days per year. Physics: Bathymetry The shape of the shoreline and the ocean floor changes the way that tides propagate, so there is no simple, general rule that predicts the time of high water from the Moon's position in the sky. Coastal characteristics such as underwater bathymetry and coastline shape mean that individual location characteristics affect tide forecasting; actual high water time and height may differ from model predictions due to the coastal morphology's effects on tidal flow. However, for a given location the relationship between lunar altitude and the time of high or low tide (the lunitidal interval) is relatively constant and predictable, as is the time of high or low tide relative to other points on the same coast. For example, the high tide at Norfolk, Virginia, U.S., predictably occurs approximately two and a half hours before the Moon passes directly overhead. Physics: Land masses and ocean basins act as barriers against water moving freely around the globe, and their varied shapes and sizes affect the size of tidal frequencies. As a result, tidal patterns vary. For example, in the U.S., the East coast has predominantly semi-diurnal tides, as do Europe's Atlantic coasts, while the West coast predominantly has mixed tides. Human changes to the landscape can also significantly alter local tides. Observation and prediction: Timing The tidal forces due to the Moon and Sun generate very long waves which travel all around the ocean following the paths shown in co-tidal charts. The time when the crest of the wave reaches a port then gives the time of high water at the port. The time taken for the wave to travel around the ocean also means that there is a delay between the phases of the Moon and their effect on the tide. 
Springs and neaps in the North Sea, for example, are two days behind the new/full moon and first/third quarter moon. This is called the tide's age. The ocean bathymetry greatly influences the tide's exact time and height at a particular coastal point. There are some extreme cases; the Bay of Fundy, on the east coast of Canada, is often stated to have the world's highest tides because of its shape, bathymetry, and its distance from the continental shelf edge. Measurements made in November 1998 at Burntcoat Head in the Bay of Fundy recorded a maximum range of 16.3 metres (53 ft) and a highest predicted extreme of 17 metres (56 ft). Similar measurements made in March 2002 at Leaf Basin, Ungava Bay in northern Quebec gave similar values (allowing for measurement errors), a maximum range of 16.2 metres (53 ft) and a highest predicted extreme of 16.8 metres (55 ft). Ungava Bay and the Bay of Fundy lie similar distances from the continental shelf edge, but Ungava Bay is only free of pack ice for about four months every year while the Bay of Fundy rarely freezes. Observation and prediction: Southampton in the United Kingdom has a double high water caused by the interaction between the M2 and M4 tidal constituents (shallow-water overtides of the principal lunar tide). Portland has double low waters for the same reason. The M4 tide is found all along the south coast of the United Kingdom, but its effect is most noticeable between the Isle of Wight and Portland because the M2 tide is lowest in this region. Observation and prediction: Because the oscillation modes of the Mediterranean Sea and the Baltic Sea do not coincide with any significant astronomical forcing period, the largest tides are close to their narrow connections with the Atlantic Ocean. Extremely small tides also occur for the same reason in the Gulf of Mexico and Sea of Japan. Elsewhere, as along the southern coast of Australia, low tides can be due to the presence of a nearby amphidrome. Observation and prediction: Analysis Isaac Newton's theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and offered hope for a detailed understanding of tidal forces and behavior. Although it may seem that tides could be predicted via a sufficiently detailed knowledge of instantaneous astronomical forcings, the actual tide at a given location is determined by astronomical forces accumulated by the body of water over many days. In addition, accurate results would require detailed knowledge of the shape of all the ocean basins: their bathymetry and coastline shape. Observation and prediction: Current procedure for analysing tides follows the method of harmonic analysis introduced in the 1860s by William Thomson. It is based on the principle that the astronomical theories of the motions of Sun and Moon determine a large number of component frequencies, and at each frequency there is a component of force tending to produce tidal motion, but that at each place of interest on the Earth, the tides respond at each frequency with an amplitude and phase peculiar to that locality. At each place of interest, the tide heights are therefore measured for a period of time sufficiently long (usually more than a year in the case of a new port not previously studied) to enable the response at each significant tide-generating frequency to be distinguished by analysis, and to extract the tidal constants for a sufficient number of the strongest known components of the astronomical tidal forces to enable practical tide prediction.
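To make the procedure concrete, here is a minimal sketch of the synthesis step (Python; only the constituent speeds are real astronomical values, while the amplitudes and phases are hypothetical numbers standing in for the tidal constants that a real analysis would produce for a particular port):

```python
import math

# Minimal illustrative sketch, not an official tide-prediction code. Once harmonic
# analysis of a long tide-gauge record has produced an amplitude and phase for each
# constituent at a port, prediction is a sum of cosines. The speeds (degrees per hour)
# are the standard astronomical values; the amplitudes and phases are invented here.
constituents = {
    #       speed (deg/h), amplitude (m), phase lag (deg)
    "M2": (28.984, 1.20, 110.0),   # principal lunar semi-diurnal
    "S2": (30.000, 0.40, 140.0),   # principal solar semi-diurnal
    "K1": (15.041, 0.15, 200.0),   # luni-solar diurnal
    "O1": (13.943, 0.10, 180.0),   # lunar diurnal
}

def predicted_height(t_hours, mean_level=0.0):
    """Predicted tide height (metres above mean level) t_hours after the epoch."""
    height = mean_level
    for speed, amplitude, phase in constituents.values():
        height += amplitude * math.cos(math.radians(speed * t_hours - phase))
    return height

for t in range(0, 25, 3):
    print(f"t = {t:2d} h : {predicted_height(t):+.2f} m")
```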
The tide heights are expected to follow the tidal force, with a constant amplitude and phase delay for each component. Because astronomical frequencies and phases can be calculated with certainty, the tide height at other times can then be predicted once the response to the harmonic components of the astronomical tide-generating forces has been found. Observation and prediction: The main patterns in the tides are: the twice-daily variation; the difference between the first and second tide of a day; the spring–neap cycle; and the annual variation. The Highest Astronomical Tide is the perigean spring tide when both the Sun and Moon are closest to the Earth. Observation and prediction: When confronted by a periodically varying function, the standard approach is to employ Fourier series, a form of analysis that uses sinusoidal functions as a basis set, having frequencies that are zero, one, two, three, etc. times the frequency of a particular fundamental cycle. These multiples are called harmonics of the fundamental frequency, and the process is termed harmonic analysis. If the basis set of sinusoidal functions suits the behaviour being modelled, relatively few harmonic terms need to be added. Orbital paths are very nearly circular, so sinusoidal variations are suitable for tides. Observation and prediction: For the analysis of tide heights, the Fourier series approach has in practice to be made more elaborate than the use of a single frequency and its harmonics. The tidal patterns are decomposed into many sinusoids having many fundamental frequencies, corresponding (as in the lunar theory) to many different combinations of the motions of the Earth, the Moon, and the angles that define the shape and location of their orbits. Observation and prediction: For tides, then, harmonic analysis is not limited to harmonics of a single frequency. In other words, the harmonics are multiples of many fundamental frequencies, not just of the fundamental frequency of the simpler Fourier series approach. Their representation as a Fourier series having only one fundamental frequency and its (integer) multiples would require many terms, and would be severely limited in the time-range for which it would be valid. Observation and prediction: The study of tide height by harmonic analysis was begun by Laplace, William Thomson (Lord Kelvin), and George Darwin. A.T. Doodson extended their work, introducing the Doodson Number notation to organise the hundreds of resulting terms. This approach has been the international standard ever since, and the complications arise as follows: the tide-raising force is notionally given by sums of several terms. Each term is of the form Ao·cos(ωt + p), where Ao is the amplitude, ω is the angular frequency (usually given in degrees per hour, corresponding to t measured in hours), and p is the phase offset with regard to the astronomical state at time t = 0. There is one term for the Moon and a second term for the Sun. The phase p of the first harmonic for the Moon term is called the lunitidal interval or high water interval. The next refinement is to accommodate the harmonic terms due to the elliptical shape of the orbits. To do so, the value of the amplitude is taken to be not a constant, but varying with time, about the average amplitude Ao. Accordingly, replace Ao in the above equation with A(t), where A is another sinusoid, similar to the cycles and epicycles of Ptolemaic theory.
This gives A(t) = Ao·(1 + Aa·cos(ωa·t + pa)), which is to say an average value Ao with a sinusoidal variation about it of magnitude Aa, with frequency ωa and phase pa. Substituting this for Ao in the original equation gives a product of two cosine factors: Ao·(1 + Aa·cos(ωa·t + pa))·cos(ωt + p). Observation and prediction: Given that for any x and y, cos x·cos y = ½·cos(x + y) + ½·cos(x − y), it is clear that a compound term involving the product of two cosine terms each with their own frequency is the same as three simple cosine terms that are to be added at the original frequency and also at frequencies which are the sum and difference of the two frequencies of the product term. (Three, not two, terms, since the whole expression is (1 + Aa·cos(ωa·t + pa))·cos(ωt + p).) Consider further that the tidal force on a location depends also on whether the Moon (or the Sun) is above or below the plane of the Equator, and that these attributes have their own periods also incommensurable with a day and a month, and it is clear that many combinations result. With a careful choice of the basic astronomical frequencies, the Doodson Number annotates the particular additions and differences to form the frequency of each simple cosine term. Observation and prediction: Remember that astronomical tides do not include weather effects. Also, changes to local conditions (sandbank movement, dredging harbour mouths, etc.) away from those prevailing at the measurement time affect the tide's actual timing and magnitude. Organisations quoting a "highest astronomical tide" for some location may exaggerate the figure as a safety factor against analytical uncertainties, distance from the nearest measurement point, changes since the last observation time, ground subsidence, etc., to avert liability should an engineering work be overtopped. Special care is needed when assessing the size of a "weather surge" by subtracting the astronomical tide from the observed tide. Observation and prediction: Careful Fourier data analysis over a nineteen-year period (the National Tidal Datum Epoch in the U.S.) uses frequencies called the tidal harmonic constituents. Nineteen years is preferred because the Earth, Moon and Sun's relative positions repeat almost exactly in the Metonic cycle of 19 years, which is long enough to include the 18.613-year lunar nodal tidal constituent. This analysis can be done using only the knowledge of the forcing period, but without detailed understanding of the mathematical derivation, which means that useful tidal tables have been constructed for centuries. The resulting amplitudes and phases can then be used to predict the expected tides. These are usually dominated by the constituents near 12 hours (the semi-diurnal constituents), but there are major constituents near 24 hours (diurnal) as well. Longer-term constituents are fortnightly (14-day), monthly, and semiannual. Semi-diurnal tides dominate most coastlines, but some areas such as the South China Sea and the Gulf of Mexico are primarily diurnal. In the semi-diurnal areas, the periods of the primary constituents M2 (lunar) and S2 (solar) differ slightly, so that the relative phases, and thus the amplitude of the combined tide, change fortnightly (a 14-day period). In cotidal maps of the M2 constituent, each cotidal line differs by one hour from its neighbors, and the thicker lines show tides in phase with equilibrium at Greenwich. The lines rotate around the amphidromic points counterclockwise in the northern hemisphere so that from Baja California Peninsula to Alaska and from France to Ireland the M2 tide propagates northward.
In the southern hemisphere this direction is clockwise. On the other hand, M2 tide propagates counterclockwise around New Zealand, but this is because the islands act as a dam and permit the tides to have different heights on the islands' opposite sides. (The tides do propagate northward on the east side and southward on the west coast, as predicted by theory.) The exception is at Cook Strait where the tidal currents periodically link high to low water. This is because cotidal lines 180° around the amphidromes are in opposite phase, for example high water across from low water at each end of Cook Strait. Each tidal constituent has a different pattern of amplitudes, phases, and amphidromic points, so the M2 patterns cannot be used for other tide components. Observation and prediction: Example calculation Because the Moon is moving in its orbit around the Earth and in the same sense as the Earth's rotation, a point on the Earth must rotate slightly further to catch up so that the time between semi-diurnal tides is not twelve but 12.4206 hours—a bit over twenty-five minutes extra. The two peaks are not equal. The two high tides a day alternate in maximum heights: lower high (just under three feet), higher high (just over three feet), and again lower high. Likewise for the low tides. Observation and prediction: When the Earth, Moon, and Sun are in line (Sun–Earth–Moon, or Sun–Moon–Earth) the two main influences combine to produce spring tides; when the two forces are opposing each other as when the angle Moon–Earth–Sun is close to ninety degrees, neap tides result. As the Moon moves around its orbit it changes from north of the Equator to south of the Equator. The alternation in high tide heights becomes smaller, until they are the same (at the lunar equinox, the Moon is above the Equator), then redevelop but with the other polarity, waxing to a maximum difference and then waning again. Observation and prediction: Current The tides' influence on current or flow is much more difficult to analyze, and data is much more difficult to collect. A tidal height is a scalar quantity and varies smoothly over a wide region. A flow is a vector quantity, with magnitude and direction, both of which can vary substantially with depth and over short distances due to local bathymetry. Also, although a water channel's center is the most useful measuring site, mariners object when current-measuring equipment obstructs waterways. A flow proceeding up a curved channel may have similar magnitude, even though its direction varies continuously along the channel. Surprisingly, flood and ebb flows are often not in opposite directions. Flow direction is determined by the upstream channel's shape, not the downstream channel's shape. Likewise, eddies may form in only one flow direction. Observation and prediction: Nevertheless, tidal current analysis is similar to tidal heights analysis: in the simple case, at a given location the flood flow is in mostly one direction, and the ebb flow in another direction. Flood velocities are given positive sign, and ebb velocities negative sign. Analysis proceeds as though these are tide heights. Observation and prediction: In more complex situations, the main ebb and flood flows do not dominate. Instead, the flow direction and magnitude trace an ellipse over a tidal cycle (on a polar plot) instead of along the ebb and flood lines. In this case, analysis might proceed along pairs of directions, with the primary and secondary directions at right angles. 
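A minimal sketch of that directional decomposition might look as follows (Python; the hourly current vectors are invented for illustration, and the principal-axis estimate is just one simple way to choose the primary direction):

```python
import math

# Illustrative only: hourly current observations (east, north components in knots)
# over roughly one semi-diurnal cycle. These numbers are invented for the example;
# they are not measurements from any real channel.
obs = [(0.2, 1.1), (0.6, 1.4), (0.9, 0.8), (0.7, 0.1), (0.2, -0.6),
       (-0.4, -1.2), (-0.8, -1.5), (-0.9, -0.7), (-0.5, 0.0), (-0.1, 0.7)]

# Estimate the primary (major) axis as the direction of maximum variance about the
# origin (a simple principal-axis estimate, assuming negligible mean flow), then
# take the secondary axis at right angles to it.
sxx = sum(e * e for e, n in obs)
syy = sum(n * n for e, n in obs)
sxy = sum(e * n for e, n in obs)
theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)   # primary-axis angle, radians from east

primary = [e * math.cos(theta) + n * math.sin(theta) for e, n in obs]
secondary = [-e * math.sin(theta) + n * math.cos(theta) for e, n in obs]

print(f"primary axis: {math.degrees(theta):.1f} degrees from east")
print("along primary:  ", [round(v, 2) for v in primary])
print("along secondary:", [round(v, 2) for v in secondary])
# Each signed series can then be analysed with the same harmonic methods as a
# tide-height record, treating flood as positive and ebb as negative.
```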
An alternative is to treat the tidal flows as complex numbers, as each value has both a magnitude and a direction. Observation and prediction: Tide flow information is most commonly seen on nautical charts, presented as a table of flow speeds and bearings at hourly intervals, with separate tables for spring and neap tides. The timing is relative to high water at some harbour where the tidal behaviour is similar in pattern, though it may be far away. As with tide height predictions, tide flow predictions based only on astronomical factors do not incorporate weather conditions, which can completely change the outcome. Observation and prediction: The tidal flow through Cook Strait between the two main islands of New Zealand is particularly interesting, as the tides on each side of the strait are almost exactly out of phase, so that one side's high water is simultaneous with the other's low water. Strong currents result, with almost zero tidal height change in the strait's center. Yet, although the tidal surge normally flows in one direction for six hours and in the reverse direction for six hours, a particular surge might last eight or ten hours with the reverse surge enfeebled. In especially boisterous weather conditions, the reverse surge might be entirely overcome so that the flow continues in the same direction through three or more surge periods. Observation and prediction: A further complication for Cook Strait's flow pattern is that the tide at the south side (e.g. at Nelson) follows the common bi-weekly spring–neap tide cycle (as found along the west side of the country), but the north side's tidal pattern has only one cycle per month, as on the east side (Wellington and Napier). Observation and prediction: The graph of Cook Strait's tides shows separately the high water and low water height and time, through November 2007; these are not measured values but instead are calculated from tidal parameters derived from years-old measurements. Cook Strait's nautical chart offers tidal current information. For instance, the January 1979 edition for 41°13·9’S 174°29·6’E (north west of Cape Terawhiti) refers timings to Westport, while the January 2004 issue refers to Wellington. Near Cape Terawhiti in the middle of Cook Strait the tidal height variation is almost nil while the tidal current reaches its maximum, especially near the notorious Karori Rip. Aside from weather effects, the actual currents through Cook Strait are influenced by the tidal height differences between the two ends of the strait and, as can be seen, only one of the two spring tides at the north-west end of the strait near Nelson has a counterpart spring tide at the south-east end (Wellington), so the resulting behaviour follows neither reference harbour. Observation and prediction: Power generation Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, and ship navigation is disrupted. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance at Saint Malo, France), and they face many difficulties.
Aside from environmental issues, simply withstanding corrosion and biological fouling poses engineering challenges. Observation and prediction: Tidal power proponents point out that, unlike wind power systems, generation levels can be reliably predicted, save for weather effects. While some generation is possible for most of the tidal cycle, in practice turbines lose efficiency at lower operating rates. Since the power available from a flow is proportional to the cube of the flow speed, the times during which high power generation is possible are brief. Navigation: Tidal flows are important for navigation, and significant errors in position occur if they are not accommodated. Tidal heights are also important; for example, many rivers and harbours have a shallow "bar" at the entrance which prevents boats with significant draft from entering at low tide. Navigation: Until the advent of automated navigation, competence in calculating tidal effects was important to naval officers. The certificate of examination for lieutenants in the Royal Navy once declared that the prospective officer was able to "shift his tides". Tidal flow timings and velocities appear in tide charts or a tidal stream atlas. Tide charts come in sets. Each chart covers a single hour between one high water and another (they ignore the leftover 24 minutes) and shows the average tidal flow for that hour. An arrow on the tidal chart indicates the direction and the average flow speed (usually in knots) for spring and neap tides. If a tide chart is not available, most nautical charts have "tidal diamonds" which relate specific points on the chart to a table giving tidal flow direction and speed. Navigation: The standard procedure to counteract tidal effects on navigation is to (1) calculate a "dead reckoning" position (or DR) from travel distance and direction, (2) mark the chart (with a vertical cross like a plus sign) and (3) draw a line from the DR in the tide's direction. The distance the tide moves the boat along this line is computed from the tidal speed, and this gives an "estimated position" or EP (traditionally marked with a dot in a triangle). Navigation: Nautical charts display the water's "charted depth" at specific locations with "soundings" and the use of bathymetric contour lines to depict the submerged surface's shape. These depths are relative to a "chart datum", which is typically the water level at the lowest possible astronomical tide (although other datums are commonly used, especially historically, and tides may be lower or higher for meteorological reasons) and are therefore the minimum possible water depth during the tidal cycle. "Drying heights" may also be shown on the chart, which are the heights of the exposed seabed at the lowest astronomical tide. Navigation: Tide tables list each day's high and low water heights and times. To calculate the actual water depth, add the charted depth to the published tide height. Depth for other times can be derived from tidal curves published for major ports. The rule of twelfths can suffice if an accurate curve is not available. This approximation presumes that the increase in depth in the six hours between low and high water is: first hour, 1/12; second, 2/12; third, 3/12; fourth, 3/12; fifth, 2/12; sixth, 1/12. Biological aspects: Intertidal ecology Intertidal ecology is the study of ecosystems between the low- and high-water lines along a shore. At low water, the intertidal zone is exposed (or emersed), whereas at high water, it is underwater (or immersed).
Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as among the different species. The most important interactions may vary according to the type of intertidal community. The broadest classifications are based on substrates — rocky shore or soft bottom. Biological aspects: Intertidal organisms experience a highly variable and often hostile environment, and have adapted to cope with and even exploit these conditions. One easily visible feature is vertical zonation, in which the community divides into distinct horizontal bands of specific species at each elevation above low water. A species' ability to cope with desiccation determines its upper limit, while competition with other species sets its lower limit. Biological aspects: Humans use intertidal regions for food and recreation. Overexploitation can damage intertidals directly. Other anthropogenic actions such as introducing invasive species and climate change have large negative effects. Marine Protected Areas are one option communities can apply to protect these areas and aid scientific research. Biological aspects: Biological rhythms The approximately 12-hour and fortnightly tidal cycle has large effects on intertidal and marine organisms. Hence their biological rhythms tend to occur in rough multiples of these periods. Many other animals such as the vertebrates, display similar circatidal rhythms. Examples include gestation and egg hatching. In humans, the menstrual cycle lasts roughly a lunar month, an even multiple of the tidal period. Such parallels at least hint at the common descent of all animals from a marine ancestor. Other tides: When oscillating tidal currents in the stratified ocean flow over uneven bottom topography, they generate internal waves with tidal frequencies. Such waves are called internal tides. Other tides: Shallow areas in otherwise open water can experience rotary tidal currents, flowing in directions that continually change and thus the flow direction (not the flow) completes a full rotation in 12+1⁄2 hours (for example, the Nantucket Shoals).In addition to oceanic tides, large lakes can experience small tides and even planets can experience atmospheric tides and Earth tides. These are continuum mechanical phenomena. The first two take place in fluids. The third affects the Earth's thin solid crust surrounding its semi-liquid interior (with various modifications). Other tides: Lake tides Large lakes such as Superior and Erie can experience tides of 1 to 4 cm (0.39 to 1.6 in), but these can be masked by meteorologically induced phenomena such as seiche. The tide in Lake Michigan is described as 1.3 to 3.8 cm (0.5 to 1.5 in) or 4.4 cm (1+3⁄4 in). This is so small that other larger effects completely mask any tide, and as such these lakes are considered non-tidal. Other tides: Atmospheric tides Atmospheric tides are negligible at ground level and aviation altitudes, masked by weather's much more important effects. Atmospheric tides are both gravitational and thermal in origin and are the dominant dynamics from about 80 to 120 kilometres (50 to 75 mi), above which the molecular density becomes too low to support fluid behavior. Other tides: Earth tides Earth tides or terrestrial tides affect the entire Earth's mass, which acts similarly to a liquid gyroscope with a very thin crust. The Earth's crust shifts (in/out, east/west, north/south) in response to lunar and solar gravitation, ocean tides, and atmospheric loading. 
While negligible for most human activities, terrestrial tides' semi-diurnal amplitude can reach about 55 centimetres (22 in) at the Equator—15 centimetres (5.9 in) due to the Sun—which is important in GPS calibration and VLBI measurements. Precise astronomical angular measurements require knowledge of the Earth's rotation rate and polar motion, both of which are influenced by Earth tides. The semi-diurnal M2 Earth tides are nearly in phase with the Moon with a lag of about two hours. Other tides: Galactic tides Galactic tides are the tidal forces exerted by galaxies on stars within them and satellite galaxies orbiting them. The galactic tide's effects on the Solar System's Oort cloud are believed to cause 90 percent of long-period comets. Misnomers: Tsunamis, the large waves that occur after earthquakes, are sometimes called tidal waves, but this name is given by their resemblance to the tide, rather than any causal link to the tide. Other phenomena unrelated to tides but using the word tide are rip tide, storm tide, hurricane tide, and black or red tides. Many of these usages are historic and refer to the earlier meaning of tide as "a portion of time, a season" and "a stream, current or flood".
**PL-4** PL-4: PL-4 or POS-PHY Level 4 was the name of the interface on which SPI-4.2 is based. It was proposed by PMC-Sierra to the Optical Internetworking Forum. The name means Packet Over SONET Physical layer level 4. PL-4 was developed by PMC-Sierra in conjunction with the Saturn Development Group. Context: There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over one or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in Packet over SONET and the Optical Transport Network, which are the principal protocols used to carry the internet between cities. Applications: PL-4 was designed to be used in systems that support OC-192 SONET interfaces and is sometimes used in 10 Gigabit Ethernet-based systems. A typical application of PL-4 (SPI-4.2) is to connect a framer device to a network processor. It has been widely adopted by the high-speed networking marketplace. Technical details: The interface consists of (per direction): sixteen LVDS pairs for the data path, one LVDS pair for control, one LVDS pair for a clock at half of the data rate, two FIFO status lines running at 1/8 of the data rate, and one status clock. The clocking is source-synchronous and operates around 700 MHz. Implementations of SPI-4.2 (PL-4) have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets. Trivia: The name is an acronym of an acronym of an acronym, as the P in PL stands for POS-PHY and the S in POS-PHY stands for SONET (Synchronous Optical Network). History: PL-4 is a descendant of PL-3, which itself is a descendant of the ATM Forum UTOPIA family of standards. The UTOPIA standards were developed by the SATURN Development Group for use in ATM systems.
**INO80B (gene)** INO80B (gene): INO80 complex subunit B is a protein that in humans is encoded by the INO80B gene. Function: This gene encodes a subunit of an ATP-dependent chromatin remodeling complex, INO80, which plays a role in DNA and nucleosome-activated ATPase activity and ATP-dependent nucleosome sliding. Readthrough transcription of this gene into the neighboring downstream gene, which encodes WW domain-binding protein 1, generates a non-coding transcript.
**Prohead** Prohead: A prohead or procapsid is an immature viral capsid structure formed in the early stages of self-assembly of some bacteriophages, including the Caudovirales or tailed bacteriophages. Production and assembly of stable proheads is an essential precursor to bacteriophage genome packaging; this packaging activity can be replicated in vitro. The prohead structure may take a different shape from the head of a mature virion, as seen with the prohead of Bacillus subtilis phage φ29.
**Fruit and vegetable wash** Fruit and vegetable wash: A vegetable wash is a cleaning product designed to aid in the removal of dirt, wax and pesticides from fruit and vegetables before they are consumed. Contents and use: All fresh produce, even organic, can harbor residual pesticides, dirt or harmful microorganisms on the surface. A vegetable wash also removes germs, waxes and pesticides from fruits and vegetables. Vegetable washes may be either specially marketed commercial products or home recipes. Commercial vegetable washes generally contain surfactants, along with chelating agents, antioxidants, and other agents. Home recipes are generally dilutions of hydrogen peroxide or vinegar, the former of which may be dangerous at high concentrations. Effectiveness: Neither the U.S. Food and Drug Administration nor the United States Department of Agriculture recommends washing fruits and vegetables in anything other than cold water. To date there is little evidence that vegetable washes are effective at reducing the presence of harmful microorganisms, though their application in removing simple dirt and wax is not contested.
**AM-087** AM-087: AM-087 (part of the AM cannabinoid series) is an analgesic drug which acts as a cannabinoid agonist. It is a derivative of Δ8-THC, substituted on the 3-position side chain. AM-087 is a potent CB1 agonist with a Ki of 0.43 nM, making it around 100 times more potent than THC itself. This is most likely due to the bulky bromine substituent on the side chain.
**Buffer (application)** Buffer (application): Buffer is a software application for the web and mobile, designed to manage accounts in social networks by providing the means for a user to schedule posts to Twitter, Facebook, Mastodon, Instagram, Instagram Stories, Pinterest, and LinkedIn, as well as analyze their results and engage with their community. It is owned by the remote company Buffer Inc. Buffer (application): The application was designed by a group of European experts in San Francisco, most notably Joel Gascoigne and Leo Widrich. Gascoigne is currently the CEO of Buffer. By August 2021, the team had reached 85 people working remotely from 15 countries in different parts of the world, more than 4.5 million registered users and over $16 million in annual revenue. History: Buffer began its development in October 2010 in Birmingham, United Kingdom by co-founder Joel Gascoigne, who established the idea of the social media application while he was in the United Kingdom. Once he developed the idea he created a landing page to see if enough people were interested in the product to make it a profitable venture. After reaching a critical mass of registrations, Gascoigne built and designed the first version of the application software over a span of 7 weeks. On November 30, 2010, the initial version of Buffer was launched. It contained limited features which only allowed access to Twitter. Four days after the software's launch Buffer gained its first paying user. A few weeks after this, the number of users reached 100, and then that number multiplied to 100,000 users within the next 9 months. In July 2011, the cofounders decided to move the startup venture from the United Kingdom to San Francisco in the United States, and Buffer was incorporated. Whilst in San Francisco, the cofounders dealt with the San Franciscan startup incubator AngelPad. This was due to the increase in cost after moving from Birmingham. Throughout December 2011, cofounders Joel and Leo were able to secure 18 investors for their company, after being refused by 88% of the people they approached for investment. The investors include Maneesh Arora, the founder of MightyText, Thomas Korte, the founder of AngelPad, and Andy McLoughlin, the co-founder of the software company Huddle. Due to visa issues with the co-founders, the company's base shifted to Hong Kong in January 2012. Then in August 2012, following more visa issues, it migrated again to Tel Aviv, Israel. In October 2012, Joel Gascoigne reported that "1.5-2% of users are on the paid plan, so we’re currently on a $800,000 annual run rate". In May 2013, the company's base shifted back to the United States, after the co-founders’ visa predicaments were resolved. Around this time Buffer intentionally made its salary calculation algorithm public (along with the calculated salaries of its 13 employees; this number has since grown to exceed 80, almost all of whom opt in to the salary-publishing culture). Features: Free features Buffer allows users to schedule posts sent through the application to the user's social media accounts (three social accounts can be connected in the free version). This feature can schedule and send posts to Twitter, Instagram, Facebook, LinkedIn and Pinterest. There are various default time slots in the application, which are based on the times during the day when social media users are most active online. However, Buffer does allow its users to amend or remove the default time slots if they wish to do so.
The free version of the application allows a maximum limit of 10 posts to be scheduled at any given time, and only allows the management of one social media account per social media website. Buffer also contains features that give post suggestions to users, and gives information on the number of clicks, retweets, likes, favorites, mentions and potential views each post has, which is based on the number of feeds that single feed would show up on. The Buffer application is compatible with three different platforms: Browser: allows the application to work as downloaded extensions for three browsers: Google Chrome, Safari and Mozilla Firefox. Features: Mobile: allows the application to be installed on iOS systems and Android phones. Newsreader: allows the application to be integrated with various newsreader applications, such as Flipboard and Zite. Features: Paid features Buffer offers a paid plan, named Pro, which gives paying users access to additional features, such as the Feeds feature that adds an RSS feed to a user's Buffer profile, displaying suggested links from external websites chosen by the user. Additional features include analytics for the number of posts sent out and the number of active users over a span of time. The plan also allows an increased limit of 100 posts at any single time, and the option to manage 8 social account profiles. On August 6, 2019, Buffer announced a new feature for paid plans, the Hashtag Manager. The new feature allows paying members to create and save groups of hashtags directly within the Buffer composer. Features: Buffer for Business Buffer for Business is an extension of the Buffer application intended for managing social media accounts for businesses and corporations. It launched in 2013. It contains a similar interface to Buffer, but with additional features, including: more specified analytics that allow comparisons between different metrics (such as between retweets, clicks and posts), sorting of different data to the preference of the user, and accumulation of statistics; team collaboration features that contain approval features and admin privileges that give users assigned as managers authority over any other member assigned as a contributor, and that allow more members of a team to access and manage the accounts; and exporting options that allow data, statistics and analysis to be exported to any reports or documents. The costs of Buffer for Business differ depending on the scale and size of the potential user's business. The categories are: Small Business, which allows the management of 25 social media accounts and the access of 5 team members; Medium Business, which allows management of 50 social media accounts and the access of 10 team members; and Large Business or Agency, which allows management of 50 social media accounts and the access of 10 team members. Buffer for Business produced over 10% of the company's total revenue in December 2013. In 2014, the app was used by over 2,500 publishers and agencies. Organizations that use Buffer for Business include About.com, Fortune, and Business Insider.
The number of posts shared through the Buffer application crossed 87,790,000 posts and the number of accounts that were used through the application reached 1,266,722, with an average of 70 posts per account. By February 2014, the number of Buffer users reached 1.3 million. The organization's annual revenue reached $3,900,000, a 38.3% increase since December 2013. Acquisitions: In December 2015, Buffer acquired Respondly, a social media customer service and brand monitoring tool, which it has since rebranded as Reply. According to the terms of the contract, the cost of the acquisition was not released. Partnerships: Buffer is partnered with various other software applications and companies. Most notably, Buffer is an official Facebook Marketing Partner under Community Management. Additionally, Buffer has partnerships with WordPress, Twitter, Zapier, IFTTT, Feedly, Pocket, Shopify, Reeder, and Quuu. Security: In October 2013, Buffer's system was hacked, allowing the hackers to get access to many users’ accounts. This resulted in the hackers posting spam posts through many of the users' social media accounts. On October 26, 2013, Buffer was temporarily suspended as a result of the hacking. Co-founder Joel Gascoigne sent an email to all users, apologizing for the issue and advising Buffer users about what steps they should take. Buffer was then unsuspended within the same week. Related products and services: Daily, launched in May 2014, was an iOS app developed by Buffer that helped users manage their social media accounts on Twitter, Facebook, LinkedIn and Google+. In the app, a user could accept and share, or dismiss, suggested links and headlines through Tinder-style swiping gestures. In March 2015, Buffer launched Pablo, a social media image creation tool. Its aim is to create engaging pictures for social media within 30 seconds.
**Ion beam analysis** Ion beam analysis: Ion beam analysis (IBA) is an important family of modern analytical techniques involving the use of MeV ion beams to probe the composition and obtain elemental depth profiles in the near-surface layer of solids. All IBA methods are highly sensitive and allow the detection of elements in the sub-monolayer range. The depth resolution is typically in the range of a few nanometers to a few tens of nanometers. Atomic depth resolution can be achieved, but requires special equipment. The analyzed depth ranges from a few tens of nanometers to a few tens of micrometers. IBA methods are always quantitative with an accuracy of a few percent. Channeling allows determination of the depth profile of damage in single crystals. Ion beam analysis: RBS: Rutherford backscattering is sensitive to heavy elements in a light matrix. EBS: Elastic (non-Rutherford) backscattering spectrometry can be sensitive even to light elements in a heavy matrix. The term EBS is used when the incident particle is going so fast that it exceeds the "Coulomb barrier" of the target nucleus, which therefore cannot be treated by Rutherford's approximation of a point charge. In this case Schrödinger's equation should be solved to obtain the scattering cross-section (see http://www-nds.iaea.org/sigmacalc/). Ion beam analysis: ERD: Elastic recoil detection is sensitive to light elements in a heavy matrix. PIXE: Particle-induced X-ray emission gives the trace and minor elemental composition. NRA: Nuclear reaction analysis is sensitive to particular isotopes. Channelling: The fast ion beam can be aligned accurately with major axes of single crystals; then the strings of atoms "shadow" each other and the backscattering yield falls dramatically. Any atoms off their lattice sites will give visible extra scattering. Thus damage to the crystal is visible, and point defects (interstitials) can even be distinguished from dislocations. The quantitative evaluation of IBA methods requires the use of specialized simulation and data analysis software. SIMNRA and DataFurnace are popular programs for the analysis of RBS, ERD and NRA, while GUPIX is popular for PIXE. A review of IBA software was followed by an intercomparison of several codes dedicated to RBS, ERD and NRA, organized by the International Atomic Energy Agency. IBA is an area of active research. The last major Nuclear Microbeam conference in Debrecen (Hungary) was published in NIMB 267(12-13). Overview: Ion beam analysis works on the basis that ion-atom interactions are produced by the introduction of ions to the sample being tested. Major interactions result in the emission of products that enable information regarding the number, type, distribution and structural arrangement of atoms to be collected. To use these interactions to determine sample composition, a technique must be selected, along with irradiation conditions and a detection system, that will best isolate the radiation of interest, providing the desired sensitivity and detection limits. Overview: The basic layout of an ion beam apparatus is an accelerator which produces an ion beam that is fed through an evacuated beam-transport tube to a beam handling device. This device isolates the ion species and charge of interest, which are then transported through an evacuated beam-transport tube into the target chamber. This chamber is where the refined ion beam comes into contact with the sample and where the resulting interactions can be observed.
The configuration of the ion beam apparatus can be changed and made more complex with the incorporation of additional components. The techniques for ion beam analysis are designed for specific purposes. Some techniques and ion sources are shown in table 1. Detector types and arrangements for ion beam techniques are shown in table 2. Applications: Ion beam analysis has found use in a number of variable applications, ranging from biomedical uses to studying ancient artifacts. The popularity of this technique stems from the sensitive data that can be collected without significant distortion to the system on which it is studying. The unparalleled success found in using ion beam analysis has been virtually unchallenged over the past thirty years until very recently with new developing technologies. Even then, the use of ion beam analysis has not faded, and more applications are being found that take advantage of its superior detection capabilities. In an era where older technologies can become obsolete at an instant, ion beam analysis has remained a mainstay and only appears to be growing as researchers are finding greater use for the technique. Applications: Biomedical elemental analysis Gold nanoparticles have been recently used as a basis for a count of atomic species, especially with studying the content of cancer cells. Ion beam analysis is a great way to count the amount of atomic species per cell. Scientists have found an effective way to make accurate quantitative data available by using ion beam analysis in conjunction with elastic backscattering spectrometry (EBS). The researchers of a gold nanoparticle study were able to find much greater success using ion beam analysis in comparison to other analytical techniques, such as PIXE or XRF. This success is due to the fact that the EBS signal can directly measure depth information using ion beam analysis, whereas this cannot be done with the other two methods. The unique properties of ion beam analysis make great use in a new line of cancer therapy. Applications: Cultural heritage studies Ion beam analysis also has a very unique application in the use of studying archaeological artifacts, also known as archaeometry. For the past three decades, this has been the much preferred method to study artifacts while preserving their content. What many have found useful in using this technique is its offering of excellent analytical performance and non-invasive character. More specifically, this technique offers unparalleled performance in terms of sensitivity and accuracy. Recently however, there have been competing sources for archaeometry purposes using X-ray based methods such as XRF. Nonetheless, the most preferred and accurate source is ion beam analysis, which is still unmatched in its analysis of light elements and chemical 3D imaging applications (i.e. artwork and archaeological artifacts). Applications: Forensic analysis A third application of ion beam analysis is in forensic studies, particularly with gunshot residue characterization. Current characterization is done based on heavy metals found in bullets, however, manufacturing changes are slowly making these analyses obsolete. The introduction of techniques such as ion beam analysis are believed to alleviate this issue. Researchers are currently studying the use of ion beam analysis in conjunction with a scanning electron microscope and an Energy Dispersive X-ray spectrometer (SEM-EDS). 
The hope is that this setup will detect the composition of new and old chemicals that older analyses could not efficiently detect in the past. The greater amount of analytical signal and the higher sensitivity of ion beam analysis give great promise to the field of forensic science. Applications: Lithium battery development The spatially resolved detection of light elements, for example lithium, remains challenging for most techniques based on the electronic shell of the target atoms, such as XRF or SEM-EDS. For lithium and lithium-ion batteries, the quantification of the lithium stoichiometry and its spatial distribution are important for understanding the mechanisms behind dis-/charging and aging. Through ion beam focussing and a combination of methods, ion beam analysis offers the unique possibility of measuring the local state of charge (SoC) on the µm-scale. Iterative IBA: Ion beam-based analytical techniques represent a powerful set of tools for non-destructive, standard-less, depth-resolved and highly accurate elemental composition analysis in the depth regime from several nm up to a few μm. By changing the type of incident ion, the geometry of the experiment, or the particle energy, or by acquiring different products originating from the ion-solid interaction, complementary information can be extracted. However, analysis is often challenged either in terms of mass resolution (when several comparably heavy elements are present in the sample) or in terms of sensitivity (when light species are present in heavy matrices). Hence, a combination of two or more ion beam-based techniques can overcome the limitations of each individual method and provide complementary information about the sample. Iterative IBA: An iterative and self-consistent analysis also enhances the accuracy of the information that can be obtained from each independent measurement. Software and simulation: Dating back to the 1960s, the data collected via ion beam analysis has been analyzed through a multitude of computer simulation programs. Researchers who frequently use ion beam analysis in conjunction with their work require that this software be accurate and appropriate for describing the analytical process they are observing. Applications of these software programs range from data analysis to theoretical simulations and modeling based on assumptions about the atomic data, mathematics and physics properties that detail the process in question. As the purpose and implementation of ion beam analysis have changed over the years, so have the software and codes used to model it. Such changes are detailed through the five classes by which the updated software is categorized. Software and simulation: Class-A This class includes all programs developed in the late 1960s and early 1970s. This class of software solved specific problems in the data; it did not provide the full potential to analyze a spectrum of a full general case. The prominent pioneering program was IBA, developed by Ziegler and Baglin in 1971. At the time, the computational models only tackled the analysis associated with the back-scattering techniques of ion beam analysis and performed calculations based on a slab analysis. A variety of other programs arose during this time, such as RBSFIT, though due to the lack of in-depth knowledge of ion beam analysis, it became increasingly hard to develop programs that were accurate. Software and simulation: Class-B A new wave of programs sought to solve this accuracy problem in this next class of software.
Developed during the 1980s, programs like SQEAKIE and BEAM EXPERT afforded an opportunity to solve the complete general case by employing codes to perform direct analysis. This direct approach unfolds the produced spectrum with no assumptions made about the sample. Instead, it calculates through separated spectrum signals and solves a set of linear equations for each layer. Problems still arise, though, and adjustments are made to reduce noise in the measurements and to allow room for uncertainty. Software and simulation: Class-C In a trip back to square one, this third class of programs, created in the 1990s, takes a few principles from Class A in accounting for the general case, though now through the use of indirect methods. RUMP and SENRAS, for example, use an assumed model of the sample and simulate comparative theoretical spectra, which affords such properties as fine structure retention and uncertainty calculations. In addition to the improvement in software analysis tools came the ability to analyze techniques other than back-scattering, i.e. ERDA and NRA. Software and simulation: Class-D Exiting the Class C era and into the early 2000s, software and simulation programs for ion beam analysis were tackling a variety of data collection techniques and data analysis problems. Following along with the world's technological advancements, adjustments were made to enhance the programs with more generalized codes, spectrum evaluation, and structural determination. Programs like SIMNRA now account for the more complex interactions between the beam and sample, and also provide a known database of scattering data. Software and simulation: Class-E This most recently developed class, having similar characteristics to the previous one, makes use of the principles of Monte Carlo computational techniques. This class applies molecular dynamics calculations that are able to analyze both low- and high-energy physical interactions taking place during ion beam analysis. A key and popular feature that accompanies such techniques is the possibility for the computations to be incorporated in real time with the ion beam analysis experiment itself.
**Female queen (drag)** Female queen (drag): A female queen, diva queen, or hyper queen is a drag queen who is a woman. These performers are generally indistinguishable from the more common male drag queens in artistic style and techniques. Terminology: Other terms still used both by performers and in the media are considered offensive. The term "faux queen" is rejected and considered outdated by many drag artists for implying that female drag queens are not as "real" as cisgender male drag queens, and the term "female queen" is considered by many performers to be transphobic as they imply that a transgender woman who performs as a drag queen is not female. Other descriptions include "biologically challenged" drag queen, "female female impersonator", or "female impersonator impersonator." Concept: Like all drag performers, female drag queens play with traditional gender roles and gender norms to educate and entertain. Female queens can appear alongside female drag kings, male drag kings or male drag queens at drag shows and are interchangeable with other drag queens as emcees, performers, hostesses, and spokesmodels.For some it can be a way to redefine postmodern feminism; female drag queen Ms. Lucia Love stated, "Drag queens would be nowhere without women." For others it simply is about dressing up and having fun.In San Francisco, the first ever "faux queen" pageant was produced as a benefit for the drag performer Diet Popstitute. The first title-holder was Coca Dietetica, a.k.a. Laurie Bushman. The Klubstitute Kollective was formed after Diet Popstitute's 1995 death to continue to raise funds and provide a space for the performers who, at the time, were not always welcome in typical drag venues. Pageant organizer Ruby Toosday had "friends who got fired (from drag clubs) for being women...it seemed like we had definitely hit a nerve. Contestants were judged on drag, talent, and personality by a panel of judges and the winner helped "femcee" the following year. The pageants were held from 1996 to 2005. The Faux Queen Pageant was resurrected in 2012 by former title holder Bea Dazzler, and will continue to be a yearly competition in San Francisco. Concept: The dancer and performance artist Fauxnique (Monique Jenkinson) became the first cisgender female drag queen to win a major drag pageant—competing against cis male or trans female drag queens—when she was crowned Miss Trannyshack 2003. From Bust Magazine: "'(drag) comes down to a sort of self-awareness, a self-consciousness about playing around with femininity,' says Fauxnique. She adds that while drag for her is primarily about performance, it's also a 'rejection of traditional oppressive forms of masculinity—and that's part of an affinity with gay men as well. I wouldn't say every faux queen is a feminist, but I would say that a part of them is in some way.'"In the 1970s and 1980s, German-born Brazilian cisgender female queen Elke Maravilha became a popular TV personality after participating as a judge in the Chacrinha and Silvio Santos talent shows. According to her, "many people think I am a transvestite. When they ask me this, I jokingly reply that I'm a man indeed. And of the most gifted ones".The comedy films Connie and Carla and Victor/Victoria both center on cisgender female drag queens, but the main characters of both films are women who are forced by circumstance to work as drag queens. They keep their gender a secret and impersonate men when off-stage, unlike their real-life counterparts. 
Concept: The 2020 US reality TV vogue competition Legendary was the first US reality television show to include cisgender women performing and competing as drag queens, including the all-female team representing House of Ninja. The reality competition Dragula featured two performers who were AFAB (assigned female at birth) in their third season, but winner Landon Cider performs as a drag king and contestant Hollow Eve identifies as a non-binary drag artist, not specifically a drag queen. Female drag queen Sigourney Beaver competed in season 4, being one of the four finalists. In 2021, the third season of RuPaul's Drag Race UK was the first season in the franchise to feature a queen who was both assigned female at birth and identified as female, Victoria Scone. Controversy: Female drag queens are not always permitted or welcomed within drag spaces, which are typically owned and run by cisgender gay men. RuPaul, the producer and host of the reality TV competition RuPaul's Drag Race, originally banned female artists from his shows, stating "Drag loses its sense of danger and its sense of irony once it's not men doing it, because at its core it's a social statement and a big f-you to male-dominated culture. So for men to do it, it's really punk rock, because it's a real rejection of masculinity." After significant backlash, RuPaul amended this response in 2019 to state "I've learnt to never say never."There are widely-held beliefs within the community that female drag queens do not face the same challenges as cisgender male drag queens, and do not need to use padding, makeup, or tucking to appear feminine. Female-bodied artists typically counter that they use the same makeup techniques to create exaggerated femininity, and many do use padding and corsets to create an extreme body shape. Female queens on Instagram often mock this belief that they do not "transform" their bodies by sharing strikingly different images of themselves in and out of drag with the hashtag #wheresthetransformationsis, started by female queen Creme Fatale.The rejection of female drag queens is often closely linked to other reports of discrimination and objectification that women, transgender men, and nonbinary people assigned female at birth face within LGBT spaces. These artists frequently report groping and harassment from cisgender gay men in gay bars and performance spaces, and report less pay and less tips from audiences. Artists also have complained about drag terminology that they state is exclusionary or offensive; non-binary artist Hollow Eve sparked a significant debate in 2019 when an episode of Dragula aired where they spoke out against the term "fishy," used to mean a drag queen who looks like a cisgender woman and referring negatively to the smell of a vulva.The rising prominence of female drag queens and increased dialogue around inclusivity has resulted in many drag artists rejecting any distinction based on their gender and calling for drag competitions to remove all gender and assigned sex requirements for contestants.However, the first cis female Drag Queen Victoria Scone competed on RuPaul's Drag Race UK Season 3, and Canada's Drag Race vs The World, where she made runner-up. The second ever cis female Drag Queen to compete of a Drag Race franchise is Clover Bish, who competed on Drag Race Espana.
**PF-514273** PF-514273: PF-514273 is a drug developed by Pfizer, which acts as an extremely selective antagonist for the CB1 receptor, with approximately 10,000x selectivity over the closely related CB2 receptor. This very high selectivity makes it useful for scientific research into these receptors, as many commonly used cannabinoid receptor antagonists also block the CB2 receptor to some extent.
**Vanadium redox battery** Vanadium redox battery: The vanadium redox battery (VRB), also known as the vanadium flow battery (VFB) or vanadium redox flow battery (VRFB), is a type of rechargeable flow battery. It employs vanadium ions as charge carriers. The battery uses vanadium's ability to exist in solution in four different oxidation states to make a battery with a single electroactive element instead of two. For several reasons, including their relative bulkiness, vanadium batteries are typically used for grid energy storage, i.e., attached to power plants/electrical grids. Numerous companies and organizations are involved in funding and developing vanadium redox batteries. History: Pissoort mentioned the possibility of VRFBs in the 1930s. NASA researchers and Pellegri and Spaziante followed suit in the 1970s, but neither was successful. Maria Skyllas-Kazacos presented the first successful demonstration of an all-vanadium redox flow battery employing dissolved vanadium in a solution of sulfuric acid in the 1980s. Her design used sulfuric acid electrolytes, and was patented by the University of New South Wales in Australia in 1986. One of the important breakthroughs achieved by Skyllas-Kazacos and coworkers was the development of a number of processes to produce vanadium electrolytes of over 1.5 M concentration using the lower-cost, but insoluble, vanadium pentoxide as starting material. These processes involved chemical and electrochemical dissolution and were patented by the University of NSW in 1989. During the 1990s the UNSW group conducted extensive research on membrane selection, graphite felt activation, conducting plastic bipolar electrode fabrication, electrolyte characterisation and optimisation, as well as modelling and simulation. Several 1–5 kW VFB prototype batteries were assembled and field tested in a solar house in Thailand and in an electric golf cart at UNSW. The UNSW all-vanadium redox flow battery patents and technology were licensed to Mitsubishi Chemical Corporation and Kashima-Kita Electric Power Corporation in the mid-1990s and subsequently acquired by Sumitomo Electric Industries, where extensive field testing was conducted in a wide range of applications in the late 1990s and early 2000s. In order to extend the operating temperature range of the battery and prevent precipitation of vanadium in the electrolyte at temperatures above 40 °C in the case of V(V), or below 10 °C in the case of the negative half-cell solution, Skyllas-Kazacos and coworkers tested hundreds of organic and inorganic additives as potential precipitation inhibitors. They discovered that inorganic phosphate and ammonium compounds were effective in inhibiting precipitation of 2 M vanadium solutions in both the negative and positive half-cells at temperatures of 5 and 45 °C respectively, and ammonium phosphate was selected as the most effective stabilising agent. Ammonium and phosphate additives were used to prepare and test a 3 M vanadium electrolyte in a flow cell with excellent results.
Advantages and disadvantages: Advantages VRFBs' main advantages over other types of battery are:
- no limit on energy capacity
- they can remain discharged indefinitely without damage
- mixing the electrolytes causes no permanent damage
- a single charge state across the electrolytes avoids capacity degradation
- a safe, non-flammable aqueous electrolyte
- no noise or emissions
- battery modules can be added to meet demand
- a wide operating temperature range, including passive cooling
- long charge/discharge cycle lives: 15,000–20,000 cycles and 10–20 years
- low levelized cost (a few tens of cents), approaching the 2016 US$0.05 target stated by the US Department of Energy and the €0.05 target of the European Commission Strategic Energy Technology Plan.
Advantages and disadvantages: Disadvantages VRFBs' main disadvantages compared to other types of battery are:
- high and volatile prices of vanadium minerals (and hence of VRFB energy capacity)
- relatively poor round-trip efficiency (compared to lithium-ion batteries)
- the heavy weight of the aqueous electrolyte
- a relatively poor energy-to-volume ratio compared to standard storage batteries
- moving parts in the pumps that produce the flow of electrolyte solution
- the toxicity of vanadium(V) compounds.
Materials: A vanadium redox battery consists of an assembly of power cells in which two electrolytes are separated by a proton exchange membrane. The electrodes in a VRB cell are carbon based. The most common types are carbon felt, carbon paper, carbon cloth, graphite felt, and carbon nanotubes. Both electrolytes are vanadium-based. The electrolyte in the positive half-cells contains VO2^+ (V(V)) and VO^2+ (V(IV)) ions, while the electrolyte in the negative half-cells consists of V^3+ and V^2+ ions. The electrolytes can be prepared by several processes, including electrolytically dissolving vanadium pentoxide (V2O5) in sulfuric acid (H2SO4). The solution is strongly acidic in use. Materials: The most common membrane material is perfluorinated sulfonic acid (PFSA or Nafion). However, vanadium ions can penetrate a PFSA membrane and destabilize the cell. A 2021 study found that penetration is reduced with hybrid sheets made by growing tungsten trioxide nanoparticles on the surface of single-layered graphene oxide sheets. These hybrid sheets are then embedded into a sandwich-structured PFSA membrane reinforced with polytetrafluoroethylene (Teflon). The nanoparticles also promote proton transport, offering Coulombic efficiency and energy efficiency of more than 98.1 percent and 88.9 percent, respectively. Operation: The reaction uses the half-reactions VO2^+ + 2 H^+ + e^− → VO^2+ + H2O (E° = +1.00 V) at the positive electrode and V^3+ + e^− → V^2+ (E° = −0.26 V) at the negative electrode. Other useful properties of vanadium flow batteries are their fast response to changing loads and their overload capacities. They can achieve a response time of under half a millisecond for a 100% load change, and allow overloads of as much as 400% for 10 seconds. Response time is limited mostly by the electrical equipment. Unless specifically designed for colder or warmer climates, most sulfuric acid-based vanadium batteries work between about 10 and 40 °C. Below that temperature range, the ion-infused sulfuric acid crystallizes. Round-trip efficiency in practical applications is around 70–80%. Operation: Proposed improvements The original VRFB design by Skyllas-Kazacos employed sulfate (added as vanadium sulfate(s) and sulfuric acid) as the only anion in VRFB solutions, which limited the maximum vanadium concentration to 1.7 M.
In the 1990s, Skyllas-Kazacos discovered the use of ammonium phosphate and other inorganic compounds as precipitation inhibitors to stabilise 2 M vanadium solutions over a temperature range of 5 to 45 °C, and a stabilising agent patent was filed by UNSW in 1993. This discovery was largely overlooked, however, and around 2010 a team from Pacific Northwest National Laboratory proposed a mixed sulfate-chloride electrolyte, which allows VRFB solutions with a vanadium concentration of 2.5 M to be used over a temperature range of −20 to +50 °C. Based on the standard equilibrium potential of the V5+/V4+ couple, it would be expected to oxidize chloride, and for this reason chloride solutions were avoided in earlier VRFB studies. The surprising oxidative stability (albeit only at states of charge below ca. 80%) of V5+ solutions in the presence of chloride was explained on the basis of activity coefficients. Many researchers explain the increased stability of V(V) at elevated temperatures by the higher proton concentration in the mixed acid electrolyte, which shifts the thermal precipitation equilibrium of V(V) away from V2O5. Nevertheless, because of the high vapor pressure of HCl solutions and the possibility of chlorine generation during charging, such mixed electrolytes have not been widely adopted. Another variation is the use of vanadium bromide salts. Since the redox potential of the Br2/2Br− couple is more negative than that of V5+/V4+, the positive electrode operates via the bromine process. However, due to problems with the volatility and corrosivity of Br2, these did not gain much popularity (see zinc-bromine battery for a similar problem). A vanadium/cerium flow battery has also been proposed. Specific energy and energy density: VRBs achieve a specific energy of about 20 Wh/kg (72 kJ/kg) of electrolyte. Precipitation inhibitors can increase the density to about 35 Wh/kg (126 kJ/kg), with higher densities possible by controlling the electrolyte temperature. The specific energy is low compared to other rechargeable battery types (e.g., lead–acid, 30–40 Wh/kg (108–144 kJ/kg); and lithium ion, 80–200 Wh/kg (288–720 kJ/kg)). Applications: VRFBs' large potential capacity may be best suited to buffering the irregular output of utility-scale wind and solar systems. Their reduced self-discharge makes them potentially appropriate in applications that require long-term energy storage with little maintenance, as in military equipment such as the sensor components of the GATOR mine system. They feature rapid response times well suited to uninterruptible power supply (UPS) applications, where they can replace lead–acid batteries or diesel generators. Fast response time is also beneficial for frequency regulation. These capabilities make VRFBs an effective "all-in-one" solution for microgrids, frequency regulation and load shifting.
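The roughly 20 Wh/kg figure quoted above can be sanity-checked with simple electrochemical arithmetic. The sketch below is an added illustration: the 1.7 M vanadium concentration and the 1.35 kg/L electrolyte density are assumed round numbers, not values taken from the article, and the cell voltage is the difference of the two standard half-reaction potentials given earlier.

```python
# Rough, illustrative estimate of VRFB specific energy from the half-reaction
# potentials; the concentration and density below are assumptions, not measured data.
F = 96485.0                     # Faraday constant, C/mol
e_cell = 1.00 - (-0.26)         # standard cell voltage, V
c_vanadium = 1.7                # mol vanadium per litre of electrolyte (assumed)
density = 1.35                  # kg per litre of electrolyte (assumed)

# One electron per vanadium ion; both half-cell tanks must be counted,
# so the usable charge per litre of total electrolyte is c * F / 2.
charge_per_litre = c_vanadium * F / 2            # coulombs
energy_per_litre = charge_per_litre * e_cell     # joules
specific_energy = energy_per_litre / 3600 / density   # Wh/kg

print(f"theoretical specific energy ~ {specific_energy:.0f} Wh/kg")
```

With these assumptions the estimate comes out near 20 Wh/kg, consistent with the figure quoted above, and it scales roughly linearly with the vanadium concentration that precipitation inhibitors make achievable.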
Companies funding or developing vanadium redox batteries: Companies funding or developing vanadium redox batteries include Sumitomo Electric Industries, CellCube (Enerox), UniEnergy Technologies, StorEn Technologies in Australia, Largo Energy and Ashlawn Energy in the United States; H2 in Gyeryong-si, South Korea; Renewable Energy Dynamics Technology, Invinity Energy Systems in the United Kingdom, VoltStorage and Schmalz in Europe; Prudent Energy in China; Australian Vanadium, CellCube and North Harbour Clean Energy in Australia; Yadlamalka Energy Trust and Invinity Energy Systems in Australia; EverFlow Energy JV SABIC SCHMID Group in Saudi Arabia and Bushveld Minerals in South Africa.
**Colors (motorcycling)** Colors (motorcycling): Colors are the insignia, or "patches", worn by motorcycle club members on cut-off vests to identify membership of their club and territorial location. Club patches have been worn by many different groups since the 1960s. They are regarded by many to symbolize an elite amongst motorcyclists and the style has been widely copied by other subcultures and commercialized.Colors are considered to represent "significant markers of the socialization" of new members to clubs, rank and present a dominant symbol of identity and are marked with related symbolism. They can be embroidered patches sewn onto clothing or stenciled in paint, the primary symbol being the back patch of the club's insignia or logo and generally remain the property of the club. Wearing such clothing is referred to as "flying one's colors". The term has its roots in military history, originating with regimental colours. Meaning: Colors identify the rank of members within clubs from new members, to "prospects" to full members known as "patch-holders", and usually consist of a top and bottom circumferential badge called a rocker, due to the curved shape, with the top rocker stating the club name, the bottom rocker stating the location or territory, and a central logo of the club's insignia, with a fourth, smaller badge carrying the initials "MC" standing for "motorcycle club". Female clubs spell out “motorcycle club” on their vests. Meaning: The badges are used to create a social bond and boundaries and, generally, belong to the clubs involved rather than the individual wearing them. Although, bikers perform community service and give away thousands of dollars to charity, the wearing of them can often lead individuals to be refused service at related businesses and bars, and some biker bars have a "no colors" policy, to reduce conflict. Claiming STATE territory by wearing a bottom STATE rocker can lead to violent conflict with a rival club, such as in the 2015 Waco shootout, which was partially caused by a club wearing a "Texas" bottom rocker.Many motorcyclists wearing colors are from "family oriented" motorcycling clubs chartered by the American Motorcyclist Association and wear one-piece patches to differentiate themselves from the three piece patches of 99% & outlaw bikers. These generally do not state a territorial location and can be any format other than a three piece patch. Coed clubs also break up the M and C to denote a difference. Cubes also denote a traditional MC. The motorcycle manufacturer Harley-Davidson notably adopted the style in its branding and community-building effort, the Harley Owners Group. Law and order colors and/or insignia: As with outlaw motorcycle clubs visual identification of a member of a club is indicated by a specific large club patch or set of patches usually located in the middle of the back of a vest or jacket. The patches may contain a club logo, the name of the club and other chapter identification. Law and order colors and/or insignia: In most motorcycle clubs the patch representing membership in the organization is often referred to as "the club colors" or simply "the colors". Each club has rules on how the colors are treated and when it is proper to wear them. Well structured clubs have bylaws dictating the behavior of its members and thus the proper use of their colors. Tattoos: Tattoos may also come under the category of club colors.
**Brown clustering** Brown clustering: Brown clustering is a hard hierarchical agglomerative clustering problem based on distributional information, proposed by Peter Brown, William A. Brown, Vincent Della Pietra, Peter V. de Souza, Jennifer Lai, and Robert Mercer. The method, which is based on bigram language models, is typically applied to text, grouping words into clusters that are assumed to be semantically related by virtue of their having been embedded in similar contexts. Introduction: In natural language processing, Brown clustering or IBM clustering is a form of hierarchical clustering of words based on the contexts in which they occur, proposed by Peter Brown, William A. Brown, Vincent Della Pietra, Peter de Souza, Jennifer Lai, and Robert Mercer of IBM in the context of language modeling. The intuition behind the method is that a class-based language model (also called a cluster n-gram model), i.e. one where probabilities of words are based on the classes (clusters) of previous words, is used to address the data sparsity problem inherent in language modeling. The method has been successfully used to improve parsing, domain adaptation, and named entity recognition. Jurafsky and Martin give the example of a flight reservation system that needs to estimate the likelihood of the bigram "to Shanghai", without having seen this in a training set. The system can obtain a good estimate if it can cluster "Shanghai" with other city names, then make its estimate based on the likelihood of phrases such as "to London", "to Beijing" and "to Denver". Technical definition: Brown groups items (i.e., types) into classes, using a binary merging criterion based on the log-probability of a text under a class-based language model, i.e. a probability model that takes the clustering into account. Thus, average mutual information (AMI) is the optimization function, and merges are chosen such that they incur the least loss in global mutual information. As a result, the output can be thought of not only as a binary tree but perhaps more helpfully as a sequence of merges, terminating with one big class of all words. This model has the same general form as a hidden Markov model, reduced to bigram probabilities in Brown's solution to the problem. MI is defined as MI = Σ_{ci,cj} Pr(⟨ci, cj⟩) log [ Pr(⟨ci, cj⟩) / ( Pr(⟨ci, ∗⟩) Pr(⟨∗, cj⟩) ) ], where Pr(⟨ci, cj⟩) is the probability that a word in class ci is immediately followed by a word in class cj, and Pr(⟨ci, ∗⟩) and Pr(⟨∗, cj⟩) are the corresponding left and right marginals. Finding the clustering that maximizes the likelihood of the data is computationally expensive. The approach proposed by Brown et al. is a greedy heuristic. Technical definition: The work also suggests use of Brown clusterings as a simplistic bigram class-based language model. Given cluster membership indicators ci for the tokens wi in a text, the probability of the word instance wi given the preceding word wi−1 is given by Pr(wi | wi−1) = Pr(ci | ci−1) Pr(wi | ci). This has been criticised as being of limited utility, as it only ever predicts the most common word in any class, and so is restricted to |c| word types; this is reflected in the low relative reduction in perplexity found when using this model. Technical definition: When applied to Twitter data, for example, Brown clustering assigned a binary tree path to each word in unlabelled tweets during clustering. The prefixes to these paths are used as new features for the tagger. Variations: Brown clustering has also been explored using trigrams. Brown clustering as proposed generates a fixed number of output classes. It is important to choose the correct number of classes, which is task-dependent.
The cluster memberships of words resulting from Brown clustering can be used as features in a variety of machine-learned natural language processing tasks. A generalization of the algorithm was published at the AAAI conference in 2016, including a succinct formal definition of the 1992 version and then also the general form. Core to this is the concept that the classes considered for merging do not necessarily represent the final number of classes output, and that altering the number of classes considered for merging directly affects the speed and quality of the final result. Variations: There are no known theoretical guarantees on the greedy heuristic proposed by Brown et al. (as of February 2018). However, the clustering problem can be framed as estimating the parameters of the underlying class-based language model: it is possible to develop a consistent estimator for this model under mild assumptions.
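To make the merging criterion concrete, the following is a minimal toy sketch of the greedy idea, added here for illustration. It recomputes the average mutual information from scratch for every candidate merge, ignoring the incremental-update and restricted-merge tricks that make the 1992 algorithm practical, and the tiny corpus is invented.

```python
# Toy greedy Brown-style clustering: repeatedly merge the pair of classes
# whose merge loses the least average mutual information (AMI).
from collections import Counter
from itertools import combinations
from math import log

corpus = "the cat sat on the mat the dog sat on the rug".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # adjacent word pairs
total = sum(bigrams.values())

# Start with one class per word type.
clusters = {w: frozenset([w]) for w in set(corpus)}

def ami(word_to_class):
    """Average mutual information between the classes of adjacent words."""
    joint, left, right = Counter(), Counter(), Counter()
    for (w1, w2), n in bigrams.items():
        c1, c2 = word_to_class[w1], word_to_class[w2]
        p = n / total
        joint[(c1, c2)] += p
        left[c1] += p
        right[c2] += p
    return sum(p * log(p / (left[c1] * right[c2])) for (c1, c2), p in joint.items())

def merged(word_to_class, a, b):
    union = a | b
    return {w: union if c in (a, b) else c for w, c in word_to_class.items()}

k = 3   # desired number of classes
while len(set(clusters.values())) > k:
    # Choose the merge that keeps AMI highest, i.e. loses the least information.
    clusters = max((merged(clusters, a, b)
                    for a, b in combinations(set(clusters.values()), 2)), key=ami)

for cls in set(clusters.values()):
    print(sorted(cls))
```

Recording the sequence of merges, rather than only the final partition, is what yields the binary tree whose path prefixes are used as features in taggers, as described above.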
**UK Rapid Test Consortium** UK Rapid Test Consortium: The UK Rapid Test Consortium (UK-RTC) is a United Kingdom industry consortium created to produce a lateral flow rapid test for COVID-19. Rapid tests are a COVID-19 testing technology developed with significant investment from the United Kingdom government, intended to offer advantages over existing forms of testing such as PCR. Its members include Abingdon Health, BBI Solutions, CIGA Healthcare, Omega Diagnostics, and Oxford University. In 2020, the consortium developed the AbC-19 rapid antibody test to meet UK government requirements. The government ordered 1 million of the UK-RTC's rapid tests in October 2020. CIGA Healthcare was made responsible for assembly and distribution, and was awarded distribution to the United States in November 2020 after approval was given by the FDA.
**Coherent sheaf** Coherent sheaf: In mathematics, especially in algebraic geometry and the theory of complex manifolds, coherent sheaves are a class of sheaves closely linked to the geometric properties of the underlying space. The definition of coherent sheaves is made with reference to a sheaf of rings that codifies this geometric information. Coherent sheaves can be seen as a generalization of vector bundles. Unlike vector bundles, they form an abelian category, and so they are closed under operations such as taking kernels, images, and cokernels. The quasi-coherent sheaves are a generalization of coherent sheaves and include the locally free sheaves of infinite rank. Coherent sheaf cohomology is a powerful technique, in particular for studying the sections of a given coherent sheaf. Definitions: A quasi-coherent sheaf on a ringed space (X,OX) is a sheaf F of OX -modules which has a local presentation, that is, every point in X has an open neighborhood U in which there is an exact sequence OX⊕I|U→OX⊕J|U→F|U→0 for some (possibly infinite) sets I and J A coherent sheaf on a ringed space (X,OX) is a sheaf F satisfying the following two properties: F is of finite type over OX , that is, every point in X has an open neighborhood U in X such that there is a surjective morphism OXn|U→F|U for some natural number n for any open set U⊆X , any natural number n , and any morphism φ:OXn|U→F|U of OX -modules, the kernel of φ is of finite type.Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of OX -modules. Definitions: The case of schemes When X is a scheme, the general definitions above are equivalent to more explicit ones. A sheaf F of OX -modules is quasi-coherent if and only if over each open affine subscheme Spec ⁡A the restriction F|U is isomorphic to the sheaf M~ associated to the module M=Γ(U,F) over A . When X is a locally Noetherian scheme, F is coherent if and only if it is quasi-coherent and the modules M above can be taken to be finitely generated. Definitions: On an affine scheme Spec ⁡A , there is an equivalence of categories from A -modules to quasi-coherent sheaves, taking a module M to the associated sheaf M~ . The inverse equivalence takes a quasi-coherent sheaf F on U to the A -module F(U) of global sections of F Here are several further characterizations of quasi-coherent sheaves on a scheme. Properties: On an arbitrary ringed space quasi-coherent sheaves do not necessarily form an abelian category. On the other hand, the quasi-coherent sheaves on any scheme form an abelian category, and they are extremely useful in that context.On any ringed space X , the coherent sheaves form an abelian category, a full subcategory of the category of OX -modules. (Analogously, the category of coherent modules over any ring A is a full abelian subcategory of the category of all A -modules.) So the kernel, image, and cokernel of any map of coherent sheaves are coherent. The direct sum of two coherent sheaves is coherent; more generally, an OX -module that is an extension of two coherent sheaves is coherent.A submodule of a coherent sheaf is coherent if it is of finite type. A coherent sheaf is always an OX -module of finite presentation, meaning that each point x in X has an open neighborhood U such that the restriction F|U of F to U is isomorphic to the cokernel of a morphism OXn|U→OXm|U for some natural numbers n and m . If OX is coherent, then, conversely, every sheaf of finite presentation over OX is coherent. 
Properties: The sheaf of rings OX is called coherent if it is coherent considered as a sheaf of modules over itself. In particular, the Oka coherence theorem states that the sheaf of holomorphic functions on a complex analytic space X is a coherent sheaf of rings. The main part of the proof is the case X=Cn . Likewise, on a locally Noetherian scheme X , the structure sheaf OX is a coherent sheaf of rings. Basic constructions of coherent sheaves: An OX -module F on a ringed space X is called locally free of finite rank, or a vector bundle, if every point in X has an open neighborhood U such that the restriction F|U is isomorphic to a finite direct sum of copies of OX|U . If F is free of the same rank n near every point of X , then the vector bundle F is said to be of rank n .Vector bundles in this sheaf-theoretic sense over a scheme X are equivalent to vector bundles defined in a more geometric way, as a scheme E with a morphism π:E→X and with a covering of X by open sets Uα with given isomorphisms π−1(Uα)≅An×Uα over Uα such that the two isomorphisms over an intersection Uα∩Uβ differ by a linear automorphism. (The analogous equivalence also holds for complex analytic spaces.) For example, given a vector bundle E in this geometric sense, the corresponding sheaf F is defined by: over an open set U of X , the O(U) -module F(U) is the set of sections of the morphism π−1(U)→U . The sheaf-theoretic interpretation of vector bundles has the advantage that vector bundles (on a locally Noetherian scheme) are included in the abelian category of coherent sheaves.Locally free sheaves come equipped with the standard OX -module operations, but these give back locally free sheaves.Let Spec ⁡(R) , R a Noetherian ring. Then vector bundles on X are exactly the sheaves associated to finitely generated projective modules over R , or (equivalently) to finitely generated flat modules over R .Let Proj ⁡(R) , R a Noetherian N -graded ring, be a projective scheme over a Noetherian ring R0 . Then each Z -graded R -module M determines a quasi-coherent sheaf F on X such that F|{f≠0} is the sheaf associated to the R[f−1]0 -module M[f−1]0 , where f is a homogeneous element of R of positive degree and Spec ⁡R[f−1]0 is the locus where f does not vanish.For example, for each integer n , let R(n) denote the graded R -module given by R(n)l=Rn+l . Then each R(n) determines the quasi-coherent sheaf OX(n) on X . If R is generated as R0 -algebra by R1 , then OX(n) is a line bundle (invertible sheaf) on X and OX(n) is the n -th tensor power of OX(1) . In particular, OPn(−1) is called the tautological line bundle on the projective n -space.A simple example of a coherent sheaf on P2 which is not a vector bundle is given by the cokernel in the following sequence O(1)→⋅(x2−yz,y3+xy2−xyz)O(3)⊕O(4)→E→0 this is because E restricted to the vanishing locus of the two polynomials has two-dimensional fibers, and has one-dimensional fibers elsewhere.Ideal sheaves: If Z is a closed subscheme of a locally Noetherian scheme X , the sheaf IZ/X of all regular functions vanishing on Z is coherent. Likewise, if Z is a closed analytic subspace of a complex analytic space X , the ideal sheaf IZ/X is coherent.The structure sheaf OZ of a closed subscheme Z of a locally Noetherian scheme X can be viewed as a coherent sheaf on X . To be precise, this is the direct image sheaf i∗OZ , where i:Z→X is the inclusion. Likewise for a closed analytic subspace of a complex analytic space. 
The sheaf i∗OZ has fiber (defined below) of dimension zero at points in the open set X−Z, and fiber of dimension 1 at points in Z. There is a short exact sequence of coherent sheaves on X: 0 → IZ/X → OX → i∗OZ → 0. Basic constructions of coherent sheaves: Most operations of linear algebra preserve coherent sheaves. In particular, for coherent sheaves F and G on a ringed space X, the tensor product sheaf F⊗OXG and the sheaf of homomorphisms HomOX(F,G) are coherent. A simple non-example of a quasi-coherent sheaf is given by the extension by zero functor. For example, consider i!OX for the open immersion i : X = Spec(C[x, x−1]) → Spec(C[x]) = Y. Since this sheaf has non-trivial stalks, but zero global sections, it cannot be a quasi-coherent sheaf. This is because quasi-coherent sheaves on an affine scheme are equivalent to the category of modules over the underlying ring, and the adjunction comes from taking global sections. Functoriality: Let f:X→Y be a morphism of ringed spaces (for example, a morphism of schemes). If F is a quasi-coherent sheaf on Y, then the inverse image OX-module (or pullback) f∗F is quasi-coherent on X. For a morphism of schemes f:X→Y and a coherent sheaf F on Y, the pullback f∗F is not coherent in full generality (for example, f∗OY=OX, which might not be coherent), but pullbacks of coherent sheaves are coherent if X is locally Noetherian. An important special case is the pullback of a vector bundle, which is a vector bundle. Functoriality: If f:X→Y is a quasi-compact quasi-separated morphism of schemes and F is a quasi-coherent sheaf on X, then the direct image sheaf (or pushforward) f∗F is quasi-coherent on Y. The direct image of a coherent sheaf is often not coherent. For example, for a field k, let X be the affine line over k, and consider the morphism f : X → Spec(k); then the direct image f∗OX is the sheaf on Spec(k) associated to the polynomial ring k[x], which is not coherent because k[x] has infinite dimension as a k-vector space. On the other hand, the direct image of a coherent sheaf under a proper morphism is coherent, by results of Grauert and Grothendieck. Local behavior of coherent sheaves: An important feature of coherent sheaves F is that the properties of F at a point x control the behavior of F in a neighborhood of x, more than would be true for an arbitrary sheaf. For example, Nakayama's lemma says (in geometric language) that if F is a coherent sheaf on a scheme X, then the fiber Fx⊗OX,x k(x) of F at a point x (a vector space over the residue field k(x)) is zero if and only if the sheaf F is zero on some open neighborhood of x. A related fact is that the dimension of the fibers of a coherent sheaf is upper-semicontinuous. Thus a coherent sheaf has constant rank on an open set, while the rank can jump up on a lower-dimensional closed subset.
Sections of this sheaf are called 1-forms on X over Y , and they can be written locally on X as finite sums ∑fjdgj for regular functions fj and gj . If X is locally of finite type over a field k , then ΩX/k1 is a coherent sheaf on X If X is smooth over k , then Ω1 (meaning ΩX/k1 ) is a vector bundle over X , called the cotangent bundle of X . Then the tangent bundle TX is defined to be the dual bundle (Ω1)∗ . For X smooth over k of dimension n everywhere, the tangent bundle has rank n If Y is a smooth closed subscheme of a smooth scheme X over k , then there is a short exact sequence of vector bundles on Y :0→TY→TX|Y→NY/X→0, which can be used as a definition of the normal bundle NY/X to Y in X For a smooth scheme X over a field k and a natural number i , the vector bundle Ωi of i-forms on X is defined as the i -th exterior power of the cotangent bundle, Ωi=ΛiΩ1 . For a smooth variety X of dimension n over k , the canonical bundle KX means the line bundle Ωn . Thus sections of the canonical bundle are algebro-geometric analogs of volume forms on X . For example, a section of the canonical bundle of affine space An over k can be written as f(x1,…,xn)dx1∧⋯∧dxn, where f is a polynomial with coefficients in k Let R be a commutative ring and n a natural number. For each integer j , there is an important example of a line bundle on projective space Pn over R , called O(j) . To define this, consider the morphism of R -schemes π:An+1−0→Pn given in coordinates by (x0,…,xn)↦[x0,…,xn] . (That is, thinking of projective space as the space of 1-dimensional linear subspaces of affine space, send a nonzero point in affine space to the line that it spans.) Then a section of O(j) over an open subset U of Pn is defined to be a regular function f on π−1(U) that is homogeneous of degree j , meaning that f(ax)=ajf(x) as regular functions on ( A1−0)×π−1(U) . For all integers i and j , there is an isomorphism O(i)⊗O(j)≅O(i+j) of line bundles on Pn In particular, every homogeneous polynomial in x0,…,xn of degree j over R can be viewed as a global section of O(j) over Pn . Note that every closed subscheme of projective space can be defined as the zero set of some collection of homogeneous polynomials, hence as the zero set of some sections of the line bundles O(j) . This contrasts with the simpler case of affine space, where a closed subscheme is simply the zero set of some collection of regular functions. The regular functions on projective space Pn over R are just the "constants" (the ring R ), and so it is essential to work with the line bundles O(j) Serre gave an algebraic description of all coherent sheaves on projective space, more subtle than what happens for affine space. Namely, let R be a Noetherian ring (for example, a field), and consider the polynomial ring S=R[x0,…,xn] as a graded ring with each xi having degree 1. Then every finitely generated graded S -module M has an associated coherent sheaf M~ on Pn over R . Every coherent sheaf on Pn arises in this way from a finitely generated graded S -module M . (For example, the line bundle O(j) is the sheaf associated to the S -module S with its grading lowered by j .) But the S -module M that yields a given coherent sheaf on Pn is not unique; it is only unique up to changing M by graded modules that are nonzero in only finitely many degrees. 
More precisely, the abelian category of coherent sheaves on Pn is the quotient of the category of finitely generated graded S-modules by the Serre subcategory of modules that are nonzero in only finitely many degrees. The tangent bundle of projective space Pn over a field k can be described in terms of the line bundle O(1). Namely, there is a short exact sequence, the Euler sequence: 0 → OPn → O(1)⊕(n+1) → TPn → 0. Examples of vector bundles: It follows that the canonical bundle KPn (the dual of the determinant line bundle of the tangent bundle) is isomorphic to O(−n−1). This is a fundamental calculation for algebraic geometry. For example, the fact that the canonical bundle is a negative multiple of the ample line bundle O(1) means that projective space is a Fano variety. Over the complex numbers, this means that projective space has a Kähler metric with positive Ricci curvature. Examples of vector bundles: Vector bundles on a hypersurface Consider a smooth degree-d hypersurface X⊂Pn defined by the homogeneous polynomial f of degree d. Then, there is an exact sequence 0→OX(−d)→i∗ΩPn→ΩX→0 where the second map is the pullback of differential forms, and the first map sends ϕ↦d(f⋅ϕ). Note that this sequence tells us that O(−d) is the conormal sheaf of X in Pn. Dualizing this yields the exact sequence 0→TX→i∗TPn→O(d)→0; hence O(d) is the normal bundle of X in Pn. If we use the fact that, given an exact sequence 0→E1→E2→E3→0 of vector bundles with ranks r1, r2, r3, there is an isomorphism Λr2E2≅Λr1E1⊗Λr3E3 of line bundles, then we see that there is the isomorphism i∗ωPn≅ωX⊗OX(−d), showing that ωX≅OX(d−n−1). Serre construction and vector bundles: One useful technique for constructing rank 2 vector bundles is the Serre construction, which establishes a correspondence between rank 2 vector bundles E on a smooth projective variety X and codimension 2 subvarieties Y using a certain Ext1-group calculated on X. This is given by a cohomological condition on the line bundle ∧2E (see below). Serre construction and vector bundles: The correspondence in one direction is given as follows: to a section s∈Γ(X,E) we can associate the vanishing locus V(s)⊂X. If V(s) is a codimension 2 subvariety, then it is a local complete intersection, meaning that if we take an affine chart Ui⊂X then s|Ui∈Γ(Ui,E) can be represented as a function si:Ui→A2, where si(p)=(si1(p),si2(p)) and V(s)∩Ui=V(si1,si2). The line bundle ωX⊗∧2E|V(s) is isomorphic to the canonical bundle ωV(s) on V(s). In the other direction, for a codimension 2 subvariety Y⊂X and a line bundle L→X such that H1(X,L)=H2(X,L)=0 and ωY≅(ωX⊗L)|Y, there is a canonical isomorphism Hom((ωX⊗L)|Y,ωY) ≅ Ext1(IY⊗L,OX), which is functorial with respect to inclusion of codimension 2 subvarieties. Moreover, any isomorphism given on the left corresponds to a locally free sheaf in the middle of the extension on the right. That is, for an element of Hom((ωX⊗L)|Y,ωY) which is an isomorphism, there is a corresponding locally free sheaf E of rank 2 which fits into a short exact sequence 0→OX→E→IY⊗L→0. This vector bundle can then be further studied using cohomological invariants to determine if it is stable or not. This forms the basis for studying moduli of stable vector bundles in many specific cases, such as on principally polarized abelian varieties and K3 surfaces. Chern classes and algebraic K-theory: A vector bundle E on a smooth variety X over a field has Chern classes in the Chow ring of X, ci(E) in CHi(X) for i≥0. These satisfy the same formal properties as Chern classes in topology.
For example, for any short exact sequence 0→A→B→C→0 of vector bundles on X , the Chern classes of B are given by ci(B)=ci(A)+c1(A)ci−1(C)+⋯+ci−1(A)c1(C)+ci(C). Chern classes and algebraic K-theory: It follows that the Chern classes of a vector bundle E depend only on the class of E in the Grothendieck group K0(X) . By definition, for a scheme X , K0(X) is the quotient of the free abelian group on the set of isomorphism classes of vector bundles on X by the relation that [B]=[A]+[C] for any short exact sequence as above. Although K0(X) is hard to compute in general, algebraic K-theory provides many tools for studying it, including a sequence of related groups Ki(X) for integers i>0 A variant is the group G0(X) (or K0′(X) ), the Grothendieck group of coherent sheaves on X . (In topological terms, G-theory has the formal properties of a Borel–Moore homology theory for schemes, while K-theory is the corresponding cohomology theory.) The natural homomorphism K0(X)→G0(X) is an isomorphism if X is a regular separated Noetherian scheme, using that every coherent sheaf has a finite resolution by vector bundles in that case. For example, that gives a definition of the Chern classes of a coherent sheaf on a smooth variety over a field. Chern classes and algebraic K-theory: More generally, a Noetherian scheme X is said to have the resolution property if every coherent sheaf on X has a surjection from some vector bundle on X . For example, every quasi-projective scheme over a Noetherian ring has the resolution property. Chern classes and algebraic K-theory: Applications of resolution property Since the resolution property states that a coherent sheaf E on a Noetherian scheme is quasi-isomorphic in the derived category to the complex of vector bundles : Ek→⋯→E1→E0 we can compute the total Chern class of E with c(E)=c(E0)c(E1)−1⋯c(Ek)(−1)k For example, this formula is useful for finding the Chern classes of the sheaf representing a subscheme of X . If we take the projective scheme Z associated to the ideal (xy,xz)⊂C[x,y,z,w] , then c(OZ)=c(O)c(O(−3))c(O(−2)⊕O(−2)) since there is the resolution 0→O(−3)→O(−2)⊕O(−2)→O→OZ→0 over CP3 Bundle homomorphism vs. sheaf homomorphism: When vector bundles and locally free sheaves of finite constant rank are used interchangeably, care must be given to distinguish between bundle homomorphisms and sheaf homomorphisms. Specifically, given vector bundles p:E→X,q:F→X , by definition, a bundle homomorphism φ:E→F is a scheme morphism over X (i.e., p=q∘φ ) such that, for each geometric point x in X , φx:p−1(x)→q−1(x) is a linear map of rank independent of x . Thus, it induces the sheaf homomorphism φ~:E→F of constant rank between the corresponding locally free OX -modules (sheaves of dual sections). But there may be an OX -module homomorphism that does not arise this way; namely, those not having constant rank. Bundle homomorphism vs. sheaf homomorphism: In particular, a subbundle E⊂F is a subsheaf (i.e., E is a subsheaf of F ). But the converse can fail; for example, for an effective Cartier divisor D on X , OX(−D)⊂OX is a subsheaf but typically not a subbundle (since any line bundle has only two subbundles). The category of quasi-coherent sheaves: The quasi-coherent sheaves on any fixed scheme form an abelian category. Gabber showed that, in fact, the quasi-coherent sheaves on any scheme form a particularly well-behaved abelian category, a Grothendieck category. 
A quasi-compact quasi-separated scheme X (such as an algebraic variety over a field) is determined up to isomorphism by the abelian category of quasi-coherent sheaves on X , by Rosenberg, generalizing a result of Gabriel. Coherent cohomology: The fundamental technical tool in algebraic geometry is the cohomology theory of coherent sheaves. Although it was introduced only in the 1950s, many earlier techniques of algebraic geometry are clarified by the language of sheaf cohomology applied to coherent sheaves. Broadly speaking, coherent sheaf cohomology can be viewed as a tool for producing functions with specified properties; sections of line bundles or of more general sheaves can be viewed as generalized functions. In complex analytic geometry, coherent sheaf cohomology also plays a foundational role. Coherent cohomology: Among the core results of coherent sheaf cohomology are results on finite-dimensionality of cohomology, results on the vanishing of cohomology in various cases, duality theorems such as Serre duality, relations between topology and algebraic geometry such as Hodge theory, and formulas for Euler characteristics of coherent sheaves such as the Riemann–Roch theorem.
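As a small worked instance of the Euler-characteristic formulas mentioned above (an added illustration, not part of the source article), take line bundles on the projective line; the computation below assumes only the classical Riemann–Roch theorem for curves.

```latex
% Worked example: Euler characteristic of O(d) on the projective line.
\[
  \chi\bigl(\mathbf{P}^1,\mathcal{O}(d)\bigr)
    = h^0\bigl(\mathcal{O}(d)\bigr) - h^1\bigl(\mathcal{O}(d)\bigr)
    = d + 1
  \qquad (d \ge 0),
\]
% by Riemann-Roch on a genus-0 curve (\chi = d + 1 - g with g = 0).  This matches the
% direct count: H^0 is spanned by the d+1 monomials x_0^d, x_0^{d-1}x_1, \dots, x_1^d,
% and H^1 vanishes for d \ge -1.
```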
**Sports photography** Sports photography: Sports photography refers to the genre of photography that covers all types of sports. In the majority of cases, professional sports photography is a branch of photojournalism, while amateur sports photography, such as photos of children playing association football, is a branch of vernacular photography. The main application of professional sports photography is for editorial purposes; dedicated sports photographers usually work for newspapers, major wire agencies or dedicated sports magazines. However, sports photography is also used for advertising purposes both to build a brand and as well as to promote a sport in a way that cannot be accomplished by editorial means. Equipment: Equipment typically used for sports photography includes a digital single-lens reflex (DSLR) camera or Mirrorless Camera with high continuous shooting speeds and interchangeable lenses ranging from 14mm to 400mm or longer in focal length, depending on the type of sport. The proper lenses are very important as they allow the photographer to reach closer or farther as quickly as possible to keep up with the game play. Essential accessories include a monopod or tripod for stability and extra batteries. Longer focal length lenses are typically used to photograph action in sports such as football, while wide angle lenses can be used for sideline and close-up athlete photos. Equipment: Camera bodies The preferred camera bodies for modern sports photography have fast autofocus and high burst rates, typically 8 frames per second or faster. The current flagship sports DSLR cameras produced by Canon and Nikon are the Canon EOS-1D X Mark III and the Nikon D6; these are popular in professional sports photography. But there are multiple other camera bodies to choose from. If you are a fan of the latest mirrorless cameras, bodies like the Canon R5, the Canon R6, the Sony A1 and the Sony A9 offer full frame sensors to get the highest quality image without compromising ISO, Aperture, and Shutter Speed in your camera settings. Equipment: Lenses Different sports favor different lenses, but sports photography usually requires fast (wide aperture) telephoto lenses, with fast autofocus performance. Fast autofocus is needed to focus on movement, telephoto to get close to the action, and wide aperture for several reasons: The background is dramatically put out of focus due to a shallow depth of field, resulting in better subject isolation. Equipment: The lenses can focus more quickly due to the increase in light entering the lens – important with fast-moving action. Equipment: Faster shutter speeds can be used to freeze the action.Extremely wide apertures (such as f/1.2 or f/1.4) are more rarely used, because at these apertures the depth of field is very shallow, which makes focusing more difficult and slows down autofocus. The main distinction is between outdoor sports and indoor sports – in outdoor sports the distances are greater and the light brighter, while in indoor sports the distances are lesser and the light dimmer. Accordingly, outdoor sports tend to have longer focal length long focus lenses with slower apertures, while indoor sports tend to have shorter lenses with faster apertures. Equipment: Both zoom and prime lenses are used; zoom lenses (generally in the 70–200, 75–300, 100–400 or 200-400 range) allow a greater range of framing; primes are faster, cheaper, lighter, and optically superior, but are more restricted in framing. 
As an example the Nikon AF-S NIKKOR 400mm f/2.8G ED VR AF lens and the Canon EF 300mm f/2.8L IS II USM lens are both fixed telephoto lenses which cannot zoom. Equipment: Apertures of f/2.8 or faster are most often used, though f/4 is also found, particularly on brighter days. Particularly visible are the Canon super telephoto lenses, whose distinctive white casing (to dissipate the sun's heat) is recognizable at many sporting events. Of these, the Canon 400mm f/2.8 is particularly recommended for field sports such as football.This varies with sport and preference; for example golf photographers may prefer to use a 500mm f/4 as opposed to a 400mm f/2.8 as it is a lighter lens to be carried around all day. Equipment: Indoor sports photography, as mentioned earlier, can present its own challenges with less distance between the action and photographer and extreme lighting. For example, competition cheerleading allows for photographers to be up close to the action while looking upwards directly into harsh stage lighting against a black background. A different approach to such a situation is to use the prime lens named a "nifty fifty". The shutter speed is extremely fast while still setting the aperture to bring in enough light. In this scenario a budget telephoto lens would produce both dark and blurry images. Using a prime 50mm lens is a budget friendly option for many other indoor events such as school plays, concerts, dance recitals, etc. Equipment: Remote cameras Sports photographers may use remote cameras triggered by wireless shutter devices (i.e. Pocket Wizards) to photograph from places they could not otherwise stay, for example in an elevated position such as above a basketball basket, or to be in two places at once, i.e. at the start and the finish - such as at horse racing. Technique: In order to minimize motion blur of moving subjects, the light sensitivity ("ISO" value) is increased, which shortens the necessary exposure time to capture sufficient light. The trade-off of increasing light sensitivity is increased noise, so sports photography is most effective in daylight and with higher-end cameras that are equipped with larger image sensors that capture more light and support higher light sensitivities. Technique: Location is often important for sports photography. At big events, professional photographers often shoot from VIP spots with the best views, usually as close to the action as possible. Most sports require the photographer to frame their images with speed and adjust camera settings spontaneously to prevent blurring or incorrect exposure. Some sports photography is also done from a distance to give the game a unique effect. Technique: Getting to know your subjects is critical in capturing emotion. Effects and editing can only do so much for a photo. Understanding who athletes are by having a conversation with them can change your view on the person, making you a better photographer. Knowing the game. Predicting what happens next in a sports game is critical in understanding how to compose your shot. The action moves fast so you take the time to prepare yourself before going out and taking photos. Technique: Shutter speed is critical to catching motion, thus sports photography is often done in shutter priority mode or manual. A frequent goal is to capture an instant with minimal blur, in which case a minimal shutter speed is desired, but in other cases a slower shutter speed is used so that blur shows to capture the motion, not simply the instant. 
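To make the trade-off between ISO, aperture and shutter speed discussed above concrete, here is a small sketch based on the standard exposure relation; the scene brightness (EV) and the f/2.8 aperture are made-up example values for a dim indoor venue, not recommendations from the article.

```python
# Illustrative exposure arithmetic: how raising ISO shortens the required
# shutter time at a fixed aperture.  The EV and aperture below are assumed values.

def shutter_time(ev100, aperture, iso):
    """Shutter time in seconds from the exposure relation 2**EV100 = N**2 / (t * ISO/100)."""
    return aperture ** 2 / (2 ** ev100 * iso / 100)

ev_indoor = 9            # assumed brightness of a dim gym (EV at ISO 100)
for iso in (800, 1600, 3200, 6400):
    t = shutter_time(ev_indoor, aperture=2.8, iso=iso)
    print(f"ISO {iso}: about 1/{round(1 / t)} s at f/2.8")
```

Each doubling of ISO halves the required exposure time at the same aperture, which is the lever used to freeze action under poor light, at the cost of additional sensor noise.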
A particular technique is panning, where the camera uses an intermediate shutter speed and pans with the subject, yielding a relatively sharp subject and a background blurred in the direction of motion, yielding a sense of speed – compare speed lines. Technique: ISO speed is often high (to allow faster shutter speeds) and may be left in auto. Photos are often taken in burst mode to capture the best moment, sometimes in combination with JPEG rather than RAW shooting (JPEG files being smaller, these allow longer bursts). Strip photography While the vast majority of sports photography focuses on capturing a moment, possibly with some blur, the technique of strip photography is sometimes used to instead show motion over time. This is most prominent in a photo finish, but can also be used for other purposes, often yielding unusually distorted images. Type: Commemorative photograph In association football, before kick-off, a starting XI commemorative photograph is taken. Type: The tradition of taking a starting XI photograph has existed since 1863, when one was taken for Wanderers F.C. and following inaugural 1871–72 FA Cup, starting XI photograph became common throughout England.Taking a starting XI photograph also occurred in 1930 FIFA World Cup, and, at present, in international A matches and international club matches such as UEFA Champions League, taking a starting XI commemorative photograph is included in match day protocols. Type: On occasion, some teams took both starting XI photograph and full squad photograph in their historic matches, for example, Brazil in 2002 FIFA World Cup Final and Tottenham Hotspur in the 2019 UEFA Champions League Final. Notable photographers: A number of notable international photographers are well known for their sports photography work; Some of them have often worked for the magazines Life or Sports Illustrated. Russ Adams (tennis photographer) Marc Aspland Andrew D. Bernstein Chris Burkard Gerry Cranham James Drake Bill Frakes Scott Kelby Neil Leifer Carol Newsom Adam Pretty John G. Zimmerman
**PTC Scheduler** PTC Scheduler: PTC Scheduler is a Windows based batch scheduling application. Via the use of either "Agents" or Telnet connections, PTC Scheduler is able to schedule and monitor batch processes on the following platforms: Windows (NT/XP/2000/2003/Vista) Sun Solaris 8/9/10 Red Hat Linux 7.3 Red Hat Fedora 4 AIX GCOS 7 Overview: Some of the features of PTC Scheduler are: Monitoring changes in batch duration to allow alerting of abnormal job execution Alerting upon job failure via SMS, Telephony & Email Message users for confirmation of steps to take during the batch execution Dashboard display of job execution status History: PTC Software is a UK based company which specialises in the development and distribution of Enterprise Systems Management (ESM) software products. PTC was formed in 1983 to provide a range of Systems Management utilities to the users of Honeywell Bull’s large mainframe computers. PTC’s first package was an early Job Scheduling solution called "Job Flow Control Facility". Indeed, this was probably one of the first job scheduling systems available anywhere. As the Honeywell Bull marketplace has eroded over the last twenty years, so PTC has expanded its product set to support many operating systems including Unix, Windows and Vax, whilst remaining firmly rooted to the original areas of expertise in Systems Management. The latest Scheduling tool, PTC Scheduler, is the fourth generation product and encompasses over twenty years of direct experience gained from the needs and wishes of many large (200 servers) and small customers (1-5 servers). The new generation of tools has been expanded into the areas of service availability and management. Areas which are becoming increasingly important for Technology departments or companies who wish to provide a measurably significant quality of service to their customers. This generation also incorporates full alert escalation ability and a Microsoft SQL Server database for permanent storage. In recent years, PTC Scheduler has been enhanced to provide scheduling for various housing applications including Pericles IBS
**Zero-product property** Zero-product property: In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words, if ab = 0, then a = 0 or b = 0. Zero-product property: This property is also known as the rule of zero product, the null factor law, the multiplication property of zero, the nonexistence of nontrivial zero divisors, or one of the two zero-factor properties. All of the number systems studied in elementary mathematics (the integers Z, the rational numbers Q, the real numbers R, and the complex numbers C) satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain. Algebraic context: Suppose A is an algebraic structure. We might ask, does A have the zero-product property? In order for this question to have meaning, A must have both additive structure and multiplicative structure. Usually one assumes that A is a ring, though it could be something else, e.g. the set of nonnegative integers {0, 1, 2, …} with ordinary addition and multiplication, which is only a (commutative) semiring. Algebraic context: Note that if A satisfies the zero-product property, and if B is a subset of A, then B also satisfies the zero-product property: if a and b are elements of B such that ab = 0, then either a = 0 or b = 0, because a and b can also be considered as elements of A. Examples: A ring in which the zero-product property holds is called a domain. A commutative domain with a multiplicative identity element is called an integral domain. Any field is an integral domain; in fact, any subring of a field is an integral domain (as long as it contains 1). Similarly, any subring of a skew field is a domain. Thus, the zero-product property holds for any subring of a skew field. Examples: If p is a prime number, then the ring of integers modulo p has the zero-product property (in fact, it is a field). The Gaussian integers are an integral domain because they are a subring of the complex numbers. In the strictly skew field of quaternions, the zero-product property holds. This ring is not an integral domain, because the multiplication is not commutative. The set of nonnegative integers {0, 1, 2, …} is not a ring (being instead a semiring), but it does satisfy the zero-product property. Non-examples: Let Zn denote the ring of integers modulo n. Then Z6 does not satisfy the zero-product property: 2 and 3 are nonzero elements, yet 2·3 ≡ 0 (mod 6). In general, if n is a composite number, then Zn does not satisfy the zero-product property. Namely, if n = qm where 0 < q, m < n, then q and m are nonzero modulo n, yet qm ≡ 0 (mod n). The ring of 2×2 matrices with integer entries does not satisfy the zero-product property: for example, with M = (0 1; 0 1) and N = (1 1; 0 0) (rows separated by semicolons), MN = 0, yet neither M nor N is zero. Non-examples: The ring of all functions f : [0,1] → R, from the unit interval to the real numbers, has nontrivial zero divisors: there are pairs of functions which are not identically equal to zero yet whose product is the zero function. In fact, it is not hard to construct, for any n ≥ 2, functions f1, …, fn, none of which is identically zero, such that fi fj is identically zero whenever i ≠ j. The same is true even if we consider only continuous functions, or even only infinitely smooth functions. On the other hand, analytic functions have the zero-product property.
Application to finding roots of polynomials: Suppose P and Q are univariate polynomials with real coefficients, and x is a real number such that P(x)Q(x) = 0. (Actually, we may allow the coefficients of P and Q to come from any integral domain.) By the zero-product property, it follows that either P(x) = 0 or Q(x) = 0. In other words, the roots of PQ are precisely the roots of P together with the roots of Q. Thus, one can use factorization to find the roots of a polynomial. For example, the polynomial x³ − 2x² − 5x + 6 factorizes as (x − 3)(x − 1)(x + 2); hence, its roots are precisely 3, 1, and −2. Application to finding roots of polynomials: In general, suppose R is an integral domain and f is a monic univariate polynomial of degree d ≥ 1 with coefficients in R. Suppose also that f has d distinct roots r1, …, rd ∈ R. It follows (but we do not prove here) that f factorizes as f(x) = (x − r1)⋯(x − rd). By the zero-product property, it follows that r1, …, rd are the only roots of f: any root of f must be a root of (x − ri) for some i. In particular, f has at most d distinct roots. Application to finding roots of polynomials: If, however, R is not an integral domain, then the conclusion need not hold. For example, the cubic polynomial x³ + 3x² + 2x has six roots in Z6 (though it has only three roots in Z).
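A quick computational check of the Z6 statements above (an added illustration; any composite modulus behaves the same way):

```python
# Zero divisors in Z_6 and roots of x^3 + 3x^2 + 2x modulo 6,
# illustrating the non-examples and the failure of the root bound.
n = 6

zero_divisor_pairs = [(a, b) for a in range(1, n) for b in range(1, n)
                      if (a * b) % n == 0]
print(zero_divisor_pairs)   # includes (2, 3), since 2*3 = 6 = 0 mod 6

roots = [x for x in range(n) if (x**3 + 3 * x**2 + 2 * x) % n == 0]
print(roots)                # all six residues 0..5 are roots in Z_6
```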
**Getty (Unix)** Getty (Unix): getty, short for "get tty", is a Unix program running on a host computer that manages physical or virtual terminals (TTYs). When it detects a connection, it prompts for a username and runs the 'login' program to authenticate the user. Originally, on traditional Unix systems, getty handled connections to serial terminals (often Teletype machines) connected to a host computer. The tty part of the name stands for Teletype, but has come to mean any type of text terminal. One getty process serves one terminal. In some systems, for example, Solaris, getty was replaced by ttymon. Personal computers running Unix-like operating systems, even if they do not provide any remote login services, may still use getty as a means of logging in on a local virtual console. Instead of the login program, getty may also be set up by the system administrator to run any other program, for example pppd (point-to-point protocol daemon) to provide a dial-up Internet connection.
**Foundation universe** Foundation universe: The Foundation universe is the future history of humanity's colonisation of the galaxy, spanning nearly 25,000 years, created through the gradual fusion of the Robot, Galactic Empire, and Foundation book series written by American author Isaac Asimov. Works set in the universe: Asimov's Greater Foundation series Merging the Robot, the Empire and the Foundation series The Foundation series is set in the same universe as Asimov's first published novel, Pebble in the Sky, although Foundation takes place about 10,000 years later. Pebble in the Sky became the basis for the Galactic Empire series. Then, at some unknown date (prior to writing Foundation's Edge) Asimov decided to merge the Foundation/Galactic Empire series with his Robot series. Thus, all three series are set in the same universe, giving them a combined length of 18 novels, and a total of about 1,500,000 words. The merge also gave the series a combined time-span of around 20,000 years. Works set in the universe: The Stars, Like Dust states explicitly that the Earth is radioactive because of a nuclear war. Asimov later explained that the in-universe reason for this perception was that it had been formulated by Earthmen many centuries after the event and had become distorted owing to the loss of much of their planetary history. This work is generally regarded as part of the Empire series, but does not directly mention either Trantor or the Spacer worlds. One character is said to have a visi-sonor, the same musical instrument that is played by the clown Magnifico in Foundation and Empire. Asimov integrated the Robot series into his all-encompassing Foundation series, making R. Daneel Olivaw appear again twenty thousand years later in the age of the Galactic Empire, in sequels and prequels to the original Foundation trilogy; and in the final book of the Robots series, Robots and Empire, Asimov describes how the worlds that later formed the Empire were settled, and how Earth became radioactive (which was first mentioned in Pebble in the Sky). Works set in the universe: The stand-alone novel Nemesis is also in the same continuity, being referenced in Forward the Foundation, where Hari Seldon refers to a twenty-thousand-year-old story of "a young woman that could communicate with an entire planet that circled a sun named Nemesis". Commentators noted that Nemesis contains barely disguised references to the Spacers and their calendar system, the Galactic Empire, and even to Hari Seldon, which seem to have been deliberately placed for the purpose of later integration into the Foundation universe. Works set in the universe: Asimov's "Author's Note" in Prelude to Foundation The foreword to Prelude to Foundation contains the chronological ordering of Asimov's science fiction books. Asimov stated that the books of his Robot, Empire, and Foundation series "offer a kind of history of the future, which is, perhaps, not completely consistent, since I did not plan consistency, to begin with." Asimov also noted that the books in his list "were not written in the order in which (perhaps) they should be read". In the Author's Note, Asimov noted that there is room for a book between Robots and Empire and The Currents of Space, and that he could follow Foundation and Earth with additional volumes.
Works set in the universe: Forward the Foundation, Nemesis, and The Positronic Man do not appear in Asimov's list, as they were not yet published at the time, and the order of the Empire novels in Asimov's list is not entirely consistent with other lists. For example, the 1983 Ballantine Books printing of The Robots of Dawn lists the Empire novels as: The Stars, Like Dust, The Currents of Space, and Pebble in the Sky. Given that The Currents of Space includes Trantor and that The Stars, Like Dust does not, these two books possibly were accidentally reversed in Asimov's list. Works set in the universe: Standalone novels set in the universe While not mentioned in the "Author's Note" of Prelude to Foundation, the novels The End of Eternity (1955), Nemesis (1989), and The Positronic Man (1992) (written by Robert Silverberg) are related to the greater Foundation series. Works set in the universe: The End of Eternity is vaguely referenced in Foundation's Edge, where a character mentions the Eternals, whose "task it was to choose a reality that would be most suitable to Humanity". (The End of Eternity also refers to a "Galactic Empire" within its story.) Asimov himself did not mention The End of Eternity in the series listing from Prelude to Foundation. As for Nemesis, it was written after Prelude to Foundation, but in the author's note Asimov explicitly states that the book is not part of the Foundation or Empire series, but that someday he might tie it to the others. Works set in the universe: In Forward the Foundation, Hari Seldon refers to a 20-thousand-year-old story of "a young woman that could communicate with an entire planet that circled a sun named Nemesis", a reference to Nemesis. In Nemesis, the main colony is one of the Fifty Settlements, a collection of orbital colonies that form a state. The Fifty Settlements possibly were the basis for the fifty Spacer worlds in the Robot stories. The implication at the end of Nemesis that the inhabitants of the off-Earth colonies are splitting off from Earthbound humans could also be connected to a similar implication about the Spacers in Mark W. Tiedemann's Robot books. According to Alasdair Wilkins, in a discussion posted on Gizmodo, "Asimov absolutely loves weird, elliptical structures. All three of his non-robot/Foundation science fiction novels – The End of Eternity, The Gods Themselves, and Nemesis – leaned heavily on non-chronological narratives, and he does it with gusto in The Gods Themselves."In The Robots of Dawn, Dr. Han Fastolfe briefly summarizes the story from "The Bicentennial Man" (1976), which was later expanded by Robert Silverberg into the novel The Positronic Man (1992). Works set in the universe: Works set in universe The foreword to Prelude to Foundation contains the chronological ordering of Asimov's science fiction books. Asimov stated that the books of his Robot, Galactic Empire, and Foundation series "offer a kind of history of the future, which is, perhaps, not completely consistent, since I did not plan consistency to begin with." 
Asimov also noted that the books in his list "were not written in the order in which (perhaps) they should be read." The following works are listed in chronological order by narrative:
Robot series (I):
- Short stories about robots, set from the 20th to 22nd centuries (1995–2180), collected in The Rest of the Robots (1964), The Complete Robot (1982), Robot Dreams (1986), Robot Visions (1990) and Gold: The Final Science Fiction Collection (1995)
- I, Robot (1950) - a fixup novel composed of 9 short stories about robots, set in the 21st century (1998–2052) on Earth
- The Positronic Man (1992) - a standalone robot novel, written by Robert Silverberg, based on Asimov's 1976 novelette "The Bicentennial Man", set from the 22nd to 24th centuries (2160–2360)
- Nemesis (1989) - a standalone novel, set in the 23rd century (2236) in a star system about 2 light years from Earth, when interstellar travel was new
Robot series (II):
- "Mother Earth" (1949) - short story, set in the 25th century (2421)
- The Caves of Steel (1954) - first novel, set in the 35th century (3421) on Earth
- The Naked Sun (1957) - second novel, set in the 35th century (3422) on the Spacer planet Solaria
- "Mirror Image" (1972) - short story, set in the 35th century (3423)
- The Robots of Dawn (1983) - third novel, set in the 35th century (3424) on the Spacer planet Aurora
- Robots and Empire (1985) - fourth novel, set in the 37th century (3624) on Earth, Solaria, Aurora, and Baleyworld
Galactic Empire series:
- The Stars, Like Dust (1951) - first novel, set in the 49th century (4850), thousands of years in the future before the founding of a Galactic Empire
- The Currents of Space (1952) - second novel, set in the 112th century (11129), thousands of years in the future during Trantor's unification of the galaxy into a Galactic Empire
- Pebble in the Sky (1950) - third novel, set in the 125th century (12411), primarily set thousands of years in the future on Earth, when the galaxy is unified into a Galactic Empire
- "Blind Alley" (1945) - short story, set in the 126th century (12561–12562)
Foundation series:
- Prelude to Foundation (1988) - first novel, set in the 237th century (23604)
- Forward the Foundation (1993) - second novel, set in the 237th century (23612–23653)
- Foundation (1951) - third novel, set from the 237th to 239th centuries (23651–23812)
- Foundation and Empire (1952) - fourth novel, set from the 239th to 240th centuries (23847–23963)
- Second Foundation (1953) - fifth novel, set from the 240th to 241st centuries (23968–24029)
- Foundation's Edge (1982) - sixth novel, set in the 242nd century (24150)
- Foundation and Earth (1986) - seventh novel, set in the 242nd century (24150)
- The End of Eternity (1955) - a standalone novel, about Eternity, an organization "outside time" which aims to improve human happiness by altering history
Timeline
Other authors contributing to the expanded series: Asimov's novels covered only 500 of the expected 1,000 years it would take for the Foundation to become a galactic empire. The novels that were written after Asimov did not continue the timeline but rather sought to fill in gaps in the earlier stories. The Foundation universe was once again revisited in 1989's Foundation's Friends, a collection of short stories written by many prominent science fiction authors of that time.
Orson Scott Card's "The Originist" clarifies the founding of the Second Foundation shortly after Seldon's death; Harry Turtledove's "Trantor Falls" tells of the efforts by the Second Foundation to survive during the sacking of Trantor, the imperial capital and Second Foundation's home; and George Zebrowski's "Foundation's Conscience" is about the efforts of a historian to document Seldon's work following the rise of the Second Galactic Empire. Works set in the universe: Also, shortly before his death in 1992, Asimov approved an outline for three novels by Roger MacBride Allen, known as the Caliban trilogy, set between Robots and Empire and the Empire series. The Caliban trilogy describes the terraforming of the Spacer world Inferno, a planet where an ecological crisis forces the Spacers to abandon many long-cherished parts of their culture. Allen's novels echo the uncertainties that Asimov's later books express about the Three Laws of Robotics, and in particular the way a thoroughly roboticized culture can degrade human initiative. Works set in the universe: After Asimov's death and at the request of Janet Asimov and the Asimov estate's representative, Ralph Vicinanza approached Gregory Benford and asked him to write another Foundation story. He eventually agreed and, with Vicinanza, after speaking "to several authors about [the] project", formed a plan for a trilogy with "two hard SF writers broadly influenced by Asimov and of unchallenged technical ability: Greg Bear and David Brin." Foundation's Fear (1997) takes place chronologically between part one and part two of Asimov's second prequel novel, Forward the Foundation; Foundation and Chaos (1998) is set at the same time as the first chapter of Foundation, filling in the background; Foundation's Triumph (1999) covers ground following the recording of the holographic messages to the Foundation, and ties together a number of loose ends. These books are now claimed by some to collectively be a "Second Foundation trilogy", although they are inserts into pre-existing prequels and some of the earlier Foundation storylines and not generally recognized as a new trilogy. Works set in the universe: In an epilogue to Foundation's Triumph, Brin noted he could imagine himself or a different author writing another sequel to add to Foundation's Triumph, feeling that Hari Seldon's story was not yet necessarily finished. He later published a possible start of such a book on his website. More recently, the Asimov estate authorized the publication of another trilogy of robot mysteries by Mark W. Tiedemann. These novels, which take place several years before Asimov's Robots and Empire, are Mirage (2000), Chimera (2001), and Aurora (2002). These were followed by yet another robot mystery, Alexander C. Irvine's Have Robot, Will Travel (2004), set five years after the Tiedemann trilogy. Works set in the universe: In 2001, Donald Kingsbury published the novel Psychohistorical Crisis, set in the Foundation universe after the start of the Second Empire. Novels by various authors (Isaac Asimov's Robot City, Robots and Aliens and Robots in Time series) are loosely connected to the Robot series, but contain many inconsistencies with Asimov's books, and are not generally considered part of the Foundation series.
Works set in the universe: In November 2009, the Isaac Asimov estate announced the publication of a prequel to I, Robot under the working title Robots and Chaos, the first volume in a prequel trilogy featuring Susan Calvin by fantasy author Mickey Zucker Reichert. The first book was published in November 2011 under the title I, Robot: To Protect, followed by I, Robot: To Obey in 2013 and I, Robot: To Preserve in 2016. Works set in the universe: Stories set in the Foundation universe, including works by other authors The following works are listed in chronological order by narrative: Planetary systems, stars and planets: 61 Cygni A star system advanced by Lord Dorwin as the potential site for a planet of origin for the human species. Lord Dorwin cites 'Sol' (meaning Earth's Sun) and three other planetary systems in the Sirius Sector, along with Arcturus in the Arcturus Sector, as potential original worlds. Claims were made as early as 1942 that 61 Cygni had a planetary system, though, to date, none has been verified. Planetary systems, stars and planets: Achilles A gas giant planet in the Anacreon system. Its size is somewhere between Saturn and Neptune, about 1.7 times as dense as water, with a strong equatorial bulge. In appearance, it is a dark, yellow-biased red, with scattered orange patches indicating storm systems. Achilles has several moons, such as one within the orbit of Neoptolemus, four between that and Ulysses, and twenty outside of Ulysses; all of these others are captured asteroids. Planetary systems, stars and planets: Alpha It orbits the star Alpha Centauri A (only 4.2 ly from Sol). The Empire terraformed this planet to hold Earth's inhabitants after it was devastated by radiation, but the project was never completed. Covered almost entirely with water, save for a fifteen-thousand-square-kilometer island, this planet was considered by Lord Dorwin to be the original system of humanity. The inhabitants call it New Earth and live a simple lifestyle in which women and men are completely shirtless, weather permitting, and the men engage in long sea voyages to fish. About halfway into the thousand-year darkness after the fall of the Empire, Golan Trevize ventured to this planet in his search for Earth. The inhabitants seemed nice enough but tried to infect him and his crew with a disease. After leaving, Trevize headed to the Solar System. Planetary systems, stars and planets: Anacreon (also known as Anacreon A II) A planet near the outer end of the periphery. As part of the Galactic Empire it was the capital of Anacreon subprefecture, Anacreon prefecture, and Anacreon Province, and later the Anacreon Kingdom. Anacreon is a binary star system. The pair orbit at 73.8 AU with a period of, in Earth terms, 181 yr, 84 days, 14 hr. Planetary systems, stars and planets: Arcturus One of the major planets. It is the capital world of the Sirius Sector in the Galactic Empire. It seems to have been named for the star Arcturus in Boötes. Planetary systems, stars and planets: Aurora Originally named New Earth, in later millennia the planet would be renamed "Aurora", which means "dawn", to signify the dawning of a new age for the Spacer culture. It is an Earthlike planet, the innermost planet orbiting the star Tau Ceti (12 ly from Sol). It was the first Spacer planet colonized, established in 2065. Its capital is Eos (about 20,000 inhabitants). As it was highly populated and developed, it was considered the "capital" of the Spacers. The planet has two moons: Tithonus I and Tithonus II.
Aurora at its height had a population of 200 million humans, and 10 billion robots. The head of its planetary government was called the "Chairman". The largest city on the planet was Eos (which also means "dawn"), the administrative and robotic centre of Aurora, where Han Fastolfe and Gladia Solaria lived. The University of Eos and the Auroran Robotics Institute were both located within Eos. After the decline of the Spacers, the planet's remaining inhabitants are believed to have emigrated to Trantor, settling in the Mycogen Sector. The descendants of the Aurorans, or Mycogenians, never forgot Aurora, but they apparently evolved to the point where they were indistinguishable from Settlers. The scripture of the Mycogenians mentions Aurora, robots, and other topics; Hari Seldon peruses this document and finds the "corpse" of a robot in Mycogen also. Ironically, the culture of Mycogen appears to be in many ways a complete opposite of Aurora. Where the society of Aurora had complete gender equality and social mobility, Mycogen has a restrictive caste system with women apparently taking the place of Auroran robots, with absolutely no rights. It is also very restrictive sexually, where Aurora was basically a free love society. Mycogen, a sector of Trantor, identifies Aurora as the first planet and places a high value on the robots, lamenting their loss. The searchers for Earth visit Aurora, along with other ancient settlements. The planet is by then not inhabited by human beings, and its desertified ecology is dominated by feral dogs. Planetary systems, stars and planets: Comporellon (originally Baleyworld) A planet located near Gaia and Sayshell, Comporellon was renowned for being particularly old. It was founded by the second wave of space colonists, known as the Settlers, and thus had a very superstitious attitude toward the first wave, the Spacers. They were also superstitious about Earth. Golan Trevize, Janov Pelorat, and Bliss visit Comporellon in Foundation and Earth, and acquire the coordinates of three Spacer worlds: Solaria, Aurora, and Melpomenia from a historian. Comporellon was under the political influence of the First Foundation, but its awkward situation caused resentment toward Foundationers. Its inhabitants preferred clothes that were white, gray, and black. Trevize comments that their food could be very good. Astronomically, Comporellon was a very cold ice world. Planetary systems, stars and planets: Earth (sometimes called Old Earth, Gaia or Terra) A planet, the most common setting of Asimov's robot short stories. Earth is the planet upon which humans have lived for longer than anyone remembers. Earth features in one of several Origin Myths found throughout the Galactic Empire. Its history, however, is shrouded in the mists of time. Earth was the third planet from its sun (called Sol) and had one large moon (Luna). From millions of years BC to the early Galactic Era, Earth was one of the most, if not the most, important planets in the galaxy, being one of the only planets to ever develop life without being colonized by other worlds, and being the origin planet of the human race, who would go on to dominate the galaxy through the Galactic Empire. Around 65,000,000 BC, the dinosaurs, the original dominant race of Earth, were killed by a race of small intelligent lizards armed with guns, which either left Earth or died out. Eventually, humans evolved on the planet.
Up until the 20th century AD, the human race progressed, having wars and developing technology, experiencing the ups and downs of civilization, but nothing extremely radical happened, and, most importantly, no one made an attempt to leave Earth and colonize new worlds. In the first half of the 20th century, two world wars were endured, WW1 in the 1910s and WW2 in the 1940s. Eventually, in 1973, the human race reached for the moon. The Prometheus failed, but, after a complicated series of events, the New Prometheus reached the moon in 1978, achieving the goal of leaving Earth, if only slightly. From around 1979 to 1982, WW3 took place, ending nationalism, and splitting Earth into Regions. From there, the planet experienced a new renaissance, developing positronic brains in the 1980s and 1990s, governed by the Three Laws of Robotics. One of the most important early pioneers in robotics was Susan Calvin (1982-2064), who was the first and chief robopsychologist at US Robots and Mechanical Men from 2007 to 2058. Robots eventually grew very advanced. In 2065, Earth colonized the first extrasolar world, Aurora, the World of the Dawn. This was the first of the great Spacer Worlds, which were colonized over thousands of years across the stars. Around 3720, they rebelled against Earth, winning the Three-Week War, and came to hold a higher standing in society than Earth. In 4724, detective Elijah Baley managed to allow the colonization of new worlds by Earth, which had been suppressed, and the Settler worlds were founded. These were threatened in 4922 but were saved due to the efforts of Gladia Solaria, R. Daneel Olivaw and R. Giskard Reventlov, at the cost of Earth being made radioactive by Levular Mandamus. Eventually, the Settler worlds spread across the galaxy, outnumbering the Spacer worlds greatly, and Earth sank into unimportance, but was still known of and not looked down on. 1,000 years into the radioactivity, it was believed to be the result of a nuclear war. Thousands of years later, in the year 500 of the Foundation Era, Daneel Olivaw was on the Moon of Earth and encountered Golan Trevize. Planetary systems, stars and planets: Fomalhaut A star mentioned in the novel Pebble in the Sky. Joseph Schwartz of Chicago is transported by a stray beam of radiation to the Earth of the far future, which is part of a galactic empire ruled by the planet Trantor. Finding himself in wild countryside, he searches far and wide for help until he stumbles upon a cottage – only he can't understand the dwellers, nor they, him. One of them theorizes, "He must come from some far-off corner of the Galaxy ... They say the men of Fomalhaut have to learn practically a new language to be understood at the Emperor's court on Trantor." Asimov would later substantially abandon using any real star names at all in the empire. Planetary systems, stars and planets: Gaia A planet whose people are known by the same name or as the Anti-Mules, described in the novel Foundation's Edge and referred to in Foundation and Earth. The name is derived from the Gaia hypothesis, which is itself named after Gaia, the Earth goddess. Gaia is located in the Sayshell Sector, about ten parsecs (32 light-years) from the system Sayshell itself. It orbits a G-4 class star and has one natural satellite (50 km or 31 miles in diameter). Its axial inclination is 12°, and a Gaian day lasts 0.92 Galactic Standard Days.
In its course of settlement, the human beings on Gaia, under robotic guidance, not only evolved their ability to form an ongoing group consciousness but also extended this consciousness to the fauna and flora of the planet itself, even including inanimate matter. As a result, the entire planet became a super-organism. Gaia was founded by R. Daneel Olivaw during the Empire's reign. Even then, the galaxy left it alone and it evaded taxes. By 498 F.E., Gaia had a population of one billion, a high population for a planet at that time. The inhabitants hoped eventually to create a complex ecology; all human-settled planets in the Galaxy (except Earth) had simple ecologies. The inhabitants of Gaia were all tied together into a telepathic group consciousness when it was founded; this consciousness was eventually extended to the non-human life, and later to the inorganic material of the planet. This would explain the Mule's incredible psychic powers, as Gaia was said to be his home planet. Planetary systems, stars and planets: Gamma Andromeda A star system mentioned in the novel Foundation. A catastrophic nuclear reactor meltdown occurred on Gamma Andromeda V in the year 50 F.E. The meltdown killed several million people and destroyed at least half the planet. Planetary systems, stars and planets: Jennisek A planet in close vicinity of Helicon, its traditional rival. This planet was described by Hari Seldon in Prelude to Foundation. Planetary systems, stars and planets: Kalgan A planet located in the Periphery, Kalgan was a world of no particular resource or strategic value which rose to prominence during the reign of the Galactic Empire as a pleasure planet. Imperial nobles would visit Kalgan as a means to indulge themselves, making the planet and its leadership immensely prosperous. Because of its ability to stay neutral from conflict and to provide tourism as its main amenity, Kalgan survived the decline of the Empire with ease and eventually came under the control of a warlord. In 310 F.E., the Mule, as chronicled in Foundation and Empire, took over Kalgan by Converting its Warlord into his mind-slave. For a brief time, over a third of the Galaxy was ruled from Kalgan through the Mule's Union of Worlds, but after his death, the Lords of Kalgan were unable to maintain this level of control. The Union disintegrated into a mere 27 worlds and was almost completely encapsulated by the economic and political control of the Foundation. In 376 F.E., Lord Stettin, urged by his own egomania, decided to invade the Foundation. For a brief time, the power of Kalgan was extended, before the morale boost of the Seldon Plan caught up to Kalganians fighting on the front. Demoralized, they were easily overcome by the brilliant technical maneuvers of the Foundationer Navy. The peace deals following the Stettinian War made the subject worlds of Kalgan autonomous and, through popular vote, they were permitted to become independent or to join the Foundation Federation. After this crushing defeat, Kalgan ceased to play a major role in galactic history. Planetary systems, stars and planets: Korell A planet in the novel Foundation. Located in the Whassalian Rift, it was the capital of the Republic of Korell. Korell was one of those frequent phenomena: a republic only in name. The dictator, called the Commdor, 'first citizen of the state', is elected every year. Through some twist or another, a member of the Argo family is always chosen. According to Hober Mallow, people who didn't like this arrangement had "things" happen to them.
Unlike a de jure monarch, the de facto monarchy associated with the status of the Commdor was not moderated by the typical influences of 'honour' and 'court etiquette'. Korell was the third Seldon Crisis because it was the first nation encountered by the Foundation with an effective system of nucleics. Hober Mallow was sent to investigate; he visited Asper Argo, Commdor of Korell, and opened up trade with his people through him. Despite discovering the steel foundries were not nuclear, Mallow did spot nuclear blasters provided by the Galactic Empire. Otherwise, Korell was decadent. The only remains of the Empire were 'silent memorials' and 'broken buildings'; the navy consisted of 'tiny, limping relics' and 'battered, clumsy hulks'. Mallow later learned that the viceroy of the Normannic Sector was providing Korell with nuclear blasters and with ships (five by the time Korell declared war with the Foundation; a sixth was promised). Mallow quickly realised that the real enemy was the Empire, not Korell; forcing himself into the office of mayor, Mallow was able to destroy the threat of Korell by doing nothing. Since his visit three years before had made Korell dependent on Foundation-made goods, the Korellians raised a great many complaints about their minor inconveniences. Since there was no threat of foreign conquest, the people became rebellious. Faced with this situation, the Commdor was forced to surrender to the Foundation unconditionally. The planets of the Korellian Republic eventually entered the Foundation's hands. They were captured briefly by Kalgan during the early stages of the war with Kalgan. Planetary systems, stars and planets: Melpomenia A planet in the novel Foundation and Earth; it was one of the fifty Spacer worlds, colonized by the first wave of settlers from Earth. It was nineteenth in the order of settlement. In their search for the planet of origin of the human species, the third set of coordinates given to Golan Trevize, Janov Pelorat, and Bliss points to Melpomenia, an old, dead Spacer world with a very thin atmosphere and almost no signs of civilization except for some old ruins. One of these ruins was found to be the "Hall of the Worlds", within which was a wall inscribed with the names, coordinates, and dates of the settlement of all the fifty Spacer worlds in chronological order. This information is later used by Janov Pelorat to deduce the approximate position of Earth in the galaxy. The first try led Pelorat not to Earth but to the nearby planet Alpha (in the system of Alpha Centauri, approximately four light-years away from Earth). In addition, they also happen to find that even in this harsh, nearly airless world, life does manage to survive in the form of a kind of moss that lives on the faintest traces of carbon dioxide. The moss starts to grow along the edges of the face-plates of their spacesuits, and Trevize realizes that if it were to somehow get within their ship, the Far Star, it would become impossible to control. It would follow the trail of carbon dioxide along their nostrils and into their lungs and kill them. Using his blaster on minimum power, he burns away the moss on their spacesuits and also along the Far Star's airlock so that they can get inside safely. Planetary systems, stars and planets: Neotrantor (originally named Delicass) was a small agricultural planet located near the center of the galaxy, near Trantor. After the Great Sack, it was the location of the last seat of the Galactic Empire.
Dagobert IX's short reign was based on Neotrantor. In Foundation and Empire, Toran and Bayta Darell, Ebling Mis, and the Mule visit Neotrantor during their search for the Second Foundation. They are given permission by Dagobert IX to enter the Imperial Library on Trantor but are stopped by Dagobert X, who wishes to marry Bayta. After the Mule kills the crown prince, the group leaves the disenfranchised Neotrantor. Planetary systems, stars and planets: Nishaya A planet mentioned in the novel Forward the Foundation. Part of the pre-Imperial Kingdom of Trantor. At the end of the Empire, the planet was noted for its goat herding and high-quality cheeses. Laskin Joranum pretended to be from Nishaya during his campaign to overthrow Eto Demerzel. His identity was compromised when Hari Seldon noted that he had a perfect fluency in the Trantorian dialect, as opposed to native Nishayans, who spoke a very different dialect of Galactic standard. Planetary systems, stars and planets: Santanni A planet 9000 parsecs (29,000 light-years) from Trantor and 800 parsecs (2600 light-years) from Locris. In 12,058 G.E. the population of Santanni attempted to rebel against the Galactic Empire. Raych Seldon, son of psychohistorian Hari Seldon, was killed in the rebellion, valiantly defending the University of Santanni. After the founding of the Foundation, Santanni traded with it until the trade route was cut off by the rebellion of Anacreon. One known item of Santanni make was the cigar box possessed by Jord Fara, and later by Salvor Hardin. It was captured in the early stages of the war with Kalgan. After the death of the Mule, Santanni was instrumental in breaking the siege on Terminus levied by the Mule's successor, Han Pritcher, in 308 F.E. Planetary systems, stars and planets: Sayshell A planet in the Periphery. It was the capital of the Sayshell Union, which was renowned for having resisted the control of the Foundation Federation for several hundred years during the Interregnum, despite being completely surrounded by Federation territory. Sayshell features heavily in Foundation's Edge. According to the legends of Sayshell, the planet was founded by a group of colonists who were not known to hail from any other colonized world, leading some historians, such as Janov Pelorat, to conclude that Sayshell was a colony founded directly from Earth. The Sayshellians themselves believed (incorrectly) that Earth was located somewhere within Sayshell Sector. Due to the protection of the mentalic planet Gaia, Sayshell was never truly threatened by outside forces for all of its history. Under the Galactic Empire, Sayshell received minimal taxation and enjoyed a large degree of independence from Imperial controls. Later, after the fall of the Empire, Sayshell remained untouched by the anarchic war which consumed most of the Galaxy, and eventually stayed free from the control of the Mule's Union of Worlds and the Foundation Federation. Sayshell was briefly threatened by the Foundation Federation under Harla Branno, who in reality wished to destroy the planet Gaia (which lay completely within Sayshellian territory). However, Gaia used her mentalic influence to convince the Sayshellians that, in the end, Mayor Branno was looking for a neutrality-trade treaty, marking the end of Sayshell's brief stint in galactic affairs. Sayshellian culture was noticeably different from that of the ultra-scientific Foundation Federation.
It stressed mysticism (especially the influence of dreams) and respect for nature, as evidenced by the percentage of Sayshellian wildlife that was still preserved from human influence. Sayshellians also had excellent cuisine, and a minor dislike toward outsiders, especially Foundationers. The religion and philosophy of Sayshell seem to be modeled on Buddhism. Planetary systems, stars and planets: Siwenna A planet prominent in Foundation and Foundation and Empire. It was the capital of the Normannic Sector of the Galactic Empire, and once one of its richest planets. Shortly after 100 F.E., Wiscard, the viceroy of Siwenna, rebelled. Most of the subjects, led by Patrician Onum Barr, remained loyal to the Empire and overthrew Wiscard. The Imperial Admiral dispatched to Siwenna was angered at this, because it robbed him of his glory. So, he put most of the population of Siwenna under the atom blast, charging them with the crime of rebelling against an Imperial viceroy (Wiscard). Much of its population was killed, and Barr himself lost five sons and a daughter; only his sixth son, Ducem Barr, survived. Because of the rampant destruction of Siwenna, the Admiral set himself up as viceroy but moved the capital of the Normannic Sector to Orsha II. Between this time and its conquest by Bel Riose in 200 F.E., Siwenna rebelled five times, eventually becoming independent. When the campaign led by Riose against the Foundation ended, the Siwenna province transferred to the Foundation, the first Imperial province to pass directly from the Empire to the Foundation. After the beginning of the Foundation Era, Siwenna began to run downhill. 'The physical resources of twenty-five first rank planets take a long time to use up. Compared to the wealth of the last century, though, we have gone a long way downhill—and there is no sign of turning, not yet,' –Onum Barr, to Hober Mallow, 150 F.E. About 50 years before, Stanel VI had died, ending a reign under which Siwenna came close to achieving its ancient prosperity. Little is known of Siwennian culture, except that when Riose first met Ducem Barr, it was 'socially impossible not to drink tea on Siwenna'. Planetary systems, stars and planets: Smyrno A planet located in the Anacreon Province. It is originally a prefecture but later becomes one of the Four Kingdoms in the Anacreon Province that broke away from the Galactic Empire c. 50 F.E. The kingdom of Smyrno has no nuclear power until the Foundation arrives. The planet itself is located a little less than 50 parsecs (163 light-years) from Terminus. Its name is a parallel with Smyrna, an important city of the Roman Empire in Anatolia. Smyrno is hot and dry, the rooms smell of sulphur, and people live underground. Its most famous citizen is Hober Mallow, one of the major characters from the Foundation series. Its citizens are often discriminated against by Foundation members. Smyrnians are often seen as unintelligent and untrustworthy. Jorane Sutt, a political enemy of Mallow (who eventually becomes mayor), uses Mallow's ethnicity against him. Planetary systems, stars and planets: Solaria A planet in the Robot and Foundation series. Inhabited by Spacer descendants, Solaria is the fiftieth and last Spacer world settled in the first wave of interstellar settlement. It was occupied from approximately 4627 AD by inhabitants of the neighboring world Nexon, originally for summer homes. It was ruled by a Regent after it became independent around 4727 AD. It had perhaps the most eccentric culture of all the Spacer worlds.
The Solarians specialized in the construction of robots, which they exported to the other Spacer Worlds. Solarian robots were noted for their variety and excellence. They also exported their grain, which was used to make a pastry known as the pachinka. Originally, there were about 20,000 people living in vast estates individually or as married couples. There were thousands of robots for every Solarian. Almost all of the work and manufacturing was conducted by robots. The population was kept stable through strict birth and immigration controls. In the era of Robots and Empire, no more than five thousand Solarians were known to remain. Twenty thousand years later, the population was twelve hundred, with just one human per estate. Solarians hated physical contact with others and only communicated with each other via holograms. A few hundred years after Elijah Baley's visit to the planet, Solarians retreated from the Galactic scene and fled underground. The Solarians genetically altered themselves to be hermaphroditic and have the ability to use telekinesis. They made special robots designed to kill any foreigners who came to the planet. In 499 F.E. (approximately 25,066 AD), Solaria was visited by Golan Trevize, Janov Pelorat, and Bliss. They landed on the estate of Sarton Bander, the "Ruler" of a Solarian estate (note that Sarton was the last name of R. Daneel Olivaw's designer, Roj Nemennuh Sarton of Aurora). They learned of the sociological developments of Solaria through Bander, who apparently took a secret pleasure in having something close to intellectual companionship, or at least an intellectual audience. To prevent them from providing information to the Galaxy about Solaria and in keeping with Solarian customs and beliefs, not to mention preventing other Solarians' discovery of shameful personal contact with offworlders, Bander attempted to kill the visitors but was instead killed in self-defense by Bliss, resulting in the shutdown of all of the robots and other machinery of the Bander Estate. The visitors were able to escape, but not before discovering a child in one of the countless rooms of the estate, Fallom, assuming it to be a successor to Bander (who had not mentioned the existence of an heir, but had mentioned that there would be one at the appropriate time or in the case of an unforeseen accident), whom they would ultimately bring with them to Earth. Had they left Fallom on Solaria, the child would almost certainly have been killed, because it was seen as a surplus child and also had not as yet developed its transducer lobes, therefore not counting as a Solarian and being expendable. Fallom demonstrated great precocity in learning Galactic and would eventually stay on the Moon of Earth to mentally merge with Daneel Olivaw. Planetary systems, stars and planets: Synnax A planet mentioned in the novel Foundation. Synnax circles a star at the edges of the Blue Drift. Its inhabitants are considered "provincial" by the more urbanite Trantorians. It was the homeworld of the psychohistorian Gaal Dornick. It is mentioned in the first book that despite its "provincial" nature, it had not been kept away from civilization: Imperial coronations were duly broadcast on the world. Synnax has only one satellite. Planetary systems, stars and planets: Terminus The capital planet of the First Foundation. It is located at the edge of the Galaxy. It was the sole planet orbiting its isolated star and had almost no metals.
The nearest planet was Anacreon, 8 parsecs (26 light-years) away. Because it lies on the fringe of the galaxy, there are almost no stars in its sky. It lay on the edge of the Galaxy that was opposite the planet Siwenna. It was the planet farthest from the Galactic Centre; its name reflects that fact: Latin terminus means 'end of the line'. It had a very high water-to-land ratio. The only large island was the one on which Terminus City lay. A total of ten thousand inhabited islands existed on the planet. The climate was mild. Prior to human occupation, there was some life on Terminus. However, once humans arrived (along with their supporting species), these native life forms were crowded out and became extinct. The capital of Terminus Planet is Terminus City. Three other cities are known: Agyropol, Newton City, Stanmark (Arkady Darell's hometown). Planetary systems, stars and planets: Trantor Known after the Great Sack as Home, or Hame, to its people, Trantor is a fictional world in the Foundation universe. Trantor has a very long history, first as the throne of the Kingdom of Trantor and later as the administrative center of the galaxy.
**Rectifier (neural networks)** Rectifier (neural networks): In the context of artificial neural networks, the rectifier or ReLU (rectified linear unit) activation function is an activation function defined as the positive part of its argument: $f(x) = x^{+} = \max(0, x)$, where $x$ is the input to a neuron. This is also known as a ramp function and is analogous to half-wave rectification in electrical engineering. This activation function was introduced by Kunihiko Fukushima in 1969 in the context of visual feature extraction in hierarchical neural networks. It was later argued that it has strong biological motivations and mathematical justifications. In 2011 it was found to enable better training of deeper networks, compared to the widely used activation functions prior to 2011, e.g., the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. The rectifier is, as of 2017, the most popular activation function for deep neural networks. Rectified linear units find applications in computer vision and speech recognition using deep neural nets and computational neuroscience. Advantages: Sparse activation: For example, in a randomly initialized network, only about 50% of hidden units are activated (have a non-zero output). Better gradient propagation: Fewer vanishing gradient problems compared to sigmoidal activation functions that saturate in both directions. Efficient computation: Only comparison, addition and multiplication. Advantages: Scale-invariant: $\max(0, ax) = a \max(0, x)$ for $a \geq 0$. Rectifying activation functions were used to separate specific excitation and unspecific inhibition in the neural abstraction pyramid, which was trained in a supervised way to learn several computer vision tasks. In 2011, the use of the rectifier as a non-linearity has been shown to enable training deep supervised neural networks without requiring unsupervised pre-training. Rectified linear units, compared to the sigmoid function or similar activation functions, allow faster and more effective training of deep neural architectures on large and complex datasets. Potential problems: Non-differentiable at zero; however, it is differentiable anywhere else, and the value of the derivative at zero can be arbitrarily chosen to be 0 or 1. Not zero-centered. Unbounded. Potential problems: Dying ReLU problem: ReLU (rectified linear unit) neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, and so the neuron becomes stuck in a perpetually inactive state and "dies". This is a form of the vanishing gradient problem. In some cases, large numbers of neurons in a network can become stuck in dead states, effectively decreasing the model capacity. This problem typically arises when the learning rate is set too high. It may be mitigated by using leaky ReLUs instead, which assign a small positive slope for x < 0; however, the performance is reduced. Variants: Piecewise-linear variants Leaky ReLU Leaky ReLUs allow a small, positive gradient when the unit is not active (commonly $f(x) = 0.01x$ for $x < 0$), helping to mitigate the vanishing gradient problem. Parametric ReLU Parametric ReLUs (PReLUs) take this idea further by making the coefficient of leakage into a parameter that is learned along with the other neural-network parameters. Note that for $a \leq 1$, this is equivalent to $\max(x, ax)$ and thus has a relation to "maxout" networks.
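As a minimal sketch (assuming NumPy and the conventional 0.01 leak slope, neither of which is mandated by the article), the piecewise-linear units above can be written as:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)            # f(x) = max(0, x), the positive part

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)     # small positive slope a for x < 0

x = np.linspace(-3.0, 3.0, 7)
print(relu(x))                                       # negatives clamped to zero
print(leaky_relu(x))                                 # negatives scaled by 0.01
print(np.allclose(relu(2.5 * x), 2.5 * relu(x)))     # scale invariance for a >= 0
```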
Variants: Other non-linear variants Gaussian-error linear unit (GELU) GELU is a smooth approximation to the rectifier: $f(x) = x\,\Phi(x)$, where $\Phi(x) = P(X \leq x)$ is the cumulative distribution function of the standard normal distribution. It has a non-monotonic "bump" when x < 0 and serves as the default activation for models such as BERT. Variants: SiLU The SiLU (sigmoid linear unit) or swish function is another smooth approximation, first coined in the GELU paper: $f(x) = x \operatorname{sigmoid}(x)$, where $\operatorname{sigmoid}(x)$ is the sigmoid function. Variants: Softplus A smooth approximation to the rectifier is the analytic function $f(x) = \ln(1 + e^{x})$, which is called the softplus or SmoothReLU function. For large negative $x$ it is roughly $\ln 1$, so just above 0, while for large positive $x$ it is roughly $\ln(e^{x})$, so just above $x$. A sharpness parameter $k$ may be included: $f(x) = \frac{\ln(1 + e^{kx})}{k}$. The derivative of softplus is the logistic function. Variants: The logistic sigmoid function is a smooth approximation of the derivative of the rectifier, the Heaviside step function. The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero: $\mathrm{LSE}_0(x_1, \ldots, x_n) := \mathrm{LSE}(0, x_1, \ldots, x_n) = \ln(1 + e^{x_1} + \cdots + e^{x_n})$. The LogSumExp function is $\mathrm{LSE}(x_1, \ldots, x_n) = \ln(e^{x_1} + \cdots + e^{x_n})$, and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning. ELU Exponential linear units try to make the mean activations closer to zero, which speeds up learning: $f(x) = x$ for $x > 0$ and $f(x) = a(e^{x} - 1)$ for $x \leq 0$. It has been shown that ELUs can obtain higher classification accuracy than ReLUs. Variants: In these formulas, $a$ is a hyper-parameter to be tuned with the constraint $a \geq 0$. The ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the form $\max(-a, x)$, given the same interpretation of $a$. Mish The mish function could also be used as a smooth approximation of the rectifier. It is defined as $f(x) = x \tanh(\operatorname{softplus}(x))$, where $\tanh(x)$ is the hyperbolic tangent, and $\operatorname{softplus}(x)$ is the softplus function. Variants: Mish is non-monotonic and self-gated. It was inspired by Swish, itself a variant of ReLU. Variants: Squareplus Squareplus is the function $\operatorname{squareplus}_b(x) = \frac{x + \sqrt{x^{2} + b}}{2}$, where $b \geq 0$ is a hyperparameter that determines the "size" of the curved region near $x = 0$. (For example, letting $b = 0$ yields ReLU, and letting $b = 4$ yields the metallic mean function.) Squareplus shares many properties with softplus: It is monotonic, strictly positive, approaches 0 as $x \to -\infty$, approaches the identity as $x \to +\infty$, and is $C^{\infty}$ smooth. However, squareplus can be computed using only algebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. Additionally, squareplus requires no special consideration to ensure numerical stability when $x$ is large.
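The smooth variants are equally short to write down. The following sketch uses the standard formulas named above with arbitrary test inputs (it is illustrative only, not taken from the article), and computes GELU via the exact normal CDF using the error function:

```python
import math

def softplus(x):
    return math.log1p(math.exp(x))                           # ln(1 + e^x)

def gelu(x):
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))    # x * Phi(x)

def silu(x):
    return x / (1.0 + math.exp(-x))                          # x * sigmoid(x), a.k.a. swish

def squareplus(x, b=4.0):
    return (x + math.sqrt(x * x + b)) / 2.0                  # b = 0 recovers ReLU

for x in (-2.0, 0.0, 2.0):
    print(x, softplus(x), gelu(x), silu(x), squareplus(x))
```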
**LAGEOS** LAGEOS: LAGEOS, Laser Geodynamics Satellite or Laser Geometric Environmental Observation Survey, is a series of two scientific research satellites designed to provide an orbiting laser ranging benchmark for geodynamical studies of the Earth. Each satellite is a high-density passive laser reflector in a very stable medium Earth orbit (MEO). Function and operation: The spacecraft are aluminum-covered brass spheres with diameters of 60 centimetres (24 in) and masses of 400 and 411 kilograms (882 and 906 pounds), covered with 426 cube-corner retroreflectors, giving them the appearance of disco balls. Of these retroreflectors, 422 are made from fused silica glass while the remaining 4 are made from germanium to obtain measurements in the infrared for experimental studies of reflectivity and satellite orientation. They have no on-board sensors or electronics, and are not attitude-controlled. Function and operation: They orbit at an altitude of 5,900 kilometres (3,700 mi), well above low Earth orbit and well below geostationary orbit, at orbital inclinations of 109.8 and 52.6 degrees. Measurements are made by transmitting pulsed laser beams from Earth ground stations to the satellites. The laser beams then return to Earth after hitting the reflecting surfaces; the travel times are precisely measured, permitting ground stations in different parts of the Earth to measure their separations to better than one inch in thousands of miles. The LAGEOS satellites make it possible to determine positions of points on the Earth with extremely high accuracy due to the stability of their orbits. The high mass-to-area ratio and the precise, stable (attitude-independent) geometry of the LAGEOS spacecraft, together with their extremely regular orbits, make these satellites the most precise position references available. Mission goals: The LAGEOS mission consists of the following key goals: Provide an accurate measurement of the satellite's position with respect to Earth. Determine the planet's shape (geoid). Determine tectonic plate movements associated with continental drift. Ground tracking stations located in many countries (including the US, Mexico, France, Germany, Poland, Australia, Egypt, China, Peru, Italy, and Japan) have ranged to the satellites and data from these stations are available worldwide to investigators studying crustal dynamics. There are two LAGEOS spacecraft, LAGEOS-1 launched in 1976, and LAGEOS-2 launched in 1992. As of May 2011, both LAGEOS spacecraft are routinely tracked by the ILRS network. Time capsule: LAGEOS-1 (which is predicted to re-enter the atmosphere in 8.4 million years) also contains a 4 in × 7 in plaque designed by Carl Sagan to indicate to future humanity when LAGEOS-1 was launched. The plaque includes the numbers 1 to 10 in binary. In the upper right is a diagram of the Earth orbiting the Sun, with a binary number 1 indicating one revolution, equaling one year. It then shows 268,435,456 years (2^28) in the past, indicated by a left arrow and the arrangement of the Earth's continents at that time (during the Permian period). The present arrangement of the Earth's continents is indicated with a 0 and both forward and backward arrows. Then the estimated arrangement of the continents in 8.4 million years is shown with a right-facing arrow and 8,388,608 (2^23) in binary. LAGEOS itself is shown at launch on the 0 year, and falling to the Earth in the 8.4 million year diagram.
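As a rough illustration of the ranging principle described under Function and operation above (the numbers below are assumed for the example, not mission data), the station-to-satellite range follows directly from the measured round-trip time of a pulse:

```python
C = 299_792_458.0  # speed of light, m/s

def one_way_range(round_trip_seconds):
    return C * round_trip_seconds / 2.0

rt = 2 * 5_900_000.0 / C        # a pulse to a satellite ~5,900 km overhead
print(f"round trip: {rt * 1e3:.2f} ms -> range: {one_way_range(rt) / 1e3:.1f} km")
print(f"100 ps of timing error ~ {one_way_range(100e-12) * 100:.1f} cm of range error")
```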
Launch data: LAGEOS 1, launched 4 May 1976, NSSDC ID 1976-039A, NORAD number 8820 LAGEOS 2, deployed 23 October 1992 from STS-52, NSSDC ID 1992-070B, NORAD number 22195
**Center of mass** Center of mass: In physics, the center of mass of a distribution of mass in space (sometimes referred to as the balance point) is the unique point at any given time where the weighted relative position of the distributed mass sums to zero. This is the point to which a force may be applied to cause a linear acceleration without an angular acceleration. Calculations in mechanics are often simplified when formulated with respect to the center of mass. It is a hypothetical point where the entire mass of an object may be assumed to be concentrated to visualise its motion. In other words, the center of mass is the particle equivalent of a given object for application of Newton's laws of motion. Center of mass: In the case of a single rigid body, the center of mass is fixed in relation to the body, and if the body has uniform density, it will be located at the centroid. The center of mass may be located outside the physical body, as is sometimes the case for hollow or open-shaped objects, such as a horseshoe. In the case of a distribution of separate bodies, such as the planets of the Solar System, the center of mass may not correspond to the position of any individual member of the system. Center of mass: The center of mass is a useful reference point for calculations in mechanics that involve masses distributed in space, such as the linear and angular momentum of planetary bodies and rigid body dynamics. In orbital mechanics, the equations of motion of planets are formulated as point masses located at the centers of mass. The center of mass frame is an inertial frame in which the center of mass of a system is at rest with respect to the origin of the coordinate system. History: The concept of center of gravity or weight was studied extensively by the ancient Greek mathematician, physicist, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what we now call the center of mass. Archimedes showed that the torque exerted on a lever by weights resting at various points along the lever is the same as what it would be if all of the weights were moved to a single point—their center of mass. In his work On Floating Bodies, Archimedes demonstrated that the orientation of a floating object is the one that makes its center of mass as low as possible. He developed mathematical techniques for finding the centers of mass of objects of uniform density of various well-defined shapes.Other ancient mathematicians who contributed to the theory of the center of mass include Hero of Alexandria and Pappus of Alexandria. In the Renaissance and Early Modern periods, work by Guido Ubaldi, Francesco Maurolico, Federico Commandino, Evangelista Torricelli, Simon Stevin, Luca Valerio, Jean-Charles de la Faille, Paul Guldin, John Wallis, Christiaan Huygens, Louis Carré, Pierre Varignon, and Alexis Clairaut expanded the concept further.Newton's second law is reformulated with respect to the center of mass in Euler's first law. Definition: The center of mass is the unique point at the center of a distribution of mass in space that has the property that the weighted position vectors relative to this point sum to zero. In analogy to statistics, the center of mass is the mean location of a distribution of mass in space. 
Definition: A system of particles In the case of a system of particles $P_i$, $i = 1, \ldots, n$, each with mass $m_i$ that are located in space with coordinates $\mathbf{r}_i$, $i = 1, \ldots, n$, the coordinates $\mathbf{R}$ of the center of mass satisfy the condition $\sum_{i=1}^{n} m_i (\mathbf{r}_i - \mathbf{R}) = \mathbf{0}$. Solving this equation for $\mathbf{R}$ yields the formula $\mathbf{R} = \frac{1}{M} \sum_{i=1}^{n} m_i \mathbf{r}_i$, where $M = \sum_{i=1}^{n} m_i$ is the total mass of the particles. A continuous volume If the mass distribution is continuous with the density ρ(r) within a solid Q, then the integral of the weighted position coordinates of the points in this volume relative to the center of mass R over the volume V is zero, that is $\int_Q \rho(\mathbf{r}) (\mathbf{r} - \mathbf{R}) \, dV = \mathbf{0}$. Solve this equation for the coordinates $\mathbf{R}$ to obtain $\mathbf{R} = \frac{1}{M} \int_Q \rho(\mathbf{r}) \, \mathbf{r} \, dV$, where M is the total mass in the volume. Definition: If a continuous mass distribution has uniform density, which means that ρ is constant, then the center of mass is the same as the centroid of the volume. Definition: Barycentric coordinates The coordinates $\mathbf{R}$ of the center of mass of a two-particle system, $P_1$ and $P_2$, with masses $m_1$ and $m_2$ is given by $\mathbf{R} = \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{m_1 + m_2}$. Let the percentage of the total mass divided between these two particles vary from 100% P1 and 0% P2 through 50% P1 and 50% P2 to 0% P1 and 100% P2, then the center of mass R moves along the line from P1 to P2. The percentages of mass at each point can be viewed as projective coordinates of the point R on this line, and are termed barycentric coordinates. Another way of interpreting the process here is the mechanical balancing of moments about an arbitrary point. The numerator gives the total moment that is then balanced by an equivalent total force at the center of mass. This can be generalized to three points and four points to define projective coordinates in the plane, and in space, respectively. Definition: Systems with periodic boundary conditions For particles in a system with periodic boundary conditions two particles can be neighbours even though they are on opposite sides of the system. This occurs often in molecular dynamics simulations, for example, in which clusters form at random locations and sometimes neighbouring atoms cross the periodic boundary. When a cluster straddles the periodic boundary, a naive calculation of the center of mass will be incorrect. A generalized method for calculating the center of mass for periodic systems is to treat each coordinate, x and y and/or z, as if it were on a circle instead of a line. The calculation takes every particle's x coordinate and maps it to an angle, $\theta_i = \frac{x_i}{x_{\max}} 2\pi$, where $x_{\max}$ is the system size in the x direction and $x_i \in [0, x_{\max})$. From this angle, two new points $(\xi_i, \zeta_i)$ can be generated, which can be weighted by the mass of the particle $m_i$ for the center of mass or given a value of 1 for the geometric center: $\xi_i = \cos(\theta_i)$ and $\zeta_i = \sin(\theta_i)$. In the $(\xi, \zeta)$ plane, these coordinates lie on a circle of radius 1. From the collection of $\xi_i$ and $\zeta_i$ values from all the particles, the averages $\bar{\xi}$ and $\bar{\zeta}$ are calculated: $\bar{\xi} = \frac{1}{M} \sum_{i=1}^{n} m_i \xi_i$ and $\bar{\zeta} = \frac{1}{M} \sum_{i=1}^{n} m_i \zeta_i$, where M is the sum of the masses of all of the particles. Definition: These values are mapped back into a new angle, $\bar{\theta}$, from which the x coordinate of the center of mass can be obtained: $\bar{\theta} = \operatorname{atan2}(-\bar{\zeta}, -\bar{\xi}) + \pi$ and $x_{\mathrm{com}} = x_{\max} \frac{\bar{\theta}}{2\pi}$. The process can be repeated for all dimensions of the system to determine the complete center of mass. The utility of the algorithm is that it allows the mathematics to determine where the "best" center of mass is, instead of guessing or using cluster analysis to "unfold" a cluster straddling the periodic boundaries. If both average values are zero, $(\bar{\xi}, \bar{\zeta}) = (0, 0)$, then $\bar{\theta}$ is undefined. This is a correct result, because it only occurs when all particles are exactly evenly spaced. In that condition, their x coordinates are mathematically identical in a periodic system.
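A minimal sketch of the circular-mapping method just described, for one periodic coordinate (the particle data are made-up test values, and the function name is hypothetical):

```python
import math

def periodic_com_x(xs, ms, x_max):
    thetas = [2.0 * math.pi * x / x_max for x in xs]        # map each x to an angle
    M = sum(ms)
    xi_bar = sum(m * math.cos(t) for m, t in zip(ms, thetas)) / M
    zeta_bar = sum(m * math.sin(t) for m, t in zip(ms, thetas)) / M
    theta_bar = math.atan2(-zeta_bar, -xi_bar) + math.pi    # average angle in [0, 2*pi)
    return x_max * theta_bar / (2.0 * math.pi)

# A two-particle cluster straddling the boundary of a box of size 10: the naive
# mean of 9.0 and 0.6 is 4.8 (the wrong side of the box); the circular method gives 9.8.
print(periodic_com_x([9.0, 0.6], [1.0, 1.0], 10.0))
```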
Center of gravity: A body's center of gravity is the point around which the resultant torque due to gravity forces vanishes. Where a gravity field can be considered to be uniform, the mass-center and the center-of-gravity will be the same. However, for satellites in orbit around a planet, in the absence of other torques being applied to a satellite, the slight variation (gradient) in gravitational field between closer-to (stronger) and further-from (weaker) the planet can lead to a torque that will tend to align the satellite such that its long axis is vertical. In such a case, it is important to make the distinction between the center-of-gravity and the mass-center. Any horizontal offset between the two will result in an applied torque. Center of gravity: It is useful to note that the mass-center is a fixed property for a given rigid body (e.g. with no slosh or articulation), whereas the center-of-gravity may, in addition, depend upon its orientation in a non-uniform gravitational field. In the latter case, the center-of-gravity will always be located somewhat closer to the main attractive body as compared to the mass-center, and thus will change its position in the body of interest as its orientation is changed. Center of gravity: In the study of the dynamics of aircraft, vehicles and vessels, forces and moments need to be resolved relative to the mass center. That is true independent of whether gravity itself is a consideration. Referring to the mass-center as the center-of-gravity is something of a colloquialism, but it is in common usage and, when gravity gradient effects are negligible, center-of-gravity and mass-center are the same and are used interchangeably. Center of gravity: In physics the benefits of using the center of mass to model a mass distribution can be seen by considering the resultant of the gravity forces on a continuous body. Consider a body Q of volume V with density ρ(r) at each point r in the volume. In a parallel gravity field the force f at each point r is given by $\mathbf{f}(\mathbf{r}) = -dm\,g\,\mathbf{\hat{k}}$, where dm is the mass at the point r, g is the acceleration of gravity, and $\mathbf{\hat{k}}$ is a unit vector defining the vertical direction. Center of gravity: Choose a reference point R in the volume and compute the resultant force and torque at this point, $\mathbf{F} = -\left(\int_{Q}\rho(\mathbf{r})\,dV\right)g\,\mathbf{\hat{k}}$ and $\mathbf{T} = \int_{Q}(\mathbf{r} - \mathbf{R}) \times \left(-\rho(\mathbf{r})\,g\,\mathbf{\hat{k}}\right)dV = \left(\int_{Q}\rho(\mathbf{r})(\mathbf{r} - \mathbf{R})\,dV\right) \times \left(-g\,\mathbf{\hat{k}}\right)$. If the reference point R is chosen so that it is the center of mass, then $\int_{Q}\rho(\mathbf{r})(\mathbf{r} - \mathbf{R})\,dV = \mathbf{0}$, which means the resultant torque T = 0. Because the resultant torque is zero the body will move as though it is a particle with its mass concentrated at the center of mass. Center of gravity: By selecting the center of gravity as the reference point for a rigid body, the gravity forces will not cause the body to rotate, which means the weight of the body can be considered to be concentrated at the center of mass. Linear and angular momentum: The linear and angular momentum of a collection of particles can be simplified by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n of masses mi be located at the coordinates ri with velocities vi. Select a reference point R, with velocity $\mathbf{v} = d\mathbf{R}/dt$, and compute the relative position and velocity vectors, $\mathbf{r}_i = (\mathbf{r}_i - \mathbf{R}) + \mathbf{R}$ and $\mathbf{v}_i = \frac{d}{dt}(\mathbf{r}_i - \mathbf{R}) + \mathbf{v}$. The total linear momentum and angular momentum of the system are $\mathbf{p} = \frac{d}{dt}\left(\sum_{i=1}^{n} m_i(\mathbf{r}_i - \mathbf{R})\right) + \left(\sum_{i=1}^{n} m_i\right)\mathbf{v}$ and $\mathbf{L} = \sum_{i=1}^{n} m_i(\mathbf{r}_i - \mathbf{R}) \times \frac{d}{dt}(\mathbf{r}_i - \mathbf{R}) + \left(\sum_{i=1}^{n} m_i(\mathbf{r}_i - \mathbf{R})\right) \times \mathbf{v} + \mathbf{R} \times \frac{d}{dt}\left(\sum_{i=1}^{n} m_i(\mathbf{r}_i - \mathbf{R})\right) + \mathbf{R} \times \left(\sum_{i=1}^{n} m_i\right)\mathbf{v}$. If R is chosen as the center of mass these equations simplify to $\mathbf{p} = m\mathbf{v}$ and $\mathbf{L} = \sum_{i=1}^{n} m_i(\mathbf{r}_i - \mathbf{R}) \times \frac{d}{dt}(\mathbf{r}_i - \mathbf{R}) + \mathbf{R} \times m\mathbf{v}$, where m is the total mass of all the particles, p is the linear momentum, and L is the angular momentum.
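As a quick numerical illustration of the two simplifications above (not part of the original derivation), the following Python/NumPy sketch builds a random particle system and confirms that the gravity torque about the center of mass vanishes in a uniform field and that the total linear momentum equals the total mass times the center-of-mass velocity.

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 5.0, size=6)           # particle masses
r = rng.uniform(-1.0, 1.0, size=(6, 3))     # positions
v = rng.uniform(-1.0, 1.0, size=(6, 3))     # velocities
g = np.array([0.0, 0.0, -9.81])             # uniform gravitational field

R = (m[:, None] * r).sum(axis=0) / m.sum()  # center of mass
V = (m[:, None] * v).sum(axis=0) / m.sum()  # velocity of the center of mass

# Resultant gravity torque about the center of mass: sum_i (r_i - R) x (m_i * g).
# Because sum_i m_i (r_i - R) = 0, this is identically zero in a uniform field.
T = np.cross(r - R, m[:, None] * g).sum(axis=0)
print(np.allclose(T, 0.0))                  # True

# Total linear momentum equals the total mass times the center-of-mass velocity.
p = (m[:, None] * v).sum(axis=0)
print(np.allclose(p, m.sum() * V))          # True
```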
Linear and angular momentum: The law of conservation of momentum predicts that for any system not subjected to external forces the momentum of the system will remain constant, which means the center of mass will move with constant velocity. This applies for all systems with classical internal forces, including magnetic fields, electric fields, chemical reactions, and so on. More formally, this is true for any internal forces that cancel in accordance with Newton's Third Law. Locating the center of mass: The experimental determination of a body's centre of mass makes use of gravity forces on the body and is based on the fact that the centre of mass is the same as the centre of gravity in the parallel gravity field near the earth's surface. Locating the center of mass: The center of mass of a body with an axis of symmetry and constant density must lie on this axis. Thus, a circular cylinder of constant density has its center of mass on the axis of the cylinder. In the same way, the center of mass of a spherically symmetric body of constant density is at the center of the sphere. In general, for any symmetry of a body, its center of mass will be a fixed point of that symmetry. Locating the center of mass: In two dimensions. An experimental method for locating the center of mass is to suspend the object from two locations and to drop plumb lines from the suspension points. The intersection of the two lines is the center of mass. The shape of an object might already be mathematically determined, but it may be too complex to use a known formula. In this case, one can subdivide the complex shape into simpler, more elementary shapes, whose centers of mass are easy to find. If the total mass and center of mass can be determined for each area, then the center of mass of the whole is the weighted average of the centers. This method can even work for objects with holes, which can be accounted for as negative masses. A direct development of the planimeter known as an integraph, or integerometer, can be used to establish the position of the centroid or center of mass of an irregular two-dimensional shape. This method can be applied to a shape with an irregular, smooth or complex boundary where other methods are too difficult. It was regularly used by ship builders to compare with the required displacement and center of buoyancy of a ship, and ensure it would not capsize. Locating the center of mass: In three dimensions. An experimental method to locate the three-dimensional coordinates of the center of mass begins by supporting the object at three points and measuring the forces F1, F2, and F3 that resist the weight of the object, $\mathbf{W} = -W\mathbf{\hat{k}}$ ($\mathbf{\hat{k}}$ is the unit vector in the vertical direction). Let r1, r2, and r3 be the position coordinates of the support points; then the coordinates R of the center of mass satisfy the condition that the resultant torque is zero, or $(\mathbf{r}_1 - \mathbf{R}) \times F_1\mathbf{\hat{k}} + (\mathbf{r}_2 - \mathbf{R}) \times F_2\mathbf{\hat{k}} + (\mathbf{r}_3 - \mathbf{R}) \times F_3\mathbf{\hat{k}} = \mathbf{0}$. This equation yields the coordinates of the center of mass R* in the horizontal plane as $\mathbf{R}^* = \frac{1}{W}\left(F_1\mathbf{r}_1 + F_2\mathbf{r}_2 + F_3\mathbf{r}_3\right)$, with $W = F_1 + F_2 + F_3$. The center of mass lies on the vertical line L given by $\mathbf{L}(t) = \mathbf{R}^* + t\,\mathbf{\hat{k}}$. The three-dimensional coordinates of the center of mass are determined by performing this experiment twice with the object positioned so that these forces are measured for two different horizontal planes through the object. The center of mass will be the intersection of the two lines L1 and L2 obtained from the two experiments.
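A small worked example may make the three-point procedure concrete. The sketch below (Python/NumPy; the support positions and measured forces are invented numbers, not data from the article) solves the zero-torque condition for the horizontal coordinates R* and reports the total weight W.

```python
import numpy as np

def com_from_three_supports(r_supports, forces):
    """Horizontal center-of-mass coordinates from a three-point support test.

    r_supports: (3, 2) horizontal (x, y) coordinates of the support points.
    forces:     (3,)   upward reaction forces F1, F2, F3 measured at the supports.

    The zero-torque condition gives R* = (F1*r1 + F2*r2 + F3*r3) / W,
    with the object's weight W = F1 + F2 + F3.
    """
    r = np.asarray(r_supports, dtype=float)
    F = np.asarray(forces, dtype=float)
    W = F.sum()
    R_star = (F[:, None] * r).sum(axis=0) / W
    return R_star, W

# Invented measurements (metres, newtons):
supports = [(0.0, 0.0), (1.2, 0.0), (0.6, 0.9)]
forces = [180.0, 150.0, 160.0]
R_star, W = com_from_three_supports(supports, forces)
print(R_star, W)   # ~[0.563, 0.294] and 490.0
# The center of mass lies on the vertical line R* + t*k_hat; repeating the test
# with the object in a second orientation fixes its height as well.
```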
Applications: Engineering designs Automotive applications Engineers try to design a sports car so that its center of mass is lowered to make the car handle better, which is to say, maintain traction while executing relatively sharp turns. The characteristic low profile of the U.S. military Humvee was designed in part to allow it to tilt farther than taller vehicles without rolling over, by ensuring its low center of mass stays over the space bounded by the four wheels even at angles far from the horizontal. Applications: Aeronautics The center of mass is an important point on an aircraft, which significantly affects the stability of the aircraft. To ensure the aircraft is stable enough to be safe to fly, the center of mass must fall within specified limits. If the center of mass is ahead of the forward limit, the aircraft will be less maneuverable, possibly to the point of being unable to rotate for takeoff or flare for landing. If the center of mass is behind the aft limit, the aircraft will be more maneuverable, but also less stable, and possibly unstable enough so as to be impossible to fly. The moment arm of the elevator will also be reduced, which makes it more difficult to recover from a stalled condition.For helicopters in hover, the center of mass is always directly below the rotorhead. In forward flight, the center of mass will move forward to balance the negative pitch torque produced by applying cyclic control to propel the helicopter forward; consequently a cruising helicopter flies "nose-down" in level flight. Applications: Astronomy The center of mass plays an important role in astronomy and astrophysics, where it is commonly referred to as the barycenter. The barycenter is the point between two objects where they balance each other; it is the center of mass where two or more celestial bodies orbit each other. When a moon orbits a planet, or a planet orbits a star, both bodies are actually orbiting a point that lies away from the center of the primary (larger) body. For example, the Moon does not orbit the exact center of the Earth, but a point on a line between the center of the Earth and the Moon, approximately 1,710 km (1,062 miles) below the surface of the Earth, where their respective masses balance. This is the point about which the Earth and Moon orbit as they travel around the Sun. If the masses are more similar, e.g., Pluto and Charon, the barycenter will fall outside both bodies. Applications: Rigging and safety Knowing the location of the center of gravity when rigging is crucial, possibly resulting in severe injury or death if assumed incorrectly. A center of gravity that is at or above the lift point will most likely result in a tip-over incident. In general, the further the center of gravity below the pick point, the more safe the lift. There are other things to consider, such as shifting loads, strength of the load and mass, distance between pick points, and number of pick points. Specifically, when selecting lift points, it's very important to place the center of gravity at the center and well below the lift points. Applications: Body motion In kinesiology and biomechanics, the center of mass is an important parameter that assists people in understanding their human locomotion. 
Typically, a human's center of mass is detected with one of two methods: the reaction board method is a static analysis that involves the person lying down on that instrument and the use of a static equilibrium equation to find their center of mass; the segmentation method relies on a mathematical solution based on the physical principle that the summation of the torques of individual body sections, relative to a specified axis, must equal the torque of the whole system that constitutes the body, measured relative to the same axis.
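The segmentation method reduces to a mass-weighted average of the segment centers of mass, which is the same torque balance written out per coordinate. Below is a minimal sketch in Python; the segment names, masses and positions are invented for illustration and are not taken from any anthropometric table.

```python
# Whole-body center of mass by the segmentation method: the sum of the segment
# torques about an axis equals the torque of the whole body acting at its COM,
# so COM = sum(m_i * x_i) / sum(m_i) for each coordinate.
segments = {
    # name: (mass in kg, (x, y) coordinates of the segment's own COM in metres)
    "trunk": (30.0, (0.00, 1.10)),
    "head":  (5.0,  (0.00, 1.65)),
    "arms":  (8.0,  (0.05, 1.05)),
    "legs":  (22.0, (0.02, 0.55)),
}

total_mass = sum(m for m, _ in segments.values())
com_x = sum(m * x for m, (x, _) in segments.values()) / total_mass
com_y = sum(m * y for m, (_, y) in segments.values()) / total_mass
print(total_mass, round(com_x, 3), round(com_y, 3))   # 65.0 0.013 0.95
```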
**Gun turret** Gun turret: A gun turret (or simply turret) is a mounting platform from which weapons can be fired that affords protection, visibility and ability to turn and aim. A modern gun turret is generally a rotatable weapon mount that houses the crew or mechanism of a projectile-firing weapon and at the same time lets the weapon be aimed and fired in some degree of azimuth and elevation (cone of fire). Description: Rotating gun turrets protect the weapon and its crew as they rotate. When this meaning of the word "turret" started being used at the beginning of the 1860s, turrets were normally cylindrical. Barbettes were an alternative to turrets; with a barbette the protection was fixed, and the weapon and crew were on a rotating platform inside the barbette. In the 1890s, armoured hoods (also known as "gun houses") were added to barbettes; these rotated with the platform (hence the term "hooded barbette"). By the early 20th century, these hoods were known as turrets. Modern warships have gun-mountings described as turrets, though the "protection" on them is limited to protection from the weather. Description: Rotating turrets can be mounted on a fortified building or structure such as a coastal blockhouse, be part of a land battery, or be mounted on a combat vehicle, a naval ship, or a military aircraft; they may be armed with one or more machine guns, automatic cannons, large-calibre guns, or missile launchers. They may be manned or remotely controlled and are most often protected to some degree, if not actually armoured. Description: The protection provided by the turret may be against battle damage, the weather conditions, or the general environment in which the weapon or its crew will be operating. The name derives from the pre-existing noun turret, from the French "touret", diminutive of the word "tower", meaning a self-contained protective position which is situated on top of a fortification or defensive wall as opposed to rising directly from the ground, in which case it constitutes a tower. Description: Cupolas A small turret, or sub-turret set on top of a larger one, is called a cupola. The term cupola is also used for a rotating turret that carries a sighting device rather than weaponry, such as that used by a tank commander. Warships: Before the development of large-calibre, long-range guns in the mid-19th century, the classic battleship design used rows of gunport-mounted guns on each side of the ship, often mounted in casemates. Firepower was provided by a large number of guns, each of which could traverse only in a limited arc. Due to stability issues, fewer large (and thus heavy) guns can be carried high on a ship; this set casemates low and thus near the waterline, where they were vulnerable to flooding, effectively restricting their use to calm seas. Additionally, casemate mounts had to be recessed into the side of a vessel to afford a wide arc of fire, and such recesses presented shot traps, compromising the integrity of armour plating. Rotating turrets were weapon mounts designed to protect the crew and mechanism of the artillery piece and with the capability of being aimed and fired over a broad arc, typically from a three-quarter circle up to a full 360 degrees. These presented the opportunity to concentrate firepower in fewer, better-sited positions by eliminating redundancy, in other words, combining the firepower of those guns unable to engage an enemy because they were sited on the wrong beam into a more powerful and more versatile unified battery.
Warships: History Designs for a rotating gun turret date back to the late 18th century. In the mid 19th century, during the Crimean War, Captain Cowper Phipps Coles constructed a raft with guns protected by a 'cupola' and used the raft, named the Lady Nancy, to shell the Russian town of Taganrog in the Black Sea during the Siege of Taganrog. The Lady Nancy "proved a great success" and Coles patented his rotating turret design after the war. Warships: UK: first designs The British Admiralty ordered a prototype of Coles's patented design in 1859, which was installed in the ironclad floating battery, HMS Trusty, for trials in 1861, becoming the first warship to be fitted with a revolving gun turret. Coles's aim was to create a ship with the greatest possible all round arc of fire, as low in the water as possible to minimise the target. Warships: The Admiralty accepted the principle of the turret gun as a useful innovation, and incorporated it into other new designs. Coles submitted a design for a ship having ten domed turrets each housing two large guns. Warships: The design was rejected as impractical, although the Admiralty remained interested in turret ships and instructed its own designers to create better designs. Coles enlisted the support of Prince Albert, who wrote to the first Lord of the Admiralty, the Duke of Somerset, supporting the construction of a turret ship. In January 1862, the Admiralty agreed to construct a ship, HMS Prince Albert which had four turrets and a low freeboard, intended only for coastal defence. Warships: While Coles designed the turrets, the ship was the responsibility of Chief Constructor Isaac Watts. Another ship using Coles' turret designs, HMS Royal Sovereign, was completed in August 1864. Its existing broadside guns were replaced with four turrets on a flat deck and the ship was fitted with 5.5 inches (140 mm) of armour in a belt around the waterline.Early ships like the Royal Sovereign had little sea-keeping qualities being limited to coastal waters. Warships: Sir Edward James Reed, went on to design and build HMS Monarch, the first seagoing warship to carry her guns in turrets. Laid down in 1866 and completed in June 1869, it carried two turrets, although the inclusion of a forecastle and poop prevented the turret guns firing fore and aft. Warships: United States: USS Monitor The gun turret was independently invented in the United States by the Swedish inventor John Ericsson, although his design was technologically inferior to Coles's version. Ericsson designed USS Monitor in 1861, its most prominent feature being a large, cylindrical gun turret mounted amidships above the low-freeboard upper hull, also referred to as the "raft". This extended well past the sides of the lower, more traditionally shaped hull. Warships: A small, armoured pilot house was fitted on the upper deck towards the bow; however, its position prevented Monitor from firing her guns straight forward. Like Coles's, one of Ericsson's goals in designing the ship was to present the smallest possible target to enemy gunfire. The turret's rounded shape helped to deflect cannon shot. A pair of donkey engines rotated the turret through a set of gears; a full rotation was made in 22.5 seconds during testing on 9 February 1862. However, fine control of the turret proved to be difficult, as it would have to be reversed if it overshot its mark. In lieu of reversing the turret, a full rotation would have to be made to train the guns where desired. 
Warships: Including the guns, the turret weighed approximately 160 long tons (163 t); the entire weight rested on an iron spindle that had to be jacked up using a wedge before the turret was free to rotate. The spindle was 9 inches (23 cm) in diameter which gave it ten times the strength needed in preventing the turret from sliding sideways.When not in use, the turret rested on a brass ring on the deck that was intended to form a watertight seal. However, in service, the interface between the turret and deck ring heavily leaked, despite caulking by the crew.The gap between the turret and the deck proved to be another kind of problem for several Passaic-class monitors, which used the same turret design, as debris and shell fragments entered the gap and jammed the turrets during the First Battle of Charleston Harbor in April 1863. Direct hits at the turret with heavy shot also had the potential to bend the spindle, which could also jam the turret.Monitor was originally intended to mount a pair of 15-inch (380 mm) smoothbore Dahlgren guns, but they were not ready in time and 11-inch (280 mm) guns were substituted, each gun weighing approximately 16,000 pounds (7,300 kg). Warships: Monitor's guns used the standard propellant charge of 15 pounds (6.8 kg) specified by the 1860 ordnance instructions for targets "distant", "near", and "ordinary", established by the gun's designer Dahlgren himself. They could fire a 136-pound (61.7 kg) round shot or shell up to a range of 3,650 yards (3,340 m) at an elevation of +15°. Warships: Later designs HMS Thunderer (1872) represented the culmination of this pioneering work. An ironclad turret ship designed by Edward James Reed, she was equipped with revolving turrets that used pioneering hydraulic turret machinery to maneouvre the guns. She was also the world's first mastless battleship, built with a central superstructure layout, and became the prototype for all subsequent warships. With her sister HMS Devastation of 1871 she was another pivotal design, and led directly to the modern battleship. Warships: The US Navy tried to save weight and deck space, and allow the much faster firing 8-inch to shoot during the long reload time necessary for 12-inch guns by superposing secondary gun turrets directly on top of the primary turrets (as in the Kearsarge and Virginia-class battleships), but the idea proved to be practically unworkable and was soon abandoned.With the advent of the South Carolina-class battleships in 1908, the main battery turrets were designed so as to superfire, to improve fire arcs on centerline mounted weapons. This was necessitated by a need to move all main battery turrets to the vessel's centerline for improved structural support. The 1906 HMS Dreadnought, while revolutionary in many other ways, had retained wing turrets due to concerns about muzzle blast affecting the sighting mechanisms of a turret below. A similar advancement was in the Kongō-class battlecruisers and Queen Elizabeth-class battleships, which dispensed with the "Q" turret amidships in favour of heavier guns in fewer mountings. Warships: Like pre-dreadnoughts, the first dreadnoughts had two guns in each turret; however, later ships began to be fitted with triple turrets. The first ship to be built with triple turrets was the Italian Dante Alighieri, although the first to be actually commissioned was the Austro-Hungarian SMS Viribus Unitis of the Tegetthoff class. 
By the beginning of World War II, most battleships used triple or, occasionally, quadruple turrets, which reduced the total number of mountings and improved armour protection. However, quadruple turrets proved to be extremely complex to arrange, making them unwieldy in practice. Warships: The largest warship turrets were in World War II battleships where a heavily armoured enclosure protected the large gun crew during battle. The calibre of the main armament on large battleships was typically 300 to 460 mm (12 to 18 in). The turrets carrying three 460 mm guns of Yamato each weighed around 2,500 tonnes. The secondary armament of battleships (or the primary armament of light cruisers) was typically between 127 and 152 mm (5.0 and 6.0 in). Smaller ships typically mounted guns of 76 mm (3.0 in) and larger, although these rarely required a turret mounting, except for large destroyers, like the American Fletcher and the German Narvik classes. Warships: Layout In naval terms, turret traditionally and specifically refers to a gun mounting where the entire mass rotates as one, and has a trunk that projects below the deck. The rotating part of a turret seen above deck is the gunhouse, which protects the mechanism and crew, and is where the guns are loaded. The gunhouse is supported on a bed of rotating rollers, and is not necessarily physically attached to the ship at the base of the rotating structure. In the case of the German battleship Bismarck, the turrets were not vertically restrained and fell out when she sank. The British battlecruiser Hood, like some American battleships, did have vertical restraints.Below the gunhouse there may be a working chamber, where ammunition is handled, and the main trunk, which accommodates the shell and propellant hoists that bring ammunition up from the magazines below. There may be a combined hoist (cf the animated British turret) or separate hoists (cf the US turret cutaway). The working chamber and trunk rotate with the gunhouse, and sit inside a protective armoured barbette. The barbette extends down to the main armoured deck (red in the animation). At the base of the turret sit handing rooms, where shell and propelling charges are passed from the shell room and magazine to the hoists. Warships: The handling equipment and hoists are complex arrangements of machinery that transport the shells and charges from the magazine into the base of the turret. Bearing in mind that shells can weigh around a ton, the hoists have to be powerful and rapid; a 15-inch turret of the type in the animation was expected to perform a complete loading and firing cycle in a minute. Warships: The loading system is fitted with a series of mechanical interlocks that ensure that there is never an open path from the gunhouse to the magazine down which an explosive flash might pass. Flash-tight doors and scuttles open and close to allow the passage between areas of the turret. Generally, with large-calibre guns, powered or assisted ramming is required to force the heavy shell and charge into the breech. Warships: As the hoist and breech must be aligned for ramming to occur, there is generally a restricted range of elevations at which the guns can be loaded; the guns return to the loading elevation, are loaded, then return to the target elevation, at which time they are said to be "in battery". The animation illustrates a turret where the rammer is fixed to the cradle that carries the guns, allowing loading to occur across a wider range of elevations. 
Warships: Earlier turrets differed significantly in their operating principles. It was not until the last of the "rotating drum" designs described in the previous section were phased out that the "hooded barbette" arrangement above became the standard. Wing turrets A wing turret is a gun turret mounted along the side, or the wings, of a warship, off the centerline. Warships: The positioning of a wing turret limits the gun's arc of fire, so that it generally can contribute to only the broadside weight of fire on one side of the ship. This is the major weakness of wing turrets as broadsides were the most prevalent type of gunnery duels. Depending on the configurations of ships, such as HMS Dreadnought but not SMS Blücher, the wing turrets could fire fore and aft, so this somewhat reduced the danger when an opponent crossed the T enabling it to fire a full broadside. Warships: Attempts were made to mount turrets en echelon so that they could fire on either beam, such as the Invincible-class and SMS Von der Tann battlecruisers, but this tended to cause great damage to the ships' deck from the muzzle blast. Warships: Wing turrets were commonplace on capital ships and cruisers during the late 19th century up until the 1910s. In pre-dreadnought battleships, the wing turret contributed to the secondary battery of sub-calibre weapons. In large armoured cruisers, wing turrets contributed to the main battery, although the casemate mounting was more common. At the time, large numbers of smaller calibre guns contributing to the broadside were thought to be of great value in demolishing a ship's upperworks and secondary armaments, as distances of battle were limited by fire control and weapon performance. Warships: In the early 1900s, weapon performance, armour quality and vessel speeds generally increased along with the distances of engagement; the utility of large secondary batteries reducing as a consequence, and in addition at extreme range it was impossible to see the fall of lesser weapons and so correct the aim. Therefore, most early dreadnought battleships featured "all big gun" armaments of identical calibre, typically 11 or 12 inches, some of which were mounted in wing turrets. This arrangement was not satisfactory, however, as the wing turrets not only had a reduced fire arc for broadsides, but also because the weight of the guns put great strain on the hull and it was increasingly difficult to properly armour them. Warships: Larger and later dreadnought battleships carried superimposed or superfiring turrets (i.e. one turret mounted higher than and firing over those in front of and below it). This allowed all turrets to train on either beam, and increased the weight of fire forward and aft. The superfiring or superimposed arrangement had not been proven until after South Carolina went to sea, and it was initially feared that the weakness of the previous Virginia-class ship's stacked turrets would repeat itself. Larger and later guns (such as the US Navy's ultimate big gun design, the 16"/50 Mark 7) also could not be shipped in wing turrets, as the strain on the hull would have been too great. Warships: Modern turrets Many modern surface warships have mountings for large calibre guns, although the calibres are now generally between 3 and 5 inches (76 and 127 mm). The gunhouses are often just weatherproof covers for the gun mounting equipment and are made of light un-armoured materials such as glass-reinforced plastic. 
Modern turrets are often automatic in their operation, with no humans working inside them and only a small team passing fixed ammunition into the feed system. Smaller calibre weapons often operate on the autocannon principle, and indeed may not even be turrets at all; they may just be bolted directly to the deck. Warships: Turret identification On board warships, each turret is given an identification. In the British Royal Navy, these would be letters: "A" and "B" were for the turrets from the front of the ship backwards in front of the bridge, and letters near the end of the alphabet (i.e., "X", "Y", etc.) were for turrets behind the bridge, "Y" being the rearmost. Mountings in the middle of the ship would be "P", "Q", "R", etc. Confusingly, the Dido-class cruisers had a "Q" and the Nelson-class battleships had an "X" turret in what would logically be "C" position; the latter being mounted at the main deck level in front of the bridge and behind the "B" turret, thus having restricted training fore and aft. Secondary turrets were named "P" and "S" (port and starboard) and numbered from fore to aft, e.g. P1 being the forward port turret. There were exceptions; the battleship HMS Agincourt had the uniquely large number of seven turrets. These were numbered "1" to "7" but were unofficially nicknamed "Sunday", "Monday", etc. through to "Saturday". In German use, turrets were generally named "A", "B", "C", "D", "E", going from bow to stern. Usually the radio alphabet was used in naming the turrets (e.g. "Anton", "Bruno" or "Berta", "Caesar", "Dora") as on the German battleship Bismarck. In the United States Navy, main battery turrets are numbered fore to aft. Secondary gun mounts are numbered by gun muzzle diameter in inches followed by a second digit indicating the position of the mount, with the second digit increasing fore to aft. Gun mounts not on the centerline would be assigned odd numbers on the port side and even numbers on the starboard side. For example, "Mount 52" would be the forwardmost 5-inch gun mount on the starboard side of the ship. Aircraft: History During World War I, air gunners initially operated guns that were mounted on pedestals or swivel mounts known as pintles. The latter evolved into the Scarff ring, a rotating ring mount which allowed the gun to be turned to any direction with the gunner remaining directly behind it, the weapon held in an intermediate elevation by bungee cord, a simple and effective mounting for single weapons such as the Lewis Gun though less handy when twin mounted as with the British Bristol F.2 Fighter and German "CL"-class two-seaters such as the Halberstadt and Hannover-designed series of compact two-seat combat aircraft. In a failed 1916 experiment, a variant of the SPAD S.A two-seat fighter was probably the first aircraft to be fitted with a remotely-controlled gun, which was located in a nose nacelle.
However, unlike its predecessors, the Overstrand could fly at 140 mph (230 km/h) making operating the exposed gun positions difficult, particularly in the aircraft's nose. To overcome this problem, the Overstrand was fitted with an enclosed and powered nose turret, mounting a Lewis gun. Rotation was handled by pneumatic motors while elevation and depression of the gun used hydraulic rams. The pilot's cockpit was also enclosed but the dorsal (upper) and ventral (belly) gun positions remained open, though shielded. Aircraft: The Martin B-10 all-metal monocoque monoplane bomber introduced turret-mounted defensive armament within the United States Army Air Corps, almost simultaneously with the RAF's Overstrand biplane bomber design. The Martin XB-10 prototype aircraft first featured the nose turret in June 1932—roughly a year before the less advanced Overstrand airframe design—and was first produced as the YB-10 service test version by November 1933. The production B-10B version started service with the USAAC in July 1935. Aircraft: In time the number of turrets carried and the number of guns mounted increased. RAF heavy bombers of World War II such as the Handley Page Halifax (until its Mk II Series I (Special) version omitted the nose turret), Short Stirling and Avro Lancaster typically had three powered turrets: rear, mid-upper and nose. (Early in the war, some British heavy bombers also featured a retractable, remotely-operated ventral/mid-under turret). The rear turret mounted the heaviest armament: four 0.303 inch Browning machine guns or, late in the war, two AN/M2 light-barrel versions of the US Browning M2 machine gun as in the Rose-Rice turret. The tail gunner or "Tail End Charlie" position was generally accepted to be the most dangerous assignment. During the war, British turrets were largely self-contained units, manufactured by Boulton Paul Aircraft and Nash & Thompson. The same model of turret might be fitted to several different aircraft types. Some models included gun-laying radar that could lead the target and compensate for bullet drop. Aircraft: As almost a 1930s "updated" adaptation of the First World War Bristol F.2b concept, the UK introduced the concept of the "turret fighter", with aeroplanes such as the Boulton Paul Defiant and Blackburn Roc where the armament (four 0.303 inch) machine-guns was in a turret mounted behind the pilot, rather than in fixed positions in the wings. The Defiant and Roc possessed no fixed, forward-firing guns; the Bristol F.2 was designed with one synchronized Vickers machine gun firing forward on a fuselage mount. Aircraft: The concept came at a time when the standard armament of a fighter was only two machine guns and in the face of heavily armed bombers operating in formation, it was thought that a group of turret fighters would be able to concentrate their fire flexibly on the bombers; making beam, stern and rising attacks practicable. Although the idea had some merits in attacking unescorted bombers the weight and drag penalty of the turret (and gunner) put them at a disadvantage when Germany was able to escort its bombers with fighters from bases in Northern France. By this point British fighters were flying with eight machine guns which concentrated firepower for use in single fleeting attacks of fighters against bombers. 
Aircraft: Attempts to put this heavier armament, such as multiple 20 mm cannon, in low-profile aerodynamic turrets were explored by the British in the Boulton Paul P.92 and other designs but were not successful; this class of weapons and heavier armament (up to and including artillery pieces, as in the 1,420 examples produced of the American B-25G and B-25H Mitchell medium bombers and the experimental 'Tsetse' variant of the de Havilland Mosquito) remained exclusively fuselage- or underwing-mounted and thus aimed by pointing the aircraft. Not all turret designs put the gunner in the turret along with the armament: US and German-designed aircraft both featured remote-controlled turrets. Aircraft: In the US, the large, purpose-built Northrop P-61 Black Widow night fighter was produced with a remotely operated dorsal turret that had a wide range of fire, though in practice it was generally fired directly forward under control of the pilot. For the last Douglas-built production blocks of the B-17F (the "B-17F-xx-DL" designated blocks) and for all versions of the B-17G Flying Fortress, a twin-gun remotely operated "chin" turret, designed by Bendix and first used on the experimental YB-40 "gunship" version of the Fortress, was added to give more forward defence. Specifically designed to be compact and not obstruct the bombardier, this was operated by a swing-away diagonal column possessing a yoke to traverse the turret, and aimed by a reflector sight mounted in the windscreen. Aircraft: The intended replacement for the German Bf 110 heavy fighter, the Messerschmitt Me 210, possessed twin half-teardrop-shaped, remotely operated Ferngerichtete Drehringseitenlafette (Remote rotating side mount) FDSL 131/1B turrets, one on each side "flank" of the rear fuselage to defend the rear of the aircraft, controlled from the rear area of the cockpit. By 1942, the German He 177A Greif heavy bomber would feature a Fernbedienbare Drehlafette (Remotely controlled rotary carriage) FDL 131Z remotely operated forward dorsal turret, armed with twin 13mm MG 131 machine guns on the top of the fuselage, which was operated from the astrodome, a hemispherical, clear rotating observation cover just behind the cockpit glazing and offset to starboard atop the fuselage; a second, manned, powered Hydraulische Drehlafette (Hydraulic rotary mount) HDL 131 dorsal turret, further aft on the fuselage with an MG 131, was also used on most examples. Aircraft: The US B-29 Superfortress had four remotely controlled turrets, comprising two dorsal and two ventral turrets. These were controlled from a trio of hemispherical, glazed, gunner-manned "astrodome" sighting stations operated from the pressurised sections in the nose and middle of the aircraft, each housing an altazimuth mounted pivoting gunsight to aim one or more of the unmanned remote turrets as needed, in addition to a B-17 style flexible manned tail gunner's station. Aircraft: The defensive turret on bombers fell from favour with the realization that bombers could not attempt heavily defended targets without escort, regardless of their defensive armament, unless very high loss rates were acceptable, and that the performance penalty from the weight and drag of turrets reduced speed, range and payload and increased the number of crew required. The de Havilland Mosquito light bomber was designed to operate without any defensive armament and used its speed to avoid engagement with fighters, much as the minimally armed German Schnellbomber aircraft concepts had been meant to do early in World War II.
Aircraft: A small number of aircraft continued to use turrets, in particular maritime patrol aircraft such as the Avro Shackleton used one as an offensive weapon against small un-armoured surface targets. The Boeing B-52 jet bomber and many of its contemporaries (particularly Russian) featured a barbette (a British English term equivalent to the American usage of the term 'tail gun'), or a "remote turret"—an unmanned turret but often one with a more limited field of fire than a manned equivalent. Aircraft: Layout Aircraft carry their turrets in various locations: "dorsal" – on top of the fuselage, sometimes referred to as a mid-upper turret. "ventral" – underneath the fuselage, often on US heavy bombers, a Sperry-designed ball turret. "rear" or "tail" – at the very end of the fuselage. "nose" – at the front of the fuselage. "cheek" – on the flanks of the nose, as single-gun flexible defensive mounts for B-17 and B-24 heavy bombers "chin" – below the nose of the aircraft as on later versions of the Boeing B-17 Flying Fortress. "wing" – a handful of very large aircraft, such as the Messerschmitt Me 323 and the Blohm & Voss BV 222, had manned turrets in the wings "waist" or "beam" – mounted on the sides of the rear fuselage e.g. US twin- and four-engined bombers. Gallery Combat vehicles: History Amongst the first armoured vehicles to be equipped with a gun turret were the Lanchester and Rolls-Royce Armoured Cars, both produced from 1914. The Royal Naval Air Service (RNAS) raised the first British armoured car squadron during the First World War. In September 1914 all available Rolls-Royce Silver Ghost chassis were requisitioned to form the basis for the new armoured car. The following month a special committee of the Admiralty Air Department, among whom was Flight Commander T.G. Hetherington, designed the superstructure which consisted of armoured bodywork and a single fully rotating turret holding a regular water cooled Vickers machine gun. Combat vehicles: However, the first tracked combat vehicles were not equipped with turrets due to the problems with getting sufficient trench crossing while keeping the centre of gravity low, and it was not until late in World War I that the French Renault FT light tank introduced the single fully rotating turret carrying the vehicle's main armament that continues to be the standard of almost every modern main battle tank and many post-World War II self-propelled guns. The first turret designed for the FT was a circular, cast steel version almost identical to that of the prototype. It was designed to carry a Hotchkiss 8mm machine gun. Meanwhile, the Berliet Company produced a new design, a polygonal turret of riveted plate, which was simpler to produce than the early cast steel turret. It was given the name "omnibus", since it could easily be adapted to mount either the Hotchkiss machine gun or the Puteaux 37mm with its telescopic sight. This turret was fitted to production models in large numbers. Combat vehicles: In the 1930s, several nations produced multi-turreted tanks—probably influenced by the experimental British Vickers A1E1 Independent of 1926. Those that saw combat during the early part of World War II performed poorly and the concept was soon dropped. 
Combat vehicles without turrets, with the main armament mounted in the hull, or more often in a completely enclosed, integral armored casemate as part of the main hull, saw extensive use by both the German (as Sturmgeschütz and Jagdpanzer vehicles) and Soviet (as Samokhodnaya Ustanovka vehicles) armored forces during World War II as tank destroyers and assault guns. However, post-war, the concept fell out of favour due to its limitations, with the Swedish Stridsvagn 103 'S-Tank' and the German Kanonenjagdpanzer being exceptions. Combat vehicles: Layout In modern tanks, the turret is armoured for crew protection and rotates a full 360 degrees carrying a single large-calibre tank gun, typically in the range of 105 mm to 125 mm calibre. Machine guns may be mounted inside the turret, which on modern tanks is often on a "coaxial" mount, parallel with the larger main gun. Combat vehicles: Early designs often featured multiple weapons mounts. This concept was carried forwards into the early interwar years in Britain, Germany and the Soviet Union, arguably reaching its most absurd expression in the British Vickers A1E1 Independent tank, though this attempt was soon abandoned while the Soviet Union's similar effort produced a 'land battleship' which was actually produced and fought in defence of the Soviet Union. Combat vehicles: In modern tanks, the turret houses all the crew except the driver (who is located in the hull). The crew located in the turret typically consist of tank commander, gunner, and often a gun loader (except in tanks that have an autoloader), while the driver sits in a separate compartment with a dedicated entry and exit, though often one that allows the driver to exit via the turret basket (fighting compartment). Combat vehicles: For other combat vehicles, the turrets are equipped with other weapons dependent on role. An infantry fighting vehicle may carry a smaller calibre gun or an autocannon, or an anti-tank missile launcher, or a combination of weapons. A modern self-propelled gun mounts a large artillery gun but less armour. Lighter vehicles may carry a one-man turret with a single machine gun, occasionally the same model being shared with other classes of vehicle, such as the Cadillac Gage T50 turret/weapons station. Combat vehicles: The size of the turret is a factor in combat vehicle design. One dimension mentioned in terms of turret design is "turret ring diameter" which is the size of the aperture in the top of the chassis into which the turret is seated. Land fortifications: In 1859, the Royal Commission on the Defence of the United Kingdom were in the process of recommending a huge programme of fortifications to protect Britain's naval bases. They interviewed Captain Coles, who had bombarded Russian fortifications during the Crimean War, however Coles repeatedly lost his temper during the discussion and the commissioners failed to ask him about the gun turret that he had patented earlier in that year, with the result that none of the Palmerston Forts mounted turrets. Eventually, the Admiralty Pier Turret at Dover was commissioned in 1877 and completed in 1882. Land fortifications: In continental Europe, the invention of high explosive shells in 1885 threatened to make all existing fortifications obsolete; a partial solution was the protection of fortress guns in armoured turrets. Pioneering designs were produced by Commandant Henri-Louis-Philippe Mougin in France and Captain Maximilian Schumann in Germany. 
Mougin's designs were incorporated in a new generation of polygonal forts constructed by Raymond Adolphe Séré de Rivières in France and Henri Alexis Brialmont in Belgium. Developed versions of Schumann's turrets were employed after his death in the fortifications of Metz. In 1914, the Brialmont forts in the Battle of Liège proved unequal to the German "Big Bertha" 42 cm siege howitzers, which were able to penetrate the turret armour and smash turret mountings. Land fortifications: Elsewhere, armoured turrets, sometimes described as cupolas, were incorporated into coastal artillery defences. An extreme example was Fort Drum, the "concrete battleship", near Corregidor, Philippines; this mounted four huge 14-inch guns in two naval pattern turrets and was the only permanent turreted fort ever constructed by the United States. Between the wars, improved turrets formed the offensive armament of the Maginot Line forts in France. During the Second World War, some of the artillery pieces in the Atlantic Wall fortifications, such as the Cross-Channel guns, were large naval guns housed in turrets. Land fortifications: Some nations, from Albania to Switzerland and Austria, have embedded the turrets of obsolete tanks in concrete bunkers, while others have constructed or updated fortifications with modern artillery systems, such as the 1970s-era Swedish coastal artillery battery on Landsort Island.
**Particle radiation** Particle radiation: Particle radiation is the radiation of energy by means of fast-moving subatomic particles. Particle radiation is referred to as a particle beam if the particles are all moving in the same direction, similar to a light beam. Due to the wave–particle duality, all moving particles also have wave character. Higher energy particles more easily exhibit particle characteristics, while lower energy particles more easily exhibit wave characteristics. Types and production: Particles can be electrically charged or uncharged. Particle radiation can be emitted by an unstable atomic nucleus (via radioactive decay), or it can be produced from some other kind of nuclear reaction. Many types of particles may be emitted: protons and other hydrogen nuclei stripped of their electrons; positively charged alpha particles (α), equivalent to a helium-4 nucleus; helium ions at high energy levels; HZE ions, which are nuclei heavier than helium; positively or negatively charged beta particles (high-energy positrons β+ or electrons β−, the latter being more common); high-speed electrons that are not from the beta decay process but from other processes such as internal conversion and the Auger effect; neutrons, subatomic particles which have no charge (neutron radiation); neutrinos; mesons; and muons. Mechanisms that produce particle radiation include alpha decay, the Auger effect, beta decay, cluster decay, internal conversion, neutron emission, nuclear fission and spontaneous fission, nuclear fusion, particle colliders in which streams of high energy particles are smashed, proton emission, solar flares, solar particle events, and supernova explosions. Additionally, galactic cosmic rays include these particles, but many are from unknown mechanisms. Charged particles (electrons, mesons, protons, alpha particles, heavier HZE ions, etc.) can be produced by particle accelerators. Ion irradiation is widely used in the semiconductor industry to introduce dopants into materials, a method known as ion implantation. Types and production: Particle accelerators can also produce neutrino beams. Neutron beams are mostly produced by nuclear reactors. Passage through matter: In radiation protection, radiation is often separated into two categories, ionizing and non-ionizing, to denote the level of danger posed to humans. Ionization is the process of removing electrons from atoms, leaving two electrically charged particles (an electron and a positively charged ion) behind. The negatively charged electrons and positively charged ions created by ionizing radiation may cause damage in living tissue. Basically, a particle is ionizing if its energy is higher than the ionization energy of a typical substance, i.e., a few eV, and interacts with electrons significantly. Passage through matter: According to the International Commission on Non-Ionizing Radiation Protection, electromagnetic radiations from ultraviolet to infrared, to radiofrequency (including microwave) radiation, static and time-varying electric and magnetic fields, and ultrasound belong to the non-ionizing radiations. The charged particles mentioned above all belong to the ionizing radiations. When passing through matter, they ionize and thus lose energy in many small steps. The distance to the point where the charged particle has lost all its energy is called the range of the particle. The range depends upon the type of particle, its initial energy, and the material it traverses.
Similarly, the energy loss per unit path length, the 'stopping power', depends on the type and energy of the charged particle and upon the material. The stopping power and hence, the density of ionization, usually increases toward the end of range and reaches a maximum, the Bragg Peak, shortly before the energy drops to zero.
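The link between stopping power, range and the Bragg peak can be illustrated numerically. The sketch below (Python/NumPy) assumes, purely for illustration, a stopping power of the form S(E) = k·E^(-0.8); this is not the Bethe formula, only a stand-in that mimics the qualitative behaviour that slower charged particles lose energy faster. The range then follows by integrating dE/S(E) from zero up to the initial energy.

```python
import numpy as np

def stopping_power(E, k=1.0):
    """Illustrative stopping power S(E) = k * E**-0.8 (arbitrary units).

    NOT the Bethe formula; it only captures the qualitative trend that
    energy loss per unit path length rises as the particle slows down,
    which is what produces the Bragg peak near the end of the range.
    """
    return k * E ** -0.8

def csda_range(E0, n=10_000):
    """Range as the integral of dE / S(E) from (nearly) 0 up to E0."""
    E = np.linspace(1e-9, E0, n)
    integrand = 1.0 / stopping_power(E)
    dE = E[1] - E[0]
    return float(np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dE)

for E0 in (1.0, 2.0, 4.0):
    print(E0, round(csda_range(E0), 3))   # range grows faster than linearly with energy
```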
**Kerbside collection** Kerbside collection: Kerbside collection or curbside collection is a service provided to households, typically in urban and suburban areas, of collecting and disposing of household waste and recyclables. It is usually accomplished by personnel using specially built vehicles to pick up household waste in containers that are acceptable to, or prescribed by, the municipality and are placed on the kerb. History: Before the 20th century, the amount of waste produced by a household was relatively small. Household waste was often simply thrown out of an open window, buried in the garden or deposited in outhouses (see more at urban archaeology). When human concentrations became more dense, waste collectors, called nightmen or gong farmers were hired to collect the night soil from pail closets, performing their duties only at night (hence the name). Meanwhile, disposing of refuse became a problem wherever cities grew. Often refuse was placed in unusable areas just outside the city, such as wetlands and tidal zones. One example is London, which from Roman times disposed of its refuse outside the London Wall beside the River Thames. Another example is 1830s Manhattan, where thousands of hogs were permitted to roam the streets and eat garbage. A small industry developed as "swill children" collected kitchen refuse to sell for pig feed and the rag and bone man traded goods for bones (used for glue) and rags (essential for paper manufacture prior to the invention of wood pulping). Later, in the late nineteenth century, trash was fed to swine in industrial. History: As sanitation engineering came to be practised beginning in the mid-19th century and human waste was conveyed from the home in pipes, the gong farmer was replaced by the municipal rubbish collector as there remained growing amounts of household refuse, including fly ash from coal, which was burnt for home heating. In Paris, the rag and bone man worked side by side with the municipal bin man, though reluctantly: in 1884, Eugène Poubelle introduced the first integrated kerbside collection and recycling system, requiring residents to separate their waste into perishable items, paper and cloth, and crockery and shells. He also established rules for how private collectors and city workers should cooperate and he developed standard dimensions for refuse containers: his name in France is now synonymous with the garbage can. Under Poubelle, food waste and other organics collected in Paris were transported to nearby Saint Ouen where they were composted. This continued well into the 20th century when plastics began to contaminate the waste stream.From the late-19th century to the mid-20th century, more or less consistent with the rise of consumables and disposable products municipalities began to pass anti-dumping ordinances and introduce kerbside collection. Residents were required to use a variety of refuse containers to facilitate kerbside collection but the main type was a variation of Poubelle's metal garbage container. It was not until the late 1960s that the green bin bag was introduced by Glad. Later, as waste management practices were introduced with the aim of reducing landfill impacts, a range of container types, mostly made of durable plastic, came to be introduced to facilitate the proper diversion of the waste stream. Such containers include blue boxes, green bins and wheelie bins or dumpsters. 
Over time, waste collection vehicles gradually increased in size from the hand pushed tip cart or English dust cart, a name by which these vehicles are still known, to large compactor trucks. Waste management and resource recovery: Kerbside collection is today often referred to as a strategy of local authorities to collect recyclable items from the consumer. Kerbside collection is considered a low-risk strategy to reduce waste volumes and increase the recycling rates. Recyclable materials are typically collected in large wheelie bins, plastic bags, or small open, coloured plastic boxes, specifically designated for content. Waste management and resource recovery: Recyclable materials that may be separately collected from municipal waste include the biodegradable waste component (green waste, kitchen and food waste, and Christmas trees); recyclable materials, depending on location (office paper, newsprint, paperboard, cardboard, corrugated fiberboard, plastics (#1 PET, #2 HDPE natural and coloured, #3 PVC narrow-necked containers, #4 LDPE, #5 PP, #6 polystyrene (but not expanded polystyrene), #7 other mixed resin plastics), glass, copper, aluminium, and steel and tinplate); and co-mingled recyclables, which can be sorted by a clean materials recovery facility. Kerbside collection of recyclable resources aims to recover purer waste streams with higher market value than other collection methods. If the household residents incorrectly separate the recyclable materials, or put the wrong items in the recycling bin, the whole vehicle load of recycling will have to be rejected and sent to landfill or incineration if it is deemed to be contaminated. Waste management and resource recovery: Kerbside collection and household recycling schemes are also being used as tools by many local authorities to increase the public's awareness of their waste production. New and emerging waste treatment technologies such as mechanical biological treatment may offer an alternative to kerbside collection through automated separation of waste in recycling factories. Recycling variants Kerbside collection encompasses many subtly different systems, which differ mostly on where in the process the recyclates are sorted and cleaned. The main categories are 1) mixed waste collection, 2) commingled recyclables, and 3) source separation. A waste collection vehicle generally picks up the waste. Source separation used to be the preferred method due to the high cost of sorting commingled (mixed waste) collection. However, advances in sorting technology have substantially lowered this overhead, and many areas that had developed source separation programs have switched to what is called co-mingled collection. Usage by country: Australia Residential kerbside collection is carried out by local governments, with some exceptions, e.g. some large apartment complexes may have their own separate arrangements with commercial providers. Available services and details vary from council to council. Councils generally provide residents with wheelie bins for kerbside collection of normal waste which is collected weekly or fortnightly. Many councils also have less frequent kerbside collection of bulkier waste ("hard rubbish") collected once or twice a year. Usage by country: Councils provide their residents with two or three wheelie bins, depending on the council, with some councils having different options for different properties.
The two-bin system consists of a recycling bin (usually 240 litre) for co-mingled recyclables, and a general waste bin which is often smaller (e.g. 140 litre, 120 litre or 80 litre). The three-bin system consists of the above two bins plus a green waste bin (usually 240 litre). Not all councils have a green waste bin collection service. Many councils provide the option of larger bins, smaller bins, or additional bins. Usage by country: A wide variety of hard plastics, glass bottles and jars, steel cans, aluminium cans, paper and cardboard can be put in the recycling bin. The green waste bin can be used for garden organics (e.g. small branches, leaves, grass clippings), and councils are increasingly allowing food scraps, used paper towels and tissues and other biodegradable organics to be placed in the green waste bin. The council may turn the green waste into mulch (garden waste collection only) or compost and extract energy (food organics and garden organics). Details of what can and cannot be placed into each bin vary by council. Usage by country: Most councils follow a standard colour scheme for their wheelie bins, specified in Australian standard AS4123. According to the standard, general waste bins have a red lid, recycling bins have a yellow lid, green waste bins have a lime green lid, and all these bins have a dark green or black body. Not all councils follow this colour scheme. For example, recycling bins in some councils have a blue body and yellow lid. Usage by country: Bins are emptied according to one of several patterns. Generally speaking, general waste bins are emptied weekly while recycling bins and green waste bins are emptied fortnightly on alternate weeks. Many councils with food waste recycling have switched to emptying green waste bins weekly and general waste bins and recycling bins fortnightly on alternate weeks. Some councils empty recycling bins weekly, while others do so only during a certain period like the Christmas and summer holiday period, switching to fortnightly at other times. Usage by country: Recycling bins are provided at no additional cost, while the general waste bin is either at no additional cost or at an annual cost. The green waste bin, where available, is either provided to all residents, or available as an option to residents, either at an additional annual cost, a one-off cost or no additional cost, depending on the council. Some councils limit the availability of green waste bins (e.g. the City of Cockburn limits them to properties over a certain land size). Many councils provide the option of larger bins than the standard ones provided (even larger than 240 litres in some cases) or additional bins at additional annual cost. Some provide the option of a smaller general waste bin at a reduced cost. Usage by country: Many councils also have kerbside collection of bulky waste. There may be different kinds of collection, e.g. large branches, e-waste (e.g. TVs, computers) which the council may recycle, and hard rubbish (anything else too big or too heavy for the wheelie bin). For bulky waste, residents are asked to place items directly on the kerbside. There may be other rules, e.g. what can and cannot be collected, limits on the amount of rubbish that will be collected, etc. that vary from council to council. Collections may occur once or a few times a year on specific dates or date ranges, or on demand with a limit to the number of times per year, depending on the council. Usage by country: Austria Kerbside collection is universal in Austria.
The service is provided by the municipality. A fee applies for non-recyclable general waste, while recyclables are collected for free, being mainly financed by companies selling packaged goods via a mandatory fee. Different waste containers are used for general waste (black), paper (red), plastics (yellow), organic waste (green or brown), metal (blue) and glass (white for clear glass, green for coloured glass). In some rural areas, appropriately coloured plastic bags are used instead of bins. In many areas, a collection service for Christmas trees is provided in early January. Usage by country: Canada Calgary, Alberta has adopted "Curbside" Recycling and uses blue bins. The blue cart programme accepts all types of recyclables, including plastics 1–7. It is picked up weekly at a cost of $8.00 per month. This programme is mandatory. Usage by country: In 1981, Resource Integration Systems (RIS), in collaboration with Laidlaw International, tested the first blue box recycling system on 1500 homes in Kitchener, Ontario. Due to the success of the project, the City of Kitchener put out a contract for public bid in 1984 for a recycling system citywide. Laidlaw won the bid and continued with the popular blue box recycling system. Today hundreds of cities around the world use the blue box system or a similar variation. Many Canadian municipalities use "green bins" for curbside recycling. Others, such as Moncton, use wet/dry waste separation and recovery programmes. Usage by country: New Zealand In New Zealand, kerbside collection of general refuse and recycling, and in some areas organic waste, is the responsibility of the local city or district council, or private contractors. Practices and collection methods vary widely from council to council and company to company. Some examples of collection are: Auckland Council: Two 240-litre wheelie bins are supplied: a red-lidded bin for general refuse, collected weekly, and a blue-lidded bin for recyclables, collected fortnightly. Usage by country: Christchurch City Council: Three wheelie bins are supplied: a 140-litre red-lidded bin for general refuse, a 240-litre yellow-lidded bin for recyclables, and an 80-litre green-lidded bin for organic waste. The organic waste bins are collected weekly, while the recyclables and general refuse bins are collected on alternating weeks. Hamilton City Council and Hutt City Council: A 45-litre bin is supplied for recyclables, collected weekly. General refuse is collected weekly using user-pays official council bags. Dunedin City Council, Palmerston North City Council and Wellington City Council: Two bins are supplied: a 45-litre or 70-litre bin for glass, and an 80-litre or 240-litre wheelie bin for non-glass recyclables. These two bins are collected on alternating weeks. General refuse is collected weekly using user-pays official council bags. Rodney District Council: A 45-litre bin is supplied for recyclables, collected weekly. There is no council collection of general waste, and all general waste collection is carried out by independent companies. Taupō District Council: A 45-litre bin is supplied for recyclables, collected weekly. General refuse is collected weekly using a user-pays system of orange tags - one orange tag is to be placed on a standard rubbish bag up to 60 litres capacity, or half an orange sticker can be placed on two supermarket bags tied together.
Upper Hutt City Council: Recycling is to be placed in plastic bags, with paper and cardboard collected in the first week, and plastic, metal and glass in the second week. General refuse is collected weekly using user-pays official council bags. Usage by country: Waitakere City Council: A 140-litre wheelie bin is provided for recyclables, collected fortnightly. General refuse is collected weekly using user-pays official council bags. By 1996 the New Zealand cities of Auckland, Waitakere, North Shore and Lower Hutt had kerbside recycling bins available. In New Plymouth, Wanganui and Upper Hutt recyclable material was collected if placed in suitable bags. By 2007, 73% of New Zealanders had access to kerbside recycling. Kerbside collection of organic waste is carried out by the Mackenzie District Council and the Timaru District Council. Christchurch City Council is introducing the system into its kerbside collection. Other councils are carrying out trials. Usage by country: United Kingdom In the United Kingdom, the Household Waste Recycling Act 2003 requires local authorities to provide every household with a separate collection of at least two types of recyclable materials by 2010. Usage by country: There has been criticism of the differences between schemes used across the country, such as the colour of bins and whether the recycling is collected from wheelie bins, coloured plastic boxes or plastic bags, as well as of the fact that the bins, boxes and bags obstruct roads and pavements, and that the additional collection vehicles and waste collection services needed contribute to traffic congestion and produce carbon dioxide emissions. Some find the colour differences confusing, and some people want a national scheme. A typical example is to compare two neighbouring councils in Greater Manchester; Bury Council and Salford City Council. Bury uses blue for cans, plastic and glass, green for paper and cardboard and brown for garden waste. Salford uses blue for paper and card, brown for cans, plastic and glass, and pink for garden waste. Most councils use grey or black bins for general waste, with a few exceptions such as Liverpool, which uses a purple bin for general waste, a colour that is used by no other council. Usage by country: Another controversial issue in the UK is the frequency of the waste collections. To save money, many councils are reducing the frequency of both general waste and recycling collections. This has caused problems for larger households, and has led to an increase in overflowing bins and fly-tipping. For example, previously, Bury Council collected general waste once a week and recyclables fortnightly. This has now changed to fortnightly collection of general waste and monthly (every four weeks) collection of recyclables. Usage by country: A few councils are using "forced" recycling, by replacing the large, 240 litre general waste bin with a smaller 180 litre or 140 litre bin, and using the old 240 litre one for recyclables. This may be made worse by fortnightly collections of the "small" bin, and strict rules such as "No extra waste will be collected" and "Bin lids must be fully closed". Stockport Council is a notable user of this scheme. Their recycling rates have substantially increased as a result, but there are usually complaints from household residents. Trafford Council also use a similar scheme, but the small grey bin is emptied every week.
In addition, the two named councils, among others, collect food waste together with garden waste, by sending out kitchen caddies and compostable bin liners. These prevent food waste (including meat and fish) from going to landfill or incineration, and help to increase the council's recycling rate. The food and garden waste is usually collected weekly or fortnightly, and is taken to an in-vessel composting facility or an anaerobic digestion plant, where the biodegradable waste is organically recycled into soil fertiliser to be used on local farms. In North West England, all the glass collected for recycling is used within the UK, and around half of the plastics and cans are used in the UK; the rest is sent further afield, to continental Europe or China, to be made into new products. Paper and cardboard collected is sent to local paper mills to be reprocessed into newspapers, tissues and paper towels, cardboard and office paper. Once again, some of the paper will be sent further afield. Usage by country: Some councils only use three bins, i.e. general waste, food and garden waste and mixed recyclables. This means that a single-stream recycling system is used, so plastics, cans and glass go into the same bin as paper and cardboard. Although this is much easier for the residents, there is more sorting required, and the paper quality is sometimes of a low grade due to food waste or liquid contamination or shards of glass in the paper, and so this scheme has been criticised. Usage by country: Also, most councils require residents to remove all plastic caps and lids from plastic bottles, and thoroughly rinse them out to avoid unpleasant smells or liquid contamination. This is because the caps and lids are made from a different type of plastic (PP) from the bottle (PET/HDPE). If the bottles are squashed down and folded over like toothpaste tubes and the caps are screwed back on, the size and volume of the bottles is greatly reduced, so that more bottles can be contained inside the recycling bins. In fact, many bottlers, especially bottled water companies, have now designed their bottles to be collapsible, though this message has not been effectively disseminated to the consumer. A collapsible bottle takes between 25% and 33% of the space of a non-collapsed bottle. Usage by country: Labels and neck rings, however, can be left on the bottles; they do not need to be removed. This also means that only plastic bottles can be recycled. Many councils are still trying to remind residents that plastic pots, tubs and trays (yoghurts, desserts and spreads), plastic carrier bags, crisp packets and cling film cannot be recycled via the kerbside economically. If too many incorrect, unsuitable or unsafe materials are put into the recycling bin, the whole vehicle load of recycling will have to be rejected and sent directly to landfill or incineration at a high cost. Contamination is normally a problem if recyclables are collected from wheelie bins, as the bin collection workers can only look at the top; there may be a small amount of contamination 'hidden' at the bottom. Councils that use many bags and boxes (such as Edinburgh) suffer from less contamination, but such schemes are more complicated, the loose paper, cardboard and plastic recycling bags can be blown around by the wind, and paper can become wet due to rain or snow, or contaminated with food residue, dirt, oil or grease.
Usage by country: Spain Basque Country In the province of Gipuzkoa, this system is implemented in many towns, such as Usurbil, Hernani, Oiartzun, Antzuola, Legorreta, Itsasondo, Zaldibia, Anoeta, Alegia, Irura, Zizurkil, Astigarraga, Ordizia, Oñati and Lezo, where the commonly used name in Basque is "atez-atekoa", which means door-to-door. Due to the great success in these towns, with more than 80% of the waste recycled, 34 towns in Gipuzkoa are considering setting this system up in 2013, such as Arrasate, Bergara, Aretxabaleta, Eskoriatza, Legazpi, Tolosa or Pasaia. Usage by country: The "atez-ate" system consists of hanging each kind of rubbish on a hanger outside the house on a certain day or days of the week. For example, in Hernani, residents have three days on which to hang out their organic rubbish, two days for plastics and metals, one for paper and one for residual (reject) waste. Usage by country: This system started in the town of Usurbil in 2009, prompted by the regional incinerator for Gipuzkoa that was going to be built in this town, specifically in the neighbourhood of Zubieta. Three years later, the construction of the incinerator was stopped by the regional government, citing the incinerator as a source of contamination and the high cost of the building. Criticism: This type of collection service is subject to criticism: The large (wheelie bin) container encourages the "out of sight" rubbish mentality and invites more rubbish to be disposed of. The bins and collection trucks are not suited to narrow roads or houses with steep driveways or steps. They lock local authorities into capital-intensive equipment programmes and multi-national providers. Co-mingled recyclables are sometimes not successfully managed by automated sorting stations and the rates of diversion are low. In some cases, this results in mountains of unsorted recyclables. Criticism: In the UK especially, some councils are sending out at least four large bins; residents of smaller houses with no gardens have little space to put them, and not everybody lives in a house - some live in blocks of apartments. Many councils use small plastic boxes, bags and lockable outdoor food waste 'caddies', which get blown around and lost, which is bad for recycling participation.
**Send My Bag** Send My Bag: Send My Bag is a luggage shipping company from the UK that aims to allow travellers to transport baggage, claiming a lower cost than airline baggage handling fees. History: Send My Bag was formed after founder Adam Ewart was charged excess baggage fees when helping his girlfriend travel home from university. After paying the fee, Ewart returned home and searched the web for a service which offered to deliver luggage at a lower cost. With nothing found, he decided to start Send My Bag. Setting up just one web page for under £100, he created a simple booking system for the business venture. History: Prior to 2011 Send My Bag primarily existed as a niche luggage shipper for students. From 2011 onwards, their focus broadened as airlines such as Ryanair took additional steps to dissuade passengers from checking bags, and as airline revenue from baggage fees worldwide dramatically increased. The service is available as an iOS app. On 9 September 2012, Ewart appeared on the BBC television program Dragons' Den in search of an investment for his door-to-door baggage service. Following the unsuccessful appearance on Dragons' Den, Send My Bag announced £100k of funding from the investor Lough Shore Investments. On 24 November 2014, on CNBC news, Ewart announced the launch of worldwide services from the US. In 2015 Send My Bag further expanded, launching a US domestic service and worldwide services from Australia. Send My Bag provides 24-hour worldwide support from offices in Bangor, Northern Ireland and New York City. Awards: In celebration of Queen Elizabeth II's 92nd birthday, on 21 April 2018, Send My Bag was announced as a winner of the Queen's Awards for Enterprise. At the time of winning the award, Send My Bag had shipped 250,000 pieces of luggage in the previous 12 months.
**Covariance and correlation** Covariance and correlation: In probability theory and statistics, the mathematical concepts of covariance and correlation are very similar. Both describe the degree to which two random variables or sets of random variables tend to deviate from their expected values in similar ways. Covariance and correlation: If X and Y are two random variables, with means (expected values) μX and μY and standard deviations σX and σY, respectively, then their covariance and correlation are as follows: covariance cov XY=σXY=E[(X−μX)(Y−μY)], correlation corr XY=ρXY=E[(X−μX)(Y−μY)]/(σXσY), so that ρXY=σXY/(σXσY), where E is the expected value operator. Notably, correlation is dimensionless while covariance is in units obtained by multiplying the units of the two variables. If Y always takes on the same values as X, we have the covariance of a variable with itself (i.e. σXX), which is called the variance and is more commonly denoted as σX2, the square of the standard deviation. The correlation of a variable with itself is always 1 (except in the degenerate case where the two variances are zero because X always takes on the same single value, in which case the correlation does not exist since its computation would involve division by 0). More generally, the correlation between two variables is 1 (or –1) if one of them always takes on a value that is given exactly by a linear function of the other with respectively a positive (or negative) slope. Covariance and correlation: Although the values of the theoretical covariances and correlations are linked in the above way, the probability distributions of sample estimates of these quantities are not linked in any simple way and they generally need to be treated separately. Multiple random variables: With any number of random variables in excess of 1, the variables can be stacked into a random vector whose i th element is the i th random variable. Then the variances and covariances can be placed in a covariance matrix, in which the (i, j) element is the covariance between the i th random variable and the j th one. Likewise, the correlations can be placed in a correlation matrix. Time series analysis: In the case of a time series which is stationary in the wide sense, both the means and variances are constant over time (E(Xn+m) = E(Xn) = μX and var(Xn+m) = var(Xn) and likewise for the variable Y). In this case the cross-covariance and cross-correlation are functions of the time difference: cross-covariance σXY(m)=E[(Xn−μX)(Yn+m−μY)], cross-correlation ρXY(m)=E[(Xn−μX)(Yn+m−μY)]/(σXσY). If Y is the same variable as X, the above expressions are called the autocovariance and autocorrelation: autocovariance σXX(m)=E[(Xn−μX)(Xn+m−μX)], autocorrelation ρXX(m)=E[(Xn−μX)(Xn+m−μX)]/(σX2).
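As a small worked example (not from the original article) of how the two quantities relate, take X equal to +1 or −1 with equal probability and let Y = 2X, an exact linear function of X with positive slope; the correlation then comes out to exactly 1, as the text states:

```latex
% Worked example: X = +1 or -1 with probability 1/2 each, and Y = 2X.
\[
\mu_X = 0, \qquad \mu_Y = 2\mu_X = 0,
\]
\[
\sigma_{XY} = E[(X-\mu_X)(Y-\mu_Y)] = E[X \cdot 2X] = 2E[X^2] = 2,
\]
\[
\sigma_X = \sqrt{E[X^2]} = 1, \qquad \sigma_Y = \sqrt{E[(2X)^2]} = 2,
\]
\[
\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y} = \frac{2}{1 \cdot 2} = 1.
\]
```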
**Pointer analysis** Pointer analysis: In computer science, pointer analysis, or points-to analysis, is a static code analysis technique that establishes which pointers, or heap references, can point to which variables, or storage locations. It is often a component of more complex analyses such as escape analysis. A closely related technique is shape analysis. This is the most common colloquial use of the term. In a secondary use, pointer analysis is the collective name for both points-to analysis, defined as above, and alias analysis. Points-to and alias analysis are closely related but not always equivalent problems. Example: For the following example program (a C sketch is given at the end of this overview), a points-to analysis would compute that the points-to set of p is {x, y}. Introduction: As a form of static analysis, fully precise pointer analysis can be shown to be undecidable. Most approaches are sound, but range widely in performance and precision. Many design decisions impact both the precision and performance of an analysis; often (but not always) lower precision yields higher performance. These choices include: Field sensitivity (also known as structure sensitivity): An analysis can either treat each field of a struct or object separately, or merge them. Introduction: Array sensitivity: An array-sensitive pointer analysis models each index in an array separately. Other choices include modelling just the first entry separately and the rest together, or merging all array entries. Context sensitivity or polyvariance: Pointer analyses may qualify points-to information with a summary of the control flow leading to each program point. Flow sensitivity: An analysis can model the impact of intraprocedural control flow on points-to facts. Heap modeling: Run-time allocations may be abstracted by: their allocation sites (the statement or instruction that performs the allocation, e.g., a call to malloc or an object constructor), a more complex model based on a shape analysis, the type of the allocation, or one single allocation (this is called heap-insensitivity). Heap cloning: Heap- and context-sensitive analyses may further qualify each allocation site by a summary of the control flow leading to the instruction or statement performing the allocation. Subset constraints or equality constraints: When propagating points-to facts, different program statements may induce different constraints on a variable's points-to sets. Equality constraints (like those used in Steensgaard's algorithm) can be tracked with a union-find data structure, leading to high performance at the expense of the precision of a subset-constraint-based analysis (e.g., Andersen's algorithm). Context-Insensitive, Flow-Insensitive Algorithms: Pointer analysis algorithms are used to convert collected raw pointer usages (assignments of one pointer to another or assigning a pointer to point to another one) to a useful graph of what each pointer can point to. Steensgaard's algorithm and Andersen's algorithm are common context-insensitive, flow-insensitive algorithms for pointer analysis. They are often used in compilers, and have implementations in the LLVM codebase. Flow-Insensitive Approaches: Many approaches to flow-insensitive pointer analysis can be understood as forms of abstract interpretation, where heap allocations are abstracted by their allocation site (i.e., a program location).
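The example program mentioned above is not reproduced in the text. The following minimal C sketch is hypothetical (variable names are chosen to match the article's p, x and y); it is one program for which a points-to analysis would report that the points-to set of p is {x, y}:

```c
/* Hypothetical example program: after assign() runs, p may hold the address
 * of x or the address of y, so a points-to analysis computes
 * points-to(p) = {x, y}. */
int x, y;
int *p;

void assign(int use_x)
{
    if (use_x)
        p = &x;   /* p may point to x */
    else
        p = &y;   /* p may point to y */
}
```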
Flow-Insensitive Approaches: Many flow-insensitive algorithms are specified in Datalog, including those in the Soot analysis framework for Java. Context-sensitive (but still flow-insensitive) algorithms achieve higher precision, generally at the cost of some performance, by analyzing each procedure several times, once per context. Most analyses use a "context-string" approach, where contexts consist of a list of entries (common choices of context entry include call sites, allocation sites, and types). To ensure termination (and more generally, scalability), such analyses generally use a k-limiting approach, where the context has a fixed maximum size, and the least recently added elements are removed as needed. Three common variants of context-sensitive, flow-insensitive analysis are: Call-site sensitivity Object sensitivity Type sensitivity Call-site sensitivity In call-site sensitivity, the points-to set of each variable (the set of abstract heap allocations each variable could point to) is further qualified by a context consisting of a list of callsites in the program. These contexts abstract the control-flow of the program. Flow-Insensitive Approaches: The following program (a C sketch is given below, after the discussion of type sensitivity) demonstrates how call-site sensitivity can achieve higher precision than a flow-insensitive, context-insensitive analysis. Flow-Insensitive Approaches: For this program, a context-insensitive analysis would (soundly but imprecisely) conclude that x can point to either the allocation holding y or that of z, so y2 and z2 may alias, and both could point to either allocation. A callsite-sensitive analysis would analyze id twice, once for call-site 1 and once for call-site 2, and the points-to facts for x would be qualified by the call-site, enabling the analysis to deduce that when main returns, y2 can only point to the allocation holding y and z2 can only point to the allocation holding z. Flow-Insensitive Approaches: Object sensitivity In an object sensitive analysis, the points-to set of each variable is qualified by the abstract heap allocation of the receiver object of the method call. Unlike call-site sensitivity, object-sensitivity is non-syntactic or non-local: the context entries are derived during the points-to analysis itself. Type sensitivity Type sensitivity is a variant of object sensitivity where the allocation site of the receiver object is replaced by the class/type containing the method containing the allocation site of the receiver object. This results in strictly fewer contexts than would be used in an object-sensitive analysis, which generally means better performance.
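The program discussed above is likewise not reproduced in the text. The following minimal C sketch is hypothetical (names chosen to match the article's id, x, y, z, y2, z2 and the two numbered call sites) and exhibits the behaviour described: a context-insensitive analysis merges the two calls to id, while a call-site-sensitive analysis distinguishes them:

```c
#include <stdlib.h>

/* identity function; its parameter x is the variable the analysis reasons about */
void *id(void *x)
{
    return x;
}

int main(void)
{
    void *y = malloc(8);   /* abstract allocation "holding y" */
    void *z = malloc(8);   /* abstract allocation "holding z" */

    void *y2 = id(y);      /* call-site 1 */
    void *z2 = id(z);      /* call-site 2 */

    /* Context-insensitively, points-to(x) = {alloc_y, alloc_z}, so y2 and z2
     * may alias; with call-site sensitivity, y2 -> alloc_y and z2 -> alloc_z. */
    free(y2);
    free(z2);
    return 0;
}
```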
**K-homology** K-homology: In mathematics, K-homology is a homology theory on the category of locally compact Hausdorff spaces. It classifies the elliptic pseudo-differential operators acting on the vector bundles over a space. In terms of C∗-algebras, it classifies the Fredholm modules over an algebra. An operator homotopy between two Fredholm modules (H,F0,Γ) and (H,F1,Γ) is a norm-continuous path of Fredholm modules, t↦(H,Ft,Γ), t∈[0,1]. K-homology: Two Fredholm modules are then equivalent if they are related by unitary transformations or operator homotopies. The K0(A) group is the abelian group of equivalence classes of even Fredholm modules over A. The K1(A) group is the abelian group of equivalence classes of odd Fredholm modules over A. Addition is given by direct summation of Fredholm modules, and the inverse of (H,F,Γ) is (H,−F,−Γ).
**Symbescaline** Symbescaline: Symbescaline, or 3,5-diethoxy-4-methoxyphenethylamine, is a lesser-known psychedelic drug. It is an isomer of asymbescaline. Symbescaline was first synthesized by Alexander Shulgin. In his book PiHKAL (Phenethylamines I Have Known and Loved), the dosage is listed as 240 mg, and the duration listed as unknown. Symbescaline produces few effects; those reported are alertness and a threshold effect. Very little data exists about the pharmacological properties, metabolism, and toxicity of symbescaline.
**Mulberry molar** Mulberry molar: Mulberry molars are a dental condition usually associated with congenital syphilis, characterized by multiple rounded rudimentary enamel cusps on the permanent first molars. These teeth are functional but can be cosmetically fixed with crowns, bridges, or implants. Just above the gum line, the mulberry molar looks normal. A deformity becomes apparent towards the cusp or top grinding surface of the tooth. Here, the size of the mulberry molar is diminished in all aspects, creating a stumpy version of a conventional molar. The cause of the molar atrophy is thought to be enamel hypoplasia, or a deficiency in tooth enamel. The underlying dentin and pulp of the tooth is normal, but the enamel covering or molar sheath is thin and deformed, creating a smaller version of a typical tooth. The grinding surface of a mulberry molar is also deformed. Normally, the grinding surface of a molar has a pit and is surrounded by a circular ridge at the top of the tooth, which is used for grinding. The cusp deformity of the mulberry molar is characterized by an extremely shallow or completely absent pit. Instead, the pit area is filled with globular structures bunched together all along the top surface of the cusp. This type of deformity is also thought to be caused by enamel hypoplasia. Mulberry molars are typically functional and do not need treatment. If the deformity is severe or the person is bothered by the teeth, there are several options. The teeth can be covered with a permanent cast crown or stainless steel crown, or the molars can be removed and an implant or bridge can be put in place of the mulberry molar. A mulberry molar is caused by congenital syphilis, which is passed from the mother to the child in the uterus through the placenta. Since this particular symptom of congenital syphilis manifests later in childhood with the eruption of the permanent molars, it is a late stage marker for the disease. Hutchinson's teeth, marked by dwarfed teeth and deformed cusps that are spaced abnormally far apart, are another dental deformity caused by congenital syphilis. Mulberry molars and Hutchinson's teeth will often occur together. Pregnant women with syphilis should tell their doctors about the condition and be treated for it during pregnancy; otherwise the baby should be screened for the disease after birth and treated with penicillin if necessary.
**Sub-pixel resolution** Sub-pixel resolution: In digital image processing, sub-pixel resolution can be obtained in images constructed from sources with information exceeding the nominal pixel resolution of said images. Aliasing: When an object with a certain resolution is represented on a display with lower resolution, the imperfections due to the loss of information are known as aliasing. This can happen with geometric objects, vector graphics, vector fonts or 3D graphics. The most common kind of visual aliasing is when a smooth object such as a line appears jagged because the pixels are large enough to be easily distinguished by the naked eye. These effects can be reduced by anti-aliasing techniques, e.g. adjusting the colour or transparency of a pixel according to how much of it is covered by the object (sub-pixel rendering). Example: For example, if the image of a ship of length 50 metres (160 ft), viewed side-on, is 500 pixels long, the nominal resolution (pixel size) on the side of the ship facing the camera is 0.1 metres (3.9 in). Now sub-pixel resolution of well resolved features can measure ship movements which are an order of magnitude (10×) smaller. Movement is specifically mentioned here because measuring absolute positions requires an accurate lens model and known reference points within the image to achieve sub-pixel position accuracy. Small movements can however be measured (down to 1 cm) with simple calibration procedures. Specific fit functions often suffer specific bias with respect to image pixel boundaries. Users should therefore take care to avoid these "pixel locking" (or "peak locking") effects. Determining feasibility: Whether features in a digital image are sharp enough to achieve sub-pixel resolution can be quantified by measuring the point spread function (PSF) of an isolated point in the image. If the image does not contain isolated points, similar methods can be applied to edges in the image. It is also important when attempting sub-pixel resolution to keep image noise to a minimum. This, in the case of a stationary scene, can be measured from a time series of images. Appropriate pixel averaging, through both time (for stationary images) and space (for uniform regions of the image) is often used to prepare the image for sub-pixel resolution measurements.
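As an illustration of how a sub-pixel position can be extracted in practice, the following short C sketch (a hypothetical helper, not part of the original article) estimates the peak position of a well resolved, isolated feature along one image row by taking the intensity-weighted centroid of the pixels around the brightest pixel. As noted above, simple estimators of this kind can suffer from pixel-locking bias, so this is only a starting point:

```c
#include <stdio.h>

/* Intensity-weighted centroid around the brightest pixel of one image row.
 * Returns a (possibly fractional) pixel coordinate. */
double subpixel_centroid(const double *row, int n, int window)
{
    /* find the brightest pixel */
    int peak = 0;
    for (int i = 1; i < n; i++)
        if (row[i] > row[peak])
            peak = i;

    /* weighted centroid over a small window around the peak */
    double sum_w = 0.0, sum_wx = 0.0;
    for (int i = peak - window; i <= peak + window; i++) {
        if (i < 0 || i >= n)
            continue;
        sum_w  += row[i];
        sum_wx += row[i] * i;
    }
    return sum_w > 0.0 ? sum_wx / sum_w : (double)peak;
}

int main(void)
{
    /* a smooth, well resolved peak centred between pixels 4 and 5 */
    double row[] = {0, 0, 1, 4, 9, 9, 4, 1, 0, 0};
    printf("sub-pixel peak at x = %.3f\n", subpixel_centroid(row, 10, 3));
    return 0;
}
```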
**ISO/IEC 14443** ISO/IEC 14443: ISO/IEC 14443 Identification cards -- Contactless integrated circuit cards -- Proximity cards is an international standard that defines proximity cards used for identification, and the transmission protocols for communicating with them. Standard: The standard is developed by ISO/IEC JTC 1 (Joint Technical Committee 1) / SC 17 (Subcommittee 17) / WG 8 (Working Group 8). Standard: Parts: ISO/IEC 14443-1:2018 Part 1: Physical characteristics; ISO/IEC 14443-2:2016 Part 2: Radio frequency power and signal interface; ISO/IEC 14443-3:2018 Part 3: Initialization and anticollision; ISO/IEC 14443-4:2018 Part 4: Transmission protocol. Types: Cards may be Type A or Type B, both of which communicate via radio at 13.56 MHz (RFID HF). The main differences between these types concern modulation methods, coding schemes (Part 2) and protocol initialization procedures (Part 3). Both Type A and Type B cards use the same transmission protocol (described in Part 4). The transmission protocol specifies data block exchange and related mechanisms: data block chaining, waiting time extension and multi-activation. ISO/IEC 14443 uses the following terms for components: PCD: proximity coupling device (the card reader); PICC: proximity integrated circuit card. Physical size: Part 1 of the standard specifies that the card shall be compliant with ISO/IEC 7810 or ISO/IEC 15457-1, or "an object of any other dimension". Notable implementations: Ventra cards used in buses and trains; MIFARE cards (partial or full implementation, depending on product); biometric passports; EMV payment cards (PayPass, Visa payWave, ExpressPay); national identity cards in the European Economic Area; Near Field Communication, which is based in part on, and is compatible with, ISO/IEC 14443; Calypso, an open security standard for transit fare collection systems; and CIPURSE, an open security standard for transit fare collection systems.
**Adenoidectomy** Adenoidectomy: Adenoidectomy is the surgical removal of the adenoid for reasons which include impaired breathing through the nose, chronic infections, or recurrent earaches. The effectiveness of removing the adenoids in children to improve recurrent nasal symptoms and/or nasal obstruction has not been well studied. The surgery is less commonly performed in adults, in whom the adenoid is much smaller and less active than it is in children. It is most often done on an outpatient basis under general anesthesia. Post-operative pain is generally minimal and reduced by icy or cold foods. The procedure is often combined with tonsillectomy (this combination is usually called an "adenotonsillectomy" or "T&A"), for which the recovery time is an estimated 10–14 days, sometimes longer, mostly dependent on age. Adenoidectomy: Adenoidectomy is not often performed under one year of age, as adenoid function is part of the body's immune system, but its contribution to this decreases progressively beyond this age. Medical uses: The indications for adenoidectomy are still controversial. Widest agreement surrounds the removal of the adenoid for obstructive sleep apnea, usually combined with tonsillectomy. Even then, it has been observed that a significant percentage of the study population (18%) did not respond. There is also support for adenoidectomy in recurrent otitis media in children previously treated with tympanostomy tubes. Finally, the effectiveness of adenoidectomy in children with recurrent upper respiratory tract infections, common cold, otitis media and moderate nasal obstruction has been questioned, with the outcome, in some studies, being no better than watchful waiting. Frequency: In 1971, more than one million Americans underwent tonsillectomies and/or adenoidectomies, of which 50,000 consisted of adenoidectomy alone. By 1996, roughly a half million children underwent some surgery on their adenoid and/or tonsils in both outpatient and inpatient settings. This included approximately 60,000 tonsillectomies, 250,000 combined tonsillectomies and adenoidectomies, and 125,000 adenoidectomies. By 2006, the total number had risen to over 700,000 but when adjusted for population changes, the tonsillectomy "rate" had dropped from 0.62 per thousand children to 0.53 per thousand. A larger decline for combined tonsillectomy and adenoidectomy was noted - from 2.20 per thousand to 1.46. There was no significant change in adenoidectomy rates for chronic infectious reasons (0.25 versus 0.21 per 1000). History: Adenoidectomy was first performed using a ring forceps through the nasal cavity by William Meyer in 1867. In the early 1900s, adenoidectomies began to be routinely combined with tonsillectomy. Initially, the procedures were performed by otolaryngologists, general surgeons, and general practitioners but over the past 30 years have been performed almost exclusively by otolaryngologists. History: In the past, adenoidectomies were also performed as treatment for anorexia nervosa, mental retardation, and enuresis, or to promote 'good health'. By current standards, these indications seem odd but may be explained by the hypothesis that children might have failed to thrive if they had chronically sore throats or severe obstructive sleep apnea (OSA). Also, children who heard poorly because of chronic otitis media might have had unrecognized speech delay mistaken for intellectual disability. Adenoidectomy might have helped to resolve ear fluid problems, speech delays, and consequent perceptions of low intelligence.
History: The relationship between enuresis and obstructive apnea, and the benefit of adenoidectomy by implication, is complex and controversial. On one hand, the frequency of enuresis declines as children grow older. On the other, the size of the adenoid, and again by implication, any obstruction that it might be causing, also declines with increasing age. These two factors make it difficult to distinguish the benefits of adenoidectomy from age-related spontaneous improvement. Further, most of the studies in the medical literature which appear to show benefit from adenoidectomy have been case reports or case series. Such studies are prone to unintentional bias. Finally, a recent study of six thousand children has not shown an association between enuresis and obstructive sleep apnea in general, but an increase with advancing severity of obstructive sleep apnea, observed only in girls. A decline in the frequency of the procedure started in the 1930s as its use became controversial. Tonsillitis and adenoiditis requiring surgery became less frequent with the development of antimicrobial agents and a decline in upper respiratory infections among older school-aged children. Also, several studies had shown that adenoidectomy and tonsillectomy were ineffective for many of the indications used at that time, as well as the suggestion of an increased risk of developing poliomyelitis after the procedure, later disproved. Prospective clinical trials, performed over the last two decades, have redefined the appropriate indications for tonsillectomy and adenoidectomy (T&A), tonsillectomy alone, and adenoidectomy alone.
**Sawyer motor** Sawyer motor: A Sawyer motor or planar motor (also called an area drive) is a multi-coordinate drive that can perform several independent movements in one plane. Goods can be transported along any path to any location. In the industrial environment, the planar motor replaces cross tables in machine tools, for example. This class of motors is named for Bruce Sawyer, who invented it in 1968. Operating principles: The planar motor is a guideless drive system, requiring no mechanical guides. The planar motor consists of a flat base element ("stator") made of tiles and carriages ("movers") arranged on it. The latter are equipped with mostly cuboid magnets whose magnetization is perpendicular to the plane and which are controlled in the X and Y directions with alternating polarity. The movement of the carriages themselves is achieved by further magnets arranged parallel to the plane, thus allowing the carriages to move in the X and Z directions. The number and arrangement of magnets perpendicular and parallel to the base determine the degrees of freedom and positioning accuracy. Operating principles: The operating principle of the planar motor can be traced back to Bruce Sawyer, which is why it is also known as the Sawyer motor. The U.S. engineer applied for a patent for a "Magnetic Positioning Device" in 1966, which was granted in 1968. Application area: Planar motors are mainly used for handling products in individual machines or in machine lines. They combine the dynamics of conventional linear transport systems with magnetic levitation technology, which enables individual and decoupled product transport. In addition, individual products can be traced. Since there is no mechanical connection between the base surface and the carriage, planar motors are characterized by minimal maintenance and cleaning requirements. The cover of the base surface can also be made of stainless steel, glass, or plastic, for example, to protect it from leakage of liquids or cleaning processes.
**Tetramer assay** Tetramer assay: A tetramer assay (also known as a tetramer stain) is a procedure that uses tetrameric proteins to detect and quantify T cells that are specific for a given antigen within a blood sample. The tetramers used in the assay are made up of four major histocompatibility complex (MHC) molecules, which are found on the surface of most cells in the body. MHC molecules present peptides to T-cells as a way to communicate the presence of viruses, bacteria, cancerous mutations, or other antigens in a cell. If a T-cell's receptor matches the peptide being presented by an MHC molecule, an immune response is triggered. Thus, MHC tetramers that are bioengineered to present a specific peptide can be used to find T-cells with receptors that match that peptide. Tetramer assay: The tetramers are labeled with a fluorophore, allowing tetramer-bound T-cells to be analyzed with flow cytometry. Quantification and sorting of T-cells by flow cytometry enables researchers to investigate immune response to viral infection and vaccine administration as well as functionality of antigen-specific T-cells. Generally, if a person's immune system has encountered a pathogen, the individual will possess T cells with specificity toward some peptide on that pathogen. Hence, if a tetramer stain specific for a pathogenic peptide results in a positive signal, this may indicate that the person's immune system has encountered and built a response to that pathogen. History: This methodology was first published in 1996 by a lab at Stanford University. Previous attempts to quantify antigen-specific T-cells involved the less accurate limiting dilution assay, which estimates numbers of T-cells at 50-500 times below their actual levels. Stains using soluble MHC monomers were also unsuccessful due to the low binding affinity of T-cell receptors and MHC-peptide monomers. MHC tetramers can bind to more than one receptor on the target T-cell, resulting in an increased total binding strength and lower dissociation rates. Uses: CD8+ T-cells Tetramer stains usually analyze cytotoxic T lymphocyte (CTL) populations. CTLs are also called CD8+ T-cells, because they have CD8 co-receptors that bind to MHC class I molecules. Most cells in the body express MHC class I molecules, which are responsible for processing intracellular antigens and presenting at the cell's surface. If the peptides being presented by MHC class I molecules are foreign—for example, derived from viral proteins instead of the cell's own proteins—the CTL with a receptor that matches the peptide will destroy the cell. Tetramer stains allow for the visualization, quantification, and sorting of these cells by flow cytometry, which is extremely useful in immunology. T-cell populations can be tracked over the duration of a virus or after the application of a vaccine. Tetramer stains can also be paired with functional assays like ELIspot, which detects the number of cytokine secreting cells in a sample. Uses: MHC Class I Tetramer Construction MHC tetramer molecules developed in a lab can mimic the antigen presenting complex on cells and bind to T-cells that recognize the antigen. Class I MHC molecules are made up of a polymorphic heavy α-chain associated with an invariant light chain beta-2 microglobulin (β2m). Escherichia coli are used to synthesize the light chain and a shortened version of the heavy chain that includes the biotin 15 amino acid recognition tag. 
These MHC chains are biotinylated with the enzyme BirA and refolded with the antigenic peptide of interest. Biotin is a small molecule that forms a strong bond with another protein called streptavidin. Fluorophore-tagged streptavidin is added to the bioengineered MHC monomers, and the biotin-streptavidin interaction causes four MHC monomers to bind to the streptavidin and create a tetramer. When the tetramers are mixed with a blood sample, they will bind to T-cells expressing the appropriate antigen-specific receptor. Any MHC tetramers that are not bound are washed out of the sample before it is analyzed with flow cytometry. Recent advancements in recombinant MHC molecules have democratised peptide-MHC complex formulation and subsequent multimerisation. Highly active formulations of a broad range of MHC class I molecules now allow non-expert users to make their own custom peptide-MHC complexes day-to-day in any lab without special equipment. Uses: CD4+ T-cells Tetramers that bind to helper T-cells have also been developed. Helper T-cells or CD4+ T-cells express CD4 co-receptors. They bind to class II MHC molecules, which are only expressed in professional antigen-presenting cells like dendritic cells or macrophages. Class II MHC molecules present extracellular antigens, allowing helper T-cells to detect bacteria, fungi, and parasites. Class II MHC tetramer use is becoming more common, but the tetramers are more difficult to create than class I tetramers and the bond between helper T-cells and MHC molecules is even weaker. Uses: Natural Killer T-cells Natural killer T-cells (NKT cells) can also be visualized with tetramer technology. NKT cells bind to proteins that present lipid or glycolipid antigens. The antigen presenting complex that NKT cells bind to involves CD1 proteins, so tetramers made of CD1 can be used to stain for NKT cells. Examples: An early application of tetramer technology focused on the cell-mediated immune response to HIV infection. MHC tetramers were developed to present HIV antigens and used to find the percentage of CTLs specific to those HIV antigens in blood samples of infected patients. This was compared to results of cytotoxic assays and plasma RNA viral load to characterize the function of CTLs in HIV infection. The CTLs that bound to tetramers were sorted into ELIspot wells for analysis of cytokine secretion. Another study utilized MHC tetramer complexes to investigate the effectiveness of an influenza vaccine delivery method. Mice were given subcutaneous and intranasal vaccinations for influenza, and tetramer stains coupled with flow cytometry were used to quantify the CTLs specific to the antigen used in the vaccine. This allowed for comparison of the immune response (the number of T-cells that target a virus) in two different vaccine delivery methods.
**Methylenedioxycathinone** Methylenedioxycathinone: 3,4-Methylenedioxycathinone (also known as MDC, Nitrilone, Amylone and βk-MDA) is an empathogen and stimulant of the phenethylamine, amphetamine, and cathinone classes and the β-keto analogue of MDA. Methylenedioxycathinone has been investigated as an antidepressant and antiparkinsonian agent.
**Coffin–Siris syndrome** Coffin–Siris syndrome: Coffin–Siris syndrome (CSS), first described in 1970 by Dr Grange S. Coffin and Dr E. Siris, is a rare genetic disorder that causes developmental delays and absent fifth finger and toe nails. There had been 31 reported cases by 1991. The number of occurrences since then has grown and is now reported to be around 200. The differential includes Nicolaides–Baraitser syndrome. Presentation: mild to moderate to severe intellectual disability (also called "developmental disability"); short fifth digits with hypoplastic or absent nails; low birth weight; feeding difficulties at birth; frequent respiratory infections during infancy; hypotonia; joint laxity; delayed bone age; microcephaly; and coarse facial features, including a wide nose, wide mouth, and thick eyebrows and lashes. Causes: The disease can be inherited as an autosomal dominant trait; however, most cases of CSS appear to be the result of a de novo mutation. Causes: This syndrome has been associated with mutations in the ARID1B gene, which are the most prevalent in CSS. Mutations in multiple other genes are also associated with this syndrome, including SOX11, ARID2, DPF2, PHF6, SMARCA2, SMARCA4, SMARCB1, SMARCC2, SMARCE1 and SOX4. The diagnosis is generally based on the presence of major and at least one minor clinical sign and can be confirmed by molecular genetic testing of the causative genes. Recent studies revealed that fifth finger nail/distal phalanx hypoplasia or aplasia is not a mandatory finding. Typically, lab work will be done to rule out other conditions and genetic testing will also be performed to get the official diagnosis. Treatment: There is no known cure or standard treatment. Treatment is based on symptoms and may include physical, occupational and speech therapy and educational services as well.
**Armed Forces Institute of Regenerative Medicine** Armed Forces Institute of Regenerative Medicine: The Armed Forces Institute of Regenerative Medicine (AFIRM) is a federally funded institution in the United States, committed to developing clinical therapies in the following five areas: burn repair; wound healing without scarring; craniofacial reconstruction; limb reconstruction, regeneration or transplantation; and compartment syndrome, a condition related to inflammation after surgery or injury that can lead to increased pressure, impaired blood flow, nerve damage and muscle death. The Institute was established in 2008 by the United States Department of Defense.
**D-Bus** D-Bus: D-Bus (short for "Desktop Bus") is a message-oriented middleware mechanism that allows communication between multiple processes running concurrently on the same machine. D-Bus was developed as part of the freedesktop.org project, initiated by GNOME developer Havoc Pennington to standardize services provided by Linux desktop environments such as GNOME and KDE. The freedesktop.org project also developed a free and open-source software library called libdbus, as a reference implementation of the specification. This library should not be confused with D-Bus itself, as other implementations of the D-Bus specification also exist, such as GDBus (GNOME), QtDBus (Qt/KDE), dbus-java and sd-bus (part of systemd). Overview: D-Bus is an inter-process communication (IPC) mechanism initially designed to replace the software component communications systems used by the GNOME and KDE Linux desktop environments (CORBA and DCOP respectively). The components of these desktop environments are normally distributed in many processes, each one providing only a few services, and usually just one. These services may be used by regular client applications or by other components of the desktop environment to perform their tasks. Overview: Due to the large number of processes involved—adding up processes providing the services and clients accessing them—establishing one-to-one IPC between all of them becomes an inefficient and quite unreliable approach. Instead, D-Bus provides a software-bus abstraction that gathers all the communications between a group of processes over a single shared virtual channel. Processes connected to a bus do not know how it is internally implemented, but the D-Bus specification guarantees that all processes connected to the bus can communicate with each other through it. Overview: Linux desktop environments take advantage of the D-Bus facilities by instantiating multiple buses, notably: a single system bus, available to all users and processes of the system, that provides access to system services (i.e. services provided by the operating system and also by any system daemons); and a session bus for each user login session, that provides desktop services to user applications in the same desktop session and allows the integration of the desktop session as a whole. A process can connect to any number of buses, provided that it has been granted access to them. In practice, this means that any user process can connect to the system bus and to its current session bus, but not to another user's session buses, or even to a different session bus owned by the same user. The latter restriction may change in the future if all user sessions are combined into a single user bus. D-Bus adds new functionality to applications or simplifies existing functionality, including information-sharing, modularity and privilege separation. For example, information on an incoming voice-call received through Bluetooth or Skype can be propagated and interpreted by any currently-running music player, which can react by muting the volume or by pausing playback until the call is finished. D-Bus can also be used as a framework to integrate different components of a user application. For instance, an office suite can communicate through the session bus to share data between a word processor and a spreadsheet. D-Bus specification: Bus model Every connection to a bus is identified in the context of D-Bus by what is called a bus name.
A bus name consists of two or more dot-separated strings of letters, digits, dashes, and underscores. An example of a valid bus name is org.freedesktop.NetworkManager. When a process sets up a connection to a bus, the bus assigns to the connection a special bus name called the unique connection name. Bus names of this type are immutable—it is guaranteed they will not change as long as the connection exists—and, more importantly, they cannot be reused during the bus lifetime. This means that no other connection to that bus will ever be assigned that unique connection name, even if the same process closes down the connection to the bus and creates a new one. Unique connection names are easily recognizable because they start with the otherwise forbidden colon character. An example of a unique connection name is :1.1553 (the characters after the colon have no particular meaning). D-Bus specification: A process can ask for additional bus names for its connection, provided that any requested name is not already being used by another connection to the bus. In D-Bus parlance, when a bus name is assigned to a connection, the connection is said to own the bus name. In that sense, a bus name cannot be owned by two connections at the same time, but, unlike unique connection names, these names can be reused if they are available: a process may reclaim a bus name released—purposely or not—by another process. The idea behind these additional bus names, commonly called well-known names, is to provide a way to refer to a service using a prearranged bus name. For instance, the service that reports the current time and date in the system bus lies in the process whose connection owns the org.freedesktop.timedate1 bus name, regardless of which process it is. D-Bus specification: Bus names can be used as a simple way to implement single-instance applications (second instances detect that the bus name is already taken). They can also be used to track a service process's lifecycle, since the bus sends a notification when a bus name is released due to a process termination.
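As a rough illustration of unique and well-known bus names, here is a minimal sketch using the reference libdbus C API (assumptions: the libdbus development headers are installed and the program is compiled with the flags from `pkg-config --cflags --libs dbus-1`; the well-known name org.example.Demo is made up for the example):

```c
#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    /* connect to the per-login session bus */
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (conn == NULL) {
        fprintf(stderr, "connection failed: %s\n", err.message);
        dbus_error_free(&err);
        return 1;
    }

    /* the unique connection name assigned by the bus, e.g. ":1.97" */
    printf("unique name: %s\n", dbus_bus_get_unique_name(conn));

    /* ask to own an additional, well-known bus name (hypothetical name) */
    int r = dbus_bus_request_name(conn, "org.example.Demo",
                                  DBUS_NAME_FLAG_DO_NOT_QUEUE, &err);
    if (r == DBUS_REQUEST_NAME_REPLY_PRIMARY_OWNER)
        printf("now the primary owner of org.example.Demo\n");
    else
        printf("could not become primary owner (reply code %d)\n", r);

    return 0;
}
```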
Clients should instruct the bus that they are interested in receiving certain signals from a particular object, since a D-Bus bus only passes signals to those processes with a registered interest in them. A process connected to a D-Bus bus can request it to export as many D-Bus objects as it wants. Each object is identified by an object path, a string of numbers, letters and underscores separated and prefixed by the slash character, so called because of its resemblance to Unix filesystem paths. The object path is selected by the requesting process, and must be unique in the context of that bus connection. An example of a valid object path is /org/kde/kspread/sheets/3/cells/4/5. However, forming hierarchies within object paths is neither enforced nor discouraged. The particular naming convention for the objects of a service is entirely up to the developers of that service, but many developers choose to namespace them using the reserved domain name of the project as a prefix (e.g. /org/kde). Every object is inextricably associated with the particular bus connection where it was exported, and, from the D-Bus point of view, only lives in the context of that connection. Therefore, in order to be able to use a certain service, a client must indicate not only the object path providing the desired service, but also the bus name under which the service process is connected to the bus. This in turn allows several processes connected to the bus to export different objects with identical object paths unambiguously. D-Bus specification: An interface specifies the members—methods and signals—that can be used with an object. It is a set of declarations of methods (including their passing and returning parameters) and signals (including their parameters) identified by a dot-separated name resembling the Java language interfaces notation. An example of a valid interface name is org.freedesktop.Introspectable. Despite their similarity, interface names and bus names should not be mistaken for one another. A D-Bus object can implement several interfaces, but must implement at least one, providing support for every method and signal defined by it. The combination of all interfaces implemented by an object is called the object type. When using an object, it is good practice for the client process to provide the member's interface name besides the member's name, but this is only mandatory when there is an ambiguity caused by duplicated member names available from different interfaces implemented by the object—otherwise, the selected member is undefined or erroneous. An emitted signal, on the other hand, must always indicate to which interface it belongs. D-Bus specification: The D-Bus specification also defines several standard interfaces that objects may want to implement in addition to their own interfaces. Although technically optional, most D-Bus service developers choose to support them in their exported objects since they offer important additional features to D-Bus clients, such as introspection. These standard interfaces are: org.freedesktop.DBus.Peer: provides a way to test if a D-Bus connection is alive. D-Bus specification: org.freedesktop.DBus.Introspectable: provides an introspection mechanism by which a client process can, at run-time, get a description (in XML format) of the interfaces, methods and signals that the object implements. org.freedesktop.DBus.Properties: allows a D-Bus object to expose the underlying native object properties or attributes, or simulate them if they do not exist.
D-Bus specification: org.freedesktop.DBus.ObjectManager: when a D-Bus service arranges its objects hierarchically, this interface provides a way to query an object about all sub-objects under its path, as well as their interfaces and properties, using a single method call. The D-Bus specification defines a number of administrative bus operations (called "bus services") to be performed using the /org/freedesktop/DBus object that resides in the org.freedesktop.DBus bus name. Each bus reserves this special bus name for itself, and manages any requests made specifically to this combination of bus name and object path. The administrative operations provided by the bus are those defined by the object's interface org.freedesktop.DBus. These operations are used, for example, to provide information about the status of the bus, or to manage the request and release of additional well-known bus names. D-Bus specification: Communications model D-Bus was conceived as a generic, high-level inter-process communication system. To accomplish these goals, D-Bus communications are based on the exchange of messages between processes instead of "raw bytes". D-Bus messages are high-level discrete items that a process can send through the bus to another connected process. Messages have a well-defined structure (even the types of the data carried in their payload are defined), allowing the bus to validate them and to reject any ill-formed message. D-Bus specification: In this regard, D-Bus is closer to an RPC mechanism than to a classic IPC mechanism, with its own type definition system and its own marshaling. D-Bus specification: The bus supports two modes of interchanging messages between a client and a service process: One-to-one request-response: This is the way for a client to invoke an object's method. The client sends a message to the service process exporting the object, and the service in turn replies with a message back to the client process. The message sent by the client must contain the object path, the name of the invoked method (and optionally the name of its interface), and the values of the input parameters (if any) as defined by the object's selected interface. The reply message carries the result of the request, including the values of the output parameters returned by the object's method invocation, or exception information if there was an error. D-Bus specification: Publish/subscribe: This is the way for an object to announce the occurrence of a signal to the interested parties. The object's service process broadcasts a message that the bus passes only to the connected clients subscribed to the object's signal. The message carries the object path, the name of the signal, the interface to which the signal belongs, and also the values of the signal's parameters (if any). The communication is one-way: there are no response messages to the original message from any client process, since the sender knows neither the identities nor the number of the recipients. Every D-Bus message consists of a header and a body. The header is formed by several fields that identify the type of message, the sender, as well as information required to deliver the message to its recipient (destination bus name, object path, method or signal name, interface name, etc.). The body contains the data payload that the receiver process interprets—for instance the input or output arguments.
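As an illustration of the one-to-one request-response mode, the following libdbus sketch invokes the standard Introspect method on the bus's own administrative object; the reply carries the XML description of interfaces and members mentioned above. It is only a sketch of how such a call might look, not the only way to issue it:

```c
/* Sketch: calling org.freedesktop.DBus.Introspectable.Introspect on the
 * bus's administrative object /org/freedesktop/DBus. */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn) return 1;

    DBusMessage *call = dbus_message_new_method_call(
        "org.freedesktop.DBus",                 /* destination bus name */
        "/org/freedesktop/DBus",                /* object path          */
        "org.freedesktop.DBus.Introspectable",  /* interface            */
        "Introspect");                          /* method               */

    /* Blocking request-response round trip through the bus. */
    DBusMessage *reply = dbus_connection_send_with_reply_and_block(
        conn, call, -1 /* default timeout */, &err);
    dbus_message_unref(call);
    if (!reply) {
        fprintf(stderr, "call failed: %s\n", err.message);
        return 1;
    }

    const char *xml = NULL;
    if (dbus_message_get_args(reply, &err, DBUS_TYPE_STRING, &xml,
                              DBUS_TYPE_INVALID))
        printf("%s\n", xml);   /* XML description of the object's members */
    dbus_message_unref(reply);
    return 0;
}
```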
All the data is encoded in a well-known binary format called the wire format, which supports the serialization of various types such as integers and floating-point numbers, strings and compound types; this encoding is also referred to as marshaling. D-Bus specification: The D-Bus specification defines the wire protocol: how to build the D-Bus messages to be exchanged between processes within a D-Bus connection. However, it does not define the underlying transport method for delivering these messages. Internals: Most existing D-Bus implementations follow the architecture of the reference implementation. This architecture consists of two main components: a point-to-point communications library that implements the D-Bus wire protocol in order to exchange messages between two processes. In the reference implementation this library is libdbus. In other implementations libdbus may be wrapped by another higher-level library or language binding, or entirely replaced by a different standalone implementation that serves the same purpose. This library only supports one-to-one communications between two processes. Internals: a special daemon process that plays the bus role and to which the rest of the processes connect using any D-Bus point-to-point communications library. This process is also known as the message bus daemon, since it is responsible for routing messages from any process connected to the bus to another. In the reference implementation this role is performed by dbus-daemon, which itself is built on top of libdbus. Another implementation of the message bus daemon is dbus-broker, which is built on top of sd-bus. Internals: The libdbus library (or its equivalent) internally uses a native lower-level IPC mechanism to transport the required D-Bus messages between the two processes at both ends of the D-Bus connection. The D-Bus specification does not mandate which particular IPC transport mechanisms should be available to use, as it is the communications library that decides what transport methods it supports. For instance, in Unix-like operating systems such as Linux, libdbus typically uses Unix domain sockets as the underlying transport method, but it also supports TCP sockets. The communications libraries of both processes must agree on the selected transport method and also on the particular channel used for their communication. This information is defined by what D-Bus calls an address. Unix-domain sockets are filesystem objects, and therefore they can be identified by a filename, so a valid address would be unix:path=/tmp/.hiddensocket. Both processes must pass the same address to their respective communications libraries to establish the D-Bus connection between them. An address can also provide additional data to the communications library in the form of comma-separated key=value pairs. This way, for example, it can provide authentication information to a specific type of connection that supports it. Internals: When a message bus daemon like dbus-daemon is used to implement a D-Bus bus, all processes that want to connect to the bus must know the bus address, the address by which a process can establish a D-Bus connection to the central message bus process. In this scenario, the message bus daemon selects the bus address and the remaining processes must pass that value to their corresponding libdbus or equivalent libraries. dbus-daemon defines a different bus address for every bus instance it provides. These addresses are defined in the daemon's configuration files.
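A hedged sketch of how a process might open a point-to-point connection directly from an address string, without going through a message bus daemon, is shown below; the socket path is a made-up placeholder, and a real deployment would also have to agree on authentication details:

```c
/* Sketch: opening a point-to-point D-Bus connection from an explicit
 * address string (no message bus daemon involved). */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    /* Both peers must use the same address for the connection to work.
     * "/tmp/example-socket" is an illustrative placeholder path. */
    DBusConnection *conn =
        dbus_connection_open("unix:path=/tmp/example-socket", &err);
    if (!conn) {
        fprintf(stderr, "connect failed: %s\n", err.message);
        dbus_error_free(&err);
        return 1;
    }
    printf("connected\n");
    dbus_connection_unref(conn);
    return 0;
}
```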
Internals: Two processes can use a D-Bus connection to exchange messages directly between them, but this is not the way in which D-Bus is normally intended to be used. The usual way is to always use a message bus daemon (i.e. dbus-daemon) as a central communication point to which each process should establish its point-to-point D-Bus connection. When a process—client or service—sends a D-Bus message, the message bus process receives it in the first instance and delivers it to the appropriate recipient. The message bus daemon may be seen as a hub or router in charge of getting each message to its destination by repeating it through the D-Bus connection to the recipient process. The recipient process is determined by the destination bus name in the message's header field, or by the subscription information to signals maintained by the message bus daemon in the case of signal propagation messages. The message bus daemon can also produce its own messages as a response to certain conditions, such as an error message to a process that sent a message to a nonexistent bus name. dbus-daemon improves the feature set already provided by D-Bus itself with additional functionality. For example, service activation allows automatic starting of services when needed—when the first request to any bus name of such a service arrives at the message bus daemon. This way, service processes need neither be launched during the system or user initialization stage nor consume memory or other resources when not being used. This feature was originally implemented using setuid helpers, but nowadays it can also be provided by systemd's service activation framework. Service activation is an important feature that facilitates the management of the process lifecycle of services (for example when a desktop component should start or stop). History and adoption: D-Bus was started in 2002 by Havoc Pennington, Alex Larsson (Red Hat) and Anders Carlsson. Version 1.0—considered API stable—was released in November 2006. History and adoption: Heavily influenced by the DCOP system used by versions 2 and 3 of KDE, D-Bus has replaced DCOP in the KDE 4 release. An implementation of D-Bus supports most POSIX operating systems, and a port for Windows exists. It is used by Qt 4 and later by GNOME. In GNOME it has gradually replaced most parts of the earlier Bonobo mechanism. It is also used by Xfce. History and adoption: One of the earlier adopters was the (nowadays deprecated) Hardware Abstraction Layer. HAL used D-Bus to export information about hardware that had been added to or removed from the computer. The usage of D-Bus is steadily expanding beyond the initial scope of desktop environments to cover an increasing number of system services. For instance, the NetworkManager network daemon, the BlueZ bluetooth stack and the PulseAudio sound server use D-Bus to provide part or all of their services. systemd uses the D-Bus wire protocol for communication between systemctl and systemd, and is also promoting traditional system daemons to D-Bus services, such as logind. Another heavy user of D-Bus is Polkit, whose policy authority daemon is implemented as a service connected to the system bus. Implementations: libdbus Although there are several implementations of D-Bus, the most widely used is the reference implementation libdbus, developed by the same freedesktop.org project that designed the specification.
However, libdbus is a low-level implementation that was never meant to be used directly by application developers, but rather as a reference guide for other reimplementations of D-Bus (such as those included in standard libraries of desktop environments, or in programming language bindings). The freedesktop.org project itself recommends that application authors "use one of the higher level bindings or implementations" instead. The predominance of libdbus as the most used D-Bus implementation caused the terms "D-Bus" and "libdbus" to often be used interchangeably, leading to confusion. Implementations: GDBus GDBus is an implementation of D-Bus based on GIO streams included in GLib, aiming to be used by GTK+ and GNOME. GDBus is not a wrapper of libdbus, but a complete and independent reimplementation of the D-Bus specification and protocol. MATE Desktop and Xfce (version 4.14), which are also based on GTK+ 3, also use GDBus. Implementations: sd-bus In 2013, the systemd project rewrote libdbus in an effort to simplify the code, which also resulted in a significant increase in overall D-Bus performance. In preliminary benchmarks, BMW found that systemd's D-Bus library increased performance by 360%. By version 221 of systemd, the sd-bus API was declared stable. Implementations: kdbus kdbus was a project that aimed to reimplement D-Bus as a kernel-mediated peer-to-peer inter-process communication mechanism. Besides performance improvements, kdbus would have had advantages arising from other Linux kernel features such as namespaces and auditing, security from the kernel mediating, closing race conditions, and allowing D-Bus to be used during boot and shutdown (as needed by systemd). kdbus inclusion in the Linux kernel proved controversial, and it was dropped in favor of BUS1, a more generic inter-process communication mechanism. Implementations: Language bindings Several programming language bindings for D-Bus have been developed, such as those for Java, C#, Ruby, and Perl.
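For comparison with the earlier libdbus sketches, the following sketch performs an equivalent method call through sd-bus, the systemd project's D-Bus library; the GetId method of the bus's administrative interface simply returns the bus identifier. This is an illustrative sketch rather than a recommended pattern:

```c
/* Sketch: a method call issued through sd-bus instead of libdbus.
 * Build with: gcc example.c $(pkg-config --cflags --libs libsystemd) */
#include <systemd/sd-bus.h>
#include <stdio.h>

int main(void)
{
    sd_bus *bus = NULL;
    sd_bus_error error = SD_BUS_ERROR_NULL;
    sd_bus_message *reply = NULL;
    const char *id = NULL;
    int r;

    r = sd_bus_open_user(&bus);            /* connect to the session bus */
    if (r < 0) return 1;

    /* Ask the bus daemon's own object for the bus identifier. */
    r = sd_bus_call_method(bus,
                           "org.freedesktop.DBus",   /* destination  */
                           "/org/freedesktop/DBus",  /* object path  */
                           "org.freedesktop.DBus",   /* interface    */
                           "GetId",                  /* method       */
                           &error, &reply, "");      /* no arguments */
    if (r >= 0 && sd_bus_message_read(reply, "s", &id) >= 0)
        printf("bus id: %s\n", id);

    sd_bus_error_free(&error);
    sd_bus_message_unref(reply);
    sd_bus_unref(bus);
    return 0;
}
```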
**Carva** Carva: Carva is a human-powered, Lean Steer elliptical cross-trainer tricycle, with two wheels in the front and one in the back. It is used to simulate outdoor walking or running without causing excessive pressure on the joints, hence decreasing the risk of impact injuries. Carva offers a non-impact cardiovascular workout that can vary from light to high intensity based on the riding preference chosen by the user. Lean Steer: A steering mechanism actuated by degree of body lean and weight transfer. Uses: Carva can be employed for many uses, including recreation and workout/fitness. Brakes: Carva uses standard bicycle brakes, such as rim brakes, in which friction pads are compressed against the wheel rims; internal hub brakes, in which the friction pads are contained within the wheel hubs; or disc brakes, with a separate rotor for braking. With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads. A rear hub brake may be either hand-operated or pedal-actuated, as in the back-pedal coaster brakes which were popular in North America until the 1960s and are common on children's bicycles.
**Kpatch** Kpatch: kpatch is a feature of the Linux kernel that implements live patching of a running kernel, which allows kernel patches to be applied while the kernel is still running. By avoiding the need for rebooting the system with a new kernel that contains the desired patches, kpatch aims to maximize system uptime and availability. At the same time, kpatch allows kernel-related security updates to be applied without deferring them to scheduled downtimes. Internally, kpatch allows entire functions in a running kernel to be replaced with their patched versions, doing so safely by stopping all running processes while the live patching is performed. kpatch is developed by Red Hat, with its source code licensed under the terms of the GNU General Public License version 2 (GPLv2). In May 2014, kpatch was submitted for inclusion into the Linux kernel mainline, and the minimalistic foundations for live patching were merged into the Linux kernel mainline in kernel version 4.0, which was released on April 12, 2015. Internals: Internally, kpatch consists of two parts – the core kernel module executes the live patching mechanism by altering the kernel's inner workings, while a set of userspace utilities prepares individual hot patch kernel modules from source diffs and manages their application. Live kernel patching is performed at the function level, meaning that kpatch can replace entire functions in the running kernel with their patched versions by using facilities provided by ftrace to "route around" old versions of functions; that way, hot patches can also easily be undone. No changes to the kernel's internal data structures are possible; however, security patches, which are one of the natural candidates to be used with kpatch, rarely contain changes to the kernel's data structures. kpatch ensures that hot patches are applied atomically and safely by stopping all running processes while the hot patch is applied, and by ensuring that none of the stopped processes is running inside the functions that are to be patched. Such an approach simplifies the whole live patching mechanism and prevents certain issues associated with the way data structures are used by original and patched versions of functions. As a downside, this approach also leaves the possibility for a hot patch to fail, and introduces a small amount of latency required for stopping all running processes. History: Red Hat announced and publicly released kpatch in February 2014 under the terms of the GNU General Public License version 2 (GPLv2), shortly before SUSE released its own live kernel patching implementation called kGraft. kpatch was submitted for inclusion in the Linux kernel mainline in May 2014. kpatch has been included in Red Hat Enterprise Linux 7.0, released on June 10, 2014, as a technology preview. Minimalistic foundations for live kernel patching were merged into the Linux kernel mainline in kernel version 4.0, which was released on April 12, 2015. Those foundations, based primarily on the kernel's ftrace functionality, form a common core capable of supporting hot patching by both kpatch and kGraft, by providing an application programming interface (API) for kernel modules that contain hot patches and an application binary interface (ABI) for the userspace management utilities.
However, the common core included in Linux kernel 4.0 supports only the x86 architecture and does not provide any mechanisms for ensuring function-level consistency while the hot patches are applied. Since April 2015, there has been ongoing work on porting kpatch to the common live patching core provided by the Linux kernel mainline. However, implementation of the required function-level consistency mechanisms has been delayed because the call stacks provided by the Linux kernel may be unreliable in situations that involve assembly code without proper stack frames; as a result, the porting work remains in progress as of September 2015. In an attempt to improve the reliability of the kernel's call stacks, a specialized sanity-check userspace utility, stacktool, has also been developed.
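For illustration, a hot patch targeting the mainline live patching core is packaged as an ordinary kernel module that describes which functions to replace. The following sketch is modelled on the kernel's own livepatch sample module; the exact API has changed across kernel versions, so the details should be treated as indicative rather than definitive:

```c
/* Sketch of a hot-patch module for the mainline livepatch core, modelled
 * on the kernel's samples/livepatch/livepatch-sample.c. It replaces
 * cmdline_proc_show(), the function behind /proc/cmdline. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* Patched replacement for the original function. */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

/* Which functions to replace, identified by symbol name. */
static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

/* A NULL object name means the functions live in vmlinux, not a module. */
static struct klp_object objs[] = {
	{
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_init(void)
{
	/* Registers the patch and starts routing calls to the new function. */
	return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
```

In the kpatch workflow the module like the one above is normally generated automatically from a source diff by the userspace utilities rather than written by hand.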
**Paradigms of AI Programming** Paradigms of AI Programming: Paradigms of AI Programming: Case Studies in Common Lisp (ISBN 1-55860-191-0) is a well-known programming book by Peter Norvig about artificial intelligence programming using Common Lisp. History: The Lisp programming language has survived since 1958 as a primary language for Artificial Intelligence research. This text was published in 1992 as the Common Lisp standard was becoming widely adopted. Norvig introduces Lisp programming in the context of classic AI programs, including General Problem Solver (GPS) from 1959, ELIZA: Dialog with a Machine, from 1966, and STUDENT: Solving Algebra Word Problems, from 1964. The book covers more recent AI programming techniques, including Logic Programming, Object-Oriented Programming, Knowledge Representation, Symbolic Mathematics and Expert Systems.
**IBM Retail Store Solutions** IBM Retail Store Solutions: IBM Retail Store Solutions was IBM's division in the retail market segment. During its run, IBM Retail Store Solutions had several product lines, both hardware and software. Hardware products included IBM SurePOS 700 point-of-sale systems and printers. Software products under its portfolio included IBM 4690, IRES (IBM Retail Environment for SUSE LINUX), Lotus Expeditor, Lotus Expeditor Integrator, IBM Store Integrator and IBM Store Integrator Graphic User Interface. Besides those, IBM RSS was responsible for the creation of software such as the 4690 software, IRES, and POSS for DOS. IBM won the 2008 Point of Sale Green Excellence of the Year award. On April 17, 2012, IBM announced a definitive agreement under which Toshiba TEC acquired IBM's Retail Store Solutions business. IBM IRES: IBM IRES (IBM Retail Environment for SUSE LINUX) provides retail functions such as those provided by IBM's 4690, including Server-based POS loading and booting, Industry-standard system-wide configuration and change management, Automatic problem determination with single-step dump button support, Combined server/terminal support, Client preload GUI and Remote Management Agent for systems management support. It is also one of the platforms that support the WebSphere MQ platform as part of the IBM Store Integration Framework. IBM IRES entered a partnership with 360Commerce, recognized by the 2005 PartnerWorld Beacon Award for IRES Solutions. IBM IRES is also used with Triversity's Transactionware Enterprise, which is described as the first Java POS solution designed from the ground up to harness the power of J2EE in retail. IBM IRES: IRES has an IBM IRES Sales Mastery course and exam which can be found online. IBM POSS for DOS: IBM POSS (Point of Sale Subsystem for DOS) is legacy software that runs on IBM PC DOS and has supported many IBM customers since the 1980s. The latest publicly supported version is 2.2.0.
**Frantoio** Frantoio: Frantoio and Leccino cultivars are the principal raw material for Italian olive oils from Tuscany. Frantoio is fruity, with a stronger aftertaste than Leccino. About the tree: The Frantoio tree grows well in milder climates, but is not as tolerant of heat and cold as Spanish olive cultivars. The tree grows moderately and has an airy canopy. It tends to be highly productive in the right conditions and has a tendency to grow more like a tree than a bush, which is different from most olive trees. Average oil yield is 23-28% of the fruit. It is self-pollinating and is excellent for pollinating other cultivars. Note that cross-pollination will increase yield.
**Choice in eCommerce** Choice in eCommerce: Choice in eCommerce - Initiative for Choice and Innovation in Online-Trade - is an initiative of online retailers throughout Europe that works for unrestricted trade and innovation in Europe. Spokesman for the initiative is Oliver Prothmann, founder of the multi-channel tool chartixx. History: Choice in eCommerce was founded on May 8, 2013 by several online retailers in Berlin, Germany. The cause was, in the view of the initiative, sales bans and online restrictions imposed by individual manufacturers. The dealers felt cut off from their main sales channel and thus deprived of the opportunity to use online platforms such as Amazon, eBay or Rakuten in a competitive market for the benefit of their customers. Sales of all products and services traded online in Europe in 2012 totalled 311.6 billion euros. Online trading in Europe is estimated to have created up to two million jobs. History: The initiative later received support from other industry stakeholders including the German BVOH (Bundesverband Onlinehandel) and the CCIA (Computer and Communications Industry Association). In the summer of 2013 Choice started a petition calling for free and fair trade. On December 17, 2013 Oliver Prothmann handed the petition, containing 14,341 signatures of online retailers from across Europe, to Olli Rehn, Vice-President of the European Commission. The petition calls for manufacturers and brand owners to refrain from trade restrictions or prohibitions for online retailers.
**Parabounce** Parabounce: The Parabounce is a balloon-like apparatus created by Stephen Meadows and patented on December 4, 2001 (U.S. 6,325,329). Description and operation: The apparatus consists of a gas-filled balloon envelope made from polyurethane-coated material and of a sufficient diameter and volume so that the balloon, when fully inflated and balanced with appropriate weight, almost counteracts the effects of gravity on a pilot. The Parabounce incorporates patented features that permit it to be weight-balanced and quickly deflated in case of an emergency. A parachute-style harness secures the pilot to the balloon. By pushing off the ground with his or her legs, the pilot ascends in the balloon to a maximum height of about 120 feet before gradually descending due to the positive weight of the pilot. Optional tether lines held by persons serving as the ground crew prevent the balloon from floating out of control. Once aloft, the pilot can float and glide for distances of up to a quarter mile before gradually descending. In the news: The Parabounce premiered on NBC's Today Show on August 9, 1999. Katie Couric flew the Parabounce a hundred feet above Rockefeller Center. On February 24, 2002, five Parabounce units were internally lit and supported acrobats for the closing ceremonies of the Winter Olympics in Salt Lake City, Utah. On June 29, 2006, Snapple sponsored Parabounce balloon flights at Bryant Park in New York City. Parabounce was featured in the June/July 2003 issue of Inflatable Magazine (pp. 46-47).
**Multisensory worship** Multisensory worship: Multisensory worship or multi-sensory worship is a form of alternative worship, often associated with the emerging church. Multisensory worship is usually corporate worship that is designed to engage the senses. Proponents of multisensory worship often contend that in the postmodern world people don't simply want to hear about God or sing about God, but instead want to feel and experience the presence of God. It frequently involves the use of video and onscreen graphics which are designed to speak to people through the power of the image. Audio-visual elements are often added to supplement sermons and teaching. Thus multisensory worship involves a great deal of experimentation and variety in crafting a more holistic worship experience. Spaces designed for multisensory worship often feature floral arrangements, paintings, and creative lighting to enhance the experience of participants. Multisensory worship is part prayer, part worship, sometimes using prayer stations to evoke the physical senses. As opposed to just reading a book or hearing a sermon, a room is set up for participants to have an experience that involves the physical body in the act of worship. Multisensory worship: An early pioneer of multisensory worship, Leonard Sweet, has a theory that worship should be EPIC in nature: experiential, participatory, image-rich and connective. Another early proponent, Bob Rognlien, wrote in his book Experiential Worship that worship should engage the heart, soul, mind and strength. See also the books Handbook for MultiSensory Worship Volumes I and II and Redesigning Worship, all by Kim Miller of Ginghamsburg Church in Tipp City, Ohio, and the works of Greg Atkinson who also speaks and writes on multisensory worship, leadership, and hospitality.
**Azapride** Azapride: Azapride is the azide derivative of the dopamine antagonist clebopride synthesized in order to label dopamine receptors. It is an irreversible dopamine antagonist.
**Autonomous circuit** Autonomous circuit: An autonomous circuit in analogue electronics is a circuit that produces a time-varying output without having a time-varying input (i.e., it has only DC power as an input). In digital electronics, an autonomous circuit may have a clock signal input, but no other inputs, and operates autonomously (i.e. independently of other circuits), cycling through a set series of states. A Moore machine is autonomous if it has no data inputs, the clock signal not counting as a data input. If a Moore machine has data inputs, they may determine what the next state is, even though they do not affect the outputs of any given state, making it a non-autonomous circuit.
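The distinction can be illustrated with a small software model of an autonomous Moore machine: the only "input" is the clock (here, each loop iteration), the next state depends only on the current state, and the output is a function of the state alone. This is an illustrative sketch, not a description of any particular hardware:

```c
/* Sketch: software model of an autonomous Moore machine. It has no data
 * inputs; it simply cycles through a fixed sequence of states on each
 * clock tick, and its output depends only on the current state. */
#include <stdio.h>

enum state { S0, S1, S2, S3 };

/* Moore outputs: a function of the state alone. */
static const char *output[] = { "00", "01", "11", "10" };

/* Autonomous next-state function: no inputs other than the clock. */
static enum state next(enum state s)
{
    return (enum state)((s + 1) % 4);   /* fixed cycle S0->S1->S2->S3->S0 */
}

int main(void)
{
    enum state s = S0;
    for (int tick = 0; tick < 8; tick++) {   /* eight clock edges */
        printf("tick %d: state S%d output %s\n", tick, s, output[s]);
        s = next(s);
    }
    return 0;
}
```

A non-autonomous Moore machine would differ only in that next() would also take a data input, while the output[] lookup would still depend on the state alone.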
**Long-range reconnaissance patrol** Long-range reconnaissance patrol: A long-range reconnaissance patrol, or LRRP, is a small, well-armed reconnaissance team that patrols deep in enemy-held territory. The concept of scouts dates back to the origins of warfare itself. However, in modern times these specialized units evolved from examples such as Rogers' Rangers in colonial British America, the Lovat Scouts in World War One, the Long Range Desert Group and the Special Air Service in the Western Desert Campaign and North West Europe, similar units such as Force 136 in East Asia, and the special Finnish light infantry units during the Second World War. Long-range reconnaissance patrol: Postwar, the role was carried on in various North Atlantic Treaty Organization (NATO) and British Commonwealth countries by units that could trace their origins to these wartime creations, such as the British SAS, Australia's Special Air Service Regiment and the New Zealand Special Air Service, the 1er RPIMa, 13e RDP, G.C.P. and Groupement de Commandos Mixtes Aéroportés in France, and the United States Army Rangers, Long Range Surveillance teams, and Reconnaissance, Surveillance, and Target Acquisition squadrons. History: As indicated, the use of scouts is ancient; however, during the French and Indian War (1754–1763), the techniques of long-range reconnaissance and raiding were significantly implemented by the British in colonial North America. The British employed Major Robert Rogers to make long-range attacks against the French and their Indian allies along the frontiers of the British colonies and New France. The achievements of Major Rogers' dozen companies of approximately 1,200 men during the French and Indian War were so extraordinary that his doctrine, "Standing Orders, Rogers' Rangers," 1759, became the cornerstone of future U.S. Army long-range reconnaissance patrol units. Long-range reconnaissance patrol by nation: Australia During the Second World War, the 2/1st North Australia Observer Unit was tasked with patrolling the remote areas of northern Australia on horseback. Many from the Unit were recruited to join M Special Unit and Z Special Unit for long-range specialist reconnaissance and sabotage behind Japanese lines. Long-range reconnaissance patrol by nation: From 1966 until 1971 troopers from the Australian Special Air Service Regiment (SASR) served in Vietnam as part of the 1st Australian Task Force at Nui Dat, Phuoc Tuy Province. Missions included medium-range reconnaissance patrols, observation of enemy troop movements, and long-range offensive operations and ambushing in enemy-dominated territory in support of 1ATF operations throughout Phuoc Tuy Province as well as Bien Hoa, Long Khanh and Binh Tuy provinces. In the 1980s the Regional Force Surveillance Units (NORFORCE, The Pilbara Regiment and 51st Battalion, Far North Queensland Regiment) were formed to conduct long-range reconnaissance and surveillance patrols in the sparsely populated and remote regions of northern Australia. Long-range reconnaissance patrol by nation: Canada The Canadian Rangers conduct long-range surveillance or sovereignty patrols in the sparsely settled areas of Northern Canada. Although part of the Canadian Army, they are an irregular military force. Long-range reconnaissance patrol by nation: Patrol Pathfinder units form part of the Reconnaissance Platoon of the 3rd (Light Infantry) Battalion of each Regular Force infantry regiment.
Patrol Pathfinders are trained in airborne and amphibious insertion, including by submarine, and conduct deep reconnaissance missions. Long-range reconnaissance patrol by nation: Denmark The Danish Defence Forces had three Long-Range Surveillance companies (LRSC), known as "Patrol Companies" (PTLCOY). Two were assigned to the two Land Commands, LANDJUT and LANDZEALAND (corps level), and were abbreviated "SEP/ELK" and "SEP/VLK" for "Specielle Efterretningspatruljer/Østre resp. Vestre Landskommando" (i.e. Special Intelligence Patrols of the Eastern and Western Land Commands respectively); these two all-volunteer units within the Danish Home Guard were changed into the Special Support and Reconnaissance Company (SSR) in 2007, a Special Reconnaissance (SR) company dedicated to supporting Danish special operations. The third and last company (PTLCOY/DDIV) was assigned to the Jutland Division (later Danish Division/DDIV) and was trained by instructors from the Danish Army special operations force, Jægerkorpset (i.e. Hunter Force), in Aalborg. PTLCOY/DDIV was disbanded in 2002 due to budget cuts and the intent to adopt UAVs in the Danish Army as the primary means of ISR. The first UAV project later failed and was disbanded as well. Long-range reconnaissance patrol by nation: In addition to these units, the 3rd Reconnaissance Battalion, Guard Hussar Regiment, also has long-range reconnaissance capabilities, particularly in the 1st Light Reconnaissance Squadron (1.LOPESK), whose primary role is long-range reconnaissance and sabotage in light vehicles and with minimal support and resupply. Long-range reconnaissance patrol by nation: Likely to be the world's smallest LRS unit is the Sirius Dog Sled Patrol (Danish: Slædepatruljen Sirius), known informally as Siriuspatruljen (the Sirius Patrol). It is a small, squad-sized elite unit in the Danish Navy that enforces Danish sovereignty in the Arctic wilderness of northern and eastern Greenland and conducts long-range reconnaissance patrolling. Patrolling is usually done in pairs, sometimes for four months and often without additional human contact. Long-range reconnaissance patrol by nation: Finland In Finland, long-range patrols (kaukopartio) were especially notable during World War II. For example, Erillinen Pataljoona 4 (4th Detached Battalion), a command of four different long-range patrol detachments (Detachment Paatsalo, Detachment Kuismanen, Detachment Vehniäinen and Detachment Marttina), operated in the Finnish-Soviet theater of WWII, also known as the Continuation War, from 1941 through 1944. These units penetrated Soviet lines and conducted reconnaissance and destruction missions. During the trench warfare period of the war, long-range patrols were often conducted by special Finnish Sissi troops. After the war, NATO hired former members of the 4th Detached Battalion to spy on the Soviet Union's military bases in the Kola Peninsula and Karelian Isthmus. NATO ended the spy operation in 1957. From then on, espionage data was obtained from satellites. Former President of Finland Mauno Koivisto served in Lauri Törni's specially designed Jäger Company (called Detachment Törni) in the Finnish 1st Infantry Division. Lauri Törni became a US citizen and entered the US Army Special Forces. He contributed important knowledge of long-range patrolling techniques and was declared missing in action during the Vietnam War in 1965. His remains were later found, brought to the US, and buried in Arlington on 26 June 2003.
Long-range reconnaissance patrol by nation: France 13th Parachute Dragoon Regiment, 2nd Hussar Regiment, Groupement des Commandos Parachutistes. Long-range reconnaissance patrol by nation: Germany In the German Bundeswehr, LRRP is called Fernspäher (long-range scouts). Historically, the German Fernspäher units were modelled after the Finnish long-range patrols and derived from the existing elite units of Gebirgsjäger (mountain troops) and Fallschirmjäger (airborne troops). Originally, there were three companies of Fernspäher in the Bundeswehr, one being assigned to each corps. Since the reformation of German special forces in 1996, the Fernspählehrkompanie 200 (FSLK200) has been the single remaining Fernspäher unit. The Fernspäher are part of the Special Operations Division. FSLK200 is the only German special force-type unit which has also recruited women. Details about operations of the FSLK200 are secret, but it is known that Fernspäher carried out missions in Bosnia and Herzegovina, during the Kosovo War and later during Operation Enduring Freedom in Afghanistan. Long-range reconnaissance patrol by nation: India The Special Frontier Force is considered a long-range reconnaissance patrol or pathfinder unit. They were trained against the Chinese but used to great success in Pakistan-administered Kashmir and in the northern state of Punjab. Long-range reconnaissance patrol by nation: Indonesia Kopassus and Tontaipur of the Indonesian Army are units able to conduct long-range reconnaissance patrolling, including pathfinder and special reconnaissance operations. Long-range reconnaissance patrol by nation: Italy Historically, airborne units are normally tasked with carrying out, apart from ordinary airborne assaults, deep-infiltration small-unit reconnaissance. After World War II, during the Cold War, the main LRRP unit was the "Col Moschin" Parachute Assault Company (later battalion and then regiment). Another LRRP unit, specialising as artillery observers, was the 13th Gruppo Acquisizione Obiettivi "GRACO" (Target Acquisition Group, where "Group" is an Italian artillery designation indicating three batteries of guns, roughly a battalion-sized unit) of the 3rd Missile Brigade "Aquileia", and especially the Batteria Acquisizione Obiettivi "Pipistrelli" (Target Acquisition Battery "Bats"), a company-sized, fully airborne LRRP unit composed of artillery soldiers that trained at the I-LRRP school of Weingarten. This group was later incorporated in the Folgore Airborne Brigade, becoming the 185th Reggimento Ricognizione Acquisizione Obiettivi (Reconnaissance and Target Acquisition Regiment). Long-range reconnaissance patrol by nation: Kenya The Kenya Defence Forces have one LRS unit based in Nairobi. This unit shares LRP missions with the Special Forces Group. Long-range reconnaissance patrol by nation: Netherlands The Korps Commandotroepen and NLMARSOF are LRRP-capable. During the Cold War, the Korps Commandotroepen were known as the Waarneming en Verkenning Compagnie (observation and reconnaissance company) and specialized in staying behind enemy lines. NLMARSOF's C-Squadron consists of two special recon units: Mountain Leaders and Special Forces Underwater Operators. From 1995 until 2010 the 11th Airmobile Brigade Air Assault had three platoons of long-range scouts (RECCE), whose main objectives were battlefield intelligence and direct action. Trained in stay-behind operations and in working in small units, these highly flexible units operated completely on their own in cross-FLOT operations.
Long-range reconnaissance patrol by nation: New Zealand The New Zealand Special Air Service (NZSAS) is New Zealand's special forces branch. The NZSAS served with the Australian SAS Squadron during the Vietnam War and carried out long-range reconnaissance patrols and ambushing of enemy supply routes, mounting 155 patrols over three tours. Long-range reconnaissance patrol by nation: Norway The Norwegian Army has had LRRP operations, Fjernoppklaring (remote reconnaissance), dating back to the 1960s. It was split in two, creating a new group of airborne special forces, Hærens Jegerkommando, and the current LRRP unit, Fjernoppklaringseskadronen. Fjernoppklaringseskadronen is part of the Norwegian Army under Etterretningsbataljonen (the Military Intelligence Battalion). Long-range reconnaissance patrol by nation: Portugal Presently, in the Portuguese Army, LRRP operations are carried out by the Special Operations Forces. The Special Actions Detachment of the Portuguese Navy also carries out LRRP missions, mainly in the scope of amphibious operations. From 1983 to 1993, the Portuguese Army Comando Regiment included the REDES Company, a specialist LRRP unit. Long-range reconnaissance patrol by nation: Serbia LRRP units within the Serbian Army Special Brigade and 72nd Reconnaissance Commando Battalion have been operating since 1992. Long-range reconnaissance patrol by nation: Spain LRRP is carried out in Spain by the Advanced Reconnaissance Parachute Company of the Paratroopers Brigade "Almogávares" VI and the Target Acquisition and Reconnaissance (TAR) Company of the HQ Battalion within the Spanish Marine Infantry. In the past, long-range reconnaissance patrols of Spanish forces have played a notable role in the Bosnian War, especially the deep reconnaissance patrols carried out by the Special Operations Unit (UOE) of the Spanish marines within the multinational battalion. Long-range reconnaissance patrol by nation: Sri Lanka Long-range reconnaissance patrols of the armed forces of Sri Lanka have played a notable role in Sri Lanka's multi-phase military campaign against the Liberation Tigers of Tamil Eelam (LTTE). LRRP members attached to the Special Forces of the Sri Lankan Army have been most successful in carrying out assassinations of high-ranking members of the LTTE. The LRRP concept was developed by Major Sreepathi Gunasekara, who formed a special recon unit named 'Delta Patrols' in 1986, which later evolved into a highly secretive SF LRRP battalion. Special mission units such as the 3rd Commando Regiment and the 3rd Army Special Forces Regiment have specialized LRRP battalions. There are also LRRP units attached to infantry battalions. Long-range reconnaissance patrol by nation: Until the end of the war, the government kept their very existence under wraps. Long-range reconnaissance patrol by nation: United Kingdom In the modern British Army, the Royal Armoured Corps Light Cavalry regiments (1st Queen's Dragoon Guards, Royal Scots Dragoon Guards, Light Dragoons) operate in the Long Range Reconnaissance role. All three units took turns to operate as the Long Range Reconnaissance Group, part of Operation Newcombe, the UK's contingent in the United Nations mission in Mali. This involved deep-penetration, vehicle-mounted patrols into the Sahel scrub and desert for up to four weeks at a time to search for Islamist insurgents.
The Honourable Artillery Company (HAC) and its regular sister unit, 4/73 (Sphinx) Special Observation Post Battery Royal Artillery, currently operate in the surveillance and target acquisition role. During the Second World War, the Long Range Desert Group performed long-range reconnaissance and raiding during the North African Campaign, and during the Cold War, the Corps Patrol Unit (CPU) consisted of 21 and 23 SAS and the HAC. Long-range reconnaissance patrol by nation: Cold War The 21 SAS was stood up in 1947 specifically for the task of letting themselves be bypassed and staying behind in the event of a Soviet invasion of Western Europe; they were later joined by 23 SAS and, in 1973, by the Honourable Artillery Company (HAC), which became a Surveillance and Target Acquisition (STA) Patrol Regiment providing Stay-Behind Observation Posts (SBOP) with its three squadrons, each with a number of four- to six-man patrols. The HAC provided SBOP capabilities to the HQs of 1st Artillery Brigade (HQ Sqn HAC), 1 Armoured Division (I Sqn HAC), 4 Armoured Division (II Sqn HAC), and 1 BR Corps (III Sqn HAC), with one 'sabre' squadron each. Long-range reconnaissance patrol by nation: United States World War II The predecessor of the U.S. Army's LRRP teams was the U.S. Sixth Army Special Reconnaissance Unit, better known as the Alamo Scouts. In the South West Pacific Theater of Operations, the Alamo Scouts conducted over 110 intelligence-gathering missions behind enemy lines throughout New Guinea and the Philippines during 1944–45. General Walter Krueger established the Alamo Scouts Training Center to train candidates in long-range reconnaissance patrol techniques, including rubber boat handling, intelligence gathering, report writing, scouting and patrolling, jungle navigation, communications, weapons training, and camouflage. Of those that successfully completed the rigorous course, 138 became full-time Alamo Scouts, while the others returned to their units to serve as reconnaissance troops. After Japan's surrender, the Alamo Scouts Training Center was closed down and the unit was disbanded. In 1988, the U.S. Army retroactively awarded members of the Alamo Scouts the Special Forces tab due to their wartime record and the techniques they pioneered. Long-range reconnaissance patrol by nation: In Germany The modern US Army long-range reconnaissance patrol concept was created in 1956 by the 11th Airborne Division in Augsburg, Germany. They patrolled near the Czechoslovakian and East German borders, both countries then members of the Communist Warsaw Pact, and in the event of war in Europe would be inserted behind enemy lines to provide surveillance and to select targets of opportunity. The LRRP concept was well known throughout the Army, though concentrated in the 7th Army in Germany. Provisional LRRP companies made up of both trained LRRPs and regular soldiers were put together for a series of exercises called Wintershield and proved themselves in the field. After the 11th Airborne Division was inactivated on 1 July 1958, the Department of the Army authorized two airborne LRRP companies in 1961: Company D, 17th Infantry and Company C, 58th Infantry, based at the Wildflecken and Nellingen Barracks (near Stuttgart) and assigned respectively to V Corps and VII Corps. In 1963, the V Corps LRRPs (Company D) transferred to the Gibbs Kaserne in Frankfurt, near Corps HQ. In 1965, these companies developed the first LRRP Table of Organization and Equipment and in doing so increased their strength to 208 men and team size from four to five men, as well as adding an organic transport component.
All LRRPs from team leader and above were to be Ranger qualified. The experiences of these two companies formed the basis of the first US Army LRRP manual. Both companies used carrier-wave (Morse code) radios, including the AN/TRC-77, for long-range communications to their respective Corps G2 (Intelligence) center. In 1968, both companies were transferred to the United States, but neither was sent to Vietnam because they retained their status as LRRP units for V and VII Corps in the event of war in Europe. All LRRPs were redesignated as "Ranger" on 1 February 1969, and these two units (Companies C and D) respectively became Companies B and A, 75th Infantry (Ranger). They were the only Ranger units to remain on active duty at the end of the Vietnam War and they continued in service until November 1974, when they were inactivated, with most of their personnel forming the core of the new 1st and 2d Battalions (Ranger), 75th Infantry. Long-range reconnaissance patrol by nation: In Italy In the 1960s, the U.S. Army Southern European Task Force (SETAF) utilized the Airborne Recon Platoon of the 1st Combat Aviation Company (Provisional), located in Verona, Italy. They provided reconnaissance missions as well as target acquisition and battle damage assessment for SETAF, which was a missile command. The Airborne Recon Platoon was an LRRP unit that served as the "eyes and ears" of SETAF. During 1961–62, Lieutenant James D. James commanded the platoon. Three years later, in 1965, when Captain James served in Vietnam with the 1st Cavalry Division, he utilized much of the tactics, structure, and doctrine of the Airborne Recon Platoon when creating Company E, 52nd Infantry (LRP). Captain James retired from the army as a colonel. Long-range reconnaissance patrol by nation: In Vietnam In December 1965, the 1st Brigade, 101st Airborne Division, formed an LRRP platoon, and by April 1966 the 1st Infantry Division, 25th Infantry Division and each of the four battalions of the 173rd Airborne Brigade had formed LRRP units as well. On 8 July 1966, General William Westmoreland authorized the formation of an LRRP unit in each infantry brigade or division in Vietnam. By 1967 formal LRRP companies were organized, most having three platoons, each with five six-man teams equipped with VHF/FM AN/PRC-25 radios. LRRP training was notoriously rigorous, and team leaders were often graduates of the U.S. Army's 5th Special Forces Recondo School in Nha Trang, Vietnam. Tiger Force was the nickname of an infamous long-range reconnaissance patrol unit of the 1st Battalion (Airborne), 327th Infantry Regiment, 1st Brigade (Separate), 101st Airborne Division, which fought in the Vietnam War and was responsible for counterinsurgency operations against the North Vietnamese People's Army of Vietnam (PAVN) and Viet Cong. The platoon-sized unit, approximately 45 paratroopers, was founded by Colonel David Hackworth in November 1965 to "outguerrilla the guerrillas". Tiger Force (Recon) 1/327th was a highly decorated small unit in Vietnam, and paid for its reputation with heavy casualties. In October 1968, Tiger Force's parent battalion was awarded the Presidential Unit Citation by President Lyndon B.
Johnson, which included a mention of Tiger Force's service at Đắk Tô in June 1966. Since satellite communications were a thing of the future, one of the most daring long-range penetration operations of the war was launched by members of Company E, 52nd Infantry (LRP) of the 1st Air Cavalry Division, against the PAVN when they seized "Signal Hill", the name attributed to the peak of Dong Re Lao Mountain, a densely forested 4,879-foot (1,487 m) mountain midway up the A Shau Valley, so that its 1st and 3rd Brigades, who would be fighting behind a wall of mountains, could communicate with Camp Evans near the coast or with approaching aircraft. Long-range reconnaissance patrol by nation: The US Marine Corps also performed long-range reconnaissance missions, typically assigned to Marine Recon, especially Force Recon at the corps (i.e., Marine Expeditionary Force (MEF)) level, as opposed to the Battalion Recon units answering to battalion commanders. Marine Recon teams were typically twice as large as Army LRRPs and more heavily armed, however, sacrificing a degree of stealth. In addition, the Marines did not employ indigenous Montagnards as front and rear scouts as Army LRRPs and Special Forces teams did, a practice which proved invaluable in confusing the enemy if contact was made. The tactical employment of LRRPs was later evaluated to have been generally far too dangerous, driven by commanders who were pleased by the kill ratios of LRRP teams (reported as high as 400 enemy troops for every LRRP killed). One commentator writes: "During the course of the war Lurps conducted around 23,000 long-range patrols, of this amount two-thirds resulted in enemy sightings." LRRPs also accounted for approximately 10,000 enemy KIA through ambushes, air strikes, and artillery. Long-range reconnaissance patrol by nation: In February 1969, all US Army LRRP units were folded into the newly formed 75th Infantry Regiment (Ranger), a predecessor of the 75th Ranger Regiment, bringing back operational Ranger units for the first time since the Korean War. The Army had inactivated Ranger units after Korea, but kept Ranger School, on the premise that spreading Ranger School graduates throughout the Army would improve overall performance. The initial Ranger companies formed in 1969 were: "A" V Corps, Fort Hood, Texas; "B" VII Corps, Fort Lewis, Washington; "C" I Field Force, Vietnam; "D" II Field Force, Vietnam; "E" 9th Infantry Division, Vietnam; "F" 25th Infantry Division, Vietnam; "G" 23d Infantry Division, Vietnam; "H" 1st Cavalry Division, Vietnam; "I" 1st Infantry Division, Vietnam; "K" 4th Infantry Division, Vietnam; "L" 101st Airborne Division, Vietnam; "M" 199th Light Infantry Brigade, Vietnam; "N" 173d Airborne Brigade, Vietnam; "O" 3d Brigade, 82d Airborne Division, Vietnam; and "P" 1st Brigade, 5th Infantry Division (Mechanized), Vietnam. Following its mobilization for Vietnam service, Company D (LRP), 151st Infantry of the Indiana Army National Guard completed its tour in Vietnam and, as it departed, Company D (Ranger), 75th Infantry was raised to replace it. Company F (LRP), 425th Infantry of the Michigan Army National Guard was not mobilized or sent to Vietnam. As National Guard units, both D-151st and F-425th retained their regimental designations and were not reflagged as companies of the 75th Infantry.
Long-range reconnaissance patrol by nation: As the Vietnam War matured, I Field Force LRRPs widened their area of operation to include I Corps and II Corps, and II Field Force LRRPs likewise covered III Corps and IV Corps. During the War on Terror, Long Range Recon units (LRS-D, or Long Range Surveillance Detachments) were used to conduct high-value-target and small-kill-team operations deep in hostile territory. LRS-D units were inactivated in 2017 and personnel were absorbed into R&S (Recon and Surveillance) Teams. Long-range reconnaissance patrol by nation: The legacy of LRRP units later continued with the U.S. Army's Long Range Surveillance (both detachments and companies), which have since been dropped from the force structure and inactivated, and with the Reconnaissance, Surveillance, and Target Acquisition squadrons. NATO International Long Range Reconnaissance Patrol School: In 1977, Belgium, the Netherlands, and the United Kingdom sent instructors to Germany to work on the planning of an international long-range reconnaissance patrol (LRRP) school. From 1979 onward, joint training for LRRP and military stay-behind units was conducted at NATO's International Long Range Reconnaissance Patrol School (ILRRPS) in Weingarten, Germany, under the lead of UK SF. British SAS, German Fernspäher, Dutch Marines, Belgian Para-Commandos, US SF, and others worked and trained together on a daily basis. ILRRPS provided specialist training to allow soldiers to operate effectively in gathering intelligence behind enemy lines. Courses included Long Range Reconnaissance, Combat Survival (E&E and resistance to interrogation), Advanced WP Specialist Recognition, Close Quarter Battle and so on. TRISTAR, a NATO LRRP exercise originally sponsored by the SAS, was conducted annually. NATO International Long Range Reconnaissance Patrol School: In May 2001, the ILRRPS was renamed the International Special Training Center (ISTC).
**Projection (alchemy)** Projection (alchemy): Projection was the ultimate goal of Western alchemy. Once the philosopher's stone or powder of projection had been created, the process of projection would be used to transmute a lesser substance into a higher form; often lead into gold. Typically, the process is described as casting a small portion of the Stone into a molten base metal. Claims and demonstrations: The seventeenth century saw an increase in tales of physical transmutation and projection. These are variously explained as examples of charlatanism, fiction, pseudo-scientific error, or missed metaphor. The following is a typical account of the projection process described by Jan Baptista van Helmont in his De Natura Vitae Eternae. Claims and demonstrations: I have seen and I have touched the Philosopher’s Stone more than once. The color of it was like saffron in powder, but heavy and shining like pounded glass. I had once given me the fourth of a grain - I call a grain that which takes 600 to make an ounce. I made projection with this fourth part of a grain wrapped in paper upon eight ounces of quicksilver heated in a crucible. The result of the projection was eight ounces, lacking eleven grains, of the most pure gold. Claims and demonstrations: Other reports include: Elias Ashmole's Theatrum Chemicum Britannicum lists an account of Edward Kelley making projections from lesser metals into both gold and silver. Kelley's success is also recorded by John Dee. Alexander Seton was reported to have projected a heavy yellow powder onto a mixture of lead and sulphur resulting in a button of gold. A variety of accounts are given of Sendivogius performing public transmutations. Claims and demonstrations: In legend, Nicolas Flamel makes a projection of the red stone onto mercury, making gold.While it may not account for all claims of metallic transmutation, some alchemists of this time period give accounts of fraudulent projection demonstrations, distinguishing themselves from the projectors. Maier's Examen Fucorum Pseudo-chymicorum and Khunrath's Treuhertzige Warnungs-Vermahnung list tricks used by pseudo-alchemists. Accounts are given of double-bottomed crucibles used to conceal hidden gold during projection demonstrations. In art and entertainment: The concept of projection appears in various fictional works related to alchemy. It's a notable theme in Ben Jonson's The Alchemist where the following dialogue can be found, commenting on fraudulent applications of projection:
**Pars intermedia** Pars intermedia: Pars intermedia is the boundary between the anterior and posterior lobes of the pituitary. It contains colloid-filled cysts and two types of cells - basophils and chromophobes. The cysts are the remnant of Rathke's pouch. Technically part of the anterior pituitary, it separates the posterior pituitary from the pars distalis. It is composed of large, pale cells that encompass the aforementioned colloid-filled follicles. In human fetal life, this area produces melanocyte-stimulating hormone (MSH), which causes the release of melanin pigment in skin melanocytes (pigment cells). However, the pars intermedia is normally either very small or entirely absent in adulthood. Pars intermedia: In lower vertebrates (fish, amphibians), MSH from the pars intermedia is responsible for darkening of the skin, often in response to changes in background color. This color change is due to MSH stimulating the dispersion of melanin pigment in dermal (skin) melanophore cells.
**IBM Office/36** IBM Office/36: Office/36 was a suite of applications marketed by IBM from 1983 to 2000 for the IBM System/36 family of midrange computers. IBM announced its System/36 Office Automation (OA) strategy in 1985. Office/36 could be purchased in its entirety or piecemeal. Components of Office/36 include IDDU/36, the Interactive Data Definition Utility; Query/36, the Query utility; DisplayWrite/36, a word processing program; and Personal Services/36, a calendaring system and an office messaging utility. Query/36 was not quite the same as SQL, but it had some similarities, especially the ability to very rapidly create a displayed recordset from a disk file. Note that SQL, also an IBM development, had not been standardized prior to 1986. DisplayWrite/36, in the same category as Microsoft Word, had online dictionaries, definition capabilities, and spell-check, and unlike the standard S/36 products, it would straighten spillover text and scroll in real time. IBM Office/36: Considerable changes were required to the S/36 design to support Office/36 functionality, not the least of which was the capability to manage new container objects called "folders" and produce multiple extents to them on demand. Q/36 and DW/36 typically exceeded the 64K program limit of the S/36, both in editing and printing, so using Office products could heavily impact other applications. DW/36 allowed use of bold, underline, and other display formatting characteristics in real time.
**Creatine transporter defect** Creatine transporter defect: Creatine transporter deficiency (CTD) is an inborn error of creatine metabolism in which creatine is not properly transported to the brain and muscles due to defective creatine transporters. CTD is an X-linked disorder caused by mutation in SLC6A8. SLC6A8 is located at Xq28. Hemizygous males with CTD express speech and behavior abnormalities, intellectual disabilities, development delay, seizures, and autistic behavior. Heterozygous females with CTD generally express fewer, less severe symptoms. CTD is one of three different types of cerebral creatine deficiency (CCD). The other two types of CCD are guanidinoacetate methyltransferase (GAMT) deficiency and L-arginine:glycine amidinotransferase (AGAT) deficiency. Clinical presentation of CTD is similar to that of GAMT and AGAT deficiency. CTD was first identified in 2001 with the presence of a hemizygous nonsense change in SLC6A8 in a male patient. Signs and symptoms: Generally, the majority of individuals with creatine transporter defect express the following symptoms with varying levels of severity: developmental delay and regression, intellectual disability, and abnormalities in expressive and cognitive speech. However, several studies have shown a wider variety of symptoms including, but not limited to attention deficit and hyperactivity with impulsivity, myopathy, hypotonia, semantic-pragmatic language disorder, oral dyspraxia, extrapyramidal movement disorder, constipation, absent speech development, seizures, and epilepsy. Furthermore, symptoms can significantly vary between hemizygous males and heterozygous females, although, symptoms are generally more severe in hemizygous males. Hemizygous males more commonly express seizures, growth deficiency, severe intellectual disability, and severe expressive language impairment. Heterozygous females more commonly express mild intellectual disability, impairments to confrontational naming and verbal memory, and learning and behavior problems. Genetics: CTD is caused by pathogenic variants in SLC6A8, located at Xq28. SLC6A8 contains 13 exons and spreads across 8.5 kb of genomic DNA (gDNA). The presence of hemizygous variants in males and heterozygous variants in females in SLC6A8 provides evidence that CTD is inherited in an X-linked recessive manner. This usually results in hemizygous males having severe symptoms, while heterozygous female carriers tend to have less severe and more varying symptoms. Mechanism: The creatine phosphate system is needed for the storage and transmission of phosphate-bound energy in the brain and muscle. The brain and muscle have particularly high metabolic demands, therefore, making creatine a necessary molecule in ATP homeostasis. In regard to the brain, in order for creatine to reach the brain, it must first pass through the blood–brain barrier (BBB). The BBB separates blood from brain interstitial fluid and is, therefore, able to regulate the transfer of nutrients to the brain from the blood. In order to pass through the BBB, creatine utilizes creatine transporter (CRT). When present at the BBB, CRT mediates the passage of creatine from the blood to the brain. When being transported from the blood to the brain, creatine has to constantly move against the creatine concentration gradient that is present at the border between the brain and circulating blood. 
Diagnosis: The diagnosis of CTD is usually suspected based on the clinical presentation of intellectual disability, abnormalities in cognitive and expressive speech, and developmental delay. Furthermore, a family history of X-linked intellectual disability, developmental coordination disorder, and seizures is strongly suggestive. Initial screening of CTD involves obtaining a urine sample and measuring the ratio of creatine to creatinine. If the ratio of creatine to creatinine is greater than 1.5, then the presence of CTD is highly likely. This is because a large ratio indicates a high amount of creatine in the urine. This, in turn, indicates inadequate transport of creatine into the brain and muscle. However, the urine screening test often fails in diagnosing heterozygous females. Studies have demonstrated that as a group heterozygous females have significantly decreased cerebral creatine concentration, but that individual heterozygous females often have normal creatine concentrations found in their urine. Therefore, urine screening tests are unreliable as a standard test for diagnosing CTD, particularly in females.A more reliable and sophisticated manner of testing for cerebral creatine concentrations is through in vivo proton magnetic resonance spectroscopy (1H MRS). In vivo 1H MRS uses proton signals to determine the concentration of specific metabolites. This method of testing is more reliable because it provides a fairly accurate measurement of the amount of creatine inside the brain. Similar to urine testing, a drawback of using 1H MRS as a test for CTD is that the results of the test could be attributed to any of the cerebral creatine deficiencies. The most accurate and reliable method of testing for CTD is through DNA sequence analysis of SLC6A8. DNA analysis of SLC6A8 allows the identification of the location and type of variant causing the cerebral creatine deficiency. Furthermore, DNA analysis of SLC6A8 is able to prove that a cerebral creatine deficiency is due to CTD and not GAMT or AGAT deficiency. Treatment: CTD is difficult to treat because the actual transporter responsible for transporting creatine to the brain and muscles is defective. Affected individuals have sufficient amounts of creatine, however it cannot get to the tissues where it is needed. Studies in which oral creatine monohydrate supplements were given to patients with CTD found that patients did not respond to treatment. However, similar studies conducted in which patients that had GAMT or AGAT deficiency were given oral creatine monohydrate supplements found that patient's clinical symptoms improved. Patients with CTD are unresponsive to oral creatine monohydrate supplements because regardless of the amount of creatine they ingest, the creatine transporter is still defective, and therefore creatine is incapable of being transported across the BBB. Given the major role that the BBB has in the transport of creatine to the brain and unresponsiveness of oral creatine monohydrate supplements in CTD patients, future research will focus on working with the BBB to deliver creatine supplements. However, given the limited number of patients that have been identified with CTD, future treatment strategies must be more effective and efficient when recognizing individuals with CTD. Patient Support: There are two organizations that support patients and families affected by Creatine Transporter Defect (CTD). 
The international organization, Association for Creatine Deficiencies (ACD), supports patients with all three Cerebral Creatine Deficiency Syndromes (CCDS), including AGAT Deficiency, GAMT Deficiency, and CTD. Their mission is to advance education, diagnosis, and research of CCDS. ACD's website is creatineinfo.org. The French organization, Xtraordinaire, represents families with relatives with mental retardation linked to the X chromosome, including CTD. Xtraordinaire's website is xtraordinaire.org.
**M/G/k queue** M/G/k queue: In queueing theory, a discipline within the mathematical theory of probability, an M/G/k queue is a queue model where arrivals are Markovian (modulated by a Poisson process), service times have a general distribution and there are k servers. The model name is written in Kendall's notation, and is an extension of the M/M/c queue, where service times must be exponentially distributed, and of the M/G/1 queue, which has a single server. Most performance metrics for this queueing system are not known and remain an open problem. Model definition: A queue represented by an M/G/k queue is a stochastic process whose state space is the set {0,1,2,3...}, where the value corresponds to the number of customers in the queue, including any being served. Transitions from state i to i + 1 represent the arrival of a new customer: the times between such arrivals have an exponential distribution with parameter λ. Transitions from state i to i − 1 represent the departure of a customer who has just finished being served: the length of time required for serving an individual customer has a general distribution function. The lengths of times between arrivals and of service periods are random variables which are assumed to be statistically independent. Steady state distribution: Tijms et al. believe it is "not likely that computationally tractable methods can be developed to compute the exact numerical values of the steady-state probability in the M/G/k queue." Various approximations for the average queue size, stationary distribution and approximation by a reflected Brownian motion have been offered by different authors. Recently a new approximate approach based on the Laplace transform for steady-state probabilities has been proposed by Hamzeh Khazaei et al. This new approach remains accurate enough when the number of servers is large and when the service-time distribution has a coefficient of variation greater than one. Average delay/waiting time: There are numerous approximations for the average delay a job experiences. The first such approximation was given in 1959, using a factor to adjust the mean waiting time in an M/M/c queue. This result is sometimes known as Kingman's law of congestion. Average delay/waiting time: E[W^{M/G/k}] ≈ ((C² + 1)/2) E[W^{M/M/k}], where C is the coefficient of variation of the service time distribution. Ward Whitt described this approximation as "usually an excellent approximation, even given extra information about the service-time distribution." However, it is known that no approximation using only the first two moments can be accurate in all cases. A Markov–Krein characterization has been shown to produce tight bounds on the mean waiting time. Inter-departure times: It is conjectured that the times between departures, given a departure leaves n customers in a queue, have a mean which, as n tends to infinity, is different from the intuitive 1/μ result. Two servers: For an M/G/2 queue (the model with two servers) the problem of determining marginal probabilities can be reduced to solving a pair of integral equations, or the Laplace transform of the distribution when the service time distribution is a mixture of exponential distributions. The Laplace transform of queue length and waiting time distributions can be computed when the waiting time distribution has a rational Laplace transform.
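The two-moment approximation for the mean waiting time above can be evaluated numerically by combining it with the Erlang C formula for the underlying M/M/k queue. The following is a minimal Python sketch under that assumption; the function names and the example parameters are illustrative and not part of the source material.

```python
import math

def erlang_c(k: int, rho: float) -> float:
    """Erlang C formula: probability that an arriving job must wait
    in an M/M/k queue, where rho = lambda / mu is the offered load."""
    if rho >= k:
        raise ValueError("unstable queue: offered load must be less than k")
    series = sum(rho**n / math.factorial(n) for n in range(k))
    tail = rho**k / (math.factorial(k) * (1 - rho / k))
    return tail / (series + tail)

def mmk_mean_wait(lam: float, mu: float, k: int) -> float:
    """Mean time spent waiting in queue for an M/M/k system."""
    rho = lam / mu
    return erlang_c(k, rho) / (k * mu - lam)

def mgk_mean_wait(lam: float, mu: float, k: int, cv: float) -> float:
    """Two-moment approximation E[W^{M/G/k}] ~ ((C^2 + 1)/2) * E[W^{M/M/k}],
    with cv the coefficient of variation of the service-time distribution."""
    return (cv**2 + 1) / 2 * mmk_mean_wait(lam, mu, k)

# Illustrative example: 3 servers, arrival rate 2 jobs/min, mean service
# time 1 min (mu = 1), service-time coefficient of variation 1.5.
print(mgk_mean_wait(lam=2.0, mu=1.0, k=3, cv=1.5))  # about 0.72 min
```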
**Blocking below the waist** Blocking below the waist: In gridiron football, blocking below the waist is an illegal block, from any direction, below the waist by any defensive player, by an offensive player under certain situations, or by any player after a change of possession, with certain exceptions. It is sometimes incorrectly referred to as a "chop block". Such blocks are banned due to the risk of injury, particularly injuries to the knee and ankle. The penalty for a block below the waist is 15 yards in the NFL, the NCAA, and high school. The block is illegal unless it is against the ball carrier. In the NFL, blocking below the waist is illegal during kicking plays and after a change of possession. Illegal crackback blocks, peel-back blocks and cut blocks are called at other times when an illegal block is made below the waist. Blocking below the waist: It was during the 1970s that the rules prohibiting these blocks were instituted in various leagues. Blocking below the waist was initially banned in 1970 in the NCAA after a unanimous vote.
**Wireless Session Protocol** Wireless Session Protocol: Wireless Session Protocol (WSP) is an open standard for maintaining high-level wireless sessions. The protocol is involved from the moment the user connects to a URL and ends when the user leaves that URL. The session-wide properties are defined once at the beginning of the session, saving bandwidth compared with continuous monitoring. Session establishment does not require lengthy connection-setup algorithms. WSP is based on HTTP 1.1 with a few enhancements. WSP provides the upper-level application layer of WAP with a consistent interface for two session services. The first is a connection-oriented service that operates above the transaction layer protocol WTP, and the second is a connectionless service that operates above a secure or non-secure datagram transport service. Therefore, WSP exists for two reasons. First, the connection mode enhances HTTP 1.1's performance over the wireless environment. Second, it provides a session layer so that the whole WAP environment resembles the ISO OSI Reference Model.
**Earth inductor compass** Earth inductor compass: The Earth inductor compass (or simply induction compass) is a compass that determines directions using the principle of electromagnetic induction, with the Earth's magnetic field acting as the induction field for an electric generator. The electrical output of the generator will vary depending on its orientation with respect to the Earth's magnetic field. This variation in the generated voltage is measured, allowing the Earth inductor compass to determine direction. History: The earth inductor compass was first patented by Donald M. Bliss in 1912 and further refined in the 1920s by Paul R. Heyl and Lyman James Briggs of the United States National Bureau of Standards, and in 1924 by Morris Titterington at the Pioneer Instrument Company in Brooklyn, New York. Heyl and Briggs were awarded the Magellan Medal of the American Philosophical Society for this work in 1922. Designed to compensate for the weaknesses of the magnetic compass, the Earth inductor compass provided pilots with a more stable and reliable reference instrument. They were used in the Douglas World Cruisers in 1924 during the Around-the-World flight by the U.S. Army Air Corps. Charles Lindbergh used the compass on his transatlantic flight in the Spirit of St. Louis in 1927. Over the transatlantic leg of his voyage – a distance of about 2,000 miles (3,200 km) – he was able to navigate with a cumulative error of about 10 miles (16 km) in landfall, or about one half of one percent of the distance travelled, by computing his heading at hourly intervals for a dead reckoning estimate of position. Operation: Bliss' original design consisted of two armatures spinning on a single vertical axle. One armature was connected to commutators that were 90 degrees offset from the commutators connected to the other armature. When one set of commutators is aligned with the earth's magnetic field, no current is produced, but an offset angle creates a positive or negative current in proportion to the sine of the offset angle. Since the sine of the angle peaks at 90 degrees, a reading could indicate either a certain direction or the exact opposite direction. The solution to this was a second armature with commutators offset by 90 degrees to help distinguish the two opposite directions. Operation: The direction of travel was read by comparing the indications on two independent galvanometers, one for each armature. The galvanometers had to be calibrated with the correct headings, since the voltage was proportional to the sine of the angle. Readings could be impacted by the armature's speed of rotation and by stray magnetic fields. Operation: Later versions simplified readings to show the offset from the intended heading, rather than the full range of compass directions. The revised design allowed the user to rotate the commutators in such a way that zero current would be produced when the craft was traveling in the intended direction. A single galvanometer was then used to show if the pilot was steering too far to the left or to the right. Operation: Lindbergh's compass used an anemometer to spin the armature through a universal joint. The armature was mounted on gimbals to prevent it from tilting with the airplane's pitch and roll. Tilting the armature could have changed the angle of the Earth's flux to the armature, resulting in erroneous readings. The gyroscopic effect of the spinning armature also helped to keep it properly aligned.
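One way to see why the second, 90-degree-offset armature resolves the directional ambiguity is that the two outputs behave like a sine/cosine pair, from which the full heading can be recovered. The following Python sketch illustrates that idea under those assumptions; the function and variable names are hypothetical and are not taken from the historical designs described above.

```python
import math

def heading_from_armatures(v_sine: float, v_cosine: float) -> float:
    """Recover the heading offset (in degrees) from two armature outputs.

    A single armature's voltage is proportional to sin(offset), which by
    itself is ambiguous; the second armature, offset by 90 degrees, supplies
    the cosine component, and atan2 resolves the full 0-360 degree range.
    The unknown proportionality constant cancels because it scales both."""
    return math.degrees(math.atan2(v_sine, v_cosine)) % 360.0

# Illustrative check: readings proportional to sin(210 deg) and cos(210 deg)
# recover a 210-degree offset regardless of the scale factor.
scale = 2.5
print(round(heading_from_armatures(scale * math.sin(math.radians(210)),
                                   scale * math.cos(math.radians(210)))))  # 210
```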
Patents: US 1047157 (granted), Bliss, Donald M., "Device for Determining Direction", issued 1912-12-17, assigned to Henschel, Charles J.; US 1840911 (granted), N. Minorsky, "Induction compass", published 1932-01-12, assigned to N. Minorsky; US 2434324 (granted), Lehde, Henry, "Earth inductor compass", issued 1948-01-13, assigned to Control Instrument Company Inc.; GB 314786 (granted), Vion, Eugene, "Improvements in electro-magnetic apparatus for the observation and correction of travel of aerial and marine craft", issued 1929-12-23.
**Event (computing)** Event (computing): In programming and software design, an event is an action or occurrence recognized by software, often originating asynchronously from the external environment, that may be handled by the software. Computer events can be generated or triggered by the system, by the user, or in other ways. Typically, events are handled synchronously with the program flow; that is, the software may have one or more dedicated places where events are handled, frequently an event loop. A source of events includes the user, who may interact with the software through the computer's peripherals - for example, by typing on the keyboard. Another source is a hardware device such as a timer. Software can also trigger its own set of events into the event loop, e.g. to communicate the completion of a task. Software that changes its behavior in response to events is said to be event-driven, often with the goal of being interactive. Description: Event-driven systems are typically used when there is some asynchronous external activity that needs to be handled by a program; for example, a user who presses a button on their mouse. An event-driven system typically runs an event loop that keeps waiting for such activities, e.g. input from devices or internal alarms. When one of these occurs, it collects data about the event and dispatches the event to the event handler software that will deal with it. Description: A program can choose to ignore events, and there may be libraries to dispatch an event to multiple handlers that may be programmed to listen for a particular event. The data associated with an event at a minimum specifies what type of event it is, but may include other information such as when it occurred, who or what caused it to occur, and extra data provided by the event source to the handler about how the event should be processed. Description: Events are typically used in user interfaces, where actions in the outside world (mouse clicks, window-resizing, keyboard presses, messages from other programs, etc.) are handled by the program as a series of events. Programs written for many windowing environments consist predominantly of event handlers. Events can also be used at the instruction set level, where they complement interrupts. Compared to interrupts, events are normally implemented synchronously: the program explicitly waits for an event to be generated and handled (typically by calling an instruction that dispatches the next event), whereas an interrupt can demand immediate service. Delegate event model: A common variant in object-oriented programming is the delegate event model, which is provided by some graphical user interfaces. This model is based on three entities: a control, which is the event source; listeners, also called event handlers, that receive the event notification from the source; and interfaces (in the broader meaning of the term) that describe the protocol by which the event is to be communicated. Furthermore, the model requires that every listener must implement the interface for the event it wants to listen to; every listener must register with the source to declare its desire to listen to the event; and every time the source generates the event, it communicates it to the registered listeners, following the protocol of the interface. C# uses events as special delegates that can only be fired by the class that declares them, which allows for better abstraction.
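As a rough illustration of the registration contract just described, here is a minimal sketch in Python rather than C# or any actual GUI toolkit; the class and method names (Button, add_click_listener, on_click) are hypothetical.

```python
class Button:
    """Event source: keeps the list of listeners registered with it."""
    def __init__(self):
        self._listeners = []

    def add_click_listener(self, listener):
        """Listeners register with the source to receive its events."""
        self._listeners.append(listener)

    def click(self):
        """Only the source fires the event; every registered listener is notified."""
        event = {"type": "click", "source": self}
        for listener in self._listeners:
            listener.on_click(event)

class Logger:
    """Listener: implements the agreed interface (an on_click method)."""
    def on_click(self, event):
        print("received", event["type"], "from", event["source"])

button = Button()
button.add_click_listener(Logger())
button.click()  # the Logger listener prints the click event it received
```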
Event handler: In computer programming, an event handler may be implemented using a callback subroutine that handles inputs received in a program (called a listener in Java and JavaScript). Each event is a piece of application-level information from the underlying framework, typically the GUI toolkit. GUI events include key presses, mouse movement, action selections, and timers expiring. On a lower level, events can represent the availability of new data for reading a file or network stream. Event handlers are a central concept in event-driven programming. Event handler: The events are created by the framework based on interpreting lower-level inputs, which may be lower-level events themselves. For example, mouse movements and clicks are interpreted as menu selections. The events initially originate from actions on the operating system level, such as interrupts generated by hardware devices, software interrupt instructions, or state changes in polling. On this level, interrupt handlers and signal handlers correspond to event handlers. Event handler: Created events are first processed by an event dispatcher within the framework. It typically manages the associations between events and event handlers, and may queue event handlers or events for later processing. Event dispatchers may call event handlers directly, or wait for events to be dequeued with information about the handler to be executed. Event notification: Event notification is a term used in conjunction with communications software for linking applications that generate small messages (the "events") to applications that monitor the associated conditions and may take actions triggered by events. Event notification: Event notification is an important feature in modern database systems (used to inform applications when conditions they are watching for have occurred), modern operating systems (used to inform applications when they should take some action, such as refreshing a window), and modern distributed systems, where the producer of an event might be on a different machine than the consumer, or consumers. Event notification platforms are normally designed so that the application producing events does not need to know which applications will consume them, or even how many applications will monitor the event stream. Event notification: It is sometimes used as a synonym for publish-subscribe, a term that relates to one class of products supporting event notification in networked settings. The virtual synchrony model is sometimes used to endow event notification systems, and publish-subscribe systems, with stronger fault-tolerance and consistency guarantees. User-generated events: There are a large number of situations or events that a program or system may generate or respond to. Some common user-generated events include: Mouse events A pointing device can generate a number of software-recognisable pointing device gestures. A mouse can generate a number of mouse events, such as mouse move (including direction of move and distance), mouse left/right button up/down, and mouse wheel motion, or a combination of these gestures. For example, double-clicks commonly select words or characters within a boundary, and triple-clicks select entire paragraphs. User-generated events: Keyboard events Pressing a key on a keyboard or a combination of keys generates a keyboard event, enabling the program currently running to respond to the introduced data, such as which key(s) the user pressed.
Joystick events Moving a joystick generates an X-Y analogue signal. Joysticks often have multiple buttons to trigger events. Some gamepads for popular game consoles use joysticks. Touchscreen events The events generated using a touchscreen are commonly referred to as touch events or gestures. Device events Device events include actions by or to a device, such as a shake, tilt, rotation, or movement.
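Tying together the event loop, the dispatcher, and the handlers described above, the following is a minimal Python sketch of a queue-based event loop; the names (handlers, post, run_once) and the key-press example are hypothetical and not taken from any particular framework.

```python
import queue

handlers = {}           # event type -> list of handler callables
events = queue.Queue()  # pending events waiting to be dispatched

def register(event_type, handler):
    """Associate a handler with an event type in the dispatcher table."""
    handlers.setdefault(event_type, []).append(handler)

def post(event):
    """Queue an event for later processing by the event loop."""
    events.put(event)

def run_once():
    """Wait for the next event, then dispatch it to its registered handlers."""
    event = events.get()  # blocks until an event is available
    for handler in handlers.get(event["type"], []):
        handler(event)

register("key_press", lambda event: print("pressed", event["key"]))
post({"type": "key_press", "key": "a"})
run_once()  # prints: pressed a
```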
**Tetrahedron Prize** Tetrahedron Prize: The Tetrahedron Prize for Creativity in Organic Chemistry or Bioorganic and Medicinal Chemistry is awarded annually by Elsevier, the publisher of Tetrahedron Publications. It was established in 1980 and named in honour of the founding co-chairmen of these publications, Professor Sir Robert Robinson and Professor Robert Burns Woodward. The prize consists of a gold medal, a certificate, and a monetary award of US $15,000.
**Perillyl-alcohol dehydrogenase** Perillyl-alcohol dehydrogenase: In enzymology, a perillyl-alcohol dehydrogenase (EC 1.1.1.144) is an enzyme that catalyzes the chemical reaction perillyl alcohol + NAD+ ⇌ perillyl aldehyde + NADH + H+Thus, the two substrates of this enzyme are perillyl alcohol and NAD+, whereas its 3 products are perillyl aldehyde, NADH, and H+. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is perillyl-alcohol:NAD+ oxidoreductase. This enzyme is also called perillyl alcohol dehydrogenase. This enzyme participates in limonene and pinene degradation.
**Buddhist cosmology** Buddhist cosmology: Buddhist cosmology is the description of the shape and evolution of the Universe according to Buddhist scriptures and commentaries. It consists of a temporal and a spatial cosmology. The temporal cosmology describes the timespan of the creation and dissolution of alternate universes in different aeons. The spatial cosmology consists of a vertical cosmology, the various planes of beings, into which beings are reborn due to their merits and development; and a horizontal cosmology, the distribution of these world-systems into an infinite sheet of existential dimensions included in the cycle of samsara. The entire universe is said to be made up of the five basic elements: Earth, Water, Fire, Air and Space. Buddhist cosmology is also intertwined with the belief in Karma. As a result, some ages are filled with prosperity and peace due to common goodness, whereas other eras are filled with suffering, dishonesty and short lifespans. Meaning and origin: Course of rebirth and liberation The Buddhist cosmology is not a literal description of the shape of the universe; rather, it is the universe as seen through the divyacakṣus (Pali: dibbacakkhu दिब्बचक्खु), the "divine eye" by which a Buddha or an arhat can perceive all beings arising (being born) and passing away (dying) within various worlds; and can tell from what state they have been reborn, and into which state they will be reborn. Meaning and origin: Beings can be reborn as devas (gods and Brahmas), humans, animals, asuras (titans), pretas ("hungry ghosts"), and as inhabitants of the hell realms. The process by which sentient beings migrate from one state of existence to another is dependent on causes and conditions. The three causes are giving or charity, moral conduct, meditative development, and their opposites. Rebirth in the Kama-loka (desire realm) depends on a person's moral conduct and practice of giving. Rebirth in the Rupa-loka (form realm) and Arupa-loka (formless realm) also requires meditative development. Liberation from all rebirth requires eons upon eons of perfecting charity, moral conduct, and meditative development, in order to achieve Buddhahood. Meaning and origin: Origins The Buddhist cosmology, as presented in commentaries and works of Abhidharma in both Theravāda and Mahāyāna traditions, is the end-product of an analysis and reconciliation of cosmological comments found in the Buddhist sūtra and vinaya traditions. No single sūtra sets out the entire structure of the universe, but in several sūtras the Buddha describes other worlds and states of being, and other sūtras describe the origin and destruction of the universe. The order of the planes is found in various discourses of Gautama Buddha in the Sutta Pitaka. In the Saleyyaka Sutta of the Majjhima Nikaya the Buddha mentioned the planes above the human plane in ascending order. In several suttas in the Anguttara Nikaya, the Buddha described the causes of rebirth in these planes in the same order. Meaning and origin: The synthesis of these data into a single comprehensive system must have taken place early in the history of Buddhism, as the system described in the Pāli Vibhajyavāda tradition (represented by today's Theravādins) agrees, despite some minor inconsistencies of nomenclature, with the Sarvāstivāda tradition which is preserved by Mahāyāna Buddhists. Spatial cosmology: The spatial cosmology displays the various worlds in which beings can be reborn. Spatial cosmology can also be divided into two branches.
The vertical (or cakravāḍa; Devanagari: चक्रवाड) cosmology describes the arrangement of worlds in a vertical pattern, some being higher and some lower. By contrast, the horizontal (sahasra) cosmology describes the grouping of these vertical worlds into sets of thousands, millions or billions. Spatial cosmology: Vertical cosmology - Three Realms The three realms The vertical cosmology is divided into three realms, or dhātus: the formless realm (Ārūpyadhātu), corresponding to the formless jhanas; the form realm (Rūpadhātu), corresponding to the rūpa jhānas; and the desire realm (Kamadhātu). The three realms contain together thirty-one planes of existence, each corresponding to a different type of mentality. These three realms (tridhātu, trailokya) are the Formless Realm (Ārūpyadhātu), which consists of four planes; the Form Realm (Rūpadhātu), which consists of sixteen planes; and the Pleasure Realm (Kāmadhātu), which consists of fifteen planes. Spatial cosmology: A world is not so much a location as it is the beings which compose it; it is sustained by their karma, and if the beings in a world all die or disappear, the world disappears too. Likewise, a world comes into existence when the first being is born into it. The physical separation is not so important as the difference in mental state; humans and animals, though they partially share the same physical environments, still belong to different worlds because their minds perceive and react to those environments differently. Spatial cosmology: Devas and Brahma In some instances, all of the beings born in the Ārūpyadhātu and the Rūpadhātu are informally classified as "gods" or "deities" (devāḥ), along with the gods of the Kāmadhātu, notwithstanding the fact that the deities of the Kāmadhātu differ more from those of the Ārūpyadhātu than they do from humans. It is to be understood that deva is an imprecise term referring to any being living in a longer-lived and generally more blissful state than humans. Most of them are not "gods" in the common sense of the term, having little or no concern with the human world and rarely if ever interacting with it; only the lowest deities of the Kāmadhātu correspond to the gods described in many polytheistic religions. Spatial cosmology: The term Brahmā is used both as a name and as a generic term for one of the higher devas. In its broadest sense, it can refer to any of the inhabitants of the Ārūpyadhātu and the Rūpadhātu. In more restricted senses, it can refer to an inhabitant of one of the eleven lower worlds of the Rūpadhātu, or in its narrowest sense, to the three lowest worlds of the Rūpadhātu (Plane of Brahma's retinue). A large number of devas use the name "Brahmā", e.g. Brahmā Sahampati (ब्रह्मा सहम्पत्ति), Brahmā Sanatkumāra (ब्रह्मा सनत्कुमारः), Baka Brahmā (बकब्रह्मा), etc. It is not always clear which world they belong to, although it must always be one of the worlds of the Rūpadhātu. According to the Ayacana Sutta, Brahmā Sahampati, who begs the Buddha to teach Dhamma to the world, resides in the Śuddhāvāsa worlds. Spatial cosmology: Formless Realm (Ārūpyadhātu) The Formless Realm (Ārūpyadhātu (Sanskrit) or Arūpaloka (Pāli)) belongs to those Devas who attained and remained in the Four Formless Absorptions (catuḥ-samāpatti चतुःसमापत्ति) of the arūpadhyānas in a previous life, and now enjoy the fruits (vipāka) of the good karma of that accomplishment. Bodhisattvas, however, are never born in the Ārūpyadhātu even when they have attained the arūpadhyānas. 
Spatial cosmology: The Formless Realm would have no place in a purely physical cosmology, as none of the beings inhabiting it has either shape or location; and correspondingly, the realm has no location either. The inhabitants of these realms are possessed entirely of mind. Having no physical form or location, they are unable to hear Dhamma teachings. Spatial cosmology: There are four types of Formless Deva planes corresponding to the four types of arūpadhyānas: "Sphere of neither perception nor non-perception" (Naivasaṃjñānāsaṃjñāyatana नैवसंज्ञानासंज्ञायतन or Nevasaññānāsaññāyatana नेवसञ्ञानासञ्ञायतन ). Rebirth on this plane is a result of attaining the fourth formless jhana in a previous life. In this sphere the Formless Devas have gone beyond a mere negation of perception and have attained a liminal state where they do not engage in "perception" (saṃjñā, recognition of particulars by their marks) but are not wholly unconscious. This was the sphere reached by Udraka Rāmaputra (Pāli: Uddaka Rāmaputta), the second of the Buddha's original teachers, who considered it equivalent to enlightenment. Total life span in this realm in human years - 84,000 Maha Kalpa (Maha Kalpa = 4 Asankya Kalpa). This realm is placed 5,580,000 Yojanas above the Plane of Nothingness (Ākiṃcanyāyatana). Spatial cosmology: "Sphere of Nothingness" (literally "lacking anything") (Ākiṃcanyāyatana आकिंचन्यायतना or Ākiñcaññāyatana आकिञ्चञ्ञायतन ). Rebirth on this plane is a result of attaining the third formless jhana in a previous life. In this sphere Formless Devas dwell contemplating upon the thought that "there is no thing". This is considered a form of perception, though a very subtle one. This was the sphere reached by Ārāḍa Kālāma (Pali: Āḷāra Kālāma), the first of the Buddha's original teachers; he considered it to be equivalent to enlightenment. Total life span in this realm in human years – 60,000 Maha Kalpa. This realm is placed 5,580,000 yojanas above the Plane of Infinite Consciousness(Vijñānānantyāyatana). Spatial cosmology: "Sphere of Infinite Consciousness" (Vijñānānantyāyatana विज्ञानानन्त्यायतन or Viññāṇānañcāyatana विञ्ञाणानञ्चायतन or more commonly the contracted form Viññāṇañcāyatana ). Rebirth on this plane is a result of attaining the second formless jhana. In this sphere Formless Devas dwell meditating on their consciousness (vijñāna) as infinitely pervasive. Total life span in this realm in human years – 40,000 Maha Kalpa. This realm is placed 5,580,000 yojanas above the Plane of Infinite Space (Ākāśānantyāyatana) "Sphere of Infinite Space" (Ākāśānantyāyatana अाकाशानन्त्यायतन or Ākāsānañcāyatana आकासानञ्चायतन ). Rebirth on this plane is a result of attaining the first formless jhana. In this sphere Formless Devas dwell meditating upon space or extension (ākāśa) as infinitely pervasive. Total life span in this realm in human years – 20,000 Maha Kalpa. This realm is placed 5,580,000 yojanas above the Akanittha Brahma Loka – Highest plane of pure abodes. Spatial cosmology: Form Realm (Rūpadhātu) The Rūpadhātu (Sanskrit: रूपधातु; Pali: रूपलोक, romanized: rūpaloka; Tibetan: གཟུགས་ཀྱི་ཁམས་, Wylie: gzugs kyi khams; Vietnamese: Giới Sắc; Chinese: 色界; Japanese: 色界, romanized: shiki-kai; Burmese: ရူပဗြဟ္မာဘုံ; Thai: รูปโลก / รูปธาตุ) or "Form realm" is, as the name implies, the first of the physical realms; its inhabitants all have a location and bodies of a sort, though those bodies are composed of a subtle substance which is of itself invisible to the inhabitants of the Kāmadhātu. 
According to the Janavasabha Sutta, when a brahma (a being from the Brahma-world of the Rūpadhātu) wishes to visit a deva of the Trāyastriṃśa heaven (in the Kāmadhātu), he has to assume a "grosser form" in order to be visible to them. There are 16–22 Rūpadhātu in Buddhism texts, the most common saying is 18.The beings of the Form realm are not subject to the extremes of pleasure and pain, or governed by desires for things pleasing to the senses, as the beings of the Kāmadhātu are. The bodies of Form realm beings do not have sexual distinctions. Spatial cosmology: Like the beings of the Ārūpyadhātu, the dwellers in the Rūpadhātu have minds corresponding to the dhyānas (Pāli: jhānas). In their case, it is the four lower dhyānas or rūpadhyānas (रुपध्यान). However, although the beings of the Rūpadhātu can be divided into four broad grades corresponding to these four dhyānas, each of them is subdivided into further grades, three for each of the four dhyānas and five for the Śuddhāvāsa devas, for a total of seventeen grades (the Theravāda tradition counts one less grade in the highest dhyāna for a total of sixteen). Spatial cosmology: Physically, the Rūpadhātu consists of a series of planes stacked on top of each other, each one in a series of steps half the size of the previous one as one descends. In part, this reflects the fact that the devas are also thought of as physically larger on the higher planes. The highest planes are also broader in extent than the ones lower down, as discussed in the section on Sahasra cosmology. The height of these planes is expressed in yojanas, a measurement of very uncertain length, but sometimes taken to be about 4,000 times the height of a man, and so approximately 4.54 miles (7.31 km). Spatial cosmology: Pure Abodes (non-returners) The Śuddhāvāsa (Sanskrit: शुद्धावास; Pali: सुद्धावास, romanized: suddhāvāsa; Tibetan: གནས་གཙང་མ་, Wylie: gnas gtsang ma; Vietnamese: Tịnh Cư Thiên; Chinese: 净居天/凈居天; Thai: สุทฺธาวสฺสภูมิ) worlds, or "Pure Abodes", are distinct from the other worlds of the Rūpadhātu in that they do not house beings who have been born there through ordinary merit or meditative attainments, but only those Anāgāmins ("Non-returners"), the third level on the path of enlightenment, who are already on the path to Arhat-hood and who will attain enlightenment directly from the Śuddhāvāsa worlds without being reborn in a lower plane. These Pure Abodes are accessible only to those who have destroyed the lower five fetters, consisting of self-view, sceptical doubt, clinging to rites and ceremonies, sense desires, and ill-will. They will destroy their remaining fetters of craving for fine material existence, craving for immaterial existence, conceit, restlessness and ignorance during their existence in the Pure Abodes. Those who take rebirth here are called "non-returners" because they do not return from that world, but attain final nibbana there without coming back. Every Śuddhāvāsa deva is therefore a protector of Buddhism. They guard and protect Buddhism on earth, and will pass into enlightenment as Arhats when they pass away from the Suddhavasa worlds. Brahma Sahampati, an inhabitant from these worlds, who appealed to the newly enlightened Buddha to teach, was an Anagami under the previous Buddha. Because a Śuddhāvāsa deva will never be reborn outside the Śuddhāvāsa worlds, no Bodhisattva is ever born in these worlds, as a Bodhisattva must ultimately be reborn as a human being. 
Spatial cosmology: Since these devas rise from lower planes only due to the teaching of a Buddha, they can remain empty for very long periods if no Buddha arises. However, unlike the lower worlds, the Śuddhāvāsa worlds are never destroyed by natural catastrophe. The Śuddhāvāsa devas predict the coming of a Buddha and, taking the guise of Brahmins, reveal to human beings the signs by which a Buddha can be recognized. They also ensure that a Bodhisattva in his last life will see the four signs that will lead to his renunciation. Spatial cosmology: The five Śuddhāvāsa worlds are: Akaniṣṭha (Sanskrit: अकनिष्ठ; Pali: अकनिठ्ठ, romanized: akaniṭṭha; Vietnamese: Trời Sắc Cứu Cánh; Chinese: 色究竟天; Thai: อกนิฏฺฐา, อกนิษฐา) – World of devas "equal in rank" (literally: having no one as the youngest). The highest of all the Rūpadhātu worlds, it is often used to refer to the highest extreme of the universe. The current Śakra will eventually be born there. The duration of life in Akaniṣṭha is 16,000 kalpas (Vibhajyavāda tradition). Maheśvara, the ruler of the three realms of samsara is said to dwell here. The height of this world is 167,772,160 yojanas above the Earth. Spatial cosmology: Sudarśana (Sanskrit: सुदर्शन; Pali: सुदस्सी, romanized: sudassī; Vietnamese: Trời Thiện Kiến; Chinese: 善见天; Thai: สุทัสสี, สุทารฺศฺน) – The "clear-seeing" devas live in a world similar to and friendly with the Akaniṣṭha world. The height of this world is 83,886,080 yojanas above the Earth. Sudṛśa (Sanskrit: सुदृश; Pali: सुदस्स, romanized: sudassa; Vietnamese: Trời Thiện Hiện; Chinese: 善现天; Thai: สุทัสสา, สุทรรศา) – The world of the "beautiful" devas are said to be the place of rebirth for five kinds of anāgāmins. The height of this world is 41,943,040 yojanas above the Earth. Atapa (Sanskrit: अतप; Pali: अतप्प, romanized: atappa; Vietnamese: Trời Vô Nhiệt; Chinese: 无热天; Thai: อตัปปา, อตปา) – The world of the "untroubled" devas, whose company those of lower realms wish for. The height of this world is 20,971,520 yojanas above the Earth. Spatial cosmology: Avṛha (Sanskrit: अवृह; Pali: अविह, romanized: aviha; Vietnamese: Trời Vô Phiền; Chinese: 无烦天; Thai: อวิหา, อวรรหา) – The world of the "not falling" devas, perhaps the most common destination for reborn Anāgāmins. Many achieve arhatship directly in this world, but some pass away and are reborn in sequentially higher worlds of the Pure Abodes until they are at last reborn in the Akaniṣṭha world. These are called in Pāli uddhaṃsotas, "those whose stream goes upward". The duration of life in Avṛha is 1,000 kalpas (Vibhajyavāda tradition). The height of this world is 10,485,760 yojanas above the Earth. Spatial cosmology: Bṛhatphala worlds (fourth dhyana) The mental state of the devas of the Bṛhatphala worlds (Vietnamese: Tứ Thiền; Chinese: 四禅九天/四禪九天; Japanese: 四禅九天; Thai: เวหปฺปผลา) corresponds to the fourth dhyāna, and is characterized by equanimity (upekṣā). The Bṛhatphala worlds form the upper limit to the destruction of the universe by wind at the end of a mahākalpa (see Temporal cosmology below), that is, they are spared such destruction. 
Spatial cosmology: Asaññasatta Sanskrit: असञ्ञसत्त, romanized: Asaṃjñasattva; Vietnamese: Trời Vô Tưởng;Chinese: 无想天; Thai: อสัญฺญสัตฺตา or อสํชญสตฺวา (Vibhajyavāda tradition only) – "Unconscious beings", who have only bodies without consciousness are the devas who have attained a high dhyāna (similar to that of the Formless Realm), and, wishing to avoid the perils of perception, have achieved a state of non-perception in which they endure for a time. Rebirth into this plane results from a meditative practice aimed at the suppression of consciousness. Those who take up this practice assume release from suffering can be achieved by attaining unconsciousness. However, when the life span in this realm ends, perception arises again, the beings pass away and are born in other planes where consciousness returns. Spatial cosmology: Bṛhatphala बृहत्फल or Vehapphala वेहप्फल (Vietnamese: Trời Quảng Quả; Chinese: 广果天; Tibetan: འབྲས་བུ་ཆེ་, Wylie: 'bras bu che; Thai: เวหัปปผลา or พรฺหตฺผลา – Devas "having great fruit". Their lifespan is 500 mahākalpas. (Vibhajyavāda tradition). Some Anāgāmins are reborn here. The height of this world is 5,242,880 yojanas above the Earth. In the Jhana Sutta of the Anguttara Nikaya the Buddha said "The Vehapphala devas, monks, have a life-span of 500 eons. A run-of-the-mill person having stayed there, having used up all the life-span of those devas, goes to hell, to the animal womb, to the state of the hungry shades." Puṇyaprasava पुण्यप्रसव (Sarvāstivāda tradition only; Vietnamese: Trời Phước Sanh; Chinese: 福生天; Tibetan: བསོད་ནམས་སྐྱེས་, Wylie: bsod nams skyes; Thai: ปณฺยปรัสวา – The world of the devas who are the "offspring of merit". The height of this world is 2,621,440 yojanas above the Earth. Spatial cosmology: Anabhraka अनभ्रक (Sarvāstivāda tradition only; Vietnamese: Trời Vô Vân; Chinese: 无云天; Tibetan: སྤྲིན་མེད་, Wylie: sprin med; Thai: อนภร๎กา – The world of the "cloudless" devas. The height of this world is 1,310,720 yojanas above the Earth. Spatial cosmology: Śubhakṛtsna worlds (third dhyana) The mental state of the devas of the Śubhakṛtsna worlds (Vietnamese: Tam Thiền; Chinese: 三禅三天; Devanagari: शुभकृत्स्न; Thai: ศุภกฤตฺสนาภูมิ) corresponds to the third dhyāna, and is characterized by a quiet joy (sukha). These devas have bodies that radiate a steady light. The Śubhakṛtsna worlds form the upper limit to the destruction of the universe by water at the end of a mahākalpa (see Temporal cosmology below), that is, the flood of water does not rise high enough to reach them. Spatial cosmology: Śubhakṛtsna शुभकृत्स्न or Subhakiṇṇa / Subhakiṇha सुभकिण्ण/सुभकिण्ह (Vietnamese: Trời Biến Tịnh; Chinese: 遍净天; Tibetan: དགེ་རྒྱས་, Wylie: dge rgyas; Thai: สุภกิณหา or ศุภกฤตฺสนา) – The world of devas of "total beauty". Their lifespan is 64 mahākalpas (some sources: 4 mahākalpas) according to the Vibhajyavāda tradition. 64 mahākalpas is the interval between destructions of the universe by wind, including the Śubhakṛtsna worlds. The height of this world is 655,360 yojanas above the Earth. The Buddha said, " A run-of-the-mill person having stayed there, having used up all the life-span of those devas, goes to hell, to the animal womb, to the state of the hungry shades." Apramāṇaśubha अप्रमाणशुभ or Appamāṇasubha अप्पमाणसुभ (Vietnamese: Trời Vô Lượng Tịnh; Chinese: 无量净天; Tibetan: ཚད་མེད་དགེ་, Wylie: tshad med dge; Thai: อัปปมาณสุภา or อัปรมาณศุภา) – The world of devas of "limitless beauty". Their lifespan is 32 mahākalpas (Vibhajyavāda tradition). 
They possess "faith, virtue, learning, munificence and wisdom". The height of this world is 327,680 yojanas above the Earth. Spatial cosmology: Parīttaśubha परीत्तशुभ or Parittasubha परित्तसुभ (Vietnamese: Trời Thiểu Tịnh; Chinese: 少净天; Tibetan: དགེ་ཆུང་, Wylie: dge chung; Thai: ปริตฺตสุภา or ปรีตฺตศุภา) – The world of devas of "limited beauty". Their lifespan is 16 mahākalpas. The height of this world is 163,840 yojanas above the Earth. Spatial cosmology: Ābhāsvara worlds (second dhyana) The mental state of the devas of the Ābhāsvara आभास्वर worlds (Vietnamese: Nhị Thiền; Chinese: 二禅三天; Thai: อาภัสสราภูมิ/อาภาสวราธาตุ corresponds to the second dhyāna, and is characterized by delight (prīti) as well as joy (sukha); the Ābhāsvara devas are said to shout aloud in their joy, crying aho sukham! ("Oh joy!"). These devas have bodies that emit flashing rays of light like lightning. They are said to have similar bodies (to each other) but diverse perceptions. Spatial cosmology: The Ābhāsvara worlds form the upper limit to the destruction of the universe by fire at the end of a mahākalpa (see Temporal cosmology below), that is, the column of fire does not rise high enough to reach them. After the destruction of the world, at the beginning of the vivartakalpa, the worlds are first populated by beings reborn from the Ābhāsvara worlds. Spatial cosmology: Ābhāsvara आभास्वर or Ābhassara आभस्सर (Vietnamese: Trời Quang Âm; Chinese: 光音天; Tibetan: འོད་གསལ་, Wylie: 'od gsal; Thai: อาภัสสรา or อาภาสวรา) – The world of devas "possessing splendor". The lifespan of the Ābhāsvara devas is 8 mahākalpas (others: 2 mahākalpas). Eight mahākalpas is the interval between destructions of the universe by water, which includes the Ābhāsvara worlds. The height of this world is 81,920 yojanas above the Earth. Spatial cosmology: Apramāṇābha अप्रमाणाभ or Appamāṇābha अप्पमाणाभ (Vietnamese: Trời Vô Lượng Quang; Chinese: 无量光天; Tibetan: ཚད་མེད་འོད་, Wylie: tshad med 'od; Thai: อัปปมาณาภา or อัปรมาณาภา) – The world of devas of "limitless light", a concept on which they meditate. Their lifespan is 4 mahākalpas. The height of this world is 40,960 yojanas above the Earth. Parīttābha परीत्ताभ or Parittābha परित्ताभ (Vietnamese: Trời Thiểu Quang; Chinese: 少光天; Tibetan: འོད་ཆུང་, Wylie: 'od chung; Thai: ปริตฺตาภา or ปรีตตาภา) – The world of devas of "limited light". Their lifespan is 2 mahākalpas. The height of this world is 20,480 yojanas above the Earth. Spatial cosmology: Brahmā worlds (first dhyana) The mental state of the devas of the Brahmā worlds (Vietnamese: Sơ Thiền; Chinese: 初禅三天; Thai: พรหมภูมิ) corresponds to the first dhyāna, and is characterized by observation (vitarka) and reflection (vicāra) as well as delight (prīti) and joy (sukha). The Brahmā worlds, together with the other lower worlds of the universe, are destroyed by fire at the end of a mahākalpa (see Temporal cosmology below). One way to rebirth in the Brahma world is mastery over the first jhana. Another is through meditations on loving kindness, compassion, altruistic joy, and equanimity. The Buddha teaches the Brahmin Subha, how to be born in the world of Brahma, in the Subha Sutta, when asked by him. 
Spatial cosmology: Mahābrahmā महाब्रह्मा (Tibetan: ཚངས་པ་ཆེན་པོ་, Wylie: tshangs pa chen po; Vietnamese: Trời Đại Phạm; Chinese: 大梵天 Japanese: 'Daibonten; Thai: มหาพรหฺมฺา) – Brahmaloka is the world of "Great Brahmā", believed by many to be the creator of the world, and having as his name Mahābrahmā, the Conqueror, the Unconquered, the All-Seeing, All-Powerful, the Lord, the Maker and Creator, the Ruler, Appointer and Orderer, Father of All That Have Been and Shall Be." According to the Brahmajāla Sutta (DN.1), a Mahābrahmā is a being from the Ābhāsvara worlds who falls into a lower world through exhaustion of his merits and is reborn alone in the Brahma-world; forgetting his former existence, he imagines himself to have come into existence without cause. Note that even such a high-ranking deity has no intrinsic knowledge of the worlds above his own. Mahābrahmā is 1 1⁄2 yojanas tall. His lifespan variously said to be 1 kalpa (Vibhajyavāda tradition) or 1 1⁄2 kalpas long (Sarvāstivāda tradition), although it would seem that it could be no longer than 3⁄4 of a mahākalpa, i.e., all of the mahākalpa except for the Saṃvartasthāyikalpa, because that is the total length of time between the rebuilding of the lower world and its destruction. It is unclear what period of time "kalpa" refers to in this case. The height of this world is 10,240 yojanas above the Earth. Spatial cosmology: Brahmapurohita ब्रह्मपुरोहित (Vietnamese: Trời Phạm Phụ; Chinese: 梵辅天; Tibetan: ཚངས་འཁོར་, Wylie: tshangs 'khor; Thai: พรหฺมปุโรหิตา) – the "Ministers of Brahmā" are beings, also originally from the Ābhāsvara worlds, that are born as companions to Mahābrahmā after he has spent some time alone. Since they arise subsequent to his thought of a desire for companions, he believes himself to be their creator, and they likewise believe him to be their creator and lord. They are 1 yojana in height and their lifespan is variously said to be 1⁄2 of a kalpa (Vibhajyavāda tradition) or a whole kalpa (Sarvāstivāda tradition). If they are later reborn in a lower world, and come to recall some part of their last existence, they teach the doctrine of Brahmā as creator as a revealed truth. The height of this world is 5,120 yojanas above the Earth. Spatial cosmology: Brahmapāriṣadya ब्रह्मपारिषद्य or Brahmapārisajja ब्रह्मपारिसज्ज (Vietnamese: Trời Phạm Chúng; Chinese: 梵众天; Tibetan: ཚངས་རིས་, Wylie: tshangs ris; Thai: พรหฺมปริสัชชา or พรหฺมปาริษัตยา) – the "Councilors of Brahmā" or the devas "belonging to the assembly of Brahmā". They are also called Brahmakāyika, but this name can be used for any of the inhabitants of the Brahma-worlds. They are half a yojana in height and their lifespan is variously said to be 1⁄3 of a kalpa (Vibhajyavāda tradition) or 1⁄2 of a kalpa (Sarvāstivāda tradition). The height of this world is 2,560 yojanas above the Earth. Spatial cosmology: Desire Realm (Kāmadhātu) The beings born in the Kāmadhātu कामधातु (Pali: कामलोक, romanized: Kāmaloka; Tibetan: འདོད་པའི་ཁམས་, Wylie: 'dod pa'i khams; Vietnamese: Giới Dục; Chinese: 欲界; Japanese: Yoku-kai; Thai: กามภูมิ) differ in degree of happiness, but they are all, other than Anagamis, Arhats and Buddhas, under the domination of Māra and are bound by sensual desire, which causes them suffering. Birth into these planes takes place as a result of our Karma. The Sense-Sphere (Desire) Realm is the lowest of the three realms. The driving force within this realm is sensual desire. 
Spatial cosmology: Heavens The following four worlds are bounded planes, each 80,000 yojanas square, which float in the air above the top of Mount Sumeru. Although all of the worlds inhabited by devas (that is, all the worlds down to the Cāturmahārājikakāyika world and sometimes including the Asuras) are sometimes called "heavens". These devas enjoy aesthetic pleasures, long life, beauty, and certain powers. Anyone who has led a wholesome life can be born in them. Spatial cosmology: Higher Heavens (Higher Kama Loka) These devas live in four heavens that float in the air, leaving them free from contact with the strife of the lower world. In the western sense of the word "heaven", the term best applies to the four worlds listed below: Parinirmita-vaśavartin परिनिर्मितवशवर्ती or Paranimmita-vasavatti परनिम्मितवसवत्ति (Tibetan: གཞན་འཕྲུལ་དབང་བྱེད་, Wylie: gzhan 'phrul dbang byed; Vietnamese: Trời Tha Hoá Tự Tại; Chinese: 他化自在天; Japanese: Takejizai-ten; Burmese: ပရနိမ္မိတဝသဝတ္တီ; Thai: ปรนิมมิตวสวัตฺติ or ปริเนรมิตวศวรติน) – The heaven of devas "with power over (others') creations". These devas do not create pleasing forms that they desire for themselves, but their desires are fulfilled by the acts of other devas who wish for their favor. The ruler of this world is called Vaśavartin (Pāli: Vasavatti), who has a longer life, greater beauty, more power and happiness and more delightful sense-objects than the other devas of his world. This world is also the home of the devaputra (being of divine race) called Māra, who endeavors to keep all beings of the Kāmadhātu in the grip of sensual pleasures. Māra is also sometimes called Vaśavartin, but in general these two dwellers of this world are kept distinct. The beings of this world are 4,500 feet (1,400 m) tall and live for 9,216,000,000 years (Sarvāstivāda tradition). The height of this world is 1,280 yojanas above the Earth. Spatial cosmology: Nirmāṇarati निर्माणरति or Nimmānaratī निम्माणरती (Tibetan: འཕྲུལ་དགའ་, Wylie: 'phrul dga'; Vietnamese: Trời Hoá Lạc; Chinese: 化乐天/化樂天; Japanese: 化楽天 Keraku-ten; Burmese: နိမ္မာနရတိ; Thai: นิมมานรติ or นิรมาณรติ)– The world of devas "delighting in their creations". The devas of this world are capable of making any appearance to please themselves. The lord of this world is called Sunirmita (Pāli: Sunimmita); his wife is the rebirth of Visākhā, formerly the chief of the upāsikās (female lay devotees) of the Buddha. The beings of this world are 3,750 feet (1,140 m) tall and live for 2,304,000,000 years (Sarvāstivāda tradition). The height of this world is 640 yojanas above the Earth. Spatial cosmology: Tuṣita तुषित or Tusita तुसित (Tibetan: དགའ་ལྡན་, Wylie: dga' ldan; Vietnamese: Trời Đâu Suất; Chinese: 兜率天; Japanese: Tosotsu-ten; Burmese: တုသိတာ; Thai: ดุสิต, ตุสิตา or ตุษิตา) – The world of the "joyful" devas. This world is best known for being the world in which a Bodhisattva lives before being reborn in the world of humans. Until a few thousand years ago, the Bodhisattva of this world was Śvetaketu (Pāli: Setaketu), who was reborn as Siddhārtha, who would become the Buddha Śākyamuni; since then the Bodhisattva has been Nātha (or Nāthadeva) who will be reborn as Ajita and will become the Buddha Maitreya (Pāli Metteyya). While this Bodhisattva is the foremost of the dwellers in Tuṣita, the ruler of this world is another deva called Santuṣita (Pāli: Santusita). The beings of this world are 3,000 feet (910 m) tall and live for 576,000,000 years (Sarvāstivāda tradition). 
The height of this world is 320 yojanas above the Earth. Spatial cosmology: Yāma याम (Tibetan: འཐབ་བྲལ་, Wylie: 'thab bral; Vietnamese: Trời Dạ Ma; Chinese: 夜摩天; Japanese: Yama-ten; Burmese: ယာမာ; Thai: ยามา) – Sometimes called the "heaven without fighting", because it is the lowest of the heavens to be physically separated from the tumults of the earthly world. These devas live in the air, free of all difficulties. Its ruler is the deva Suyāma; according to some, his wife is the rebirth of Sirimā, a courtesan of Rājagṛha in the Buddha's time who was generous to the monks. The beings of this world are 2,250 feet (690 m) tall and live for 144,000,000 years (Sarvāstivāda tradition). The height of this world is 160 yojanas above the Earth. Spatial cosmology: Lower Heavens (Worlds of Sumeru) The world-mountain of Sumeru सुमेरु (Vietnamese: Tu Di; Sineru सिनेरु; Thai: เขาพระสุเมรุ, สิเนรุบรรพต) is an immense, strangely shaped peak which arises in the center of the world, and around which the Sun and Moon revolve. Its base rests in a vast ocean, and it is surrounded by several rings of lesser mountain ranges and oceans. The three worlds listed below are all located on, or around, Sumeru: the Trāyastriṃśa devas live on its peak, the Cāturmahārājikakāyika devas live on its slopes, and the Asuras live in the ocean at its base. Sumeru and its surrounding oceans and mountains are the home not just of these deities, but also vast assemblies of beings of popular mythology who only rarely intrude on the human world. They are even more passionate than the higher devas, and do not simply enjoy themselves but also engage in strife and fighting. Spatial cosmology: Trāyastriṃśa त्रायस्त्रिंश or Tāvatiṃsa तावतिंस (Tibetan: སུམ་ཅུ་རྩ་གསུམ་པ་, Wylie: sum cu rtsa gsum pa; Vietnamese: Trời Đao Lợi/ Trời Tam Thập Tam; Chinese: 忉利天/三十三天; Japanese: Tōri-ten; Burmese: တာဝတိံသာ; Thai: ดาวดึงส์, ไตรตรึงศ์, ตาวติํสา or ตฺรายสฺตฺริศ) – The world "of the Thirty-three (devas)" is a wide flat space on the top of Mount Sumeru, filled with the gardens and palaces of the devas. Its ruler is Śakro devānām indra, शक्रो देवानामिन्द्रः "Śakra, lord of the devas or King of devas". Śakra's consort Shachi devi live in this heaven. Besides the eponymous Thirty-three million gods and goddesses, many other devas and supernatural beings, known as Varuna and Vayu dwell here, including the attendants of the devas and many heavenly courtesans (apsaras or nymphs). Sakka and the devas honor sages and holy men. Many devas dwelling here live in mansions in the air. The beings of this world are 1,500 feet (460 m) tall and live for 36,000,000 years (Sarvāstivāda tradition) or 3/4 of a yojana tall and live for 30,000,000 years (Vibhajyavāda tradition). The height of this world is 80 yojanas above the Earth. Spatial cosmology: Cāturmahārājikakāyika चातुर्महाराजिक or Cātummahārājika चातुम्महाराजिक (Tibetan: རྒྱལ་ཆེན་བཞི་, Wylie: rgyal chen bzhi; Vietnamese: Trời Tứ Thiên Vương; Chinese: 四天王天; Japanese: Shidaiōshu-ten; Burmese: စတုမဟာရာဇ်; Thai: จาตุมฺมหาราชิกา or จาตุรมหาราชิกกายิกา) – The world "of the Four Great Kings" is found on the lower slopes of Mount Sumeru, though some of its inhabitants live in the air around the mountain. 
Its rulers are the four Great Kings of the name, Virūḍhaka विरूढकः, king of the Southern Direction, is lord of the kumbandas; Dhṛtarāṣṭra धृतराष्ट्रः, king of the Eastern Direction, is lord of the gandhabbas; Virūpākṣa विरूपाक्षः, king of the Western Direction, is lord of the nagas; and their leader Vaiśravaṇa वैश्रवणः,also known as Kuvera, who rules as king of the Northern Direction, is lord of the yakkhas, but ultimately all are accountable to Sakra. They are the martial kings who guard the four quarters of the Earth. The Garudas and the devas who guide the Sun and Moon are also considered part of this world, as are the retinues of the four kings, composed of Kumbhāṇḍas कुम्भाण्ड (dwarfs), Gandharvas गन्धर्व (fairies), Nāgas नाग (dragons) and Yakṣas यक्ष (goblins). These devas also inhabit remote areas such as forests, hills, and abandoned caves. Though living in misery they have the potential for awakening and can attain the path and fruits of the spiritual life. The beings of this world are 750 feet (230 m) tall and live for 9,000,000 years (Sarvāstivāda tradition) or 90,000 years (Vibhajyavāda tradition). The height of this world is from sea level up to 40 yojanas above the Earth. Spatial cosmology: Asura असुर (Tibetan: ལྷ་མ་ཡིན་, Wylie: lha ma yin; Vietnamese: A Tu La; Chinese: 阿修羅; Japanese: Ashura; Burmese: အသူရာ; Thai: อสุรกาย – The world of the Asuras is the space at the foot of Mount Sumeru, much of which is a deep ocean. It is not the Asuras' original home, but the place they found themselves after they were hurled, drunken, from Trāyastriṃśa where they had formerly lived. The Asuras are always fighting to regain their lost kingdom on the top of Mount Sumeru, but are unable to break the guard of the Four Great Kings. The Asuras are divided into many groups, and have no single ruler, but among their leaders are Vemacitrin वेमचित्री (Pāli: Vepacitti वेपचित्ती) and Rāhu. In later texts, we find the Asura realm as one of the four unhappy states of rebirth, but the Nikāya evidence however does not show that the Asura realm was regarded as a state of suffering.The foundations of the earth All of the structures of the earth, Sumeru and the rest, extend downward to a depth of 80,000 yojanas below sea level – the same as the height of Sumeru above sea level. Below this is a layer of "golden earth", a substance compact and firm enough to support the weight of Sumeru. It is 320,000 yojanas in depth and so extends to 400,000 yojanas below sea level. The layer of golden earth in turn rests upon a layer of water, which is 8,000,000 yojanas in depth, going down to 8,400,000 yojanas below sea level. Below the layer of water is a "circle of wind", which is 16,000,000 yojanas in depth and also much broader in extent, supporting 1,000 different worlds upon it. Yojanas are equivalent to about 13 km (8 mi). Spatial cosmology: Earthly realms Manuṣyaloka मनुष्यलोक (Tibetan: མི་, Wylie: mi; Vietnamese: Người; Chinese: 人; Japanese: nin; Burmese: မနုဿဘုံ; Thai: มนุสสภูมิ or มนุษยโลก) – This is the world of humans and human-like beings who live on the surface of the earth. Birth in this plane results from giving and moral discipline of middling quality. This is the realm of moral choice where destiny can be guided. The Khana Sutta mentioned that this plane is a unique balance of pleasure and pain. It facilitates the development of virtue and wisdom to liberate oneself from the entire cycle of rebirths. For this reason rebirth as a human being is considered precious according to the Chiggala Sutta. 
The mountain-rings that engird Sumeru are surrounded by a vast ocean, which fills most of the world. The ocean is in turn surrounded by a circular mountain wall called Cakravāḍa चक्रवाड (Pāli: Cakkavāḷa चक्कवाळ; Thai: จักรวาล or จกฺกวาฬ) which marks the horizontal limit of the world. In this ocean there are four continents, which are, relatively speaking, small islands. Because of the immenseness of the ocean, they cannot be reached from each other by ordinary sailing vessels, although in the past, when the cakravartin kings ruled, communication between the continents was possible by means of the treasure called the cakraratna (Pāli cakkaratana), which a cakravartin king and his retinue could use to fly through the air between the continents. The four continents are: Jambudvīpa जम्वुद्वीप or Jambudīpa जम्बुदीप (Tibetan: འཛམ་བུའི་གླིང་, Wylie: 'dzam bu gling; Vietnamese: Diêm Phù Đề or Nam Thiệm Bộ Châu; Chinese: 閻浮提 or 贍部洲; Japanese: Enbudai; Burmese: ဇမ္ဗုဒီပ; Thai: ชมพูทวีป) is located in the south and is the dwelling of ordinary human beings. It is said to be shaped "like a cart", or rather a blunt-nosed triangle with the point facing south. (This description probably echoes the shape of the coastline of southern India.) It is 10,000 yojanas in extent (Vibhajyavāda tradition) or has a perimeter of 6,000 yojanas (Sarvāstivāda tradition) to which can be added the southern coast of only 3.5 yojanas' length. The continent takes its name from a giant Jambu tree (Syzygium cumini), 100 yojanas tall, which grows in the middle of the continent. Every continent has one of these giant trees. All Buddhas appear in Jambudvīpa. The people here are five to six feet tall and their length of life varies between 10 and about 10^140 years (Asankya Aayu). Spatial cosmology: Pūrvavideha पूर्वविदेह or Pubbavideha पुब्बविदेह (Tibetan: ལུས་འཕགས་པོ་, Wylie: lus 'phags po; Vietnamese: Đông Thắng Thần Châu; Burmese: ပုဗ္ဗဝိဒေဟ; Thai: ปุพพวิเทหทีป or บูรพวิเทหทวีป; Chinese: 勝身洲) is located in the east, and is shaped like a semicircle with the flat side pointing westward (i.e., towards Sumeru). It is 7,000 yojanas in extent (Vibhajyavāda tradition) or has a perimeter of 6,350 yojanas of which the flat side is 2,000 yojanas long (Sarvāstivāda tradition). Its tree is the acacia, or Albizia lebbeck (Sukhōthai tradition). The people here are about 12 feet (3.7 m) tall and they live for 700 years. Their main occupation is trading and buying materials. Spatial cosmology: Aparagodānīya अपरगोदानीय or Aparagoyāna अपरगोयान (Tibetan: བ་ལང་སྤྱོད་, Wylie: ba lang spyod; Vietnamese: Tây Ngưu Hoá Châu; Burmese: အပရဂေါယာန; Thai: อปรโคยานทวีป or อปรโคทานียทวีป; Chinese: 牛貨洲) is located in the west, and is shaped like a circle with a circumference of about 7,500 yojanas (Sarvāstivāda tradition). The tree of this continent is a giant Kadamba tree (Anthocephalus chinensis). The human inhabitants of this continent do not live in houses but sleep on the ground. Their main means of transportation is the bullock cart. They are about 24 feet (7.3 m) tall and they live for 500 years. Spatial cosmology: Uttarakuru उत्तरकुरु (Tibetan: སྒྲ་མི་སྙན་, Wylie: sgra mi snyan; Vietnamese: Bắc Câu Lư Châu; Burmese: ဥတ္တရကုရု; Thai: อุตรกุรุทวีป; Chinese: 俱盧州) is located in the north, and is shaped like a square. It has a perimeter of 8,000 yojanas, being 2,000 yojanas on each side. This continent's tree is called a kalpavṛkṣa कल्पवृक्ष (Pāli: kapparukkha कप्परुक्ख) or kalpa-tree, because it lasts for the entire kalpa.
The inhabitants of Uttarakuru have cities built in the air. They are said to be extraordinarily wealthy, not needing to labor for a living – as their food grows by itself – and having no private property. They are about 48 feet (15 m) tall and live for 1,000 years, and they are under the protection of Vaiśravaṇa. Spatial cosmology: Tiryagyoni-loka तिर्यग्योनिलोक or Tiracchāna-yoni तिरच्छानयोनि (Tibetan: དུད་འགྲོ་, Wylie: dud 'gro; Vietnamese: Súc Sanh; Chinese: 畜生; Japanese: chikushō; Burmese: တိရစ္ဆာန်ဘုံ; Thai: เดรัจฉานภูมิ or ติรยคฺโยนิโลก) – This world comprises all members of the animal kingdom that are capable of feeling suffering, regardless of size. The animal realm includes animals, insects, fish, birds, worms, etc. Spatial cosmology: Pretaloka प्रेतलोक or Petaloka पेतलोक (Tibetan: ཡི་དྭགས་, Wylie: yi dwags; Vietnamese: Ngạ Quỷ; Burmese: ပြိတ္တာ; Thai: เปรตภูมิ or เปตฺตโลก) – The pretas, or "hungry ghosts", are mostly dwellers on earth, though due to their mental state they perceive it very differently from humans. They live for the most part in deserts and wastelands. This is the realm where ghosts and unhappy spirits wander in vain, hopelessly in search of sensual fulfillment. Spatial cosmology: Hells (Narakas) Naraka नरक or Niraya निरय (Tibetan: དམྱལ་བ་, Wylie: dmyal ba; Vietnamese: Địa Ngục hoặc Na-Lạc-Ca; Burmese: ငရဲ; Thai: นรก) is the name given to one of the worlds of greatest suffering, usually translated into English as "hell" or "purgatory". These are realms of extreme suffering. As with the other realms, a being is born into one of these worlds as a result of his karma, and resides there for a finite length of time until his karma has achieved its full result, after which he will be reborn in one of the higher worlds as the result of an earlier karma that had not yet ripened. The mentality of a being in the hells corresponds to states of extreme fear and helpless anguish in humans. Spatial cosmology: Physically, Naraka is thought of as a series of layers extending below Jambudvīpa into the earth. There are several schemes for counting these Narakas and enumerating their torments. One of the more common is that of the Eight Cold Narakas and Eight Hot Narakas. Spatial cosmology: Eight Great Cold Narakas Arbuda अर्बुद – the "blister" Naraka Nirarbuda निरर्बुद – the "burst blister" Naraka Ataṭa अतट – the Naraka of shivering Hahava हहव – the Naraka of lamentation Huhuva हुहुव – the Naraka of chattering teeth Utpala उत्पल – the Naraka of skin becoming blue as a blue lotus Padma पद्म – the Naraka of cracking skin Mahāpadma महापद्म – the Naraka of total frozen bodies falling apart. Each lifetime in these Narakas is twenty times the length of the one before it. Spatial cosmology: Eight Great Hot Narakas Sañjīva सञ्जीव (Burmese: သိဉ္ဇိုး ငရဲ; Thai: สัญชีวมหานรก) – the "reviving" Naraka. Life in this Naraka is 162×10^10 years long. Kālasūtra कालसूत्र (Burmese: ကာဠသုတ် ငရဲ; Thai: กาฬสุตตมหานรก/กาลสูตร) – the "black thread" Naraka. Life in this Naraka is 1,296×10^10 years long. Saṃghāta संघात (Burmese: သင်္ဃာတ ငရဲ; Thai: สังฆาฏมหานรก or สํฆาต) – the "crushing" Naraka. Life in this Naraka is 10,368×10^10 years long. Raurava/Rīrava रौरव/रीरव (Burmese: ရောရုဝ ငရဲ; Thai: โรรุวมหานรก) – the "screaming" Naraka. Life in this Naraka is 82,944×10^10 years long. Mahāraurava/Mahārīrava महारौरव/महारीरव (Burmese: မဟာရောရုဝ ငရဲ; Thai: มหาโรรุวมหานรก) – the "great screaming" Naraka. Life in this Naraka is 663,552×10^10 years long.
Tāpana/Tapana तापन/तपन (Burmese: တာပန ငရဲ; Thai: ตาปนมหานรก) – the "heating" Naraka. Life in this Naraka is 5,308,416×10^10 years long. Mahātāpana महातापन (Burmese: မဟာတာပန ငရဲ; Thai: มหาตาปนมหานรก) – the "great heating" Naraka. Life in this Naraka is 42,467,328×10^10 years long. Avīci अवीचि (Burmese: အဝီစိ ငရဲ; Thai: อเวจีมหานรก/อวิจี) – the "uninterrupted" Naraka. Life in this Naraka is 339,738,624×10^10 years long. Each lifetime in these Narakas is eight times the length of the one before it. Horizontal cosmology – Sahasra cosmology: Sahasra means "one thousand". All of the planes, from the plane of neither perception nor non-perception (nevasanna-asanna-ayatana) down to the Avīci – the "without interval" niraya – constitute a single world-system, Cakkavāla (intimating something circular, a "wheel" or one planetary system, but the etymology is uncertain), described above. A collection of one thousand systems is called a "thousandfold minor world-system" (Culanika Lokadhātu) or a small chiliocosm. A collection of a million systems is a "thousandfold to the second power middling world-system" (Dvisahassi Majjhima Lokadhātu) or a medium dichiliocosm. Horizontal cosmology – Sahasra cosmology: The largest grouping, which consists of a billion world-systems, is called the Trisahassi Mahasassi Lokadhātu, a great trichiliocosm or "The Galaxy". The Tathagata, if he so wished, could extend his voice and divine power throughout a great trichiliocosm. He does so by suffusing the trichiliocosm with his radiance, at which point the inhabitants of those world-systems will perceive this light, and he then proceeds to extend his voice and powers throughout that realm. Temporal cosmology: Buddhist temporal cosmology describes how the universe comes into being and is dissolved. Like other Indian cosmologies, it assumes an infinite span of time and is cyclical. This does not mean that the same events occur in identical form with each cycle, but merely that, as with the cycles of day and night or summer and winter, certain natural events occur over and over to give some structure to time. Temporal cosmology: The basic unit of time measurement is the mahākalpa or "Great Eon" (Chn/Jpn: 大劫 daigō; Thai: มหากัปป์ or มหากัลป์; Devanagari: महाकल्प / महाकप्प). The length of this time in human years is never defined exactly, but it is meant to be very long, to be measured in billions of years if not longer. Temporal cosmology: Maha Kalpa The word kalpa means 'moment'. A maha kalpa consists of four moments (kalpa), the first of which is creation. The creation moment consists of the creation of the "receptacle", and the descent of beings from higher realms into coarser forms of existence. During the rest of the creation moment, the world is populated. Human beings who exist at this point have no limit on their lifespan. The second moment is the duration moment; its start is signified by the first sentient being to enter hell (niraya), the hells and nirayas not existing or being empty prior to this moment. The duration moment consists of twenty "intermediate" moments (antarakappas), which unfold in a drama of the human lifespan descending from 80,000 years to 10, and then back up to 80,000 again. The interval between two of these "intermediate" moments is the "seven-day purge", in which a variety of humans will kill each other (not knowing or recognizing each other), while some humans will go into hiding. At the end of this purge, they will emerge from hiding and repopulate the world.
After this purge, the lifespan will increase to 80,000, reach its peak and descend, at which point the purge will happen again. Temporal cosmology: Within the duration 'moment', this purge and repeat cycle seems to happen around 18 times, the first "intermediate" moment consisting only of the descent from 80,000 – the second intermediate moment consisting of a rise and descent, and the last consisting only of an ascent. After the duration 'moment' is the dissolution moment, the hells will gradually be emptied, as well as all coarser forms of existence. The beings will flock to the form realms (rupa dhatu), a destruction of fire occurs, sparing everything from the realms of the 'radiant' gods and above (abha deva). After 7 of these destructions by 'fire', a destruction by water occurs, and everything from the realms of the 'pleasant' gods and above is spared (subha deva). Temporal cosmology: After 64 of these destructions by fire and water, that is – 56 destructions by fire, and 7 by water – a destruction by wind occurs, this eliminates everything below the realms of the 'fruitful' devas (vehapphala devas, literally of "great fruit"). The pure abodes (suddhavasa, meaning something like pure, unmixed, similar to the connotation of "pure bred German shepherd"), are never destroyed. Although without the appearance of a Buddha, these realms may remain empty for a long time. The inhabitants of these realms have exceedingly long life spans. Temporal cosmology: The formless realms are never destroyed because they do not consist of form (rupa). The reason the world is destroyed by fire, water and wind, and not earth is because earth is the 'receptacle'. Temporal cosmology: After the dissolution moment, this particular world system remains dissolved for a long time, this is called the 'empty' moment, but the more accurate term would be "the state of being dissolved". The beings that inhabited this realm formerly will migrate to other world systems, and perhaps return if their journeys lead here again.A mahākalpa is divided into four kalpas or "eons" (Chn/Jpn: 劫 kō; Thai: กัป; अन्तरकल्प), each distinguished from the others by the stage of evolution of the universe during that kalpa. The four kalpas are: Vivartakalpa विवर्तकल्प "Eon of evolution" – during this kalpa the universe comes into existence. Temporal cosmology: Vivartasthāyikalpa विवर्तस्थायिकल्प "Eon of evolution-duration" – during this kalpa the universe remains in existence in a steady state. Saṃvartakalpa संवर्तकल्प "Eon of dissolution" – during this kalpa the universe dissolves. Temporal cosmology: Saṃvartasthāyikalpa संवर्तस्थायिकल्प "Eon of dissolution-duration" – during this kalpa the universe remains in a state of emptiness.Each one of these kalpas is divided into twenty antarakalpas अन्तरकल्प (Pāli: antarakappa अन्तरकप्प; Chn/Jpn: 中劫, "inside eons"; Thai: อันตรกัป) each of about the same length. For the Saṃvartasthāyikalpa this division is merely nominal, as nothing changes from one antarakalpa to the next; but for the other three kalpas it marks an interior cycle within the kalpa. Temporal cosmology: Vivartakalpa The Vivartakalpa begins with the arising of the primordial wind, which begins the process of building up the structures of the universe that had been destroyed at the end of the last mahākalpa. As the extent of the destruction can vary, the nature of this evolution can vary as well, but it always takes the form of beings from a higher world being born into a lower world. 
The example of a Mahābrahmā being the rebirth of a deceased Ābhāsvara deva is just one instance of this, which continues throughout the Vivartakalpa until all the worlds are filled from the Brahmaloka down to Avīci Hell During the Vivartakalpa the first humans appear; they are not like present-day humans, but are beings shining in their own light, capable of moving through the air without mechanical aid, living for a very long time, and not requiring sustenance; they are more like a type of lower deity than present-day humans are.Over time, they acquire a taste for physical nutriment, and as they consume it, their bodies become heavier and more like human bodies; they lose their ability to shine, and begin to acquire differences in their appearance, and their length of life decreases. They differentiate into two sexes and begin to become sexually active. Then greed, theft and violence arise among them, and they establish social distinctions and government and elect a king to rule them, called Mahāsammata। महासम्मत, "the great appointed one". Some of them begin to hunt and eat the flesh of animals, which have by now come into existence. Temporal cosmology: Vivartasthāyikalpa First antarakalpa The Vivartasthāyikalpa begins when the first being is born into Naraka, thus filling the entire universe with beings. During the first antarakalpa of this eon, the duration of human lives declines from a vast but unspecified number of years (but at least several tens of thousands of years) toward the modern lifespan of less than 100 years. At the beginning of the antarakalpa, people are still generally happy. They live under the rule of a universal monarch or "wheel-turning king" (Sanskrit: cakravartin चक्रवर्ति; Jpn: 転輪聖王 Tenrin Jō-ō; Thai: พระเจ้าจักรพรรดิ), who conquer. The Mahāsudassana-sutta (DN.17) tells of the life of a cakravartin king, Mahāsudassana (Sanskrit: Mahāsudarśana) who lived for 336,000 years. The Cakkavatti-sīhanāda-sutta (DN.26) tells of a later dynasty of cakravartins, Daḷhanemi (Sanskrit: Dṛḍhanemi) and five of his descendants, who had a lifespan of over 80,000 years. The seventh of this line of cakravartins broke with the traditions of his forefathers, refusing to abdicate his position at a certain age, pass the throne on to his son, and enter the life of a śramaṇa श्रमण. As a result of his subsequent misrule, poverty increased; as a result of poverty, theft began; as a result of theft, capital punishment was instituted; and as a result of this contempt for life, murders and other crimes became rampant. Temporal cosmology: The human lifespan now quickly decreased from 80,000 to 100 years, apparently decreasing by about half with each generation (this is perhaps not to be taken literally), while with each generation other crimes and evils increased: lying, greed, hatred, sexual misconduct, disrespect for elders. During this period, according to the Mahāpadāna-sutta (DN.14) three of the four Buddhas of this antarakalpa lived: Kakusandha Buddha क्रकुच्छन्दः (Pāli: Kakusandha ककुन्ध), at the time when the lifespan was 40,000 years; Kanakamuni कनकमुनिः Buddha (Pāli: Konāgamana कोनागमन) when the lifespan was 30,000 years; and Kāśyapa काश्यपः Buddha (Pāli: Kassapa कस्सप) when the lifespan was 20,000 years. Temporal cosmology: Our present time is taken to be toward the end of the first antarakalpa of this Vivartasthāyikalpa, when the lifespan is less than 100 years, after the life of Śākyamuni शाक्यमुनिः Buddha (Pāli: Sakyamuni ), who lived to the age of 80. 
Temporal cosmology: The remainder of the antarakalpa is prophesied to be miserable: lifespans will continue to decrease, and all the evil tendencies of the past will reach their ultimate in destructiveness. People will live no longer than ten years, and will marry at five; foods will be poor and tasteless; no form of morality will be acknowledged. The most contemptuous and hateful people will become the rulers. Incest will be rampant. Hatred between people, even members of the same family, will grow until people think of each other as hunters do of their prey.Eventually a great war will ensue, in which the most hostile and aggressive will arm themselves with swords in their hands and go out to kill each other. The less aggressive will hide in forests and other secret places while the war rages. This war marks the end of the first antarakalpa. Temporal cosmology: Second antarakalpa At the end of the war, the survivors will emerge from their hiding places and repent their evil habits. As they begin to do good, their lifespan increases, and the health and welfare of the human race will also increase with it. After a long time, the descendants of those with a 10-year lifespan will live for 80,000 years, and at that time there will be a cakravartin king named Saṅkha शंख. During his reign, the current bodhisattva in the Tuṣita heaven will descend and be reborn under the name of Ajita अजित. He will enter the life of a śramaṇa and will gain perfect enlightenment as a Buddha; and he will then be known by the name of Maitreya (मैत्रेयः, Pāli: Metteyya मेत्तेय्य). Temporal cosmology: After Maitreya's time, the world will again worsen, and the lifespan will gradually decrease from 80,000 years to 10 years again, each antarakalpa being separated from the next by devastating war, with peaks of high civilization and morality in the middle. After the 19th antarakalpa, the lifespan will increase to 80,000 and then not decrease, because the Vivartasthāyikalpa will have come to an end. Temporal cosmology: Saṃvartakalpa The Saṃvartakalpa begins when beings cease to be born in Naraka. This cessation of birth then proceeds in reverse order up the vertical cosmology, i.e., pretas then cease to be born, then animals, then humans, and so on up to the realms of the deities. When these worlds as far as the Brahmaloka are devoid of inhabitants, a great fire consumes the entire physical structure of the world. It burns all the worlds below the Ābhāsvara worlds. When they are destroyed, the Saṃvartasthāyikalpa begins. Saṃvartasthāyikalpa There is nothing to say about the Saṃvartasthāyikalpa, since nothing happens in it below the Ābhāsvara worlds. It ends when the primordial wind begins to blow and build the structure of the worlds up again. Other destructions The destruction by fire is the normal type of destruction that occurs at the end of the Saṃvartakalpa. But every eighth mahākalpa, after seven destructions by fire, there is a destruction by water. This is more devastating, as it eliminates not just the Brahma worlds but also the Ābhāsvara worlds. Every sixty-fourth mahākalpa, after fifty six destructions by fire and seven destructions by water, there is a destruction by wind. This is the most devastating of all, as it also destroys the Śubhakṛtsna worlds. The higher worlds are never destroyed.
**XPOT** XPOT: Exportin-T is a protein that in humans is encoded by the XPOT gene. This gene encodes a protein belonging to the RAN-GTPase exportin family that mediates export of tRNA from the nucleus to the cytoplasm. Translocation of tRNA to the cytoplasm occurs once exportin has bound both tRNA and GTP-bound RAN.
**Heat transfer physics** Heat transfer physics: Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is energy stored in the temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. The energy is transformed (converted) among the various carriers. Heat transfer physics: The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to macroscale are the laws of thermodynamics, including conservation of energy. Introduction: Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for an infinitesimal volume used in heat transfer analysis is ∇⋅q = −ρcp(∂T/∂t) + Σi,j s˙i-j, where q is the heat flux vector, −ρcp(∂T/∂t) is the temporal change of internal energy (ρ is density, cp is specific heat capacity at constant pressure, T is temperature and t is time), and s˙i-j is the energy conversion to and from thermal energy (i and j denote the principal energy carriers). So, the terms represent energy transport, storage and transformation. The heat flux vector q is composed of three macroscopic fundamental modes, which are conduction (qk = −k∇T, k: thermal conductivity), convection (qu = ρcpuT, u: velocity), and radiation (qr = 2π∫0∞∫0π s Iph,ω sinθ dθ dω, ω: angular frequency, θ: polar angle, Iph,ω: spectral, directional radiation intensity, s: unit vector), i.e., q = qk + qu + qr. Introduction: Once the states and kinetics of the energy conversion and the thermophysical properties are known, the fate of heat transfer is described by the above equation. These atomic-level mechanisms and kinetics are addressed in heat transfer physics. The microscopic thermal energy is stored, transported, and transformed by the principal energy carriers: phonons (p), electrons (e), fluid particles (f), and photons (ph). Length and time scales: Thermophysical properties of matter and the kinetics of interaction and energy exchange among the principal carriers are based on the atomic-level configuration and interaction. Transport properties such as thermal conductivity are calculated from these atomic-level properties using classical and quantum physics. Quantum states of principal carriers (e.g., momentum, energy) are derived from the Schrödinger equation (called first principle or ab initio) and the interaction rates (for kinetics) are calculated using the quantum states and quantum perturbation theory (formulated as the Fermi golden rule). A variety of ab initio (Latin for "from the beginning") solvers (software) exist (e.g., ABINIT, CASTEP, Gaussian, Q-Chem, Quantum ESPRESSO, SIESTA, VASP, WIEN2k).
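The golden-rule expression referred to above can be written compactly; the following is the standard textbook form, shown here only for reference (H′ denotes the interaction or perturbation Hamiltonian, and |i⟩ and |f⟩ the initial and final carrier states, which are assumptions of notation rather than symbols quoted from the text):

```latex
\dot{\gamma}_{i \to f} = \frac{2\pi}{\hbar}\,\bigl|\langle f | H' | i \rangle\bigr|^{2}\,\delta(E_f - E_i)
```

Interaction (scattering) rates of this form, evaluated with the ab initio quantum states mentioned above, supply the kinetics used throughout the sections that follow.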
Electrons in the inner shells (core) are not involved in heat transfer, and calculations are greatly reduced by proper approximations about the inner-shell electrons. The quantum treatments, including equilibrium and nonequilibrium ab initio molecular dynamics (MD), involving larger lengths and times are limited by the computation resources, so various alternative treatments with simplifying assumptions have been used for the states and kinetics. In classical (Newtonian) MD, the motion of atoms or molecules (particles) is based on empirical or effective interaction potentials, which in turn can be based on curve fits of ab initio calculations or curve fits to thermophysical properties. From the ensembles of simulated particles, static or dynamic thermal properties or scattering rates are derived. At yet larger length scales (mesoscale, involving many mean free paths), the Boltzmann transport equation (BTE), which is based on classical Hamiltonian statistical mechanics, is applied. The BTE considers particle states in terms of position and momentum vectors (x, p), and this is represented as the state occupation probability. The occupation has equilibrium distributions (the known boson, fermion, and Maxwell–Boltzmann particles), and transport of energy (heat) is due to nonequilibrium (caused by a driving force or potential). Central to the transport is the role of scattering, which turns the distribution toward equilibrium. The scattering is represented by the relaxation time or the mean free path. The relaxation time (or its inverse, the interaction rate) is found from other calculations (ab initio or MD) or empirically. The BTE can be solved numerically with the Monte Carlo method, among others. Depending on the length and time scale, the proper level of treatment (ab initio, MD, or BTE) is selected. Heat transfer physics analyses may involve multiple scales (e.g., BTE using interaction rates from ab initio or classical MD) with states and kinetics related to thermal energy storage, transport and transformation. Length and time scales: So, heat transfer physics covers the four principal energy carriers and their kinetics from classical and quantum mechanical perspectives. This enables multiscale (ab initio, MD, BTE and macroscale) analyses, including low-dimensionality and size effects. Phonon: The phonon (quantized lattice vibration wave) is a central thermal energy carrier contributing to heat capacity (sensible heat storage) and conductive heat transfer in the condensed phase, and plays a very important role in thermal energy conversion. Its transport properties are represented by the phonon conductivity tensor Kp (W/m-K, from the Fourier law qk,p = -Kp⋅∇T) for bulk materials, and the phonon boundary resistance ARp,b [K/(W/m2)] for solid interfaces, where A is the interface area. The phonon specific heat capacity cv,p (J/kg-K) includes the quantum effect. The thermal energy conversion rate involving phonons is included in s˙i-j. Heat transfer physics describes and predicts cv,p, Kp, Rp,b (or conductance Gp,b) and s˙i-j based on atomic-level properties. Phonon: For an equilibrium potential ⟨φ⟩o of a system with N atoms, the total potential ⟨φ⟩ is found by a Taylor series expansion about the equilibrium, and this can be approximated by the second derivatives (the harmonic approximation), where di is the displacement vector of atom i and Γ is the spring (or force) constant given by the second-order derivatives of the potential.
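The harmonic expansion just described can be written explicitly; the following is the standard second-order form in the notation of the text (⟨φ⟩o: equilibrium potential, di: displacement of atom i, Γ: force-constant matrix):

```latex
\langle \varphi \rangle \simeq \langle \varphi \rangle_{o}
+ \frac{1}{2} \sum_{i,j} \mathbf{d}_i \cdot \boldsymbol{\Gamma}_{ij} \cdot \mathbf{d}_j ,
\qquad
\boldsymbol{\Gamma}_{ij} = \left. \frac{\partial^{2} \langle \varphi \rangle}{\partial \mathbf{d}_i\, \partial \mathbf{d}_j} \right|_{o}
```

The first-order term is absent because the expansion is taken about the equilibrium positions, where the forces vanish.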
The equation of motion for the lattice vibration in terms of the displacement of atoms [d(jl,t): displacement vector of the j-th atom in the l-th unit cell at time t] is where m is the atomic mass and Γ is the force constant tensor. The atomic displacement is the summation over the normal modes [sα: unit vector of mode α, ωp: angular frequency of wave, and κp: wave vector]. Using this plane-wave displacement, the equation of motion becomes the eigenvalue equation where M is the diagonal mass matrix and D is the harmonic dynamical matrix. Solving this eigenvalue equation gives the relation between the angular frequency ωp and the wave vector κp, and this relation is called the phonon dispersion relation. Thus, the phonon dispersion relation is determined by matrices M and D, which depend on the atomic structure and the strength of interaction among constituent atoms (the stronger the interaction and the lighter the atoms, the higher is the phonon frequency and the larger is the slope dωp/dκp). The Hamiltonian of phonon system with the harmonic approximation is where Dij is the dynamical matrix element between atoms i and j, and di (dj) is the displacement of i (j) atom, and p is momentum. From this and the solution to dispersion relation, the phonon annihilation operator for the quantum treatment is defined as where N is the number of normal modes divided by α and ħ is the reduced Planck constant. The creation operator is the adjoint of the annihilation operator, The Hamiltonian in terms of bκ,α† and bκ,α is Hp = Σκ,αħωp,α[bκ,α†bκ,α + 1/2] and bκ,α†bκ,α is the phonon number operator. The energy of quantum-harmonic oscillator is Ep = Σκ,α [fp(κ,α) + 1/2]ħωp,α(κp), and thus the quantum of phonon energy ħωp. Phonon: The phonon dispersion relation gives all possible phonon modes within the Brillouin zone (zone within the primitive cell in reciprocal space), and the phonon density of states Dp (the number density of possible phonon modes). The phonon group velocity up,g is the slope of the dispersion curve, dωp/dκp. Since phonon is a boson particle, its occupancy follows the Bose–Einstein distribution {fpo = [exp(ħωp/kBT)-1]−1, kB: Boltzmann constant}. Using the phonon density of states and this occupancy distribution, the phonon energy is Ep(T) = ∫Dp(ωp)fp(ωp,T)ħωpdωp, and the phonon density is np(T) = ∫Dp(ωp)fp(ωp,T)dωp. Phonon heat capacity cv,p (in solid cv,p = cp,p, cv,p : constant-volume heat capacity, cp,p: constant-pressure heat capacity) is the temperature derivatives of phonon energy for the Debye model (linear dispersion model), is where TD is the Debye temperature, m is atomic mass, and n is the atomic number density (number density of phonon modes for the crystal 3n). This gives the Debye T3 law at low temperature and Dulong-Petit law at high temperatures. Phonon: From the kinetic theory of gases, thermal conductivity of principal carrier i (p, e, f and ph) is where ni is the carrier density and the heat capacity is per carrier, ui is the carrier speed and λi is the mean free path (distance traveled by carrier before an scattering event). Thus, the larger the carrier density, heat capacity and speed, and the less significant the scattering, the higher is the conductivity. For phonon λp represents the interaction (scattering) kinetics of phonons and is related to the scattering relaxation time τp or rate (= 1/τp) through λp= upτp. 
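As a numerical illustration of the Debye heat capacity discussed above, the sketch below evaluates the Debye integral and shows the low-temperature T^3 behavior and the high-temperature Dulong–Petit limit; the Debye temperature and atomic number density used are illustrative assumptions, not values taken from the text.

```python
# Sketch: Debye-model phonon heat capacity per unit volume, illustrating the
# low-temperature T^3 behavior and the high-temperature Dulong-Petit limit.
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23          # Boltzmann constant, J/K

def debye_cv(T, TD, n):
    """Debye heat capacity per unit volume (J/m^3-K).
    T: temperature (K), TD: Debye temperature (K), n: atomic number density (1/m^3)."""
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    integral, _ = quad(integrand, 0.0, TD / T)
    return 9.0 * n * kB * (T / TD)**3 * integral

TD = 645.0                 # illustrative Debye temperature (K), roughly silicon-like
n = 5.0e28                 # illustrative atomic number density (1/m^3)
for T in (20.0, 100.0, 300.0, 2000.0):
    print(f"T = {T:7.1f} K  cv = {debye_cv(T, TD, n):.3e} J/m^3-K")
# At T >> TD the result approaches the Dulong-Petit value 3*n*kB:
print("Dulong-Petit limit 3*n*kB =", 3 * n * kB, "J/m^3-K")
```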
Phonons interact with other phonons, and with electrons, boundaries, impurities, etc., and λp combines these interaction mechanisms through the Matthiessen rule. At low temperatures, scattering by boundaries is dominant; with increasing temperature the interaction rates with impurities, electrons and other phonons become important, and finally phonon-phonon scattering dominates for T > 0.2TD. The interaction rates, obtained from quantum perturbation theory and MD, are reviewed in the literature. Phonon: A number of conductivity models are available with approximations regarding the dispersion and λp. Using the single-mode relaxation time approximation (∂fp′/∂t|s = −fp′/τp) and the gas kinetic theory, the Callaway phonon (lattice) conductivity model is obtained. With the Debye model (a single group velocity up,g and the specific heat capacity calculated above), this becomes an expression in terms of the lattice constant a = n−1/3 for a cubic lattice, where n is the atomic number density. The Slack phonon conductivity model, which mainly considers acoustic phonon scattering (three-phonon interactions), is given in terms of ⟨M⟩, the mean atomic weight of the atoms in the primitive cell, Va = 1/n, the average volume per atom, TD,∞, the high-temperature Debye temperature, the temperature T, No, the number of atoms in the primitive cell, and ⟨γ2G⟩, the mode-averaged square of the Grüneisen constant or parameter at high temperatures. This model is widely tested with pure nonmetallic crystals, and the overall agreement is good, even for complex crystals. Phonon: Based on these kinetics and atomic-structure considerations, a highly crystalline material with strong interactions, composed of light atoms (such as diamond or graphene), is expected to have a large phonon conductivity. Solids with more than one atom in the smallest unit cell representing the lattice have two types of phonons, i.e., acoustic and optical. (Acoustic phonons are in-phase movements of atoms about their equilibrium positions, while optical phonons are out-of-phase movements of adjacent atoms in the lattice.) Optical phonons have higher energies (frequencies), but make a smaller contribution to conduction heat transfer, because of their smaller group velocity and occupancy. Phonon: Phonon transport across heterostructure boundaries (represented by the phonon boundary resistance Rp,b) is modeled, under boundary scattering approximations, with the acoustic and diffuse mismatch models. Larger phonon transmission (small Rp,b) occurs at boundaries where material pairs have similar phonon properties (up, Dp, etc.), and in contrast a large Rp,b occurs when one material is softer (lower cut-off phonon frequency) than the other. Electron: Quantum electron energy states are found using the electron quantum Hamiltonian, which is generally composed of kinetic (-ħ2∇2/2me) and potential energy terms (φe). An atomic orbital, a mathematical function describing the wave-like behavior of either an electron or a pair of electrons in an atom, can be found from the Schrödinger equation with this electron Hamiltonian. Hydrogen-like atoms (a nucleus and an electron) allow for a closed-form solution to the Schrödinger equation with the electrostatic potential (the Coulomb law). The Schrödinger equation of atoms or atomic ions with more than one electron has not been solved analytically, because of the Coulomb interactions among electrons.
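In the notation used above, the single-electron Hamiltonian and its time-independent Schrödinger (eigenvalue) equation take the standard form, written here only for reference:

```latex
H_e = -\frac{\hbar^{2}}{2 m_e}\nabla^{2} + \varphi_e(\mathbf{x}),
\qquad
H_e\,\psi_{e}(\mathbf{x}) = E_e\,\psi_{e}(\mathbf{x})
```

For a hydrogen-like atom this eigenvalue problem has a closed-form solution, but with several mutually interacting electrons it does not.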
Thus, numerical techniques are used, and an electron configuration is approximated as a product of simpler hydrogen-like atomic orbitals (isolated electron orbitals). Molecules with multiple atoms (nuclei and their electrons) have molecular orbitals (MO, a mathematical function for the wave-like behavior of an electron in a molecule), which are obtained from simplified solution techniques such as the linear combination of atomic orbitals (LCAO). The molecular orbital is used to predict chemical and physical properties, and the difference between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is a measure of the excitability of the molecule. Electron: In the crystal structure of metallic solids, the free electron model (zero potential, φe = 0) for the behavior of valence electrons is used. However, in a periodic lattice (crystal), there is a periodic crystal potential, so the electron Hamiltonian becomes the sum of the kinetic term (me: electron mass) and the periodic potential, which is expressed as φc(x) = Σg φgexp[i(g∙x)] (g: reciprocal lattice vector). The time-independent Schrödinger equation with this Hamiltonian is the eigenvalue equation whose eigenfunction ψe,κ is the electron wave function and whose eigenvalue Ee(κe) is the electron energy (κe: electron wavevector). The relation between the wavevector κe and the energy Ee provides the electronic band structure. In practice, a lattice is a many-body system that includes interactions between electrons and nuclei in the potential, but this calculation can be too intricate. Thus, many approximate techniques have been suggested; one of them, density functional theory (DFT), uses functionals of the spatially dependent electron density instead of the full interactions. DFT is widely used in ab initio software (ABINIT, CASTEP, Quantum ESPRESSO, SIESTA, VASP, WIEN2k, etc.). The electron specific heat is based on the energy states and the occupancy distribution (the Fermi–Dirac statistics). In general, the heat capacity of electrons is small except at very high temperatures, where they are in thermal equilibrium with the phonons (lattice). Electrons contribute to heat conduction (in addition to charge carrying) in solids, especially in metals. The thermal conductivity tensor in a solid is the sum of the electronic and phonon thermal conductivity tensors, K = Ke + Kp. Electron: Electrons are affected by two thermodynamic forces [from the charge, ∇(EF/ec), where EF is the Fermi level and ec is the electron charge, and from the temperature gradient, ∇(1/T)] because they carry both charge and thermal energy, and thus the electric current je and heat flow q are described with the thermoelectric tensors (Aee, Aet, Ate, and Att) from the Onsager reciprocal relations. Converting these equations so that the je equation is in terms of the electric field ee and ∇T, and the q equation is in terms of je and ∇T (using scalar coefficients αee, αet, αte, and αtt for isotropic transport instead of Aee, Aet, Ate, and Att), the electrical conductivity/resistivity σe (Ω−1m−1)/ρe (Ω-m), the electronic thermal conductivity ke (W/m-K) and the Seebeck/Peltier coefficients αS (V/K)/αP (V) are defined from these coefficients. Various carriers (electrons, magnons, phonons, and polarons) and their interactions substantially affect the Seebeck coefficient. The Seebeck coefficient can be decomposed into two contributions, αS = αS,pres + αS,trans, where αS,pres is the sum of contributions to the carrier-induced entropy change, i.e., αS,pres = αS,mix + αS,spin + αS,vib (αS,mix: entropy-of-mixing, αS,spin: spin entropy, and αS,vib: vibrational entropy).
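For isotropic transport, the converted equations referred to above are commonly written in the following standard phenomenological form, using the scalar coefficients defined in the text (σe, αS, αP and ke); this particular arrangement is a conventional one rather than a form quoted from the source:

```latex
\mathbf{j}_e = \sigma_e\,\mathbf{e}_e - \sigma_e\,\alpha_S\,\nabla T ,
\qquad
\mathbf{q} = \alpha_P\,\mathbf{j}_e - k_e\,\nabla T ,
\qquad
\alpha_P = T\,\alpha_S
```

The Onsager reciprocity is what links the Peltier and Seebeck coefficients through αP = TαS.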
The other contribution αS,trans is the net energy transferred in moving a carrier divided by qT (q: carrier charge). The electron's contributions to the Seebeck coefficient are mostly in αS,pres. The αS,mix is usually dominant in lightly doped semiconductors. The change of the entropy-of-mixing upon adding an electron to a system is given by the so-called Heikes formula, in which feo = N/Na is the ratio of electrons to sites (the carrier concentration). Using the chemical potential (μ), the thermal energy (kBT) and the Fermi function, the above equation can be expressed in an alternative form, αS,mix = (kB/q)[(Ee − μ)/(kBT)]. Electron: The Seebeck effect can be extended to spins; a ferromagnetic alloy is a good example. The contribution to the Seebeck coefficient that results from the electrons' presence altering the system's spin entropy is given by αS,spin = ΔSspin/q = (kB/q)ln[(2s + 1)/(2s0 + 1)], where s0 and s are the net spins of the magnetic site in the absence and presence of the carrier, respectively. Many vibrational effects with electrons also contribute to the Seebeck coefficient. The softening of the vibrational frequencies, which produces a change in the vibrational entropy, is one example. The vibrational entropy is the negative temperature derivative of the free energy and involves Dp(ω), the phonon density of states for the structure. In the high-temperature limit, using series expansions of the hyperbolic functions, this simplifies to αS,vib = (ΔSvib/q) = (kB/q)Σi(-Δωi/ωi). Electron: The Seebeck coefficient derived in the above Onsager formulation is the mixing component αS,mix, which dominates in most semiconductors. The vibrational component is very important in high-band-gap materials such as B13C2. Microscopic transport (transport is a result of nonequilibrium) is described in terms of ue, the electron velocity vector, fe (feo), the electron nonequilibrium (equilibrium) distribution, τe, the electron scattering time, Ee, the electron energy, and Fte, the electric and thermal forces from ∇(EF/ec) and ∇(1/T). Electron: Relating the thermoelectric coefficients to the microscopic transport equations for je and q, the thermal, electric, and thermoelectric properties are calculated. Thus, ke increases with the electrical conductivity σe and temperature T, as the Wiedemann–Franz law states [ke/(σeTe) = (1/3)(πkB/ec)2 = 2.44×10−8 W-Ω/K2]. Electron transport (represented as σe) is a function of the carrier density ne,c and the electron mobility μe (σe = ecne,cμe). μe is determined by the electron scattering rates γ˙e (or relaxation time τe = 1/γ˙e) in various interaction mechanisms including interaction with other electrons, phonons, impurities and boundaries. Electron: Electrons interact with other principal energy carriers. Electrons accelerated by an electric field are relaxed through energy conversion to phonons (in semiconductors, mostly optical phonons), which is called Joule heating. Energy conversion between electric potential and phonon energy is considered in thermoelectric devices such as Peltier coolers and thermoelectric generators. Also, the study of interactions with photons is central to optoelectronic applications (e.g., light-emitting diodes, solar photovoltaic cells, etc.). Interaction rates or energy conversion rates can be evaluated using the Fermi golden rule (from perturbation theory) with an ab initio approach. Fluid particle: A fluid particle is the smallest unit (an atom or molecule) of the fluid phase (gas, liquid or plasma) that can be considered without breaking any chemical bond.
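Before turning to fluid particles in detail, a quick numerical check of the Wiedemann–Franz relation quoted above; this is only a sketch, and the copper conductivity value used is an illustrative assumption.

```python
# Sketch: Wiedemann-Franz estimate of the electronic thermal conductivity
# k_e = L * sigma_e * T, with the Lorenz number L = (1/3)*(pi*kB/ec)^2.
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
ec = 1.602176634e-19   # elementary charge, C

L = (1.0 / 3.0) * (math.pi * kB / ec)**2   # Lorenz number, W-Ohm/K^2
print(f"Lorenz number L = {L:.3e} W-Ohm/K^2 (text quotes ~2.44e-8)")

sigma_e = 5.8e7        # illustrative electrical conductivity of copper, 1/(Ohm-m)
T = 300.0              # K
print(f"Estimated k_e at {T} K: {L * sigma_e * T:.0f} W/m-K")
```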
The energy of a fluid particle is divided into potential, electronic, translational, vibrational, and rotational energies. The heat (thermal) energy storage in a fluid particle is through the temperature-dependent particle motion (translational, vibrational, and rotational energies). The electronic energy is included only if the temperature is high enough to ionize or dissociate the fluid particles or to include other electronic transitions. These quantum energy states of the fluid particles are found using their respective quantum Hamiltonian. These are Hf,t = −(ħ2/2m)∇2, Hf,v = −(ħ2/2m)∇2 + Γx2/2 and Hf,r = −(ħ2/2If)∇2 for the translational, vibrational and rotational modes (Γ: spring constant, If: the moment of inertia of the molecule). From the Hamiltonian, the quantized fluid particle energy state Ef and partition functions Zf [with the Maxwell–Boltzmann (MB) occupancy distribution] are found for the translational, vibrational, rotational and total contributions. Here, gf is the degeneracy, n, l, and j are the translational, vibrational and rotational quantum numbers, Tf,v is the characteristic temperature for vibration (= ħωf,v/kB, ωf,v: vibration frequency), and Tf,r is the rotational temperature [= ħ2/(2IfkB)]. The average specific internal energy ef is related to the partition function through ef = (kBT2/m)(∂lnZf/∂T)|N,V. Fluid particle: With the energy states and the partition function, the fluid particle specific heat capacity cv,f is the summation of contributions from the various kinetic energies (for a non-ideal gas the potential energy is also added). Because the total number of degrees of freedom in a molecule is determined by its atomic configuration, cv,f has different formulas for monatomic, diatomic and nonlinear polyatomic ideal gases, where Rg is the gas constant (= NAkB, NA: the Avogadro constant) and M is the molecular mass (kg/kmol). (For the polyatomic ideal gas, No is the number of atoms in a molecule.) In a gas, the constant-pressure specific heat capacity cp,f has a larger value, and the difference depends on the temperature T, the volumetric thermal expansion coefficient β and the isothermal compressibility κ [cp,f – cv,f = Tβ2/(ρfκ), ρf: the fluid density]. For dense fluids, the interactions between the particles (the van der Waals interaction) should be included, and cv,f and cp,f change accordingly. Fluid particle: The net motion of particles (under gravity or external pressure) gives rise to the convection heat flux qu = ρfcp,fufT. The conduction heat flux qk for an ideal gas is derived with the gas kinetic theory or the Boltzmann transport equation, and the thermal conductivity is expressed in terms of ⟨uf2⟩1/2, the RMS (root mean square) thermal velocity (⟨uf2⟩ = 3kBT/m from the MB distribution function, m: atomic mass), and τf-f, the relaxation time (or intercollision time period) [(21/2π d2nf ⟨uf⟩)−1 from the gas kinetic theory, ⟨uf⟩: average thermal speed (8kBT/πm)1/2, d: the collision diameter of the fluid particle (atom or molecule), nf: fluid number density]. Fluid particle: kf is also calculated using molecular dynamics (MD), which simulates physical movements of the fluid particles with the Newton equations of motion (classical) and a force field (from ab initio calculations or empirical properties). For the calculation of kf, equilibrium MD with the Green–Kubo relations, which express the transport coefficients in terms of integrals of time correlation functions (considering fluctuations), or nonequilibrium MD (prescribing a heat flux or temperature difference in the simulated system) is generally employed.
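A minimal sketch of the kinetic-theory estimate described above, combining the carrier form kf ≈ nf cv uf λf/3 (from the phonon section) with the mean free path obtained from the collision diameter and number density; the argon-like mass and collision diameter are illustrative assumptions, and this simple estimate is only expected to give the right order of magnitude.

```python
# Sketch: ideal-gas thermal conductivity from simple kinetic theory,
# k_f ~ n_f * c_v(per particle) * u_rms * lambda_f / 3,
# with lambda_f = 1 / (2^(1/2) * pi * d^2 * n_f).
import math

kB = 1.380649e-23                 # Boltzmann constant, J/K

def gas_conductivity(T, p, m, d):
    """T (K), p (Pa), m: molecular mass (kg), d: collision diameter (m)."""
    n_f = p / (kB * T)                                   # number density (ideal gas)
    u_rms = math.sqrt(3.0 * kB * T / m)                  # RMS thermal speed
    lam = 1.0 / (math.sqrt(2.0) * math.pi * d**2 * n_f)  # mean free path
    cv_particle = 1.5 * kB                               # monatomic gas, per particle
    return n_f * cv_particle * u_rms * lam / 3.0

T, p = 300.0, 101325.0
m_Ar = 39.948 * 1.66053906660e-27   # illustrative argon-like molecular mass, kg
d_Ar = 3.4e-10                      # illustrative collision diameter, m
print(f"k_f (argon-like, {T} K): {gas_conductivity(T, p, m_Ar, d_Ar):.3e} W/m-K")
```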
Fluid particle: Fluid particles can interact with other principal particles. Vibrational or rotational modes, which have relatively high energy, are excited or decay through the interaction with photons. Gas lasers employ the interaction kinetics between fluid particles and photons, and laser cooling has also been considered for the CO2 gas laser. Also, fluid particles can be adsorbed on solid surfaces (physisorption and chemisorption), and the frustrated vibrational modes in adsorbates (fluid particles) decay by creating e−-h+ pairs or phonons. These interaction rates are also calculated through ab initio calculations on the fluid particle and the Fermi golden rule. Photon: The photon is the quantum of electromagnetic (EM) radiation and the energy carrier for radiation heat transfer. The EM wave is governed by the classical Maxwell equations, and the quantization of the EM wave is used for phenomena such as blackbody radiation (in particular to explain the ultraviolet catastrophe). The quantum of EM wave (photon) energy at angular frequency ωph is Eph = ħωph, and it follows the Bose–Einstein distribution function (fph). The photon Hamiltonian for the quantized radiation field (second quantization) is written in terms of ee and be, the electric and magnetic fields of the EM radiation, εo and μo, the free-space permittivity and permeability, V, the interaction volume, ωph,α, the photon angular frequency for the α mode, and cα† and cα, the photon creation and annihilation operators. The vector potential ae of the EM fields (ee = −∂ae/∂t and be = ∇×ae) is expanded in plane-wave modes, where sph,α is the unit polarization vector and κα is the wave vector. Photon: Blackbody radiation, among various types of photon emission, employs the photon gas model with a thermalized energy distribution and no interphoton interaction. From the linear dispersion relation (i.e., dispersionless), the phase and group speeds are equal (uph = dωph/dκ = ωph/κ, uph: photon speed) and the Debye (used for the dispersionless photon) density of states is Dph,b,ωdω = ωph2dωph/π2uph3. With Dph,b,ω and the equilibrium distribution fph, the photon energy spectral distribution dIb,ω or dIb,λ (λph: wavelength) and the total emissive power Eb are derived (the Planck law and the Stefan–Boltzmann law, respectively). Photon: Compared to blackbody radiation, laser emission has high directionality (small solid angle ΔΩ) and spectral purity (narrow bands Δω).
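Returning briefly to the blackbody limit above, the following sketch checks numerically that integrating the Planck spectral distribution over all frequencies recovers the Stefan–Boltzmann result Eb = σSB T^4; the temperature is an illustrative value and the constants are the standard ones, so this is a consistency check rather than anything specific to the text.

```python
# Sketch: integrate the Planck blackbody emissive power over frequency and
# compare with the Stefan-Boltzmann law E_b = sigma_SB * T^4.
import math
from scipy.integrate import quad

h_bar = 1.054571817e-34   # reduced Planck constant, J-s
kB = 1.380649e-23         # Boltzmann constant, J/K
c0 = 2.99792458e8         # speed of light in vacuum, m/s

T = 1000.0                # illustrative temperature, K

# Dimensionless Planck integral: int_0^inf x^3/(e^x - 1) dx = pi^4/15
integral, _ = quad(lambda x: x**3 / math.expm1(x), 1e-12, 60.0)
E_b = (kB * T)**4 / (4.0 * math.pi**2 * c0**2 * h_bar**3) * integral

sigma_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2-K^4
print(f"Integrated Planck law: {E_b:.4e} W/m^2")
print(f"sigma_SB * T^4       : {sigma_SB * T**4:.4e} W/m^2")
```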
Lasers range from the far-infrared to the X-ray/γ-ray regimes, based on the resonant transition (stimulated emission) between electronic energy states. Near-field radiation from thermally excited dipoles and other electric/magnetic transitions is very effective within a short distance (on the order of the wavelength) from the emission sites. The BTE for the photon particle momentum pph = ħωphs/uph along direction s, experiencing absorption/emission s˙f,ph−e (= uphσph,ω[fph(ωph,T) − fph(s)], σph,ω: spectral absorption coefficient) and generation/removal s˙f,ph,i, can be written down; in terms of the radiation intensity (Iph,ω = uphfphħωphDph,ω/4π, Dph,ω: photon density of states), it is called the equation of radiative transfer (ERT). The net radiative heat flux vector is qr = qph = ∫0∞∫4π s Iph,ω dΩ dω. From the Einstein population rate equation, the spectral absorption coefficient σph,ω in the ERT is expressed in terms of γ˙ph,a, the interaction probability (absorption) rate or the Einstein coefficient B12 (J−1 m3 s−1), which gives the probability per unit time per unit spectral energy density of the radiation field (1: ground state, 2: excited state), and ne, the electron density (in the ground state). This can be obtained using the transition dipole moment μe with the FGR and the relationship between the Einstein coefficients. Averaging σph,ω over ω gives the average photon absorption coefficient σph. Photon: For the case of an optically thick medium of length L, i.e., σphL >> 1, and using the gas kinetic theory, the photon conductivity kph is 16σSBT3/(3σph) (σSB: Stefan–Boltzmann constant, σph: average photon absorption coefficient), and the photon heat capacity nphcv,ph is 16σSBT3/uph. Photon: Photons have the largest range of energy and are central in a variety of energy conversions. Photons interact with electric and magnetic entities. Examples are electric dipoles, which in turn are excited by optical phonons or fluid particle vibration, and the transition dipole moments of electronic transitions. In heat transfer physics, the interaction kinetics of photons is treated using perturbation theory (the Fermi golden rule) and the interaction Hamiltonian. The photon-electron interaction Hamiltonian is written in terms of pe, the dipole moment vector, and a† and a, the creation and annihilation operators for the internal motion of the electron. Photons also participate in ternary interactions, e.g., phonon-assisted photon absorption/emission (transition of electron energy level). The vibrational mode in fluid particles can decay or become excited by emitting or absorbing photons. Examples are solid and molecular gas laser cooling. Using ab initio calculations based on first principles along with EM theory, various radiative properties such as the dielectric function (electrical permittivity, εe,ω), the spectral absorption coefficient (σph,ω), and the complex refraction index (mω) are calculated for various interactions between photons and electric/magnetic entities in matter. For example, the imaginary part (εe,c,ω) of the complex dielectric function (εe,ω = εe,r,ω + i εe,c,ω) for an electronic transition across a bandgap is expressed in terms of V, the unit-cell volume, VB and CB, the valence and conduction bands, wκ, the weight associated with a κ-point, and pij, the transition momentum matrix element. Photon: The real part εe,r,ω is obtained from εe,c,ω using the Kramers–Kronig relation, where P denotes the principal value of the integral.
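The Kramers–Kronig relation invoked above has the following standard form, written here only for reference in the notation of the text:

```latex
\varepsilon_{e,r,\omega} = 1 + \frac{2}{\pi}\, P \int_{0}^{\infty}
\frac{\omega'\,\varepsilon_{e,c,\omega'}}{\omega'^{\,2} - \omega^{2}}\, d\omega'
```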
Photon: In another example, for the far-IR regions where the optical phonons are involved, the dielectric function (ε_e,ω) is calculated in terms of LO and TO, the longitudinal and transverse optical phonon modes, j, which runs over all the IR-active modes, and γ, the temperature-dependent damping term in the oscillator model. ε_e,∞ is the high-frequency dielectric permittivity, which can be calculated by a DFT calculation when the ions are treated as an external potential. Photon: From these dielectric function (ε_e,ω) calculations (e.g., with Abinit, VASP, etc.), the complex refractive index m_ω (= n_ω + iκ_ω, n_ω: refractive index and κ_ω: extinction index) is found, i.e., m_ω² = ε_e,ω = ε_e,r,ω + iε_e,c,ω. The surface reflectance R of an ideal surface under normal incidence from vacuum or air is given as R = [(n_ω − 1)² + κ_ω²]/[(n_ω + 1)² + κ_ω²]. The spectral absorption coefficient is then found from σ_ph,ω = 2ω κ_ω/u_ph. The spectral absorption coefficients for various electric entities are listed in the table below.
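The two closed-form relations just quoted (normal-incidence reflectance and spectral absorption coefficient from the complex refractive index) can be evaluated directly. The sketch below is illustrative only: the numeric values of n, κ, and ω are placeholders, not data from the article, and u_ph is taken as the vacuum speed of light.

```python
# Sketch: evaluating R = [(n-1)^2 + k^2]/[(n+1)^2 + k^2] and sigma = 2*omega*k/u_ph
# for an assumed complex refractive index m = n + i*k. Placeholder inputs only.
import math

c0 = 2.998e8  # speed of light in vacuum, m/s (u_ph in the text)

def normal_reflectance(n, k):
    """Reflectance of an ideal surface at normal incidence from vacuum/air."""
    return ((n - 1) ** 2 + k ** 2) / ((n + 1) ** 2 + k ** 2)

def absorption_coefficient(omega, k, u_ph=c0):
    """Spectral absorption coefficient sigma_ph,omega = 2*omega*kappa/u_ph (1/m)."""
    return 2.0 * omega * k / u_ph

n, k = 3.5, 0.01                 # assumed refractive and extinction indices
omega = 2 * math.pi * 3e14       # assumed angular frequency (rad/s), ~1 um light
print(normal_reflectance(n, k))            # ~0.31 for these placeholder values
print(absorption_coefficient(omega, k))    # ~1.3e5 1/m
```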
**Extended play** Extended play: An extended play (EP) is a musical recording that contains more tracks than a single but fewer than an album or LP record. Contemporary EPs generally contain four to five tracks, and are considered "less expensive and time-consuming" for an artist to produce than an album. An EP originally referred to specific types of records other than 78 rpm standard play (SP) and LP records, but it is now applied to mid-length CDs and downloads as well. In K-pop they are usually referred to as mini albums. Ricardo Baca of The Denver Post said, "EPs—originally extended-play 'single' releases that are shorter than traditional albums—have long been popular with punk and indie bands." In the United Kingdom, the Official Chart Company defines the boundary between EP and album classification at a maximum length of 25 minutes and no more than four tracks (not counting alternative versions of featured songs, if present). Background: History EPs were released in various sizes in different eras. The earliest multi-track records, issued around 1919 by Grey Gull Records, were vertically cut 78 rpm discs known as "2-in-1" records. These had finer than usual grooves, like Edison Disc Records. By 1949, when the 45 rpm single and 33⅓ rpm LP were competing formats, seven-inch 45 rpm singles had a maximum playing time of only about four minutes per side. Background: Partly as an attempt to compete with the LP introduced in 1948 by rival Columbia, RCA Victor introduced "Extended Play" 45s during 1952. Their narrower grooves, achieved by lowering the cutting levels and optionally applying sound compression, enabled them to hold up to 7.5 minutes per side while still being playable on a standard 45 rpm phonograph. In the early era, record companies released the entire content of LPs as 45 rpm EPs. These were usually 10-inch LPs (released until the mid-1950s) split onto two seven-inch EPs or 12-inch LPs split onto three seven-inch EPs, either sold separately or together in gatefold covers. This practice became much less common with the advent of triple-speed-available phonographs. Introduced by RCA in the US in 1952, EPs were first issued in Britain by EMI in April 1954. EPs were usually compilations of singles or album samplers and were typically played at 45 rpm on seven-inch (18 cm) discs, with two songs on each side. Background: RCA had success in the format with their top money earner, Elvis Presley, issuing 28 Elvis EPs between 1956 and 1967, many of which topped the separate Billboard EP chart during its brief existence. Other than those published by RCA, EPs were relatively uncommon in the United States and Canada, but they were widely sold in the United Kingdom, and in some other European countries, during the 1950s and 1960s. In Sweden the EP was for a long time the most popular record format, with as much as 85% of the market in the late 1950s being EPs. Billboard introduced a weekly EP chart in October 1957, noting that "the teen-age market apparently dominates the EP business, with seven out of the top 10 best-selling EP's featuring artists with powerful teen-age appeal — four sets by Elvis Presley, two by Pat Boone and one by Little Richard". Record Retailer printed an EP chart in 1960. The New Musical Express (NME), Melody Maker, Disc and Music Echo and the Record Mirror continued to list EPs on their respective singles charts. 
When the BBC and Record Retailer commissioned the British Market Research Bureau (BMRB) (now the Kantar Group) to compile a chart, it was restricted to singles, and EPs disappeared from the listings. The popularity of EPs in the US had declined in the early 1960s in favor of LPs. In the UK, Cliff Richard and the Shadows, both individually and collectively, and the Beatles were the most prolific artists issuing EPs in the 1960s, many of them highly successful releases. The Beatles' Twist and Shout outsold most singles for some weeks in 1963. The success of the EP in Britain lasted until around 1967, but the format later had a strong revival with punk rock in the late 1970s and its adaptation for 12" and CD singles. In Britain EPs were sometimes used to repackage songs that had previously been issued on albums. The Shadows released the EPs The Shadows No. 2 and The Shadows No. 3, both of which included songs found on the album The Shadows. The songs from Adam Faith's first album were also released on three EPs, all of which had the same cover as the album but listed the tracks on the top. The Beatles EP Twist and Shout contained only songs found on their Please Please Me album. Background: Notable EP releases Some classical music albums released at the beginning of the LP era were also distributed as EP albums—notably, the seven operas that Arturo Toscanini conducted on radio between 1944 and 1954. These opera EPs, originally broadcast on the NBC Radio network and manufactured by RCA, which owned the NBC network at the time, were made available in both 45 rpm and 33⅓ rpm versions. In the 1990s, they began appearing on compact discs. During the 1950s, RCA published several EP albums of Walt Disney movies, containing both the story and the songs. These usually featured the original casts of actors and actresses. Each album contained two seven-inch records, plus a fully illustrated booklet containing the text of the recording so that children could follow along by reading. Some of the titles included Snow White and the Seven Dwarfs (1937), Pinocchio (1940), and what was then a recent release, the movie version of 20,000 Leagues Under the Sea from 1954. The recording and publishing of 20,000 Leagues was unusual: it did not employ the movie's cast, and years later a 12-inch 33⅓ rpm album, with a nearly identical script but another, different cast, was sold by Disneyland Records in conjunction with the re-release of the movie in 1963. Because of the popularity of the 7" and other formats, SP (78 rpm, 10") records became less popular and the production of SPs in Japan was suspended in 1963. In the Philippines, seven-inch EPs marketed as "mini-LPs" (but distinctly different from the mini-LPs of the 1980s) were introduced in 1970, with tracks selected from an album and packaging resembling the album they were taken from. This mini-LP format also became popular in America in the early 1970s for promotional releases, and also for use in jukeboxes. Stevie Wonder included a bonus four-song EP with his double LP Songs in the Key of Life in 1976. During the 1970s and 1980s there was less standardization, and EPs were made on seven-inch (18 cm), 10-inch (25 cm) or 12-inch (30 cm) discs running at either 33⅓ or 45 rpm. Some novelty EPs used odd shapes and colors, and a few of them were picture discs. Alice in Chains was the first band ever to have an EP reach number one on the Billboard album chart. Its EP, Jar of Flies, was released on January 25, 1994. 
In December 1997, American rapper Eminem released the Slim Shady EP, which was the first introduction of his "Slim Shady" alter ego. In 2004, Linkin Park and Jay-Z's collaboration EP, Collision Course, was the next EP to reach the number one spot after Alice in Chains. In 2010, the cast of the television series Glee became the first artist to have two EPs reach number one, with Glee: The Music, The Power of Madonna on the week of May 8, 2010, and Glee: The Music, Journey to Regionals on the week of June 26, 2010. Background: In 2010, Warner Bros. Records revived the format with their "Six-Pak" offering of six songs on a compact disc. Background: EPs in the digital and streaming era Due to the increased popularity of music downloads and music streaming beginning in the late 2000s, EPs have become a common marketing strategy for pop musicians wishing to remain relevant and deliver music on more consistent timeframes leading up to or following full studio albums. In the late 2000s to early 2010s, reissues of studio albums with expanded track listings were common, with the new music often being released as stand-alone EPs. In October 2010, a Vanity Fair article on the trend described post-album EPs as "the next step in extending albums' shelf lives, following the "deluxe" editions that populated stores during the past few holiday seasons—add a few tracks to the back end of an album and release one of them to radio, slap on a new coat of paint, and—voila!—a stocking stuffer is born." Examples of such releases include Lady Gaga's The Fame Monster (2009), following her debut album The Fame (2008), and Kesha's Cannibal (2010), following her debut album Animal (2010). Background: A 2019 article in Forbes discussing Miley Cyrus' plan to release her then-upcoming seventh studio album as a trilogy of three EPs, beginning with She Is Coming, stated: "By delivering a trio of EPs throughout a period of several months, Miley is giving her fans more of what they want, only in smaller doses. When an artist drops an album, they run the risk of it being forgotten in a few weeks, at which point they need to start work on the follow-up, while still promoting and touring their recent effort. Miley is doing her best to game the system by recording an album and delivering it to fans in pieces." However, this release strategy was later scrapped in favor of the conventional album release of Plastic Hearts. Major-label pop musicians who had previously employed such release strategies include Colbie Caillat, whose fifth album Gypsy Heart (2014) was released following an EP of the album's first five tracks, Gypsy Heart: Side A, issued three months prior to the full album, and Jessie J, whose fourth studio album R.O.S.E. (2018) was released as four EPs in as many days, entitled R (Realizations), O (Obsessions), S (Sex) and E (Empowerment). Definition: The first EPs were seven-inch vinyl records with more tracks than a normal single (typically five to nine of them). Although they shared size and speed with singles, they were a recognizably different format from the seven-inch single. Although they could be named after a lead track, they were generally given a different title. Examples include The Beatles' The Beatles' Hits EP from 1963, and The Troggs' Troggs Tops EP from 1966, both of which collected previously released tracks. The playing time was generally between 10 and 15 minutes. They also came in cardboard picture sleeves at a time when singles were usually issued in paper company sleeves. 
EPs tended to be album samplers or collections of singles. EPs of all-original material began to appear in the 1950s. Examples are Elvis Presley's Love Me Tender from 1956 and "Just for You", "Peace in the Valley" and "Jailhouse Rock" from 1957, and The Kinks' Kinksize Session from 1964. Definition: Twelve-inch EPs were similar, but generally had between three and five tracks and a length of over 12 minutes. Like seven-inch EPs, these were given titles. EP releases were also issued in cassette and 10-inch vinyl formats. With the advent of the compact disc (CD), more music was often included on "single" releases, with four or five tracks being common, and playing times of up to 25 minutes. These extended-length singles became known as maxi singles and, while commensurate in length with an EP, were distinguished by being designed to feature a single song, with the remaining songs considered B-sides, whereas an EP was designed not to feature a single song, instead resembling a mini album. Definition: EPs of original material regained popularity in the punk rock era, when they were commonly used for the release of new material, e.g. Buzzcocks' Spiral Scratch EP. These featured four-track seven-inch singles played at 33⅓ rpm, the most common understanding of the term EP. Beginning in the 1980s, many so-called "singles" have been sold in formats with more than two tracks. Because of this, the definition of an EP is not determined only by the number of tracks or the playing time; an EP is typically seen as four (or more) tracks of equal importance, as opposed to a four-track single with an obvious A-side and three B-sides. Definition: In the United States, the Recording Industry Association of America, the organization that declares releases "gold" or "platinum" based on numbers of sales, defines an EP as containing three to five songs or running under 30 minutes. On the other hand, The Recording Academy's rules for the Grammy Awards state that any release with five or more different songs and a running time of over 15 minutes is considered an album, with no mention of EPs. In the United Kingdom, any record with more than four distinct tracks or with a playing time of more than 25 minutes is classified as an album for sales-chart purposes. If priced as a single, such records will not qualify for the main album chart but can appear in the separate Budget Albums chart. An intermediate format between EPs and full-length LPs is the mini-LP, which was a common album format in the 1980s. These generally contained 20–30 minutes of music and about seven tracks. In underground dance music, vinyl EPs have been a longstanding medium for releasing new material, e.g. Fourteenth Century Sky by The Dust Brothers. Double EPs: A double extended play is a name typically given to vinyl records or compact discs released as a set of two discs, each of which would normally qualify as an EP. The name is thus analogous to double album. As vinyl records, the most common format for the double EP, they consist of a pair of 7-inch discs recorded at 45 or 33⅓ rpm, or two 12-inch discs recorded at 45 rpm. The format is useful when an album's worth of material is being pressed by a small plant geared for the production of singles rather than albums, and it may have novelty value which can be turned to advantage for publicity purposes. Double EPs are rare, since the amount of material recordable on a double EP could usually be more economically and sensibly recorded on a single vinyl LP. 
Double EPs: In the 1950s, Capitol Records released a number of double EPs by its more popular artists, including Les Paul. The pair of double EPs (EBF 1–577, sides 1 to 8!) were described on the original covers as "parts ... of a four-part album". In 1960, Joe Meek released four tracks from his planned I Hear a New World LP on an EP that was marked "Part 1". A second EP was planned but never appeared; only the sleeve was printed. The first double EP released in Britain was the Beatles' Magical Mystery Tour film soundtrack. Released in December 1967 on EMI's Parlophone label, it contained six songs spread over two 7-inch discs and was packaged with a lavish color booklet. In the United States and some other countries, the songs were augmented by the band's single A- and B-sides from 1967 to create a full LP – a practice that was common in the US but considered exploitative in the UK. The Style Council album The Cost of Loving was originally issued as two 12-inch EPs. Double EPs: It is more common for artists to release two 12-inch 45s rather than a single 12-inch LP. Though there are 11 songs that total about 40 minutes, enough for one LP, the songs are spread across two 12" 45 rpm discs. Also, the vinyl pressing of Hail to the Thief by Radiohead uses this practice but is considered to be a full-length album. In 1982 Cabaret Voltaire released their studio album "2x45" on the UK-based label Rough Trade, featuring extended tracks over four sides of two 12-inch 45 rpm discs, with graphics by artist Neville Brody. The band subsequently released a further album in this format, 1985's "Drinking Gasoline", on the Virgin Records label. Double EPs: There are a limited number of double EPs which serve other purposes, however. An example is the Dunedin Double EP, which contains tracks by four different bands. Using a double EP in this instance allowed each band to have its tracks occupy a different side. In addition, the groove on the physical record could be wider and thus allow for a louder album. Jukebox EP: In the 1960s and 1970s, record companies released EP versions of long-play (LP) albums for use in jukeboxes. These were commonly known as "compact 33s" or "little LPs". They were played at 33⅓ rpm, were pressed on seven-inch vinyl, and frequently had as many as six songs. What made them EP-like was that some songs were omitted for time purposes, and the tracks deemed the most popular were left on. Unlike most EPs before them, and most seven-inch vinyl in general (pre-1970s), these were issued in stereo. Biggest selling debut EP of all time: The hard rock band Ugly Kid Joe holds the record for the highest-selling debut EP with As Ugly as They Wanna Be, which sold two million copies in 1991. In the United Kingdom As Ugly as They Wanna Be was classed as a mini-album, and therefore became their first Top 75 album chart hit, peaking at number 9 in 1992. Where the UK singles chart is concerned (the chart where most EPs charted between the scrapping of the EP charts and the advent of single-track downloads), the first EP to reach number one was Excerpts from "The Roussos Phenomenon" by Greek singer Demis Roussos, a four-track EP known for its lead track "Forever and Ever".
**Sleep-related breathing disorder** Sleep-related breathing disorder: A sleep-related breathing disorder is a sleep disorder in which abnormalities in breathing occur during sleep that may or may not be present while awake. According to the International Classification of Sleep Disorders, sleep-related breathing disorders are classified as follows: sleep apnea, including the more specific disorders of obstructive sleep apnea and central sleep apnea; central hypoventilation syndromes; obesity hypoventilation syndrome; sleep-related hypoxemia disorder; sleep-related hypoventilation due to a medication or substance, or due to a medical disorder; and isolated symptoms produced by breathing during sleep, including snoring and catathrenia. Severity of sleep apneas: The most severe of the sleep apneas is obstructive sleep apnea. An apnea is obstructive only when polysomnography reveals a continued inspiratory effort, evidenced by abdominal and thoracic muscle contraction. Sleep apnea is measured by the apnea-hypopnea index (AHI), which is determined with a sleep study. AHI values for adults are categorized as: normal, AHI < 5; mild sleep apnea, 5 ≤ AHI < 15; moderate sleep apnea, 15 ≤ AHI < 30; severe sleep apnea, AHI ≥ 30. An episode is when a person pauses in breathing or stops breathing altogether. Treatments of sleep apnea: The most common treatment for sleep apnea is the use of continuous positive airway pressure with a CPAP machine. A CPAP machine pushes air through the nose and/or mouth, which applies air pressure to keep the throat open while asleep. This prevents pauses in breathing and is particularly helpful for people with more severe sleep apnea. The CPAP machine is usually the first line of treatment and one of the most effective ways to treat the disease. It can improve the quality of sleep and can also lower blood pressure and help with other medical problems associated with the disease. Treatments of sleep apnea: Other forms of treatment to consider are oral appliances or surgery. An oral appliance helps keep the jaw forward and the tongue relaxed. There are a number of surgical treatments available; the most common of these is called a uvulopalatopharyngoplasty (UPPP). Patients can also consult their doctors about losing weight, which can also help in most cases. Central hypoventilation syndrome: "Central hypoventilation syndrome, sometimes referred to as Ondine's curse, is an inability of the brain to detect changes in carbon dioxide levels in the body during sleep. The human body determines the amount of oxygen it needs by monitoring how much carbon dioxide is in the blood." Central hypoventilation syndrome is caused by certain receptors in the brain failing to recognize changes in carbon dioxide levels during sleep, leading to a low breathing rate and a low blood concentration of oxygen. Some of the causes of this disease are sudden-onset obesity or spinal cord surgery. This is one of the least common sleep disorders and the least researched. Treatments of central hypoventilation syndrome: The treatment for central hypoventilation syndrome involves breathing support during sleep, often through the assistance of a mechanical ventilator. In some cases, this type of breathing support may be necessary during waking hours as well. Often referred to as oxygen therapy, it can use a CPAP machine as well. 
As with sleep apnea, weight loss helps considerably and is listed as one of the main ways to address the problem.
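As an illustrative aside (a minimal sketch, not a clinical tool and not from the source), the adult AHI severity bands quoted earlier can be encoded directly; the comment on the index itself assumes the standard definition of AHI as apneas plus hypopneas per hour of sleep.

```python
# Minimal sketch of the adult AHI severity bands described above.
def ahi_category(ahi: float) -> str:
    if ahi < 5:
        return "normal"
    elif ahi < 15:
        return "mild sleep apnea"
    elif ahi < 30:
        return "moderate sleep apnea"
    return "severe sleep apnea"

# AHI is events per hour of sleep: (apneas + hypopneas) / hours asleep.
events, hours = 45, 6.0
print(ahi_category(events / hours))   # 7.5 -> "mild sleep apnea"
```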
**Part (music)** Part (music): A part in music refers to a component of a musical composition. Because there are multiple ways to separate these components, there are several contradictory senses in which the word "part" is used: the musical instructions for any individual instrument or voice (often given as a handwritten, printed, or digitized document) of sheet music (as opposed to the full score, which shows all parts of the ensemble in the same document). A musician's part usually does not contain instructions for the other players in the ensemble, only instructions for that individual. Part (music): the music played by any group of musicians who all perform together for a given piece; in a symphony orchestra, a dozen or more cello players may all play "the same part" even if they each have their own physical copy of the music. This part may be in unison or may be harmonized, and may even sometimes contain counter-melodies within it. A percussion part may sometimes only contain rhythm. This sense of "part" does not require a written copy of the music; a bass player in a rock band "plays the bass part" even if there is no written version of the song. Part (music): any individual melody (or voice) that can be abstracted as continuous and independent from other notes being performed simultaneously in polyphony. Within the music played by a single pianist, one can often identify outer parts (the top and bottom parts) or an inner part (those in between). On the other hand, within a choir, "outer parts" and "inner parts" would refer to music performed by different singers. (See § Polyphony and part-writing) a section in the large-scale form of a piece. (See § Musical form) Polyphony and part-writing: Part-writing (or voice leading) is the composition of parts in consideration of harmony and counterpoint. In the context of polyphonic composition the term voice may be used instead of part to denote a single melodic line or textural layer. The term is generic, and is not meant to imply that the line should necessarily be vocal in character, instead referring to instrumentation, the function of the line within the counterpoint structure, or simply to register. The historical development of polyphony and part-writing is a central thread through European music history. The earliest notated pieces of music in Europe were Gregorian chant melodies. It appears that the Codex Calixtinus (12th century) contains the earliest extant decipherable part music. Many histories of music trace the development of new rules for dissonances and shifting stylistic possibilities for relationships between parts. Polyphony and part-writing: In some places and time periods, part-writing has been systematized as a set of counterpoint rules taught to musicians as part of their early education. One notable example is Johann Fux's Gradus ad Parnassum, which dictates a style of counterpoint writing that resembles the work of the famous Renaissance composer Palestrina. The standard for most Western music theory in the twentieth century is generalized from the work of Classical composers in the common practice period. For example, a recent general music textbook states: "Part writing is derived from four-voice chorales written by J.S. Bach. The late Baroque era composer wrote a total of 371 harmonized chorales. Today most students reference Albert Riemenschneider's 1941 compilation of Bach chorales." 
Polyphony and part-writing: Polyphony and part-writing are also present in many popular music and folk music traditions, although they may not be described as explicitly or systematically as they sometimes are in the Western tradition. The lead part or lead voice is the most prominent, melodically important voice (often, but not necessarily, the highest in pitch) and is played by a lead instrument or performer (e.g. a lead vocalist). Musical form: In musical forms, a part may refer to a subdivision in the structure of a piece. Sometimes "part" is a title given by the composer or publisher to the main sections of a large-scale work, especially oratorios. For example, Handel's Messiah is organized into Part I, Part II, and Part III, each of which contains multiple scenes and one or two dozen individual arias or choruses. Musical form: Other times, "part" is used to refer in a more general sense to any identifiable section of the piece. This is for example the case in the widely used ternary form, usually schematized as A–B–A. In this form the first and third parts (A) are musically identical, or very nearly so, while the second part (B) in some way provides a contrast with them. In this meaning of part, similar terms used are section, strain, or turn.
**TAAR2** TAAR2: Trace amine-associated receptor 2 (TAAR2), formerly known as G protein-coupled receptor 58 (GPR58), is a protein that in humans is encoded by the TAAR2 gene. TAAR2 is coexpressed with Gα proteins; however, as of February 2017, its signal transduction mechanisms have not been determined. Tissue distribution: Human TAAR2 (hTAAR2) is expressed in the cerebellum, olfactory sensory neurons in the olfactory epithelium, and leukocytes (i.e., white blood cells), among other tissues. hTAAR1 and hTAAR2 are both required for white blood cell activation by trace amines in granulocytes. Using brain histochemical staining of mice with a LacZ insertion into the TAAR2 gene, a histochemical reaction was found in the glomerular layer of the olfactory bulb, and intensive staining was found in the deeper layers as well. The histochemical reaction was observed in the fibers of the olfactory nerve, in the glomeruli of the glomerular layer, in several short axon (SA) cells (outer plexiform layer or granular layer), and in neuronal projections that were visualized throughout the depth of the olfactory bulb. Furthermore, LacZ staining was observed in the limbic areas of the brain receiving olfactory input, i.e., the piriform cortex molecular area, the hippocampus (CA1 field, pyramidal layer), the hypothalamic lateral zone (zona incerta) and the lateral habenula. In addition, a histochemical reaction was found in the midbrain raphe nuclei and the primary somatosensory area of the cortex (layer 5). Real-time quantitative PCR with reverse transcription confirmed TAAR2 gene expression in mouse brain areas such as the frontal cortex, hypothalamus, and brainstem. Involvement in the functioning of monoamine systems: TAAR2 knockout mice have a significantly higher level of dopamine in striatal tissue than wild-type littermates and a lower level of norepinephrine in the hippocampus. They also have lower levels of MAO-B expression in the midbrain and striatum. A significantly higher number of dopamine neurons was detected in TAAR2-KO mice in the substantia nigra pars compacta. TAAR2 knockout mice show a significantly higher level of horizontal activity and a lower immobilization time in the forced swim test. Involvement in adult neurogenesis: It has been found that TAAR2 knockout mice have an increased number of neuroblast-like and proliferating cells in both the subventricular and subgranular zones of the dentate gyrus in comparison to wild-type animals. Furthermore, TAAR2 knockout mice have an increased brain-derived neurotrophic factor (BDNF) level in the striatum. A single nucleotide polymorphism nonsense mutation of the TAAR2 gene is associated with schizophrenia. TAAR2 is a probable pseudogene in 10–15% of Asians as a result of a polymorphism that produces a premature stop codon at amino acid 168. Involvement in immune cell migration and function: T cells, B cells, and peripheral mononuclear cells express TAAR2 mRNA. Migration toward TAAR1 ligands required both TAAR1 and TAAR2 expression, based on siRNA experiments. In T cells the same stimuli triggered cytokine secretion, while in B cells immunoglobulin secretion was triggered. Possible Ligands: 3-iodothyronamine (T1AM) was identified as a nonselective ligand for TAAR2. Additional TAAR1 ligands, p-tyramine and β-phenylethylamine (2-PEA), trigger TAAR2-dependent actions, though direct binding has not been demonstrated.
**Steam whistle** Steam whistle: A steam whistle is a device used to produce sound in the form of a whistle using live steam, which creates, projects, and amplifies its sound by acting as a vibrating system (compare to train horn). Operation: The whistle consists of the following main parts, as seen on the drawing: the whistle bell (1), the steam orifice or aperture (2), and the valve (9). Operation: When the lever (10) is actuated (usually via a pull cord), the valve opens and lets the steam escape through the orifice. The steam will alternately compress and rarefy in the bell, creating the sound. The pitch, or tone, is dependent on the length of the bell and also on how far the operator has opened the valve. Some locomotive engineers invented their own distinctive style of whistling. Uses of steam whistles: Steam whistles were often used in factories and similar places to signal the start or end of a shift, etc. Steam-powered railway locomotives, traction engines, and steamships have traditionally been fitted with a steam whistle for warning and communication purposes. Large-diameter, low-pitched steam whistles were used on lighthouses, likely beginning in the 1850s. The earliest use of steam whistles was as boiler low-water alarms in the 18th century and early 19th century. During the 1830s, whistles were adopted by railroads and steamship companies. Uses of steam whistles: Railway whistles Steam warning devices have been used on trains since 1833, when George Stephenson invented and patented a steam trumpet for use on the Leicester and Swannington Railway. Period literature makes a distinction between a steam trumpet and a steam whistle. Uses of steam whistles: A copy of the trumpet drawing signed May 1833 shows a device about eighteen inches high with an ever-widening trumpet shape with a six-inch diameter at its top or mouth. It is said that George Stephenson invented his trumpet after an accident on the Leicester and Swannington Railway where a train hit either a cart, or a herd of cows, on a level crossing and there were calls for a better way of giving a warning. Although no-one was injured, the accident was deemed serious enough to warrant Stephenson's personal intervention. One account states that [driver] Weatherburn had 'mouthblown his horn' at the crossing in an attempt to prevent the accident, but that no attention had been paid to this audible warning, perhaps because it had not been heard. Uses of steam whistles: Stephenson subsequently called a meeting of directors and accepted the suggestion of the company manager, Ashlin Bagster, that a horn or whistle which could be activated by steam should be constructed and fixed to the locomotives. Stephenson later visited a musical instrument maker on Duke Street in Leicester, who on Stephenson's instructions constructed a 'Steam Trumpet' which was tried out in the presence of the board of directors ten days later. Uses of steam whistles: Stephenson mounted the trumpet on the top of the boiler's steam dome, which delivers dry steam to the cylinders. The company went on to mount the device on its other locomotives. Locomotive steam trumpets were soon replaced by steam whistles. Air whistles were used on some diesel and electric locomotives, but these mostly employ air horns. Music: An array of steam whistles arranged to play music is referred to as a calliope. 
Uses of steam whistles: In York, Pennsylvania, a variable-pitch steam whistle at the New York Wire Company has been played annually on Christmas Eve since 1925 (except in 1986 and 2005) in what has come to be known as "York's Annual Steam Whistle Christmas Concert". On windy nights, area residents report hearing the concert as far as 12 to 15 miles away. The whistle, which is in the Guinness Book of World Records, was powered by an air compressor during the 2010 concert due to the costs of maintaining and running the boiler. Uses of steam whistles: Lighthouse fog signals Beginning in 1869, steam whistles were installed at lighthouse stations as a way of warning mariners in periods of fog, when the lighthouse is not visible. 10" diameter whistles were used as fog signals throughout the United States for many years, until they were later replaced by compressed-air diaphragm or diaphone horns. Types of whistles: Plain whistle – an inverted cup mounted on a stem, as in the illustration above. In Europe, railway steam whistles were typically loud, shrill, single-note plain whistles. In the UK, locomotives were usually fitted with only one or two of these whistles, the latter having different tones and being controlled individually to allow more complex signalling. On railroads in Finland, two single-note whistles were used on every engine, one shrill and one of a lower tone; they were used for different signaling purposes. The Deutsche Reichsbahn of Germany introduced another whistle design in the 1920s called the "Einheitspfeife", conceived as a single-note plain whistle which already had a very deep-pitched and loud sound; if the whistle trigger was pulled down only half of its travel, an even lower tone, like that of a chime whistle, could also be produced. This whistle is the reason for the typical "long high - short low - short high" signal sound of steam locomotives in Germany. Types of whistles: Chime whistle – two or more resonant bells or chambers that sound simultaneously. In America, railway steam whistles were typically compact chime whistles with more than one whistle contained within, creating a chord. In Australia, on the New South Wales Government Railways after the 1924 re-classification, many steam locomotives either had 5-chime whistles fitted (including many locomotives from the pre-1924 re-classification) or were built new with 5-chime whistles. 3-chimes (3 compact whistles within one) were very popular, as well as 5-chimes and 6-chimes. In some cases chime whistles were used in Europe. Ships such as the Titanic were equipped with chimes consisting of three separate whistles (in the case of the Titanic the whistles measured 9, 12, and 15 inches in diameter). The Japanese National Railways used a chime whistle that sounds like a very deep single-note plain whistle, because the chords were simply accessed through a parallel circuit when the whistle trigger is pulled down. Types of whistles: Organ whistle – a whistle with mouths cut in the side, usually a long whistle in relation to its diameter, hence the name. These whistles were very common on steamships, especially those manufactured in the UK. Gong – two whistles facing in opposite directions on a common axis. These were popular as factory whistles. Some were composed of three whistle chimes. Variable pitch whistle – a whistle containing an internal piston available for changing pitch. This whistle type could be made to sound like a siren or to play a melody. 
Often called a fire alarm whistle, wildcat whistle, or mocking bird whistle. Types of whistles: Toroidal or Levavasseur whistle – a whistle with a torus-shaped (doughnut-shaped) resonant cavity paralleling the annular gas orifice, named after Robert Levavasseur, its inventor. Unlike a conventional whistle, the diameter (and sound level) of a ring-shaped whistle can be increased without altering resonance chamber cross-sectional area (preserving frequency), allowing construction of a very large diameter high frequency whistle. The frequency of a conventional whistle declines as diameter is increased. Other ring-shaped whistles include the Hall-Teichmann whistle, Graber whistle, Ultrawhistle, and Dynawhistle. Helmholtz whistle – a whistle with a cross-sectional area exceeding that of the whistle bell opening, often shaped like a bottle or incandescent light bulb. The frequency of this whistle relative to its size is lower than that of a conventional whistle, and therefore these whistles have found application in small-gauge steam locomotives. Also termed a Bangham whistle. Types of whistles: Hooter whistle - a single-note whistle of greater diameter with a longer bell, resulting in a deeper "hoot" sound when blown. These found use in rail, marine, and industrial applications. In the United States, the Norfolk and Western Railway made extensive use of these kinds of whistles, which were noted for the squeaks and chirps produced when blown in addition to their low pitch. Whistle acoustics: Resonant frequency A whistle has a characteristic natural resonant frequency that can be detected by gently blowing human breath across the whistle rim, much as one might blow over the mouth of a bottle. The active sounding frequency (when the whistle is blown on steam) may differ from the natural frequency as discussed below. These comments apply to whistles with a mouth area at least equal to the cross-sectional area of the whistle. Whistle acoustics: Whistle length – The natural resonant frequency decreases as the length of the whistle is increased. Doubling the effective length of a whistle reduces the frequency by one half, assuming that the whistle cross-sectional area is uniform. A whistle is a quarter-wave generator, which means that the sound wave generated by a whistle is about four times the whistle length. If the speed of sound in the steam supplied to a whistle were 15,936 inches per second, a pipe with a 15-inch effective length blowing its natural frequency would sound near middle C: 15936/(4 x 15) = 266 Hz. When a whistle is sounding its natural frequency, the effective length referred to here is somewhat longer than the physical length above the mouth if the whistle is of uniform cross-sectional area. That is, the vibrating length of the whistle includes some portion of the mouth. This effect (the "end correction") is caused by the vibrating steam inside the whistle engaging vibration of some steam outside the enclosed pipe, where there is a transition from plane waves to spherical waves. Formulas are available to estimate the effective length of a whistle, but an accurate formula to predict sounding frequency would have to incorporate whistle length, scale, gas flow rate, mouth height, and mouth wall area (see below). Whistle acoustics: Blowing pressure – Frequency increases with blowing pressure, which determines gas volume flow through the whistle, allowing a locomotive engineer to play a whistle like a musical instrument, using the valve to vary the flow of steam. 
The term for this was "quilling." An experiment with a short plain whistle reported in 1883 showed that incrementally increasing steam pressure drove the whistle from E to D-flat, a 68 percent increase in frequency. Pitch deviations from the whistle natural frequency likely follow velocity differences in the steam jet downstream from the aperture, creating phase differences between the driving frequency and the natural frequency of the whistle. Although at normal blowing pressures the aperture constrains the jet to the speed of sound, once it exits the aperture and expands, velocity decay is a function of absolute pressure. Also, frequency may vary at a fixed blowing pressure with differences in the temperature of the steam or compressed air. Industrial steam whistles typically were operated in the range of 100 to 300 pounds per square inch gauge pressure (psig) (0.7 - 2.1 megapascals, MPa), although some were constructed for use at pressures as high as 600 psig (4.1 MPa). All of these pressures are within the choked flow regime, where mass flow scales with upstream absolute pressure and inversely with the square root of absolute temperature. This means that for dry saturated steam, a halving of absolute pressure results in almost a halving of flow. This has been confirmed by tests of whistle steam consumption at various pressures. Excessive pressure for a given whistle design will drive the whistle into an overblown mode, where the fundamental frequency is replaced by an odd harmonic, that is, a frequency that is an odd-number multiple of the fundamental. Usually this is the third harmonic (second overtone frequency), but an example has been noted where a large whistle jumped to the fifteenth harmonic. A long narrow whistle such as that of the Liberty ship John W. Brown sounds a rich spectrum of overtones, but is not overblown. (In overblowing the "amplitude of the pipe fundamental frequency falls to zero.") Increasing whistle length increases the number and amplitude of harmonics, as has been demonstrated in experiments with a variable-pitch whistle. Whistles tested on steam produce both even-numbered and odd-numbered harmonics. The harmonic profile of a whistle might also be influenced by aperture width, mouth cut-up, and lip-aperture offset, as is the case for organ pipes. Steam quality – The dryness of the steam provided to a whistle is variable and will affect whistle tone frequency. Steam quality determines the velocity of sound, which declines with decreasing dryness due to the inertia of the liquid phase. The speed of sound in steam is predictable if steam dryness is known. Also, the specific volume of steam for a given temperature decreases with decreasing dryness. Two examples of estimates of the speed of sound in steam calculated from whistles blown under field conditions are 1,326 and 1,352 feet per second. Whistle acoustics: Aspect ratio – The more squat the whistle, the greater is the change in pitch with blowing pressure. This may be caused by differences in the Q factor. The pitch of a very squat whistle may rise several semitones as pressure is raised. Whistle frequency prediction thus requires establishment of a set of frequency/pressure curves unique to whistle scale, and a set of whistles may fail to track a musical chord as blowing pressure changes if each whistle is of a different scale. This is true of many antique whistles divided into a series of compartments of the same diameter but of different lengths. 
Some whistle designers minimized this problem by building resonant chambers of a similar scale. Mouth vertical length ("cut-up") – Frequency of a plain whistle declines as the whistle bell is raised away from the steam source. If the cut-up of an organ whistle or single-bell chime is raised (without raising the whistle ceiling), the effective chamber length is shortened. Shortening the chamber drives frequency up, but raising the cut-up drives frequency down. The resulting frequency (higher, lower, or unchanged) will be determined by whistle scale and by competition between the two drivers. The cut-up prescribed by whistle-maker Robert Swanson for 150 psig steam pressure was 0.35 x bell diameter for a plain whistle, which is about 1.45 x net bell cross-sectional area (subtracting stud area). The Nathan Manufacturing Company used a cut-up of 1.56 x chamber cross-sectional area for their 6-note railway chime whistle. Cut-up in relation to mouth arc – A large change in cut-up (e.g., 4x difference) may have little impact on whistle natural frequency if mouth area and total resonator length are held constant. For example, a plain whistle, which has a 360-degree mouth (that extends completely around the whistle circumference), can emit a similar frequency to a partial-mouth organ whistle of the same mouth area and same overall resonator length (aperture to ceiling), despite an immensely different cut-up. (Cut-up is the distance between the steam aperture and the upper lip of the mouth.) This suggests that effective cut-up is determined by proximity of the oscillating gas column to the steam jet rather than by the distance between the upper mouth lip and the steam aperture. Whistle acoustics: Steam aperture width – Frequency may rise as steam aperture width declines, and the slope of the frequency/pressure curve may vary with aperture width. Gas composition – The frequency of a whistle driven by steam is typically higher than that of a whistle driven by compressed air at the same pressure. This frequency difference is caused by the greater speed of sound in steam, which is less dense than air. The magnitude of the frequency difference can vary because the speed of sound is influenced by air temperature and by steam quality. Also, the more squat the whistle, the more sensitive it is to the difference in gas flow rate between steam and air that occurs at a fixed blowing pressure. Data from 14 whistles (34 resonant chambers) sounded under a variety of field conditions showed a wide range of frequency differences between steam and air (5 - 43 percent higher frequency on steam). Very elongate whistles, which are fairly resistant to gas flow differences, sounded a frequency 18 - 22 percent higher on steam (about three semitones). Whistle acoustics: Sound pressure level Whistle sound level varies with several factors: Blowing pressure – Sound level increases as blowing pressure is raised, although there may be an optimum pressure at which sound level peaks. Aspect ratio – Sound level increases as whistle length is reduced, increasing frequency. For example, depressing the piston of a variable-pitch steam whistle changed the frequency from 333 Hz to 753 Hz and raised the sound pressure level from 116 dBC to 123 dBC. That five-fold difference in the square of the frequency resulted in a five-fold difference in sound intensity. Sound level also increases as whistle cross-sectional area is increased. 
A sample of 12 single-note whistles ranging in size from one-inch diameter to 12-inch diameter showed a relationship between sound intensity and the square of the cross-sectional area (when differences in frequency were taken into account). In other words, relative whistle sound intensity can be estimated using the square of the cross-sectional area divided by the square of the wavelength. For example, the sound intensity from a whistle bell of 6-inch diameter x 7.5-inch length (113 dBC) was 10x that of a 2 x 4-inch whistle (103 dBC) and twice that of a (lower frequency) 10 x 40-inch whistle (110 dBC). These whistles were sounded on compressed air at 125 pounds per square inch gauge pressure (862 kilopascals) and sound levels were recorded at 100 feet distance. Elongate organ whistles may exhibit disproportionately high sound levels due to their strong higher-frequency overtones. At a separate venue a 20-inch diameter Ultrawhistle (ring-shaped whistle) operating at 15 pounds per square inch gauge pressure (103.4 kilopascals) produced 124 dBC at 100 feet. It is unknown how the sound level of this whistle would compare to that of a conventional whistle of the same frequency and resonant chamber area. By comparison, a Bell-Chrysler air raid siren generates 138 dBC at 100 feet. The sound level of a Levavasseur toroidal whistle is enhanced by about 10 decibels by a secondary cavity parallel to the resonant cavity, the former creating a vortex that augments the oscillations of the jet driving the whistle. Steam aperture width – If gas flow is restricted by the area of the steam aperture, widening the aperture will increase the sound level for a fixed blowing pressure. Enlarging the steam aperture can compensate for the loss of sound output if pressure is reduced. It has been known since at least the 1830s that whistles can be modified for low-pressure operation and still achieve a high sound level. Data on the compensatory relationship between pressure and aperture size are scant, but tests on compressed air indicate that a halving of absolute pressure requires that the aperture size be at least doubled in width to maintain the original sound level, and aperture width in some antique whistle arrays increases with diameter (aperture area thus increasing with whistle cross-sectional area) for whistles of the same scale. Applying the physics of high-pressure jets exiting circular apertures, a doubling of velocity and gas concentration at a fixed point in the whistle mouth would require a quadrupling of either aperture area or absolute pressure. (A quartering of absolute pressure would be compensated by a quadrupling of aperture area—the velocity decay constant increases approximately with the square root of absolute pressure in the normal whistle-blowing pressure range.) In reality, trading pressure loss for greater aperture area may be less efficient as pressure-dependent adjustments occur to virtual origin displacement. Quadrupling the width of an organ pipe aperture at a fixed blowing pressure resulted in somewhat less than a doubling of velocity at the flue exit. Whistle acoustics: Steam aperture profile – Gas flow rate (and thus sound level) is set not only by aperture area and blowing pressure, but also by aperture geometry. Friction and turbulence influence the flow rate, and are accounted for by a discharge coefficient. A mean estimate of the discharge coefficient from whistle field tests is 0.72 (range 0.69 - 0.74). 
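Two of the relations described in the acoustics discussion above lend themselves to a short worked sketch: the quarter-wave estimate of the natural frequency (using the figures quoted earlier of 15,936 in/s steam sound speed and a 15-inch effective length) and the relative sound-intensity scaling with cross-sectional area and wavelength. The code below is illustrative only; it ignores end corrections, blowing pressure, and the other factors the text notes are needed for an accurate prediction.

```python
# Minimal sketch (not from the source) of two relations described above.
import math

def quarter_wave_frequency(speed_of_sound, effective_length):
    """A whistle is a quarter-wave generator: f ~= u / (4 * L_effective)."""
    return speed_of_sound / (4.0 * effective_length)

# Figures quoted earlier: 15,936 in/s and a 15-inch effective length give
# a note near middle C.
print(round(quarter_wave_frequency(15936.0, 15.0)))  # 266 (Hz)

def relative_sound_intensity(diameter, wavelength):
    """Relative intensity estimate: (cross-sectional area)^2 / wavelength^2."""
    area = math.pi * diameter ** 2 / 4.0
    return area ** 2 / wavelength ** 2
```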
Whistle acoustics: Mouth vertical length ("cut-up") – The mouth length (cut-up) that provides the highest sound level at a fixed blowing pressure varies with whistle scale, and some makers of multi-tone whistles therefore cut a mouth height unique to the scale of each resonant chamber, maximizing the sound output of the whistle. The ideal cut-up for whistles of a fixed diameter and aperture width (including single-bell chime compartments) at a fixed blowing pressure appears to vary approximately with the square root of effective length. Antique whistle makers commonly used a compromise mouth area of about 1.4x whistle cross-sectional area. If a whistle is driven to its maximum sound level with the mouth area set equal to the whistle cross-sectional area, it may be possible to increase the sound level by further increasing the mouth area. Frequency and distance – Sound pressure level decreases by half (six decibels) with each doubling of distance due to divergence from the source, an inversely proportional relationship. (This is distinct from the inverse square law, which applies to sound intensity rather than pressure.) Sound pressure level also decreases due to atmospheric absorption, which is strongly dependent upon frequency, lower frequencies traveling farthest. For example, a 1000 Hz whistle has an atmospheric attenuation coefficient one half that of a 2000 Hz whistle (calculated for 50 percent relative humidity at 20 degrees Celsius). This means that in addition to divergent sound dampening, there would be a loss of 0.5 decibel per 100 meters from the 1000 Hz whistle and 1.0 decibel per 100 meters for the 2000 Hz whistle. Additional factors affecting sound propagation include barriers, atmospheric temperature gradients, and "ground effects." Terminology: Acoustic length or effective length is the quarter wavelength generated by the whistle. It is calculated as one quarter of the ratio of the speed of sound to the whistle's frequency. Acoustic length may differ from the whistle's physical length (also termed geometric length), depending upon mouth configuration, etc. The end correction is the difference between the acoustic length and the physical length above the mouth. The end correction is a function of diameter, whereas the ratio of acoustic length to physical length is a function of scale. These calculations are useful in whistle design to obtain a desired sounding frequency. Working length in early usage meant whistle acoustic length, i.e., the effective length of the working whistle, but recently it has been used for physical length including the mouth. Loudest and largest whistles: Loudness is a subjective perception that is influenced by sound pressure level, sound duration, and sound frequency. High sound pressure level potential has been claimed for the whistles of Vladimir Gavreau, who tested whistles as large as 1.5 meters (59 inches) in diameter (37 Hz). A 20-inch diameter ring-shaped whistle ("Ultrawhistle") patented and produced by Richard Weisenberger sounded 124 decibels at 100 feet. The variable-pitch steam whistle at the New York Wire Company in York, Pennsylvania, was entered in the Guinness Book of World Records in 2002 as the loudest steam whistle on record at 124.1 dBA from a set distance used by Guinness. The York whistle was also measured at 134.1 decibels from a distance of 23 feet. A fire-warning whistle supplied to a Canadian saw mill by the Eaton, Cole, and Burnham Company in 1882 measured 20 inches in diameter, four feet nine inches from bowl to ornament, and weighed 400 pounds. 
The spindle supporting the whistle bell measured 3.5 inches in diameter and the whistle was supplied by a four-inch feed pipe. Other records of large whistles include an 1893 account of U.S. President Grover Cleveland activating the "largest steam whistle in the world," said to be "five feet," at the Chicago World's Fair. The sounding chamber of a whistle installed in 1924 at the Long-Bell Lumber Company, Longview, Washington, measured 16 inches in diameter x 49 inches in length. The whistle bells of multi-bell chimes used on ocean liners such as the RMS Titanic measured 9, 12, and 15 inches in diameter. The whistle bells of the Canadian Pacific steamships Assiniboia and Keewatin measured 12 inches in diameter, and that of the Keewatin measured 60 inches in length. A multi-bell chime whistle installed at the Standard Sanitary Manufacturing Company in 1926 was composed of five separate whistle bells measuring 5 x 15, 7 x 21, 8 x 24, 10 x 30, and 12 x 36 inches, all plumbed to a five-inch steam pipe. The Union Water Meter Company of Worcester, Massachusetts, produced a gong whistle composed of three bells, 8 x 9-3/4, 12 x 15, and 12 x 25 inches. Twelve-inch diameter steam whistles were commonly used at lighthouses in the 19th century. It has been claimed that the sound level of an Ultrawhistle would be significantly greater than that of a conventional whistle, but comparative tests of large whistles have not been undertaken. Tests of small Ultrawhistles have not shown higher sound levels compared to conventional whistles of the same diameter.
**Law of cotangents** Law of cotangents: In trigonometry, the law of cotangents is a relationship among the lengths of the sides of a triangle and the cotangents of the halves of the three angles. This is also known as the Cot Theorem. Just as the three quantities whose equality is expressed by the law of sines are equal to the diameter of the circumscribed circle of the triangle (or to its reciprocal, depending on how the law is expressed), so also the law of cotangents relates the radius of the inscribed circle of a triangle (the inradius) to its sides and angles. Statement: Using the usual notations for a triangle (see the figure at the upper right), where a, b, c are the lengths of the three sides, A, B, C are the vertices opposite those three respective sides, α, β, γ are the corresponding angles at those vertices, s is the semiperimeter, that is, s = (a + b + c)/2, and r is the radius of the inscribed circle, the law of cotangents states that cot(α/2)/(s − a) = cot(β/2)/(s − b) = cot(γ/2)/(s − c) = 1/r, and furthermore that the inradius is given by r = √((s − a)(s − b)(s − c)/s). Proof: In the upper figure, the points of tangency of the incircle with the sides of the triangle break the perimeter into 6 segments, in 3 pairs. In each pair the segments are of equal length. For example, the 2 segments adjacent to vertex A are equal. If we pick one segment from each pair, their sum will be the semiperimeter s. An example of this is the segments shown in color in the figure. The two segments making up the red line add up to a, so the blue segment must be of length s − a. Obviously, the other five segments must also have lengths s − a, s − b, or s − c, as shown in the lower figure. Proof: By inspection of the figure, using the definition of the cotangent function, we have cot(α/2) = (s − a)/r, and similarly for the other two angles, proving the first assertion. For the second one—the inradius formula—we start from the general addition formula cot(u + v + w) = (cot u cot v cot w − cot u − cot v − cot w)/(cot u cot v + cot u cot w + cot v cot w − 1). Applying this to cot(α/2 + β/2 + γ/2) = cot(π/2) = 0, we obtain cot(α/2) cot(β/2) cot(γ/2) = cot(α/2) + cot(β/2) + cot(γ/2). (This is also the triple cotangent identity.) Substituting the values obtained in the first part, we get ((s − a)/r)((s − b)/r)((s − c)/r) = (s − a)/r + (s − b)/r + (s − c)/r = (3s − 2s)/r = s/r. Multiplying through by r³/s gives the value of r², proving the second assertion. Some proofs using the law of cotangents: A number of other results can be derived from the law of cotangents. Some proofs using the law of cotangents: Heron's formula. Note that the area of triangle ABC is also divided into 6 smaller triangles, also in 3 pairs, with the triangles in each pair having the same area. For example, the two triangles near vertex A, being right triangles of width s − a and height r, each have an area of 1/2 r(s − a). So those two triangles together have an area of r(s − a), and the area S of the whole triangle is therefore S = r(s − a) + r(s − b) + r(s − c) = r(3s − 2s) = rs. Combined with r² = (s − a)(s − b)(s − c)/s, this gives S = √(s(s − a)(s − b)(s − c)), as required. Some proofs using the law of cotangents: Mollweide's first formula. This follows from the addition formula and the law of cotangents. Mollweide's second formula. This also follows from the addition formula and the law of cotangents; here an extra step is required to transform a product into a sum, according to the sum-to-product formula. The law of tangents can also be derived from this (Silvester 2001, p. 99).
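As a quick numerical sanity check (an illustrative sketch, not part of the source proof), both assertions can be verified for a 3-4-5 right triangle, for which s = 6 and r = 1.

```python
# Numerical check of the law of cotangents for a 3-4-5 right triangle.
import math

a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2                                 # semiperimeter = 6
r = math.sqrt((s - a) * (s - b) * (s - c) / s)      # inradius = 1

# Angles from the law of cosines.
alpha = math.acos((b**2 + c**2 - a**2) / (2 * b * c))
beta  = math.acos((a**2 + c**2 - b**2) / (2 * a * c))
gamma = math.acos((a**2 + b**2 - c**2) / (2 * a * b))

for ang, side in ((alpha, a), (beta, b), (gamma, c)):
    print(1 / math.tan(ang / 2) / (s - side))       # each ratio equals 1/r = 1.0
print(1 / r)                                        # 1.0
```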
**Duflo isomorphism** Duflo isomorphism: In mathematics, the Duflo isomorphism is an isomorphism between the center of the universal enveloping algebra of a finite-dimensional Lie algebra and the invariants of its symmetric algebra. It was introduced by Michel Duflo (1977) and later generalized to arbitrary finite-dimensional Lie algebras by Kontsevich. Duflo isomorphism: The Poincaré–Birkhoff–Witt theorem gives for any Lie algebra g a vector space isomorphism from the polynomial algebra S(g) to the universal enveloping algebra U(g). This map is not an algebra homomorphism. It is equivariant with respect to the natural representation of g on these spaces, so it restricts to a vector space isomorphism F : S(g)^g → U(g)^g, where the superscript indicates the subspace annihilated by the action of g. Both S(g)^g and U(g)^g are commutative subalgebras (indeed, U(g)^g is the center of U(g)), but F is still not an algebra homomorphism. However, Duflo proved that in some cases we can compose F with a map G : S(g)^g → S(g)^g to get an algebra isomorphism F ∘ G : S(g)^g → U(g)^g. Duflo isomorphism: Later, using the Kontsevich formality theorem, Kontsevich showed that this works for all finite-dimensional Lie algebras. Following Calaque and Rossi, the map G can be defined as follows. The adjoint action of g is the map g → End(g) sending x ∈ g to the operation [x, −] on g. We can treat this map as an element of g* ⊗ End(g) or, for that matter, as an element of the larger space S(g*) ⊗ End(g), since g* ⊂ S(g*). Call this element ad ∈ S(g*) ⊗ End(g). Both S(g*) and End(g) are algebras, so their tensor product is as well. Thus, we can take powers of ad, say ad^k ∈ S(g*) ⊗ End(g). Duflo isomorphism: Going further, we can apply any formal power series to ad and obtain an element of S̄(g*) ⊗ End(g), where S̄(g*) denotes the algebra of formal power series on g*. Working with formal power series, we thus obtain an element (e^ad − e^(−ad))/ad ∈ S̄(g*) ⊗ End(g). Since the dimension of g is finite, one can think of End(g) as M_n(R), hence S̄(g*) ⊗ End(g) is M_n(S̄(g*)), and by applying the determinant map we obtain an element J̃ := det((e^ad − e^(−ad))/ad) ∈ S̄(g*), which is related to the Todd class in algebraic topology. Duflo isomorphism: Now, g* acts as derivations on S(g), since any element of g* gives a translation-invariant vector field on g. As a result, the algebra S(g*) acts as differential operators on S(g), and this extends to an action of S̄(g*) on S(g). We can thus define a linear map G : S(g) → S(g) by G(ψ) = J̃^(1/2) ψ, and since the whole construction was invariant, G restricts to the desired linear map G : S(g)^g → S(g)^g. Properties: For a nilpotent Lie algebra the Duflo isomorphism coincides with the symmetrization map from the symmetric algebra to the universal enveloping algebra. For a semisimple Lie algebra the Duflo isomorphism is compatible in a natural way with the Harish-Chandra isomorphism.
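The pieces of the construction above can be collected in a single display. This is only a restatement of the article's own formulas, using the normalization of the Duflo element as written here (denoted J̃), not an independent derivation:

```latex
\[
  \widetilde{J} \;=\; \det\!\left(\frac{e^{\operatorname{ad}} - e^{-\operatorname{ad}}}{\operatorname{ad}}\right)
     \;\in\; \overline{S}(\mathfrak{g}^{*}), \qquad
  G(\psi) \;=\; \widetilde{J}^{\,1/2}\,\psi, \qquad
  F \circ G \;:\; S(\mathfrak{g})^{\mathfrak{g}} \;\longrightarrow\; U(\mathfrak{g})^{\mathfrak{g}}
  \quad\text{(an algebra isomorphism).}
\]
```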
**The Dental Manufacturing Company Limited** The Dental Manufacturing Company Limited: The Dental Manufacturing Company Limited were manufacturers of dental equipment, motor silencers and agricultural equipment. Company history: In the 1890s L. Gardner and Sons made dentists' chairs for the company, 106 units being produced over a period of three years, some lifted hydraulically, and others by a rack and pinion system. In 1914 the company was located at Alston House, Newman Street, London, W., and produced artificial teeth, dental rubbers, dental chairs, vulcanizers' instruments and dentists' requisites. It also had offices in Manchester, Glasgow, and Dublin.
**Worst-case circuit analysis** Worst-case circuit analysis: Worst-case circuit analysis (WCCA or WCA) is a cost-effective means of screening a design to ensure with a high degree of confidence that potential defects and deficiencies are identified and eliminated prior to and during test, production, and delivery. It is a quantitative assessment of the equipment performance, accounting for manufacturing, environmental and aging effects. In addition to a circuit analysis, a WCCA often includes stress and derating analysis, failure modes, effects and criticality analysis (FMECA), and reliability prediction (MTBF). The specific objective is to verify that the design is robust enough to provide operation that meets the system performance specification over the design life under worst-case conditions and tolerances (initial, aging, radiation, temperature, etc.). Stress and derating analysis is intended to increase reliability by providing sufficient margin compared to the allowable stress limits. This reduces overstress conditions that may induce failure, and reduces the rate of stress-induced parameter change over life. It determines the maximum applied stress to each component in the system. General information: A worst-case circuit analysis should be performed on all circuitry that is safety-critical or financially critical. Worst-case circuit analysis is an analysis technique which, by accounting for component variability, determines the circuit performance under a worst-case scenario (under extreme environmental or operating conditions). Environmental conditions are defined as external stresses applied to each circuit component; these include temperature, humidity and radiation. Operating conditions include external electrical inputs, component quality level, interaction between parts, and drift due to component aging. General information: WCCA helps in the process of building design reliability into hardware for long-term field operation. Electronic piece-parts fail in two distinct modes: catastrophic failure, and out-of-tolerance drift, in which the circuit continues to operate, though with degraded performance, until it ultimately exceeds the circuit's required operating limits. Catastrophic failures may be minimized through MTBF, stress and derating, and FMECA analyses, which help to ensure that all components are properly derated and that degradation occurs gracefully. A WCCA makes it possible to predict and judge the circuit's performance limits under all combinations of part tolerances. There are many reasons to perform a WCCA, several of which bear directly on schedule and cost. Methodology: Worst-case analysis is the analysis of a device (or system) that assures that the device meets its performance specifications. It typically accounts for tolerances due to initial component tolerance, temperature tolerance, age tolerance and environmental exposures (such as radiation for a space device). The beginning-of-life analysis comprises the initial tolerance and provides the data sheet limits for the manufacturing test cycle. The end-of-life analysis adds the degradation resulting from the aging and temperature effects on the elements within the device or system. Methodology: This analysis is usually performed using SPICE, but mathematical models of individual circuits within the device (or system) are needed to determine the sensitivities or the worst-case performance. A computer program is frequently used to total and summarize the results.
Methodology: A WCCA follows these steps: generate or obtain the circuit model; obtain correlation data to validate the model; determine the sensitivity to each component parameter; determine the component tolerances; calculate the variance of each component parameter as sensitivity times absolute tolerance; use at least two methods of analysis (e.g. hand analysis and SPICE or Saber, or SPICE and measured data) to confirm the result; and generate a formal report to convey the information produced. The design is broken down into the appropriate functional sections. A mathematical model of the circuit is developed and the effects of the various part and system tolerances are applied. The circuit's extreme value analysis (EVA) and root-sum-square (RSS) results are determined for beginning-of-life and end-of-life states. Methodology: These results are used to calculate part stresses and are applied to other analyses. In order for the WCCA to be useful throughout the product's life cycle, it is extremely important that the analysis be documented in a clear and concise format. This allows for future updates and review by others besides the original designer. A compliance matrix is generated that clearly identifies the results and all issues.
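As a minimal illustration of the EVA and RSS steps described above, a resistive voltage divider can be analyzed in a few lines of Python. The circuit, component values, and tolerances here are assumptions chosen for the sketch, not taken from any real analysis:

```python
# Hypothetical sketch of the two tolerance-combination methods named above
# (extreme value analysis and root-sum-square), applied to a voltage divider.
from itertools import product

VIN = 5.0                      # ideal input voltage (assumed)
R1, R2 = 10_000.0, 10_000.0    # nominal resistances in ohms (assumed)
TOL = 0.01                     # assumed +/-1 % end-of-life tolerance per resistor

def vout(r1, r2, vin=VIN):
    """Divider output: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

nominal = vout(R1, R2)

# Extreme value analysis: evaluate every tolerance corner.
corners = [vout(R1 * (1 + s1 * TOL), R2 * (1 + s2 * TOL))
           for s1, s2 in product((-1, +1), repeat=2)]
eva_lo, eva_hi = min(corners), max(corners)

# Root-sum-square: combine per-parameter deviations statistically.
dv_r1 = vout(R1 * (1 + TOL), R2) - nominal   # one-at-a-time perturbation of R1
dv_r2 = vout(R1, R2 * (1 + TOL)) - nominal   # one-at-a-time perturbation of R2
rss = (dv_r1**2 + dv_r2**2) ** 0.5

print(f"nominal {nominal:.4f} V, EVA {eva_lo:.4f}..{eva_hi:.4f} V, RSS +/-{rss:.4f} V")
```

EVA gives guaranteed bounds by evaluating every tolerance corner, while RSS gives a statistically combined spread that is narrower but not guaranteed; a full WCCA would also fold aging, temperature, and radiation terms into each parameter's tolerance.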
**Interchangeable parts** Interchangeable parts: Interchangeable parts are parts (components) that are identical for practical purposes. They are made to specifications that ensure that they are so nearly identical that they will fit into any assembly of the same type. One such part can freely replace another, without any custom fitting, such as filing. This interchangeability allows easy assembly of new devices, and easier repair of existing devices, while minimizing both the time and skill required of the person doing the assembly or repair. Interchangeable parts: The concept of interchangeability was crucial to the introduction of the assembly line at the beginning of the 20th century, and has become an important element of some modern manufacturing but is missing from other important industries. Interchangeable parts: Interchangeability of parts was achieved by combining a number of innovations and improvements in machining operations and the invention of several machine tools, such as the slide rest lathe, screw-cutting lathe, turret lathe, milling machine and metal planer. Additional innovations included jigs for guiding the machine tools, fixtures for holding the workpiece in the proper position, and blocks and gauges to check the accuracy of the finished parts. Electrification allowed individual machine tools to be powered by electric motors, eliminating line shaft drives from steam engines or water power and allowing higher speeds, making modern large-scale manufacturing possible. Modern machine tools often have numerical control (NC) which evolved into CNC (computerized numeric control) when microprocessors became available. Interchangeable parts: Methods for industrial production of interchangeable parts in the United States were first developed in the nineteenth century. The term American system of manufacturing was sometimes applied to them at the time, in distinction from earlier methods. Within a few decades such methods were in use in various countries, so American system is now a term of historical reference rather than current industrial nomenclature. First use: Evidence of the use of interchangeable parts can be traced back over two thousand years to Carthage in the First Punic War. Carthaginian ships had standardized, interchangeable parts that even came with assembly instructions akin to "tab A into slot B" marked on them.In East Asia, during the Warring States period and later the Qin Dynasty, bronze crossbow triggers and locking mechanisms were mass-produced and made to be interchangeable. Origins of the modern concept: In the late 18th century, French General Jean-Baptiste Vaquette de Gribeauval promoted standardized weapons in what became known as the Système Gribeauval after it was issued as a royal order in 1765. (Its focus at the time was artillery more than muskets or handguns.) One of the accomplishments of the system was that solid cast cannons were bored to precise tolerances, which allowed the walls to be thinner than cannons poured with hollow cores. However, because cores were often off center, the wall thickness determined the size of the bore. Standardized boring allowed cannons to be shorter without sacrificing accuracy and range because of the tighter fit of the shells. It also allowed standardization of the shells.Before the 18th century, devices such as guns were made one at a time by gunsmiths in a unique manner. 
If a single component of a firearm needed a replacement, the entire firearm either had to be sent to an expert gunsmith for custom repairs, or discarded and replaced by another firearm. During the 18th and early 19th centuries, the idea of replacing these methods with a system of interchangeable manufacture was gradually developed. The development took decades and involved many people. Gribeauval provided patronage to Honoré Blanc, who attempted to implement the Système Gribeauval at the musket level. By around 1778, Honoré Blanc began producing some of the first firearms with interchangeable flint locks, although they were carefully made by craftsmen. Blanc demonstrated in front of a committee of scientists that his muskets could be fitted with flint locks picked at random from a pile of parts. Muskets with interchangeable locks caught the attention of Thomas Jefferson through the efforts of Honoré Blanc when Jefferson was Ambassador to France in 1785. Jefferson tried to persuade Blanc to move to America, but was not successful, so he wrote to the American Secretary of War with the idea, and when he returned to the USA he worked to fund its development. President George Washington approved of the idea, and by 1798 a contract was issued to Eli Whitney for 12,000 muskets built under the new system. Louis de Tousard, who fled the French Revolution, joined the U.S. Corps of Artillerists in 1795 and wrote an influential artillerist's manual that stressed the importance of standardization. Drawbacks and Limitations: Despite the numerous advantages of using interchangeable parts in manufacturing, there are several drawbacks and limitations that should be considered: Quality control issues: The mass production of standardized components can sometimes lead to a compromise in quality. As manufacturers aim to minimize costs and maximize efficiency, the quality of individual parts may suffer, leading to a higher risk of defects or failure in the final product. Drawbacks and Limitations: Loss of customization: While interchangeable parts simplify the manufacturing and repair process, they can also limit the ability to customize products to meet individual preferences or specific requirements. This may result in a reduced appeal to certain customers who value unique designs and tailored solutions. Dependence on standardized components: Interchangeable parts inherently rely on the use of standardized components, which can create a dependency on certain suppliers or manufacturers. This can lead to potential supply chain issues, such as limited availability or increased costs due to fluctuations in demand. Reduced adaptability: Companies that rely heavily on interchangeable parts may be less adaptable to changes in technology or market demands. This could result in a lack of innovation or an inability to quickly respond to evolving consumer needs. Drawbacks and Limitations: Intellectual property concerns: As interchangeable parts become more prevalent across industries, there may be increased risks of intellectual property theft or patent infringement. This can create legal challenges and affect the competitiveness of manufacturers who rely on proprietary designs or technologies. Overall, while interchangeable parts have played a significant role in the evolution of modern manufacturing, it is essential to carefully consider the potential drawbacks and limitations before fully committing to this approach in any given industry or product line.
Implementation: Numerous inventors began to try to implement the principle Blanc had described. The development of the machine tools and manufacturing practices required would be a great expense to the U.S. Ordnance Department, and for some years while trying to achieve interchangeability, the firearms produced cost more to manufacture. By 1853, there was evidence that interchangeable parts, then perfected by the Federal Armories, led to savings. The Ordnance Department freely shared the techniques used with outside suppliers. Implementation: Eli Whitney and an early attempt In the US, Eli Whitney saw the potential benefit of developing "interchangeable parts" for the firearms of the United States military. In July 1801 he built ten guns, all containing the same exact parts and mechanisms, then disassembled them before the United States Congress. He placed the parts in a mixed pile and, with help, reassembled all of the firearms in front of Congress, much as Blanc had done some years before.The Congress was captivated and ordered a standard for all United States equipment. The use of interchangeable parts removed the problems of earlier eras concerning the difficulty or impossibility of producing new parts for old equipment. If one firearm part failed, another could be ordered, and the firearm would not need to be discarded. The catch was that Whitney's guns were costly and handmade by skilled workmen. Implementation: Charles Fitch credited Whitney with successfully executing a firearms contract with interchangeable parts using the American System, but historians Merritt Roe Smith and Robert B. Gordon have since determined that Whitney never actually achieved interchangeable parts manufacturing. His family's arms company, however, did so after his death. Implementation: Brunel's sailing blocks Mass production using interchangeable parts was first achieved in 1803 by Marc Isambard Brunel in cooperation with Henry Maudslay and Simon Goodrich, under the management of (and with contributions by) Brigadier-General Sir Samuel Bentham, the Inspector General of Naval Works at Portsmouth Block Mills, Portsmouth Dockyard, Hampshire, England. At the time, the Napoleonic War was at its height, and the Royal Navy was in a state of expansion that required 100,000 pulley blocks to be manufactured a year. Bentham had already achieved remarkable efficiency at the docks by introducing power-driven machinery and reorganising the dockyard system. Implementation: Marc Brunel, a pioneering engineer, and Maudslay, a founding father of machine tool technology who had developed the first industrially practical screw-cutting lathe in 1800 which standardized screw thread sizes for the first time, collaborated on plans to manufacture block-making machinery; the proposal was submitted to the Admiralty who agreed to commission his services. By 1805, the dockyard had been fully updated with the revolutionary, purpose-built machinery at a time when products were still built individually with different components. A total of 45 machines were required to perform 22 processes on the blocks, which could be made in three different sizes. The machines were almost entirely made of metal, thus improving their accuracy and durability. The machines would make markings and indentations on the blocks to ensure alignment throughout the process. One of the many advantages of this new method was the increase in labour productivity due to the less labour-intensive requirements of managing the machinery. 
Richard Beamish, assistant to Brunel's son and engineer, Isambard Kingdom Brunel, wrote: So that ten men, by the aid of this machinery, can accomplish with uniformity, celerity and ease, what formerly required the uncertain labour of one hundred and ten. Implementation: By 1808, annual production had reached 130,000 blocks and some of the equipment was still in operation as late as the mid-twentieth century. Implementation: Terry's clocks: success in wood Eli Terry was producing interchangeable parts with a milling machine as early as 1800. Ward Francillon, a horologist, concluded in a study that Terry had already accomplished interchangeable parts as early as 1800. The study examined several of Terry's clocks produced between 1800 and 1807. The parts were labelled and interchanged as needed. The study concluded that all clock pieces were interchangeable. Implementation: The very first mass production using interchangeable parts in America was Eli Terry's 1806 Porter Contract, which called for the production of 4000 clocks in three years. During this contract, Terry crafted four thousand wooden-gear tall case movements, at a time when the annual average was about a dozen. Unlike Eli Whitney, Terry manufactured his products without government funding. Terry saw the potential of clocks becoming a household object. With the use of a milling machine, Terry was able to mass-produce clock wheels and plates a few dozen at a time. Jigs and templates were used to make uniform pinions, so that all parts could be assembled using an assembly line. Implementation: North and Hall: success in metal The crucial step toward interchangeability in metal parts was taken by Simeon North, working only a few miles from Eli Terry. North created one of the world's first true milling machines to do metal shaping that had been done by hand with a file. Diana Muir believes that North's milling machine was online around 1816. Muir, Merritt Roe Smith, and Robert B. Gordon all agree that before 1832 both Simeon North and John Hall were able to mass-produce complex machines with moving parts (guns) using a system that entailed the use of rough-forged parts, with a milling machine that milled the parts to near-correct size, and that were then "filed to gage by hand with the aid of filing jigs." Historians differ over the question of whether Hall or North made the crucial improvement. Merritt Roe Smith believes that it was done by Hall. Muir demonstrates the close personal ties and professional alliances between Simeon North and neighbouring mechanics mass-producing wooden clocks to argue that the process for manufacturing guns with interchangeable parts was most probably devised by North in emulation of the successful methods used in mass-producing clocks. It may not be possible to resolve the question with absolute certainty unless documents now unknown should surface in the future. Implementation: Late 19th and early 20th centuries: dissemination throughout manufacturing Skilled engineers and machinists, many with armoury experience, spread interchangeable manufacturing techniques to other American industries, including clockmakers and the sewing machine manufacturers Wilcox and Gibbs and Wheeler and Wilson, who used interchangeable parts before 1860. Late to adopt the interchangeable system were the Singer Corporation (sewing machines, 1870s), reaper manufacturer McCormick Harvesting Machine Company (1870s–1880s) and several large steam engine manufacturers such as Corliss (mid-1880s), as well as locomotive makers.
Typewriters followed some years later. Then large scale production of bicycles in the 1880s began to use the interchangeable system.During these decades, true interchangeability grew from a scarce and difficult achievement into an everyday capability throughout the manufacturing industries. In the 1950s and 1960s, historians of technology broadened the world's understanding of the history of the development. Few people outside that academic discipline knew much about the topic until as recently as the 1980s and 1990s, when the academic knowledge began finding wider audiences. As recently as the 1960s, when Alfred P. Sloan published his famous memoir and management treatise, My Years with General Motors, even the long-time president and chair of the largest manufacturing enterprise that had ever existed knew very little about the history of the development, other than to say that: [Henry M. Leland was], I believe, one of those mainly responsible for bringing the technique of interchangeable parts into automobile manufacturing. […] It has been called to my attention that Eli Whitney, long before, had started the development of interchangeable parts in connection with the manufacture of guns, a fact which suggests a line of descent from Whitney to Leland to the automobile industry. Implementation: One of the better-known books on the subject, which was first published in 1984 and has enjoyed a readership beyond academia, has been David A. Hounshell's From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States. Socioeconomic context: The principle of interchangeable parts flourished and developed throughout the 19th century, and led to mass production in many industries. It was based on the use of templates and other jigs and fixtures, applied by semi-skilled labor using machine tools to augment (and later largely replace) the traditional hand tools. Throughout this century there was much development work to be done in creating gauges, measuring tools (such as calipers and micrometers), standards (such as those for screw threads), and processes (such as scientific management), but the principle of interchangeability remained constant. With the introduction of the assembly line at the beginning of the 20th century, interchangeable parts became ubiquitous elements of manufacturing. Selective assembly: Interchangeability relies on parts' dimensions falling within the tolerance range. The most common mode of assembly is to design and manufacture such that, as long as each part that reaches assembly is within tolerance, the mating of parts can be totally random. This has value for all the reasons already discussed earlier. Selective assembly: There is another mode of assembly, called "selective assembly", which gives up some of the randomness capability in trade-off for other value. There are two main areas of application that benefit economically from selective assembly: when tolerance ranges are so tight that they cannot quite be held reliably (making the total randomness unavailable); and when tolerance ranges can be reliably held, but the fit and finish of the final assembly is being maximized by voluntarily giving up some of the randomness (which makes it available but not ideally desirable). In either case the principle of selective assembly is the same: parts are selected for mating, rather than being mated at random. 
As the parts are inspected, they are graded out into separate bins based on what end of the range they fall in (or violate). Falling within the high or low end of a range is usually called being heavy or light; violating the high or low end of a range is usually called being oversize or undersize. Examples are given below. Selective assembly: French and Vierck provide a one-paragraph description of selective assembly that aptly summarizes the concept. Selective assembly: One might ask, if parts must be selected for mating, then what makes selective assembly any different from the oldest craft methods? But there is in fact a significant difference. Selective assembly merely grades the parts into several ranges; within each range, there is still random interchangeability. This is quite different from the older method of fitting by a craftsman, where each mated set of parts is specifically filed to fit each part with a specific, unique counterpart. Selective assembly: Random assembly not available: oversize and undersize parts In contexts where the application requires extremely tight (narrow) tolerance ranges, the requirement may push slightly past the limit of the ability of the machining and other processes (stamping, rolling, bending, etc.) to stay within the range. In such cases, selective assembly is used to compensate for a lack of total interchangeability among the parts. Thus, for a pin that must have a sliding fit in its hole (free but not sloppy), the dimension may be spec'd as 12.00 +0 −0.01 mm for the pin, and 12.00 +.01 −0 for the hole. Pins that came out oversize (say a pin at 12.003 mm diameter) are not necessarily scrap, but they can only be mated with counterparts that also came out oversize (say a hole at 12.013 mm). The same is then true for matching undersize parts with undersize counterparts. Inherent in this example is that for this product's application, the 12 mm dimension does not require extreme accuracy, but the desired fit between the parts does require good precision (see the article on accuracy and precision). This allows the makers to "cheat a little" on total interchangeability in order to get more value out of the manufacturing effort by reducing the rejection rate (scrap rate). This is a sound engineering decision as long as the application and context support it. For example, for machines for which there is no intention for any future field service of a parts-replacing nature (but rather only simple replacement of the whole unit), this makes good economic sense. It lowers the unit cost of the products, and it does not impede future service work. Selective assembly: An example of a product that might benefit from this approach could be a car transmission where there is no expectation that the field service person will repair the old transmission; instead, he will simply swap in a new one. Therefore, total interchangeability was not absolutely required for the assemblies inside the transmissions. It would have been specified anyway, simply on general principle, except for a certain shaft that required precision so high as to cause great annoyance and high scrap rates in the grinding area, but for which only decent accuracy was required, as long as the fit with its hole was good in every case. Money could be saved by saving many shafts from the scrap bin. 
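A small sketch may make the grading concrete. The code below is an illustrative assumption built around the 12 mm sliding-fit numbers quoted above; the bin names and grading rule are hypothetical, not a description of any particular shop's practice:

```python
# Hypothetical selective-assembly grading for the 12 mm pin/hole example above.
# A part that violates its tolerance band is not scrapped; it is binned and mated
# only with a counterpart that errs in the same direction.

PIN_SPEC  = (11.990, 12.000)   # 12.00 +0 / -0.01 mm
HOLE_SPEC = (12.000, 12.010)   # 12.00 +0.01 / -0 mm

def grade(diameter, spec):
    """Return a bin label for one measured part against its (low, high) spec."""
    lo, hi = spec
    if diameter > hi:
        return "oversize"
    if diameter < lo:
        return "undersize"
    # Within tolerance: split the band in half for finer selective matching.
    return "high" if diameter >= (lo + hi) / 2 else "low"

# An oversize pin (12.003 mm) is salvaged by mating it with an oversize hole (12.013 mm);
# in-tolerance parts are still mated randomly within their half of the band.
print(grade(12.003, PIN_SPEC), grade(12.013, HOLE_SPEC))   # -> oversize oversize
print(grade(11.998, PIN_SPEC), grade(12.008, HOLE_SPEC))   # -> high high
```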
Selective assembly: Economic and commercial realities Examples like the one above are not as common in real commerce as they conceivably could be, mostly because of separation of concerns, where each part of a complex system is expected to give performance that does not make any limiting assumptions about other parts of the system. In the car transmission example, the separation of concerns is that individual firms and customers accept no lack of freedom or options from others in the supply chain. For example, in the car buyer's view, the car manufacturer is "not within its rights" to assume that no field-service mechanic will ever repair the old transmission instead of replacing it. The customer expects that that decision will be preserved for him to make later, at the repair shop, based on which option is less expensive for him at that time (figuring that replacing one shaft is cheaper than replacing a whole transmission). This logic is not always valid in reality; it might have been better for the customer's total ownership cost to pay a lower initial price for the car (especially if the transmission service is covered under the standard warranty for 10 years, and the buyer intends to replace the car before then anyway) than to pay a higher initial price for the car but preserve the option of total interchangeability of every last nut, bolt, and shaft throughout the car (when it is not going to be taken advantage of anyway). But commerce is generally too chaotically multivariate for this logic to prevail, so total interchangeability ends up being specified and achieved even when it adds expense that was "needless" from a holistic view of the commercial system. But this may be avoided to the extent that customers experience the overall value (which their minds can detect and appreciate) without having to understand its logical analysis. Thus, buyers of an amazingly affordable car (surprisingly low initial price) will probably never complain that the transmission was not field-serviceable as long as they themselves never had to pay for transmission service in the lifespan of their ownership. This analysis can be important for the manufacturer to understand (even if it is lost on the customer), because he can carve for himself a competitive advantage in the marketplace if he can accurately predict where to "cut corners" in ways that the customer will never have to pay for. Thus, he could give himself lower transmission unit cost. However, he must be sure when he does so that the transmissions he's using are reliable, because their replacement, being covered under a long warranty, will be at his expense. Selective assembly: Random assembly available but not ideally desirable: "light" and "heavy" parts The other main area of application for selective assembly is in contexts where total interchangeability is in fact achieved, but the "fit and finish" of the final products can be enhanced by minimizing the dimensional mismatch between mating parts. Consider another application similar to the one above with the 12 mm pin. But say that in this example, not only is the precision important (to produce the desired fit), but the accuracy is also important (because the 12 mm pin must interact with something else that will have to be accurately sized at 12 mm). Some of the implications of this example are that the rejection rate cannot be lowered; all parts must fall within tolerance range or be scrapped. So there are no savings to be had from salvaging oversize or undersize parts from scrap, then. 
However, there is still one bit of value to be had from selective assembly: ensuring that all the mated pairs have as nearly identical a sliding fit as possible (as opposed to some tighter fits and some looser fits, all sliding, but with varying resistance). Selective assembly: An example of a product that might benefit from this approach is a toolroom-grade machine tool, where not only the accuracy but also the fit and finish is highly important.
**Aircrew** Aircrew: Aircrew, also called flight crew, are personnel who operate an aircraft while in flight. The composition of a flight's crew depends on the type of aircraft, plus the flight's duration and purpose. Commercial aviation: Flight deck positions In commercial aviation, the aircrew are called flight crew. Some flight crew position names are derived from nautical terms and indicate a rank or command structure similar to that on ocean-going vessels, allowing for quick executive decision making during normal operations or emergency situations. Historical flight deck positions include: Captain, the highest-ranking pilot member of a flight crew. Commercial aviation: First officer (FO, also called a co-pilot), another pilot who is normally seated to the right of the captain. (On helicopters, an FO is normally seated to the left of the captain, who occupies the right-hand seat). Commercial aviation: Second officer (SO), a person lower in rank than the first officer, who typically performs selected duties and also acts as a relief pilot. The rank of second officer was traditionally held by a flight engineer, who was often the person who handled the engine controls. In the 21st century, second officers on some airlines are pilots who act as "cruise relief" on long haul flights. Commercial aviation: Third officer (TO), a person lower in rank than a second officer, who typically performs selected duties and can also act as a relief pilot. Largely redundant in the present day. Commercial aviation: 'Relief Crew' members in the present day are fully licensed and trained captains and first officers who accompany long-haul airline flights and relieve the primary pilots during designated portions of the flight, either scheduled by the commercial operator or agreed between the two crews, to provide opportunities for rest or sleep breaks and so avoid the risk of pilot fatigue (some large wide-body airliners are equipped with special pilot sleeper berths, but more typically reserved seats in the section closest to the flight deck, or cockpit, are used for the relief crew). A relief crew most commonly takes over during the middle portions of a flight, when the aircraft is usually on autopilot and at cruising altitude. The number of relief crew members assigned to a flight depends in part on the length of the flight and the official air regulations the airline operates under. Commercial aviation: Flight Engineer (FE), a position originally called an 'Air Mechanic'. On older aircraft, typically between the late-1920s and the 1970s, the flight engineer was the crew member responsible for engines, systems and fuel management. As aircraft became increasingly sophisticated and automated, this function was mostly assumed by the primary pilots (Captain and FO), resulting in a continued downsizing in the number of aircrew positions on commercial flights. The flight engineer's position is commonly staffed as a second officer. Flight engineers can still be found in the present day (in greatly diminished numbers) on airline or air freight operations still flying such older aircraft. The position is typically crewed by a dual-licensed Pilot-Flight Engineer in the present day. Commercial aviation: Airborne Sensor Operator. An airborne sensor operator (aerial sensor operator, ASO, Aerial Remote Sensing Data Acquisition Specialist, Aerial Payload Operator, Police Tactical Flight Officer, Tactical Coordinator, etc.)
is the functional profession of gathering information from an airborne platform (Manned or Unmanned) and/or oversee mission management systems for academic, commercial, public safety or military remote sensing purposes. The airborne sensor operator is considered a principal flight crew or aircrew member. Commercial aviation: Navigator (archaic), also called 'Air Navigators' or 'Flight Navigators'. A position on older aircraft, typically between the late-1910s and the 1970s, where separate crew members (sometimes two navigation crew members) were often responsible for the flight navigation, including its dead reckoning and celestial navigation, especially when flown over oceans or other featureless areas where radio navigation aids were not originally available. As sophisticated electronic air navigation aids and universal space-based GPS navigation systems came online, the dedicated Navigator's position was discontinued and its function was assumed by dual-licensed Pilot-Navigators, and still later by the aircraft's primary pilots (Captain and FO), resulting in a continued downsizing in the number of aircrew positions on commercial flights. Modern electronic navigation systems made the navigator redundant by the early 1980s. Commercial aviation: Radio Operator (archaic). A position on much older aircraft, typically between the mid-1910s and the 1940s, where a separate crew member was often responsible for handling telegraphic and voice radio communications between the aircraft and ground stations. As radio sets became increasingly sophisticated and easier to operate, the function was taken over directly by a FO or SO, and still later by the pilot-in-command and co-pilot, making the radio operator's position redundant. Commercial aviation: Cabin positions Aircraft cabin crew members can consist of: Purser or In-flight Service Manager or Cabin Services Director, is responsible for the cabin crew as a team leader. Flight attendant or Cabin Crew, is the crew member responsible for the safety of passengers. Historically during the early era of commercial aviation, the position was staffed by young 'cabin boys' who assisted passengers. Cabin boys were replaced by female nurses, originally called 'stewardesses'. The medical background requirement for the flight attendant position was later dropped. Flight medic, is a specialized paramedic employed on air ambulance aircraft or flights. Loadmaster, is a crew member on a cargo aircraft responsible for loading freight and personnel, and for calculating the aircraft's weight and balance prior to flight, which must be within the aircraft manufacturer's prescribed limits, for safe flight. On non-cargo aircraft, weight and balance tasks are performed by the flight crew. Military: From the start of military aviation, additional crew members have flown on military aircraft. Over time these duties have expanded: Pilot Co-pilot Air gunner, crew member responsible for the operation of defensive weapons, for example gun turrets. Specific positions include nose gunner, door gunner and tail gunner Bombardier or Bomb Aimer is a crew member for the release of ordnance, particularly bombs. Military: Boom operator, an aircrew member on tanker aircraft responsible for operating the flying boom and the transfer of fuel. Combat systems officer Airborne Mission Systems Specialist, an aircrew member who operates some form of electronic or other type equipment such as computers, radars, or intelligence gathering equipment to assist or complete the aircraft's mission. 
Airborne Sensor Operator, An airborne sensor operator (Aerial Sensor Operator, Tactical Coordinator, EWO etc.) is the functional profession of gathering information from an airborne platform (Manned or Unmanned) and/or oversee mission management systems for tactical, operational and strategic remote sensing purposes. Crew chief, an enlisted aircraft mechanic with many various responsibilities. Primary among those are aircraft maintenance, pre-flight/postflight inspections, passenger management, acting as a doorgunner, in-air fire fighting, airspace surveillance, assisting the pilots to land the aircraft in difficult landing zones, assisting pilots with engine start up and shutdown safety, fuel checks, monitoring "hot" refuels (refueling with engines running). Flight attendant, a crew member who tends to passengers on military aircraft. This position is similar to the duties performed by commercial flight attendants. Flight engineer, a crew member responsible for engines, systems and fuel management. Flight officer Flight surgeon or flight nurse, aerial medical staff not involved in the operation of the aircraft but is considered by some militaries to be aircrew. Loadmaster, crew member responsible for loading freight and personnel and the weight and balance of the aircraft. Navigator, a crew member responsible for air navigation. Still actively trained and licensed in some present day militaries, as electronic navigation aids can not be assumed to be operational during warfare. Air observer Radar intercept officer Rescue swimmer on air-sea rescue aircraft Air Signaller or radio operator, crew member responsible for the operation of the aircraft communications systems. Tactical coordinator (TACCO), Weapon System Officer on board a Maritime Patrol Aircraft. Weapon Systems Officer (WSO) Commissioned Aircrew Officer Weapons or Mission System Specialist. Weapon Systems Operator (WSOp), as above but Enlisted.
**Dragon Skin** Dragon Skin: Dragon Skin is a type of ballistic vest first produced by the now-defunct company Pinnacle Armor, and subsequently manufactured by North American Development Group LLC. The vest manufacturer claimed that it could absorb a high number of bullets because of its unique design involving circular discs that overlapped, similar to scale armor. The Department of Justice (DOJ), Office of Justice Programs (OJP) announced in 2007 that the armor did not comply with the OJP's National Institute of Justice (NIJ) 2005 Interim Requirements as a Level III armor system. This failure to comply with safety standards, together with additional testing, led the U.S. military to ban it from active use. Pinnacle Armor: Pinnacle Armor was a United States-based armor manufacturing company. It was founded in 2000 and was based in Fresno, California. Pinnacle also acquired the patent rights to Dragon Skin from Armor Technology Corp in 2000. In addition to Dragon Skin body armor, the company produced reinforced materials for use on vehicles and buildings, as well as related training materials. Pinnacle Armor began producing Dragon Skin in the 2000s, and the armor was available to military members, law enforcement, the Central Intelligence Agency (CIA), U.S. Secret Service personnel, and civilian contractors. Pinnacle Armor filed for Chapter 11 bankruptcy on January 3, 2010. Structure: Dragon Skin armor is made of overlapping, two-inch-wide, high-tensile-strength ceramic discs, composed of silicon carbide ceramic matrices and laminates, that overlap like scale armor, encased in a fiberglass textile. Testing: Television and internet In a test for the History Channel's military show, Future Weapons, the vest repelled nine rounds of steel-core ammunition from an AK-47 fired on full automatic and 35 rounds of 9×19mm from a Heckler & Koch MP5A3, all fired into a 10-by-12-inch area on the vest. On Test Lab, also on the History Channel, the vest withstood 120 rounds fired from a Type 56 (7.62×39mm) rifle and Heckler & Koch MP5 (9×19mm). In another demonstration on the Discovery Channel series Future Weapons, a Dragon Skin vest withstood numerous rounds (including steel core rounds) from an AK-47, a Heckler & Koch MP5SD, an M4 carbine (5.56×45mm), and a point-blank detonation of an M67 grenade. While the vest was heavily damaged (mainly by the grenade), there was no penetration of the armor. In 2007, NBC News had independent ballistics testing conducted comparing Dragon Skin against Interceptor body armor. Retired four-star general Wayne A. Downing observed the tests and concluded that although the number of trials performed was limited, the Dragon Skin armor performed significantly better than Interceptor. It was also featured on Time Warp on the Discovery Channel. NBC also interviewed retired USMC Colonel James Magee, a developer of the Army's then-current Interceptor body armor, who stated, "Dragon Skin is the best out there, hands down. It's better than the Interceptor. It is state of the art. In some cases, it's two steps ahead of anything I've ever seen." The Defense Review website also published a positive article, noting that in their test and review of the Dragon Skin armor, they had found that it was "significantly superior in every combat-relevant way to U.S. Army PEO Soldier's and U.S.
Army Natick Soldier Center (NSC)/Soldier Systems Center's Interceptor Body Armor". In light of the May 2007 media investigations, senators Hillary Clinton and Jim Webb requested that Comptroller General of the United States David M. Walker initiate a Government Accountability Office investigation into the Army's body armor systems. After being confronted with conflicting information by lawmakers who questioned the NBC test results and Army-supplied data of vest failures from a May 2006 test, the technical expert solicited by NBC to certify its test rescinded his previous support of Dragon Skin and stated that the vests "weren't ready for prime time". Testing: Law enforcement In Fresno, California, a police department commissioned the purchase of Dragon Skin for its officers after a vest stopped all the bullets fired during a test, including .308 rounds from a rifle and 30 rounds from a 9mm MP5 fired from five feet away. The armor also stopped 40 rounds of PS-M1943 mild steel-core bullets from an AK-47 along with 200 9mm full metal jacket bullets fired from a submachine gun. Testing: Military testing Dragon Skin became the subject of controversy with the U.S. Army over testing it against its Interceptor body armor. The Army claimed Pinnacle's body armor was not proven to be effective. In test runs for the Air Force there were multiple failures to meet the claimed level of protection. This, coupled with poor quality control (over 200 of the 380 vests delivered to USAF OSI were recalled due to improperly manufactured armor disks) and accusations of fraudulent claims of an official NIJ rating (Pinnacle had not actually obtained the rating at the time of purchase), led to the termination of the USAF contract. Pinnacle attempted to appeal this decision, but courts found in favor of the USAF. Dragon Skin armor did not meet military standards when subjected to various environmental conditions, including: high (+150°F) and low (-60°F) temperature, diesel fuel, oil, and saltwater immersion, and a 14-hour temperature cycle from -25°F to +120°F. Military testing revealed that the epoxy glue that held its disc plates together would come undone when subjected to high temperatures, causing the discs to delaminate and accumulate in the lower portion of the armor panel. This exposed significant portions of the armor, resulting in Dragon Skin vests suffering 13 first- or second-shot complete penetrations. On April 26, 2006, Pinnacle Armor issued a press release to address these claims and a product recall instigated by the United States Navy. The company stated that although vests were returned due to a manufacturing issue, a test on the Dragon Skin Level III armor was conducted by the United States Air Force Office of Special Investigations at Aberdeen Proving Ground in February 2006, which concluded that it "did not fail any written contract specifications" set forth by the Air Force, which was further stated by Pinnacle Armor to require high ballistic performance due to the hostile environments in which AFOSI operates. The Pentagon stated that the test results were classified and neither side could agree to terms on another, more comprehensive test. The Army wanted to hold and inspect the vests for 1–2 weeks before shooting at them, and Pinnacle wanted them shot at right away, straight out of the box. On May 19, 2006, it was announced that the dispute had been resolved and that the vests would be retested by the Army.
On May 20, 2006 it was announced by The Washington Post (and other newspapers) in an article titled "Potential Advance in Body Armor Fails Tests" that the Dragon Skin vests had failed the retest according to their anonymous source. Official results of these tests were classified at the time but have since been released by the Army. Testing: On June 6, 2006, Karl Masters, director of engineering for Program Manager - Soldier Equipment, said he recently supervised the retest and commented on it. "I was recently tasked by the army to conduct the test of the 30 Dragon Skin SOV-3000 level IV body armor purchased for T&E [tests and evaluation]," Masters wrote. "My day job is acting product manager for Interceptor Body Armor. I'm under a gag order until the test results make it up the chain. I will, however, offer an enlightened and informed recommendation to anyone considering purchasing an SOV-3000 Dragon Skin—don't. I do not recommend this design for use in an AOR with a 7.62×54R AP threat and an ambient temperature that could range to 49°C (120 F). I do, however, highly recommend this system for use by insurgents..." In response to these claims, Pinnacle Armor released a press release on June 30, 2006. Official results of these tests are classified. Testing: According to the Army, the vests failed because the extreme temperature tests caused the discs to dislodge, thus rendering the vest ineffective. Pinnacle Armor affirms that their products can withstand environmental tests in accordance with military standards, as does testing by the Aberdeen Test Center.In response to claims made by several U.S. senators, Dragon Skin and special interest groups, on Monday, May 21, 2007, the Army held a press conference where they released the results of the tests they claimed Dragon Skin failed.In April 2008 one of the Dragon Skin vests, with a serial number that identifies it as one of 30 vests bought by the Department of Defense for U.S. Army for testing in 2006, was listed and later bought from eBay.The seller, David Bronson, allegedly was connected to a U.S. Army testing facility. The U.S. Government Accountability Office (GAO), the U.S. Department of Justice, and the F.B.I. are investigating the matter as of May 2008. The buyer described the vest as having been shot at least 20 times, with not a single through-penetration. U.S. Army ban: On March 30, 2006 the Army banned all privately purchased commercial body armor in theater. Army officials said the ban order was prompted by concerns that soldiers or their families were buying inadequate or untested commercial armor from private companies. The Army ban refers specifically to Pinnacle's Dragon Skin armor saying that the company advertising implies that Dragon Skin "is superior in performance" to the Interceptor Body Armor the military issues to soldiers. The United States Marine Corps has not issued a similar directive, but Marines are "encouraged to wear Marine Corps-issued body armor since this armor has been tested to meet fleet standards." NBC News learned that well after the Army ban, select elite forces assigned to protect generals and VIPs in Iraq and Afghanistan wore Dragon Skin. General Peter W. 
Chiarelli stated that he never wore Dragon Skin but that some members of his staff did wear a lighter version of the banned armor on certain limited occasions, despite the Army ban. Chris Kyle stated in his book American Sniper that he wore Dragon Skin body armor, which he received from his wife's parents as a gift, after his third deployment. H.P. White Labs conducted tests on Dragon Skin in May 2006. Even under normal conditions, the SOV 3000 model of Dragon Skin failed to stop the second impact of an M2AP round. When the other tests were run, the SOV 3000 failed multiple times, with the exception of the saltwater test. Certification and subsequent decertification: In an interview with KSEE 24 News, an NBC affiliate, on November 14 and 16, 2006, Pinnacle Armor detailed the five-year process that the NIJ and Pinnacle Armor went through to establish a test protocol and procedure for flexible rifle-defeating armor, which it passed and then received certification. On December 20, 2006, Pinnacle Armor said that they received the official letter from the National Institute of Justice (NIJ) stating that they had passed the Level III tests, and that Dragon Skin SOV-2000 was now certified for Level III protection. The Air Force, which ordered the Dragon Skin vests partially based on claims they were NIJ certified at a time when they were not, has opened a criminal investigation into the firm Pinnacle Armor over allegations that it had fraudulently placed a label on their Dragon Skin armor improperly stating that it had been certified to a ballistic level. Murray Neal, the Pinnacle Armor chief executive, claimed that he was given verbal authorization by the NIJ to label the vests although he did not have written authorization. On August 3, 2007, the Department of Justice announced that the NIJ had reviewed evidence provided by the body armor manufacturer and had determined that the evidence was insufficient to demonstrate that the body armor model would maintain its ballistic performance over its six-year declared warranty period. Because of this, Dragon Skin was found to not be in compliance with the NIJ's testing program and has been removed from the NIJ's list of bullet-resistant body armor models that satisfy its requirements. Pinnacle CEO Murray Neal responded that this move was unprecedented, political, and not about the quality of the vests, because the NIJ was not claiming failure of any ballistics tests. Neal stated that the finding was instead motivated by a dispute regarding a warranty issue, in which the warranty period of Dragon Skin is longer than that of most other commercial vests. Certification and subsequent decertification: Subsequent testing On August 20, 2007, at the United States Test Laboratory in Wichita, Kansas, nine Dragon Skin SOV-2000 (Level III) body armor panels were retested, for the purpose of validating Pinnacle Armor's six-year warranty. The panels tested were between 5.7 years old and 6.8 years old. All items met the NIJ Level III ballistic protection requirements, confirming Pinnacle Armor's six-year warranty for full ballistic protection. Pinnacle resubmitted the SOV-2000 vest to the NIJ for certification based on this successful testing, but this application was rejected because the test had not been properly documented. In November 2007, Pinnacle sued to force the NIJ to recertify the SOV-2000 vest; their case was found to be without merit and summarily dismissed in November 2013.
**Trillian (software)** Trillian (software): Trillian is a proprietary multiprotocol instant messaging application created by Cerulean Studios. It is currently available for Microsoft Windows, Mac OS X, Linux, Android, iOS, BlackBerry OS, and the Web. It can connect to multiple IM services, such as AIM, Bonjour, Facebook Messenger, Google Talk (Hangouts), IRC, XMPP (Jabber), VZ, and Yahoo! Messenger networks; as well as social networking sites, such as Facebook, Foursquare, LinkedIn, and Twitter; and email services, such as POP3 and IMAP. Trillian (software): Trillian no longer supports Windows Live Messenger or Skype, as these services have combined and Microsoft chose to discontinue SkypeKit, which was used for the connection. It also no longer supports connecting to MySpace, or distinct connections for Gmail, Hotmail or Yahoo! Mail, although these can still be connected to via POP3 or IMAP. Currently, Trillian supports Facebook, Google, Jabber (XMPP), and Olark. Trillian (software): Initially released July 1, 2000, as a freeware IRC client, the first commercial version (Trillian Pro 1.0) was published on September 10, 2002. The program was named after Trillian, a fictional character in The Hitchhiker's Guide to the Galaxy by Douglas Adams. A previous version of the official web site even had a tribute to Douglas Adams on its front page. On August 14, 2009, Trillian "Astra" (4.0) for Windows was released, along with its own Astra network. Trillian 5 for Windows was released in May 2011, and Trillian 6.0 was initially released in February 2017. Features: Connection to multiple IM services Trillian connects to multiple instant messaging services without the need to run multiple clients. Users can create multiple connections to the same service, and can also group connections under separate identities to prevent confusion. All contacts are gathered under the same contact list. Contacts are not bound to their own IM service groups, and can be dragged and dropped freely. Trillian represents each service with a different-colored sphere. Prior versions used the corporate logos for each service, but these were removed to avoid copyright issues, although some skins still use the original icons. The Trillian designers chose a color-coding scheme based on the maps used by the London Underground, which use different colors to differentiate between different lines. Features: IM services: green and blue for the Trillian Astra network; grey for IRC; amber and dark gray for Bonjour (Rendezvous); purple for Jabber/XMPP (partially broken as of 10/27/2017); teal and amber for Google Talk (discontinued as of 2022); and blue and teal for Facebook (discontinued as of 2022). Mail services: a white envelope for POP email, a manila envelope for IMAP email, and a teal envelope for Twitter. Prior versions of Trillian supported: Microsoft Exchange, Lotus Sametime, and Novell GroupWise Messenger. Metacontact: To eliminate duplicates and simplify the structure of the contact list, users can bundle multiple contact entries for the same person into one entry in the contact list, using the Metacontact feature (similarly to Ayttm's fallback messaging feature). Subcontacts appear under the metacontact as small icons aligned in the manner of a tree. Features: Activity history Trillian Pro comes with Activity History, and history is logged both as plain text files and as XML files. Pro has a History Manager that shows the chat history and allows the user to add bookmarks for revision later on.
XML-based history makes the log easy to manipulate, search, and extend for future functions. Stream manipulation Trillian Pro also has a stream manipulation feature labelled 'time travel', which allows the user to record, and subsequently review, pause, rewind, and fast forward live video and audio sessions. SecureIM SecureIM is an encryption system built into the Trillian Instant Messenger Client. It encrypts messages from user to user, so that no passively observing node between the two is supposedly able to read the encrypted messages. SecureIM does not authenticate its messages, and therefore it is susceptible to active attacks, including simple forms of man-in-the-middle attacks. According to Cerulean Studios, the makers of Trillian, SecureIM enciphers messages with 128-bit Blowfish encryption. It only works with the OSCAR protocol and only if both chat partners use Trillian. However, the key used for encryption is established using a Diffie–Hellman key exchange that uses only a 128-bit prime number as its modulus, which is extremely insecure and can be broken within minutes on a standard PC (a toy illustration of why such a small modulus is weak appears at the end of this article). Features: Instant lookup Starting with version 3.0 in both the Basic and Pro suites, Trillian makes use of the English-language version of Wikipedia, the free online encyclopedia, for real-time referencing using its database of free knowledge. The feature is employed directly within a user's conversation window. When one or more words are entered (by either user), Trillian checks all words against a database file and, if a match is found, the word appears with a dotted green underline. When users point their mouse over the word, the lead paragraph of the corresponding article is downloaded from Wikipedia and displayed on screen as a tooltip. When users click on the underlined word, they are given the choice to visit the article online. Features: Emotiblips Emotiblips are the video equivalent of an emoticon. During video sessions, the user may stream a song or video to the other user in real time. One can send MP3s, WAVs, WMVs, and MPGs with this feature. QuickTime MOV files are not currently supported as Emotiblips. Hidden smileys From version 2.0 to the current version, the default emoticon set contains emoticons that don't appear in the menu but can be used in conversations. Some of these are animations that can only be viewed in Trillian Pro, but all of them can be used regardless. Features: Skins and interfaces (Discontinued) Trillian has its own unique skinning engine known as SkinXML. Many skins have been developed for Trillian, and they can be downloaded from the official skins gallery or deviantArt. Trillian also came with an easier skinning language, Stixe, which is essentially a set of XML Entities that simplifies repetitive code and allows skinners to share XML and graphics in the form of emoticon packs, sound packs and interfaces. Features: The default skins of Trillian are designed by Madelena Mak. Trillian Cordillera was used in Trillian 0.7x, while Trillian Whistler has been the default skin for Trillian since Pro 1.0. Small cosmetic changes were noticeable in each major release. Trillian Astra features a brand new design for the front-end UI, named Trillian Cordonata. Features: Plugins (Discontinued) Trillian is a closed-source application, but the Pro version can be extended by plugins.
Plugins by Cerulean Studios itself include spell-check, a weather monitor, a mini-browser (for viewing AIM profiles), a Winamp song title scroller, a stock exchange monitor, an RSS feed reader, and conversation abilities for the Logitech G15 keyboard, as well as a plug-in for the XMPP and Bonjour networks. Others have developed various plug-ins, such as a games plug-in which can be used to play chess and checkers, a protocol plugin to send NetBIOS messages through Trillian, a plug-in to interact with Lotus Sametime clients, a plug-in to interact with Microsoft Exchange, a POP3 and IMAP email checker, and an automatic translator for many European languages to and from English. Features: Trillian 5.1 for Windows and later included a plug-in that allows the user to chat and make calls on Skype without Skype being installed. As of July 2014, Skype is no longer accessible from the Trillian client, as the Skype plug-in no longer works (some had been able to use older versions of the Trillian client, but now these also no longer work with Skype). Plugins are available for free and are hosted on the official web site, but most need Trillian Pro 2+ to run. Features: In-Game Chat Starting at version 5.3, Trillian users can toggle an overlay when playing a video game on the computer that allows the user to use Trillian's chat features, in a similar vein to Steam's overlay chat. When toggled, the overlay will show the time according to the system's clock, and the chat window itself is a variation of Trillian's base chat window, with tabs used for different sets of queries and channels. Also, when the overlay is not activated, users can view a toggle-able sticker that shows how many messages are unread. History: Early beginnings After several internal builds, the first ever public release of Trillian, version 0.50, was made available on July 1, 2000, and was designed to be an IRC client. The release was deemed 'too buggy' and was immediately pulled off the shelf and replaced by a new version 0.51 on the same day. It featured a simple Connection Manager and skinned windows. History: A month later, two minor builds were released with additional IRC features and bug fixes. Despite these efforts, Trillian was not popular, as reflected in the number of downloads from CNET's Download.com. Trillian was donateware at that time; Cerulean Studios used PayPal to receive donations through its web site. Introduction of interoperability Version 0.6, released November 29, 2000, represented a major change in the direction of development, when the client became able to connect to AOL Instant Messenger, ICQ and MSN Messenger simultaneously in one window. History: Although similar products, such as Odigo and Imici, already existed, Trillian was novel in the way that it distinguished contacts from different IM services clearly on the contact list, and it did not require registration of a proprietary account. It also did not lose connection as easily as the other clients. A month later, Yahoo! Messenger support was introduced in Trillian 0.61, which also featured a holiday skin for Christmas. Meanwhile, the Trillian community forums were opened to the public. History: During this period, new versions were released frequently, attracting many enthusiasts to the community. Skinning activity boomed and fan sites were created. A skinning contest was held on deviantArt in the summer, and the winner was selected to design the default skin for the next version of Trillian. Trillian hit 100,000 downloads on August 14, 2001.
Entry into the mainstream and the "IM Wars" Contrary to the community's anticipation of a version "0.64", the next version of Trillian was numbered 0.70. It was released December 5, 2001. Development took five months, considerably longer than the development of prior builds. History: The new version implemented file transfer in all IM services, the feature most requested by the community at the time. It also brought a number of skin language changes. It used the contact list as the main window (as opposed to a status window 'container' in prior versions) and featured a brand new default skin, Trillian Cordillera, and an emoticon set boasting over 100 emoticons, a record that set it apart from other messengers available at that time. History: Version 0.71 was released on December 18, 2001. It supported AIM group chats and was the first major IM client to include the ability to encrypt messages with SecureIM. In the following months, the number of downloads of Trillian surged, reaching 1 million on 27 January 2002, and 5 million within 6 months. Trillian received coverage and favorable reviews from mainstream media worldwide, particularly from CNET, Wired and BetaNews. The lead developer and co-founder, Scott Werndorfer, was also interviewed on TechTV. History: AOL became aware that Trillian users were able to chat with their AIM buddies without having to download the AIM client, and on January 28, 2002, AOL blocked SecureIM access from Trillian clients. Cerulean appeared to have circumvented the block with version 0.721 of its client software, released one day later. This "AOL War" continued for the next couple of weeks, with Cerulean releasing subsequent patches 0.722, 0.723 and 0.724. Trillian appeared in the Jupiter Media Metrix Internet audience ratings in February 2002 with 344,000 unique users, and grew to 610,000 by April 2002. While those numbers are very small compared to the major IM networks, Jupiter said Trillian consistently ranked highest in the number of average minutes spent per month. Trillian also created a special version for Iomega ActiveDisk. History: Commercialisation with Trillian Pro On September 9, 2002, a commercial version, Trillian Pro 1.0, was released concurrently with Trillian Basic 0.74. The commercial version was sold for US$25 for a year of subscription, but all those who had previously donated to the development of Trillian were eligible for a year of subscription at no cost. The new version added SMS and mobile messaging abilities, Yahoo! Messenger webcam support, pop-up e-mail alerts and new plug-ins to shuttle news, weather and stock quotes directly to buddy lists. It appeared Trillian Pro would be marketed to corporate clients looking to keep in touch with suppliers or customers via a secured, interoperable IM network and a relatively stern user interface. The company had no venture capital backing, and had depended entirely on donations from users to stay alive. Trillian Pro 1.0 was nominated and picked from among three other nominees as the Best Internet Communication shareware in its debut year of being a "try before you buy" shareware. On April 26, 2003, total downloads of Trillian reached ten million. Blocking from Yahoo! and cooperation with Gaim A few weeks after Trillian Pro 2.0 was released, Yahoo! attempted to block Trillian from connecting to its service in its "efforts to implement preventative measures to protect our users from potential spammers." A few patches were released by the Trillian developers, which resolved the issue.
History: The Trillian developers assisted their open-source cross-platform rival Gaim in solving the Yahoo! connection issues. Sean Egan, the developer of Gaim, posted on its site, "Our friends over at Cerulean Studios managed to break my speed record at cracking Yahoo! authentication schemes with an impressive feat of hackery. They sent it over and here it is in Gaim 0.70." It was later revealed that the developers were friends and had helped each other on past occasions. Meanwhile, as Microsoft forced its users to upgrade to MSN Messenger 5.0 as part of server upgrades for security issues, October 15, 2003, would also mark the deadline for Trillian's support for MSN Messenger. However, it appeared that Cerulean Studios worked with Microsoft to resolve the issue on August 2, 2003, long before the deadline. History: On March 7, 2004, and June 23, 2004, Yahoo! changed its instant messaging language again to prevent third-party services, such as Trillian, from accessing its service. As in prior statements, the company said the block was meant as a pre-emptive measure against spammers. Cerulean Studios released a few patches to fix the issues within a day or two. Trillian 3 Series In August 2004, a new official blog was created in an attempt to rebuild connections between the Studios and its customers. Trillian 3 was announced in the blog, and a sneak preview was made available to a small group of testers. History: After months of beta-testing, the final build of Trillian 3 was released on December 18, 2004, with features such as new video and audio chat abilities throughout AIM, MSN Messenger and Yahoo! Messenger, an enhanced logging manager and integration with the Wikipedia online encyclopedia. It also featured a clean and re-organized user interface and a brand new official web site. History: The release also updated the long-abandoned Trillian Basic .74 to match the new user interface and functionality as Trillian Basic 3.0. The number of accumulated downloads of Trillian Basic on Download.com hit 20 million within a matter of weeks. Trillian 3.1 was released February 23, 2005. It included new features such as Universal Plug and Play (UPnP) and multiple identities support. On June 10, 2011, all instances of Trillian 3 Basic got an automatic upgrade to Trillian 3 Pro, free of charge. U3 and Google Pack A version of Trillian that could run on U3 USB flash drives was released on October 21, 2005. Trillian could previously be run from generic flash drives or other storage devices with some minor unofficial modifications, known as "Trillian Anywhere". A U3 version of Trillian Astra is also posted on the official Cerulean Studios forum. On January 6, 2006, Larry Page, President of Products at Google, announced Google Pack, a bundle of various applications including Trillian Basic 3.0, as "a free collection of safe, useful software from Google and other companies that improves the user experience online and on the desktop". History: According to the Cerulean Studios blog, Trillian was discontinued from Google Pack on 19 May 2006. The inclusion of Trillian in Google Pack was perplexing to some media analysts, as Google at the time had its own Google Talk service, which touted the benefits of an open IM system. The free Trillian Basic client could not be used with Google Talk; however, the paid Trillian Pro was listed as one of the "client choices" in the Google Talk client choices list until Google Talk was replaced by Google Hangouts in May 2013.
History: Trillian Astra (Trillian 4) More than a year after the release of Trillian 3.1, the Cerulean Studios blog began spreading news again and announced the next version of Trillian, to be named Trillian Astra. The name for version 4, Astra, is the nickname used by the same fictional character that is the namesake of the software, which is a reference to The Hitchhiker's Guide to the Galaxy. The new release claimed to be faster and to include a new login screen. A new domain, www.trillianastra.com, was disclosed to the public, showing only the logo and a blue background. On July 3, 2009, Cerulean Studios reopened the premium web version of Astra to public testing. On August 14, 2009, Cerulean Studios released the final gold build. Trillian has its own social network named the Astra Network, in which users who have an Astra ID can communicate with each other regardless of platform. Cerulean Studios later registered a new domain, www.trillian.im, to provide a more user-friendly experience. History: On November 18, 2009, the first mobile version of Trillian was launched for iPhone. As of 2010, final builds for Android, BlackBerry, and Apple iOS were available from their respective markets (Market, App World and App Store respectively). Trillian initially cost US$4.99 but became free of charge, supported by ads, in 2011. As of August 2010, the Mac OS X version was in beta testing. History: Trillian 5 On August 2, 2010, Trillian 5.0 was released as a public beta. New features included a resizable interface, history synchronization, a new ribbon-inspired interface with Windows theme integration, new "marble-like" icons for service providers, the option to revert to the Trillian 3 and 4 interfaces, and a new social network interface window. Along with Trillian 5.0 for Windows and the aforementioned Mac beta, the Android and BlackBerry OS final builds were available on their respective markets for free as of 2010. History: OpenCandy Included with the installation of Trillian 5.0 was a program called OpenCandy, which some security programs, including Microsoft Security Essentials, classed as adware. OpenCandy was removed shortly afterwards, on May 5, 2011. Trillian 6 On January 8, 2016, Trillian 6 was released. Loss of Networks Trillian has stopped attempting to work around other networks' systems to keep its client connected to them. Cerulean Studios has also not done any development to integrate support for any of the newer networks, instead urging people to use its own IM service. As Yahoo! decided to shut down the legacy Yahoo! Messenger clients and servers, Trillian and all other third-party clients have been unable to connect to Yahoo! Messenger since August 31, 2016. As AOL decided to shut down the AIM network, Trillian and all other clients have been unable to connect to AIM since December 15, 2017. As ICQ decided to disable support for third-party IM clients, Trillian has been unable to connect to ICQ since April 1, 2019. MSN IM accounts could also be used as Skype accounts after Microsoft acquired Skype in 2011, and could still use the service at that time; the MSN service was shut down in 2013. As (Microsoft) Skype decided to disable support for third-party IM clients, Trillian has been unable to connect to Skype since 2013. As Google Talk has shut down, Trillian has been unable to connect to that service since June 16, 2022.
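The SecureIM section above notes that SecureIM's key is negotiated with a Diffie–Hellman exchange over a 128-bit prime modulus, which is far too small to resist discrete-logarithm attacks. The sketch below is only a toy illustration of that point, not Trillian's actual code: it runs an exchange over a deliberately tiny prime and then recovers the shared secret from the public values alone using a baby-step giant-step attack (real attacks on 128-bit moduli use faster index-calculus methods, but the dependence on modulus size is the same). All parameters are hypothetical.

```python
# Toy Diffie-Hellman exchange over a tiny prime, followed by a baby-step
# giant-step discrete-log attack that recovers the shared secret from the
# public values only. Illustrative only: the lesson is that security rests
# entirely on the size of the modulus.
import math
import random

def bsgs(g, h, p):
    """Return x with g**x == h (mod p), via baby-step giant-step in O(sqrt(p))."""
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}      # baby steps: g^j for j in [0, m)
    factor = pow(g, (p - 2) * m, p)                  # g^(-m) mod p (p is prime)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    raise ValueError("discrete log not found")

p, g = 100003, 2                      # tiny public parameters (hypothetical)
a = random.randrange(2, p - 1)        # Alice's secret exponent
b = random.randrange(2, p - 1)        # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)     # public values an eavesdropper can see
shared = pow(B, a, p)                 # the legitimate shared secret

a_recovered = bsgs(g, A, p)           # attacker recovers a working exponent
assert pow(B, a_recovered, p) == shared
print("eavesdropper recovered the shared secret:", pow(B, a_recovered, p))
```

Scaling the modulus up to a few thousand bits is what makes the same exchange practical for the two parties yet infeasible for an eavesdropper to break this way.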
**Surfactant protein D** Surfactant protein D: Surfactant protein D, also known as SP-D, is a lung surfactant protein belonging to the collagen-containing family of proteins called collectins. In humans, SP-D is encoded by the SFTPD gene and is part of the innate immune system. Each SP-D subunit is composed of an N-terminal domain, a collagenous region, a nucleating neck region, and a C-terminal lectin domain. Three of these subunits assemble into a homotrimer, and the homotrimers further assemble into a tetrameric complex. Interactions: Surfactant protein D has been shown to interact with DMBT1 and with the hemagglutinin of influenza A virus. Post-translational modification of SP-D, i.e. S-nitrosylation, switches its function.
**Moshing** Moshing: Moshing (also known as slam dancing or simply slamming) is an extreme style of dancing in which participants push or slam into each other. Taking place in an area called the mosh pit (or simply the pit), it is typically performed to aggressive styles of live music such as punk rock and heavy metal. Moshing: The dance style originated in the southern California hardcore punk scene, particularly Huntington Beach and Long Beach, around 1978. Through the 1980s it spread to the hardcore scenes of Washington, D.C., Boston and New York, where it developed local variants. In New York, the crossover between the city's hardcore scene and its metal scene led to moshing incorporating itself into metal beginning around 1985. In the 1990s, the success of grunge music led to moshing entering mainstream understanding and soon being incorporated into genres like electronic dance music and hip hop. Moshing: Due to its violence, moshing has been subject to controversy, with a number of concert venues banning the practice, and some musicians being arrested for encouraging it and concertgoers for participating. Etymology: The name "mosh" originates from the word "mash". While performing their song "Banned in D.C." in either 1979 or 1980, H.R., vocalist of Washington D.C. hardcore band the Bad Brains, shouted "mash it - mash down Babylon!" Because of his faux Jamaican accent, some audience members heard this as "mosh it - mosh down Babylon". Beginning around 1983, metalheads began to refer to the slower sections of hardcore songs as "mosh parts", while hardcore musicians had called them "skank parts". Once Stormtroopers of Death released their debut album Speak English or Die in 1985, which included the track "Milano Mosh", the term began being applied to the style of dance. The term was then further popularised by Anthrax's 1987 song "Caught in a Mosh". History: Origins and early developments (1970s–1980s) The direct predecessor to moshing was the pogo, a style of dance done in the 1970s English punk rock scene, in which crowd members would jump up and down while holding their arms beside them. According to The Filth and the Fury, it was invented by Sex Pistols bassist Sid Vicious in 1976. As a prominent punk rock scene in Southern California began to form in the late 1970s and early 1980s with early hardcore punk groups like Fear and Black Flag, moshing as it is understood today began to develop, originally termed "slam dancing". Participants in slam dancing at this time modified the pogo by bringing additional physical contact to those around them through pushing and running, as well as introducing the idea of a recognised area where it takes place, called a "pit". According to Steven Blush's book American Hardcore: A Tribal History (2001), there is a common belief amongst those involved in this scene that the dance was invented by former US Marine Mike Marine in 1978. His specific style, involving "strutting around in a circle, swinging your arms and hitting everyone within reach", would go on to be termed "the Huntington Beach Strut". The Orange County Register writer Tom Berg credited the Costa Mesa venue the Cuckoo's Nest (1976–1981) as the "birthplace of slam dancing". Examples of this early moshing were featured in the documentaries Another State of Mind, Urban Struggle, The Decline of Western Civilization, and American Hardcore.
Fear's 1981 musical performance on Saturday Night Live also helped to expose moshing to a much wider audience. By 1981, slam dancing had become the predominant style of crowd interaction in the southern California scene, as Huntington Beach and Long Beach became the scene's heart. Washington, D.C. band the Teen Idles toured California in August 1980, where they were first exposed to slam dancing. Upon returning home, they introduced the practice to the Washington, D.C., hardcore scene. That particular scene took a more chaotic approach to slam dancing and saw an increase in stage diving, whereas in the Boston hardcore scene slam dancing became violent and incorporated punching below the neck, developing a style called the "Boston thrash" or "punching penguins". Another development in the Boston scene was "pig piles", in which one person was pushed to the ground and others would begin to pile on top of them. This originated during a D.O.A. set, where it was initiated by SSD guitarist Al Barile. The New York hardcore scene of the mid-1980s modified this early slam dancing into an additional, more violent style. In this variant, participants may stay in one position on their own or collide with others while executing a more exaggerated version of the arm and leg swinging of California slam dancing. As fans of heavy metal music began to attend New York hardcore performances, they developed their own style of dancing based on New York hardcore's style of slam dancing. It was this group, particularly Scott Ian and Billy Milano, who popularised the word "moshing". Ian and Milano's band Stormtroopers of Death released their debut album Speak English or Die in 1985, which included the track "Milano Mosh". This led to the term being applied to the style of dance. The same year, moshing began to incorporate itself into live performances by heavy metal bands, with one early example being during Anthrax's 1985 set at the Ritz. History: Mainstream crossover (1990s–present) Moshing entered mainstream consciousness with the rise of grunge in the early 1990s. As grunge became the dominant force in rock music, it brought with it aspects of genres like hardcore, punk and ska, and in turn pop culture became aware of the mosh pit. This was exacerbated by the success of Lollapalooza, which began in 1991 as a touring festival. In his book Festivals: A Music Lover's Guide to the Festivals You Need To Know, writer Oliver Keens stated that "Lollapalooza's greatest impact was to expose Middle America to the joys of stage-diving and moshing...You can see Lollapalooza's legacy in the way mosh pits have become an integral part of youth culture; beyond rock and metal". By 1992, the practice had become so common that concertgoers began to mosh to non-aggressive rock bands like the Cranberries. Moshing slowly entered hip hop during live performances by the Beastie Boys, who began as a hardcore punk band before adopting the hip hop style they became known for. During Public Enemy and Ice-T's European tour in the late 1980s, the artists witnessed moshing during their performances, which was still not commonplace during hip hop concerts. The 1991 collaboration song "Bring the Noise" by thrash metal band Anthrax and hip hop group Public Enemy led to a number of mixed-genre tours, which brought metal's moshing to the attention of hip hop fans. This was solidified as a part of hip hop by Onyx's 1993 single "Slam", a song which alluded to slam dancing and had a music video featuring moshing.
Following the video's release, pits became increasingly common during performances by hip hop artists including Busta Rhymes, M.O.P. and the Wu-Tang Clan. Moshing has been present during electronic dance music performances since at least 1996, with the Prodigy's performance at Endfest. By 1999, moshing had become commonplace during techno performances, especially hardcore techno. At late 1990s parties such as New York's H-Bomb, Milwaukee's Afternoon Delight and Los Angeles' Twilight, attendees inverted the intellectualism and PLUR credo that had permeated electronic music genres, like intelligent dance music, earlier in the decade, by incorporating crowd participation acts similar to those found at hardcore punk, metal and goth performances. In the 2010s, the success of Skrillex and his "DJ as rock star" attitude brought moshing into mainstream dance music. The 2010s saw the rise of a number of hip hop artists who used an "anarchic energy", which some critics at the time compared to that of punk. These artists, notably A$AP Mob, Odd Future and Danny Brown, revived moshing in mainstream hip hop, which led to pits becoming a staple of performances in the genre. During this era, Travis Scott's performances became particularly notable for their violent combination of moshing and crowd surfing, which he called "raging". Scott was arrested in 2015 and 2017 for inciting riots after encouraging these actions, with the latter event leaving an attendee partially paralyzed. However, the most infamous example of this at his concerts was the 2021 Astroworld Festival crowd crush, which left 25 hospitalized and 10 dead. Variations: The Huntington Beach strut, or simply the HB Strut, is the original style of slam dancing, which was popular in the Southern California hardcore scene in the late 1970s and 1980s. It involves "strutting around in a circle, swinging your arms and hitting everyone within reach". The Boston thrash or punching penguins is Boston's more violent development upon the Huntington Beach strut, which incorporates punching below the neck. A pig pile is a style of moshing popular in the Boston hardcore scene in the 1980s. It involved one person being pushed to the ground and others beginning to pile on top of them. A circle pit is a form of moshing in which participants run in a circular motion around the edges of the pit, often leaving an open space in the centre. A wall of death is a form of moshing which sees the audience divide down the middle into two halves on either side of the venue, before each side runs towards the other, slamming the two sides together. According to Noisecreep, the consensus is that it was invented by American hardcore punk band Sick of it All. However, the band's vocalist Lou Koller has stated that he merely revived the practice in 1996, as he often saw a similar act performed in the 1980s New York hardcore scene. Loudwire senior writer Graham Hartmann referred to it as "Perhaps the most bad ass and dangerous ritual you can experience in a mosh pit". Venues will often ask bands not to organize the Wall of Death themselves due to the inherent risk and liability involved. Hardcore dancing is a term that covers multiple styles of moshing, including windmilling, two-stepping, floorpunching, picking up pennies, axehandling, bucking, and wheelbarrowing. The practice began in New York City in the 1980s. Crowd killing is when a mosher moshes against the crowd around the sides of the pit. According to Kerrang!
writer Amanda van Poznak, it is generally looked down upon. Hip hop pits are generally less violent than those in hardcore, instead consisting of "a mass of people enthusiastically nudg[ing] each other while jumping in unison". Physical properties of emergent behavior: Researchers from Cornell University studied the emergent behavior of crowds at mosh pits by analyzing online videos, finding similarities with models of 2-D gases in equilibrium. Simulating the crowds with computer models, they found that a simulation dominated by flocking parameters produced highly ordered behavior, forming vortices like those seen in the videos (a minimal sketch of this kind of flocking simulation appears at the end of this article). Opposition, criticism and controversy: While moshing is seen by some as a form of positive fan feedback or expression of enjoyment, it has also drawn criticism over dangerous excesses in its violence. Injuries and even deaths have been reported in the crush of mosh pits. The American post-hardcore band Fugazi opposed slamdancing at their live shows. Members of Fugazi were reported to single out and confront specific members of the audience, politely asking them to stop hurting other audience members, or hauling them on stage to apologize on the microphone. Consolidated, an industrial dance group of the 1990s, stood against moshing. On their third album, Play More Music, they included the song "The Men's Movement", which denounced slamdancing as inappropriate. The song consisted of audio recordings, made during concerts, of audience members and members of Consolidated arguing about moshing. Opposition, criticism and controversy: In the 1990s, the Smashing Pumpkins took a stance against moshing, following two incidents which resulted in fatalities. At a 1996 Pumpkins concert in Dublin, Ireland, 17-year-old Bernadette O'Brien was crushed by moshing crowd members and later died in the hospital, despite warnings from the band that people were getting hurt. At another concert, singer Billy Corgan said to the audience: I just want to say one thing to you, you young, college lughead-types. I've been watchin' people like you sluggin' around other people for seven years. And you know what? It's the same shit. I wish you'd understand that in an environment like this, and in a setting like this, it's fairly inappropriate and unfair to the rest of the people around you. I, and we, publicly take a stand against moshing! Another fan died at a Smashing Pumpkins concert in Vancouver, British Columbia, Canada, on September 24, 2007. The 20-year-old man was dragged out of the mosh pit, unconscious, and was pronounced dead at a hospital after first-aid specialists attempted to save him. Opposition, criticism and controversy: Reel Big Fish's 1998 album Why Do They Rock So Hard? included their mosh-criticizing song "Thank You for Not Moshing", which contained lyrics suggesting that at least some individuals in the mosh pit were simply bullies who were finding conformity in the violence. Opposition, criticism and controversy: Mike Portnoy, founder and ex-drummer of Dream Theater, who also briefly filled in for Avenged Sevenfold after the death of The Rev, criticized moshing in an interview published on his website: I think our audience have become a little bit more attentive and less of that type of [mosh] mentality [...] I understand you want to release that energy... [but] once people start doing that during "Through Her Eyes" it gets ridiculous [...]
So this time around we're consciously aiming at theaters that people can actually sit down and enjoy the show and be comfortable [...] without having to worry about their legs falling off or being kicked in the face by a Mosh Pit. So [that] will probably eliminate that problem anyway. Opposition, criticism and controversy: Sixteen-year-old Jessica Michalik was an Australian girl who died of asphyxiation after being crushed in a mosh pit at the 2001 Big Day Out festival, during a performance by nu metal band Limp Bizkit. At that same festival, post-hardcore band At the Drive-In ended their set early, after only three songs, due to the audience's moshing. Joey DeMaio of American heavy metal band Manowar has been known to temporarily stop concerts upon seeing moshing and crowd surfing, claiming it is dangerous to other fans. Former Slipknot percussionist Chris Fehn spoke about the state of audience interaction following the onstage incident and subsequent legal issues involving Lamb of God's Randy Blythe, who was eventually found not guilty of criminal wrongdoing in the death of a concertgoer, despite being held "morally responsible". Fehn briefly addressed the Blythe situation, stating "I think, especially in America, moshing has turned into a form of bullying. The big guy stands in the middle and just trucks any small kid that comes near him. They don't mosh properly anymore. It sucks because that's not what it's about. Those guys need to be kicked out. A proper mosh pit is a great way to be as a group and dance, and just do your thing."
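The Cornell study described in the "Physical properties of emergent behavior" section modelled mosh pit crowds with simple particle simulations in which a flocking (alignment) term produces ordered, vortex-like collective motion. The following is a minimal Vicsek-style alignment model in that spirit, not the researchers' actual code; the parameters and the order-parameter diagnostic are illustrative assumptions.

```python
# Minimal Vicsek-style flocking step: each agent aligns its heading with the
# average heading of neighbours within a radius, plus noise. With strong
# alignment and low noise, ordered collective motion emerges, analogous to the
# flocking-dominated regime described above.
import numpy as np

def simulate(n=200, box=10.0, radius=1.0, speed=0.05, noise=0.1, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, size=(n, 2))        # positions in a periodic box
    theta = rng.uniform(-np.pi, np.pi, size=n)     # headings
    for _ in range(steps):
        # pairwise displacements with periodic boundary conditions
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)
        neighbours = (d ** 2).sum(-1) < radius ** 2   # neighbour mask (includes self)
        # circular mean of neighbour headings, then add angular noise
        mean_sin = (neighbours * np.sin(theta)[None, :]).sum(1)
        mean_cos = (neighbours * np.cos(theta)[None, :]).sum(1)
        theta = np.arctan2(mean_sin, mean_cos) + noise * rng.uniform(-np.pi, np.pi, n)
        pos = (pos + speed * np.c_[np.cos(theta), np.sin(theta)]) % box
    # polar order parameter: near 1 = ordered collective flow, near 0 = gas-like
    return np.abs(np.exp(1j * theta).mean())

print("order parameter:", round(simulate(), 3))
```

Raising the noise parameter pushes the same model toward the disordered, 2-D-gas-like behaviour that the study associated with ordinary mosh pits.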
**TRON (encoding)** TRON (encoding): TRON Code is a multi-byte character encoding used in the TRON project. It is similar to Unicode but does not use Unicode's Han unification process: each character from each CJK character set is encoded separately, including archaic and historical equivalents of modern characters. This means that Chinese, Japanese, and Korean text can be mixed without any ambiguity as to the exact form of the characters; however, it also means that many characters with equivalent semantics will be encoded more than once, complicating some operations. TRON (encoding): TRON has room for 150 million code points. Separate code points for Chinese, Korean, and Japanese variants of the 70,000+ Han characters in Unicode 4.1 (if that were deemed necessary) would require more than 200,000 code points in TRON. TRON includes the non-Han characters from Unicode 2.0, but it has not been keeping up to date with recent additions to Unicode as Unicode expands beyond the Basic Multilingual Plane and adds characters to existing scripts. The TRON encoding has, however, been updated to include other recent code page revisions such as JIS X 0213. Fonts for the TRON encoding are available, but they have restrictions on commercial use. Structure: Each character in TRON Code is two bytes. Similarly to ISO/IEC 2022, the TRON character encoding handles characters in multiple character sets within a single character encoding by using escape sequences, referred to as language specifier codes, to switch between planes of 48,400 code points. Character sets incorporated into TRON Code include existing character sets such as JIS X 0208 and GB 2312, as well as other character sources such as the Dai Kan-Wa Jiten, and some scripts not included in other encodings, such as Dongba symbols. Structure: Owing to the incorporation of entire character sets into TRON Code, many characters with equivalent semantics are encoded multiple times; for example, all of the kanji characters in the GT Typeface receive their own codepoints, despite many of them overlapping with other kanji character sets that are already included, such as JIS X 0208. One such example is the character 亜 (located in Unicode at U+4E9C), which appears in the JIS X 0208 region at 1-3021, the GT Typeface region at 2-2464, and the Dai Kan-Wa Jiten region at 8-2373. Structure: Control codes Bytes in the range 0x00 to 0x20 and 0x7F are reserved for use in control codes. Character codes Characters in each plane are divided into four zones. Each zone is allocated separately; for example, in plane 1 JIS X 0208 characters reside in Zone A starting at 0x2121, JIS X 0213 characters reside in both Zone A and Zone B, and GB 2312 characters reside in Zone C starting at 0x2180. Structure: Codepoints are notated as X-YYYY, where X is the plane number in decimal and YYYY is the codepoint in hexadecimal. Alternatively, the notation 0xNNYYYY can be used, where NN is the second byte of the language specifier code. A text format "&TNNYYYY;" can be used to denote a TRON codepoint in ASCII text, in a similar manner to numeric character references in HTML. Structure: Language specifier codes Language specifier codes are prefixed with 0xFE. Valid suffixes are 0x21 to 0x7E and 0x80 to 0xFE, many of which are unallocated. Special and escape codes Special codes are prefixed with 0xFF. Planes The following are the planes allocated for use in TRON Code, along with their corresponding language specifier codes and a description of the character sets included in each plane.
Planes 11 to 15 were originally allocated to store the Mojikyō character set, but disputes have led to the planes being excluded. All other planes up to 31 are currently reserved for future allocation.
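As a concrete illustration of the codepoint notations described above (X-YYYY, 0xNNYYYY, and "&TNNYYYY;"), here is a small conversion sketch. It assumes, for illustration only, that plane X maps to the language specifier suffix NN = 0x20 + X (so plane 1 corresponds to the specifier 0xFE21); the actual plane-to-specifier assignments should be taken from the TRON specification.

```python
# Sketch of converting between the TRON codepoint notations described above:
#   "X-YYYY"    plane number (decimal) and codepoint (hex), e.g. "1-3021"
#   0xNNYYYY    NN = second byte of the language specifier code
#   "&TNNYYYY;" ASCII text escape
# Assumption for illustration: plane X uses specifier suffix NN = 0x20 + X,
# which only covers suffixes in the 0x21-0x7E range.
def plane_to_suffix(plane: int) -> int:
    suffix = 0x20 + plane
    if not (0x21 <= suffix <= 0x7E):       # the text also allows 0x80-0xFE
        raise ValueError("plane outside the range this sketch handles")
    return suffix

def dash_to_hex(notation: str) -> str:
    """'1-3021' -> '0x213021'"""
    plane_str, code_str = notation.split("-")
    return f"0x{plane_to_suffix(int(plane_str)):02X}{int(code_str, 16):04X}"

def dash_to_text_escape(notation: str) -> str:
    """'1-3021' -> '&T213021;'"""
    plane_str, code_str = notation.split("-")
    return f"&T{plane_to_suffix(int(plane_str)):02X}{int(code_str, 16):04X};"

def encode_with_specifier(notation: str) -> bytes:
    """Byte sequence: language specifier (0xFE NN) followed by the two code bytes."""
    plane_str, code_str = notation.split("-")
    code = int(code_str, 16)
    return bytes([0xFE, plane_to_suffix(int(plane_str)), code >> 8, code & 0xFF])

# The JIS X 0208 character at 1-3021 from the example above:
print(dash_to_hex("1-3021"))           # 0x213021
print(dash_to_text_escape("1-3021"))   # &T213021;
print(encode_with_specifier("1-3021")) # b'\xfe!0!'
```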
**Morphism of finite type** Morphism of finite type: For a homomorphism A → B of commutative rings, B is called an A-algebra of finite type if B is finitely generated as an A-algebra. It is much stronger for B to be a finite A-algebra, which means that B is finitely generated as an A-module. For example, for any commutative ring A and natural number n, the polynomial ring A[x_1, ..., x_n] is an A-algebra of finite type, but it is not a finite A-module unless A = 0 or n = 0. Another example of a finite-type morphism which is not finite is C[t] → C[t][x,y]/(y^2 − x^3 − t). The analogous notion in terms of schemes is: a morphism f: X → Y of schemes is of finite type if Y has a covering by affine open subschemes V_i = Spec A_i such that f^{-1}(V_i) has a finite covering by affine open subschemes U_ij = Spec B_ij, with B_ij an A_i-algebra of finite type. One also says that X is of finite type over Y. Morphism of finite type: For example, for any natural number n and field k, affine n-space and projective n-space over k are of finite type over k (that is, over Spec k), while they are not finite over k unless n = 0. More generally, any quasi-projective scheme over k is of finite type over k. Morphism of finite type: The Noether normalization lemma says, in geometric terms, that every affine scheme X of finite type over a field k has a finite surjective morphism to affine space A^n over k, where n is the dimension of X. Likewise, every projective scheme X over a field has a finite surjective morphism to projective space P^n, where n is the dimension of X.
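A short sketch, using only the definitions above, of why the example C[t] → C[t][x,y]/(y^2 − x^3 − t) is of finite type but not finite:

```latex
% Write A = \mathbf{C}[t] and B = \mathbf{C}[t][x,y]/(y^2 - x^3 - t).
%
% Finite type: B is generated as an A-algebra by the classes of x and y,
% since every element of B is a polynomial in x and y with coefficients in A:
\[
  B = A[x,y]/(y^2 - x^3 - t).
\]
% Not finite: the fibre of Spec B -> Spec A over a closed point t = a is
\[
  \operatorname{Spec}\bigl(\mathbf{C}[x,y]/(y^2 - x^3 - a)\bigr),
\]
% an affine plane curve, hence one-dimensional. A finite morphism has
% zero-dimensional fibres, so B is not finite over A; equivalently, B is not
% finitely generated as an A-module (for instance, x is not integral over A).
```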
**Smuxi** Smuxi: Smuxi is a cross-platform IRC client for the GNOME desktop inspired by Irssi. It pioneered the concept of separating the frontend client from the backend engine, which manages connections to IRC servers, within a single graphical application. Architecture: Smuxi is based on the client–server model: The core application lives in the Smuxi back-end server, which is connected to the Internet around the clock. The user interacts with one or more Smuxi front-end clients, which are connected to the Smuxi back-end server. This way, the Smuxi back-end server can maintain connections to IRC servers even when all Smuxi front-end clients have been closed. The combination of screen and Irssi served as an example of this architecture. The Quassel IRC client has a similar design. Architecture: Smuxi also supports the regular single-application mode. This behaves like a typical IRC client with no separation of back-end and front-end. It utilizes a local IRC engine that is used by the local front-end client. Features: Smuxi supports nick colors which are identical across channels and networks, a Caret Mode as seen in Firefox that allows users to navigate through the messages using the keyboard, theming with colors and fonts, configurable tray-icon support, optional stripping of colours and formatting, and convenience features like CTCP support, channel search and nickname completion. It has a tabbed document interface (a form of tabbed user interface) and support for multiple servers. Smuxi can attach to a local backend engine or a remote engine using the Engine drop-down menu (similar to screen used with Irssi). It also includes, in client–server operation, a visual marker showing the user's last activity in an open session, and ignore filtering. Distribution: Smuxi can be found in many major free operating systems, such as Debian GNU/Linux (including Debian GNU/kFreeBSD), Ubuntu, Gentoo Linux, Arch Linux, the openSUSE Community Repository, Frugalware Linux, Slackware, and FreeBSD. Smuxi is also available for Microsoft Windows XP, Vista, 7, 8.x and 10 (32-bit and 64-bit architectures). Smuxi is available for Mac OS X starting with the 0.8.9 release. Reception: Smuxi was selected in "Hot Picks" by Linux Format Magazine in March 2009. TuxRadar wrote: If you're looking for IRC clients you're spoilt for choice with many distributions, as there are plenty to choose from. Some are text-based (IRSSI), some integrate well with instant messenger applications (Pidgin) while others are simply IRC clients through and through. Smuxi falls into the latter category, and we're glad it does, because it's a good little IRC client. In Tom's Hardware, Adam Overa wrote: smuxi is a lightweight client with a slim, yet fully customizable interface. [...] smuxi allows the user to completely change the default interface, moving or removing just about any aspect. In LinuxToday, Joe Brockmeier wrote: If you spend much time with any open source project, you're probably going to be spending time in IRC. If you want to make sure you don't miss a minute of your project's conversations, you'll want to check out Smuxi.
**Small nucleolar RNA SNORD101** Small nucleolar RNA SNORD101: In molecular biology, snoRNA U101 (also known as SNORD101) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a guide RNA. Small nucleolar RNA SNORD101: snoRNA U101 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. U101 was identified by computational screening of the introns of ribosomal protein genes for conserved C/D box sequence motifs, and its expression was experimentally verified by northern blotting. snoRNA U101 resides in intron 3 of the ribosomal protein S12 gene. U101 shares the same host gene with the C/D box snoRNA HBII-429 and the H/ACA box snoRNA ACA33. There is currently no predicted methylation target for U101.
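The screening mentioned above looked for conserved C/D box motifs in ribosomal protein introns. The sketch below shows the simplest possible form of such a motif scan, using the C box (UGAUGA) and D box (CUGA) sequences named in this article; the example sequence, the length window, and everything else about it are illustrative assumptions rather than the actual published pipeline, which also scores features such as terminal stems and antisense elements.

```python
# Minimal sketch of a C/D box motif screen: find a C box (UGAUGA) with a D box
# (CUGA) downstream at a plausible snoRNA-sized distance. Real snoRNA screens
# apply many more criteria; this only locates the two boxes.
import re

C_BOX = "UGAUGA"
D_BOX = "CUGA"

def find_cd_box_candidates(rna, min_len=60, max_len=200):
    """Yield (start, end) spans bounded by a C box and a downstream D box."""
    rna = rna.upper().replace("T", "U")
    for c in re.finditer(C_BOX, rna):
        for d in re.finditer(D_BOX, rna):
            length = d.end() - c.start()
            if min_len <= length <= max_len:
                yield c.start(), d.end()

# Illustrative intron fragment (synthetic, not the actual RPS12 intron 3):
intron = "GC" + "UGAUGA" + "A" * 80 + "CUGA" + "GC"
print(list(find_cd_box_candidates(intron)))   # [(2, 92)]
```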