**Carbon nanotube nanomotor**
Carbon nanotube nanomotor:
A carbon nanotube nanomotor is a device that generates linear or rotational motion using carbon nanotube(s) as the primary component. Nature already has some of the most efficient and powerful kinds of nanomotors, and some of these natural biological nanomotors have been re-engineered to serve desired purposes. However, such biological nanomotors are designed to work in specific environmental conditions (pH, liquid medium, sources of energy, etc.). Laboratory-made nanotube nanomotors, on the other hand, are significantly more robust and can operate in diverse environments, including varied frequencies, temperatures, media and chemical environments. The vast differences in the dominant forces and criteria between the macroscale and the micro/nanoscale offer new avenues to construct tailor-made nanomotors. The various beneficial properties of carbon nanotubes make them the most attractive material on which to base such nanomotors.
History:
Just fifteen years after the world's first micrometer-sized motor was made, Alex Zettl led his group at the University of California, Berkeley to construct the first nanotube nanomotor in 2003. A few concepts and models have been spun off since, including the nanoactuator driven by a thermal gradient and the conceptual electron windmill, both of which were revealed in 2008.
Size effects:
Electrostatic forces Coulomb's law states that the electrostatic force between two charged objects is inversely proportional to the square of the distance between them. Hence, as the distance is reduced below a few micrometers, a large force can be generated from seemingly small charges on two bodies. However, charge scales quadratically with size, and so the electrostatic force also scales quadratically, as the following relations show:

$$C = \frac{\varepsilon A}{d} \propto L, \qquad E \propto L^0, \qquad V = E \cdot L \propto L, \qquad Q = CV \propto L^2, \qquad F = \tfrac{1}{2}\varepsilon A E^2 \propto L^2$$

Alternatively, directly from Coulomb's law:

$$F = \frac{Q_1 Q_2}{d^2} \propto \frac{L^2 \cdot L^2}{L^2} \propto L^2$$

Here A is area, C is capacitance, F is the electrostatic force, E is the electric field (taken as scale-invariant), L is the characteristic length, V is voltage and Q is charge. Despite its scaling behavior, the electrostatic force is one of the major mechanisms of sensing and actuation in the field of microelectromechanical systems (MEMS) and is the backbone of the working mechanism of the first NEMS nanomotor. The quadratic scaling is alleviated by increasing the number of units generating the electrostatic force, as seen in the comb drives of many MEMS devices.
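The quadratic force scaling can be checked numerically. Below is a minimal Python sketch; the plate geometry and field value are illustrative assumptions, not parameters of any actual device:

```python
# Parallel-plate electrostatic scaling check.
# Geometry and field strength are illustrative assumptions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_force(side, e_field):
    """F = (1/2) * eps0 * A * E^2 on one plate of a parallel-plate capacitor."""
    return 0.5 * EPS0 * side * side * e_field**2

E = 1e8  # field held constant across scales (E ~ L^0)
for scale in (1.0, 1e-3, 1e-6):          # macro -> micro -> nano
    side, gap = 1e-2 * scale, 1e-3 * scale
    print(f"L x{scale:g}: V = {E * gap:.3g} V, F = {plate_force(side, E):.3g} N")
# Each 1000-fold reduction in L cuts V 1000-fold and F a million-fold (F ~ L^2).
```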
Size effects:
Friction Just as the electrostatic force, the frictional force scales quadratically with size, F ~ L². Friction is an ever-present problem regardless of the scale of a device, and it becomes all the more prominent when a device is scaled down. At the nanoscale it can wreak havoc if not accounted for, because the parts of a nano-electro-mechanical systems (NEMS) device are sometimes only a few atoms thick. Furthermore, such NEMS devices typically have a very large surface area-to-volume ratio. Surfaces at the nanoscale resemble a mountain range, where each peak corresponds to an atom or a molecule; friction at the nanoscale is proportional to the number of atoms that interact between two surfaces. Hence, friction between surfaces that are perfectly smooth at the macroscale is actually akin to large rough objects rubbing against each other. In the case of nanotube nanomotors, however, the intershell friction in multi-walled nanotubes (MWNTs) is remarkably small. Molecular dynamics studies show that, with the exception of small peaks, the frictional force remains almost negligible for all sliding velocities until a special sliding velocity is reached. Simulations relating the sliding velocity, induced rotation and inter-shell frictional force to the applied force explain this low inter-wall friction. Contrary to macroscale expectations, the speed at which an inner tube travels within an outer tube does not follow a linear relationship with the applied force. Instead, the speed remains constant (a plateau) despite increasing applied force, occasionally jumping in value to the next plateau. No real rotation is noticed in nonchiral inner tubes; in chiral tubes a true rotation is noticed, and the angular velocity jumps to plateaus along with the jumps in the linear velocity. These plateaus and jumps are a natural outcome of frictional peaks at growing velocity: the stable (rising) side of a peak leads to a plateau, the dropping (unstable) side to a jump. The peaks, which correspond to the speed plateaus, occur due to parametric excitation of vibrational modes in the walls of the tubes by the sliding of the inner tube. The sudden rises in sliding velocity arise from a resonance condition between a frequency dependent on the inter-tube corrugation period and particular phonon frequencies of the outer tube that happen to possess a group velocity approximately equal to the sliding velocity.
First NEMS nanomotor:
The first nanomotor can be thought of as a scaled-down version of a comparable microelectromechanical systems (MEMS) motor. The nanoactuator consists of a gold plate rotor rotating about the axis of a multi-walled nanotube (MWNT). The ends of the MWNT rest on a SiO2 layer, forming the two electrodes at the contact points. Three fixed stator electrodes (two visible 'in-plane' stators and one 'gate' stator buried beneath the surface) surround the rotor assembly. Four independent voltage signals (one to the rotor and one to each stator) are applied to control the position, velocity and direction of rotation. Recorded angular velocities provide a lower bound of 17 Hz during complete rotations, although the device is capable of operating at much higher frequencies.
First NEMS nanomotor:
Fabrication The MWNTs are synthesized by the arc-discharge technique, suspended in 1,2-dichlorobenzene and deposited on degenerately doped silicon substrates topped with 1 µm of SiO2. The MWNT can be aligned to pre-made markings on the substrate using an atomic force microscope (AFM) or a scanning electron microscope (SEM). The rotor, electrodes and 'in-plane' stators are patterned by electron beam lithography with an appropriately masked photo-resist. Gold with a chromium adhesion layer is thermally evaporated, lifted off in acetone and then annealed at 400 °C to ensure better electrical and mechanical contact with the MWNT. The rotor measures 250–500 nm on a side. An HF etch is then used to remove sufficient thickness (500 nm) of the SiO2 to make room for the rotor when it rotates. The Si substrate serves as the gate stator. At this point the MWNT displays a very high torsional spring constant (10⁻¹⁵ to 10⁻¹³ N m, with resonant frequencies in the tens of megahertz), preventing large angular displacements. To overcome this, one or more outer MWNT shells are compromised or removed in the region between the anchors and the rotor plate. One simple way to accomplish this is to successively apply very large stator voltages (around 80 V DC) that cause mechanical fatigue and eventually shear the outer shells of the MWNT. An alternative method involves reducing the outermost MWNT tubes to shorter, wider concentric nanotubes beneath the rotor plate. The shorter nanotube(s) are fabricated using electrically driven vaporization (EDV), a variant of the electrical-breakdown technique. Passing current between the two electrodes typically results in failure of the outermost shell on only one side of the nanotube. Current is therefore passed between one electrode and the center of the MWNT, causing the outermost shell to fail between that electrode and the center. The process is repeated on the opposite side, forming a short concentric nanotube that behaves like a low-friction bearing along the longer tube.
First NEMS nanomotor:
Arrays of nanoactuators Because the output generated by a single nanoactuator is minuscule, arrays of such actuators are necessary to accomplish larger tasks. Conventional methods like chemical vapor deposition (CVD) allow the exact placement of nanotubes by growing them directly on the substrate, but such methods cannot produce very high-quality MWNTs. Moreover, CVD is a high-temperature process that would severely limit compatibility with other materials in the system. Instead, a Si substrate is coated with electron beam resist and soaked in acetone to leave only a thin polymer layer. The substrate is selectively exposed to a low-energy electron beam of an SEM, which activates the adhesive properties of the polymer layer; this forms the basis of the targeting method. The alignment method exploits the surface velocity acquired by a fluid as it flows off a spinning substrate. MWNTs are suspended in ortho-dichlorobenzene (ODCB) by ultrasonication in a bath sonicator, which separates most MWNT bundles into individual MWNTs. Drops of this suspension are then pipetted one by one onto the center of a silicon substrate mounted on a spin coater rotating at 3000 rpm. Each subsequent drop of the suspension is pipetted only after the previous drop has completely dried, ensuring a larger density and better alignment of the MWNTs (90% of the MWNTs over 1 µm long lie within 1°). Standard electron beam lithography is used to pattern the remaining components of the nanoactuators.
First NEMS nanomotor:
Arc-discharge evaporation technique This technique is a variant of the standard arc-discharge technique used for the synthesis of fullerenes in an inert gas atmosphere. As Figure 1.3 shows, the experiment is carried out in a reaction vessel containing an inert gas such as helium or argon flowing at a constant pressure. A potential of around 18 V is applied across two graphite electrodes (anode and cathode diameters of 6 mm and 9 mm, respectively) separated by a short distance, usually 1–4 mm, within this chamber. The current (usually 50–100 A) passed through the electrodes to ensure nanotube formation depends on the electrode dimensions, the separation distance and the inert gas used. As a result, carbon atoms are ejected from the anode and deposited onto the cathode, shrinking the mass of the anode and increasing the mass of the cathode. The black carbonaceous deposit (a mixture of nanoparticles and nanotubes in a ratio of 1:2) grows on the inside of the cathode, while a hard grey metallic shell forms on the outside. The total yield of nanotubes as a proportion of the starting graphitic material peaks at a pressure of 500 torr, at which point 75% of the graphite rod consumed is converted to nanotubes. The nanotubes formed range from 2 to 20 nm in diameter and from a few to several micrometers in length. This method offers several advantages over techniques such as laser ablation and chemical vapor deposition: fewer structural defects (due to the high growth temperature), better electrical, mechanical and thermal properties, and high production rates (several hundred milligrams in ten minutes).
First NEMS nanomotor:
Electrical-breakdown technique Large-scale synthesis of carbon nanotubes typically yields a randomly varied mix of different types of carbon nanotubes; some may be semiconducting while others may be metallic in their electrical properties. Most applications require a specific type of nanotube, and the electrical-breakdown technique provides a means of separating and selecting the desired type. Carbon nanotubes are known to withstand very large current densities, up to 10⁹ A/cm², partly due to the strong sigma bonds between carbon atoms. However, at sufficiently high currents nanotubes fail, primarily due to rapid oxidation of the outermost shell. This results in a partial conductance drop that becomes apparent within a few seconds. Applying an increased bias displays multiple independent, stepwise drops in conductance (figure 1.4) resulting from the sequential failure of carbon shells. Current in a MWNT typically travels in the outermost shell due to the direct contact between this shell and the electrodes. This controlled destruction of shells, without disturbing the inner layers of the MWNT, permits the effective separation of the nanotubes.
First NEMS nanomotor:
Principle The rotor is made to rotate by electrostatic actuation. Out-of-phase sinusoidal voltages at a common frequency are applied to the two in-plane stators S1 and S2, a doubled-frequency signal to the gate stator S3, and a DC offset voltage to the rotor plate R:

$$S_1 = V_0 \sin(\omega t), \qquad S_2 = V_0 \sin(\omega t - \pi), \qquad S_3 = V_0 \sin(2\omega t + \pi/2), \qquad R = -V_0$$

By the sequential application of these asymmetric stator voltages (less than 5 V), the rotor plate is drawn to successive stators, making the plate complete full rotations. The close proximity between the stators and the rotor plate is one reason why a large force is not required for electrostatic actuation. Reversing the bias causes the rotor to rotate in the opposite direction, as expected.
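The drive scheme can be tabulated with a few lines of Python. This is a minimal sketch; the amplitude and frequency are illustrative assumptions (the text only bounds the stator voltages below 5 V):

```python
import math

V0 = 2.5     # volts; illustrative, consistent with the < 5 V bound above
f = 1e3      # drive frequency in Hz; illustrative
OMEGA = 2 * math.pi * f

def drive_voltages(t):
    """Return (S1, S2, S3, R) control voltages at time t."""
    s1 = V0 * math.sin(OMEGA * t)
    s2 = V0 * math.sin(OMEGA * t - math.pi)          # out of phase with S1
    s3 = V0 * math.sin(2 * OMEGA * t + math.pi / 2)  # doubled frequency (gate)
    return s1, s2, s3, -V0                           # R is a DC offset

for k in range(5):  # five samples over one drive period
    t = k / (4 * f)
    print([round(v, 3) for v in drive_voltages(t)])
```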
First NEMS nanomotor:
Applications The rotating metal plate could serve as a mirror for ultra-high-density optical sweeping and switching devices as the plate is at the limit of visible light focusing. An array of such actuators, each serving as a high frequency mechanical filter, could be used for parallel signal processing in telecommunications.
The plate could serve as a paddle for inducing or detecting fluid motion in microfluidic applications. It could serve as a bio-mechanical element in biological systems, a gated catalyst in wet chemistry reactions or as a general sensor element.
A charged oscillating metal plate could be used as a transmitter of electromagnetic radiation.
Thermal gradient driven nanotube actuators:
The nanoactuator, as shown in Figure 2.1, comprises two electrodes connected by a long MWNT. A gold plate acting as the cargo is attached to a shorter, wider concentric nanotube. The cargo moves towards the cooler electrode (Figure 2.2) due to the thermal gradient induced in the longer nanotube by the high current passed through it. The maximum velocity was estimated at 1 µm/s, which is comparable to the speeds attained by kinesin biomotors.
Thermal gradient driven nanotube actuators:
Fabrication The MWNTs are fabricated using the standard arc-discharge evaporation process and deposited on an oxidized silicon substrate. The gold plate in the center of the MWNT is patterned using electron-beam lithography and Cr/Au evaporation. During the same process, the electrodes are attached to the nanotube. Finally, the electrical-breakdown technique is used to selectively remove a few outer walls of the MWNT. As in the Zettl group's nanoactuator, this enables low-friction rotation and translation of the shorter nanotube along the axis of the longer tube. The electrical breakdown does not remove the tube(s) below the cargo, possibly because the metal cargo absorbs the heat generated in the portion of the tube in its immediate vicinity, delaying or even preventing tube oxidation in this part.
Thermal gradient driven nanotube actuators:
Principle The interaction between the longer and shorter tubes generates an energy surface that confines the motion to specific tracks of translation and rotation. The degree of translational and rotational motion of the shorter tube is highly dependent on the chiralities of the two tubes, as shown in Figure 2.3. Motion in the nanoactuator displayed a proclivity of the shorter tube to follow a path of minimum energy. This path can either have a roughly constant energy or present a series of barriers. In the former case, friction and the vibrational motion of atoms can be neglected, whereas a stepwise motion is expected in the latter scenario.
Thermal gradient driven nanotube actuators:
Stepwise motion The stepwise motion can be explained by the existence of periodic energy barriers to the relative motion between the longer and shorter tubes. For a given pair of nanotubes, the ratio of the step in rotation to the step in translation is typically a constant whose value depends on the chirality of the nanotubes. The energy of such barriers can be estimated from the temperature in the nanotube, a lower bound for which is the melting temperature of gold (1300 K), obtained by noting that the gold plate melts (Figure 2.4) into a spherical structure as current is passed through the nanomotor. The motion rate Γ can be written as an Arrhenius-type function of the attempt frequency ω, the Boltzmann constant k, the barrier height U and temperature T:

$$\Gamma = \omega \, e^{-U/kT}$$

Using an estimate of the attempt frequency ω together with an approximation involving the mass m of the cargo and the contact area a₀², the barrier height is estimated as 17 μeV per atom.
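Inverting the rate expression gives the barrier height directly, U = kT ln(ω/Γ). The following minimal Python sketch illustrates the estimate; the attempt frequency and motion rate used here are illustrative assumptions, not values from the experiment:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def barrier_height(attempt_freq_hz, rate_hz, temp_k):
    """U in eV, from inverting Gamma = omega * exp(-U / (k*T))."""
    return K_B_EV * temp_k * math.log(attempt_freq_hz / rate_hz)

# Illustrative inputs: a THz-scale phonon attempt frequency, a ~1 Hz
# stepping rate, and T = 1300 K (the melting point of gold, the lower
# bound inferred from the melted cargo plate).
u_total = barrier_height(attempt_freq_hz=1e12, rate_hz=1.0, temp_k=1300.0)
print(f"total barrier ~ {u_total:.2f} eV")
# Dividing by the number of atoms in contact would give a per-atom
# figure comparable in spirit to the 17 ueV/atom estimate above.
```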
Thermal gradient driven nanotube actuators:
Mechanism for actuation Many proposals were made to explain the driving mechanism behind the nanoactuator. The high current (0.1 mA) required to drive the actuator is likely to cause sufficient dissipation to clean the surface of contaminants, ruling out the possibility of contaminants playing a major role. The possibility of electromigration, in which electrons move atomic impurities via momentum transfer during collisions, was also ruled out because reversing the current direction did not affect the direction of displacement. Similarly, the rotational motion could not have been caused by a magnetic field induced by the current passing through the nanotube, because the rotation could be either left- or right-handed depending on the device. A stray electric field effect could not be the driving factor either, because the metal plate stayed immobile in highly resistive devices even under a large applied potential. The thermal gradient in the nanotube provides the best explanation for the driving mechanism.
Thermal gradient driven nanotube actuators:
Thermal gradient induced motion The induced motion of the shorter nanotube is explained as the reverse of the heat dissipation that occurs in friction, wherein the sliding of two objects in contact dissipates some of the kinetic energy as phononic excitations generated by the interface corrugation. A thermal gradient in a nanotube causes a net current of phononic excitations traveling from the hotter region to the cooler region. The interaction of these phononic excitations with mobile elements (the carbon atoms in the shorter nanotube) drives the motion of the shorter nanotube, which explains why it moves towards the cooler electrode. Changing the direction of the current has no effect on the shape of the thermal gradient in the longer nanotube; hence, the direction of movement of the cargo is independent of the direction of the applied bias. The direct dependence of the velocity of the cargo on the temperature of the nanotube is inferred from the fact that the velocity of the cargo decreases exponentially as the distance from the midpoint of the long nanotube increases.
Thermal gradient driven nanotube actuators:
Shortcomings The temperatures and thermal gradients that the MWNTs are subjected to are very high. On one hand, the high thermal gradient seems to have a highly detrimental effect on the lifetime of such nanoactuators; on the other, experiments show that the displacement of the shorter tube is directly proportional to the thermal gradient (see Figure 2.5). A compromise therefore needs to be reached to optimize the thermal gradient. The dimensions of the movable nanotube are directly related to the energy barrier height. Although the current model excites multiple phonon modes, selective phonon mode excitation would enable lowering the phonon bath temperature.
Thermal gradient driven nanotube actuators:
Applications Pharmaceutical/Nanofluidic – a thermal gradient could be used to drive fluids within the nanotubes or in nanofluidic devices, as well as for drug delivery by nanosyringes.
Running bio-engineered nanopores using heat generated from adenosine triphosphate (ATP) molecules.
Electron windmill:
Structure As figure 3.1 shows, the nanomotor consists of a double-walled CNT (DWNT) formed from an achiral (18,0) outer tube, clamped to external gold electrodes, and a narrower chiral (6,4) inner tube. The central portion of the outer tube is removed using the electrical-breakdown technique to expose the free-to-rotate inner tube. A related device, the nanodrill, also comprises an achiral outer nanotube attached to a gold electrode, but its inner tube is connected to a mercury bath.
Electron windmill:
Principle Conventional nanotube nanomotors make use of static forces, including elastic, electrostatic, friction and van der Waals forces. The electron windmill model instead uses a new "electron-turbine" drive mechanism that obviates the need for the metallic plates and gates required by the nanoactuators above. When a DC voltage is applied between the electrodes, a "wind" of electrons is produced from left to right. The incident electron flux in the outer achiral tube initially possesses zero angular momentum but acquires a finite angular momentum after interacting with the inner chiral tube. By Newton's third law, this flux produces a tangential force (and hence a torque) on the inner nanotube, causing it to rotate and giving this model its name: the "electron windmill". For moderate voltages, the tangential force produced by the electron wind greatly exceeds the associated frictional forces.
Electron windmill:
Applications Some of the main applications of the electron windmill include: A voltage pulse could cause the inner element to rotate by a calculated angle, making the device behave as a switch or a nanoscale memory element.
Modification of the electron windmill to construct a nanofluidic pump by replacing the electrical contacts with reservoirs of atoms or molecules under the influence of an applied pressure difference.
**Rob Horne (professor)**
Rob Horne (professor):
Rob Horne is Professor of Behavioural Medicine at the School of Pharmacy, University College London (UCL). In September 2006, he founded the Centre for Behavioural Medicine at UCL, which he continues to lead. Horne was designated a Fellow of the Royal College of Physicians Faculty of Pharmaceutical Medicine in 2013 and is a founding fellow of the Royal Pharmaceutical Society of Great Britain. He was appointed as a National Institute for Health Research (NIHR) Senior Investigator in 2011. He is an internationally recognised expert in self-management of chronic illness and adherence to medications.
Biography:
Career Horne qualified as a pharmacist and has a PhD in medical psychology from King's College London.
Biography:
Before joining UCL, Horne was Professor of Psychology in Health Care and Director of the Centre for Health Care Research at the University of Brighton. Horne founded and is Director of the Centre for Behavioural Medicine, which is part of the UCL School of Pharmacy. The overall aim of the Centre is to make healthcare more efficient by understanding and addressing the psychological and behavioural factors explaining variation in response to treatment.
Biography:
Academic research Horne's academic research focuses on the role of psychological and behavioural factors in explaining the variation in patients’ response to medication. He has developed a range of tools and models for assessing patients’ perspectives on illness and treatment, e.g. the Beliefs about Medicines Questionnaire (BMQ) and the Medication Adherence Report Scale (MARS), as well as frameworks for understanding treatment-related behaviours with a particular focus on adherence to medication, e.g. the Necessity-Concerns Framework and the Perceptions and Practicalities approach.
Biography:
To date, these tools have been validated in the following long-term medical conditions: renal dialysis; renal transplantation; asthma; cancer; coronary heart disease; hypertension; diabetes; HIV/AIDS; haemophilia; depression; bipolar disorder; rheumatoid arthritis; inflammatory bowel disease and also for newly prescribed medications in primary care.
His current research focuses on the development of theory-based interventions to support informed choice and optimal adherence to medication or other treatments in chronic illness. Other research interests include emotion and health and the placebo effect.
Over the past decade, his research has generated over 140 peer-reviewed publications and book chapters, and over £7 million in grants.
Health policy contributions Horne and his research team regularly contribute to UK and international reports and guidelines on adherence, and to consultancy for national charities, the NHS and commercial health organisations.
Professor Horne’s recent contributions to health policy include adherence guidelines for the National Institute for Health and Clinical Excellence (NICE) published in 2009 and a report for the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO) published in 2005.
Application of research In November 2011, Horne co-founded a UCL Business spinout company. The company, called Spoonful of Sugar, applies Horne's research to behavioural change consultancy, evidence-based adherence support, validated behavioural research and perspectives mapping, and personalised communications.
Medical innovation Horne is an Academic Fellow of the Centre for the Advancement of Sustainable Medical Innovation (CASMI), a partnership between Oxford University and UCL created to develop new models for medical innovation. In November 2012, Horne was appointed as UCL's academic lead for CASMI.
**LINA (software)**
LINA (software):
LINA was a piece of open-source software that enabled users to run applications compiled for Linux under Windows and Mac OS X with a native look and feel. Version 1.00 beta1 was released in October 2009 and was available at the Open Lina web site. However, that domain is now up for sale.
Release:
The latest binary version, still a beta, was released on October 15, 2009.
As the tool was open sourced under the terms of the GNU General Public License v2, its source code had been made available since July 19, 2007.
**Raschen bag**
Raschen bag:
A Raschen bag is a bag of ballast that is placed underneath the baseplate of a mortar to improve its accuracy when used on snow or other soft ground conditions. Raschen bags are named after Colonel Dan Raschen, Royal Engineers, who invented but did not name them.
**Delaunay refinement**
Delaunay refinement:
In mesh generation, Delaunay refinement comprises algorithms based on the principle of adding Steiner points to the geometry of an input to be meshed, in a way that causes the Delaunay triangulation or constrained Delaunay triangulation of the augmented input to meet the quality requirements of the meshing application. Delaunay refinement methods include those by Chew and by Ruppert.
Chew's second algorithm:
Chew's second algorithm takes a piecewise linear system (PLS) and returns a constrained Delaunay triangulation consisting only of quality triangles, where quality is defined by the minimum angle in a triangle. Developed by L. Paul Chew for meshing surfaces embedded in three-dimensional space, Chew's second algorithm has been adopted as a two-dimensional mesh generator due to practical advantages over Ruppert's algorithm in certain cases, and it is the default quality mesh generator implemented in the freely available Triangle package. Chew's second algorithm is guaranteed to terminate and produce a mesh graded to the local feature size with minimum angle up to about 28.6 degrees. The algorithm begins with a constrained Delaunay triangulation of the input vertices. At each step, the circumcenter of a poor-quality triangle is inserted into the triangulation, with one exception: if the circumcenter lies on the opposite side of an input segment from the poor-quality triangle, the midpoint of the segment is inserted instead. Moreover, any previously inserted circumcenters inside the diametral ball of the original segment (before it is split) are removed from the triangulation.
Chew's second algorithm:
Circumcenter insertion is repeated until no poor-quality triangles exist.
Ruppert's algorithm:
Ruppert's algorithm takes a planar straight-line graph (or, in dimensions higher than two, a piecewise linear system) and returns a conforming Delaunay triangulation of only quality triangles. A triangle is considered poor-quality if it has a circumradius-to-shortest-edge ratio larger than some prescribed threshold. Discovered by Jim Ruppert in the early 1990s, "Ruppert's algorithm for two-dimensional quality mesh generation is perhaps the first theoretically guaranteed meshing algorithm to be truly satisfactory in practice."
Motivation When doing computer simulations such as computational fluid dynamics, one starts with a model such as a 2D outline of a wing section.
Ruppert's algorithm:
The input to a 2D finite element method needs to be in the form of triangles that fill all space, and each triangle to be filled with one kind of material – in this example, either "air" or "wing".
Long, skinny triangles cannot be simulated accurately.
The simulation time is generally proportional to the number of triangles, and so one wants to minimize the number of triangles, while still using enough triangles to give reasonably accurate results – typically by using an unstructured grid.
The computer uses Ruppert's algorithm (or some similar meshing algorithm) to convert the polygonal model into triangles suitable for the finite element method.
Algorithm The algorithm begins with a Delaunay triangulation of the input vertices and then consists of two main operations.
The midpoint of a segment with a non-empty diametral circle is inserted into the triangulation.
The circumcenter of a poor-quality triangle is inserted into the triangulation, unless this circumcenter lies in the diametral circle of some segment. In this case, the encroached segment is split instead. These operations are repeated until no poor-quality triangles exist and no segment is encroached.
Ruppert's algorithm:
Pseudocode
function Ruppert(points, segments, threshold) is
    T := DelaunayTriangulation(points)
    Q := the set of encroached segments and poor quality triangles
    while Q is not empty:                 // The main loop
        if Q contains a segment s:
            insert the midpoint of s into T
        else Q contains poor quality triangle t:
            if the circumcenter of t encroaches a segment s:
                add s to Q
            else:
                insert the circumcenter of t into T
            end if
        end if
        update Q
    end while
    return T
end Ruppert
Ruppert's algorithm:
Practical usage Without modification Ruppert's algorithm is guaranteed to terminate and generate a quality mesh for non-acute input and any poor-quality threshold less than about 20.7 degrees. To relax these restrictions various small improvements have been made. By relaxing the quality requirement near small input angles, the algorithm can be extended to handle any straight-line input. Curved input can also be meshed using similar techniques.
Ruppert's algorithm:
Ruppert's algorithm can be naturally extended to three dimensions; however, its output guarantees are somewhat weaker due to sliver-type tetrahedra.
Ruppert's algorithm:
An extension of Ruppert's algorithm in two dimensions is implemented in the freely available Triangle package. Two variants of Ruppert's algorithm in this package are guaranteed to terminate for a poor-quality threshold of about 26.5 degrees. In practice these algorithms are successful for poor-quality thresholds over 30 degrees. However, examples are known which cause the algorithm to fail with a threshold greater than 29.06 degrees.
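For readers who want to experiment, the refinement variants discussed above are exposed through the Triangle package's quality switches. A minimal sketch using the Python bindings to Triangle (installable as the `triangle` package; the square geometry here is an arbitrary example):

```python
# Quality meshing of a planar straight-line graph (PSLG) with Triangle.
import triangle as tr

# A unit square: four vertices and the four boundary segments.
square = {
    'vertices': [[0, 0], [1, 0], [1, 1], [0, 1]],
    'segments': [[0, 1], [1, 2], [2, 3], [3, 0]],
}

# 'p' treats the input as a PSLG; 'q25' requests refinement until no
# angle is below 25 degrees, within the termination range noted above.
mesh = tr.triangulate(square, 'pq25')
print(len(mesh['triangles']), 'triangles in the refined mesh')
```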
**Ischiopatellar dysplasia**
Ischiopatellar dysplasia:
Ischiopatellar dysplasia is a rare autosomal dominant disorder characterized by hypoplasia of the patellae as well as other bone anomalies, especially of the pelvis and feet. It is also known as small patella syndrome; earlier synonyms include Scott-Taor syndrome; coxo-podo-patellar syndrome; patella aplasia, coxa vara, tarsal synostosis; congenital coxa vara, patella aplasia and tarsal synostosis; and ischiocoxopodopatellar syndrome.
Signs and symptoms:
Individuals affected by ischiopatellar dysplasia commonly have abnormalities of the patella and pelvic girdle, such as absent or delayed patellar and ischial ossification as well as infra-acetabular axe-cut notches. Patellae are typically absent or small in these individuals; when present, they are small and laterally displaced or dislocated. In addition, abnormalities in other parts of the skeleton and dysmorphic features are common in those affected. Other features identified in patients with ischiopatellar dysplasia include foot anomalies, specifically flat feet (pes planus), syndactyly of the toes, short fourth and fifth toes, and a large gap between the first and second toes, as well as femur anomalies, cleft palate, and craniofacial dysmorphisms. Ischiopubic junction ossification can be absent, delayed, or abnormal. Other findings include hallux varus, brachymetatarsia affecting the fourth and fifth metatarsals, flat feet, and an elongated medial patellofemoral ligament. Less common findings include micrognathia, cleft palate, frontal bossing and a prominent nose. Complications include infancy-onset recurrence of luxations, knee pain, impaired ability to run and ride bicycles, and late-onset gonarthrosis, although it is not uncommon for cases to be asymptomatic.
Causes:
Ischiopatellar dysplasia is often considered a familial condition. It has been mapped to a 5.6 cM region on chromosome 17q22. Mutations in the TBX4 (T-box protein 4) gene on chromosome 17 have been found to cause ischiopatellar dysplasia, owing to the essential role the transcription factor TBX4 plays in lower limb development.
Diagnosis:
Ischiopatellar dysplasia is usually identified through radiographic evidence, since its characteristic changes are most notable in radiographic tests indicating delayed bone age or absent ossification. A full skeletal survey should be performed on any patient with absent or hypoplastic patellae, since they could potentially have ischiopatellar dysplasia. Magnetic resonance imaging (MRI) is especially helpful in the diagnosis of ischiopatellar syndrome and is recommended when an individual affected by ischiopatellar dysplasia has a traumatic injury to the knee. Around 50–70 cases have been described in the medical literature. Diagnosis is made through genetic testing and radiography.
History:
Ischiopatellar dysplasia is sometimes referred to as Scott-Taor syndrome after the researchers who first described ischiopatellar dysplasia as they recognized it in a family as an autosomal dominant disorder in 1979. This finding was important as they were the first to note that it was a benign disorder that is separate from the more severe nail-patella syndrome. Other common names for ischiopatellar syndrome are small patella syndrome (SPS), since the patellae are often small or absent in patients who have this syndrome, and coxo-podo-patellaire syndrome.
**Cathedral Peak**
Cathedral Peak:
Cathedral Peak may be any of several mountains, typically those with steep sides and towers reminiscent of a cathedral. In the United States alone, the USGS identifies 17 summits named "Cathedral Peak".
In other countries: Cathedral Peak (South Africa), a summit in the Drakensberg; Cathedral Peak, Karakoram, a peak in the Karakoram.
**Fish protein powder**
Fish protein powder:
Fish protein powder (FPP) describes a food grade powder product designated primarily for human consumption applications. It differs significantly from fish meal products which are designated for animal feed applications. Fish protein powders have various sanitary processing, purity and functional characteristics which establish them as human food ingredients. Production plants registered for the USA market are located in Peru and France.
History:
Historically, the fish processing methods used for human consumption have been fresh, canned, frozen, smoked or dehydrated, all of which yield a whole food rather than an ingredient in other foods. Additionally, an industrial fish industry exists in which whole fish and by-products from fish processing are cooked and dehydrated to form a product termed fish meal, which is used for animal feed, pet food and fish feed. With the evolution of refining and processing technology and expanded research on the nutrition of fish proteins and peptides, a new industry has developed for the specific purpose of producing a fish protein powder for human consumption, with the intent of reaching new ingredient uses and markets. The FPP end product is now used in a variety of food ingredient applications including sports nutrition, food additives and supplements, all of which depend on the finished fish protein powder being hygienically safe and meeting sensory requirements of taste, odor and function in prepared foods.
Process:
Enzymatic hydrolysis, similar to the body's natural digestive process, provides the most efficient breakdown of the proteins into smaller fractions termed peptides, which can then be separated from the oil and non-digested proteins during liquid-phase processing. Subsequent steps of solids and oil removal through various mechanical separation techniques are required to create a final fish protein fraction with acceptable organoleptic properties for use in human food. Minimizing odor by eliminating fat and oil from the protein fraction, and separating the lowest-molecular-weight protein fractions from the larger fractions, together create a refined fish protein. Some processes utilize solvents to extract the fat, but these can pose handling hazards and leave potential residue issues. The final step in producing the product is typically spray drying, in which the liquid protein is atomized in a hot-air chamber, rapidly evaporating the water so that a fine powder falls to the bottom of the chamber for removal.
Categories:
The two basic categories used to classify fish protein powders are dependent on the levels of protein, fat, mineral and carbohydrate contained in the powder. The minerals are mostly naturally occurring, organic complexes of magnesium, calcium and phosphorus. The spray drying process may utilize other minerals and carbohydrates to improve flow characteristics of the final product thus altering the natural balance. Powders will all have a residual moisture content in the 4-8% range.
Categories:
Fish protein concentrate (FPC) - a powder concentrate with a medium level of protein (50-70%) that also contains some level of fat/oil (1-20%) in the powder form.
Fish protein isolate (FPi) - where the product contains less than 1% fat/oil and more than 90% protein.
New manufacturing techniques are also producing hybrid FPi products where the fat/oil content is very low (<0.3%) with protein levels in the 80% range. The hybrid FPi does not reach 90% protein (often a definition point for an isolate) because the natural minerals are not removed and thus represent up to 15% of the final mass balance.
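The category cut-offs above can be summarized in a few lines of code. A toy Python sketch (the thresholds come directly from the text; the function name and structure are otherwise arbitrary):

```python
def classify_fpp(protein_pct, fat_pct):
    """Rough FPP category from protein and fat/oil content (percent by mass)."""
    if fat_pct < 1.0 and protein_pct > 90.0:
        return "fish protein isolate (FPi)"
    if fat_pct < 0.3 and protein_pct >= 80.0:
        return "hybrid FPi (retained minerals keep protein near 80%)"
    if 50.0 <= protein_pct <= 70.0 and 1.0 <= fat_pct <= 20.0:
        return "fish protein concentrate (FPC)"
    return "outside the categories described here"

print(classify_fpp(92.0, 0.5))   # -> fish protein isolate (FPi)
print(classify_fpp(60.0, 10.0))  # -> fish protein concentrate (FPC)
```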
Peptides vs proteins and amino acids in the digestive tract:
Any animal that consumes a whole protein must break down and digest the protein in order to absorb the nutrients. For humans this begins with chewing and the addition of saliva enzymes, followed by acid and protease enzyme digestion in the stomach, the end result being a peptide or amino acid fraction ready for uptake into the blood stream via the small intestine. Research has confirmed that most animals have more peptide receptors in the gut and lower intestine than free amino acid receptors; as such, the peptide form of fish protein powder is most conducive to optimal nutritional benefits. Hygienic production of fish protein powder mimics these natural digestion steps, and depending on the degree of hydrolysis, the protein powder will actually be a partial or complete peptide powder, ready for immediate absorption in the intestine.
Nutritional aspects:
Significant elements of the nutritional science of fish protein powders center on the bioactive and antioxidant properties of the peptide fractions produced during hydrolysis and their ability to have a positive impact on many conditions, including gastrointestinal issues associated with irritable bowel syndrome (IBS) and Crohn's disease, as well as reducing hypertension; their fast absorption also promotes the addition of lean muscle mass in humans consuming the products. Further studies showed that peptides in fish protein powders can minimize the injurious effects of anti-inflammatory pain drugs. The University of Maryland School of Medicine concluded that certain peptide fractions from fish may inhibit prostate cancer, and possibly other cancers, from spreading. Additional benefits of fish protein powders center on the dietary needs of various subsets of the human population: individuals who have lactose intolerance, milk allergy, gluten intolerance or coeliac disease require alternate protein sources. The hydrolyzed nature of fish protein powder (its low molecular weight profile) leads it to be used in hypoallergenic applications such as infant formulas. There is no evidence that infants at high risk of allergy to cow's milk should be fed hydrolyzed infant formula instead of breast milk for allergy prevention. For infants who have a high risk of cow's milk allergy but cannot be fed breast milk, there is low-quality evidence suggesting that hydrolyzed protein-based formula may reduce the risk of a cow's milk allergy compared to cow's milk protein formula.
**Caffeine-Free Pepsi**
Caffeine-Free Pepsi:
Caffeine-Free Pepsi is a version of the cola Pepsi that omits the caffeine customarily part of a cola. It was introduced under the brand name "Pepsi Free" in 1982 by PepsiCo and was 99.7 percent caffeine-free. A sugar-free variant was also introduced, known as "Diet Pepsi Free." The "Pepsi Free" name itself was phased out in 1987, and today these colas are known simply as "Caffeine-Free Pepsi" and "Caffeine-Free Diet Pepsi."
Background:
When it was first introduced, Caffeine-Free Pepsi's label background was red, but to avoid confusion with Coca-Cola, the background color was changed to gold in 1987. When Pepsi changed its background to blue in 1998, Caffeine-Free Pepsi's background changed to blue with the letters outlined in gold. In 2009, the caffeine-free version reverted to a gold background. Caffeine-Free Coca-Cola labels also have a gold background; the logo letters are bordered in red for the regular variety, while for the diet variety they are entirely red in color.
Background:
When introduced, Pepsi Free was available in cans, 2-liter bottles, and 20-ounce glass bottles. Caffeine-Free Pepsi is currently available in cans, 16 oz. plastic bottles and 2 liters, though availability varies from store to store (for instance, 16 oz. bottles are typically only available in convenience stores, and some grocers may only have the product in 12 oz cans, if they carry it at all).
In popular culture:
Two cans of Pepsi Free are seen, at separate times, in the 1983 film Mr. Mom.
Two-liter bottles and six-packs of cans of Pepsi Free appear in a refrigerator case behind Sylvester Stallone's character in the grocery store scene in the 1986 film Cobra.
In popular culture:
Pepsi Free was the subject of a scene in the 1985 film Back to the Future. Upon entering a café in 1955, Marty McFly (Michael J. Fox) asks for a Tab (Coca-Cola's first version of a sugar-free soft drink, which was not available until 1963) and is told that he cannot have a "tab," unless he orders something. He then asks for a Pepsi Free (also not available in the 1950s) and is told, "If you want a Pepsi, pal, you're gonna pay for it!" ("Free" is here being mistaken for gratis.) Finally, he asks for "something without any sugar in it," and is served black coffee.
In popular culture:
A can of Diet Pepsi Free can be seen beside Marty's alarm clock towards the beginning of the movie when Doc (Christopher Lloyd) calls him to remind him to meet him at the mall. The can is also seen toward the end of the movie when Marty wakes up in the morning at his house in 1985.
**Pronunciation of GIF**
Pronunciation of GIF:
The pronunciation of GIF, an acronym for the Graphics Interchange Format, has been disputed since the 1990s. Popularly rendered in English as a one-syllable word, the acronym is most commonly pronounced with a hard g (as in gift) or with a soft g (as in gem), differing in the phoneme represented by the letter G. Many public figures and institutions have taken sides in the debate; Steve Wilhite, the creator of the image file format, gave a speech at the 2013 Webby Awards arguing for the soft-g pronunciation. Others have pointed to the term's origin as an abbreviation of the hard-g word graphics to argue for the other pronunciation.
Pronunciation of GIF:
The controversy stems partly from the fact that there is no general rule for how the letter sequence gi is to be pronounced; the hard g prevails in words such as gift, while the soft g is used in others such as ginger. In addition, some speakers enunciate each letter in the acronym individually. English dictionaries generally accept both main alternatives as valid, and linguistic analyses show no clear advantage for either based on the pronunciation frequencies of similar English words. The pronunciation of the acronym can also vary in languages other than English.
Background:
The Graphics Interchange Format (GIF) is an image file format developed in 1987 by Steve Wilhite at the American online service provider CompuServe. GIFs are popularly used to display short, looped animations. The acronym GIF, commonly pronounced as a monosyllable, has a disputed pronunciation: some individuals pronounce the word with a hard g, others with a soft g, and a minority prefer to enunciate each letter of the acronym individually. Wilhite and the team who developed the file format included in the technical specifications that the acronym was to be pronounced with a soft g. In the specifications, the team wrote that "choosy programmers choose ... 'jif'", in homage to the peanut butter company Jif's advertising slogan of "choosy moms choose Jif". According to ABC News, the debate stretches as far back as 1994, with an author of an encyclopedia of image formats stating that "most people" seemed to prefer the hard g pronunciation over his preferred soft g.
Background:
Other languages In French, the acronym tends to be pronounced [ʒif], with the voiced postalveolar fricative [ʒ], as in the j in the French joie or the s in the English measure or vision, even though [dʒ], which does not occur in native vocabulary, tends to be retained in English loanwords (such as jeans). Some languages lack English's soft and hard g sounds in their phonologies; Spanish and Finnish, for example, lack [ʒ] in their native words. In Norwegian, GIF is pronounced with a hard g, [ɡ], unlike native words, for which the sequence ⟨gi⟩ would be pronounced with a voiced palatal approximant, [j], like the y in English yes.
Analysis of the dispute:
Cause In English, the linguistic controversy stems partly from the fact that there is no general rule for how the letter sequence gi is to be pronounced; the hard g prevails in words such as gift, while the soft g is used in others, such as ginger. In Old English, g could make the soft g sound as well as y's consonant sound, and when the hard g was added, both its hard and soft variations persisted when followed by i. An analysis of 269 words by linguist Michael Dow found near-tied results on whether a hard or soft g was more appropriate based on other English words; the results varied somewhat depending on the parameters used. Of the 105 words that contained gi somewhere in the word, 68 used the soft g while only 37 employed its counterpart. However, the hard g words were found to be significantly more common in everyday English; comparatively obscure words like flibbertigibbet and tergiversate, both pronounced with a soft g, were included in the list of 68 soft gi words. When the prevalence of each word was taken into account, the hard and soft g were found to appear in nearly equal frequencies in gi words. No clear favorite emerged by using only the words that begin with gi, nor by using only words of one syllable such as gift and gin.
Analysis of the dispute:
In her coverage of Dow's piece, Canadian linguist Gretchen McCulloch theorizes that since the hard and soft g in this context are used with near-equal frequency, when a person first encounters the word GIF, they make a guess akin to flipping a coin by comparing it to other words they have encountered in the past. Once they have a favorite one way or the other, the notion is solidified—leading McCulloch to comment that this "probably means we'll be fighting the gif pronunciation war for generations to come".
Analysis of the dispute:
Arguments A 2019 analysis by linguist Marten van der Meulen found that the most common arguments employed online over the pronunciation of GIF are "system" arguments, which support one side of the debate by contending that the pronunciation should flow from a consistent rule of language. One example is the "system acronym" argument: the idea that because the letter G in GIF stands for the word graphics, it ought to be pronounced in the acronym with the same phoneme as in that word, i.e. with a hard g. This particular argument is sometimes accompanied by the quip that if the acronym were to be pronounced with a soft g, the source word should be pronounced likewise, as "jraphics". A rebuttal to this argument is that acronyms are not required to follow the pronunciations of their root words. For example, the letter u in the word scuba, an acronym for self-contained underwater breathing apparatus, is pronounced /uː/ even though its deriving word, underwater, is pronounced instead with /ʌ/. A similar acronym discrepancy arises with NASA (National Aeronautics and Space Administration). Another example of a "system" argument is frequency analysis, which examines how many other English words employ hard or soft g pronunciations in similar situations, as in Dow's analysis. After Steve Wilhite announced his opinion that the soft g pronunciation was the only correct form, there was significant chatter on social media and in the press on both sides of the issue. An article by Casey Chan, writing for Gizmodo, argued that Wilhite was wrong because soft g words followed by if should be spelled with the letter j, citing the "jiffy" in "Jiffy Lube" and "be back in a jiffy" as well as the peanut butter company Jif. The next most common argument found in van der Meulen's analysis cited an authority, usually Wilhite as the creator of the file format. After Wilhite announced his support for the soft g pronunciation, many recognized him as the authority on the pronunciation of the word due to his creation of its format. Wilhite is the most commonly cited authority for the pronunciation of GIF; 65.2 percent of surveyed arguments citing an authority favored a soft g. Some, including Casey Chan, cited U.S. President Barack Obama in supporting the hard g; others cited various dictionaries, or software assistants such as Siri, as authorities for GIF's pronunciation.
Analysis of the dispute:
Polling A 2014 Mashable poll of more than 30,000 people worldwide found that seven in ten used the hard g. Van der Meulen's analysis found that 57.2 percent of users who offered an opinion supported the hard g, while 31.8 percent favored the soft g. The analysis also found that 8.2 percent of users supported both pronunciations while favoring the soft g, and 2.8 percent favored enunciating each letter. An informal poll of developers on Stack Overflow showed that 65.6 percent of respondents favored the hard g pronunciation, while 26.3 percent used the soft g, 6 percent sounded out every letter, and 2 percent employed a different pronunciation altogether. However, an analysis from The Economist argued that the disparities in the results were exaggerated by sampling bias; the article commented that while the countries where the hard g is used make up 45 percent of the world's population, respondents from those countries comprised 79 percent of the sample. When the populations of each country were adjusted for, the analysis found that the hard g still led, albeit by a narrower margin of 44 percent to 32 percent for the soft g. In addition, this adjustment brought the popularity of pronouncing each letter up to 21 percent; this variation is common in Asian countries, where it is employed by half of Chinese respondents and 70 percent of South Korean respondents. Developed countries as a whole tended to favor the hard g pronunciation.
Analysis of the dispute:
Dictionaries Dictionary.com lists both the hard and soft g pronunciations for GIF, indicating the latter as the primary pronunciation, while the Cambridge Dictionary of American English and the Cambridge Advanced Learner's Dictionary offer only the hard g pronunciation. The online Merriam-Webster and Lexico dictionaries list both pronunciations. The 2005 edition of the New Oxford American Dictionary gave only the soft g pronunciation; the 2010 edition of the Oxford Dictionary of English listed both pronunciations, listing the soft g first. The French Petit Robert and Petit Larousse list only [ʒif] in their entries. In the Norwegian Academy's dictionary of the Norwegian language, the pronunciation is transcribed with a hard g, as [gifː].
Incidents:
In May 2013, Wilhite was presented with a lifetime achievement award at the annual Webby Awards honoring excellence on the Internet. Upon accepting the award at the ceremony, Wilhite displayed a five-word slide that simply read, in all caps: "It's pronounced 'jif' not 'gif'". Here, jif refers to the soft g pronunciation. Following the speech, Wilhite told The New York Times: "The Oxford English Dictionary accepts both pronunciations. They are wrong. It is a soft g ... End of story." The audience attending the ceremony reacted positively to the short speech, but it generated controversy online, with some commentators pushing back against Wilhite's pronunciation. Van der Meulen remarked that this "seems to be the first ever coiner of a word (or acronym, to be more specific) who gave usage advice about his own creation". More than 17,000 tweets were made in the aftermath of the speech, making "GIF" a trending topic, and more than 50 news articles were written on the incident. The Columbia Journalism Review remarked three years later that the debate seemed to peak with this incident. The peanut butter company Jif responded to a tweet asking how they were feeling following the speech, commenting, "We're nuts about him today." Seven years later, Jif performed a publicity stunt with the GIF-hosting platform Giphy: the two companies released a joint statement arguing that the correct pronunciation employs a hard g and released limited-time jars of peanut butter labeled "GIF" instead of "JIF". In October 2013, The New York Times faced some light criticism on social media for an article that began with the words, "A GIF, pronounced jif, is a compressed image file format invented in 1987." The article included a link to an earlier article from the newspaper covering Wilhite's speech and the quote he gave them. In December 2013, Alex Trebek, the host of the game show Jeopardy!, attracted media attention when the final clue of an episode referenced Wilhite's presentation and opinion on the pronunciation; Trebek read out the contestants' answers using a soft g when the word "GIF" appeared in the correct responses of all three contestants. In the past, Trebek had pronounced each letter individually to remain neutral. In June 2014, Barack Obama, then President of the United States, opined that the acronym should be pronounced with a hard g when prompted in a conversation with David Karp, the founder of Tumblr. Miles Klee of The Daily Dot highlighted an April 2013 post on the White House's Tumblr blog, which included a humorous infographic with the text "animated GIFs (hard 'g')".
**Mazovia encoding**
Mazovia encoding:
Mazovia encoding is a character set used under DOS to represent Polish text. The character set derives from code page 437, with specific positions modified to accommodate Polish letters. Notably, the Mazovia encoding maintains the block graphic characters from code page 437, distinguishing it from IBM's later official Central European code page 852, which failed to preserve all block graphics, leading to incorrect display in programs such as Norton Commander.
Mazovia encoding:
The Mazovia encoding was designed in 1984 by Jan Klimowicz of IMM, as part of a project to develop and produce a Polish IBM PC clone codenamed "Mazovia 1016". The code page was specifically optimized for the peripheral devices commonly used with the Mazovia 1016 computer, including a graphics card with dual switchable graphics, a keyboard with US English and Russian layouts, and printers with Polish fonts. The Mazovia encoding gained widespread acceptance and distribution in Poland when the Polish National Bank (NBP) adopted it as a standard in 1986. The NBP played a significant role in facilitating the production of compatible computers by Ipaco, which utilized Taiwanese components under the guidance of Zbigniew Jakubas and Krzysztof Sochacki.
Mazovia encoding:
Some ambiguity exists in the official code page assignment for the Mazovia encoding: PTS-DOS and S/DOS support this encoding under code page 667 (CP667). The same encoding was also called code page 991 (CP991) in some Polish software; however, the FreeDOS implementation of code page 991 seems not to be identical to this original encoding.
Mazovia encoding:
The DOS code page switching file NECPINW.CPI for NEC Pinwriters supports the Mazovia encoding under both code pages 667 and 991. FreeDOS has meanwhile introduced support for the original Mazovia encoding under code page 790 (CP790) as well. The Fujitsu DL6400 (Pro) / DL6600 (Pro) printers support the Mazovia encoding as well. This encoding is known as code page 3843 in Star printers.
Character set:
Each character is shown with its equivalent Unicode code point. Only the second half of the table (128–255) is shown, all of the first half (0–127) being the same as ASCII and code page 437.
Character set:
Several variants of this encoding exist: Mazovia 157 (ś at 0x9D instead of 0x9E), Fido Mazovia (ć at 0x87 instead of 0x8D, and Ć at 0x80 instead of 0x95), and FreeDOS Mazovia (złoty sign at 0x9B). FreeDOS supports the last variant under code page 991, although the original definition of code page 991, which pre-dates FreeDOS, appears to have been identical to code page 667 / 790. These variants are not fully compliant with the definition of code page 667 / 790 and should therefore not be associated with these numbers.
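Since the full code table is not reproduced above, the sketch below only shows how such a decoder is typically structured, filling in just the code points that the variant discussion names explicitly; the rest of the 0x80–0xFF range would come from the complete Mazovia table.

```python
# Minimal sketch of decoding Mazovia-encoded bytes to Unicode.
# Only positions named in the variant discussion above are filled in;
# the remaining high-half assignments (including the CP437-derived
# block graphics) are omitted and would need the complete table.
MAZOVIA_PARTIAL = {
    0x8D: "ć",  # Fido Mazovia moves this to 0x87
    0x95: "Ć",  # Fido Mazovia moves this to 0x80
    0x9E: "ś",  # Mazovia 157 moves this to 0x9D
}

def decode_mazovia(data: bytes) -> str:
    chars = []
    for b in data:
        if b < 0x80:
            chars.append(chr(b))  # the low half is plain ASCII
        else:
            chars.append(MAZOVIA_PARTIAL.get(b, "\ufffd"))  # unmapped -> U+FFFD
    return "".join(chars)

print(decode_mazovia(b"\x8Dma"))  # -> "ćma" ("moth" in Polish)
```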
**Radical 89**
Radical 89:
Radical 89 or radical double x (爻部), meaning "trigrams", is one of the 34 Kangxi radicals (out of 214 in total) that are composed of 4 strokes.
In the Kangxi Dictionary, there are 16 characters (out of 49,030) to be found under this radical.
This radical does not exist in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China.
Literature:
Fazzioli, Edoardo (1987). Chinese Calligraphy: From Pictograph to Ideogram: The History of 214 Essential Chinese/Japanese Characters. Calligraphy by Rebecca Hon Ko. New York: Abbeville Press. ISBN 0-89659-774-1.
Lunde, Ken (Jan 5, 2009). "Appendix J: Japanese Character Sets" (PDF). CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing (Second ed.). Sebastopol, Calif.: O'Reilly Media. ISBN 978-0-596-51447-1.
**Allotropes of iron**
Allotropes of iron:
At atmospheric pressure, three allotropic forms of iron exist, depending on temperature: alpha iron (α-Fe, ferrite), gamma iron (γ-Fe, austenite), and delta iron (δ-Fe). At very high pressure, a fourth form exists, epsilon iron (ε-Fe, hexaferrum). Some controversial experimental evidence suggests the existence of a fifth high-pressure form that is stable at very high pressures and temperatures. The phases of iron at atmospheric pressure are important because of the differences in solubility of carbon, forming different types of steel. The high-pressure phases of iron are important as models for the solid parts of planetary cores. The inner core of the Earth is generally assumed to consist essentially of a crystalline iron-nickel alloy with ε structure. The outer core surrounding the solid inner core is believed to be composed of liquid iron mixed with nickel and trace amounts of lighter elements.
Standard pressure allotropes:
Alpha iron (α-Fe) Below 912 °C (1,674 °F), iron has a body-centered cubic (bcc) crystal structure and is known as α-iron or ferrite. It is thermodynamically stable and a fairly soft metal. α-Fe can be subjected to pressures up to ca. 15 GPa before transforming into a high-pressure form termed ε-Fe discussed below.
Standard pressure allotropes:
Magnetically, α-iron is paramagnetic at high temperatures. However, below its Curie temperature (TC or A2) of 771 °C (1,044 K or 1,420 °F), it becomes ferromagnetic. In the past, the paramagnetic form of α-iron was known as beta iron (β-Fe). Even though the slight tetragonal distortion in the ferromagnetic state does constitute a true phase transition, the continuous nature of this transition gives it only minor importance in steel heat treating. The A2 line forms the boundary between the beta iron and alpha fields in the phase diagram in Figure 1.
Standard pressure allotropes:
Similarly, the A2 boundary is of only minor importance compared to the A1 (eutectoid), A3 and Acm critical temperatures. The Acm, where austenite is in equilibrium with cementite + γ-Fe, is beyond the right edge in Fig. 1. The α + γ phase field is, technically, the β + γ field above the A2. The beta designation maintains continuity of the Greek-letter progression of phases in iron and steel: α-Fe, β-Fe, austenite (γ-Fe), high-temperature δ-Fe, and high-pressure hexaferrum (ε-Fe).
Standard pressure allotropes:
The primary phase of low-carbon or mild steel and most cast irons at room temperature is ferromagnetic α-Fe. It has a hardness of approximately 80 Brinell. The maximum solubility of carbon is about 0.02 wt% at 727 °C (1,341 °F) and 0.001% at 0 °C (32 °F). When it dissolves in iron, carbon atoms occupy interstitial "holes". Since a carbon atom is about twice the diameter of the tetrahedral hole, it introduces a strong local strain field.
Standard pressure allotropes:
Mild steel (carbon steel with up to about 0.2 wt% C) consists mostly of α-Fe and increasing amounts of cementite (Fe3C, an iron carbide). The mixture adopts a lamellar structure called pearlite. Since bainite and pearlite each contain α-Fe as a component, any iron-carbon alloy will contain some amount of α-Fe if it is allowed to reach equilibrium at room temperature. The amount of α-Fe depends on the cooling process.
Standard pressure allotropes:
A2 critical temperature and induction heating β-Fe and the A2 critical temperature are important in induction heating of steel, such as for surface-hardening heat treatments. Steel is typically austenitized at 900–1000 °C before it is quenched and tempered. The high-frequency alternating magnetic field of induction heating heats the steel by two mechanisms below the Curie temperature: resistance or Joule heating and ferromagnetic hysteresis losses. Above the A2 boundary, the hysteresis mechanism disappears and the required amount of energy per degree of temperature increase is thus substantially larger than below A2. Load-matching circuits may be needed to vary the impedance in the induction power source to compensate for the change.
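As a toy illustration only (not a physical model of induction heating), the sketch below captures the step change just described: below the Curie temperature both Joule heating and hysteresis losses deposit energy, while above A2 only the Joule term remains, so each additional degree draws less power from the same source. All constants are arbitrary placeholders.

```python
# Toy sketch: why heating slows above the Curie temperature (A2).
# The power values are arbitrary placeholders, not measured quantities.
T_CURIE = 771.0  # °C, the A2 temperature given above

def absorbed_power(temp_c: float, p_joule: float = 50.0,
                   p_hysteresis: float = 150.0) -> float:
    """Joule heating always acts; hysteresis losses act only while the
    steel is ferromagnetic, i.e. below the Curie temperature."""
    return p_joule + (p_hysteresis if temp_c < T_CURIE else 0.0)

for t in (700.0, 800.0, 950.0):
    print(t, absorbed_power(t))
# 700.0 -> 200.0, 800.0 -> 50.0, 950.0 -> 50.0: the drop at A2 is why
# load-matching circuits must adjust the source impedance mid-heat.
```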
Standard pressure allotropes:
Gamma iron (γ-Fe) When heating iron above 912 °C (1,674 °F), its crystal structure changes to a face-centered cubic (fcc) crystalline structure. In this form it is called gamma iron (γ-Fe) or austenite. γ-iron can dissolve considerably more carbon (as much as 2.04% by mass at 1,146 °C). This γ form, with its high carbon solubility, is the one exhibited in austenitic stainless steel.
Standard pressure allotropes:
Delta iron (δ-Fe) Peculiarly, above 1,394 °C (2,541 °F) iron changes back into the bcc structure, known as δ-Fe. δ-iron can dissolve as much as 0.08% of carbon by mass at 1,475 °C. It is stable up to its melting point of 1,538 °C (2,800 °F). δ-Fe cannot exist above 5.2 GPa, with austenite instead transitioning directly to a molten phase at these high pressures.
High pressure allotropes:
Epsilon iron / Hexaferrum (ε-Fe) At pressures above approximately 10–13 GPa and temperatures up to around 700 K, α-iron changes into a hexagonal close-packed (hcp) structure, which is also known as ε-iron or hexaferrum; the higher-temperature γ-phase also changes into ε-iron, but generally requires far higher pressures as temperature increases. The triple point of hexaferrum, ferrite, and austenite is 10.5 GPa at 750 K. Antiferromagnetism in alloys of epsilon-Fe with Mn, Os and Ru has been observed.
High pressure allotropes:
Experimental high temperature and pressure An alternate stable form, if it exists, may appear at pressures of at least 50 GPa and temperatures of at least 1,500 K; it has been thought to have an orthorhombic or a double hcp structure. As of December 2011, recent and ongoing experiments are being conducted on high-pressure and superdense carbon allotropes.
Phase transitions:
Melting and boiling points The melting point of iron is experimentally well defined for pressures less than 50 GPa.
Phase transitions:
For greater pressures, published data (as of 2007) put the γ-ε-liquid triple point at pressures that differ by tens of gigapascals, with melting points differing by as much as 1000 K. Generally speaking, molecular dynamics computer simulations of iron melting and shock wave experiments suggest higher melting points and a much steeper slope of the melting curve than static experiments carried out in diamond anvil cells. The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus; however, they are higher than the values for the previous element manganese because that element has a half-filled 3d subshell and consequently its d-electrons are not easily delocalized. This same trend appears for ruthenium but not osmium.
Phase transitions:
Structural phase transitions The exact temperatures at which iron transitions from one crystal structure to another depend on how much and what type of other elements are dissolved in the iron. The phase boundary between the different solid phases is drawn on a binary phase diagram, usually plotted as temperature versus percent iron. Adding some elements, such as chromium, narrows the temperature range for the gamma phase, while others widen it. In elements that reduce the gamma phase range, the alpha-gamma phase boundary connects with the gamma-delta phase boundary, forming what is usually called the gamma loop. Adding gamma-loop additives keeps the iron in a body-centered cubic structure and prevents the steel from undergoing a phase transition to other solid states.
**Charlotte (cake)**
Charlotte (cake):
A charlotte is a type of bread pudding that can be served hot or cold. It is also referred to as an "icebox cake". Bread, sponge cake, crumbs or biscuits/cookies are used to line a mold, which is then filled with a fruit puree or custard. The baked pudding could then be sprinkled with powdered sugar and glazed with a salamander, a red-hot iron plate attached to a long handle, though modern recipes would likely use more practical tools to achieve a similar effect.
Charlotte (cake):
The variant charlotte russe also called charlotte parisienne, created by the French chef Antonin Carême, uses a mold lined with ladyfingers and filled with Bavarian cream.
Classically, stale bread dipped in butter was used as the lining, but sponge cake or ladyfingers may be used today. The filling may be covered with a thin layer of similarly flavoured gelatin.
History:
The charlotte is known to have existed by the late-18th century. In 1796, The New-York Magazine published a poem by Joel Barlow called The Hasty-Pudding that mentions the dish. Some have claimed that it was a tribute to Britain's Queen Charlotte. In 1815, Marie-Antoine Carême claimed to have thought of charlotte à la parisienne "pendant mon établissement", presumably in 1803, when he opened his own pastry shop. The earliest known English recipe is from the 1808 London edition of Maria Rundell's New System of Domestic Cookery: Cut as many very thin slices of white bread as will cover the bottom and line the sides of a baking dish, but first rub it thick with butter. Put apples, in thin slices, into the dish, in layers, till full, strewing sugar between, and bits of butter. In the mean time, soak as many thin slices of bread as will cover the whole, in warm milk, over which lay a plate, and a weight to keep the bread close on the apples. Bake slowly three hours. To a middling sized dish use half a pound of butter in the whole.
History:
In Carême's 1815 Le Pâtissier royal parisien, he mentions many varieties of charlotte: à la parisienne, à la française, à l'italienne, aux macarons d'avelines, aux gaufres aux pistaches, de pommes, de pomme d'api, d'abricots, de pêches, de pommes glacée aux abricots, de pommes au beurre, parisienne à la vanille, de pommes; he mentions à la russe as the name used by others for what he called à la parisienne.
Types:
There are many variants. Most charlottes are served cool, so they are more common in warmer seasons. Fruit charlottes usually combine a fruit purée or preserve, like raspberry or pear, with a custard filling or whipped cream. Charlottes are not always made with fruit; some, notably charlotte russe, use custard or Bavarian cream, and a chocolate charlotte is made with layers of chocolate mousse filling. The Algerian charlotte is made with honey, dates, orange rind, and almonds. The 19th-century Russian sharlotka is a baked pudding with layers of brown bread and apple sauce, and has since evolved into a simple dessert of chopped apples baked in a sweet batter.
Types:
Charlotte russe Charlotte russe or charlotte à la russe is a cold dessert of Bavarian cream set in a mold lined with ladyfingers. A simplified version of charlotte russe was a popular dessert or on-the-go treat sold in candy stores and luncheonettes in New York City during the 1930s, 1940s, and 1950s. It consisted of a paper cup filled with yellow cake and whipped cream topped with half a maraschino cherry; the bottom of the cup was pushed up to eat it. Charlotte royale is made with the same filling as a charlotte russe, but the ladyfingers are replaced by slices of Swiss roll.
Etymology:
The earliest attestation of "charlotte" is in a New York magazine in 1796. Its origins are unclear. It may come from the woman's name. One etymology suggests it is a corruption of the Old English word charlyt, a kind of custard, or charlets, a meat dish. It is often claimed that Carême named it charlotte after one of the various foreign royals he served, but the name appears years earlier.
Etymology:
Carême's preferred name for charlotte à la russe was charlotte à la parisienne, and he says (in 1815) that "others" prefer to call it russe, so it is unlikely that he named it russe for Czar Alexander I, as has been proposed.
**Nailset**
Nailset:
A nailset or nail punch is a hand tool used for driving the exposed head of a nail or pin below the surface of a piece of wood, such as when installing decorative moulding or face-fastening wood flooring.
Nailset:
Though they vary in design, nailsets are typically made from a hard round or square steel rod which tapers at one end to a flat or slightly hollowed tip. The tip is placed against the head of the nail, while the other end of the nailset is struck with a hammer. Nailsets come with different sized tips suited to different sized nail heads.
**'t**
't:
In the Dutch language, the word 't (Dutch pronunciation: [ət]) is a contraction of the article "het", meaning "the". 't can be found as a tussenvoegsel, a word that is positioned between a person's first and last name. Careful writers use an apostrophe (U+2019, ’) in front of the t, not a left quotation mark (U+2018, ‘), and put a space before the apostrophe.
Examples:
Dirk van 't Klooster Evert-Jan 't Hoen Gerard 't Hooft Haas Visser 't Hooft in 't Veld (surname) Bart Spring in 't Veld Sophie in 't Veld Jacobus Henricus van 't Hoff John van 't Schip Maarten 't Hart Tom van 't Hek Van 't Hof (surname) Van 't Wout (surname) Willem Visser 't Hooft Youp van 't Hek
**Kirsanov reaction**
Kirsanov reaction:
The Kirsanov reaction is a method for the synthesis of certain organophosphorus compounds. In this reaction a tertiary phosphine is combined with a halogen and then an amine to give iminophosphoranes, which are useful ligands and reagents.
Kirsanov reaction:
A typical reaction involves triphenylphosphine with bromine to give bromotriphenylphosphonium bromide:

Ph3P + Br2 → Ph3PBr+Br−

This salt is treated in situ with alkylamines to give the iminophosphorane:

Ph3PBr+Br− + 3 RNH2 → Ph3PNR + 2 RNH3+Br−

The method is used when the conventional Staudinger reaction is not applicable, i.e. when the organic azide needed to generate the iminophosphorane is not available. Thus, it is used to make iminophosphoranes from alkyl amines.
**Pasta allo scarpariello**
Pasta allo scarpariello:
Pasta allo scarpariello is a traditional Italian pasta dish from Naples. It is typically made with spaghetti, tomatoes, Pecorino Romano cheese, Parmigiano Reggiano cheese, basil, chili pepper, extra virgin olive oil, garlic and salt. Its name literally means "shoemaker's pasta".
**Epigram**
Epigram:
An epigram is a brief, interesting, memorable, and sometimes surprising or satirical statement. The word is derived from the Greek ἐπίγραμμα epígramma "inscription" from ἐπιγράφειν epigráphein "to write on, to inscribe", and the literary device has been employed for over two millennia.
The presence of wit or sarcasm tends to distinguish non-poetic epigrams from aphorisms and adages, which tend to lack those qualities.
Ancient Greek:
The Greek tradition of epigrams began as poems inscribed on votive offerings at sanctuaries – including statues of athletes – and on funerary monuments, for example "Go tell it to the Spartans, passersby...". These original epigrams did the same job as a short prose text might have done, but in verse. Epigram became a literary genre in the Hellenistic period, probably developing out of scholarly collections of inscriptional epigrams.
Ancient Greek:
Though modern epigrams are usually thought of as very short, Greek literary epigram was not always as short as later examples, and the divide between "epigram" and "elegy" is sometimes indistinct (they share a characteristic metre, elegiac couplets). In the classical period, the clear distinction between them was that epigrams were inscribed and meant to be read, while elegies were recited and meant to be heard. Some elegies could be quite short, but only public epigrams were longer than ten lines. All the same, the origin of epigram in inscription exerted a residual pressure to keep things concise, even when they were recited in Hellenistic times. Many of the characteristic types of literary epigram look back to inscriptional contexts, particularly funerary epigram, which in the Hellenistic era becomes a literary exercise. Many "sympotic" epigrams combine sympotic and funerary elements – they tell their readers (or listeners) to drink and live for today because life is short. Generally, any theme found in classical elegies could be and were adapted for later literary epigrams.
Ancient Greek:
Hellenistic epigrams are also thought of as having a "point" – that is, the poem ends in a punchline or satirical twist. By no means do all Greek epigrams behave this way; many are simply descriptive, but Meleager of Gadara and Philippus of Thessalonica, the first comprehensive anthologists, preferred the short and witty epigram. Since their collections helped form knowledge of the genre in Rome and later throughout Europe, epigram came to be associated with 'point', especially because the European epigram tradition takes the Latin poet Martial as its principal model; he copied and adapted Greek models (particularly the contemporary poets Lucillius and Nicarchus) selectively and in the process redefined the genre, aligning it with the indigenous Roman tradition of "satura", hexameter satire, as practised by (among others) his contemporary Juvenal. Greek epigram was actually much more diverse, as the Milan Papyrus now indicates.
Ancient Greek:
A major source for Greek literary epigram is the Greek Anthology, a compilation from the 10th century AD based on older collections, including those of Meleager and Philippus. It contains epigrams ranging from the Hellenistic period through the Imperial period and Late Antiquity into the compiler's own Byzantine era – a thousand years of short elegiac texts on every topic under the sun. The Anthology includes one book of Christian epigrams as well as one book of erotic and amorous homosexual epigrams called the Μοῦσα Παιδική (Mousa Paidike, "The Boyish Muse").
Ancient Roman:
Roman epigrams owe much to their Greek predecessors and contemporaries. Roman epigrams, however, were often more satirical than Greek ones, and at times used obscene language for effect. Latin epigrams could be composed as inscriptions or graffiti, such as this one from Pompeii, which exists in several versions and seems from its inexact meter to have been composed by a less educated person. Its content makes it clear how popular such poems were:

Admiror, O paries, te non cecidisse ruinis qui tot scriptorum taedia sustineas.

I'm astonished, wall, that you haven't collapsed into ruins, since you're holding up the weary verse of so many poets.

However, in the literary world, epigrams were most often gifts to patrons or entertaining verse to be published, not inscriptions. Many Roman writers seem to have composed epigrams, including Domitius Marsus, whose collection Cicuta (now lost) was named after the poisonous plant Cicuta for its biting wit, and Lucan, more famous for his epic Pharsalia. Authors whose epigrams survive include Catullus, who wrote both invectives and love epigrams – his poem 85 is one of the latter.
Ancient Roman:
Odi et amo. Quare id faciam fortasse requiris. Nescio, sed fieri sentio, et excrucior.

I hate and I love. Maybe you'd like to know why I do? I don't know, but I feel it happening, and I am tormented.

Martial, however, is considered to be the master of the Latin epigram. His technique relies heavily on the satirical poem with a joke in the last line, thus drawing him closer to the modern idea of epigram as a genre. Here he defines his genre against a (probably fictional) critic (in the latter half of 2.77):

Disce quod ignoras: Marsi doctique Pedonis saepe duplex unum pagina tractat opus. Non sunt longa quibus nihil est quod demere possis, sed tu, Cosconi, disticha longa facis.

Learn what you don't know: one work of (Domitius) Marsus or learned Pedo often stretches out over a double-sided page. A work isn't long if you can't take anything out of it, but you, Cosconius, write even a couplet too long.

Poets known for their epigrams whose work has been lost include Cornificia.
English:
In early English literature the short couplet poem was dominated by the poetic epigram and proverb, especially in the translations of the Bible and the Greek and Roman poets.
English:
Two successive lines of verse that rhyme with each other are known as a couplet. Since 1600, the couplet has been featured as a part of the longer sonnet form, most notably in William Shakespeare's sonnets. Sonnet 76 is an example. The two-line poetic form as a closed couplet was also used by William Blake in his poem "Auguries of Innocence", and also by Byron in his poem Don Juan, by John Gay in his fables, and by Alexander Pope in his An Essay on Man.
English:
The first work of English literature penned in North America was Robert Hayman's Quodlibets, Lately Come Over from New Britaniola, Old Newfoundland, which is a collection of over 300 epigrams, many of which do not conform to the two-line rule or trend. While the collection was written between 1618 and 1628 in what is now Harbour Grace, Newfoundland, it was published shortly after his return to Britain. In Victorian times the epigram couplet was often used by the prolific American poet Emily Dickinson. Her poem No. 1534 is a typical example of her eleven poetic epigrams. The novelist George Eliot also included couplets throughout her writings. Her best example is in her sequenced sonnet poem entitled Brother and Sister, in which each of the eleven sequenced sonnets ends with a couplet. In her sonnets, the preceding lead-in line to the couplet ending of each could be thought of as a title for the couplet, as is shown in Sonnet VIII of the sequence.
English:
During the early 20th century, the rhymed epigram couplet form developed into a fixed verse image form, with an integral title as the third line. Adelaide Crapsey codified the couplet form into a two-line rhymed verse of ten syllables per line with her image couplet poem On Seeing Weather-Beaten Trees, first published in 1915.
By the 1930s, the five-line cinquain verse form became widely known in the poetry of the Scottish poet William Soutar. These were originally labelled epigrams but later identified as image cinquains in the style of Adelaide Crapsey.
J. V. Cunningham was also a noted writer of epigrams (a medium suited to a "short-breathed" person).
Poetic epigrams:
What is an Epigram? a dwarfish whole,
Its body brevity, and wit its soul.
— Samuel Taylor Coleridge ("Epigram", 1809)

Some can gaze and not be sick
But I could never learn the trick.
There's this to say for blood and breath;
They give a man a taste for death.
— A. E. Housman

Little strokes
Fell great oaks.
— Benjamin Franklin

Here lies my wife: here let her lie!
Now she's at rest – and so am I.
— John Dryden

Three Poets, in three distant Ages born,
Greece, Italy, and England did adorn.
The First in loftiness of thought surpassed;
The Next in Majesty; in both the Last.
The force of Nature could no farther go:
To make a third she joined the former two.
— John Dryden ("Epigram on Milton", 1688; an epigram about John Milton, on whom many poets, including Dryden, commented)

We have a pretty witty king,
Whose word no man relies on.
He never said a foolish thing,
And never did a wise one.
— John Wilmot, 2nd Earl of Rochester (epigram about Charles II of England)

I am His Highness' dog at Kew;
Pray tell me, sir, whose dog are you?
— Alexander Pope

I'm tired of Love: I'm still more tired of Rhyme.
But Money gives me pleasure all the time.
— Hilaire Belloc

I hope for nothing. I fear nothing. I am free.
— Nikos Kazantzakis

To define the beautiful is to misunderstand it.
— Charles Robert Anon (Fernando Pessoa)

This Humanist whom no belief constrained
Grew so broad-minded he was scatter-brained.
— J. V. Cunningham

All things pass
Love and mankind is grass.
— Stevie Smith
In art:
"When Guns Speak, Death Settles Disputes" is Charles Marion Russell's epigrammatic title for a clash by gunfighters of the Old West in America.
**Goldbach's conjecture**
Goldbach's conjecture:
Goldbach's conjecture is one of the oldest and best-known unsolved problems in number theory and all of mathematics. It states that every even natural number greater than 2 is the sum of two prime numbers.
The conjecture has been shown to hold for all integers less than 4×10¹⁸, but remains unproven despite considerable effort.
History:
On 7 June 1742, the German mathematician Christian Goldbach wrote a letter to Leonhard Euler (letter XLIII), in which he proposed the following conjecture: every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until all terms are units. Goldbach was following the now-abandoned convention of considering 1 to be a prime number, so that a sum of units would indeed be a sum of primes.
He then proposed a second conjecture in the margin of his letter, which implies the first: ... eine jede Zahl, die grösser ist als 2, ein aggregatum trium numerorum primorum sey.
Every integer greater than 2 can be written as the sum of three primes.
Euler replied in a letter dated 30 June 1742 and reminded Goldbach of an earlier conversation they had had ("... so Ew vormals mit mir communicirt haben ..."), in which Goldbach had remarked that the first of those two conjectures would follow from the statement that every positive even integer can be written as the sum of two primes. This is in fact equivalent to his second, marginal conjecture.
In the letter dated 30 June 1742, Euler stated: Dass ... ein jeder numerus par eine summa duorum primorum sey, halte ich für ein ganz gewisses theorema, ungeachtet ich dasselbe nicht demonstriren kann. That ... every even integer is a sum of two primes, I regard as a completely certain theorem, although I cannot prove it.
Each of the three conjectures above has a natural analog in terms of the modern definition of a prime, under which 1 is excluded.
History:
A modern version of the first conjecture is: every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until all terms are two. A modern version of the marginal conjecture is: every integer greater than 5 can be written as the sum of three primes. And a modern version of Goldbach's older conjecture of which Euler reminded him is: every even integer greater than 2 can be written as the sum of two primes. These modern versions might not be entirely equivalent to the corresponding original statements. For example, if there were an even integer N = p + 1 larger than 4, for p a prime, that could not be expressed as the sum of two primes in the modern sense, then it would be a counterexample to the modern version of the third conjecture (without being a counterexample to the original version). The modern version is thus probably stronger (but in order to confirm that, one would have to prove that the first version, freely applied to any positive even integer n, could not possibly rule out the existence of such a specific counterexample N). In any case, the modern statements have the same relationships with each other as the older statements did. That is, the second and third modern statements are equivalent, and either implies the first modern statement.
History:
The third modern statement (equivalent to the second) is the form in which the conjecture is usually expressed today. It is also known as the "strong", "even", or "binary" Goldbach conjecture. A weaker form of the second modern statement, known as "Goldbach's weak conjecture", the "odd Goldbach conjecture", or the "ternary Goldbach conjecture", asserts that every odd integer greater than 5 can be written as the sum of three primes. A proof for the weak conjecture was proposed in 2013 by Harald Helfgott. Helfgott's proof has not yet appeared in a peer-reviewed publication, though it was accepted for publication in the Annals of Mathematics Studies series in 2015 and has been undergoing further review and revision since. The weak conjecture would be a corollary of the strong conjecture: if n − 3 is a sum of two primes, then n is a sum of three primes. However, the converse implication and thus the strong Goldbach conjecture remain unproven.
Verified results:
For small values of n, the strong Goldbach conjecture (and hence the weak Goldbach conjecture) can be verified directly. For instance, in 1938, Nils Pipping laboriously verified the conjecture up to n = 100,000. With the advent of computers, many more values of n have been checked; T. Oliveira e Silva ran a distributed computer search that has verified the conjecture for n ≤ 4×10¹⁸ (and double-checked up to 4×10¹⁷) as of 2013. One record from this search is that 3325581707333960528 is the smallest number that cannot be written as a sum of two primes where one is smaller than 9781.
Heuristic justification:
Statistical considerations that focus on the probabilistic distribution of prime numbers present informal evidence in favour of the conjecture (in both the weak and strong forms) for sufficiently large integers: the greater the integer, the more ways there are available for that number to be represented as the sum of two or three other numbers, and the more "likely" it becomes that at least one of these representations consists entirely of primes.
Heuristic justification:
A very crude version of the heuristic probabilistic argument (for the strong form of the Goldbach conjecture) is as follows. The prime number theorem asserts that an integer m selected at random has roughly a 1/ln m chance of being prime. Thus if n is a large even integer and m is a number between 3 and n/2, then one might expect the probability of m and n − m simultaneously being prime to be 1/(ln m · ln(n − m)). If one pursues this heuristic, one might expect the total number of ways to write a large even integer n as the sum of two odd primes to be roughly

Σ_{m=3}^{n/2} 1/(ln m · ln(n − m)) ≈ n/(2 (ln n)²).
Heuristic justification:
Since ln n ≪ √n, this quantity goes to infinity as n increases, and one would expect that every large even integer has not just one representation as the sum of two primes, but in fact very many such representations.
Heuristic justification:
This heuristic argument is actually somewhat inaccurate, because it assumes that the events of m and n − m being prime are statistically independent of each other. For instance, if m is odd, then n − m is also odd, and if m is even, then n − m is even, a non-trivial relation because, besides the number 2, only odd numbers can be prime. Similarly, if n is divisible by 3, and m was already a prime other than 3, then n − m would also be coprime to 3 and thus be slightly more likely to be prime than a general number. Pursuing this type of analysis more carefully, G. H. Hardy and John Edensor Littlewood in 1923 conjectured (as part of their Hardy–Littlewood prime tuple conjecture) that for any fixed c ≥ 2, the number of representations of a large integer n as the sum of c primes n = p1 + ⋯ + pc with p1 ≤ ⋯ ≤ pc should be asymptotically equal to

(∏_p p·γ_{c,p}(n)/(p − 1)^c) · n^{c−1}/((c − 1)! (ln n)^c),

where the product is over all primes p, and γ_{c,p}(n) is the number of solutions to the equation n = q1 + ⋯ + qc (mod p) in modular arithmetic, subject to the constraints q1, …, qc ≠ 0 (mod p). This formula has been rigorously proven to be asymptotically valid for c ≥ 3 from the work of Ivan Matveevich Vinogradov, but is still only a conjecture when c = 2. In the latter case, the above formula simplifies to 0 when n is odd, and to

2 Π₂ (n/(ln n)²) ∏_{p | n, p ≥ 3} (p − 1)/(p − 2)

when n is even, where Π₂ is Hardy–Littlewood's twin prime constant

Π₂ = ∏_{p ≥ 3} (1 − 1/(p − 1)²) = 0.66016 18158 46869 57392 78121 10014 …

This is sometimes known as the extended Goldbach conjecture. The strong Goldbach conjecture is in fact very similar to the twin prime conjecture, and the two conjectures are believed to be of roughly comparable difficulty.
Heuristic justification:
The Goldbach partition function is the function that associates to each even integer the number of ways it can be decomposed into a sum of two primes. Its graph looks like a comet and is therefore called Goldbach's comet. Goldbach's comet suggests tight upper and lower bounds on the number of representations of an even number as the sum of two primes, and also that the number of these representations depends strongly on the value modulo 3 of the number.
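A minimal sketch of the partition function follows (a direct sieve-and-count approach, not an optimized method): plotting its values over even n reproduces Goldbach's comet, and a nonzero count for every even n checked is exactly what the verification projects above confirmed up to 4×10¹⁸.

```python
import math

def goldbach_partitions(n: int) -> int:
    """Count unordered ways to write even n > 2 as p + q with p, q prime."""
    if n <= 2 or n % 2:
        raise ValueError("n must be an even integer greater than 2")
    sieve = bytearray([1]) * (n + 1)  # sieve of Eratosthenes over 0..n
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])

print(goldbach_partitions(100))  # 6: 3+97, 11+89, 17+83, 29+71, 41+59, 47+53
print(goldbach_partitions(10000), 10000 / (2 * math.log(10000) ** 2))
# actual count vs. the crude heuristic n/(2 (ln n)^2) from the previous section
```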
Rigorous results:
The strong Goldbach conjecture is much more difficult than the weak Goldbach conjecture. Using Vinogradov's method, Nikolai Chudakov, Johannes van der Corput, and Theodor Estermann showed that almost all even numbers can be written as the sum of two primes (in the sense that the fraction of even numbers up to some N which can be so written tends towards 1 as N increases). In 1930, Lev Schnirelmann proved that any natural number greater than 1 can be written as the sum of not more than C prime numbers, where C is an effectively computable constant; see Schnirelmann density. Schnirelmann's constant is the lowest number C with this property. Schnirelmann himself obtained C < 800,000. This result was subsequently enhanced by many authors, such as Olivier Ramaré, who in 1995 showed that every even number n ≥ 4 is in fact the sum of at most 6 primes. The best known result currently stems from the proof of the weak Goldbach conjecture by Harald Helfgott, which directly implies that every even number n ≥ 4 is the sum of at most 4 primes. In 1924, Hardy and Littlewood showed under the assumption of the generalized Riemann hypothesis that the number of even numbers up to X violating the Goldbach conjecture is much less than X^(1/2 + c) for small c. In 1948, using sieve theory, Alfréd Rényi showed that every sufficiently large even number can be written as the sum of a prime and an almost prime with at most K factors. Chen Jingrun showed in 1973 using the methods of sieve theory that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes). See Chen's theorem for further information.
Rigorous results:
In 1975, Hugh Montgomery and Robert Charles Vaughan showed that "most" even numbers are expressible as the sum of two primes. More precisely, they showed that there exist positive constants c and C such that for all sufficiently large numbers N, every even number less than N is the sum of two primes, with at most CN^(1 − c) exceptions. In particular, the set of even integers that are not the sum of two primes has density zero.
Rigorous results:
In 1951, Yuri Linnik proved the existence of a constant K such that every sufficiently large even number is the sum of two primes and at most K powers of 2. János Pintz and Imre Ruzsa found in 2020 that K = 8 works. Assuming the generalized Riemann hypothesis, K = 7 also works, as shown by Roger Heath-Brown and Jan-Christoph Schlage-Puchta in 2002.
Related problems:
Although Goldbach's conjecture implies that every positive integer greater than one can be written as a sum of at most three primes, it is not always possible to find such a sum using a greedy algorithm that uses the largest possible prime at each step. The Pillai sequence tracks the numbers requiring the largest number of primes in their greedy representations. Similar problems to Goldbach's conjecture exist in which primes are replaced by other particular sets of numbers, such as the squares: it was proven by Lagrange that every positive integer is the sum of four squares. See Waring's problem and the related Waring–Goldbach problem on sums of powers of primes.
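To make the greedy point concrete, here is a minimal sketch of that procedure; it shows a case where greedily taking the largest prime strands a remainder of 1 even though a three-prime sum exists (27 = 3 + 11 + 13).

```python
def is_prime(k: int) -> bool:
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    d = 3
    while d * d <= k:
        if k % d == 0:
            return False
        d += 2
    return True

def greedy_prime_sum(n: int):
    """Repeatedly subtract the largest prime <= the remainder."""
    terms = []
    while n >= 2:
        p = next(q for q in range(n, 1, -1) if is_prime(q))
        terms.append(p)
        n -= p
    return terms, n  # leftover is 0 only if the greedy sum worked exactly

print(greedy_prime_sum(26))  # ([23, 3], 0): two primes suffice
print(greedy_prime_sum(27))  # ([23, 3], 1): greedy strands a 1, yet 27 = 3 + 11 + 13
```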
Related problems:
Hardy and Littlewood listed as their Conjecture I: "Every large odd number (n > 5) is the sum of a prime and the double of a prime". This conjecture is known as Lemoine's conjecture and is also called Levy's conjecture.
The Goldbach conjecture for practical numbers, a prime-like sequence of integers, was stated by Margenstern in 1984, and proved by Melfi in 1996: every even number is a sum of two practical numbers.
Related problems:
A strengthening of the Goldbach conjecture proposed by Harvey Dubner states that every even integer greater than 4208 is the sum of two twin primes. Only 34 even integers less than 4208 are not the sum of two twin primes. Dubner has verified computationally that this list is complete up to 2×10¹⁰. A proof of this stronger conjecture would not only imply Goldbach's conjecture, but also the twin prime conjecture.
In popular culture:
Goldbach's Conjecture (Chinese: 哥德巴赫猜想) is the title of the biography of Chinese mathematician and number theorist Chen Jingrun, written by Xu Chi.
In popular culture:
The conjecture is a central point in the plot of the 1992 novel Uncle Petros and Goldbach's Conjecture by Greek author Apostolos Doxiadis, in the short story "Sixty Million Trillion Combinations" by Isaac Asimov, and in the 2008 mystery novel No One You Know by Michelle Richmond. Goldbach's conjecture is also part of the plot of the 2007 Spanish film Fermat's Room.
**Voluntary Control Council for Interference by Information Technology Equipment**
Voluntary Control Council for Interference by Information Technology Equipment:
The Voluntary Control Council for Interference by Information Technology Equipment or VCCI is the Japanese body governing RF emissions (i.e. electromagnetic interference) standards. It was formed in December 1985.
The VCCI mark of conformance also appears on some electrical equipment sold outside Japan.
**Ski cross**
Ski cross:
Ski cross is a skiing competition which incorporates terrain features traditionally found in freestyle skiing with courses which include big-air jumps and high-banked turns. Although it is a timed racing event, it is often considered a type of freestyle skiing. What sets ski cross apart from other alpine skiing disciplines is that it involves more than one skier racing down the course. Any intentional contact with other competitors, such as grabbing or other forms of contact meant to gain an advantage, leads to disqualification.
Ski cross:
Ski cross is a part of the FIS Freestyle World Ski Championships, the world championship organized by the FIS for freestyle skiing. First organized in 1986, the world championship is now held every odd year. In 2010 the sport debuted as a part of the Winter Olympic Games and has been contested ever since. It was a part of the Winter X Games until 2012.
Overview:
In a time trial or qualification round, every competitor skis down the course, which is built to encompass both naturally occurring terrain and artificial features like jumps, rollers or banks. After the time trial, the fastest 32 skiers (or the fastest 16 if there are fewer than 32 competitors) compete in a knockout series in rounds of four. A group of four skiers start simultaneously and attempt to reach the end of the course. The first two to cross the finish line advance to the next round. At the end, the big final and small final rounds determine 1st to 4th and 5th to 8th places, respectively.
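The bracket progression just described (32 qualifiers, heats of four, top two advancing) can be sketched as follows; heat outcomes here are randomized stand-ins for actual racing, so this illustrates only the format, not any real result.

```python
import random

def run_knockout(seeds):
    """Knockout series: heats of four, first two across the line advance."""
    field = list(seeds)                    # skiers ordered by qualification time
    while len(field) > 4:
        next_round = []
        for i in range(0, len(field), 4):
            heat = field[i : i + 4]
            random.shuffle(heat)           # placeholder for the race itself
            next_round.extend(heat[:2])    # first two advance
        field = next_round
    return field                           # big-final field (places 1-4)

# 32 qualifiers seeded 1 (fastest) to 32; the semifinal losers would
# contest the small final for places 5-8.
print(run_knockout(range(1, 33)))
```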
History:
The idea for a multi-racer single run with obstacles seems to have been born at Alyeska Ski Resort in Alaska (USA) during the late 1970s. A group of racers, led by Scott Hunter, an employee at Alyeska, wanted to take advantage of the mountain's natural bobsled-like gullies and rollers in a race that was a hybrid between a downhill ski race and motocross. It eventually evolved into a race using up to 5 skiers on the course at the same time, all racing against each other. No ski poles were allowed, and racers were allowed to interfere and contact other racers as much as they wished. As a result, there were typically several falls (and some injuries) from intentional collisions during each run. Interest waned in the early 1980s as athletes graduated high school and left for college, while other racers concentrated on USSA- and FIS-sanctioned events. The last ski cross event on the original "silvertip" track occurred in the early 1980s.
History:
A similar idea originated with Jim "Too Tall" Essick, one of the founders of Recreational Sports Marketing (RSM), in the late 1980s. Essick wanted to bring the excitement of motocross to skiing, in order to make ski races more exciting for spectators. The idea was pitched to several corporations, but none wanted to sponsor the concept at the time. In 1991, a television programme filmed a snowboard cross segment, and the name "boarder cross" was trademarked. Eventually, similar events were staged with skis and, thus, skier cross was born.
FIS Freestyle World Ski Championships:
In addition to moguls and aerials, ski cross competitions were added to the International Ski Federation (FIS) Freestyle Skiing World Cup calendar in 2004.
Winter Olympic Games:
Ski cross debuted in the Olympics at the 2010 Winter Olympics where Michael Schmid won the men's event, and Ashleigh McIvor of Canada won the women's event.
In the 2014 Winter Olympics, France's men swept the podium, while in the women's event, Canadians Marielle Thompson and Kelsey Serwa finished first and second respectively. Swedish athlete Anna Holmlund took bronze.
Winter Olympic Games:
In the 2018 Winter Olympics in Korea, Canada continued its domination of the sport. Kelsey Serwa won her second Olympic medal, this time a gold. Canadian teammate Brittany Phelan took home the silver. Swiss skier Fanny Smith won bronze. On the men's side, Brady Leman got redemption after crashing in the final at Sochi by winning gold in Korea. Swiss athlete Marc Bischofberger won silver and Russian Sergey Ridzik won bronze (competing under the Olympic Flag).
Winter X Games:
Ski cross featured in all of the first fifteen Winter X Games, an event which features extreme sports, through the 2012 Winter X Games. Ski cross, boardercross, and mono ski cross were cut from the 2013 Winter X Games due to the cost of building the cross course.
**Tampon tax**
Tampon tax:
Tampon tax (or period tax) is a popular term used to call attention to tampons, and other feminine hygiene products, being subject to value-added tax (VAT) or sales tax, unlike the tax exemption status granted to other products considered basic necessities. Proponents of tax exemption argue that tampons, sanitary napkins, menstrual cups and comparable products constitute basic, unavoidable necessities for women, and any additional taxes constitute a pink tax.
Tampon tax:
Proponents of tax exemption argue that tampons, sanitary napkins, menstrual cups and other products which serve the basic menstrual cycle constitute unavoidable necessities for women and should be classified alongside other unavoidable, tax-exempt necessities, such as groceries and personal medical items. The BBC estimates that women need to use feminine hygiene products for about a week each month for about 30 years. According to the American Medical Association, over 17,000 menstrual hygiene items are needed in a user's lifetime, amounting to a cost of around 2,000 dollars. While sales tax policy varies across jurisdictions, these products were typically taxed at the same rate as non-essential goods, such as in the United States, while other countries, such as the United Kingdom and Ireland, reduced or eliminated their general consumption tax on sanitary products. When asked about equivalent exemptions for men, proponents argue that no male products, condoms included, are comparable to feminine hygiene products, since menstruation is biological and "feminine hygiene is not a choice". However, others argue that other basic necessities such as toilet paper are still taxed in many countries, for example in the UK at 20%. As the vast majority of consumers of feminine hygiene products are women, the increased cost has been criticized as being discriminatory against women. The tampon tax is not a special tax levied directly on feminine hygiene products. Since about 2004, many countries have abolished or reduced sales taxes for tampons and pads, including Kenya, Canada, India, Colombia, Australia, Germany, and Rwanda.
Tax law by jurisdiction:
Below are examples of countries that have or used to have a tampon tax (ordered by most recent changes to the country's tax system first): Belize will eliminate the General Sales Tax on feminine hygiene products on April 1, 2023. They will also no longer be subject to importation duties.
The tampon tax was abolished in Britain on 1 January 2021, following Britain's departure from the EU, meaning there is now a zero rate of VAT applying to women's sanitary products.
Rwanda removed their VAT on all sanitary products on 10 December 2019. The change was made in response to school absence and dropouts caused by 18% of Rwandan women and girls being unable to attend school or work due to not being able to afford feminine hygiene products.
Australia repealed the 10% tax on tampons and pads on 1 January 2019, ending an 18-year campaign, after all states and territories agreed to make sanitary products explicitly exempt from the GST.
In Colombia, on 14 November 2018, the Constitutional Court unanimously ruled to strike down a 5 per cent tax on tampons and pads on gender equality grounds.
India eliminated its 12% tax on feminine hygiene products in 2018, after a year of lobbying by advocacy groups and celebrities. Akshay Kumar starred as the lead actor in Pad Man, raising awareness about the taboo on menstruation.
Mauritius eradicated its tampon tax in 2017 following a popular online petition initiated and led by gender consultant and feminist Trisha Gukhool.
Canada removed its tampon tax in mid-2015 following an online petition signed by thousands.
In 2004, Kenya was the first country to abolish sales tax for menstrual products.
European Union European Union member states can decide whether to continue to apply VAT to menstrual hygiene products, as EU rules preventing the creation of new VAT exemptions have been relaxed.
Ireland levies no value-added tax on tampons, panty liners, and sanitary towels. Ireland is the only EU country to have a zero tax rate on sanitary goods. The rate predates legislation restricting zero-rating (a grandfather clause).
In Germany, the amount of tax on sanitary items was cut from 19% (the basic rate) to 7% (the reduced rate) as of 1 January 2020. This is said to be a step toward a tax system that does not discriminate against women.
Other European countries France, Spain, Portugal, and the Netherlands have either cut their taxes in recent years or plan to do so.
Tax law by jurisdiction:
United Kingdom There is a zero rate of VAT applying to women's sanitary products in the UK. The United Kingdom had levied a value-added tax on sanitary products since it joined the European Economic Community in 1973. This rate was reduced to 5% specifically for sanitary products in 2000, after lobbying from Member of Parliament Dawn Primarolo, who said that this reduction was "about fairness, and doing what we can to lower the cost of a necessity." This is the lowest rate possible under the European Union's value added tax law, which as of 2016 did not allow a reduction to zero rates. The only goods that can be zero rated are those with historic zero rates that have been applied continually since before 1991. The UK Independence Party raised the issue in the 2015 general election, promising to withdraw from the European Union and allow the zero rate. Prime Minister David Cameron commented, when prompted, that the tampon tax campaign was "long-standing" and a complicated issue within the European Union. In England, one in ten women between 14 and 21 cannot afford menstrual management products. Laura Coryton led a "Stop taxing periods, period" campaign with an online petition to have the European Union remove the value-added tax for sanitary products. George Osborne mentioned the petition by name in his 2015 Autumn Statement pledge to end the tampon tax at the European Union level. The petition platform's CEO cited the campaign as an example of successful clicktivism, with over 320,000 signatures. In March 2016, Parliament created legislation to eliminate the tampon VAT, following a budget amendment by opposition Labour MP Paula Sherriff. It was expected to go into effect by April 2018 but did not do so; several British women protested publicly while displaying blood stains from their periods. On 3 October 2018, new EU VAT rules were put forward by the European Parliament which would allow EU countries to stop taxing sanitary products, but these would not come into effect until 2022. The UK left the EU in January 2020, and following the end of the transition period (at the beginning of 2021) the tampon tax was abolished in the UK, meaning there is now a zero rate of VAT applying to women's sanitary products. Research published by Tax Policy Associates in November 2022 suggested that savings resulting from the abolition of the tax had been retained by retailers, rather than passed on to women.
Tax law by jurisdiction:
Scotland In July 2017, a pilot programme began in Scotland to have free sanitary products available at schools and food banks for women who cannot afford them. The pilot scheme was launched for six months in Aberdeen, with £42,500 of funding from the devolved Scottish Government in order to address the growing scandal of "period poverty". It was believed 1,000 girls would benefit from the scheme, as there were reports of teenage girls using tissues, toilet roll, torn T-shirts, and even newspaper as makeshift sanitary products, with some girls even skipping school altogether. It was decided to launch the scheme to improve attainment and school attendance, as well as improve confidence amongst teenage girls during their period; Scotland is believed to be the first country in the world to give out free sanitary products as part of a government-sponsored initiative. Further to this half-year pilot programme, Scotland's opposition Labour Party stated their intention to introduce a bill to make this permanent.
Tax law by jurisdiction:
A study by the WHO and UNICEF showed that one out of five women in Scotland have been forced to improvise with items including toilet paper and old clothes because of the high cost of commercial products.
Tax law by jurisdiction:
The Scottish government in 2019 began providing free sanitary products for poorer students at schools, with hopes that this would be rolled out across the entire nation. A bill to make period products available for free to everyone who needs them received preliminary approval in the Scottish Parliament in February 2020, and Members of the Scottish Parliament (MSPs) approved The Period Products (Free Provision) (Scotland) Act on Tuesday 24 November 2020. Local authorities in Scotland now have a legal duty to ensure that tampons and sanitary pads are available freely to "anyone who needs them". The bill was introduced by Labour MSP Monica Lennon, who began campaigning to end period poverty in 2016. She stated that "Periods don't stop for pandemics and the work to improve access to essential tampons, pads and reusables has never been more important". The measure requires the provision of free period products in schools, colleges, and universities, as well as football clubs, restaurants, pubs, and public concert halls. The act imposes a legal duty on local authorities to make period products available free of cost. With this act Scotland became the first country in the world to provide universal access to free period products.
Tax law by jurisdiction:
United States Menstrual hygiene products are considered by many states within the United States as "tangible individual property", resulting in additional sales tax. This additional tax increases the overall price and further limits accessibility to menstrual hygiene products for lower-income women. These products are classified as medical devices but are not eligible for purchase through government-funded assistance programs. In the United States, almost all states tax "tangible individual property" but exempt non-luxury "necessities": groceries, prescriptions, prosthetics, agriculture supplies, and sometimes clothes; the exemptions vary between states. Most states charge sales tax for women's pads and tampons. Five states do not have a state sales tax (Alaska, Delaware, Montana, New Hampshire, and Oregon), and as of June 2019 the following US states specifically exempted essential hygiene products: Utah, Ohio, California, Connecticut, Florida, Illinois, Maryland, Massachusetts, Minnesota, New Jersey, New York, Nevada, Pennsylvania, and Rhode Island. California repealed the tax in its 2019 state budget, but only for the two-year duration of the budget. Seven other states have introduced such legislation, most recently Nebraska, Virginia, and Arizona. In November 2021, Michigan ended its tampon tax. Many federal assistance programs such as SNAP (Supplemental Nutrition Assistance Program) and WIC (Women, Infants and Children) do not allow the use of those funds for products such as pads or tampons, despite the products' classification as medical devices. The IRS does not classify female products as medical devices, thus blocking women from buying them with pre-tax dollars in both flexible spending accounts and health savings accounts. Recently, a movement has emerged to ensure access to the basic necessity of menstrual products for women.
Tax law by jurisdiction:
The menstrual equity movement has been gaining traction in recent years. It is based on the central tenet that period products should be affordable and accessible to women who menstruate. The movement aims to reduce the stigma around menstruation that has prevented legislative action toward achieving menstrual equity and reproductive education. Significant barriers to menstrual equity are the costs that affect women in shelters, low-income women and their daughters, LGBTQ people with uteruses, and those facing housing insecurity. In 2019, House Representative Grace Meng introduced the Menstrual Equity for All bill. The bill would ensure menstrual products are free and un-rationed in schools, jails, shelters, and all public federal buildings receiving federal funds. It proposes that menstrual products be covered under Medicaid to limit financial barriers for low-income women, and would also mandate that large employers provide free period products to employees. Since being introduced in the House, the bill has been under review by the appropriate subcommittee. There have been some changes to tampon taxes, but most of these changes are at the state or city level. On a smaller scale, individual cities have also changed their laws in favor of eliminating the tampon tax (e.g. Denver, Colorado). Maine eliminated the tax in 2022.
Tax law by jurisdiction:
California California Assemblywoman Cristina Garcia reported that California women each pay roughly US$7 per month over 40 years, amounting to US$20 million in annual taxes. Garcia and Ling Ling Chang proposed a bill to remove the tampon tax in early 2016. At the time, only a handful of the country's states exempted tampons, and several others had no state sales tax. Garcia held that women were taxed "for being women" and bore an economic burden for having no other choice but to buy these products. Garcia and Chang added that the tax was "regulatory discrimination" that disproportionately affected poor women and women of color, and that it likely persisted due to social taboos against discussing menstruation. Both houses of the California State Legislature voted to exempt tampons from taxation in June 2016, but the bill, AB-1561, was vetoed three months later by the state's governor, Jerry Brown, due to the potential loss of tax revenue. In response, Cristina Garcia co-authored AB-0479, the Common Cents Tax Reform Act, with Lorena Gonzalez Fletcher, a new measure that would offset the feminine product and diaper tax exemption by increasing the tax on hard liquor. This bill was ultimately gutted and amended with provisions on workers' compensation. In 2017, the California State Legislature passed AB 10 (Ch. 687), requiring public middle schools and high schools where at least 40% of students meet the federal poverty level to stock half of their restrooms with free tampons and sanitary napkins. The law was passed in an effort to eliminate the cost burden and keep low-income students in school during their menstrual cycle. Companies supplying the necessary feminine hygiene products (tampons and pads) for the restrooms of schools include WAXIE and Hospeco; they also supply various menstrual product dispensers with a time-delay mechanism to prevent products from being overused or abused. In June 2019, menstrual products were exempted from the sales tax in the state budget, but only for the two-year duration of the budget. In July 2021, California passed AB 150, making the menstrual-product tax exemption permanent. In September 2021, California passed AB 367, requiring public schools serving grades 6–12, the California State University, and community college districts to provide free menstrual products, and encouraging the Regents of the University of California and private institutions of higher learning to do the same.
Tax law by jurisdiction:
New York In July 2016, New York State exempted feminine hygiene products from taxation, reducing the state's tax revenue by an estimated US$10 million annually. In the court case over the "tampon tax", attorney Zoe Salzman argued for repealing the taxes on feminine menstrual products; part of the case was also a plea to refund women the taxes they had paid on these products in the past. Ultimately the taxes were repealed, but the women of New York were not refunded for past taxes. Connecticut and Illinois also removed their taxes in 2016, with Florida following suit in 2017.
Tax law by jurisdiction:
New Jersey A 2018 empirical study of New Jersey's 2005 tax break on menstrual products found that "repealing tampon taxes removes an unequal tax burden and could make menstrual hygiene products more accessible for low-income consumers". The study used data from more than 16,000 purchases made in 2004–2006 in New Jersey, Delaware, Connecticut, Maryland, and Pennsylvania, using the latter nearby states as the control group. Through a differences-in-differences approach, the authors found that after the repeal, consumer prices of menstrual products decreased by 7.3% relative to the control states. This was greater than the 6.9% sales tax, suggesting that consumers benefitted from the tax break. Upon further analysis, the study also found that the decrease in consumer prices was greater for low-income consumers than for high-income consumers (a 12.4% decrease versus a 3.9% decrease). This suggests that low-income consumers received the most benefit from the tax break, while high-income consumers shared the benefit with producers of menstrual products.
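The study's differences-in-differences logic can be illustrated with a small calculation. The following Python sketch is purely illustrative: the prices are invented stand-ins, not figures from the study, which worked from thousands of individual purchase records.

```python
# Illustrative differences-in-differences (DiD) estimate of a tax repeal's
# effect on menstrual-product prices. All prices below are hypothetical;
# the actual study used 16,000+ purchases from NJ and the control states.

# Mean prices (dollars) before and after the 2005 repeal, in the treated
# state (New Jersey) and the pooled controls (DE, CT, MD, PA).
nj_pre, nj_post = 7.00, 6.55          # hypothetical means
ctrl_pre, ctrl_post = 7.10, 7.15      # hypothetical means

# DiD: the change in NJ minus the change in the controls isolates the
# repeal's effect from region-wide price trends shared with the controls.
did = (nj_post - nj_pre) - (ctrl_post - ctrl_pre)
pct = did / nj_pre * 100

print(f"DiD estimate: {did:+.2f} dollars ({pct:+.1f}% of the pre-repeal price)")
# A drop larger in magnitude than the 6.9% sales-tax rate would indicate
# that consumers captured more than the mechanical tax saving.
```

With these made-up numbers the estimate is a 7.1% price fall, mirroring the study's finding that the observed 7.3% decrease slightly exceeded the 6.9% tax rate.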
Tax law by jurisdiction:
Washington On July 1, 2020, Washington became the 20th state to remove tax from menstrual products.
Tax law by jurisdiction:
Michigan On November 5, 2021, Michigan Governor Gretchen Whitmer signed into law bill SB 153, repealing the tax on feminine hygiene products. The bill went into effect 90 days later, on February 3, 2022.
Tax law by jurisdiction:
Other states Many states that have tampon taxes have tried and failed to repeal or eliminate the tax via legislation. US states such as Tennessee, Arizona, and Virginia have introduced legislation. In Utah, Representative Susan Duckworth introduced a bill, titled the "Hygiene Tax Act", that would have exempted menstrual hygiene products such as tampons and disposable diapers from sales tax. Legal scholars point out that when the bill was sent to the Utah taxation committee to be voted on, eight of the eleven men on the committee voted against it. In November 2019, following a special legislative session and the Governor's signature, Utah became the thirteenth US state to abolish the tampon tax, effective January 1, 2020. In November 2019, Ohio became the twelfth US state to repeal the pink or tampon tax. Representatives Greta Johnson and Brigid Kelly had introduced bills for years before one finally became law in November 2019, exempting feminine menstrual products from the state's sales tax. Legal scholars note that Ohio women still pay around four million dollars each year in local taxes on these items, from which they are not exempt. In Tennessee, a bill was sent to the Senate and House to reduce the 7% sales tax on feminine products, defined as "any product to be used by women with respect to menstruation ... [including] tampons, pads, liners, [and] cups". Neither the Senate nor the House passed the bill. In Virginia, Delegate Mark Keam introduced House Bill 952, which sought to exempt the same products as Ohio and Utah from the 5.3% sales tax. Like the bills in those two states, it was not passed.
Tax law by jurisdiction:
Kenya In 2004, Kenya became the first country to exempt menstrual products from value added tax (VAT). In 2011, Kenya exempted imported menstrual products from excise tax, and in 2016 it exempted the raw materials used for the manufacture of menstrual products from the 16% VAT and 25% excise tax. The government also allocated Ksh 240 million to the provision of free sanitary pads to girls in public schools through the National Sanitary Towel Programme. This increased to Ksh 400 million in 2015, but declined to Ksh 260 million in the 2022/2023 budget. In 2016, the Kenyan parliament introduced an amendment to the Basic Education Act guaranteeing the provision of free, sufficient and quality sanitary towels to every girl registered and enrolled in a public basic education institution who has reached puberty, along with a safe and environmentally sound mechanism for disposal of the sanitary towels. These began to be distributed in 2018. The government established a Menstrual Hygiene Management Policy in 2019.
Tax law by jurisdiction:
Canada In January 2015, the Canadian government recognised sanitary products as essential items, ending the GST on all sanitary products. The Canadian government is currently debating whether to make menstrual products free in the workplace. The Government of Canada has published a Notice of Intent in the Canada Gazette seeking feedback on providing free products in federally regulated workplaces; stakeholders and Canadians were able to give feedback until July 2, 2019. Providing free menstrual products in workplaces is expected to improve health and workplace productivity and reduce stigma around the conversation of menstruation. Under Part II of the Canada Labour Code, employers are already required to provide toilet paper, soap, warm water, and a way to dry hands. Women and gender non-conforming persons who require menstrual products make up 40% of the federal workforce, and the financial burden of sanitary products rests entirely on them; adding required sanitation products would allow for greater equality in the workplace and more opportunity for people with lower incomes. On May 28, 2015, the Canadian Federal Government voted in favour of lifting the tampon tax federally, and the tax was repealed on July 1, 2015. This was inspired by an online petition organized by Canadian Menstruators, an online advocacy group, which thousands of Canadians signed and which was presented to the Federal Government of Canada in Ottawa. Critics have pointed out that sanitary products are still subject to tariffs under Canadian tariff laws.
Tax law by jurisdiction:
China In China, menstrual products are subject to a 13% sales tax, the same as for most consumer items.
Tax law by jurisdiction:
India India has a menstruating population of approximately 355 million, of whom roughly 88% are unable to acquire safe menstrual products because of a lack of access to capital. Menstrual products are not treated as essential items and are consequently overpriced, leaving them out of reach for over 70% of Indian women who menstruate.
Activism:
Supporters of exempting these taxes call their efforts "menstrual equity", describing a social movement that strives for feminine products like tampons to be considered necessities. Activist efforts are often led by members of government. At the beginning of 2016, councilwoman Julissa Ferreras-Copeland led a tampon-tax pilot project that provided free pads and tampons at a local high school in Queens, New York; the effort has since expanded to 25 schools around New York City. Other Democrats, including Ydanis Rodriguez and council speaker Melissa Mark-Viverito, are advocating for the state legislature to stop taxing sanitary products.
Activism:
Free the Tampon, an advocate for free menstrual products, estimates that it would cost less than $5 a year per user to provide tampons and pads in restrooms at schools and businesses. Activists with United for Access organized a petition and march to put pressure on the US Department of Education to eradicate period poverty in the US. They called on the government to treat period products as health necessities, support policies that protect students who menstruate, and fund period products in school bathrooms. The campaign was built in partnership with the period poverty-focused nonprofit founded by social entrepreneur Nadya Okamoto. Okamoto is also the author of the book Period Power: a Manifesto for the Menstrual Movement, which focuses heavily on advocating against the "tampon tax". When Okamoto was 21 years old, she led her organization to host the first National Period Day on October 19, 2019, which focused on pushing legislators to eliminate the tax; the organization supported local organizers in hosting 60 rallies across all 50 states. Slovakia levies a 20% tax on sanitary products—the basic goods rate. A Slovakian film director commented that there are no plans to change the law and that eastern Europe missed elements of feminist change while living under communist government. Other campaigns have emerged, such as #Freeperiods, which encourages state policies to provide menstrual products. #Freeperiods is a campaign started by Amika George, who launched a petition aimed at encouraging the UK government to provide low-income families with subsidised menstrual products; the campaign has since grown considerably. The Free Periods initiative has partnered with The Red Box Project, a community-based initiative that provides free menstrual products and underwear to young women who struggle financially. The Red Box Project notes the importance of its initiative: according to #Freeperiods, one out of 10 girls cannot afford to purchase menstrual products, and over 137,000 girls have missed school due to period poverty. Within the Global North, tampon activism has been strong and well supported, and countries are moving to either remove tampon taxes or provide free menstrual products. In 2018, the Scottish Government became the first in the world to provide free menstrual products for students at schools and universities. Other countries have also implemented policies providing sanitary products or abolishing taxes on menstrual products: Kenya and Uganda have removed taxes on these products, and the Kenyan government also provides funding to schools that provide pads.
**Flash mob**
Flash mob:
A flash mob (or flashmob) is a group of people who assemble suddenly in a public place, perform for a brief time, then quickly disperse, often for the purposes of entertainment, satire, and artistic expression. Flash mobs may be organized via telecommunications, social media, or viral emails. The term, coined in 2003, is generally not applied to events and performances organized for the purposes of politics (such as protests), commercial advertisement, or publicity stunts that involve public relations firms or paid professionals. In these cases of a planned purpose for the social activity in question, the term smart mob is often applied instead.
Flash mob:
The term "flash rob" or "flash mob robberies", a reference to the way flash mobs assemble, has been used to describe a number of robberies and assaults perpetrated suddenly by groups of teenage youth. Bill Wasik, originator of the first flash mobs, and a number of other commentators have questioned or objected to the usage of "flash mob" to describe criminal acts. Flash mob has also been featured in some Hollywood movie series, such as Step Up.
History:
First flash mob The first flash mobs were created in Manhattan in 2003 by Bill Wasik, senior editor of Harper's Magazine. The first attempt was unsuccessful after the targeted retail store was tipped off about the plan for people to gather. Wasik avoided such problems during the first successful flash mob, which occurred on June 17, 2003, at Macy's department store, by sending participants to preliminary staging areas—in four Manhattan bars—where they received further instructions about the ultimate event and location just before the event began. More than 130 people converged upon the ninth-floor rug department of the store, gathering around an expensive rug. Anyone approached by a sales assistant was advised to say that the gatherers lived together in a warehouse on the outskirts of New York, that they were shopping for a "love rug", and that they made all their purchase decisions as a group. Subsequently, 200 people flooded the lobby and mezzanine of the Hyatt hotel in synchronized applause for about 15 seconds, and a shoe boutique in SoHo was invaded by participants pretending to be tourists on a bus trip. Wasik claimed that he created flash mobs as a social experiment designed to poke fun at hipsters and to highlight the cultural atmosphere of conformity and of wanting to be an insider or part of "the next big thing". The Vancouver Sun wrote, "It may have backfired on him ... [Wasik] may instead have ended up giving conformity a vehicle that allowed it to appear nonconforming." In another interview he said "the mobs started as a kind of playful social experiment meant to encourage spontaneity and big gatherings to temporarily take over commercial and public areas simply to show that they could".
History:
Precedents and precursors In 19th-century Tasmania, the term flash mob was used to describe a subculture consisting of female prisoners, based on the term flash language for the jargon these women used. The 19th-century Australian term flash mob referred to a segment of society, not an event, and shows no other similarities to the modern term flash mob or the events it describes. In 1973, the story "Flash Crowd" by Larry Niven described a concept similar to flash mobs: with the invention of popular and very inexpensive teleportation, an argument at a shopping mall—which happens to be covered by a news crew—quickly swells into a riot. In the story, broadcast coverage attracts the attention of other people, who use the widely available technology of the teleportation booth to swarm first that event—thus intensifying the riot—and then other events as they happen. Commenting on the social impact of such mobs, one character (articulating the police view) says, "We call them flash crowds, and we watch for them." In related short stories, such crowds are named as a prime location for illegal activities (such as pickpocketing and looting) to take place. Lev Grossman suggests that the story title is a source of the term "flash mob".
History:
Flash mobs began as a form of performance art. While they started as an apolitical act, flash mobs may share superficial similarities to political demonstrations. In the 1960s, groups such as the Yippies used street theatre to expose the public to political issues. Flash mobs can be seen as a specialized form of smart mob, a term and concept proposed by author Howard Rheingold in his 2002 book Smart Mobs: The Next Social Revolution.
Use of the term:
The first documented use of the term flash mob as it is understood today was in 2003, in a blog entry posted in the aftermath of Wasik's event. The term was inspired by the earlier term smart mob. Flash mob was added to the 11th edition of the Concise Oxford English Dictionary on July 8, 2004, which noted it as an "unusual and pointless act", separating it from other forms of smart mobs such as performances, protests, and other gatherings. Also recognized are the noun derivatives flash mobber and flash mobbing. Webster's New Millennium Dictionary of English defines flash mob as "a group of people who organize on the Internet and then quickly assemble in a public place, do something bizarre, and disperse." This definition is consistent with the original use of the term; however, both news media and promoters have subsequently used the term to refer to any form of smart mob, including political protests, a collaborative Internet denial-of-service attack, a collaborative supercomputing demonstration, and promotional appearances by pop musicians. The press has also used the term flash mob to refer to a practice in China where groups of shoppers arrange online to meet at a store in order to drive a collective bargain.
Legality:
The city of Brunswick, Germany, has stopped flash mobs by strictly enforcing an existing law requiring a permit to use any public space for an event. In the United Kingdom, a number of flash mobs have been stopped over concerns for public health and safety; the British Transport Police have urged flash mob organizers to "refrain from holding such events at railway stations".
Crime:
Referred to as flash robs, flash mob robberies, or flash robberies by the media, crimes organized by teenage youths using social media rose to international notoriety beginning in 2011. The National Retail Federation does not classify these crimes as "flash mobs" but rather as "multiple offender crimes" that utilize "flash mob tactics". In a report, the NRF noted, "multiple offender crimes tend to involve groups or gangs of juveniles who already know each other, which does not earn them the term 'flash mob'." Mark Leary, a professor of psychology and neuroscience at Duke University, said that most "flash mob thuggery" involves crimes of violence that are otherwise ordinary, but are perpetrated suddenly by large, organized groups of people: "What social media adds is the ability to recruit such a large group of people, that individuals who would not rob a store or riot on their own feel freer to misbehave without being identified." Wasik, reflecting on such crimes, commented: "It's hard for me to believe that these kids saw some YouTube video of people Christmas caroling in a food court, and said, 'Hey, we should do that, except as a robbery!' More likely, they stumbled on the simple realization (like I did back in 2003, but like lots of other people had before and have since) that one consequence of all this technology is that you can coordinate a ton of people to show up in the same place at the same time."
Crime:
He added: "These kids are taking part in what's basically a meme. They heard about it from friends, and probably saw it on YouTube, and now they're getting their chance to participate in it themselves."
Crime:
HuffPost raised the question of whether "the media was responsible for stirring things up", and added that in some cases the local authorities did not confirm the use of social media, making the "use of the term flash mob questionable". Amanda Walgrove wrote that criminals involved in such activities don't refer to themselves as "flash mobs", but that this use of the term is nonetheless appropriate. Dr. Linda Kiltz drew similar parallels between flash robs and the Occupy Movement, stating, "As the use of social media increases, the potential for more flash mobs that are used for political protest and for criminal purposes is likely to increase."
**Urban heat island**
Urban heat island:
An urban heat island (UHI) is an urban area that is significantly warmer than its surrounding rural areas due to human activities. The temperature difference is usually larger at night than during the day, and is most apparent when winds are weak. The UHI effect is most noticeable during the summer and winter. Its main cause is the modification of land surfaces. A study has shown that heat islands can be affected by proximity to different types of land cover, so that proximity to barren land makes urban land hotter and proximity to vegetation makes it cooler. Waste heat generated by energy usage is a secondary contributor. As a population center grows, it tends to expand its area and increase its average temperature. The term heat island is also used; it can refer to any area that is relatively hotter than its surroundings, but generally refers to human-disturbed areas. Monthly rainfall is greater downwind of cities, partially due to the UHI. Increased heat within urban centers lengthens growing seasons and decreases the occurrence of weak tornadoes. The UHI decreases air quality by increasing the production of pollutants such as ozone, and decreases water quality as warmer waters flow into area streams and put stress on their ecosystems.
Urban heat island:
Not all cities have a distinct urban heat island, and the heat island characteristics depend strongly on the background climate of the area in which the city is located. Effects within a city can vary significantly depending on local environmental conditions. Heat can be reduced by tree cover and green space, which act as sources of shade and promote evaporative cooling.
Urban heat island:
Other options include green roofs, passive daytime radiative cooling applications, and the use of lighter-colored surfaces and less absorptive building materials in urban areas, to reflect more sunlight and absorb less heat. Climate change is not the cause of urban heat islands, but it is causing more frequent and more intense heat waves, which in turn amplify the urban heat island effect in cities (p. 993). Compact, dense urban development may increase the urban heat island effect, leading to higher temperatures and increased exposure.
Description:
Definition A definition of urban heat island is: "The relative warmth of a city compared with surrounding rural areas" (p. 2926). This relative warmth is caused by "heat trapping due to land use, the configuration and design of the built environment, including street layout and building size, the heat-absorbing properties of urban building materials, reduced ventilation, reduced greenery and water features, and domestic and industrial heat emissions generated directly from human activities" (p. 2926).
Description:
Diurnal variability For most cities, the difference in temperature between the urban and surrounding rural area is largest at night. While the temperature difference is significant all year round, it is generally bigger in winter. The typical temperature difference is several degrees between the city and surrounding areas. The difference in temperature between an inner city and its surrounding suburbs is frequently mentioned in weather reports, as in "68 °F (20 °C) downtown, 64 °F (18 °C) in the suburbs". In the United States, the difference during the day is between 0.6–3.9 °C (1–7 °F), while the difference during the night is 1.1–2.8 °C (2–5 °F). The difference is larger for bigger cities and areas with high air humidity. Though the warmer air temperature within the UHI is generally most apparent at night, urban heat islands exhibit significant and somewhat paradoxical diurnal behavior: the air temperature difference between the UHI and the surrounding environment is large at night and small during the day. Throughout the daytime, particularly when the skies are cloudless, urban surfaces are warmed by the absorption of solar radiation. Surfaces in urban areas tend to warm faster than those of the surrounding rural areas. By virtue of their high heat capacities, urban surfaces act as a giant reservoir of heat energy; for example, concrete can hold roughly 2,000 times as much heat as an equivalent volume of air. As a result, the large daytime surface temperature within the UHI is easily seen via thermal remote sensing. As is often the case with daytime heating, this warming also has the effect of generating convective winds within the urban boundary layer. It is theorized that, due to the resulting atmospheric mixing, the air temperature perturbation within the UHI is generally minimal or nonexistent during the day, though the surface temperatures can reach extremely high levels. At night, the situation reverses. The absence of solar heating leads to decreased atmospheric convection and the stabilization of the urban boundary layer; if enough stabilization occurs, an inversion layer is formed. This traps urban air near the surface, keeping surface air warm from the still-warm urban surfaces and resulting in warmer nighttime air temperatures within the UHI. Other than the heat retention properties of urban areas, the nighttime maximum in urban canyons could also be due to the blocking of "sky view" during cooling: surfaces lose heat at night principally by radiation to the comparatively cool sky, and this is blocked by the buildings in an urban area. Radiative cooling is more dominant when wind speed is low and the sky is cloudless, and indeed the UHI is found to be largest at night in these conditions.
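The claim above that concrete holds roughly 2,000 times as much heat as an equivalent volume of air can be sanity-checked from volumetric heat capacities. The density and specific-heat figures below are typical handbook values, not numbers from this article, so the exact ratio should be read as an order-of-magnitude estimate.

```python
# Back-of-the-envelope check: heat stored per cubic metre per kelvin
# for concrete vs air, using typical handbook values (approximate).
rho_concrete, c_concrete = 2400.0, 880.0   # kg/m^3, J/(kg*K)
rho_air, c_air = 1.2, 1005.0               # kg/m^3 (near 20 C), J/(kg*K)

cv_concrete = rho_concrete * c_concrete    # ~2.1e6 J/(m^3*K)
cv_air = rho_air * c_air                   # ~1.2e3 J/(m^3*K)

print(f"concrete: {cv_concrete:.2e} J/(m^3*K)")
print(f"air:      {cv_air:.2e} J/(m^3*K)")
print(f"ratio:    {cv_concrete / cv_air:,.0f}x")  # on the order of 2,000
```

With these values the ratio comes out around 1,750, consistent with the "roughly 2,000 times" figure quoted in the text.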
Description:
Seasonal variability The urban heat island temperature difference is not only usually larger at night than during the day, but also larger in winter than in summer. This is especially true in areas where snow is common, as cities tend to hold snow for shorter periods of time than surrounding rural areas (this is due to the higher insulation capacity of cities, as well as human activities such as plowing). This decreases the albedo of the city and thereby magnifies the heating effect. Higher wind speeds in rural areas, particularly in winter, can also function to make them cooler than urban areas. Regions with distinct wet and dry seasons will exhibit a larger urban heat island effect during the dry season.
Description:
Models and simulations If a city or town has a good system of taking weather observations, the UHI can be measured directly. An alternative is to use a complex simulation of the location to calculate the UHI, or to use an approximate empirical method. Such models allow the UHI to be included in estimates of future temperature rises within cities due to climate change.
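As an example of the "approximate empirical method" mentioned above, one classic population-based relation is Oke's (1973) fit for the maximum heat island intensity under calm, clear-sky conditions. The article does not name a specific method, so treat the coefficients below as an illustration drawn from the wider literature rather than the model used here.

```python
import math

# Oke's (1973) empirical fit: maximum UHI intensity (deg C) under calm,
# clear conditions as a function of city population P. Coefficients are
# as commonly quoted in the literature; treat them as illustrative.
def uhi_max(population: int, region: str = "north_america") -> float:
    coeffs = {
        "north_america": (2.96, -6.41),
        "europe": (2.01, -4.06),
    }
    a, b = coeffs[region]
    return a * math.log10(population) + b

# Example: a city of one million inhabitants.
print(f"North America: {uhi_max(1_000_000):.1f} deg C")            # ~11.4
print(f"Europe:        {uhi_max(1_000_000, 'europe'):.1f} deg C")  # ~8.0
```

Such fits only bound the maximum intensity; a full simulation environment such as ENVI-met, mentioned below, resolves the surface-by-surface energy exchanges instead.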
Description:
Leonard O. Myrup published the first comprehensive numerical treatment to predict the effects of the urban heat island (UHI) in 1969. The heat island effect was found to be the net result of several competing physical processes. In general, reduced evaporation in the city center and the thermal properties of the city building and paving materials are the dominant parameters. Modern simulation environments include ENVI-met, which simulates all interactions between building and ground surfaces, plants and ambient air.
Causes:
There are several causes of an urban heat island (UHI); for example, dark surfaces absorb significantly more solar radiation, which causes urban concentrations of roads and buildings to heat more than suburban and rural areas during the day; materials commonly used in urban areas for pavement and roofs, such as concrete and asphalt, have significantly different thermal bulk properties (including heat capacity and thermal conductivity) and surface radiative properties (albedo and emissivity) than the surrounding rural areas. This changes the energy budget of the urban area, often leading to higher temperatures than in surrounding rural areas. Pavements, parking lots, roads, or, more generally speaking, transport infrastructure contribute significantly to the urban heat island effect; for example, pavement infrastructure is a main contributor to urban heat during summer afternoons in Phoenix, United States. Another major cause is the lack of evapotranspiration (for example, through lack of vegetation) in urban areas. The U.S. Forest Service found in 2018 that cities in the United States are losing 36 million trees each year. With a decreased amount of vegetation, cities also lose the shade and evaporative cooling effect of trees. Other causes of a UHI are geometric effects. The tall buildings within many urban areas provide multiple surfaces for the reflection and absorption of sunlight, increasing the efficiency with which urban areas are heated; this is called the "urban canyon effect". Another effect of buildings is the blocking of wind, which also inhibits cooling by convection and prevents pollutants from dissipating. Waste heat from automobiles, air conditioning, industry, and other sources also contributes to the UHI. High levels of pollution in urban areas can also increase the UHI, as many forms of pollution change the radiative properties of the atmosphere. UHI not only raises urban temperatures but also increases ozone concentrations, because ozone formation accelerates as temperature increases.
Causes:
Climate change as an amplifier Climate change is not a cause but an amplifier of the urban heat island effect. The IPCC Sixth Assessment Report from 2022 summarized the available research accordingly: "Climate change increases heat stress risks in cities [...] and amplifies the urban heat island across Asian cities at 1.5°C and 2°C warming levels, both substantially larger than under present climates [...]" (p. 66). The report goes on to say: "In a warming world, increasing air temperature makes the urban heat island effect in cities worse. One key risk is heatwaves in cities that are likely to affect half of the future global urban population, with negative impacts on human health and economic productivity" (p. 993). There are also harmful interactions between heat and built infrastructure, which increase the risk of heat stress for people living in cities (p. 993).
Impacts:
On weather and climate Aside from the effect on temperature, UHIs can produce secondary effects on local meteorology, including the altering of local wind patterns, the development of clouds and fog, the humidity, and the rates of precipitation. The extra heat provided by the UHI leads to greater upward motion, which can induce additional shower and thunderstorm activity. In addition, the UHI creates during the day a local low-pressure area where relatively moist air from its rural surroundings converges, possibly leading to more favorable conditions for cloud formation. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 20 miles (32 km) and 40 miles (64 km) downwind of cities, compared with upwind. Some cities show a total precipitation increase of 51%. One study concluded that cities change the climate in an area 2–4 times larger than their own. A 1999 comparison between urban and rural areas proposed that urban heat island effects have little influence on global mean temperature trends. Others have suggested that urban heat islands affect global climate by impacting the jet stream.
Impacts:
On human health UHIs have the potential to directly influence the health and welfare of urban residents. As UHIs are characterized by increased temperature, they can potentially increase the magnitude and duration of heat waves within cities. The number of individuals exposed to extreme temperatures is increased by UHI-induced warming. The nighttime effect of UHIs can be particularly harmful during a heat wave, as it deprives urban residents of the cool relief found in rural areas during the night. Increased temperatures have been reported to cause heat illnesses such as heat stroke, heat exhaustion, heat syncope, and heat cramps. High UHI intensity correlates with increased concentrations of air pollutants that gather at night, which can affect the next day's air quality. These pollutants include volatile organic compounds, carbon monoxide, nitrogen oxides, and particulate matter. The production of these pollutants, combined with the higher temperatures in UHIs, can quicken the production of ozone; ozone at surface level is considered a harmful pollutant. Studies suggest that increased temperatures in UHIs can increase the number of polluted days, but also note that other factors (e.g. air pressure, cloud cover, wind speed) can also affect pollution. Studies from Hong Kong have found that areas of the city with poorer outdoor urban air ventilation tended to have stronger urban heat island effects, and had significantly higher all-cause mortality compared with areas with better ventilation.
Impacts:
On water bodies and aquatic organisms UHIs also impair water quality. Hot pavement and rooftop surfaces transfer their excess heat to stormwater, which then drains into storm sewers and raises water temperatures as it is released into streams, rivers, ponds, and lakes. Additionally, increased urban water body temperatures lead to a decrease in diversity in the water. For example, in August 2001, rains over Cedar Rapids, Iowa led to a 10.5 °C (18.9 °F) rise in the nearby stream within one hour, resulting in a fish kill affecting an estimated 188 fish. Since the rain itself was comparatively cool, the rise could be attributed to the hot pavement of the city. Similar events have been documented across the American Midwest, as well as in Oregon and California. Rapid temperature changes can be stressful to aquatic ecosystems. With the temperature of nearby buildings sometimes reaching a difference of over 50 °F (28 °C) from the near-surface air temperature, precipitation warms rapidly, and run-off into nearby streams, lakes and rivers (or other bodies of water) produces excessive thermal pollution. The increase in thermal pollution has the potential to raise water temperature by 20 to 30 °F (11 to 17 °C), causing the fish species inhabiting the body of water to undergo thermal stress and shock due to the rapid change in the temperature of their habitat. Permeable pavements may reduce these effects by percolating water through the pavement into subsurface storage areas, where it can be dissipated through absorption and evaporation.
Impacts:
On animals Species that are good at colonizing can utilize conditions provided by urban heat islands to thrive in regions outside of their normal range. Examples of this include the grey-headed flying fox (Pteropus poliocephalus) and the common house gecko (Hemidactylus frenatus). Grey-headed flying foxes, found in Melbourne, Australia, colonized urban habitats following the increase in temperatures there. Increased temperatures, causing warmer winter conditions, made the city more similar in climate to the more northerly wildland habitat of the species.
Impacts:
In temperate climates, urban heat islands will extend the growing season, thereby altering the breeding strategies of inhabiting species. This is best observed in the effects that urban heat islands have on water temperature (see effects on water bodies). Urban heat islands caused by cities have altered the natural selection process: selective pressures like temporal variation in food, predation and water are relaxed, giving way to a new set of selective forces. For example, within urban habitats, insects are more abundant than in rural areas. Insects are ectotherms, meaning they depend on the temperature of the environment to control their body temperature, which makes the warmer climates of the city well suited to their thriving. A study conducted in Raleigh, North Carolina on Parthenolecanium quercifex (oak scales) showed that this particular species preferred warmer climates and was therefore found in higher abundance in urban habitats than on oak trees in rural habitats. Over time spent living in urban habitats, the species has adapted to thrive in warmer climates rather than in cooler ones.
Impacts:
On energy usage for cooling Another consequence of urban heat islands is the increased energy required for air conditioning and refrigeration in cities that are in comparatively hot climates. As of 2000, the heat island effect was estimated to cost Los Angeles about US$100 million per year in energy. Through the implementation of heat island reduction strategies, significant annual net energy savings have been calculated for northern locations such as Chicago, Salt Lake City, and Toronto. Every year in the U.S., 15% of energy goes towards the air conditioning of buildings in these urban heat islands. It was reported in 1998 that "the air conditioning demand has risen 10% within the last 40 years".
Options for reducing heat island effects:
Strategies to improve urban resilience by reducing excessive heat in cities include planting trees in cities, white roofs and light-coloured concrete, green infrastructure (including green roofs), and passive daytime radiative cooling. The temperature difference between urban areas and the surrounding suburban or rural areas can be as much as 5 °C (9.0 °F). Nearly 40 percent of that increase is due to the prevalence of dark roofs, with the remainder coming from dark-colored pavement and the declining presence of vegetation. The heat island effect can be counteracted slightly by using white or reflective materials to build houses, roofs, pavements, and roads, thus increasing the overall albedo of the city.
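The albedo argument above can be quantified with a one-line energy balance: the solar power a surface absorbs per square metre is the incident flux times (1 − albedo). The albedo and insolation values below are typical round numbers, not measurements from this article, so the output is a rough illustration.

```python
# Rough absorbed solar flux for dark vs light urban surfaces.
INSOLATION = 1000.0  # W/m^2, approximate clear-sky midday value

def absorbed_flux(albedo: float) -> float:
    """Solar power absorbed per square metre: incident flux * (1 - albedo)."""
    return INSOLATION * (1.0 - albedo)

surfaces = {
    "fresh asphalt (albedo ~0.05)": 0.05,
    "aged asphalt  (albedo ~0.15)": 0.15,
    "white roof    (albedo ~0.75)": 0.75,
}
for name, albedo in surfaces.items():
    print(f"{name}: absorbs ~{absorbed_flux(albedo):.0f} W/m^2")
```

Under these assumptions a white roof absorbs roughly a quarter of the solar power that dark asphalt does, which is why raising albedo is a first-order mitigation lever.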
Options for reducing heat island effects:
Planting trees in cities Planting trees around the city can be another way of increasing albedo and decreasing the urban heat island effect. Deciduous trees are recommended because they provide shade in the summer without blocking warmth in winter. Trees are a key feature in combating most of the urban heat island effect because they can reduce air temperatures by up to 10 °F (5.6 °C) and surface temperatures by up to 20–45 °F (11–25 °C).
Options for reducing heat island effects:
White roofs and light-coloured concrete Painting rooftops white has become a common strategy to reduce the heat island effect. Cities have many dark-colored surfaces that absorb the heat of the sun, lowering the albedo of the city. White rooftops allow high solar reflectance and high solar emittance, increasing the albedo of the city or the area where the effect is occurring. Relative to remedying the other sources of the problem, replacing dark roofing requires the least investment for the most immediate return. A cool roof made from a reflective material such as vinyl reflects at least 75 percent of the sun's rays and emits at least 70 percent of the solar radiation absorbed by the building envelope. Asphalt built-up roofs (BUR), by comparison, reflect only 6 to 26 percent of solar radiation. Using light-colored concrete has proven effective in reflecting up to 50% more light than asphalt and reducing ambient temperature. A low albedo value, characteristic of black asphalt, absorbs a large percentage of solar heat, creating warmer near-surface temperatures. By paving with light-colored concrete and replacing asphalt with light-colored concrete, communities may be able to lower average temperatures. However, research into the interaction between reflective pavements and buildings has found that, unless nearby buildings are fitted with reflective glass, solar radiation reflected off light-colored pavements can increase building temperatures, increasing air conditioning demands. There are specific paint formulations for daytime radiative cooling that reflect up to 98.1% of sunlight.
Options for reducing heat island effects:
Green infrastructure Another option is to increase the amount of well-watered vegetation. These two options can be combined with the implementation of green roofs. Green roofs are excellent insulators during the warm weather months, and the plants cool the surrounding environment. Air quality is improved as the plants absorb carbon dioxide and produce oxygen. Green roofs decrease the urban heat island effect. Green roofery is the practice of having vegetation on a roof, such as trees or a garden. The plants on the roof increase the albedo and decrease the urban heat island effect. This method has been studied and criticized because green roofs are affected by climatic conditions, green roof variables are hard to measure, and they are very complex systems. The cost efficiency of green roofs is quite high for several reasons. For one, green roofs have over double the lifespan of a conventional roof, reducing the number of roof replacements needed over time. Green roofs also provide stormwater management, reducing utility fees. Green roofs cost more initially, but over time their efficiency provides financial as well as health benefits: a conventional roof is estimated at $83.78/m2 while a green roof is estimated at $158.82/m2, as illustrated in the sketch below. Green parking lots use vegetation and surfaces other than asphalt to limit the urban heat island effect.
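The cost-efficiency argument for green roofs can be made concrete by amortizing each roof's installed cost over its service life. The installed costs below are the figures quoted above; the lifespans (20 years conventional, 45 years green, i.e. "over double") are assumptions for the sake of the sketch, and stormwater and cooling co-benefits are ignored.

```python
# Amortized cost per square metre per year, ignoring maintenance and
# co-benefits. Installed costs are the figures quoted in the text;
# the service lives are assumed for illustration only.
conv_cost, conv_life = 83.78, 20     # $/m^2, years (assumed lifespan)
green_cost, green_life = 158.82, 45  # $/m^2, years (assumed lifespan)

conv_annual = conv_cost / conv_life      # ~4.19 $/m^2 per year
green_annual = green_cost / green_life   # ~3.53 $/m^2 per year

print(f"conventional roof: ${conv_annual:.2f}/m^2 per year")
print(f"green roof:        ${green_annual:.2f}/m^2 per year")
```

Under these assumptions the green roof is already slightly cheaper per year of service before counting the stormwater and cooling benefits described above.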
Options for reducing heat island effects:
Passive daytime radiative cooling A passive daytime radiative cooling roof application can double the energy savings of a white roof, attributed to high solar reflectance and thermal emittance in the infrared window, with the highest cooling potential in hot and dry cities such as Phoenix and Las Vegas. When installed on roofs in dense urban areas, passive daytime radiative cooling panels can significantly lower outdoor surface temperatures at the pedestrian level.
Society and culture:
History of research The phenomenon was first investigated and described by Luke Howard in the 1810s, although he was not the one to name it. Howard's report, the first description of the UHI, found that the urban center of London was warmer at night than the surrounding countryside by 2.1 °C (3.7 °F). Investigations of the urban atmosphere continued throughout the nineteenth century. Between the 1920s and the 1940s, researchers in the emerging field of local climatology or microscale meteorology in Europe, Mexico, India, Japan, and the United States pursued new methods to understand the phenomenon. In 1929, Albert Peppler used the term städtische Wärmeinsel ("urban heat island" in German) in a German publication, believed to be the first instance of an equivalent term. Between 1990 and 2000, about 30 studies were published annually; by 2010, that number had increased to 100, and by 2015, it was more than 300. Leonard O. Myrup published the first comprehensive numerical treatment to predict the effects of the urban heat island in 1969. His paper surveys the UHI and criticizes then-existing theories as being excessively qualitative.
Society and culture:
Aspects of social inequality Some studies suggest that the effects of UHIs on health may be disproportionate, since the impacts may be unevenly distributed based on a variety of factors such as age, ethnicity and socioeconomic status. This raises the possibility of health impacts from UHIs being an environmental justice issue.
Society and culture:
There is a correlation between neighborhood income and tree canopy cover. Low-income neighborhoods tend to have significantly fewer trees than neighborhoods with higher incomes. Researchers hypothesized that less-well-off neighborhoods do not have the financial resources to plant and maintain trees. Affluent neighborhoods can afford more trees, on "both public and private property." Part of this is also that wealthier homeowners and communities can afford more land, which can be kept open as green space, whereas poorer ones are often rentals, where landowners try to maximize their profit by putting as much density as possible on their land.
Society and culture:
Researchers have also noted that the spread of impervious surfaces is correlated with low socioeconomic status neighborhoods across various U.S. cities and states. The presence of these materials, which include concrete, tar and asphalt, serves as a predictor of "intra-urban variation in temperature".
Society and culture:
Chief heat officers Beginning in the 2020s, a number of cities worldwide began creating Chief Heat Officer positions to organize and manage work counteracting the urban heat island effect.
Examples:
United States Bill S.4280, introduced in the U.S. Senate in 2020, would authorize the National Integrated Heat Health Information System Interagency Committee (NIHHIS) to tackle extreme heat in the United States. Successful passage of this legislation would fund NIHHIS for five years and would institute a $100 million grant program within NIHHIS to encourage and fund urban heat reduction projects, including those using cool roofs and pavements and those improving HVAC systems. As of July 22, 2020, the bill had not moved past introduction to Congress.
Examples:
The city of New York determined that the cooling potential per area was highest for street trees, followed by living roofs, light-colored surfaces, and open-space planting. From the standpoint of cost effectiveness, light surfaces, light roofs, and curbside planting have lower costs per degree of temperature reduction.
Examples:
Los Angeles A hypothetical "cool communities" program in Los Angeles projected in 1997 that urban temperatures could be reduced by approximately 3 °C (5 °F) after planting ten million trees, reroofing five million homes, and painting one-quarter of the roads, at an estimated cost of US$1 billion, giving estimated annual benefits of US$170 million from reduced air-conditioning costs and US$360 million in smog-related health savings. In a 1998 case study of the Los Angeles Basin, simulations showed that even when trees are not strategically placed in these urban heat islands, they can still aid in the minimization of pollutants and in energy reduction. It is estimated that with this wide-scale implementation, the city of Los Angeles could save $100 million annually, with most of the savings coming from cool roofs, lighter-colored pavement, and the planting of trees. With a citywide implementation, the added benefits of lower smog levels would yield at least one billion dollars of savings per year. Los Angeles-based TreePeople is an example of how tree planting can empower a community: TreePeople provides the opportunity for people to come together, build capacity and community pride, and collaborate and network with each other.
Examples:
Athens green space initiative Athens, the capital of Greece, has undertaken initiatives to reduce the urban heat island effect and reduce the impact of pollution from vehicles. To create green spaces that offer cooling, small unused plots of land are being reconfigured into pocket parks.
**Malpuech facial clefting syndrome**
Malpuech facial clefting syndrome:
Malpuech facial clefting syndrome, also called Malpuech syndrome or Gypsy type facial clefting syndrome, is a rare congenital syndrome. It is characterized by facial clefting (any type of cleft in the bones and tissues of the face, including a cleft lip and palate), a caudal appendage (a "human tail"), growth deficiency, intellectual and developmental disability, and abnormalities of the renal system (kidneys) and the male genitalia. Abnormalities of the heart and other skeletal malformations may also be present. The syndrome was initially described by Georges Malpuech and associates in 1983. It is thought to be genetically related to Juberg-Hayward syndrome. Malpuech syndrome has also been considered as part of a spectrum of congenital genetic disorders associated with similar facial, urogenital and skeletal anomalies. Termed "3MC syndrome", this proposed spectrum includes Malpuech, Michels and Mingarelli-Carnevale (OSA) syndromes. Mutations in the COLEC11 and MASP1 genes are believed to be a cause of these syndromes. The incidence of Malpuech syndrome is unknown. The pattern of inheritance is autosomal recessive: a defective (mutated) gene associated with the syndrome is located on an autosome, and the syndrome occurs when two copies of this defective gene are inherited.
Signs and symptoms:
Malpuech syndrome is congenital, being apparent at birth. It is characterized by facial clefting, observed and noted in the initial description of the syndrome as a cleft lip and palate. Facial clefting refers to clefts in the bones, muscles and tissues of the face, including the lips and palate. The forms of cleft lip and palate typically seen with Malpuech syndrome are midline (down the middle of the lip and palate) or bilateral (affecting both sides of the mouth and palate). Facial clefting encompasses a wide range of severity, from minor anomalies such as a bifid (split) uvula, to a cleft lip and palate, to major developmental and structural defects of the facial bones and soft tissues. Clefting of the lip and palate occurs during embryogenesis. Additional facial and ortho-dental anomalies that have been described with the syndrome include: hypertelorism (unusually wide-set eyes, sometimes reported as telecanthus), narrow palpebral fissures (the separation between the upper and lower eyelids), ptosis (drooping) of the eyelids, frontal bossing (prominent eyebrow ridge) with synophrys, highly arched eyebrows, a wide nasal root and flattened nasal tip, malar hypoplasia (underdeveloped upper cheek bone), micrognathia (an undersized lower jaw), and prominent incisors. Auditory anomalies include an enlarged ear ridge, and hearing impairment associated with congenital otitis media ("glue ear", inflammation of the middle ear) and sensorineural hearing loss. Another feature identified with Malpuech syndrome is a caudal appendage, a congenital outgrowth stemming from the coccyx (tailbone). Present in many non-human animal species as a typical tail, this feature when seen in an infant has been described as a "human tail". It was observed by Guion-Almeida (1995) in three individuals from Brazil; on X-rays the appendage variously appeared as a prominent protrusion of the coccyx, while on physical examination it resembles a nodule-like stub of an animal tail. Deficiencies such as intellectual disability, learning disability, growth retardation and developmental delay are common. Psychiatric manifestations that have been reported with the syndrome include psychotic behavior, obsessive–compulsive disorder, loss of inhibition, hyperactivity, aggression, fear of physical contact, and compulsive actions like echolalia (repeating the words spoken by another person). Neuromuscular tics have also been noted. Urogenital abnormalities, affecting the urinary and reproductive systems, are common with the syndrome. Malpuech et al. (1983) and Kerstjens-Frederikse et al. (2005) variously reported in affected males a micropenis, hypospadias (a congenital mislocation of the urinary meatus), cryptorchidism (ectopic or undescended testes), a bifid (split) and underdeveloped scrotum, and an obstructive urethral valve. Reardon et al. (2001) also reported an affected boy with left renal agenesis, an enlarged and downwardly displaced right kidney, cryptorchidism and a shawl scrotum. Other malformations that have been noted with the syndrome are omphalocele and umbilical hernia.
Signs and symptoms:
Congenital abnormalities of the heart have also been observed with Malpuech syndrome. Chinen and Naritomi (1995) described the sixth child of a healthy Japanese couple, who had features consistent with the disorder. This two-month-old male infant was also affected by cardiac anomalies including patent ductus arteriosus (PDA) and a ventricular septal defect. The opening in the ductus arteriosus associated with PDA had been surgically repaired when the infant was 38 days of age. A number of minor skeletal aberrations were also reported in the infant, including wormian bones at the lambdoid sutures.
Genetics:
Malpuech syndrome, as with the other disorders within the 3MC syndrome consideration, is caused by mutations in the COLEC11 and MASP1 genes. In an investigation by Rooryck et al. (2011), eleven families affected by 3MC syndrome were studied, which resulted in the identification of mutations in these two genes. Both genes encode proteins of the lectin complement pathway, which plays a role in the complement system of innate, or non-specific, immunity in humans and other species. The COLEC11, or CL-K1, gene is located on the short arm of chromosome 2 (2p25.3) in humans. The CL-K1 protein is a C-type lectin belonging to the collectin family of these proteins. Other than its role in innate immunity, the protein is thought to be involved in the development of tissues including craniofacial cartilage, the heart and the kidney during embryogenesis. This function in facial development was corroborated through study of the zebrafish, where mutations in its version of CL-K1 contributed to craniofacial abnormalities (such as craniofacial clefts), possibly associated with errors in neural crest cell migration.
Genetics:
The MASP1, or mannan-binding lectin serine protease 1, gene is located on the long arm of human chromosome 3 at 3q27-q28. The protein associates with mannan-binding lectin, which plays a role in innate immunity by binding to pathogens such as viruses, including HIV. As described by Sirmaci et al. (2010), three Turkish individuals from two consanguineous families (the children of relatives such as cousins are said to be in a consanguineous family) with various characteristics of 3MC syndrome, including facial dysmorphism and a caudal appendage, were evaluated. Investigation of homologous chromosomes through gene mapping revealed an autozygous region (a location on a chromosome where both alleles of a gene originate from a common ancestor) at chromosome 3q27 in both families. In one family, a missense mutation in MASP1 at this location resulted in the replacement of the amino acid glycine by arginine at position 687; the mutation cosegregated with the observed phenotype. In individuals from the second family, DNA sequencing of MASP1 showed a nonsense mutation affecting the tryptophan at position 290, which also cosegregated with the phenotype. Both mutations occur in a form of MASP1 known to process IGFBP5; loss of this function associated with mutation of MASP1 disrupts the availability of insulin-like growth factor during craniofacial and musculoskeletal development in the embryonic period. These results indicate that mutations in MASP1 are responsible for an array of features found with malformation disorders including Malpuech syndrome. The syndrome is inherited in an autosomal recessive manner. This means the defective genes responsible for the disorder (COLEC11, MASP1) are located on autosomes (chromosomes 2 and 3 are autosomes), and two copies of the defective gene (one inherited from each parent) are required in order to be born with the disorder. The parents of an individual with an autosomal recessive disorder both carry one copy of the defective gene, but usually do not experience any signs or symptoms of the disorder.
Diagnosis:
It is suggested that the diagnostic criteria for Malpuech syndrome should include cleft lip and/or palate, typical associated facial features, and at least two of the following: urogenital anomalies, caudal appendage, and growth or developmental delay.
Diagnosis:
Due to the relatively high rate of hearing impairment found with the disorder, it too may be considered in the diagnosis. Another congenital disorder, Wolf-Hirschhorn (Pitt-Rogers-Danks) syndrome, shares features of Malpuech syndrome in its diagnostic criteria. Because of this overlap, karyotyping (microscopic analysis of the chromosomes of an individual) can be employed to distinguish the two: deletions in the short arm of chromosome 4 would be revealed with Wolf-Hirschhorn syndrome, whereas the karyotype of an individual with Malpuech syndrome alone will be normal, favoring a Malpuech syndrome diagnosis.
Diagnosis:
Classification: Malpuech syndrome has been shown to have physical, or phenotypical, similarities with several other genetic disorders. A report by Reardon et al. (2001) of a nine-year-old boy exhibiting facial, caudal and urogenital anomalies consistent with Malpuech syndrome, who also had skeletal malformations indicative of Juberg-Hayward syndrome, suggests that the two disorders may be allelic (caused by different mutations of the same gene).

Along with several other disorders that have similar or overlapping features and autosomal recessive inheritance, Malpuech syndrome has been considered to belong under the designation "3MC syndrome". Titomanlio et al. (2005) described a three-year-old female known to have Michels syndrome. In their review of the physical similarities between Michels, Malpuech and Mingarelli-Carnevale syndromes (particularly the facial appearance, including instances of cleft lip and palate and ptosis, and the similarity of congenital abdominal and urogenital anomalies), they believed the syndromes may represent a spectrum of genetic disorders rather than three individual disorders. They initially suggested this spectrum could be named 3MC (Michels-Malpuech-Mingarelli-Carnevale) syndrome. This conclusion and the name 3MC syndrome were supported by Leal et al. (2008), who reported a brother and sister with an array of symptoms that overlapped the various syndromes. The 3MC syndrome designation was further supported by Rooryck et al. (2011) in an elaboration of its cause.
Management:
Many of the congenital malformations found with Malpuech syndrome can be corrected surgically. These include cleft lip and palate, omphalocele, urogenital and craniofacial abnormalities, skeletal deformities such as a caudal appendage or scoliosis, and hernias of the umbilicus. The primary area of concern for these procedures, when applied to a neonate with congenital disorders including Malpuech syndrome, regards the logistics of anesthesia. Methods like tracheal intubation for management of the airway during general anesthesia can be hampered by the even smaller, or maldeveloped, mouth of the infant. For regional anesthesia, methods like spinal blocking are more difficult where scoliosis is present. In a 2010 report by Kiernan et al., a four-year-old girl with Malpuech syndrome was being prepared for an unrelated tonsillectomy and adenoidectomy. While she was undergoing intubation, insertion of a laryngoscope, needed to identify the airway for placement of the endotracheal tube, was made troublesome by the presence of micrognathia attributed to the syndrome. After replacement with a laryngoscope of adjusted size, intubation proceeded normally, and successful general anesthesia followed.

A rare follow-up of a male with Malpuech syndrome was presented by Priolo et al. (2007). Born at term from an uneventful pregnancy and delivery, the infant underwent a surgical repair of a cleft lip and palate. No problems were reported with the procedure. A heart abnormality, atrial septal defect, was also apparent but required no intervention. At age three years, intellectual disability, hyperactivity and obsessive compulsive disorder were diagnosed; hearing impairment was diagnosed at age six and managed with the use of hearing aids. Over the course of the decade that followed, a number of psychiatric evaluations were performed. At age 14, he exhibited a fear of physical contact; at age 15, he experienced a severe psychotic episode characterized by agitation and a loss of sociosexual inhibition. This array of symptoms was treated pharmacologically (with prescription medications). He maintained a low level of mental deficiency by age 17, with moments of compulsive echolalia.
History:
The incidence of Malpuech syndrome has not been determined. A 1999 report by Crisponi et al. suggested that only about 12 individuals worldwide were affected by the disorder at that time. The syndrome was first reported by Guillaume Malpuech and colleagues in 1983, observed in four children of unspecified gender in what was described as a gypsy family. The children included three siblings and their first cousin; the family was known to be highly consanguineous.
**False singular**
False singular:
In English grammar, a false singular occurs when a singular noun ending in an s or z sound is understood as a plural, from which a new singular is then constructed. The false singular is a form of back-formation.
Some false singulars have become standard English. For example, pea was originally a false singular from pease (plural peasen); the old word survives in the phrase pease porridge. The non-standard historical forms Chinee and Portuguee are also false singulars, from Chinese and Portuguese.
**Halyard bend**
Halyard bend:
The halyard bend is a way to attach the end of a rope at a right angle to a cylindrical object such as a beam.
Tying:
1. Wrap the end two or more times around the object.
2. Hook the end around the standing part and under all the wrappings, so that it comes out by the last wrap.
3. Turn the end back and cross it over the wrappings, tucking it under the first wrap.

The halyard bend may be considered the "double-loop-around, and single-tuck-under" version of the timber hitch, which itself is usually tied as "single-loop-around, and double-tuck-under".
**Secondary growth**
Secondary growth:
In botany, secondary growth is the growth that results from cell division in the cambia or lateral meristems and that causes the stems and roots to thicken, while primary growth is growth that occurs as a result of cell division at the tips of stems and roots, causing them to elongate, and gives rise to primary tissue. Secondary growth occurs in most seed plants, but monocots usually lack secondary growth. If they do have secondary growth, it differs from the typical pattern of other seed plants.
Secondary growth:
The formation of secondary vascular tissues from the cambium is a characteristic feature of dicotyledons and gymnosperms. In certain monocots, the vascular tissues are also increased after the primary growth is completed but the cambium of these plants is of a different nature. In the living pteridophytes this feature is extremely rare, only occurring in Isoetes.
Lateral meristems:
In many vascular plants, secondary growth is the result of the activity of the two lateral meristems, the cork cambium and vascular cambium. Arising from lateral meristems, secondary growth increases the width of the plant root or stem, rather than its length. As long as the lateral meristems continue to produce new cells, the stem or root will continue to grow in diameter. In woody plants, this process produces wood, and shapes the plant into a tree with a thickened trunk.
Lateral meristems:
Because this growth usually ruptures the epidermis of the stem or roots, plants with secondary growth usually also develop a cork cambium. The cork cambium gives rise to thickened cork cells to protect the surface of the plant and reduce water loss. If this is kept up over many years, this process may produce a layer of cork. In the case of the cork oak it will yield harvestable cork.
In nonwoody plants:
Secondary growth also occurs in many nonwoody plants, e.g. tomato, potato tuber, carrot taproot and sweet potato tuberous root. A few long-lived leaves also have secondary growth.
Abnormal secondary growth:
Abnormal secondary growth does not follow the pattern of a single vascular cambium producing xylem to the inside and phloem to the outside, as in ancestral lignophytes. Some dicots have anomalous secondary growth; in Bougainvillea, for example, a series of cambia arise outside the oldest phloem.

Ancestral monocots lost their secondary growth, and their stele changed in a way that could not be reversed without major evolutionary changes that are very unlikely to occur. Monocots either have no secondary growth, as is the ancestral case, or they have an "anomalous secondary growth" of some type, or, in the case of palms, they enlarge their diameter in a process that may or may not be called secondary growth, depending on the definition given to the term. Palm trees increase their trunk diameter through division and enlargement of parenchyma cells, which is termed "primary gigantism" because there is no production of secondary xylem and phloem tissues, or sometimes "diffuse secondary growth". In some other monocot stems, as in Yucca and Dracaena with anomalous secondary growth, a cambium forms, but it produces vascular bundles and parenchyma internally and just parenchyma externally. Some monocot stems increase in diameter due to the activity of a primary thickening meristem, which is derived from the apical meristem.
**Fenoprop**
Fenoprop:
Fenoprop, also called 2,4,5-TP, is the organic compound 2-(2,4,5-trichlorophenoxy)propionic acid. It is a phenoxy herbicide and a plant growth regulator, an analog of 2,4,5-T in which the latter's acetic acid sidechain is replaced with a propionate group (with an extra CH3). The addition of this extra methyl group creates a chiral centre in the molecule and useful biological activity is found only in the (2R)-isomer. The compound's mechanism of action is to mimic the auxin growth hormone indoleacetic acid (IAA). When sprayed on plants it induces rapid, uncontrolled growth. As with 2,4,5-T, fenoprop is toxic to shrubs and trees.
Fenoprop:
The name Silvex was used in the USA, but the compound has been banned there since 1985. According to the Environmental Protection Agency, its greatest use was as a postemergence herbicide for control of woody plants and broadleaf herbaceous weeds in rice and bluegrass turf, in sugarcane, in rangeland improvement programs, and on lawns.
Fenoprop and some of its esters were in use from 1945 but are now obsolete.
**Cantic octagonal tiling**
Cantic octagonal tiling:
In geometry, the tritetratrigonal tiling or shieldotritetragonal tiling is a uniform tiling of the hyperbolic plane. Its Schläfli symbol is t1,2(4,3,3). It can also be named the cantic octagonal tiling, h2{8,3}.
**Signal-regulatory protein alpha**
Signal-regulatory protein alpha:
Signal regulatory protein α (SIRPα) is a regulatory membrane glycoprotein from the SIRP family, expressed mainly by myeloid cells and also by stem cells and neurons.
Signal-regulatory protein alpha:
SIRPα acts as an inhibitory receptor and interacts with CD47, a broadly expressed transmembrane protein also called the "don't eat me" signal. This interaction negatively controls effector functions of innate immune cells, such as host cell phagocytosis. SIRPα diffuses laterally on the macrophage membrane and accumulates at a phagocytic synapse to bind CD47 and signal 'self', which inhibits the cytoskeleton-intensive process of phagocytosis by the macrophage. This is analogous to the self signals provided by MHC class I molecules to NK cells via Ig-like or Ly49 receptors.
Structure:
The cytoplasmic region of SIRPα is highly conserved between rats, mice and humans. It contains a number of tyrosine residues, which likely act as ITIMs. Upon CD47 ligation, SIRPα is phosphorylated and recruits phosphatases such as SHP1 and SHP2. The extracellular region contains three immunoglobulin superfamily domains: a single V-set and two C1-set IgSF domains. SIRPβ and SIRPγ have similar extracellular structures but different cytoplasmic regions, giving contrasting types of signals. SIRPα polymorphisms are found in the ligand-binding IgSF V-set domain, but they do not affect ligand binding. One idea is that the polymorphism is important to protect the receptor from pathogen binding.
Ligands:
SIRPα recognizes CD47, an anti-phagocytic signal that distinguishes live cells from dying cells. CD47 has a single Ig-like extracellular domain and five membrane spanning regions. The interaction between SIRPα and CD47 can be modified by endocytosis or cleavage of the receptor, or interaction with surfactant proteins. Surfactant protein A and D are soluble ligands, highly expressed in the lungs, that bind to the same region of SIRPα as CD47 and can therefore competitively block binding.
Signalling:
The extracellular domain of SIRPα binds to CD47 and transmits intracellular signals through its cytoplasmic domain. CD47 binding is mediated through the NH2-terminal V-like domain of SIRPα. The cytoplasmic region contains four ITIMs that become phosphorylated after binding of ligand. The phosphorylation mediates activation of the tyrosine phosphatase SHP2. SIRPα has also been shown to bind the phosphatase SHP1, the adaptor protein SCAP2 and FYN-binding protein. Recruitment of SHP phosphatases to the membrane leads to the inhibition of myosin accumulation at the cell surface and results in the inhibition of phagocytosis.
Cancer:
Cancer cells highly express CD47, which activates SIRPα and inhibits macrophage-mediated destruction. In one study, researchers engineered high-affinity variants of SIRPα that antagonized CD47 on cancer cells and caused increased phagocytosis of cancer cells. Another study (in mice) found that anti-SIRPα antibodies helped macrophages to reduce cancer growth and metastasis, alone and in synergy with other cancer treatments.
**ClinVar**
ClinVar:
ClinVar is a public archive with free access to reports on the relationships between human variations and phenotypes, with supporting evidence. The database includes germline and somatic variants of any size, type or genomic location. Interpretations are submitted by clinical testing laboratories, research laboratories, locus-specific databases, UniProt, expert panels and practical guidelines.
**Contract manufacturing organization**
Contract manufacturing organization:
A contract manufacturing organization (CMO), more recently referred to as a contract development and manufacturing organization (CDMO) to avoid confusion with the pharma-industry acronyms for Chief Medical Officer or Clinical Monitoring Organization, is a company that serves other companies in the pharmaceutical industry on a contract basis, providing comprehensive services from drug development through drug manufacturing. This allows major pharmaceutical companies to outsource those aspects of the business, which can help with scalability or can allow the major company to focus on drug discovery and drug marketing instead.
Contract manufacturing organization:
Services offered by CDMOs include, but are not limited to: pre-formulation, formulation development, stability studies, method development, pre-clinical and Phase I clinical trial materials, late-stage clinical trial materials, formal stability, scale-up, registration batches and commercial production. CDMOs are contract manufacturers, yet they provide development as a standard part of their services.
Their customers expect not only competitive pricing, but also regulatory compliance, flexibility in production capability and on-time delivery. Overall, the CMO is required to comply with good manufacturing practice by its clients and by regulatory bodies such as the Food and Drug Administration.
Overview:
The pharmaceutical market uses outsourcing services from providers in the form of contract research organizations (CROs), which work on very early-stage drug development at very small scale, providing medicinal chemistry services. These are now often called CDROs, as they provide some small-scale development work. CDMOs work on the scale-up and later stages of drug development, often preparing materials ranging from hundreds of grams to multi-kilo amounts. As a drug moves through the various clinical stages, the volumes tend to grow as well; commercial-scale amounts can range up to metric tons. Over the years, the concept of a comprehensive single-source provider from drug development (a one-stop shop) through commercial manufacture of drug substance and drug product has been tried with varying success.
Overview:
CDMOs are a response to the competitive international nature of the pharmaceutical market as well as the increasing demand for outsourced services. The best-positioned service providers focus on a specific technology or dosage form and promote end-to-end continuity and efficiency for their outsourcing clients. With lower-cost international manufacturers capturing an increasing percentage of the contract manufacturing market, specialization may be an effective hedge against loss of market share.
History:
Before the financial crisis of 2007–2008, 75% of the candidates for outsourced services were small and mid-sized biotechnology and pharmaceutical companies. Following the financial crash in 2008, the CMO industry started to be funded by private equity as a result of substantial growth and more qualified management. The one-stop CDMO concept could be the direction the industry is heading, by offering the whole spectrum of development services (e.g. development, production and analysis). The acquisitions finalized in 2017 in the CMO and CDMO industry brought some of these companies to a level that allows them to compete with global bio/pharma companies; the value of the mergers and acquisitions in 2017 was likely to exceed $20 billion.

Another aspect of these acquisitions involves CMOs acquiring manufacturing sites from bio/pharma companies. In 2017, Pfizer established a manufacturing site in Liscate, Italy, which was followed that same year by AstraZeneca in Reims, France. Novartis Sandoz acquired a site in Boucherville, Canada in 2018, as did Glaxo Smith Kline, which began manufacturing out of South Carolina in the United States. Samsung Biologics built three manufacturing plants with a capacity of more than 360,000 liters, making it the world's largest contract-based manufacturer in the biopharmaceutical sector at a single site as of 2018.

The industry has experienced an increase in private equity investment, and this has led to a consolidation of choices in the CDMO industry as many larger CDMOs have been formed. Many of these aim to be larger-scale suppliers in the CDMO environment, but the number of attractive acquisition targets is limited. One could argue that this has had both positive and negative effects on the industry: larger pharma companies like the idea of a larger CDMO, while smaller pharma companies tend to find it more difficult to get the kind of service they expect.
Advantages:
Bio/pharma companies used to build and staff dedicated manufacturing capacity for drugs in development, only to see it cancelled if the product failed in Phase III of clinical research; working with a CDMO limits that financial risk. Using a CDMO also allows drug and biologic manufacturers to take advantage of specific expertise and capability. Some CDMOs specialize in manufacturing specialty products or formulations that some pharmaceutical companies may not have the capability to produce in house. In these situations, contracting with a CDMO may be a faster and less costly solution than developing new manufacturing capabilities.
Disadvantages:
The pharmaceutical client using the services of a CDMO does not have direct control of the project in regard to scheduling, cost, quality, or accountability, yet should be heavily invested in working closely with the CDMO partner to ensure success. Data security can be an issue when considering a CDMO, as intellectual property and other proprietary data are exchanged between client and service provider.
Disadvantages:
One of the major risks for the client is the lack of control over the CDMO's compliance: for example, when an FDA warning letter is issued, a resulting interruption of production may cause a major delay or interruption of shipping, so it is critical to properly vet the selected CDMO. The rise of the CDMO industry has led to increased inspection activity from various divisions of the Food and Drug Administration (e.g. the Center for Biologics Evaluation and Research or the Center for Drug Evaluation and Research).
**Dobson unit**
Dobson unit:
The Dobson unit (DU) is a unit of measurement of the amount of a trace gas in a vertical column through the Earth's atmosphere. It originated, and continues to be primarily used in respect to, atmospheric ozone, whose total column amount, usually termed "total ozone", and sometimes "column abundance", is dominated by the high concentrations of ozone in the stratospheric ozone layer.
Dobson unit:
The Dobson unit is defined as the thickness (in units of 10 μm) of that layer of pure gas which would be formed by the total column amount at standard conditions for temperature and pressure (STP). This is sometimes referred to as a 'milli-atmo-centimeter'. A typical column amount of 300 DU of atmospheric ozone therefore would form a 3 mm layer of pure gas at the surface of the Earth if its temperature and pressure conformed to STP.
Dobson unit:
The Dobson unit is named after Gordon Dobson, a researcher at the University of Oxford who in the 1920s built the first instrument to measure total ozone from the ground, making use of a double prism monochromator to measure the differential absorption of different bands of solar ultraviolet radiation by the ozone layer. This instrument, called the Dobson ozone spectrophotometer, has formed the backbone of the global network for monitoring atmospheric ozone and was the source of the discovery in 1984 of the Antarctic ozone hole.
Ozone:
NASA uses a baseline value of 220 DU for ozone. This was chosen as the starting point for observations of the Antarctic ozone hole, since values of less than 220 Dobson units were not found before 1979. Direct measurements over Antarctica also show that column ozone levels of less than 220 Dobson units result from ozone loss due to chlorine and bromine compounds.
Sulfur dioxide:
In addition, Dobson units are often used to describe total column densities of sulfur dioxide, which occurs in the atmosphere in small amounts due to the combustion of fossil fuels, from biological processes releasing dimethyl sulfide, or by natural combustion such as forest fires. Large amounts of sulfur dioxide may be released into the atmosphere as well by volcanic eruptions. The Dobson unit is used to describe total column amounts of sulfur dioxide because it appeared in the early days of ozone remote sensing on ultraviolet satellite instruments (such as TOMS).
Derivation:
The Dobson unit arises from the ideal gas law $PV = nRT$, where P and V are pressure and volume respectively, n is the number of moles of gas, R is the gas constant (8.314 J/(mol·K)), and T is the temperature in kelvins (K).
The number density of air is the number of molecules or atoms per unit volume:
$$n_{\text{air}} = \frac{N_A\,n}{V},$$
where $N_A$ is Avogadro's number. Substituting the ideal gas law, the number density of air can be found from pressure, temperature and the gas constant:
$$n_{\text{air}} = \frac{N_A\,P}{RT}.$$
The number density (molecules per volume) of air at standard temperature and pressure (T = 273 K and P = 101325 Pa) is, by this equation,
$$n_{\text{air}} = \frac{\left(6.02\times10^{23}\ \tfrac{\text{molecules}}{\text{mol}}\right)\times 101325\ \text{Pa}}{\left(8.314\ \tfrac{\text{J}}{\text{mol·K}}\right)\times 273\ \text{K}}.$$
With the unit conversion of joules to pascal cubic meters, this gives
$$n_{\text{air}} = 2.69\times10^{25}\ \text{molecules}\cdot\text{m}^{-3}.$$
Derivation:
A Dobson unit is the total amount of a trace gas per unit area. In atmospheric sciences, this is referred to as a column density. How, though, do we go from units of molecules per cubic meter (a volume) to molecules per square meter (an area)? This must be done by integration: to obtain a column density, we integrate the number density over a height. Per the definition of Dobson units, 1 DU corresponds to 0.01 mm of trace gas when compressed down to sea level at standard temperature and pressure. So if we integrate the number density of air from 0 to 0.01 mm, we find the column density that is equal to 1 DU:
$$1\ \text{DU} = \int_{0}^{0.01\ \text{mm}} 2.69\times10^{25}\ \text{molecules}\cdot\text{m}^{-3}\,\mathrm{d}z = 2.69\times10^{25}\ \text{molecules}\cdot\text{m}^{-3}\times 10^{-5}\ \text{m} = 2.69\times10^{20}\ \text{molecules}\cdot\text{m}^{-2}.$$
Derivation:
And thus we arrive at the value of 1 DU: $2.69\times10^{20}$ molecules per square meter.
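The arithmetic above is easy to check numerically. Below is a minimal Python sketch (our own illustration, not part of the original derivation) that recomputes the number density of air at STP and the column density corresponding to 1 DU:

```python
# Number density of air at STP via the ideal gas law, then the column
# density of the 0.01 mm layer that defines 1 Dobson unit.
N_A = 6.02e23     # Avogadro's number, molecules/mol
P = 101325.0      # standard pressure, Pa
R = 8.314         # gas constant, J/(mol K) = Pa m^3/(mol K)
T = 273.0         # standard temperature, K

n_air = N_A * P / (R * T)    # molecules per cubic metre, ~2.69e25
one_DU = n_air * 0.01e-3     # integrate over 0.01 mm = 1e-5 m

print(f"n_air = {n_air:.3e} molecules/m^3")
print(f"1 DU  = {one_DU:.3e} molecules/m^2")   # ~2.69e20
```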
**Indigenous bundle**
Indigenous bundle:
In mathematics, an indigenous bundle on a Riemann surface is a fiber bundle with a flat connection associated to some complex projective structure. Indigenous bundles were introduced by Robert C. Gunning (1967). Indigenous bundles for curves over p-adic fields were introduced by Shinichi Mochizuki (1996) in his study of p-adic Teichmüller theory.
**Jordan matrix**
Jordan matrix:
In the mathematical discipline of matrix theory, a Jordan matrix, named after Camille Jordan, is a block diagonal matrix over a ring R (whose identities are the zero 0 and one 1), where each block along the diagonal, called a Jordan block, has the following form:
$$J_{\lambda,n} = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}.$$
Definition:
Every Jordan block is specified by its dimension n and its eigenvalue λ∈R , and is denoted as Jλ,n. It is an n×n matrix of zeroes everywhere except for the diagonal, which is filled with λ and for the superdiagonal, which is composed of ones.
Any block diagonal matrix whose blocks are Jordan blocks is called a Jordan matrix. This (n1 + ⋯ + nr) × (n1 + ⋯ + nr) square matrix, consisting of r diagonal blocks, can be compactly indicated as Jλ1,n1⊕⋯⊕Jλr,nr or diag(Jλ1,n1,…,Jλr,nr) , where the i-th Jordan block is Jλi,ni.
For example, the matrix $J_{0,3}\oplus J_{i,2}\oplus J_{i,2}\oplus J_{7,3}$ is a 10 × 10 Jordan matrix with a 3 × 3 block with eigenvalue 0, two 2 × 2 blocks with eigenvalue the imaginary unit i, and a 3 × 3 block with eigenvalue 7. Its Jordan-block structure may equally be written as diag(J0,3, Ji,2, Ji,2, J7,3).
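As an illustration, the block-diagonal structure above can be assembled directly in Python; the jordan_block helper below is our own, and scipy.linalg.block_diag builds the direct sum:

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, n):
    """Return the n x n Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(n, dtype=complex) + np.eye(n, k=1, dtype=complex)

# The 10 x 10 example: J_{0,3} (+) J_{i,2} (+) J_{i,2} (+) J_{7,3}
J = block_diag(jordan_block(0, 3),
               jordan_block(1j, 2),
               jordan_block(1j, 2),
               jordan_block(7, 3))
print(J.shape)   # (10, 10)
```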
Linear algebra:
Any n × n square matrix A whose elements are in an algebraically closed field K is similar to a Jordan matrix J, also in $M_n(K)$, which is unique up to a permutation of its diagonal blocks. J is called the Jordan normal form of A and corresponds to a generalization of the diagonalization procedure. A diagonalizable matrix is similar, in fact, to a special case of Jordan matrix: the matrix whose blocks are all 1 × 1.

More generally, given a Jordan matrix $J=J_{\lambda_1,m_1}\oplus J_{\lambda_2,m_2}\oplus\cdots\oplus J_{\lambda_N,m_N}$, that is, whose kth diagonal block ($1\leq k\leq N$) is the Jordan block $J_{\lambda_k,m_k}$ and whose diagonal elements $\lambda_k$ may not all be distinct, the geometric multiplicity of $\lambda\in K$ for the matrix J, indicated as $\operatorname{gmul}_J\lambda$, corresponds to the number of Jordan blocks whose eigenvalue is λ. The index of an eigenvalue λ for J, indicated as $\operatorname{idx}_J\lambda$, is defined as the dimension of the largest Jordan block associated to that eigenvalue.
Linear algebra:
The same goes for all the matrices A similar to J, so $\operatorname{idx}_A\lambda$ can be defined accordingly with respect to the Jordan normal form of A for any of its eigenvalues $\lambda\in\operatorname{spec}A$. In this case one can check that the index of λ for A is equal to its multiplicity as a root of the minimal polynomial of A (whereas, by definition, its algebraic multiplicity for A, $\operatorname{mul}_A\lambda$, is its multiplicity as a root of the characteristic polynomial of A; that is, $\det(A-xI)\in K[x]$). An equivalent necessary and sufficient condition for A to be diagonalizable in K is that all of its eigenvalues have index equal to 1; that is, its minimal polynomial has only simple roots.
Linear algebra:
Note that knowing a matrix's spectrum with all of its algebraic/geometric multiplicities and indexes does not always allow for the computation of its Jordan normal form (this may be a sufficient condition only for spectrally simple, usually low-dimensional matrices): the Jordan decomposition is, in general, a computationally challenging task. From the vector space point of view, the Jordan decomposition is equivalent to finding an orthogonal decomposition (that is, via direct sums of eigenspaces represented by Jordan blocks) of the domain which the associated generalized eigenvectors make a basis for.
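For small matrices, the decomposition can nevertheless be computed symbolically. A brief SymPy sketch follows (the 2 × 2 matrix is our own illustrative choice: it is defective, with a double eigenvalue 2 but only one eigenvector, so its Jordan form is a single 2 × 2 block):

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [-1, 3]])   # characteristic polynomial (x - 2)^2, one eigenvector
P, J = A.jordan_form()     # A = P * J * P^{-1}
print(J)                   # Matrix([[2, 1], [0, 2]]): a single Jordan block J_{2,2}
assert sp.simplify(P * J * P.inv() - A) == sp.zeros(2, 2)
```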
Functions of matrices:
Let $A\in M_n(\mathbb{C})$ (that is, an n × n complex matrix) and let $C\in\mathrm{GL}_n(\mathbb{C})$ be the change-of-basis matrix to the Jordan normal form of A; that is, A = C−1JC. Now let f(z) be a holomorphic function on an open set Ω such that $\operatorname{spec}A\subset\Omega\subseteq\mathbb{C}$; that is, the spectrum of the matrix is contained inside the domain of holomorphy of f. Let
$$f(z)=\sum_{k=0}^{\infty}a_k(z-z_0)^k$$
be the power series expansion of f around $z_0\in\Omega$, which will hereinafter be supposed to be 0 for simplicity's sake. The matrix f(A) is then defined via the following formal power series:
$$f(A)=\sum_{k=0}^{\infty}a_k A^k,$$
which is absolutely convergent with respect to the Euclidean norm of $M_n(\mathbb{C})$. To put it another way, f(A) converges absolutely for every square matrix whose spectral radius is less than the radius of convergence of f around 0, and it is uniformly convergent on any compact subsets of $M_n(\mathbb{C})$ satisfying this property in the matrix Lie group topology.
Functions of matrices:
The Jordan normal form allows the computation of functions of matrices without explicitly computing an infinite series, which is one of the main applications of Jordan matrices. Using the facts that the kth power ($k\in\mathbb{N}_0$) of a block diagonal matrix is the block diagonal matrix whose blocks are the kth powers of the respective blocks, that is,
$$(A_1\oplus A_2\oplus A_3\oplus\cdots)^k = A_1^k\oplus A_2^k\oplus A_3^k\oplus\cdots,$$
and that $A^k=C^{-1}J^kC$, the above matrix power series becomes
$$f(A)=C^{-1}f(J)C=C^{-1}\left(\bigoplus_{j=1}^{N}f\left(J_{\lambda_j,m_j}\right)\right)C,$$
where the last series need not be computed explicitly via power series of every Jordan block. In fact, if $\lambda\in\Omega$, any holomorphic function of a Jordan block $f(J_{\lambda,n})=f(\lambda I+Z)$ has a finite power series around $\lambda I$ because $Z^n=0$. Here, Z is the nilpotent part of $J_{\lambda,n}$, and $Z^k$ has all 0's except for 1's along the kth superdiagonal. Thus f of the Jordan block is the following upper triangular matrix:
$$f(J_{\lambda,n})=\sum_{k=0}^{n-1}\frac{f^{(k)}(\lambda)}{k!}Z^k=\begin{pmatrix} f(\lambda) & f'(\lambda) & \tfrac{f''(\lambda)}{2} & \cdots & \tfrac{f^{(n-1)}(\lambda)}{(n-1)!} \\ & f(\lambda) & f'(\lambda) & \ddots & \vdots \\ & & \ddots & \ddots & \tfrac{f''(\lambda)}{2} \\ & & & f(\lambda) & f'(\lambda) \\ & & & & f(\lambda) \end{pmatrix}.$$
As a consequence of this, the computation of any function of a matrix is straightforward whenever its Jordan normal form and its change-of-basis matrix are known. For example, using $f(z)=1/z$, the inverse of $J_{\lambda,n}$ is
$$J_{\lambda,n}^{-1}=\sum_{k=0}^{n-1}\frac{(-Z)^k}{\lambda^{k+1}}.$$
Also, spec f(A) = f(spec A); that is, every eigenvalue $\lambda\in\operatorname{spec}A$ corresponds to the eigenvalue $f(\lambda)$ of f(A), but it has, in general, different algebraic multiplicity, geometric multiplicity and index. The algebraic multiplicity of $f(\lambda)$ may be computed as the sum of the algebraic multiplicities of all eigenvalues μ of A with $f(\mu)=f(\lambda)$. The function f(T) of a linear transformation T between vector spaces can be defined in a similar way according to the holomorphic functional calculus, where Banach space and Riemann surface theories play a fundamental role. In the case of finite-dimensional spaces, both theories perfectly match.
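The finite sum for a function of a Jordan block can be verified numerically. The following sketch is our own check, with f = exp, so that every derivative $f^{(k)}(\lambda)$ equals $e^\lambda$; it compares the truncated Taylor formula against SciPy's matrix exponential:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, n = 2.0, 4
Z = np.eye(n, k=1)            # nilpotent part: Z**n == 0
J = lam * np.eye(n) + Z       # the Jordan block J_{lam,n}

# f(J) = sum_{k=0}^{n-1} f^(k)(lam)/k! * Z^k; for f = exp, f^(k)(lam) = e^lam
fJ = sum(np.exp(lam) / factorial(k) * np.linalg.matrix_power(Z, k)
         for k in range(n))

assert np.allclose(fJ, expm(J))   # matches the full matrix exponential
```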
Dynamical systems:
Now suppose a (complex) dynamical system is simply defined by the equation
$$\dot{z}(t)=A(c)\,z(t),$$
where $z:\mathbb{R}_+\to\mathcal{R}$ is the (n-dimensional) curve parametrization of an orbit on the Riemann surface $\mathcal{R}$ of the dynamical system, and A(c) is an n × n complex matrix whose elements are complex functions of a d-dimensional parameter $c\in\mathbb{C}^d$. Even if $A\in M_n\left(C^0\left(\mathbb{C}^d\right)\right)$ (that is, A depends continuously on the parameter c), the Jordan normal form of the matrix is continuously deformed almost everywhere on $\mathbb{C}^d$, but, in general, not everywhere: there is some critical submanifold of $\mathbb{C}^d$ on which the Jordan form abruptly changes its structure whenever the parameter crosses or simply "travels" around it (monodromy). Such changes mean that several Jordan blocks (either belonging to different eigenvalues or not) join into a unique Jordan block, or vice versa (that is, one Jordan block splits into two or more different ones). Many aspects of bifurcation theory for both continuous and discrete dynamical systems can be interpreted with the analysis of functional Jordan matrices.
Dynamical systems:
From the tangent space dynamics, this means that the orthogonal decomposition of the dynamical system's phase space changes and, for example, different orbits gain periodicity, or lose it, or shift from a certain kind of periodicity to another (such as period-doubling; cf. the logistic map).
In a sentence: the qualitative behaviour of such a dynamical system may substantially change under the versal deformation of the Jordan normal form of A(c).
Linear ordinary differential equations:
The simplest example of a dynamical system is a system of linear, constant-coefficient, ordinary differential equations; that is, let $A\in M_n(\mathbb{C})$ and $z_0\in\mathbb{C}^n$:
$$\dot{z}(t)=Az(t),\qquad z(0)=z_0,$$
whose direct closed-form solution involves computation of the matrix exponential:
$$z(t)=e^{tA}z_0.$$
Another way, provided the solution is restricted to the local Lebesgue space of n-dimensional vector fields $z\in L^1_{\mathrm{loc}}(\mathbb{R}_+)^n$, is to use its Laplace transform $Z(s)=\mathcal{L}[z](s)$. In this case
$$Z(s)=(sI-A)^{-1}z_0.$$
The matrix function $(A-sI)^{-1}$ is called the resolvent matrix of the differential operator $\frac{\mathrm{d}}{\mathrm{d}t}-A$. It is meromorphic with respect to the complex parameter $s\in\mathbb{C}$, since its matrix elements are rational functions whose denominator is equal, for all entries, to det(A − sI). Its polar singularities are the eigenvalues of A, whose order as poles equals their index for it; that is, $\operatorname{ord}_\lambda(A-sI)^{-1}=\operatorname{idx}_A\lambda$.
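A quick numerical illustration of the closed-form solution is given below (the matrix A and initial state z0 are illustrative choices of ours, not from the text); scipy.linalg.expm computes the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# dz/dt = A z, z(0) = z0  =>  z(t) = expm(t A) @ z0
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2, so z(t) decays
z0 = np.array([1.0, 0.0])

for t in (0.0, 0.5, 1.0):
    print(t, expm(t * A) @ z0)
```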
**Undertow (water waves)**
Undertow (water waves):
In physical oceanography, undertow is the undercurrent that moves offshore while waves approach the shore. Undertow is a natural and universal feature for almost any large body of water; it is a return flow compensating for the onshore-directed average transport of water by the waves in the zone above the wave troughs. The undertow's flow velocities are generally strongest in the surf zone, where the water is shallow and the waves are high due to shoaling.

In popular usage, the word undertow is often misapplied to rip currents. An undertow occurs everywhere underneath shore-approaching waves, whereas rip currents are localized narrow offshore currents occurring at certain locations along the coast.
Oceanography:
An "undertow" is a steady, offshore-directed compensation flow, which occurs below waves near the shore. Physically, nearshore, the wave-induced mass flux between wave crest and trough is onshore directed. This mass transport is localized in the upper part of the water column, i.e. above the wave troughs. To compensate for the amount of water being transported towards the shore, a second-order (i.e. proportional to the wave height squared), offshore-directed mean current takes place in the lower section of the water column. This flow – the undertow – affects the nearshore waves everywhere, unlike rip currents localized at certain positions along the shore.The term undertow is used in scientific coastal oceanography papers. The distribution of flow velocities in the undertow over the water column is important as it strongly influences the on- or offshore transport of sediment. Outside the surf zone there is a near-bed onshore-directed sediment transport induced by Stokes drift and skewed-asymmetric wave transport. In the surf zone, strong undertow generates a near-bed offshore sediment transport. These antagonistic flows may lead to sand bar formation where the flows converge near the wave breaking point, or in the wave breaking zone.
Oceanography:
Seaward mass flux: An exact relation for the mass flux of a nonlinear periodic wave on an inviscid fluid layer was established by Levi-Civita in 1924. In a frame of reference according to Stokes' first definition of wave celerity, the mass flux $M_w$ of the wave is related to the wave's kinetic energy density $E_k$ (integrated over depth and thereafter averaged over wavelength) and phase speed c through:
$$M_w=\frac{2E_k}{c}.$$
Oceanography:
Similarly, Longuet-Higgins showed in 1975 that, for the common situation of zero mass flux towards the shore (i.e. Stokes' second definition of wave celerity), normal-incident periodic waves produce a depth- and time-averaged undertow velocity
$$\bar{u}=-\frac{2E_k}{\rho c h},$$
with h the mean water depth and ρ the fluid density. The positive flow direction of $\bar{u}$ is in the wave propagation direction.
Oceanography:
For small-amplitude waves, there is equipartition of kinetic energy ($E_k$) and potential energy ($E_p$):
$$E_w=E_k+E_p\approx 2E_k\approx 2E_p,$$
with $E_w$ the total energy density of the wave, integrated over depth and averaged over horizontal space. Since in general the potential energy $E_p$ is much easier to measure than the kinetic energy, the wave energy is approximately $E_w\approx\tfrac18\rho g H^2$ (with H the wave height). So
$$\bar{u}\approx-\frac{1}{8}\frac{gH^2}{ch}.$$
Oceanography:
For irregular waves, the required wave height is the root-mean-square wave height $H_{\text{rms}}\approx\sqrt{8}\,\sigma$, with σ the standard deviation of the free-surface elevation. The potential energy is $E_p=\tfrac12\rho g\sigma^2$ and $E_w\approx\rho g\sigma^2$.
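For a rough sense of magnitude, the depth-averaged formula can be evaluated with illustrative numbers. In the Python sketch below, the phase speed is estimated with the shallow-water relation c ≈ √(gh), which is an added assumption of ours, not part of the text above:

```python
import math

g = 9.81    # gravitational acceleration, m/s^2
H = 1.0     # wave height, m (illustrative)
h = 2.0     # mean water depth, m (illustrative)
c = math.sqrt(g * h)   # shallow-water phase speed estimate (assumption)

u_mean = -g * H**2 / (8 * c * h)   # depth- and time-averaged undertow
print(f"undertow ~ {u_mean:.2f} m/s")   # about -0.14 m/s (offshore-directed)
```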
The distribution of the undertow velocity over the water depth is a topic of ongoing research.
Confusion with rip currents:
In contrast to undertow, rip currents are responsible for the great majority of drownings close to beaches. When a swimmer enters a rip current, it starts to carry them offshore. The swimmer can exit the rip current by swimming at right angles to the flow, parallel to the shore, or by simply treading water or floating until the rip releases them. However, drowning can occur when swimmers exhaust themselves by trying unsuccessfully to swim directly against the flow of a rip.
Confusion with rip currents:
On the United States Lifesaving Association website, it is noted that some uses of the word "undertow" are incorrect: A rip current is a horizontal current. Rip currents do not pull people under the water—they pull people away from shore. Drowning deaths occur when people pulled offshore are unable to keep themselves afloat and swim to shore. This may be due to any combination of fear, panic, exhaustion, or lack of swimming skills.
Confusion with rip currents:
In some regions, rip currents are referred to by other, incorrect terms such as "rip tides" and "undertow". We encourage exclusive use of the correct term—rip currents. Use of other terms may confuse people and negatively impact public education efforts.
**Voice (phonetics)**
Voice (phonetics):
Voice or voicing is a term used in phonetics and phonology to characterize speech sounds (usually consonants). Speech sounds can be described as either voiceless (otherwise known as unvoiced) or voiced.
The term, however, is used to refer to two separate concepts. Voicing can refer to the articulatory process in which the vocal folds vibrate; this is its primary use in phonetics, where it describes phones, which are particular speech sounds.
Voice (phonetics):
It can also refer to a classification of speech sounds that tend to be associated with vocal cord vibration but may not actually be voiced at the articulatory level. That is the term's primary use in phonology: to describe phonemes; in phonetics, its primary use is to describe phones.

For example, voicing accounts for the difference between the pair of sounds associated with the English letters ⟨s⟩ and ⟨z⟩. The two sounds are transcribed as [s] and [z] to distinguish them from the English letters, which have several possible pronunciations, depending on the context. If one places the fingers on the voice box (i.e., the location of the Adam's apple in the upper throat), one can feel a vibration while [z] is pronounced but not with [s]. (For a more detailed, technical explanation, see modal voice and phonation.) In most European languages, with a notable exception being Icelandic, vowels and other sonorants (consonants such as m, n, l, and r) are modally voiced.
Voice (phonetics):
Yidiny has no underlyingly voiceless consonants, only voiced ones. When used to classify speech sounds, voiced and unvoiced are merely labels used to group phones and phonemes together for the purposes of classification.
Notation:
The International Phonetic Alphabet has distinct letters for many voiceless and voiced pairs of consonants (the obstruents), such as [p b], [t d], [k ɡ], [q ɢ]. In addition, there is a diacritic for voicedness: ⟨◌̬⟩. Diacritics are typically used with letters for prototypically voiceless sounds.
In Unicode, the symbols are encoded U+032C ◌̬ COMBINING CARON BELOW and U+0325 ◌̥ COMBINING RING BELOW.
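Because these are combining characters, they can be attached to any base letter in software; a small Python illustration:

```python
# Combining diacritics: caron below marks voicing, ring below marks devoicing.
voiced_s = "s" + "\u032C"     # [s̬]: voiceless letter marked as voiced
devoiced_z = "z" + "\u0325"   # [z̥]: voiced letter marked as devoiced
print(voiced_s, devoiced_z)
```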
Notation:
The extensions to the International Phonetic Alphabet have a notation for partial voicing and devoicing as well as for prevoicing: Partial voicing can mean light but continuous voicing, discontinuous voicing, or discontinuities in the degree of voicing. For example, ₍s̬₎ could be an [s] with (some) voicing in the middle and ₍z̥₎ could be [z] with (some) devoicing in the middle.
Notation:
Partial voicing can also be indicated in the normal IPA with transcriptions like [ᵇb̥iˑ] and [ædᵈ̥].
In English:
The distinction between the articulatory use of voice and the phonological use rests on the distinction between phone (represented between square brackets) and phoneme (represented between slashes). The difference is best illustrated by a rough example.
The English word nods is made up of a sequence of phonemes, represented symbolically as /nɒdz/, or the sequence of /n/, /ɒ/, /d/, and /z/. Each symbol is an abstract representation of a phoneme. That awareness is an inherent part of speakers' mental grammar that allows them to recognise words.
In English:
However, phonemes are not sounds in themselves. Rather, phonemes are, in a sense, converted to phones before being spoken. The /z/ phoneme, for instance, can actually be pronounced as either the [s] phone or the [z] phone since /z/ is frequently devoiced, even in fluent speech, especially at the end of an utterance. The sequence of phones for nods might be transcribed as [nɒts] or [nɒdz], depending on the presence or strength of this devoicing. While the [z] phone has articulatory voicing, the [s] phone does not have it.
In English:
What complicates the matter is that for English, consonant phonemes are classified as either voiced or voiceless even though it is not the primary distinctive feature between them. Still, the classification is used as a stand-in for phonological processes, such as vowel lengthening, which occurs before voiced consonants but not before unvoiced consonants, or vowel quality changes (the sound of the vowel) in some dialects of English that occur before unvoiced but not voiced consonants. Such processes allow English speakers to continue to perceive a difference between voiced and voiceless consonants even when the devoicing of the former would otherwise make them sound identical to the latter.
In English:
English has four pairs of fricative phonemes that can be divided into a table by place of articulation and voicing. The voiced fricatives can readily be felt to have voicing throughout the duration of the phone especially when they occur between vowels.
In English:
However, in the class of consonants called stops, such as /p, t, k, b, d, ɡ/, the contrast is more complicated for English. The "voiced" sounds do not typically feature articulatory voicing throughout the sound. The difference between the unvoiced stop phonemes and the voiced stop phonemes is not just a matter of whether articulatory voicing is present or not. Rather, it includes when voicing starts (if at all), the presence of aspiration (airflow burst following the release of the closure) and the duration of the closure and aspiration.
In English:
English voiceless stops are generally aspirated at the beginning of a stressed syllable, and in the same context, their voiced counterparts are voiced only partway through. In narrower phonetic transcription, the voiced symbols may be used only to represent the presence of articulatory voicing, and aspiration is represented with a superscript h.
In English:
When the consonants come at the end of a syllable, however, what distinguishes them is quite different. Voiceless phonemes are typically unaspirated, glottalized and the closure itself may not even be released, making it sometimes difficult to hear the difference between, for example, light and like. However, auditory cues remain to distinguish between voiced and voiceless sounds, such as what has been described above, like the length of the preceding vowel.
In English:
Other English sounds, the vowels and sonorants, are normally fully voiced. However, they may be devoiced in certain positions, especially after aspirated consonants, as in coffee, tree, and play in which the voicing is delayed to the extent of missing the sonorant or vowel altogether.
Degrees of voicing:
There are two variables to degrees of voicing: intensity (discussed under phonation), and duration (discussed under voice onset time). When a sound is described as "half voiced" or "partially voiced", it is not always clear whether that means that the voicing is weak (low intensity) or if the voicing occurs during only part of the sound (short duration). In the case of English, it is the latter.
Degrees of voicing:
Juǀʼhoansi and some of its neighboring languages are typologically unusual in having contrastive partially-voiced consonants. They have aspirate and ejective consonants, which are normally incompatible with voicing, in voiceless and voiced pairs. The consonants start out voiced but become voiceless partway through and allow normal aspiration or ejection. They are [b͡pʰ, d͡tʰ, d͡tsʰ, d͡tʃʰ, ɡ͡kʰ] and [d͡tsʼ, d͡tʃʼ] and a similar series of clicks.
Voice and tenseness:
There are languages with two sets of contrasting obstruents that are labelled /p t k f s x …/ vs. /b d ɡ v z ɣ …/ even though there is no involvement of voice (or voice onset time) in that contrast. That happens, for instance, in several Alemannic German dialects. Because voice is not involved, this is explained as a contrast in tenseness, called a fortis and lenis contrast.
Voice and tenseness:
There is a hypothesis that the contrast between fortis and lenis consonants is related to the contrast between voiceless and voiced consonants. That relation is based on sound perception as well as on sound production, where consonant voice, tenseness and length are only different manifestations of a common sound feature.
**Coining (metalworking)**
Coining (metalworking):
Coining is a form of precision stamping in which a workpiece is subjected to a sufficiently high stress to induce plastic flow on the surface of the material. A beneficial feature is that in some metals, the plastic flow reduces surface grain size, and work hardens the surface, while the material deeper in the part retains its toughness and ductility. The term comes from the initial use of the process: manufacturing of coins.
Coining (metalworking):
Coining is used to manufacture parts for all industries and is commonly used when high relief or very fine features are required. For example, it is used to produce coins, badges, buttons, precision-energy springs and precision parts with small or polished surface features.
Coining (metalworking):
Coining is a cold working process, similar in other respects to forging, which takes place at elevated temperature; it uses a great deal of force to plastically deform a workpiece so that it conforms to a die. Coining can be done using a gear driven press, a mechanical press, or, more commonly, a hydraulically actuated press. Coining typically requires higher-tonnage presses than stamping, because the workpiece is plastically deformed and not actually cut, as in some other forms of stamping; the coining process is therefore preferred when a high-tonnage press is available.
Coining in electronic industry:
In the soldering of electronic components, bumps formed on bonding pads to enhance adhesion are further flattened by the coining process. Unlike typical coining applications, in this case the goal of coining is to create a flat, rather than patterned, surface.
**Tetrahemihexahedron**
Tetrahemihexahedron:
In geometry, the tetrahemihexahedron or hemicuboctahedron is a uniform star polyhedron, indexed as U4. It has 7 faces (4 triangles and 3 squares), 12 edges, and 6 vertices. Its vertex figure is a crossed quadrilateral. Its Coxeter–Dynkin diagram is (although this is a double covering of the tetrahemihexahedron).
Tetrahemihexahedron:
It is the only non-prismatic uniform polyhedron with an odd number of faces. Its Wythoff symbol is 3/2 3 | 2, but that represents a double covering of the tetrahemihexahedron with eight triangles and six squares, paired and coinciding in space. (It can more intuitively be seen as two coinciding tetrahemihexahedra.) It is a hemipolyhedron. The "hemi" part of the name means some of the faces form a group with half as many members as some regular polyhedron—here, three square faces form a group with half as many faces as the regular hexahedron, better known as the cube—hence hemihexahedron. Hemi faces are also oriented in the same direction as the regular polyhedron's faces. The three square faces of the tetrahemihexahedron are, like the three facial orientations of the cube, mutually perpendicular.
Tetrahemihexahedron:
The "half-as-many" characteristic also means that hemi faces must pass through the center of the polyhedron, where they all intersect each other. Visually, each square is divided into four right triangles, with two visible from each side.
Related surfaces:
It is a non-orientable surface. It is unique as the only uniform polyhedron with an Euler characteristic of 1 (V − E + F = 6 − 12 + 7 = 1) and is hence a projective polyhedron, yielding a representation of the real projective plane very similar to the Roman surface.
Related polyhedra:
It has the same vertices and edges as the regular octahedron. It also shares 4 of the 8 triangular faces of the octahedron, but has three additional square faces passing through the centre of the polyhedron.
The dual figure is the tetrahemihexacron.
It is 2-covered by the cuboctahedron, which accordingly has the same abstract vertex figure (two triangles and two squares: 3.4.3.4) and twice the vertices, edges, and faces. It has the same topology as the abstract polyhedron hemi-cuboctahedron.
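The double covering is consistent with a quick Euler-characteristic check using the counts given in this article:

```python
def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

print(euler_characteristic(6, 12, 7))     # tetrahemihexahedron: 1 (projective plane)
print(euler_characteristic(12, 24, 14))   # cuboctahedron: 2 (sphere), twice each count
```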
It may also be constructed as a crossed triangular cuploid. All cuploids and their duals are topologically projective planes.
Tetrahemihexacron: The tetrahemihexacron is the dual of the tetrahemihexahedron, and is one of nine dual hemipolyhedra.
Related polyhedra:
Since the hemipolyhedra have faces passing through the center, the dual figures have corresponding vertices at infinity; properly, on the real projective plane at infinity. In Magnus Wenninger's Dual Models, they are represented with intersecting prisms, each extending in both directions to the same vertex at infinity, in order to maintain symmetry. In practice the model prisms are cut off at a certain point that is convenient for the maker. Wenninger suggested these figures are members of a new class of stellation figures, called stellation to infinity. However, he also suggested that strictly speaking they are not polyhedra because their construction does not conform to the usual definitions.
Related polyhedra:
Topologically it is considered to contain seven vertices. The three vertices considered at infinity (the real projective plane at infinity) correspond directionally to the three vertices of the hemi-octahedron, an abstract polyhedron. The other four vertices exist at alternate corners of a central cube (a demicube, in this case a tetrahedron).
**Freudenthal suspension theorem**
Freudenthal suspension theorem:
In mathematics, and specifically in the field of homotopy theory, the Freudenthal suspension theorem is the fundamental result leading to the concept of stabilization of homotopy groups and ultimately to stable homotopy theory. It explains the behavior of simultaneously taking suspensions and increasing the index of the homotopy groups of the space in question. It was proved in 1937 by Hans Freudenthal.
Freudenthal suspension theorem:
The theorem is a corollary of the homotopy excision theorem.
Statement of the theorem:
Let X be an n-connected pointed space (a pointed CW-complex or pointed simplicial set). The map X→Ω(ΣX) induces a map πk(X)→πk(Ω(ΣX)) on homotopy groups, where Ω denotes the loop functor and Σ denotes the reduced suspension functor. The suspension theorem then states that the induced map on homotopy groups is an isomorphism if k ≤ 2n and an epimorphism if k = 2n + 1.
Statement of the theorem:
A basic result on loop spaces gives the relation πk(Ω(ΣX))≅πk+1(ΣX) so the theorem could otherwise be stated in terms of the map πk(X)→πk+1(ΣX), with the small caveat that in this case one must be careful with the indexing.
Statement of the theorem:
Proof: As mentioned above, the Freudenthal suspension theorem follows quickly from homotopy excision; this proof is in terms of the natural map $\pi_k(X)\to\pi_{k+1}(\Sigma X)$. If a space X is n-connected, then the pair of spaces (CX, X) is (n + 1)-connected, where CX is the reduced cone over X; this follows from the relative homotopy long exact sequence. We can decompose ΣX as two copies of CX, say $(CX)_+$ and $(CX)_-$, whose intersection is X. Then, homotopy excision says that the inclusion map
$$((CX)_+,X)\subset(\Sigma X,(CX)_-)$$
induces isomorphisms on $\pi_i$ for $i<2n+2$ and a surjection on $\pi_{2n+2}$. From the same relative long exact sequence, $\pi_i(X)=\pi_{i+1}(CX,X)$, and since in addition cones are contractible, $\pi_i(\Sigma X,(CX)_-)=\pi_i(\Sigma X)$.
Statement of the theorem:
Putting this all together, we get
$$\pi_i(X)=\pi_{i+1}((CX)_+,X)=\pi_{i+1}(\Sigma X,(CX)_-)=\pi_{i+1}(\Sigma X)$$
for $i+1<2n+2$, i.e. $i\leq 2n$, as claimed above; for $i=2n+1$ the left and right maps are isomorphisms, regardless of how connected X is, and the middle one is a surjection by excision, so the composition is a surjection as claimed.
Corollary 1: Let $S^n$ denote the n-sphere and note that it is (n − 1)-connected, so that the groups $\pi_{n+k}(S^n)$ stabilize for $n\geq k+2$ by the Freudenthal theorem. These groups represent the kth stable homotopy group of spheres.
Corollary 2: More generally, for fixed k ≥ 1, k ≤ 2n for sufficiently large n, so that any n-connected space X will have corresponding stabilized homotopy groups. These groups are actually the homotopy groups of an object corresponding to X in the stable homotopy category.
**Cichoń's diagram**
Cichoń's diagram:
In set theory, Cichoń's diagram or Cichon's diagram is a table of 10 infinite cardinal numbers related to the set theory of the reals displaying the provable relations between these cardinal characteristics of the continuum. All these cardinals are greater than or equal to ℵ1 , the smallest uncountable cardinal, and they are bounded above by 2ℵ0 , the cardinality of the continuum. Four cardinals describe properties of the ideal of sets of measure zero; four more describe the corresponding properties of the ideal of meager sets (first category sets).
Definitions:
Let I be an ideal of a fixed infinite set X, containing all finite subsets of X. We define the following "cardinal coefficients" of I:

$$\operatorname{add}(I)=\min\{|\mathcal{A}| : \mathcal{A}\subseteq I \wedge \textstyle\bigcup\mathcal{A}\notin I\}.$$
The "additivity" of I is the smallest number of sets from I whose union is not in I any more. As any ideal is closed under finite unions, this number is always at least $\aleph_0$; if I is a σ-ideal, then $\operatorname{add}(I)\geq\aleph_1$.

$$\operatorname{cov}(I)=\min\{|\mathcal{A}| : \mathcal{A}\subseteq I \wedge \textstyle\bigcup\mathcal{A}=X\}.$$
The "covering number" of I is the smallest number of sets from I whose union is all of X. As X itself is not in I, we must have add(I) ≤ cov(I).

$$\operatorname{non}(I)=\min\{|A| : A\subseteq X \wedge A\notin I\}.$$
The "uniformity number" of I (sometimes also written unif(I)) is the size of the smallest set not in I. By our assumption on I, add(I) ≤ non(I).

$$\operatorname{cof}(I)=\min\{|\mathcal{A}| : \mathcal{A}\subseteq I \wedge (\forall B\in I)(\exists A\in\mathcal{A})(B\subseteq A)\}.$$
Definitions:
The "cofinality" of I is the cofinality of the partial order (I, ⊆). It is easy to see that we must have non(I) ≤ cof(I) and cov(I) ≤ cof(I).Furthermore, the "bounding number" or "unboundedness number" b and the "dominating number" d are defined as follows: min {|F|:F⊆NN∧(∀g∈NN)(∃f∈F)(∃∞n∈N)(g(n)<f(n))}, min {|F|:F⊆NN∧(∀g∈NN)(∃f∈F)(∀∞n∈N)(g(n)<f(n))}, where " ∃∞n∈N " means: "there are infinitely many natural numbers n such that …", and " ∀∞n∈N " means "for all except finitely many natural numbers n we have …".
Diagram:
Let B be the σ-ideal of those subsets of the real line that are meager (or "of the first category") in the euclidean topology, and let L be the σ-ideal of those subsets of the real line that are of Lebesgue measure zero. Then the inequalities displayed in Cichoń's diagram hold, where an arrow from x to y means that x ≤ y. In addition, the following two relations hold: add(B) = min{cov(B), 𝔟} and cof(B) = max{non(B), 𝔡}. It turns out that the inequalities described by the diagram, together with the relations mentioned above, are all the relations between these cardinals that are provable in ZFC, in the following limited sense. Let A be any assignment of the cardinals ℵ1 and ℵ2 to the 10 cardinals in Cichoń's diagram. Then, if A is consistent with the diagram's relations and also satisfies the two additional relations, A can be realized in some model of ZFC.
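The diagram referred to above is usually drawn as follows (a sketch of the standard presentation, with every arrow again meaning ≤; in addition ℵ1 ≤ add(L) and cof(L) ≤ 2^ℵ0):

```latex
\begin{array}{ccccccc}
\operatorname{cov}(L) & \longrightarrow & \operatorname{non}(B) & \longrightarrow & \operatorname{cof}(B) & \longrightarrow & \operatorname{cof}(L)\\
\uparrow & & \uparrow & & \uparrow & & \uparrow\\
 & & \mathfrak{b} & \longrightarrow & \mathfrak{d} & & \\
\uparrow & & \uparrow & & \uparrow & & \uparrow\\
\operatorname{add}(L) & \longrightarrow & \operatorname{add}(B) & \longrightarrow & \operatorname{cov}(B) & \longrightarrow & \operatorname{non}(L)
\end{array}
```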
Diagram:
For larger continuum sizes, the situation is less clear. It is consistent with ZFC that all of the cardinals in Cichoń's diagram are simultaneously different, apart from add(B) and cof(B) (which are equal to other entries), but (as of 2019) it remains open whether all combinations of the cardinal orderings consistent with the diagram are consistent.
Some inequalities in the diagram (such as "add(I) ≤ cov(I)") follow immediately from the definitions. The inequalities cov(L) ≤ non(B) and cov(B) ≤ non(L) are classical theorems and follow from the fact that the real line can be partitioned into a meager set and a set of measure zero.
Remarks:
The British mathematician David Fremlin named the diagram after the Polish mathematician from Wrocław, Jacek Cichoń. The continuum hypothesis, under which 2ℵ0 equals ℵ1, would turn all of these inequalities into equalities.
Martin's axiom, a weakening of the continuum hypothesis, implies that all cardinals in the diagram (except perhaps ℵ1) are equal to 2ℵ0. Similar diagrams can be drawn for cardinal characteristics of higher cardinals κ, for κ strongly inaccessible, which assort various cardinals between κ+ and 2κ.
**Mixed receptive-expressive language disorder**
Mixed receptive-expressive language disorder:
Mixed receptive-expressive language disorder (DSM-IV 315.32) is a communication disorder in which both the receptive and expressive areas of communication may be affected in any degree, from mild to severe. Children with this disorder have difficulty understanding words and sentences. This impairment is classified by deficiencies in expressive and receptive language development that are not attributable to sensory deficits, nonverbal intellectual deficits, a neurological condition, environmental deprivation or psychiatric impairments. Research illustrates that 2% to 4% of five-year-olds have mixed receptive-expressive language disorder. This distinction is made when children have issues in expressive language skills, the production of language, and when children also have issues in receptive language skills, the understanding of language. Those with mixed receptive-expressive language disorder have a normal left-right anatomical asymmetry of the planum temporale and parietale, but a reduced left hemisphere functional specialization for language. As measured by cerebral blood flow (SPECT) during phonemic discrimination tasks, children with mixed receptive-expressive language disorder do not exhibit the expected predominant left hemisphere activation. Mixed receptive-expressive language disorder is also known as receptive-expressive language impairment (RELI) or receptive language disorder.
Classification:
If assessed on the Wechsler Adult Intelligence Scale, for instance, symptoms of mixed receptive-expressive language disorder may show as relatively low scores for Information, Vocabulary and Comprehension (perhaps below the 25th percentile). If a person has difficulty with specific types of concepts, for example spatial terms, such as 'over', 'under', 'here' and 'there', they may also have difficulties with arithmetic, understanding word problems and instructions, or difficulties using words at all.
Classification:
They may also have a more general problem with words or sentences, in both comprehension and oral production. Some children will have issues with pragmatics – the use of language in social contexts – as well, and therefore will have difficulty with inferring meaning. Furthermore, they have severe impairment of spontaneous language production, and for this reason they have difficulty in formulating questions. Generally, children will have trouble with morphosyntax, that is, with word inflections. These children have difficulty understanding and applying grammatical rules, such as endings that mark verb tenses (e.g. -ed), third-person singular verbs (e.g. I think, he thinks), plurals (e.g. -s), auxiliary verbs that denote tenses (e.g. was running, is running), and with determiners (the, a). Moreover, children with mixed receptive-expressive language disorders have deficits in completing two cognitive operations at the same time and in learning new words or morphemes under time pressure or when processing demands are high. These children also have auditory processing deficits, in which they process auditory information at a slower rate and as a result require more time for processing.
Presentation:
Related disorders Studies show that low receptive and expressive language at young ages is correlated with increased autism symptom severity in children in their early school years. Deficits at the lower levels of language processing, the receptive/expressive disorders, are more severe in children with autism. When autistic children speak, they are often difficult to understand, their language is sparse and dysfluent, they speak in single, uninflected words or short phrases, and their supply of words is severely depleted. This leads to a limited vocabulary alongside deficits in verbal short-term memory.
Management:
Children who demonstrate deficiencies early in their speech and language development are at risk for continued speech and language issues throughout later childhood. Similarly, even if these speech and language problems have been resolved, children with early language delay are more at risk for difficulties in phonological awareness, reading, and writing throughout their lives. Children with mixed receptive-expressive language disorder often face long-term implications for language development, literacy, behavior, social development, and even mental health problems. If a child is suspected of having a mixed receptive-expressive language disorder, treatment is available from a speech therapist or pathologist. Most treatments are short term and rely upon accommodations made within the environment, in order to minimize interference with work or school. Programs that involve intervention planning linking verbal short-term memory with visual/nonverbal information may be helpful for these children. In addition, approaches such as parent training for language stimulation and monitoring language through the "watch and see" method are recommended. The watch-and-see approach advises that children with mixed receptive-expressive language disorder who come from stable, middle-class homes without any other behavioral, medical, or hearing problems be vigilantly monitored rather than receive intervention. It is often the case that children do not meet the eligibility criteria established through a comprehensive oral language evaluation and, as a result, are not best suited for early intervention programs and require a different approach besides the "one size fits all" model.
**Film recorder**
Film recorder:
A film recorder is a graphical output device for transferring images to photographic film from a digital source. In a typical film recorder, an image is passed from a host computer to a mechanism to expose film through a variety of methods, historically by direct photography of a high-resolution cathode ray tube (CRT) display. The exposed film can then be developed using conventional developing techniques, and displayed with a slide or motion picture projector. The use of film recorders predates the current use of digital projectors, which eliminate the time and cost involved in the intermediate step of transferring computer images to film stock, instead directly displaying the image signal from a computer. Motion picture film scanners are the opposite of film recorders, copying content from film stock to a computer system. Film recorders can be thought of as modern versions of Kinescopes.
Design:
Operation Film recorders typically all work in the same manner. The image is fed from a host computer as a raster stream over a digital interface. A film recorder exposes film through various mechanisms: a flying spot (early recorders); photographing a high-resolution video monitor; an electron beam recorder (Sony HDVS); a CRT scanning dot (Celco); a focused beam of light from a light valve technology (LVT) recorder; a scanning laser beam (Arrilaser); or, recently, full-frame LCD array chips.
Design:
For color image recording on a CRT film recorder, the red, green, and blue channels are sequentially displayed on a single gray scale CRT, and exposed to the same piece of film as a multiple exposure through a filter of the appropriate color. This approach yields better resolution and color quality than possible with a tri-phosphor color CRT. The three filters are usually mounted on a motor-driven wheel. The filter wheel, as well as the camera's shutter, aperture, and film motion mechanism are usually controlled by the recorder's electronics and/or the driving software. CRT film recorders are further divided into analog and digital types. The analog film recorder uses the native video signal from the computer, while the digital type uses a separate display board in the computer to produce a digital signal for a display in the recorder. Digital CRT recorders provide a higher resolution at a higher cost compared to analog recorders due to the additional specialized hardware. Typical resolutions for digital recorders were quoted as 2K and 4K, referring to 2048×1366 and 4096×2732 pixels, respectively, while analog recorders provided a resolution of 640×428 pixels in comparison. Higher-quality LVT film recorders use a focused beam of light to write the image directly onto a film-loaded spinning drum, one pixel at a time. In one example, the light valve was a liquid-crystal shutter, the light beam was steered with a lens, and text was printed using a pre-cut optical mask. The LVT records pixels finer than the film grain; some machines can write 120 lines per millimeter. The LVT is basically a reverse drum scanner. The exposed film is developed and printed by regular photographic chemical processing.
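The sequential-exposure idea can be sketched in a few lines of code (an idealized illustration only, not any particular recorder's control software; real film responds nonlinearly, whereas the "film" here is a simple additive accumulator):

```python
import numpy as np

# One color frame is written as three exposures of a grayscale CRT, each
# through one filter of a motor-driven RGB wheel, onto the same piece of film.
rng = np.random.default_rng(0)
frame = rng.random((4, 4, 3))            # digital source image, RGB in [0, 1]

film = np.zeros_like(frame)              # one piece of film, multiply-exposed
for channel in range(3):                 # 0, 1, 2 = red, green, blue passes
    gray = frame[:, :, channel]          # channel displayed on the mono CRT
    exposure = np.zeros_like(frame)
    exposure[:, :, channel] = gray       # the filter passes only one primary
    film += exposure                     # shutter open: light accumulates

assert np.allclose(film, frame)          # idealized: exposures recompose the image
```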
Design:
Formats Film recorders are available for a variety of film types and formats. The 35mm negative film and transparencies are popular because they can be processed by any photo shop. Single-image 4×5 and 8×10 film are often used for high-quality, large format printing. Some models have detachable film holders to handle multiple formats with the same camera, or Polaroid backs to provide on-site review of output before exposing film.
Uses:
Film recorders are used in digital printing to generate master negatives for offset and other bulk printing processes. For preview, archiving, and small-volume reproduction, film recorders have been rendered obsolete by modern printers that produce photographic-quality hardcopies directly on plain paper.
They are also used to produce the master copies of movies that use computer animation or other special effects based on digital image processing. However, most cinemas nowadays use Digital Cinema Packages on hard drives instead of film stock.
Computer graphics Film recorders were among the earliest computer graphics output devices; for example, the IBM 740 CRT Recorder was announced in 1954.
Film recorders were also commonly used to produce slides for slide projectors; but this need is now largely met by video projectors that project images directly from a computer to a screen. The terms "slide" and "slide deck" are still commonly used in presentation programs.
Uses:
Current uses Currently, film recorders are primarily used in the motion picture film-out process for the ever increasing amount of digital intermediate work being done. Although significant advances in large-venue video projection alleviate the need to output to film, there remains a deadlock between the motion picture studios and theater owners over who should pay for these very costly projection systems. This, combined with the increase in international and independent film production, will keep the demand for film recording steady for at least a decade.
Key manufacturers:
Traditional film recorder manufacturers have all but vanished from the scene or have evolved their product lines to cater to the motion picture industry. Dicomed was one such early provider of digital color film recorders. Polaroid, Management Graphics, Inc, MacDonald-Detwiler, Information International, Inc., and Agfa were other producers of film recorders. Arri is the only current major manufacturer of film recorders.
Key manufacturers:
Kodak Lightning I film recorder. One of the first laser recorders. Needed an engineering staff to set up.
Kodak Lightning II film recorder used both gas and diode lasers to record onto film.
The last LVT machines produced by Kodak / Durst-Dice stopped production in 2002, and no LVT film recorders are currently being produced. The LVT Saturn 1010 used an LED (RGB) exposure onto 8"×10" film at 1000–3000 ppi.
LUX Laser Cinema Recorder from Autologic/Information International in Thousand Oaks, California. Sales ended in March 2000. Used on the 1997 film "Titanic".
Arri produces the Arrilaser line of laser-based motion picture film recorders.
MGI produced the Solitaire line of CRT-based motion picture film recorders.
Matrix, originally ImaPRO, a branch of Agfa Division, produced the QCR line of CRT-based motion picture film recorders.
CCG, formerly Agfa film recorders, has been a steady manufacturer of film recorders based in Germany.
In 2004 CCG introduced Definity, a motion picture film recorder utilizing LCD technology. In 2010 CCG introduced the first full LED LCD film recorder as a new step in film recording.
Key manufacturers:
Cinevator was made by Cinevation AS, in Drammen, Norway. The Cinevator was a real-time digital film recorder; it could record IN, IP and prints, with and without sound. Oxberry produced the Model 3100 film recorder camera system, with interchangeable pin-registered movements (shuttles) for 35mm (full frame/Silent, 1.33:1) and 16mm (regular 16, "2R"), and others have adapted the Oxberry movements for CinemaScope, 1.85:1, 1.75:1, 1.66:1, as well as Academy/Sound (1.37:1) in 35mm and Super-16 in 16mm ("1R"). For instance, the "Solitaire" and numerous others employed the Oxberry 3100 camera system.
History:
Before video tape recorders (VTRs) were invented, TV shows were either broadcast live or recorded to film for later showing, using the Kinescope process. In 1967, CBS Laboratories introduced the Electronic Video Recording format, which used video and telecined-to-video film sources. These were recorded with an electron-beam recorder at CBS' EVR mastering plant onto 35mm film stock, in a rank of 4 strips on the film, which was then slit down into four 8.75 mm (0.344 in) film copies for playback in an EVR player.
History:
All types of CRT recorders were (and still are) used for film recording. Some early examples used for computer-output recording were the 1954 IBM 740 CRT Recorder and the 1962 Stromberg-Carlson SC-4020, the latter using a Charactron CRT for text and vector graphic output to either 16mm motion picture film, 16mm microfilm, or hard-copy paper output. Later, 1970s- and 80s-era recording to B&W (and color, with 3 separate exposures for red, green, and blue) 16mm film was done with an EBR (Electron Beam Recorder), the most prominent examples made by 3M, for both video and COM (Computer Output Microfilm) applications. Image Transform in Universal City, California used specially modified 3M EBR film recorders that could perform color film-out recording on 16mm by exposing three 16mm frames in a row (one red, one green and one blue). The film was then printed to color 16mm or 35mm film. The video fed to the recorder could be NTSC, PAL or SECAM. Later, Image Transform used specially modified VTRs to record 24-frame video for their "Image Vision" system. The modified 1 inch type B videotape VTRs would record and play back 24-frame video at 10 MHz bandwidth, at about twice the normal NTSC resolution. Modified 24 fps, 10 MHz Bosch Fernseh KCK-40 cameras were used on the set. This was a custom pre-HDTV video system, and Image Transform modified other gear for the process. At its peak, this system was used in the production of the film "Monty Python Live at the Hollywood Bowl" in 1982. This was the first major pre-digital-intermediate post production using a film recorder for film-out production.
History:
In 1988, companies in the United States collectively produced 715 million slides at a cost of $8.3 billion.
History:
Awards The Academy of Motion Picture Arts and Sciences awarded an Oscar to the makers of the Arrilaser film recorder. The Award of Merit Oscar from the Academy Scientific and Technical Award ceremony was given on 11 February 2012 to Franz Kraus, Johannes Steurer and Wolfgang Riedel. Steurer was awarded the Oskar Messter Memorial Medal two years later in 2014 for his role in the development of the Arrilaser.
**First principle**
First principle:
In philosophy and science, a first principle is a basic proposition or assumption that cannot be deduced from any other proposition or assumption. First principles in philosophy are from first cause attitudes and taught by Aristotelians, and nuanced versions of first principles are referred to as postulates by Kantians. In mathematics and formal logic, first principles are referred to as axioms or postulates. In physics and other sciences, theoretical work is said to be from first principles, or ab initio, if it starts directly at the level of established science and does not make assumptions such as empirical models or fitted parameters. "First principles thinking" consists of decomposing things down to the fundamental axioms in the given arena, before reasoning up by asking which ones are relevant to the question at hand, then cross referencing conclusions based on chosen axioms and making sure conclusions do not violate any fundamental laws. Physicists include counterintuitive concepts with reiteration.
In formal logic:
In a formal logical system, that is, a set of propositions that are consistent with one another, it is possible that some of the statements can be deduced from other statements. For example, in the syllogism, "All men are mortal; Socrates is a man; Socrates is mortal" the last claim can be deduced from the first two.
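In modern first-order notation (a formalization added here for illustration; the predicate names are ours), the deduction is a two-step derivation:

```latex
% From the two premises
\forall x\,(\mathrm{Man}(x)\rightarrow\mathrm{Mortal}(x)), \qquad \mathrm{Man}(s),
% universal instantiation at x := s followed by modus ponens yields
\vdash\;\mathrm{Mortal}(s).
```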
A first principle is an axiom that cannot be deduced from any other within that system. The classic example is that of Euclid's Elements; its hundreds of geometric propositions can be deduced from a set of definitions, postulates, and common notions: all three types constitute first principles.
Philosophy:
In philosophy "first principles" are from first cause attitudes commonly referred to as a priori terms and arguments, which are contrasted to a posteriori terms, reasoning or arguments, in that the former is simply assumed and exist prior to the reasoning process and the latter are deduced or inferred after the initial reasoning process. First principles are generally treated in the realm of philosophy known as epistemology, but are an important factor in any metaphysical speculation.
Philosophy:
In philosophy "first principles" are often somewhat synonymous with a priori, datum and axiomatic reasoning.
Philosophy:
Ancient Greek philosophy In Ancient Greek philosophy, a first principle from which other principles are derived is called an arche and later "first principle" or "element". By extension, it may mean "first place", "method of government", "empire, realm", or "authorities". The concept of an arche was adapted from the earliest cosmogonies of Hesiod and Orphism, through the physical theories of Pre-Socratic philosophy and Plato, before being formalized as a part of metaphysics by Aristotle. Arche (sometimes also transcribed as arkhé) is an Ancient Greek word with the primary senses "beginning", "origin" or "source of action": from the beginning, or the original argument, "command". The first principle or element corresponds to the "ultimate underlying substance" and "ultimate indemonstrable principle".
Philosophy:
Mythical cosmogonies The heritage of Greek mythology already embodied the desire to articulate reality as a whole, and this universalizing impulse was fundamental for the first projects of speculative theorizing. It appears that the order of "being" was first imaginatively visualized before it was abstractly thought. In the mythological cosmogonies of the Near East, the universe is formless and empty, and the only existing thing prior to creation was the water abyss. In the Babylonian creation story, Enuma Elish, the primordial world is described as a "watery chaos" from which everything else appeared. This watery chaos has similarities in the cosmogony of the Greek mythographer Pherecydes of Syros. In the mythical Greek cosmogony of Hesiod (8th to 7th century BC), the origin of the world is Chaos, considered as a divine primordial condition, from which everything else appeared. In the creation account, "chaos" is a gaping void, but later the word is used to describe the space between the earth and the sky, after their separation. "Chaos" may mean infinite space, or a formless matter which can be differentiated. The notion of temporal infinity was familiar to the Greek mind from remote antiquity in the religious conception of immortality. The conception of the "divine" as an origin influenced the first Greek philosophers. In the Orphic cosmogony, the unaging Chronos produced Aether and Chaos and made in divine Aether a silvery egg, from which everything else appeared.
Philosophy:
Ionian school The earliest Pre-Socratic philosophers, the Ionian material monists, sought to explain all of nature (physis) in terms of one unifying arche. Among the material monists were the three Milesian philosophers: Thales, who believed that everything was composed of water; Anaximander, who believed it was apeiron; and Anaximenes, who believed it was air. The arche is considered a permanent substance, either one or more, which is conserved in the generation of the rest of nature. From this all things first come to be and into this they are resolved in a final state. This source of entity is always preserved. Although their theories were primitive, these philosophers were the first to give an explanation of the physical world without referencing the supernatural; this opened the way for much of modern science (and philosophy), which has the same goal of explaining the world without dependence on the supernatural. Thales of Miletus (7th to 6th century BC), the father of philosophy, claimed that the first principle of all things is water, and considered it as a substance that contains in it motion and change. His theory was supported by the observation of moisture throughout the world and coincided with his theory that the earth floated on water. His ideas were influenced by the Near-Eastern mythological cosmogony and probably by the Homeric statement that the surrounding Oceanus (ocean) is the source of all springs and rivers. Anaximander argued that water could not be the arche, because it could not give rise to its opposite, fire. Anaximander claimed that none of the elements (earth, fire, air, water) could be arche for the same reason. Instead, he proposed the existence of the apeiron, an indefinite substance from which all things are born and to which all things will return. Apeiron (endless or boundless) is something completely indefinite, and Anaximander was probably influenced by the original chaos of Hesiod (yawning abyss). Anaximander was the first philosopher that used arche for that which writers from Aristotle onwards called "the substratum" (Simplicius Phys. 150, 22). He probably intended it to mean primarily "indefinite in kind" but assumed it also to be "of unlimited extent and duration". The notion of temporal infinity was familiar to the Greek mind from remote antiquity in the religious conception of immortality, and Anaximander's description was in terms appropriate to this conception. This arche is called "eternal and ageless" (Hippolitus I,6,I; DK B2). Anaximenes, Anaximander's pupil, advanced yet another theory. He returns to the elemental theory, but this time posits air, rather than water, as the arche and ascribes to it divine attributes. He was the first recorded philosopher who provided a theory of change and supported it with observation. Using two contrary processes of rarefaction and condensation (thinning or thickening), he explains how air is part of a series of changes. Rarefied air becomes fire; condensed, it becomes first wind, then cloud, water, earth, and stone in order. The arche is technically what underlies all of reality/appearances.
Philosophy:
Aristotle Terence Irwin writes: When Aristotle explains in general terms what he tries to do in his philosophical works, he says he is looking for "first principles" (or "origins"; archai): In every systematic inquiry (methodos) where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these; for we think we know something just in case we acquire knowledge of the primary causes, the primary first principles, all the way to the elements. It is clear, then, that in the science of nature as elsewhere, we should try first to determine questions about the first principles. The naturally proper direction of our road is from things better known and clearer to us, to things that are clearer and better known by nature; for the things that are known to us are not the same as the things known unconditionally (haplôs). Hence it is necessary for us to progress, following this procedure, from the things that are less clear by nature, but clearer to us, towards things that are clearer and better known by nature. (Phys. 184a10–21) The connection between knowledge and first principles is not axiomatic as expressed in Aristotle's account of a first principle (in one sense) as "the first basis from which a thing is known" (Met. 1013a14–15). For Aristotle, the arche is the condition necessary for the existence of something, the basis for what he calls "first philosophy" or metaphysics. The search for first principles is not peculiar to philosophy; philosophy shares this aim with biological, meteorological, and historical inquiries, among others. But Aristotle's references to first principles in this opening passage of the Physics and at the start of other philosophical inquiries imply that it is a primary task of philosophy.
Philosophy:
Modern philosophy Descartes Profoundly influenced by Euclid, Descartes was a rationalist who invented the foundationalist system of philosophy. He used the method of doubt, now called Cartesian doubt, to systematically doubt everything he could possibly doubt until he was left with what he saw as purely indubitable truths. Using these self-evident propositions as his axioms, or foundations, he went on to deduce his entire body of knowledge from them. The foundations are also called a priori truths. His most famous proposition is "Je pense, donc je suis" (I think, therefore I am, or Cogito ergo sum), which he indicated in his Discourse on the Method was "the first principle of the philosophy of which I was in search." Descartes describes the concept of a first principle in the following excerpt from the preface to the Principles of Philosophy (1644): I should have desired, in the first place, to explain in it what philosophy is, by commencing with the most common matters, as, for example, that the word philosophy signifies the study of wisdom, and that by wisdom is to be understood not merely prudence in the management of affairs, but a perfect knowledge of all that man can know, as well for the conduct of his life as for the preservation of his health and the discovery of all the arts, and that knowledge to subserve these ends must necessarily be deduced from first causes; so that in order to study the acquisition of it (which is properly called philosophizing), we must commence with the investigation of those first causes which are called Principles. Now, these principles must possess two conditions: in the first place, they must be so clear and evident that the human mind, when it attentively considers them, cannot doubt their truth; in the second place, the knowledge of other things must be so dependent on them as that though the principles themselves may indeed be known apart from what depends on them, the latter cannot nevertheless be known apart from the former. It will accordingly be necessary thereafter to endeavor so to deduce from those principles the knowledge of the things that depend on them, as that there may be nothing in the whole series of deductions which is not perfectly manifest.
In physics:
In physics, a calculation is said to be from first principles, or ab initio, if it starts directly at the level of established laws of physics and does not make assumptions such as an empirical model or fitted parameters.
For example, calculation of electronic structure using Schrödinger's equation within a set of approximations that do not include fitting the model to experimental data is an ab initio approach.
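A minimal sketch of such a calculation is given below (illustrative only, not any particular ab initio package): the Schrödinger equation for an electron in a 1 nm box is solved by finite differences, with physical constants and the potential as the only inputs and nothing fitted to experimental data.

```python
import numpy as np

# Solve -(hbar^2 / 2m) psi'' = E psi on a grid with psi = 0 at the walls.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m = 9.1093837015e-31      # electron mass, kg
L, n = 1e-9, 800          # box width (m) and number of interior grid points
dx = L / (n + 1)

# Discretized kinetic-energy operator (second-difference Laplacian)
lap = (np.diag(np.full(n, -2.0)) +
       np.diag(np.ones(n - 1), 1) +
       np.diag(np.ones(n - 1), -1)) / dx**2
H = -(hbar**2 / (2.0 * m)) * lap      # Hamiltonian; V = 0 inside the box

numeric = np.linalg.eigvalsh(H)[:3]   # three lowest energy levels
analytic = [(k * np.pi * hbar) ** 2 / (2.0 * m * L**2) for k in (1, 2, 3)]
for e_num, e_ref in zip(numeric, analytic):
    print(f"numeric {e_num:.4e} J    analytic {e_ref:.4e} J")
```

The numeric eigenvalues converge to the analytic particle-in-a-box levels as the grid is refined, with no adjustable parameters anywhere in the computation.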
**Barnes G-function**
Barnes G-function:
In mathematics, the Barnes G-function G(z) is a function that is an extension of superfactorials to the complex numbers. It is related to the gamma function, the K-function and the Glaisher–Kinkelin constant, and was named after mathematician Ernest William Barnes. It can be written in terms of the double gamma function.
Formally, the Barnes G-function is defined in the following Weierstrass product form:
$$G(1+z)=(2\pi)^{z/2}\exp\left(-\frac{z+z^{2}(1+\gamma)}{2}\right)\prod_{k=1}^{\infty}\left\{\left(1+\frac{z}{k}\right)^{k}\exp\left(\frac{z^{2}}{2k}-z\right)\right\}$$
where γ is the Euler–Mascheroni constant, exp(x) = e^x is the exponential function, and Π denotes multiplication (capital pi notation).
As an entire function, G is of order two, and of infinite type. This can be deduced from the asymptotic expansion given below.
Functional equation and integer arguments:
The Barnes G-function satisfies the functional equation G(z+1)=Γ(z)G(z) with normalisation G(1) = 1. Note the similarity between the functional equation of the Barnes G-function and that of the Euler gamma function: Γ(z+1)=zΓ(z).
Functional equation and integer arguments:
The functional equation implies that G takes the following values at integer arguments:
$$G(n)=\begin{cases}0 & \text{if } n=0,-1,-2,\dots\\[2pt] \prod_{i=0}^{n-2}i! & \text{if } n=1,2,\dots\end{cases}$$
(in particular, G(0) = 0 and G(1) = 1), and thus
$$G(n)=\frac{(\Gamma(n))^{n-1}}{K(n)}$$
where Γ(x) denotes the gamma function and K denotes the K-function. The functional equation uniquely defines the G-function if the convexity condition $\frac{d^{3}}{dx^{3}}\log G(x)\geq 0$ is added. Additionally, the Barnes G-function satisfies the duplication formula
$$G(x)\,G\left(x+\tfrac{1}{2}\right)^{2}G(x+1)=e^{\frac{1}{4}}A^{-3}\,2^{-2x^{2}+3x-\frac{11}{12}}\,\pi^{x-\frac{1}{2}}\,G(2x),$$
where A is the Glaisher–Kinkelin constant.
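The integer-argument identities can be checked with exact integer arithmetic (a short sketch; the helper names are ours, not standard library functions):

```python
from math import factorial

def barnes_g(n: int) -> int:
    """G(n) for n >= 1, via G(1) = 1 and G(n+1) = (n-1)! * G(n)."""
    g = 1
    for m in range(1, n):
        g *= factorial(m - 1)
    return g

def k_function(n: int) -> int:
    """K(n) = 1^1 * 2^2 * ... * (n-1)^(n-1)."""
    k = 1
    for m in range(1, n):
        k *= m**m
    return k

for n in range(2, 8):
    gamma_n = factorial(n - 1)                               # Gamma(n) = (n-1)!
    assert barnes_g(n) * k_function(n) == gamma_n**(n - 1)   # G(n) = Gamma(n)^(n-1) / K(n)
```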
Characterisation:
Similar to the Bohr–Mollerup theorem for the gamma function: for a constant c > 0, we have f(x) = cG(x) if and only if
$$f(x+1)=\Gamma(x)f(x)$$
and, for x > 0,
$$f(x+n)\sim\Gamma(x)^{n}\,n^{\binom{x}{2}}f(n)\quad\text{as } n\to\infty.$$
Value at 1/2:
$$G\left(\tfrac{1}{2}\right)=2^{\frac{1}{24}}\,e^{\frac{1}{8}}\,\pi^{-\frac{1}{4}}\,A^{-\frac{3}{2}},$$
where A is the Glaisher–Kinkelin constant.
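This closed form is easy to verify numerically (a sketch assuming the mpmath library, whose built-ins include barnesg and the Glaisher constant):

```python
from mpmath import mp, mpf, barnesg, glaisher, exp, pi

mp.dps = 30
closed_form = (2**mpf("1/24") * exp(mpf("1/8"))
               / (pi**mpf("1/4") * glaisher**mpf("3/2")))
print(barnesg(mpf("0.5")))   # direct evaluation of G(1/2)
print(closed_form)           # 2^(1/24) e^(1/8) pi^(-1/4) A^(-3/2); should match
```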
Reflection formula 1.0:
The difference equation for the G-function, in conjunction with the functional equation for the gamma function, can be used to obtain the following reflection formula for the Barnes G-function (originally proved by Hermann Kinkelin):
$$\log G(1-z)=\log G(1+z)-z\log 2\pi+\int_{0}^{z}\pi x\cot\pi x\,dx.$$
Reflection formula 1.0:
The logcotangent integral on the right-hand side can be evaluated in terms of the Clausen function (of order 2), as is shown below:
$$2\pi\log\left(\frac{G(1-z)}{G(1+z)}\right)=2\pi z\log\left(\frac{\sin\pi z}{\pi}\right)+\operatorname{Cl}_{2}(2\pi z).$$
The proof of this result hinges on the following evaluation of the cotangent integral: introducing the notation Lc(z) for the logcotangent integral, and using the fact that $\frac{d}{dx}\log(\sin\pi x)=\pi\cot\pi x$, an integration by parts gives
$$\operatorname{Lc}(z)=\int_{0}^{z}\pi x\cot\pi x\,dx=z\log(\sin\pi z)-\int_{0}^{z}\log(\sin\pi x)\,dx.$$
Reflection formula 1.0:
Performing the integral substitution $y=2\pi x\Rightarrow dx=dy/(2\pi)$ gives
$$\operatorname{Lc}(z)=z\log(\sin\pi z)-\frac{1}{2\pi}\int_{0}^{2\pi z}\log\left(\sin\frac{y}{2}\right)dy.$$
The Clausen function of second order has the integral representation
$$\operatorname{Cl}_{2}(\theta)=-\int_{0}^{\theta}\log\left|2\sin\frac{x}{2}\right|dx.$$
However, within the interval 0 < θ < 2π, the absolute value sign within the integrand can be omitted, since within that range the 'half-sine' function in the integral is strictly positive and strictly non-zero. Comparing this definition with the result above for the logcotangent integral, the following relation clearly holds:
$$\operatorname{Lc}(z)=z\log(2\sin\pi z)+\frac{1}{2\pi}\operatorname{Cl}_{2}(2\pi z).$$
Thus, after a slight rearrangement of terms, the proof is complete:
$$2\pi\log\left(\frac{G(1-z)}{G(1+z)}\right)=2\pi z\log\left(\frac{\sin\pi z}{\pi}\right)+\operatorname{Cl}_{2}(2\pi z).\;\square$$
Using the relation $G(1+z)=\Gamma(z)\,G(z)$ and dividing the reflection formula by a factor of 2π gives the equivalent form:
$$\log\left(\frac{G(1-z)}{G(z)}\right)=z\log\left(\frac{\sin\pi z}{\pi}\right)+\log\Gamma(z)+\frac{1}{2\pi}\operatorname{Cl}_{2}(2\pi z).$$
Ref: see Adamchik below for an equivalent form of the reflection formula, but with a different proof.
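A numerical spot check of the reflection formula is straightforward (again a sketch assuming mpmath; here Cl₂ is computed as the imaginary part of the dilogarithm on the unit circle, Cl₂(t) = Im Li₂(e^{it})):

```python
from mpmath import mp, mpf, mpc, barnesg, polylog, exp, log, sin, pi

mp.dps = 25
z = mpf("0.3")
cl2 = polylog(2, exp(mpc(0, 2 * pi * z))).imag       # Cl_2(2 pi z)
lhs = 2 * pi * log(barnesg(1 - z) / barnesg(1 + z))
rhs = 2 * pi * z * log(sin(pi * z) / pi) + cl2
print(lhs, rhs)   # the two values should agree to working precision
```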
Reflection formula 2.0:
Replacing z with 1/2 − z in the previous reflection formula gives, after some simplification, the equivalent formula shown below, involving the Bernoulli polynomial $B_{1}(x)=x-\tfrac{1}{2}$:
$$\log\left(\frac{G\left(\tfrac{1}{2}+z\right)}{G\left(\tfrac{1}{2}-z\right)}\right)=\log\Gamma\left(\tfrac{1}{2}-z\right)-\left(\tfrac{1}{2}-z\right)\log 2\pi-\pi\int_{z}^{\frac{1}{2}}B_{1}(x)\tan\pi x\,dx.$$
Taylor series expansion:
By Taylor's theorem, and considering the logarithmic derivatives of the Barnes function, the following series expansion can be obtained:
$$\log G(1+z)=\frac{z}{2}\log 2\pi-\frac{z+(1+\gamma)z^{2}}{2}+\sum_{k=2}^{\infty}(-1)^{k}\frac{\zeta(k)}{k+1}z^{k+1}.$$
It is valid for 0 < z < 1. Here, ζ(s) is the Riemann zeta function:
$$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}.$$
Exponentiating both sides of the Taylor expansion gives:
$$\exp(\log G(1+z))=\exp\left[\frac{z}{2}\log 2\pi\right]\exp\left[-\frac{z+(1+\gamma)z^{2}}{2}\right]\exp\left[\sum_{k=2}^{\infty}(-1)^{k}\frac{\zeta(k)}{k+1}z^{k+1}\right].$$
Comparing this with the Weierstrass product form of the Barnes function gives the following relation:
$$\exp\left[\sum_{k=2}^{\infty}(-1)^{k}\frac{\zeta(k)}{k+1}z^{k+1}\right]=\prod_{k=1}^{\infty}\left\{\left(1+\frac{z}{k}\right)^{k}\exp\left(\frac{z^{2}}{2k}-z\right)\right\}.$$
Multiplication formula:
Like the gamma function, the G-function also has a multiplication formula:
$$G(nz)=K(n)\,n^{n^{2}z^{2}/2-nz}\,(2\pi)^{-\frac{n^{2}-n}{2}z}\prod_{i=0}^{n-1}\prod_{j=0}^{n-1}G\left(z+\frac{i+j}{n}\right)$$
where K(n) is a constant given by:
$$K(n)=e^{-(n^{2}-1)\zeta'(-1)}\cdot n^{\frac{5}{12}}\cdot(2\pi)^{(n-1)/2}.$$
Here ζ′ is the derivative of the Riemann zeta function and A is the Glaisher–Kinkelin constant.
Absolute value:
It holds true that $G(\bar{z})=\overline{G(z)}$, thus $|G(z)|^{2}=G(z)G(\bar{z})$. From this relation and the Weierstrass product form presented above, one can show that
$$|G(x+iy)|^{2}=G(x)^{2}\,e^{(1+\gamma)y^{2}}\prod_{k=0}^{\infty}\left\{\left(1+\frac{y^{2}}{(x+k)^{2}}\right)^{k+1}\exp\left(-\frac{y^{2}}{k+1}\right)\right\}.$$
This relation is valid for arbitrary $x\in\mathbb{R}\setminus\{0,-1,-2,\dots\}$ and $y\in\mathbb{R}$. If x = 0, the following formula holds instead:
$$|G(iy)|^{2}=\frac{y\sinh\pi y}{\pi}\,e^{(1+\gamma)y^{2}}\prod_{k=1}^{\infty}\left\{\left(1+\frac{y^{2}}{k^{2}}\right)^{k}\exp\left(-\frac{y^{2}}{k}\right)\right\}$$
for arbitrary real y.
Asymptotic expansion:
The logarithm of G(z + 1) has the following asymptotic expansion, as established by Barnes:
$$\log G(z+1)=\frac{z^{2}}{2}\log z-\frac{3z^{2}}{4}+\frac{z}{2}\log 2\pi-\frac{1}{12}\log z+\left(\frac{1}{12}-\log A\right)+\sum_{k=1}^{N}\frac{B_{2k+2}}{4k(k+1)z^{2k}}+O\left(\frac{1}{z^{2N+2}}\right).$$
Here the Bk are the Bernoulli numbers and A is the Glaisher–Kinkelin constant. (Note that somewhat confusingly at the time of Barnes the Bernoulli number B2k would have been written as (−1)k+1Bk , but this convention is no longer current.) This expansion is valid for z in any sector not containing the negative real axis with |z| large.
Relation to the Loggamma integral:
The parametric loggamma integral can be evaluated in terms of the Barnes G-function (Ref: this result is found in Adamchik below, but stated without proof):
$$\int_{0}^{z}\log\Gamma(x)\,dx=\frac{z(1-z)}{2}+\frac{z}{2}\log 2\pi+z\log\Gamma(z)-\log G(1+z).$$
The proof is somewhat indirect, and involves first considering the logarithmic difference of the gamma function and Barnes G-function:
$$z\log\Gamma(z)-\log G(1+z),$$
where
$$\frac{1}{\Gamma(z)}=z\,e^{\gamma z}\prod_{k=1}^{\infty}\left\{\left(1+\frac{z}{k}\right)e^{-z/k}\right\}$$
and γ is the Euler–Mascheroni constant.
Relation to the Loggamma integral:
Taking the logarithm of the Weierstrass product forms of the Barnes function and gamma function gives:
$$z\log\Gamma(z)-\log G(1+z)=-z\log\left(\frac{1}{\Gamma(z)}\right)-\log G(1+z)$$
$$=-z\left[\log z+\gamma z+\sum_{k=1}^{\infty}\left\{\log\left(1+\frac{z}{k}\right)-\frac{z}{k}\right\}\right]-\left[\frac{z}{2}\log 2\pi-\frac{z+(1+\gamma)z^{2}}{2}+\sum_{k=1}^{\infty}\left\{k\log\left(1+\frac{z}{k}\right)+\frac{z^{2}}{2k}-z\right\}\right].$$
A little simplification and re-ordering of terms gives the series expansion:
$$z\log\Gamma(z)-\log G(1+z)=\frac{z}{2}+\frac{(1-\gamma)z^{2}}{2}-z\log z-\frac{z}{2}\log 2\pi-\sum_{k=1}^{\infty}\left\{(k+z)\log\left(1+\frac{z}{k}\right)-z-\frac{z^{2}}{2k}\right\}.$$
Finally, taking the logarithm of the Weierstrass product form of the gamma function and integrating over the interval [0, z] gives:
$$\int_{0}^{z}\log\Gamma(x)\,dx=z-z\log z-\frac{\gamma z^{2}}{2}-\sum_{k=1}^{\infty}\left\{(k+z)\log\left(1+\frac{z}{k}\right)-z-\frac{z^{2}}{2k}\right\}.$$
Equating the two evaluations completes the proof:
$$\int_{0}^{z}\log\Gamma(x)\,dx=\frac{z(1-z)}{2}+\frac{z}{2}\log 2\pi+z\log\Gamma(z)-\log G(1+z).$$
And since $G(1+z)=\Gamma(z)\,G(z)$,
$$\int_{0}^{z}\log\Gamma(x)\,dx=\frac{z(1-z)}{2}+\frac{z}{2}\log 2\pi+(z-1)\log\Gamma(z)-\log G(z).$$
**Energy shot**
Energy shot:
Energy shots are a specialized kind of energy drink that contain a dose of the stimulant caffeine in a small amount of liquid. Whereas most energy drinks are sold in cans or bottles, energy shots are usually sold in 50ml bottles. Energy shots can contain the same total amount of caffeine, vitamins or other functional ingredients as their larger versions, and may be considered concentrated forms of energy drinks. "Micro shot" energy drinks also exist, containing only 1–5 teaspoonfuls (5–25 mL) of liquid.
Ingredients:
Similar to energy drinks, energy shots contain caffeine, vitamins, and herbs such as guarana, ginseng or ginkgo biloba, taurine, maltodextrin, inositol, carnitine, creatine or glucuronolactone. Some energy shots contain sugar; however, many brands also offer artificially-sweetened 'diet' versions. Some decaf varieties are also offered. The central ingredient in most energy shots is caffeine, the same stimulant found in coffee or tea. Vitamin-based energy shots contain numerous additional vitamins and supplements for sustenance, sustainment, and overall health. 5-Hour contains vitamin levels sometimes hundreds of times higher than the recommended dietary allowance (RDA), according to a 2010 test by ConsumerLab.com. Some energy shots include electrolytes, and others include a selection of vitamins.
Ingredients:
The average 50ml energy shot has about 80 mg of caffeine. This is approximately equivalent to a cup of coffee.
Effects:
The functional ingredients of energy shots are comparable to those of energy drinks, and their effects on improvement in mental and cognitive performances and subjective alertness are in line with the effects of traditional energy drinks. Vitamin based energy shots have variable benefits dependent on the additional ingredients.
History:
The idea of energy shots started decades ago in Japan, where small "tonics" became very popular among consumers, served highly concentrated and without carbonation. With the introduction of energy drinks in the late 1980s, the energy shot concept began to travel the world as a new product format. In 2003, the founder of 5-Hour Energy discovered an energy drink at a natural products trade show and formulated a similar product, reducing the content from 16 to 2 ounces but keeping the energizing effects. Daily Finance credits them with largely creating the energy shot market. By 2008, there were over 25 brands offering energy shots in the US alone. In 2009, major energy drink producer Red Bull launched an energy shot. By 2011, energy shots had become so popular that 5-Hour Energy sold $1 billion of their product at retail. Although originally marketed in the US, energy shots are also becoming popular in other parts of the world, such as Europe. The products are aimed at customers who seek high efficacy with little liquid intake, and include truck drivers and students.
Products:
As of June 2009, there were approximately 250 energy shot brands in the US. 5-Hour Energy owned 90% of the market share in 2011, according to the research firm SymphonyIRI. Some of the manufacturers of energy shots also market energy drinks; however, crossover success has not been common for the larger brands such as Monster and Rockstar. Red Bull, the leader of the energy drink category, launched an energy shot of its own in April 2009.
**Rigid bus**
Rigid bus:
A rigid bus (either a motor bus or trolleybus) is a vehicle used in public transportation services with a single, rigid chassis. A bus of this type is to be contrasted with an articulated or bi-articulated bus, which has two or more rigid sections linked by a pivoting joint, and with a trailer bus, which is formed from a bus-bodied semi-trailer pulled by a conventional tractor unit.
Rigid bus:
The term "rigid bus" is used mainly in British English and Australian English and usually only when distinguishing such buses from articulated buses, such as describing a fleet that includes both types. In the case of two-axle buses, which must be single-chassis, rigid vehicles, British English often refers to such vehicles as "two-axle" buses, only using the term "rigid" when referring to vehicles with three or more axles, which can be either rigid or articulated.
Rigid bus:
The term "rigid bus" is not used in American English, where the distinction is commonly made using the term "non-articulated" bus or, when the context is clear, "standard bus". However, the term "standard bus" can be confusing, because it is sometimes used, in other English-speaking countries, referring to a uniform bus design developed for and by a number of European bus manufacturers, in two model generations, between the 1960s and the end of the 20th century. The German VöV-Standard-Bus includes the Mercedes-Benz O305 and the Mercedes-Benz O405 types, each of which, in both rigid and articulated forms, was widely acquired and used by bus operators in English-speaking countries outside North America.Rigid buses may be of either single-deck or double-deck design, and may have either two axles or multi-axles. However, the expression "rigid bus" is seldom used to describe a double-decker bus, because very few double-decker buses have anything other than a rigid chassis.
Rigid bus:
Single-decker rigid buses are used mainly on bus lines with an average ridership (for example, as transit buses or regional buses on routes with normal levels of patronage), or as coaches.
**Tisotumab vedotin**
Tisotumab vedotin:
Tisotumab vedotin, sold under the brand name Tivdak, is an antibody-drug conjugate used to treat cervical cancer. It is a combination of tisotumab, a monoclonal antibody against tissue factor, and monomethyl auristatin E (MMAE), a potent inhibitor of cell division. It is administered by infusion into a vein. Tisotumab vedotin was approved for medical use in the United States in September 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication.
Adverse effects:
In the United States, Tivdak carries a black box warning for ocular toxicity, which occurs in up to 60% of treated patients. In clinical trials, the most common forms of ocular toxicity were dry eye, conjunctivitis, corneal damage, and blepharitis. Other common adverse effects include bleeding (occurring in approximately 60% of patients, most often nosebleed) and peripheral neuropathy (42% of patients). Like all drugs containing MMAE, tisotumab vedotin can cause inflammation of the lungs.
Mechanism of action:
The antibody portion of tisotumab vedotin (tisotumab) binds to and forms a complex with tissue factor, a molecule expressed on the surface of cancer cells. This complex is then taken up into the cell, where tisotumab vedotin is broken down by proteolytic cleavage, releasing MMAE, which stops the cell cycle and kills the cell by apoptosis.
History:
Tisotumab vedotin was developed by Genmab in Utrecht, the Netherlands, and Copenhagen, Denmark, with the code name TF-011-MMAE. In September 2021, tisotumab vedotin was granted accelerated approval by the United States Food and Drug Administration for use in recurrent or metastatic cervical cancer with disease progression on or after chemotherapy.
Society and culture:
Names Tisotumab vedotin is the international nonproprietary name. Tivdak is the brand name for tisotumab vedotin in the United States.
**Bell triangle**
Bell triangle:
In mathematics, the Bell triangle is a triangle of numbers analogous to Pascal's triangle, whose values count partitions of a set in which a given element is the largest singleton. It is named for its close connection to the Bell numbers, which may be found on both sides of the triangle, and which are in turn named after Eric Temple Bell. The Bell triangle has been discovered independently by multiple authors, beginning with Charles Sanders Peirce (1880) and including also Alexander Aitken (1933) and Cohn et al. (1962), and for that reason has also been called Aitken's array or the Peirce triangle.
Values:
Different sources give the same triangle in different orientations, some flipped from each other. In a format similar to that of Pascal's triangle, and in the order listed in the Online Encyclopedia of Integer Sequences, its first few rows are:
1
1 2
2 3 5
5 7 10 15
15 20 27 37 52
52 67 87 114 151 203
203 255 322 409 523 674 877
Construction:
The Bell triangle may be constructed by placing the number 1 in its first position. After that placement, the leftmost value in each row of the triangle is filled by copying the rightmost value in the previous row. The remaining positions in each row are filled by a rule very similar to that for Pascal's triangle: they are the sum of the two values to the left and upper left of the position.
Construction:
Thus, after the initial placement of the number 1 in the top row, it is the last position in its row and is copied to the leftmost position in the next row. The third value in the triangle, 2, is the sum of the two previous values above-left and left of it. As the last value in its row, the 2 is copied into the third row, and the process continues in the same way.
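The construction just described is easy to put into code (a short sketch; the helper name is ours):

```python
def bell_triangle(rows: int) -> list[list[int]]:
    """Build the first `rows` rows of the Bell triangle."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        row = [prev[-1]]                 # copy rightmost value of previous row
        for value in prev:
            row.append(row[-1] + value)  # sum of left and upper-left neighbours
        triangle.append(row)
    return triangle

for row in bell_triangle(5):
    print(row)
# [1]
# [1, 2]
# [2, 3, 5]
# [5, 7, 10, 15]
# [15, 20, 27, 37, 52]
```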
Combinatorial interpretation:
The Bell numbers themselves, on the left and right sides of the triangle, count the number of ways of partitioning a finite set into subsets, or equivalently the number of equivalence relations on the set.
Combinatorial interpretation:
Sun & Wu (2011) provide the following combinatorial interpretation of each value in the triangle. Following Sun and Wu, let An,k denote the value that is k positions from the left in the nth row of the triangle, with the top of the triangle numbered as A1,1. Then An,k counts the number of partitions of the set {1, 2, ..., n + 1} in which the element k + 1 is the only element of its set and each higher-numbered element is in a set of more than one element. That is, k + 1 must be the largest singleton of the partition.
Combinatorial interpretation:
For instance, the number 3 in the middle of the third row of the triangle would be labeled, in their notation, as A3,2, and counts the number of partitions of {1, 2, 3, 4} in which 3 is the largest singleton element. There are three such partitions:
{1}, {2, 4}, {3}
{1, 4}, {2}, {3}
{1, 2, 4}, {3}
The remaining partitions of these four elements either do not have 3 in a set by itself, or they have a larger singleton set {4}, and in either case are not counted in A3,2.
Combinatorial interpretation:
In the same notation, Sun & Wu (2011) augment the triangle with another diagonal to the left of its other values, consisting of the numbers
An,0 = 1, 0, 1, 1, 4, 11, 41, 162, ... (sequence A000296 in the OEIS),
counting partitions of the same set of n + 1 items in which only the first item is a singleton. Their augmented triangle is
1
0 1
1 1 2
1 2 3 5
4 5 7 10 15
11 15 20 27 37 52
41 52 67 87 114 151 203
162 203 255 322 409 523 674 877
This triangle may be constructed similarly to the original version of Bell's triangle, but with a different rule for starting each row: the leftmost value in each row is the difference of the rightmost and leftmost values of the previous row.
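Only the row-starting rule changes relative to the earlier sketch (again an illustrative helper of our own naming):

```python
def augmented_bell_triangle(rows: int) -> list[list[int]]:
    """Sun & Wu's augmented triangle: rows start with a difference, not a copy."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        row = [prev[-1] - prev[0]]       # difference of rightmost and leftmost
        for value in prev:
            row.append(row[-1] + value)  # interior rule is unchanged
        triangle.append(row)
    return triangle

for row in augmented_bell_triangle(5):
    print(row)
# [1]
# [0, 1]
# [1, 1, 2]
# [1, 2, 3, 5]
# [4, 5, 7, 10, 15]
```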
Combinatorial interpretation:
An alternative but more technical interpretation of the numbers in the same augmented triangle is given by Quaintance & Kwong (2013).
Diagonals and row sums:
The leftmost and rightmost diagonals of the Bell triangle both contain the sequence 1, 1, 2, 5, 15, 52, ... of the Bell numbers (with the initial element missing in the case of the rightmost diagonal). The next diagonal parallel to the rightmost diagonal gives the sequence of differences of two consecutive Bell numbers, 1, 3, 10, 37, ..., and each subsequent parallel diagonal gives the sequence of differences of previous diagonals.
Diagonals and row sums:
In this way, as Aitken (1933) observed, this triangle can be interpreted as implementing the Gregory–Newton interpolation formula, which finds the coefficients of a polynomial from the sequence of its values at consecutive integers by using successive differences. This formula closely resembles a recurrence relation that can be used to define the Bell numbers.
Diagonals and row sums:
The sums of each row of the triangle, 1, 3, 10, 37, ..., are the same sequence of first differences appearing in the second-from-right diagonal of the triangle. The nth number in this sequence also counts the number of partitions of n elements into subsets, where one of the subsets is distinguished from the others; for instance, there are 10 ways of partitioning three items into subsets and then choosing one of the subsets.
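The count of 10 can be verified by hand (a short check, added here for illustration):

```latex
% Partitions of a three-element set, with one block distinguished:
% 1 partition into one block   -> 1 choice of distinguished block,
% 3 partitions into two blocks -> 3 * 2 = 6 choices,
% 1 partition into three blocks -> 3 choices; in total
1\cdot 1 + 3\cdot 2 + 1\cdot 3 = 10.
```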
Related constructions:
A different triangle of numbers, with the Bell numbers on only one side, and with each number determined as a weighted sum of nearby numbers in the previous row, was described by Aigner (1999).
**Faith deconstruction**
Faith deconstruction:
Faith deconstruction, also known as deconstructing faith, evangelical deconstruction, the deconstruction movement, or simply deconstruction, is a phenomenon within American evangelicalism in which Christians rethink their faith and jettison previously held beliefs, sometimes to the point of no longer identifying as Christians. It is closely related to the exvangelical movement.
Description:
The term can have a range of meanings. Alisa Childers defines deconstruction as "the process of systematically dissecting and often rejecting the beliefs you grew up with." Tyler Huckabee, writing for Relevant magazine, defines it as "a process of re-examining the faith you grew up with." John Stonestreet and Timothy Padgett note that it is used both descriptively (covering everything from the deconversion of Kevin Max, through the soul searching of Derek Webb, to the theological revisions of Jen Hatmaker and Rob Bell) and prescriptively ("recommended, especially to those questioning what they've grown up with, as a courageous thing to do"). There is broad agreement that the term is derived from Jacques Derrida's philosophical concept of deconstruction. David Hayward says that he "co-opted the term" from Derrida, whose work he was reading at the time his beliefs started to erode. Notable advocates of faith deconstruction include internet comedy duo Rhett McLaughlin and Link Neal (who published multiple podcast episodes detailing their spiritual deconstruction), John D. Caputo (who in 2007 wrote What Would Jesus Deconstruct?: The Good News of Postmodernism for the Church), and Richard Rohr. Prominent former Christians who underwent deconstruction include Joshua Harris (who briefly offered a course on deconstruction), Abraham Piper, and Marty Sampson. As of February 2022 there were 293,026 posts on Instagram using the hashtag #deconstruction.
Responses:
After preaching a sermon in which he equated deconstruction with leaving the faith, Matt Chandler clarified that it "doesn't mean doubt or theological wrestle or struggling through church hurt." John Cooper has stated, "It is time that we declare war against this deconstruction Christian movement... There is nothing Christian about it. It is a false religion." On the other hand, Tyler Huckabee argues that it can result in "deconversion", or "in your faith looking more or less the same it always did", but "most often, it's somewhere in between—rethinking the things you've always believed and coming to a new, different understanding of parts of it." Huckabee goes on to suggest that Martin Luther's own theological revolution "fits into the paradigm of what researchers would call deconstruction today." Carl Trueman argues that the "(mis)use of the Derridean d-word gives the whole a specious veneer of intellectualism and a certain superannuated postmodern chic."
**Laughter**
Laughter:
Laughter is a pleasant physical reaction and emotion usually consisting of rhythmical, often audible contractions of the diaphragm and other parts of the respiratory system. It is a response to certain external or internal stimuli. Laughter can arise from activities such as being tickled, or from humorous stories or thoughts. Most commonly, it is considered an auditory expression of a number of positive emotional states, such as joy, mirth, happiness, or relief. On some occasions, however, it may be caused by contrary emotional states such as embarrassment, surprise, or confusion, as in nervous laughter or a courtesy laugh. Age, gender, education, language, and culture are all indicators as to whether a person will experience laughter in a given situation. Other than humans, some other species of primate (chimpanzees, gorillas and orangutans) show laughter-like vocalizations in response to physical contact such as wrestling, play chasing or tickling.
Laughter:
Laughter is a part of human behavior regulated by the brain, helping humans clarify their intentions in social interaction and providing an emotional context to conversations. Laughter is used as a signal for being part of a group—it signals acceptance and positive interactions with others. Laughter is sometimes seen as contagious, and the laughter of one person can itself provoke laughter from others as a positive feedback. The study of humor and laughter, and its psychological and physiological effects on the human body, is called gelotology.
Nature:
Laughter might be thought of as an audible expression or appearance of excitement, an inward feeling of joy and happiness. It may ensue from jokes, tickling, and other stimuli completely unrelated to psychological state, such as nitrous oxide. One group of researchers speculated that noises from infants as early as 16 days old may be vocal laughing sounds or laughter. However, the weight of the evidence supports the appearance of such sounds at 15 weeks to four months of age.
Nature:
Laughter researcher Robert Provine said: "Laughter is a mechanism everyone has; laughter is part of universal human vocabulary. There are thousands of languages, hundreds of thousands of dialects, but everyone speaks laughter in pretty much the same way." Babies have the ability to laugh before they ever speak. Children who are born blind and deaf still retain the ability to laugh. Provine argues that "Laughter is primitive, an unconscious vocalization," and that it probably is genetic. In a study of the "Giggle Twins", two happy twins who were separated at birth and only reunited 43 years later, Provine reports that "until they met each other, neither of these exceptionally happy ladies had known anyone who laughed as much as they did." They reported this even though they had been brought together by their adoptive parents, who they indicated were "undemonstrative and dour". He indicates that the twins "inherited some aspects of their laugh sound and pattern, readiness to laugh, and maybe even taste in humor". Scientists have noted the similarity in forms of laughter induced by tickling among various primates, which suggests that laughter derives from a common origin among primate species. The spotted hyena, another species of animal, was also known as the laughing hyena because of the way it sounds when it communicates.
Nature:
A very rare neurological condition has been observed whereby the sufferer is unable to laugh out loud, a condition known as aphonogelia.
Brain:
Neurophysiology indicates that laughter is linked with the activation of the ventromedial prefrontal cortex, which produces endorphins. Scientists have shown that parts of the limbic system are involved in laughter. This system is involved in emotions and helps us with functions necessary for humans' survival. The structures in the limbic system that are involved in laughter are the hippocampus and the amygdala. The December 7, 1984, Journal of the American Medical Association describes the neurological causes of laughter as follows: "Although there is no known 'laugh center' in the brain, its neural mechanism has been the subject of much, albeit inconclusive, speculation. It is evident that its expression depends on neural paths arising in close association with the telencephalic and diencephalic centers concerned with respiration. Wilson considered the mechanism to be in the region of the mesial thalamus, hypothalamus, and subthalamus. Kelly and co-workers, in turn, postulated that the tegmentum near the periaqueductal grey contains the integrating mechanism for emotional expression. Thus, supranuclear pathways, including those from the limbic system that Papez hypothesised to mediate emotional expressions such as laughter, probably come into synaptic relation in the reticular core of the brain stem. So while purely emotional responses such as laughter are mediated by subcortical structures, especially the hypothalamus, and are stereotyped, the cerebral cortex can modulate or suppress them." Some drugs are well known for their laughter-facilitating properties (e.g. ethanol and cannabis), while others, like salvinorin A (the active ingredient of Salvia divinorum), can even induce bursts of uncontrollable laughter. A research article on the psycho-evolution of laughter was published December 1, 2000 (Panksepp 2000).
Health:
A link between laughter and the healthy function of blood vessels was first reported in 2005 by researchers at the University of Maryland Medical Center, who found that laughter causes dilation of the inner lining of blood vessels, the endothelium, and increases blood flow. Drs. Michael Miller (University of Maryland) and William Fry (Stanford) theorize that beta-endorphin-like compounds released by the hypothalamus activate receptors on the endothelial surface to release nitric oxide, thereby resulting in dilation of vessels. Other cardioprotective properties of nitric oxide include reduction of inflammation and decreased platelet aggregation. Laughter has various proven beneficial biochemical effects. It has been shown to lead to reductions in stress hormones such as cortisol and epinephrine. When laughing, the brain releases endorphins that can relieve some physical pain. Laughter also boosts the number of antibody-producing cells and enhances the effectiveness of T-cells, leading to a stronger immune system. A 2000 study found that people with heart disease were 40 percent less likely to laugh and be able to recognize humor in a variety of situations, compared to people of the same age without heart disease. Anecdotally, journalist and author Norman Cousins developed in 1964 a treatment program for his ankylosing spondylitis and collagen disease consisting of large doses of Vitamin C alongside laughter induced by comic films, including those of the Marx Brothers. "I made the joyous discovery that ten minutes of genuine belly laughter had an anesthetic effect and would give me at least two hours of pain-free sleep," he reported. "When the pain-killing effect of the laughter wore off, we would switch on the motion picture projector again and not infrequently, it would lead to another pain-free interval."
Communication:
A number of studies using methods of conversation analysis and discourse analysis have documented the systematic workings of laughter in a variety of interactions, from casual conversations to interviews, meetings, and therapy sessions. Working with recorded interactions, researchers have created detailed transcripts that indicate not only the presence of laughter but also features of its production and placement.
Communication:
These studies challenge several widely held assumptions about the nature of laughter. Contrary to notions that it is spontaneous and involuntary, research documents that laughter is sequentially organized and precisely placed relative to surrounding talk. Far more than merely a response to humor, laughter often works to manage delicate and serious moments. More than simply an external behavior "caused" by an inner state, laughter is highly communicative and helps accomplish actions and regulate relationships.
Causes:
Common causes for laughter are sensations of joy and humor; however, other situations may cause laughter as well.
Causes:
A general theory that explains laughter is called the relief theory. Sigmund Freud summarized it in his theory that laughter releases tension and "psychic energy". This theory is one of the justifications of the beliefs that laughter is beneficial for one's health. This theory explains why laughter can be used as a coping mechanism when one is upset, angry or sad.
Causes:
Philosopher John Morreall theorizes that human laughter may have its biological origins as a kind of shared expression of relief at the passing of danger. Friedrich Nietzsche, by contrast, suggested laughter to be a reaction to the sense of existential loneliness and mortality that only humans feel.
Causes:
For example: a joke creates an inconsistency, and the audience automatically tries to understand what the inconsistency means; if they are successful in solving this 'cognitive riddle' and they realize that the surprise was not dangerous, they laugh with relief. Otherwise, if the inconsistency is not resolved, there is no laugh, as Mack Sennett pointed out: "when the audience is confused, it doesn't laugh." This is one of the basic laws of a comedian, referred to as "exactness". Sometimes, however, the inconsistency may be resolved and there may still be no laugh: because laughter is a social mechanism, an audience that does not feel it is in danger may not laugh. In addition, the extent of the inconsistency (and aspects of its timing and rhythm) has to do with the amount of danger the audience feels, and how hard or long they laugh.
Causes:
Laughter can also be brought on by tickling. Although most people find it unpleasant, being tickled often causes heavy laughter, thought to be an (often uncontrollable) reflex of the body.
Structure and anatomy:
A normal laugh has the structure of "ha-ha-ha" or "ho-ho-ho". It is unnatural, and one is physically unable, to have a laugh structure of "ha-ho-ha-ho". The usual variations of a laugh most often occur in the first or final note in a sequence; therefore, "ho-ha-ha" or "ha-ha-ho" laughs are possible. Normal note durations with unusually long or short "inter-note intervals" do not occur, owing to the limitations of the vocal cords. This basic structure allows one to recognize a laugh despite individual variants. It has also been determined that the eyes moisten during laughter as a reflex of the tear glands.
Negative aspects:
Laughter is not always a pleasant experience and is associated with several negative phenomena. Excessive laughter can lead to cataplexy, and unpleasant laughter spells, excessive elation, and fits of laughter can all be considered negative aspects of laughter. Unpleasant laughter spells, or "sham mirth", usually occur in people who have a neurological condition, including patients with pseudobulbar palsy, multiple sclerosis and Parkinson's disease. These patients appear to be laughing out of amusement but report that they are feeling undesirable sensations "at the time of the punch line".
Negative aspects:
Excessive elation is a common symptom associated with bipolar disorder psychoses and mania/hypomania. Those with schizophrenic psychoses seem to experience the opposite—they do not understand humor or get any joy out of it. A fit describes an abnormal time when one cannot control the laughter or one's body, sometimes leading to seizures or a brief period of unconsciousness. Some believe that fits of laughter represent a form of epilepsy.
Therapy:
Laughter has been used as a therapeutic tool for many years because it is a natural form of medicine. Laughter is available to everyone and it provides benefits to a person's physical, emotional, and social well-being. Laughter therapy can relieve stress and relax the whole body. It can also boost the immune system and release endorphins to relieve pain. Additionally, laughter can help prevent heart disease by increasing blood flow and improving the function of blood vessels. Emotional benefits include diminishing anxiety or fear, improving overall mood, and adding joy to one's life. In a preliminary study of dust mite allergy sufferers, laughter was also found to reduce allergic reactions. Laughter therapy also has social benefits, such as strengthening relationships, improving teamwork, reducing conflicts, and making oneself more attractive to others. Therefore, whether a person is trying to cope with a terminal illness or just trying to manage stress or anxiety, laughter therapy can be a significant enhancement to their life. Ramon Mora-Ripoll, in his study The Therapeutic Value of Laughter in Medicine, stated that laughter therapy is an inexpensive and simple tool that can be used in patient care, but one that is only beneficial when experienced and shared: caregivers need to recognize the importance of laughter and possess the right attitude to pass it on. He went on to say that since this type of therapy is not widely practiced, health care providers will have to learn how to use it effectively. In another survey, researchers looked at how occupational therapists and other caregivers viewed and used humor with patients as a means of therapy. Many agreed that while they believed it was beneficial to patients, the proper training to use it effectively was lacking. Even though laughter and humor have been used therapeutically in medical conditions, according to Mora-Ripoll there was not enough data to clearly establish that laughter could be used as an overall means of healing; additional research was still needed, since "well-designed randomized controlled trials have not been conducted to date validating the therapeutic efficacy of laughter." In 2017, an institution in Japan conducted an open-label randomized controlled trial to evaluate the effects of laughter therapy on quality of life in patients with cancer. The study used laughter yoga, comedy, clowns, and jokes. The results showed that laughter therapy helped improve quality of life and cancer symptoms in some areas for cancer survivors; improvements were seen in depression, anxiety, and stress levels, with limited harmful side effects. The authors suggested that laughter therapy be used in conjunction with other cancer treatments.
Research and philosophy:
Laughter in literature, although considered understudied by some, is a subject that has received attention in the written word for millennia. The use of humor and laughter in literary works has been studied and analyzed by many thinkers and writers, from the Ancient Greek philosophers onward. Henri Bergson's Laughter: An Essay on the Meaning of the Comic (Le rire, 1901) is a notable 20th-century contribution.
Research and philosophy:
Ancient (Herodotus):
For Herodotus, laughers can be distinguished into three types: those who are innocent of wrongdoing but ignorant of their own vulnerability, those who are mad, and those who are overconfident. According to Donald Lateiner, Herodotus reports about laughter for valid literary and historiological reasons. "Herodotus believes either that both nature (better, the gods' direction of it) and human nature coincide sufficiently, or that the latter is but an aspect or analogue of the former, so that to the recipient the outcome is suggested." When reporting laughter, Herodotus does so in the conviction that it tells the reader something about the future and/or the character of the person laughing. It is also in this sense that it is not coincidental that in about 80% of the instances when Herodotus speaks about laughter it is followed by a retribution. "Men whose laughter deserves report are marked, because laughter connotes scornful disdain, disdain a feeling of superiority, and this feeling and the actions which stem from it attract the wrath of the gods."
Modern:
There is a wide range of experiences with laughter. A 1999 study by two humor researchers asked 80 people to keep a daily laughter record, and found they laughed an average of 18 times per day. However, their study also found a wide range, with some people laughing as many as 89 times per day, and others laughing as few as 0 times per day.
Research and philosophy:
Hobbes:
Thomas Hobbes wrote, "The passion of laughter is nothing else but sudden glory arising from sudden conception of some eminency in ourselves, by comparison with the infirmity of others, or with our own formerly."
Schopenhauer:
Philosopher Arthur Schopenhauer devotes the 13th chapter of the first part of his major work, The World as Will and Representation, to laughter.
Research and philosophy:
Nietzsche:
Friedrich Nietzsche distinguishes two different purposes for the use of laughter. In a positive sense, "man uses the comical as a therapy against the restraining jacket of logic, morality and reason. He needs from time to time a harmless demotion from reason and hardship and in this sense laughter has a positive character for Nietzsche." Laughter can, however, also have a negative connotation when it is used for the expression of social conflict. This is expressed, for instance, in The Gay Science: "Laughter: Laughter means to be schadenfroh, but with clear conscience." "Possibly Nietzsche's works would have had a totally different effect, if the playful, ironical and joking in his writings would have been factored in better."
Bergson:
In Laughter: An Essay on the Meaning of the Comic, French philosopher Henri Bergson, renowned for his philosophical studies on materiality, memory, life and consciousness, tries to determine the laws of the comic and to understand the fundamental causes of comic situations. His method consists in determining the causes of the comic instead of analyzing its effects. He also deals with laughter in relation to human life, collective imagination and art, to have a better knowledge of society. One of the theories of the essay is that laughter, as a collective activity, has a social and moral role, in forcing people to eliminate their vices. It is a factor of uniformity of behaviours, as it condemns ludicrous and eccentric behaviours.
Research and philosophy:
Ludovici:
Anthony Ludovici developed the thoughts of Hobbes even further in The Secret of Laughter. His conviction is that there is something sinister in laughter, and that the modern omnipresence of humour and the idolatry of it are signs of societal weakness, as an instinctive resort to humour has become a sort of escapism from responsibility and action. Ludovici considered laughter to be an evolutionary trait, and he offered many examples of different triggers for laughter, each with its own distinct explanation.
Research and philosophy:
Bellieni:
Carlo Bellieni examined laughter in an essay published in New Ideas in Psychology. He wrote that laughter can be stripped back to a three-step process. First, it needs a situation that seems odd and induces a sense of incongruity (bewilderment or panic). Second, the worry or stress the incongruous situation has provoked must be worked out and overcome (resolution). Third, the actual release of laughter acts as an all-clear siren to alert bystanders (relief) that they are safe.
**Artisanal food**
Artisanal food:
Artisanal food encompasses breads, cheeses, fruit preserves, cured meats, beverages, oils, and vinegars that are made by hand using traditional methods by skilled craftworkers, known as food artisans. The foodstuff materials from farmers and backyard growers can include fruit, grains and flours, milk for cheese, cured meats, fish, beverages, oils, and vinegars. The movement is focused on providing farm-to-fork foods with locally sourced products that benefit the consumer, small-scale growers and producers, and the local economy.
Food artisans:
Food artisans produce foods and edible foodstuffs that are not mass produced, but rather made by hand. These include cheeses, breads and baked goods, charcuterie and other foods that involve preservation or fermentation, home preservation or canning processes, and fruit preserves, cured meats, beverages, oils, and vinegars. Fermentation or otherwise controlling the preservation environment for beneficial microorganisms can be utilized for vinegars, cheeses, cured meats, wine, oolong tea, kimchi and other examples. An artisan food item is usually developed and produced over a long period of time and consumed relatively close to where the food is created.
Legislation:
In 2009, the Food Safety Enhancement Act was proposed and passed the House of Representatives, but did not pass the Senate. The measure was renegotiated and became known as the Food Safety Modernization Act (FSMA). On 4 January 2011, President Barack Obama signed the bill into law.
Tester-Hagan Amendment:
Senator Jon Tester (D-MT) and Senator Kay Hagan (D-NC) introduced two amendments to the FSMA that removed local food growers and food processors from federal oversight. These growers and producers would remain under the jurisdiction of state and local health and sanitation laws, rules, and regulations.
Controversy:
As of 2016, there was no published official standard or definition for artisan foods. A working definition can be gleaned from the Tester-Hagan Amendment, which stated that artisanal food producers are constrained to "make less than $500,000 a year and sell greater than 50% of their products direct to consumers in the same state and within a 400-mile radius". The advertising and marketing industries have latched on to the trendy word "artisanal" and now have artisanal products on supermarket shelves and offerings from local fast food chains. Dunkin' Donuts came out with an "artisanal" bagel, Domino's Pizza dished out an "artisanal" pizza, Tostitos served up "artisanal" chips, McDonald's offered an "artisan" bun, Wendy's introduced an "artisan" egg sandwich, and Subway provided "sandwich artisans" to prepare lunch. In April 2012, Davidovich Bagels, an artisanal maker of hand-rolled bagels in NYC, filed a federal complaint claiming false advertising against Dunkin' Donuts, seeking to have them cease and desist from claiming that their commercially manufactured bagels were "Artisan". The case brought international attention to the meaning of the word in commerce and the parameters of representations to the consuming public.
Points of distribution:
Farmers' markets, either temporary or permanent, are a tremendous resource for consumers to procure artisanal foods. They exist in many communities in the United States, Canada, the United Kingdom, and throughout the European Union countries.
**Minkowski content**
Minkowski content:
The Minkowski content (named after Hermann Minkowski), or the boundary measure, of a set is a basic concept that uses concepts from geometry and measure theory to generalize the notions of length of a smooth curve in the plane, and area of a smooth surface in space, to arbitrary measurable sets. It is typically applied to fractal boundaries of domains in the Euclidean space, but it can also be used in the context of general metric measure spaces.
Minkowski content:
It is related to, although different from, the Hausdorff measure.
Definition:
For A ⊂ Rⁿ and each integer m with 0 ≤ m ≤ n, the m-dimensional upper Minkowski content is

$$M^{*m}(A) = \limsup_{r \to 0^{+}} \frac{\mu\left(\{x : d(x,A) < r\}\right)}{\alpha(n-m)\, r^{n-m}}$$

and the m-dimensional lower Minkowski content is defined as

$$M_{*}^{m}(A) = \liminf_{r \to 0^{+}} \frac{\mu\left(\{x : d(x,A) < r\}\right)}{\alpha(n-m)\, r^{n-m}},$$

where α(n − m)r^{n−m} is the volume of the (n − m)-ball of radius r and μ is the n-dimensional Lebesgue measure. If the upper and lower m-dimensional Minkowski contents of A are equal, then their common value is called the Minkowski content M^m(A).
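As a sanity check on the definition (a standard worked example, not from this article), take A to be the unit circle S¹ in R², so n = 2 and m = 1. For 0 < r < 1 the r-neighbourhood of A is the annulus of radii 1 − r and 1 + r, and α(1)r = 2r is the volume (length) of a 1-ball of radius r, so

$$M^{1}(S^{1}) = \lim_{r \to 0^{+}} \frac{\pi\left[(1+r)^{2} - (1-r)^{2}\right]}{2r} = \lim_{r \to 0^{+}} \frac{4\pi r}{2r} = 2\pi,$$

recovering the circumference of the circle, as the definition intends.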
Properties:
The Minkowski content is (generally) not a measure. In particular, the m-dimensional Minkowski content in Rn is not a measure unless m = 0, in which case it is the counting measure. Indeed, the Minkowski content assigns the same value to a set A as to its closure, so it cannot be countably additive: for example, the rationals in [0, 1] have the same closure as [0, 1] itself, and hence the same 1-dimensional Minkowski content (namely 1), even though each individual point has content 0.
If A is a closed m-rectifiable set in Rn, given as the image of a bounded set from Rm under a Lipschitz function, then the m-dimensional Minkowski content of A exists, and is equal to the m-dimensional Hausdorff measure of A.
**Orofaciodigital syndrome 1**
Orofaciodigital syndrome 1:
Orofaciodigital syndrome 1 (OFD1), also called Papillon-League and Psaume syndrome, is an X-linked congenital disorder characterized by malformations of the face, oral cavity, and digits with polycystic kidney disease and variable involvement of the central nervous system.
Cause:
Orofaciodigital syndrome type 1 is caused by mutations in the OFD1 gene. The OFD1 protein localizes to both centrosomes and basal bodies, which suggests that this syndrome falls into the broad category of ciliary diseases. Cilia are present in many cell types throughout the human body, and cilia defects adversely affect numerous critical developmental signaling pathways. Other types include:
OMIM: 252100 Mohr syndrome; Orofaciodigital syndrome 2 at NIH's Office of Rare Diseases
OMIM: 258860 Orofaciodigital syndrome 4 at NIH's Office of Rare Diseases
OMIM: 300238 Orofaciodigital syndrome, Shashi type at NIH's Office of Rare Diseases
OMIM: 277170 Varadi Papp syndrome; OFD6 at NIH's Office of Rare Diseases
Relation to other rare genetic disorders:
Recent findings in genetic research have suggested that a large number of genetic disorders, both genetic syndromes and genetic diseases, that were not previously identified in the medical literature as related may, in fact, be highly related in the genotypical root cause of these widely varying, phenotypically observed disorders. Orofaciodigital syndrome has been found to be a ciliopathy. Other known ciliopathies include primary ciliary dyskinesia, Bardet–Biedl syndrome, polycystic kidney disease and polycystic liver disease, nephronophthisis, Alström syndrome, Meckel–Gruber syndrome and some forms of retinal degeneration.
Diagnosis:
Orofaciodigital syndrome type 1 is diagnosed through genetic testing. Symptoms include oral features such as a split tongue, benign tumors on the tongue, cleft palate, hypodontia, and other dental abnormalities. Other facial symptoms include hypertelorism and micrognathia. Bodily abnormalities such as webbed, short, joined, or abnormally curved fingers and toes are also symptoms. The most frequent symptoms are accessory oral frenulum, broad alveolar ridges, frontal bossing, high palate, hypertelorism, lobulated tongue, median cleft lip, and wide nasal bridge. Genetic screening of the OFD1 gene is used to officially diagnose the syndrome; a mutation is detected in 85% of individuals suspected of having orofaciodigital syndrome type 1.
Management:
Orofaciodigital syndrome type 1 can be treated with reconstructive surgery of the affected parts of the body. This may include surgery for cleft palate, tongue nodules, additional teeth, and accessory frenulae, as well as orthodontia for malocclusion. Routine treatment for patients with renal disease and seizures may also be necessary. Speech therapy and special education during later development may also be used as management.
**CLDN22**
CLDN22:
Claudin-22 is a protein that in humans is encoded by the CLDN22 gene. It belongs to the group of claudins.
**Thermal cycler**
Thermal cycler:
The thermal cycler (also known as a thermocycler, PCR machine or DNA amplifier) is a laboratory apparatus most commonly used to amplify segments of DNA via the polymerase chain reaction (PCR). Thermal cyclers may also be used in laboratories to facilitate other temperature-sensitive reactions, including restriction enzyme digestion or rapid diagnostics. The device has a thermal block with holes where tubes holding the reaction mixtures can be inserted. The cycler then raises and lowers the temperature of the block in discrete, pre-programmed steps.
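As an illustration of those "discrete, pre-programmed steps", the sketch below (Python) shows one way a cycler's controller could represent and step through a three-step PCR program. The temperatures, hold times, and cycle count are illustrative textbook-style values, not a protocol taken from this article.

```python
# Minimal sketch of a thermal-cycler program: an initial denaturation,
# a repeated three-step cycle, and a final extension. All temperatures
# and hold times below are illustrative, not a validated protocol.
PROGRAM = {
    "initial_denaturation": (95.0, 120),   # (temperature in C, hold in seconds)
    "cycle": [
        ("denature", 95.0, 30),
        ("anneal",   55.0, 30),            # annealing temp depends on the primers
        ("extend",   72.0, 60),
    ],
    "cycles": 30,
    "final_extension": (72.0, 300),
}

def run(program):
    """Print the sequence of block set-points the controller would step through."""
    temp, hold = program["initial_denaturation"]
    print(f"hold block at {temp:.1f} C for {hold} s (initial denaturation)")
    for n in range(1, program["cycles"] + 1):
        for step, temp, hold in program["cycle"]:
            print(f"cycle {n:2d}: {step:8s} {temp:.1f} C for {hold} s")
    temp, hold = program["final_extension"]
    print(f"hold block at {temp:.1f} C for {hold} s (final extension)")

if __name__ == "__main__":
    run(PROGRAM)
```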
History:
The earliest thermal cyclers were designed for use with the Klenow fragment of DNA polymerase I. Since this enzyme is destroyed during each heating step of the amplification process, new enzyme had to be added every cycle. This led to a cumbersome machine based on an automated pipettor, with open reaction tubes. Later, the PCR process was adapted to the use of thermostable DNA polymerase from Thermus aquaticus, which greatly simplified the design of the thermal cycler. While in some old machines the block is submerged in an oil bath to control temperature, in modern PCR machines a Peltier element is commonly used. Quality thermal cyclers often contain silver blocks to achieve fast temperature changes and uniform temperature throughout the block. Other cyclers have multiple blocks with high heat capacity, each of which is kept at a constant temperature, and the reaction tubes are moved between them by means of an automated process. Miniaturized thermal cyclers have been created in which the reaction mixture moves via channel through hot and cold zones on a microfluidic chip. Thermal cyclers designed for quantitative PCR have optical systems which enable fluorescence to be monitored during reaction cycling.
Modern innovations:
Modern thermal cyclers are equipped with a heated lid that presses against the lids of the reaction tubes. This prevents condensation of water from the reaction mixtures on the insides of the lids. Traditionally, a layer of mineral oil was used for this purpose. Some thermal cyclers are equipped with a fully adjustable heated lid to allow for nonstandard or diverse types of PCR plasticware.
Modern innovations:
Some thermal cyclers are equipped with multiple blocks allowing several different PCRs to be carried out simultaneously. Some models also have a gradient function to allow for different temperatures in different parts of the block. This is particularly useful when testing suitable annealing temperatures for PCR primers.
**Correlation function**
Correlation function:
A correlation function is a function that gives the statistical correlation between random variables, contingent on the spatial or temporal distance between those variables. If one considers the correlation function between random variables representing the same quantity measured at two different points, then this is often referred to as an autocorrelation function, which is made up of autocorrelations. Correlation functions of different random variables are sometimes called cross-correlation functions to emphasize that different variables are being considered and because they are made up of cross-correlations.
Correlation function:
Correlation functions are a useful indicator of dependencies as a function of distance in time or space, and they can be used to assess the distance required between sample points for the values to be effectively uncorrelated. In addition, they can form the basis of rules for interpolating values at points for which there are no observations.
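As a sketch of how one might estimate an autocorrelation function from data and read off the distance at which samples become effectively uncorrelated (Python with NumPy; the AR(1) test signal and the 1/e cutoff are illustrative choices, not from this article):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation corr(X(t), X(t + lag)) for lag = 0 .. max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# Illustrative test signal: an AR(1) process x[t] = phi * x[t-1] + noise,
# whose true autocorrelation decays as phi**lag.
rng = np.random.default_rng(0)
phi, n = 0.9, 50_000
noise = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

acf = autocorr(x, 50)
# Distance at which samples are "effectively uncorrelated" (here: acf < 1/e).
decorrelation_lag = np.argmax(acf < 1.0 / np.e)
print(f"acf[1] ~ {acf[1]:.3f} (theory: {phi})")
print(f"samples roughly decorrelate after ~{decorrelation_lag} steps")
```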
Correlation functions used in astronomy, financial analysis, econometrics, and statistical mechanics differ only in the particular stochastic processes they are applied to. In quantum field theory there are correlation functions over quantum distributions.
Definition:
For possibly distinct random variables X(s) and Y(t) at different points s and t of some space, the correlation function is corr (X(s),Y(t)), where corr is described in the article on correlation. In this definition, it has been assumed that the stochastic variables are scalar-valued. If they are not, then more complicated correlation functions can be defined. For example, if X(s) is a random vector with n elements and Y(t) is a vector with q elements, then an n×q matrix of correlation functions is defined with i,j element corr (Xi(s),Yj(t)).
Definition:
When n = q, the trace of this matrix is sometimes the object of interest. If the probability distributions have any target space symmetries, i.e. symmetries in the value space of the stochastic variable (also called internal symmetries), then the correlation matrix will have induced symmetries. Similarly, if there are symmetries of the space (or time) domain in which the random variables exist (also called spacetime symmetries), then the correlation function will have corresponding space or time symmetries. Examples of important spacetime symmetries are:
translational symmetry, which yields C(s, s′) = C(s − s′), where s and s′ are to be interpreted as vectors giving the coordinates of the points;
rotational symmetry, which, in addition to the above, gives C(s, s′) = C(|s − s′|), where |x| denotes the norm of the vector x (for actual rotations this is the Euclidean or 2-norm).
Higher-order correlation functions are often defined. A typical correlation function of order n is (the angle brackets represent the expectation value)

$$C_{i_1 i_2 \cdots i_n}(s_1, s_2, \ldots, s_n) = \left\langle X_{i_1}(s_1)\, X_{i_2}(s_2) \cdots X_{i_n}(s_n) \right\rangle.$$
Definition:
If the random vector has only one component variable, then the indices i,j are redundant. If there are symmetries, then the correlation function can be broken up into irreducible representations of the symmetries — both internal and spacetime.
Properties of probability distributions:
With these definitions, the study of correlation functions is similar to the study of probability distributions. Many stochastic processes can be completely characterized by their correlation functions; the most notable example is the class of Gaussian processes. Probability distributions defined on a finite number of points can always be normalized, but when these are defined over continuous spaces, then extra care is called for. The study of such distributions started with the study of random walks and led to the notion of the Itō calculus.
Properties of probability distributions:
The Feynman path integral in Euclidean space generalizes this to other problems of interest to statistical mechanics. Any probability distribution which obeys a condition on correlation functions called reflection positivity leads to a local quantum field theory after Wick rotation to Minkowski spacetime (see Osterwalder-Schrader axioms). The operation of renormalization is a specified set of mappings from the space of probability distributions to itself. A quantum field theory is called renormalizable if this mapping has a fixed point which gives a quantum field theory.
**4-Fluoropethidine**
4-Fluoropethidine:
4-Fluoropethidine is a drug that is a derivative of pethidine (meperidine) which combines pethidine's opioid analgesic effects with increased monoamine reuptake inhibition. It is around 50% less potent than pethidine as an opioid analgesic, but conversely is 50% more potent as a dopamine reuptake inhibitor, with other derivatives such as the 4-iodo and 3,4-dichloro analogues being more potent dopamine reuptake inhibitors still. However, none of these compounds substitute for cocaine or produce stimulant effects in animals, suggesting that they still act primarily as opioid analgesic drugs in practice. Its action and degree of relation to pethidine mean that it may be controlled in those countries which have laws about controlled-substance analogues; it is not itself listed in the Controlled Substances Act of 1970.
**Stomach rumble**
Stomach rumble:
A stomach rumble, also known as a bowel sound, peristaltic sound, abdominal sound, bubble gut or borborygmus (plural borborygmi), is a rumbling, growling or gurgling noise produced by movement of the contents of the gastrointestinal tract as they are propelled through the small intestine by a series of muscle contractions called peristalsis. A trained healthcare provider can listen to these intestinal noises with a stethoscope, but they may be audible enough to be heard with the naked ear as the fluid and gas move forward in the intestines (in the vicinity of, but not actually within, the stomach). A lack of bowel sounds is indicative of ileus, intestinal obstruction, or some other serious pathology.
Etymology:
The scientific name borborygmus is related to the 16th-century French word borborygme, itself from Latin, ultimately from Ancient Greek βορβορυγμός (borborygmós). The Greek term is probably onomatopoetic in origin.
Other causes:
Other causes of stomach rumbles: Incomplete digestion of food can lead to excess gas in the intestine. In humans, this can be due to incomplete digestion of carbohydrate-containing foods, including milk and other dairy products (lactose intolerance or the use of α-glucosidase inhibitors by diabetics), gluten (protein in wheat, barley, and rye) (coeliac disease), fruit, vegetables, beans, legumes, and high-fiber whole grains. In rare instances, excessive abdominal noise may be a sign of digestive disease, especially when accompanied by abdominal bloating, abdominal pain, diarrhea or constipation. Some examples of diseases that may be associated with this symptom include carcinoid neoplasm and coeliac sprue.
Other causes:
Louder rumbles may occur when one is hungry. The sound of air moving around the lumen of the stomach is amplified by the empty space. Around two hours after the stomach has been emptied, it sends signals to the brain, which tells the digestive muscles to restart peristalsis in a wave called the migrating motor complex. Food left behind after the first cycle is swept up, and the vibrations of the empty stomach cause hunger. Appetite plays a big role in this situation. Peristalsis recurs about every hour, and one's appetite may cause 10- to 20-minute food cravings.
Other causes:
Stomach rumbles can form further along the gastrointestinal system when air is swallowed while talking, eating, and drinking. This phenomenon occurs in most people and is typical.
Diseases and conditions:
Celiac disease is a condition that prevents the small intestine from absorbing parts of food that are needed to stay healthy. Consuming food containing gluten is dangerous for people with this disease: intestinal villi help to absorb nutrients from food, but when gluten is consumed, the immune system attacks these villi. Symptoms may include abdominal pain, nausea, and bulky or foul-smelling stools.
Diseases and conditions:
Colitis is swelling of the large intestine. The many different forms of colitis include cytomegalovirus or Cryptosporidium infection, and necrotizing and pseudomembranous colitis. The usual causes of colitis are infection and lack of blood flow. Symptoms may include bloody stools, chills, dehydration, diarrhea, and fever.
Diverticulitis is a condition where small bulging sacs, usually found in the large intestine, become inflamed or infected. The most probable cause is a low-fiber diet, possibly a result of eating processed food. Diverticulitis is usually seen in about half the American population over the age of 60. Symptoms may include bloating, fever, and nausea.
Irritable bowel syndrome, a disorder in the lower intestinal tract, is usually accompanied by other symptoms, such as abdominal pain and diarrhea. It is more common in women and it usually occurs during early adulthood. There are many risk factors such as emotional stress and a low-fiber diet. These can all cause stomach disorders.
Nonmedical usage:
The word borborygmic has been used in literature to describe noisy plumbing. In Ada, Vladimir Nabokov wrote: "All the toilets and waterpipes in the house had been suddenly seized with borborygmic convulsions".
In A Long Way Down (New York: Harper, 1959, p. 54), Elizabeth Fenwick wrote: "The room was very quiet, except for its borborygmic old radiator". Graham Greene's short story "Alas, Poor Maling" tells the tale of a luckless individual whose borborygmus takes the form of irritating noises that he has recently heard.
Nonmedical usage:
The word borborygmus has also been used in journalism to describe political turbulence. In an article in The Atlantic, Graeme Wood used the word to describe the effects of mass refugee migration into Europe: "Central Europe had to digest a massive refugee flow from Syria and Afghanistan, and the resulting borborygmus upended European politics and enabled a populist wave that has yet to crest."
**Slitherlink**
Slitherlink:
Slitherlink (also known as Fences, Takegaki, Loop the Loop, Loopy, Ouroboros, Suriza, Rundweg and Dotty Dilemma) is a logic puzzle developed by publisher Nikoli.
Rules:
Slitherlink is played on a rectangular lattice of dots. Some of the squares formed by the dots have numbers inside them. The objective is to connect horizontally and vertically adjacent dots so that the lines form a simple loop with no loose ends. In addition, the number inside a square represents how many of its four sides are segments in the loop.
Rules:
Other types of planar graphs can be used in lieu of the standard grid, with varying numbers of edges per vertex or vertices per polygon. These patterns include snowflake, Penrose, Laves and Altair tilings. These add complexity by varying the number of possible paths from an intersection, and/or the number of sides to each polygon; but similar rules apply to their solution.
Solution methods:
Notation:
Whenever the number of lines around a cell matches the number in the cell, the other potential lines must be eliminated. This is usually indicated by marking an X on lines known to be empty.
Solution methods:
Another useful notation when solving Slitherlink is a ninety degree arc between two adjacent lines, to indicate that exactly one of the two must be filled. A related notation is a double arc between adjacent lines, indicating that both or neither of the two must be filled. These notations are not necessary to the solution, but can be helpful in deriving it.
Solution methods:
Many of the methods below can be broken down into two simpler steps by use of arc notation.
Solution methods:
Exactly 2 or 0 lines at each point:
A key to many deductions in Slitherlink is that every point has either exactly two lines connected to it, or no lines. So if a point in the centre of the grid, not at an edge or corner, has three incoming lines which are X'd out, the fourth must also be X'd out, because the point cannot have just one line: the line would have no exit route from that point. Similarly, if a point on the edge of the grid, not at a corner, has two incoming lines which are X'd out, the third must also be X'd out. And if a corner of the grid has one incoming line which is X'd out, the other must also be X'd out. Application of this simple rule leads to increasingly complex deductions; recognition of these simple patterns will help greatly in solving Slitherlink puzzles.
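This degree rule is simple to check mechanically. The sketch below (Python) verifies it for a candidate set of drawn segments; the representation of dots as (row, column) pairs is an illustrative choice, not a standard Slitherlink format.

```python
# Minimal sketch: verify the "every dot has exactly 2 or 0 lines" rule.
# A segment is a pair of adjacent dots, each dot a (row, col) tuple.
from collections import Counter

def degrees(segments):
    """Count how many drawn segments touch each dot."""
    deg = Counter()
    for a, b in segments:
        deg[a] += 1
        deg[b] += 1
    return deg

def satisfies_degree_rule(segments):
    return all(d in (0, 2) for d in degrees(segments).values())

# A 1x1 square loop: four segments joining the four corner dots.
loop = [((0, 0), (0, 1)), ((0, 1), (1, 1)),
        ((1, 1), (1, 0)), ((1, 0), (0, 0))]
print(satisfies_degree_rule(loop))      # True: every dot has degree 2
print(satisfies_degree_rule(loop[:3]))  # False: two dots have degree 1
```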
Solution methods:
Corners:
If a 1 is in a corner, the actual corner's lines may be X'd out, because a line that entered said corner could not leave it except by passing by the 1 again. This also applies if two lines leading into the 1-box at the same corner are X'd out. If a 3 is in a corner, the two outside edges of that box can be filled in, because otherwise the rule above would have to be broken. If a 2 is in a corner, two lines must be going away from the 2 at the border.
Solution methods:
Rules for squares with 1:
If a line comes into a corner of a 1 and, of the three remaining directions in which the line could continue, the one that is not a side of the 1 is a known blank, then the two sides of the 1 opposite that corner can be X'd out. This also applies in reverse: if a line comes into the corner of a 1, and the two opposite edges of the 1 are already X'd out, the line cannot go away from the 1, since that would put Xs around all sides of the 1. If two 1s are diagonally adjacent, then of the eight segments around those two cells, either the "inner" set of four segments sharing a common endpoint (the point shared by the 1s) or the other "outer" set of four segments must all be X'd out. Thus if any two inner or outer segments of one 1 are X'd, the respective inner or outer segments of the other 1 must also be X'd. If two 1s are adjacent along the edge of the grid, the line between them can be X'd out, because there would be no direction for it to continue when it reached the edge.
Solution methods:
A rule for squares with 2:
If a 2 has any surrounding line X'd, then a line coming into either of the two corners not adjacent to the X'd-out line cannot immediately exit at right angles away from the 2, as then two lines around the 2 would be impossible, and can therefore be X'd. This means that the incoming line must continue on one side of the 2 or the other. This in turn means that the second line of the 2 must be on the only remaining free side, adjacent to the originally X'd line, so that can be filled in. Conversely, if a 2 has a line on one side and an adjacent X'd-out line, then the second line must be in one of the two remaining sides, and must exit from the opposite corner (in either direction). If either of those two exits is X'd out, then it must take the other route.
Solution methods:
Rules for squares with 3:
If a 3 is adjacent to a 0, either horizontally or vertically, then all edges of that 3 can be filled except for the one touching the 0. In addition, the two lines perpendicular to the adjacent boxes can be filled. If two 3s are adjacent to each other horizontally or vertically, their common edge must be filled in, because the only other option is a closed oval that is impossible to connect to any other line. Second, the two outer lines of the group (parallel to the common line) must be filled in. Third, the line through the 3s will always wrap around in an "S" shape. Therefore, the line between the 3s cannot continue in a straight line, and those sides which are in a straight line from the middle line can be X'd out. If a 3 is adjacent to a 0 diagonally, both sides of the 3 that meet the 0's corner must be filled. This is because if either of those sides were open, the line ending in the corner of the 0 would have no place to go. This is similar to the 3-in-a-corner rule. Similarly, if a 3 has a corner with Xs in both directions going away from that corner, then both sides of the 3 that meet that corner must be filled. This is because if one of those two sides of the 3 were open, the other would have to be filled (because the 3 can only have one open side) but would meet three Xs at that corner, which is impossible because each point on the grid must have exactly 2 or 0 lines.
Solution methods:
If a line reaches a corner of a 3, there must be lines on both sides of the 3 that said corner is not adjacent to, because if the 3's sole empty space were not adjacent to it, the corner would have three lines connected to it. Furthermore, the segment leading away from the 3 at the corner reached by the line must be empty; if it were filled, neither of the remaining 2 undetermined sides of the 3 would be able to contain a line.
Solution methods:
Diagonals of 3s and 2s:
If two 3s are adjacent diagonally, the edges which do not run into the common point must be filled in. Similarly, if two 3s are in the same diagonal, but separated by any number of 2s (and only 2s), the outside edges of the 3s must be filled in, just as if they were adjacent diagonally. If there is a series of 2s in a diagonal line and an angled line meets the corner of the 2 at one end of the series, a matching angled line can be drawn all the way up the series. If a line reaches the starting point (A) of a diagonal that contains one or more 2s and ends with a 3, both sides of the far corner (farthest from A on the diagonal) of the 3 must be filled. If this were not true, it would imply that both sides of the near corner of the 3 must be filled, which would imply that the near corners of all the 2s must be filled, including the 2 at the start of the diagonal, which is impossible because it conflicts with the line that has reached the starting point (A).
Solution methods:
Diagonals of a 3 and 1:
If a 1 and a 3 are adjacent diagonally and the outer two sides of the 1 are X'd out, then the outer two sides of the 3 must be filled in. The reverse also holds: if the outer two sides of the 3 are filled in, then the outer two sides of the 1 must be X'd out.
Solution methods:
Diagonals starting with a 2:
If a line reaches a corner of a 2, and the line must continue through one of the two connecting sides of the 2, then exactly one of the other two sides of the 2 must be filled, and that line must continue through one of the two connecting sides of the diagonally adjacent square.
Solution methods:
A rule for closed regions:
If a region of the lattice is closed off (such that no lines can "escape") and is not empty, there must be a non-zero, even number of lines entering the region that begin outside the region. (An odd number of lines entering implies an odd number of segment ends inside the region, which makes it impossible for all the segment ends to connect. If there are no such lines, the lines inside the region cannot connect with the lines outside, making a solution impossible.) Often, this rule will eliminate one or more otherwise feasible options.
Solution methods:
In the figure below, the line at the top-left will close off the top-right region of the lattice whether it proceeds down or to the right. The line to the right (around two sides of the 3) has entered the closed region. To satisfy the rule, the first line must enter the region, and the second line must not enter the region a second time. (Since the boundary of any closed region also closes off the remainder of the puzzle, the rule can also be applied to the larger, bottom-left region. To apply the rule, it is only necessary to count the lines crossing the boundary.)
Jordan curve theorem:
In an exceptionally difficult puzzle, one may use the Jordan curve theorem, which states that any open curve that starts and ends outside of a closed curve must intersect the closed curve an even number of times. In particular, this means that any row of the grid must have an even number of vertical lines and any column must have an even number of horizontal lines. When only one potential line segment in one of these groups is unknown, you can determine whether it is part of the loop or not with this theorem. This also means that if you mentally trace an arbitrary path from one outer edge of the grid to another, the path will intersect the closed curve an even number of times. A simple strategy to assist in using this theorem is to "paint" (sometimes called "shade") the outside and the inside areas. When you see two outside cells, or two inside cells, next to each other, then you know that there is not a line between them. The converse is also true: if you know there is no line between two cells, then those cells must be the same "color" (both inside or both outside). Similarly, if an outside cell and an inside cell are adjacent, you know there must be a filled line between them; and again the converse is true. A parity check based on this theorem is sketched below.
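The row-parity consequence of the Jordan curve theorem can be checked mechanically as well; this sketch (Python, using the same illustrative dot representation as the degree-rule sketch above) counts the vertical segments crossing a given row of cells, which must come out even in any finished loop.

```python
# Sketch of the Jordan-curve parity check: in a finished loop, every row of
# cells is crossed by an even number of vertical segments (and every column
# by an even number of horizontal segments).
def vertical_crossings(segments, row):
    """Number of vertical segments lying between dot-rows `row` and `row + 1`."""
    return sum(1 for (r1, c1), (r2, c2) in segments
               if c1 == c2 and {r1, r2} == {row, row + 1})

loop = [((0, 0), (0, 1)), ((0, 1), (1, 1)),
        ((1, 1), (1, 0)), ((1, 0), (0, 0))]
# The single cell row (between dot-rows 0 and 1) is crossed twice: even.
print(vertical_crossings(loop, 0))  # 2
```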
Solution methods:
Rules for puzzles that have only 1 solution:
If there are exactly two possible paths, A and B, between two points in the solution (two points that have been, or must be, reached by lines), and if a solution containing A must also work with B, but the reverse is not true, then B is the correct path, and the solution must pass through a point contained in A but not B. In the figure below, if a solution could pass through the top and right sides of the 2, then there must be another solution which is exactly the same except that it passes through the bottom and left sides of the 2, because the squares to the top and right of the 2 are unconstrained (do not contain numbers). Also, the solution must pass through the top-right corner of the 2; otherwise there must be another solution which is exactly the same except that it passes through the top and right sides of the 2.
Solution methods:
If there is a 2 in a corner, and the two non-diagonally adjacent squares are unconstrained, lines can be drawn as shown below. (In the figure, the question mark represents any number or blank, but the number will only be a 2 or 3. A puzzle with only one solution cannot have a 2 in a corner with two non-diagonally adjacent, unconstrained squares, and a diagonally adjacent 0 or 1.) If there are two paths between two points, such that a solution containing one must also work with the other, then both paths can be ruled out. In the figure below, the circled points can be connected by a line directly between them, and also by a line that traverses the other three sides of the square that extends to the left of the points. It should be clear (with the red line ignored) that for both paths the remainder of the solution can be the same, since the constraints for the remainder of the solution are the same, so both paths are ruled out.
History:
Slitherlink is an original puzzle of Nikoli; it first appeared in Puzzle Communication Nikoli #26 (June 1989). The editor combined two original puzzles contributed there. At first, every square contained a number and the edges did not have to form a loop.
Video games:
Slitherlink puzzles have been featured in video games on several platforms. A game titled Slither Link was published in Japan by Bandai for the Wonderswan portable console in 2000. Slitherlink puzzles were included alongside Sudoku and Nonogram puzzles in the Loppi Puzzle Magazine: Kangaeru Puzzle series of games from Success for the Game Boy Nintendo Power cartridge in 2001. Slitherlink games were also featured for the Nintendo DS handheld game console, with Hudson Soft releasing Puzzle Series Vol. 5: Slitherlink in Japan on November 16, 2006, and Agetec including Slitherlink in its Nikoli puzzle compilation, Brain Buster Puzzle Pak, released in North America on June 17, 2007.
**Small-signal model**
Small-signal model:
Small-signal modeling is a common analysis technique in electronics engineering used to approximate the behavior of electronic circuits containing nonlinear devices with linear equations. It is applicable to electronic circuits in which the AC signals (i.e., the time-varying currents and voltages in the circuit) are small relative to the DC bias currents and voltages. A small-signal model is an AC equivalent circuit in which the nonlinear circuit elements are replaced by linear elements whose values are given by the first-order (linear) approximation of their characteristic curve near the bias point.
Overview:
Many of the electrical components used in simple electric circuits, such as resistors, inductors, and capacitors, are linear. Circuits made with these components, called linear circuits, are governed by linear differential equations and can be solved easily with powerful mathematical frequency-domain methods such as the Laplace transform. In contrast, many of the components that make up electronic circuits, such as diodes, transistors, integrated circuits, and vacuum tubes, are nonlinear; that is, the current through them is not proportional to the voltage, and the output of two-port devices like transistors is not proportional to their input. The relationship between current and voltage in them is given by a curved line on a graph, their characteristic curve (I-V curve). In general these circuits do not have simple mathematical solutions. To calculate the current and voltage in them generally requires either graphical methods or simulation on computers using electronic circuit simulation programs like SPICE.
Overview:
However in some electronic circuits such as radio receivers, telecommunications, sensors, instrumentation and signal processing circuits, the AC signals are "small" compared to the DC voltages and currents in the circuit. In these, perturbation theory can be used to derive an approximate AC equivalent circuit which is linear, allowing the AC behavior of the circuit to be calculated easily. In these circuits a steady DC current or voltage from the power supply, called a bias, is applied to each nonlinear component such as a transistor and vacuum tube to set its operating point, and the time-varying AC current or voltage which represents the signal to be processed is added to it. The point on the graph representing the bias current and voltage is called the quiescent point (Q point). In the above circuits the AC signal is small compared to the bias, representing a small perturbation of the DC voltage or current in the circuit about the Q point. If the characteristic curve of the device is sufficiently flat over the region occupied by the signal, using a Taylor series expansion the nonlinear function can be approximated near the bias point by its first order partial derivative (this is equivalent to approximating the characteristic curve by a straight line tangent to it at the bias point). These partial derivatives represent the incremental capacitance, resistance, inductance and gain seen by the signal, and can be used to create a linear equivalent circuit giving the response of the real circuit to a small AC signal. This is called the "small-signal model". The small signal model is dependent on the DC bias currents and voltages in the circuit (the Q point). Changing the bias moves the operating point up or down on the curves, thus changing the equivalent small-signal AC resistance, gain, etc. seen by the signal. Any nonlinear component whose characteristics are given by a continuous, single-valued, smooth (differentiable) curve can be approximated by a linear small-signal model. Small-signal models exist for electron tubes, diodes, field-effect transistors (FET) and bipolar transistors, notably the hybrid-pi model and various two-port networks. Manufacturers often list the small-signal characteristics of such components at "typical" bias values on their data sheets.
Variable notation:
DC quantities (also known as bias), constant values with respect to time, are denoted by uppercase letters with uppercase subscripts. For example, the DC input bias voltage of a transistor would be denoted V_IN; one might say that V_IN = 5 V.
Small-signal quantities, which have zero average value, are denoted using lowercase letters with lowercase subscripts. Small signals typically used for modeling are sinusoidal, or "AC", signals. For example, the input signal of a transistor would be denoted as v_in; one might say that v_in(t) = 0.2 cos(2πt) V.
Total quantities, combining both small-signal and large-signal quantities, are denoted using lowercase letters with uppercase subscripts. For example, the total input voltage to the aforementioned transistor would be denoted as v_IN(t). The small-signal model treats the total signal as the sum of its DC component and its small-signal component; in algebraic notation, v_IN(t) = V_IN + v_in(t), so in this example v_IN(t) = 5 + 0.2 cos(2πt) V.
PN junction diodes:
The (large-signal) Shockley equation for a diode can be linearized about the bias point or quiescent point (sometimes called the Q-point) to find the small-signal conductance, capacitance, and resistance of the diode. This procedure is described in more detail under diode modelling, which provides an example of the linearization procedure followed in small-signal models of semiconductor devices.
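A minimal numerical sketch of that linearization follows (Python). The saturation current, thermal voltage, and bias point are illustrative values, and the derivative is the standard first-order term of the Shockley equation rather than anything specific to this article.

```python
import math

# Illustrative parameters for the Shockley equation I = I_S * (exp(V/V_T) - 1).
I_S = 1e-12    # saturation current, A (illustrative)
V_T = 0.02585  # thermal voltage near 300 K, V

def diode_current(v):
    return I_S * (math.exp(v / V_T) - 1.0)

# Pick a quiescent (bias) point and linearize about it: the small-signal
# conductance is the slope dI/dV evaluated at the Q-point.
V_Q = 0.65                             # bias voltage, V (illustrative)
I_Q = diode_current(V_Q)               # bias current at the Q-point
g_d = I_S * math.exp(V_Q / V_T) / V_T  # dI/dV at V_Q
r_d = 1.0 / g_d                        # small-signal resistance

# The small-signal model predicts i = g_d * v for a small excursion v.
v = 0.001                              # 1 mV perturbation
exact = diode_current(V_Q + v) - I_Q
print(f"I_Q = {I_Q:.3e} A, r_d = {r_d:.2f} ohm")
print(f"exact delta-I = {exact:.3e} A, linear model = {g_d * v:.3e} A")
```

At a 1 mV excursion the linear prediction tracks the exact current change closely; increasing v shows where the small-signal approximation starts to break down.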
Differences between small signal and large signal:
A large signal is any signal having enough magnitude to reveal a circuit's nonlinear behavior. The signal may be a DC signal or an AC signal or indeed, any signal. How large a signal needs to be (in magnitude) before it is considered a large signal depends on the circuit and context in which the signal is being used. In some highly nonlinear circuits practically all signals need to be considered as large signals.
Differences between small signal and large signal:
A small signal is part of a model of a large signal. To avoid confusion, note that there is such a thing as a small signal (a part of a model) and a small-signal model (a model of a large signal). A small signal model consists of a small signal (having zero average value, for example a sinusoid, but any AC signal could be used) superimposed on a bias signal (or superimposed on a DC constant signal) such that the sum of the small signal plus the bias signal gives the total signal which is exactly equal to the original (large) signal to be modeled. This resolution of a signal into two components allows the technique of superposition to be used to simplify further analysis. (If superposition applies in the context.) In analysis of the small signal's contribution to the circuit, the nonlinear components, which would be the DC components, are analyzed separately taking into account nonlinearity.
**Shikkui**
Shikkui:
Shikkui (漆喰) is an ecological nontoxic Japanese lime plaster primarily made out of hydrated lime and calcium carbonate coming from reprocessed eggshells. It is mainly used for surface coatings of walls and ceilings in housing construction.
This material is reputed to achieve a notable range of traditional and modern finishes, including a full range of Venetian stucco and stone effects. Shikkui finishes allow a thin two-coat application, and their elasticity provides good stress-crack resistance. The color and texture of a finishing can be individually customized using a variety of diluted color pigments.
The coatings are highly porous and naturally antiseptic, so indoor air quality is actively improved for healthier spaces. Shikkui coatings are also said to be humidity-regulating, fire-resistant, antistatic (preventing dust accumulation), hypoallergenic, antifungal and mold resistant.
Ecological characteristics:
Limix, a Shikkui plaster-based material, has a low carbon footprint and low energy consumption in production (85% less than baked ceramic tiles). It absorbs VOC odors and CO2. It is also fully recyclable or decomposable.
Technical specifications:
ASTM Test Data The Shikkui Surface Coatings have been tested in accordance with the ASTM International standards by accredited testing laboratories in the United States.
• Fire Resistance (ASTM E84): Class A (Type I in other codes)
• VOC Content (ASTM D3960): zero-VOC material
• Shore D Hardness (ASTM D2240): 61–85 (depending on product)
• Mold/Fungal Resistance (ASTM D3273/D3274): rating 10 (no fungal growth)
**Backstitch**
Backstitch:
Backstitch or back stitch and its variants stem stitch, outline stitch and split stitch are a class of embroidery and sewing stitches in which individual stitches are made backward to the general direction of sewing. In embroidery, these stitches form lines and are most often used to outline shapes and to add fine detail to an embroidered picture. It is also used to embroider lettering. In hand sewing, it is a utility stitch which strongly and permanently attaches two pieces of fabric. The small back-and-forth stitches make the backstitch the strongest of the basic stitches, so it can be used to sew strong seams by hand, without a sewing machine.
How to do it:
A versatile stitch which is easy to work, backstitch is ideal for following both simple and intricate outlines and as a foundation row for more complex embroidery stitches such as herringbone ladder filling stitch. Although superficially similar to the Holbein stitch, which is commonly used in blackwork embroidery, backstitch differs in the way it is worked, requiring only a single journey to complete a line of stitching.
How to do it:
Basic backstitch is the stitch used to outline shapes in modern cross-stitch, in Assisi embroidery and occasionally in blackwork.
How to do it:
Stem stitch is an ancient technique; surviving mantles embroidered with stem stitch by the Paracas people of Peru are dated to the first century BCE. Stem stitch is used in the Bayeux Tapestry, an embroidered cloth probably dating to the later 1070s, for lettering and to outline areas filled with couching or laid-work.Split stitch in silk is characteristic of Opus Anglicanum, an embroidery style of Medieval England.
Description of the technique:
Backstitch is most easily worked on an even-weave fabric, where the threads can be counted to ensure regularity, and is generally executed from right to left. The stitches are worked in a 'two steps forward, one step back' fashion, along the line to be filled, as shown in the diagram.
Neatly worked in a straight line this stitch resembles chain stitching produced by a sewing machine.
The back stitch can also be used as a hand sewing utility stitch to attach two pieces of fabric together.
Variants:
Variants of backstitch include:
Basic backstitch or point de sable.
Threaded backstitch.
Pekinese stitch, a looped interlaced backstitch.
Stem stitch, in which each stitch overlaps the previous stitch to one side, forming a twisted line of stitching, with the thread passing below the needle. It is generally used for outlining shapes and for stitching flower stems and tendrils.
Whipped back stitch, in which, using thread of a different color than the original stitch, the needle is passed under each stitch without piercing the fabric, repeated to create a colorful twisted effect.
Outline stitch, sometimes distinguished from stem stitch in that the thread passes above rather than below the needle.
Split stitch, in which the needle pierces the thread rather than returning to one side.
Ringed back stitch, in which back stitches are worked to create half rings that are completed by a second row of stitches to form ring outlines.
**Tonkatsu sauce**
Tonkatsu sauce:
Tonkatsu sauce or katsu sauce is a Japanese sauce served with tonkatsu (pork cutlet). It is a thick (viscosity over 2.0 pascal-seconds, per the JAS Standard) Japanese Worcestershire-type sauce. It is similar to the brown sauce of the British Isles, and can include fish sauce, tomatoes, prunes, dates, apples, lemon juice, carrots, onions, and celery among its ingredients.
History and varieties:
The first tonkatsu sauce was made in 1948 by Oliver Sauce Co., Ltd. of Hyogo Prefecture. The Bull-Dog brand of tonkatsu sauce, for example, is made from malt vinegar, yeast, and vegetable and fruit purees, pastes, and extracts.
In the United States, Kikkoman brand sells a fruity tonkatsu sauce with applesauce as the main ingredient.
**Flip or Flop (franchise)**
Flip or Flop (franchise):
Flip or Flop is a franchise of television programs. With the exception of Flip or Flop: Follow Up, each series follows the same format, with couples in different parts of the United States purchasing, renovating, and reselling homes.
As of December 29, 2017, 132 episodes of the Flip or Flop franchise have aired.
Overview:
Flip or Flop The El Moussas were both real estate agents prior to the crash in 2008, and later they began flipping homes, mostly in Orange County, California. In 2011, Tarek asked a friend to help him make an audition tape for HGTV. The friend filmed an entire episode of the process of house flipping from start to finish. The audition tape was sent to HGTV, and they were interested in talking to the couple. In 2012, HGTV signed the couple to a regular weekly program that shows the process of buying distressed property and renovating it. Christina's expertise is primarily in design, while Tarek finds and renovates homes. The show follows them as they buy homes, typically bank-owned, short sales or foreclosures, to renovate and resell.
Overview:
Flip or Flop Follow-Up Flip or Flop Follow-Up premiered July 14, 2015. The show revisits old house flips from previous Flip or Flop episodes, going deeper into the issues with the individual flips and showing previously unaired footage. The series also gives updates on houses that remained unsold at the time of the original production. Each episode features three stories: a successful flip, a flop, and a follow-up that ends with Tarek and Christina revisiting one of their house flips. The series did not return for a second season, making it the first series in the franchise to end.
Overview:
Flip or Flop Vegas Flip or Flop Vegas is a television series airing on HGTV hosted by real estate agents Bristol and Aubrey Marunde. Filmed in Las Vegas, Nevada, it premiered on April 6, 2017. On June 5, 2017, HGTV announced Flip or Flop Vegas would be renewed for a second season, with 16 episodes.
Overview:
Flip or Flop Atlanta Flip or Flop Atlanta is a television series airing on HGTV hosted by real estate agent Anita Corsini and contractor husband Ken. Filmed in the metro Atlanta, Georgia area, it premiered on July 20, 2017. On August 21, 2017, HGTV announced Flip or Flop Atlanta would be renewed for a second season, with 14 episodes, which is expected to debut in 2018.
Overview:
Flip or Flop Nashville Flip or Flop Nashville will be a television series airing on HGTV hosted by real estate agents DeRon Jenkins and Page Turner. It will premiere on January 18, 2018 and will be filmed in Nashville, Tennessee.
Flip or Flop Fort Worth Flip or Flop Fort Worth is a television series airing on HGTV hosted by real estate agents Andy and Ashley Williams. Filmed in Dallas, Texas, it premiered on November 2, 2017.
Flip or Flop Chicago On March 1, 2017, HGTV announced that "Flip or Flop" would expand to Chicago, Illinois. The show featured a new couple, Mark and Liz Perez, flipping houses in Chicago, Illinois. It premiered as a pilot on March 23, 2017.
Overview:
Christina on the Coast In June 2018 it was announced that Christina would be receiving her own spin-off show, Christina on the Coast. The series premiere will focus on Christina renovating her new home following her divorce, with the remaining seven episodes focusing on her fixing up other people's homes. Filming began in fall 2018, for a spring 2019 premiere. On February 13, 2019, it was announced that the series will premiere on May 23, 2019.
**Media multitasking**
Media multitasking:
Media multitasking is the concurrent use of multiple digital media streams. Media multitasking has been associated with depressive symptoms and social anxiety by a single study involving 318 participants. A 2018 review found that while the literature is sparse and inconclusive, people who do a heavy amount of media multitasking have poorer performance in several cognitive domains. One of the authors commented that while the data does not "unambiguously show that media multitasking causes a change in attention and memory," media multitasking is an inefficient practice that incurs "task switching" costs. In many cases, media multitasking is made up of experiences that are not necessarily intended to be combined or coordinated. For example, a user may be browsing the Web, listening to music, playing video games, using e-mail, and/or talking on the phone while watching TV. More intentionally coordinated forms of media multitasking are emerging in the form of "co-active media" and particularly "co-active TV".
Cognitive distraction:
A touchstone 2009 study by Stanford University used experiments to compare heavy media multitaskers to light media multitaskers in terms of their cognitive control and ability to process information. Findings from the experiment include: When intentionally distracting elements were added to experiments, heavy media multitaskers were on average 0.08 seconds slower than their lighter media multitasking counterparts at identifying changes in patterns; In a longer-term memory test that invited participants to recall specific elements from earlier experiments, the high multitaskers more often falsely identified the elements that had been used most frequently as intentional distractors; In the presence of distracting elements, high multitaskers were 0.4 seconds slower than their counterparts to switch to new activities and 0.3 seconds slower to engage in a new section of the same activity. The researchers concluded that heavy media multitaskers are distracted by the multiple streams of media they are consuming, and that refraining from multitasking can help with concentration. In the "bottleneck theory" of cognitive performance, the slowing down seen when people multitask is called "interference." According to this theory, people have only a limited amount of cognitive resources, which allow them to focus and complete one task at a time. When people try to do several things at once or multitask, their performance suffers a slowdown because of a "cognitive bottleneck," like a traffic jam in the brain.
Cognitive distraction:
Researchers tried to disprove this theory over several decades, and although they found a handful of activities that people can do simultaneously without slowing, these activities are relatively simple and so far removed from everyday human activities that they cannot be used as support for people's ability to multitask. A team of researchers reviewed the extensive literature on multitasking and concluded that hundreds of studies show that slowing will happen when people try to multitask; indeed, many studies that were designed to show that people could multitask without interference in fact indicated the opposite. These researchers warned that when people attempt to multitask, especially when doing complex and potentially dangerous tasks (such as driving and using their cell phones to talk or text), they will always encounter the cognitive bottleneck, causing their performance to suffer in terms of speed or accuracy. A related article, "Breadth-biased versus focused cognitive control in media multitasking behaviors," notes that the prevalence of this phenomenon leads "to a question about the required skills and expertise to function in society. A society with its ever-increasing complexity appears to move people towards juggling among multiple tasks rather than focusing on one task for a long period." The study's author suggests that further research will be necessary as the effects on society become more pronounced: "The new technologies are gearing people, especially young people who grow up with digital technologies and wired networks, toward breadth-biased information processing behavior rather than linear in-depth study behavior. Long-term exposure to media multitasking is expected to produce both positive and negative outcomes on cognitive, emotional, and social development." By generation Despite the research, people from younger generations report that they feel multitasking is easy, even "a way of life." They perceive themselves as good at it and spend a substantial amount of their time engaged in one form of multitasking or another (for example, watching TV while doing homework, listening to music while doing homework, or even all three things at once). By contrast, members of older generations often openly admit that they are not very good at multitasking, find it difficult, and therefore do not do it as often as young people.
Cognitive distraction:
In the workforce Multitasking behavior in the workforce has been increasing steadily since the 1990s as people have gained easier, and therefore faster, access to information and communication through smart technologies that have become cheaper over time. Although multitasking behavior harms performance, the paradox is that organizational productivity is nonetheless increasing at a high rate. Concurrent with the rise of multitasking in the workforce and in general, the literature has seen progressively more reports of increased stress, loss of focus, symptoms resembling attention deficit hyperactivity disorder (ADHD), and even a lowering of IQ.
Cognitive distraction:
While driving Research in media multitasking in real-world settings has focused mostly on the use of cellphones while driving. There is an overwhelming amount of evidence to show that talking on a phone while driving is very dangerous, often leading to crashes, including those fatal to both drivers and pedestrians. Just one hour of talking on a cellphone per month while driving makes a person between four and nine times more likely to crash. Meanwhile, people who text while driving are 23 times more likely to be involved in some kind of accident. A large review of studies on driving while media multitasking showed that using a hands-free phone while driving is just as dangerous as using a hand-held version, and that both can result in many different driving mistakes including missing stop signs, forgetting to reduce speed when necessary, and following too closely, among many others. Also, media multitasking while driving with other technologies, including MP3 players, voice-based email, a car's music system, and even the GPS, is just as distracting as using a phone. Talking to a person on a cellphone while driving is not the same as having a conversation with a passenger, as adult passengers (but not children) often warn the driver of possible dangers, or at least stop talking when the driving conditions are tough, to let the driver focus on the road.
Learning:
Students commonly use multiple portable digital technologies, including laptops, tablets and smartphones with wireless access to the Internet.
Learning:
Students can use technologies in the classroom to multi-task in two specific ways when given the choice: For on-task purposes that supplement learning and ease the learning task, or for off-task purposes such as entertainment or social interaction. Overall, research shows that digital technologies can enhance learning when used as educational tools, as they are affordable and extremely portable. However, research consistently shows that inappropriate multitasking with digital technologies is harmful to student performance.
Learning:
On-task multitasking Students use technology for many diverse on-task purposes including taking notes, conducting literature searches, viewing video/audio files, creating and viewing spreadsheets and PowerPoint slides, completing online tests and assignments, and even texting friends to ask questions about course material.
Outside of the classroom, students frequently use technology such as instant messaging to communicate with other students, coordinate group work, share important files and homework, and form peer support groups to vent and improve motivation.
Learning:
Students in grade school and high school benefit most from on-task use of technology. This is largely because at the grade school and high school levels, technology is integrated into the design of the course, and teachers provide the necessary structure and supervision. Such conditions allow students to process information more deeply and apply the newly learned information to new contexts, as well as improve collaboration among students.
Learning:
However, university students do not generally benefit from technology. The results of one study showed no benefits to using laptops for improving student GPA (grade point average) in comparison to students who did not use laptops.
Two further studies showed that students who did not use laptops outperformed those who did use laptops.
Overall, the effectiveness of on-task technology use declines from the grade school level to the university level. This appears to be due to increased freedom in the use of technology, combined with lower levels of integration of specific technology in the design of specific course material.
Additionally, younger students and students from financially disadvantaged backgrounds who have high levels of Internet use are at an especially high risk of under-performing.
Learning:
Off-task multitasking A large portion of students use digital technologies for off-task purposes during classroom lectures, with social networking (especially Facebook), instant messaging, texting, emailing, and web-browsing being used most commonly. Moreover, young adults multitask more than older adults, and males multitask more than females, for off-task purposes. The results of numerous studies show that high Internet use for off-task purposes is associated with lower GPA.
Learning:
One experimental study compared the impact of using four different technologies for off-task purposes (MSN, email, texting, and Facebook) against three control groups during real classroom lectures. The control groups comprised one group of students who were free to use any technologies as they wished, for on-task or off-task purposes, and two groups of on-task note-takers who took notes either on paper or on a laptop. The results showed that students in the MSN and Facebook conditions scored lower on a memory test than the paper-notes control group. When examining the amount of multitasking instead of specific technologies, the results showed that greater levels of multitasking led to progressively lower grades.
Learning:
While all studies show that any kind of off-task multitasking lowers performance, some tasks impair performance more than others. Specifically, social networking is particularly bad for student performance as it leads to higher levels of unfinished assignments and lower GPAs.
Moreover, off-task multitasking distracts not only the user but also neighboring students.
Learning:
Student multitasking An observational study examined how students study at home, including their study habits and strategies. The results showed that most students prefer to task-switch frequently, focusing for only approximately six minutes before reaching for a favorite digital device. Moreover, the students who enjoyed task-switching did so more often and with more technologies than students who preferred to focus on a single learning task, and who therefore did not have as many technologies readily available. Consistent with previous studies, students with a preference for focusing and those who used proper study strategies had higher GPAs than students who preferred to task-switch.
Learning:
Karpinski and colleagues (2013) compared multitasking behaviors of students from Europe to those of students from the U.S. They found that only the students from the U.S. were distracted by multitasking to the point that their GPA suffered. This was due to two main reasons: U.S. students multitask more than European students, and European students, when engaging in multitasking, were more strategic about it, delaying replies to incoming messages. The concept of "digital metacognition"—awareness of one's usage of and the effects of digital devices—has been proposed as a construct for providing a way to avoid problems with media multitasking while learning.
**DEC Alpha**
DEC Alpha:
Alpha (original name Alpha AXP) is a 64-bit reduced instruction set computer (RISC) instruction set architecture (ISA) developed by Digital Equipment Corporation (DEC). Alpha was designed to replace 32-bit VAX complex instruction set computers (CISC) and to be a highly competitive RISC processor for Unix workstations and similar markets.
Alpha is implemented in a series of microprocessors originally developed and fabricated by DEC. These microprocessors are most prominently used in a variety of DEC workstations and servers, which eventually formed the basis for almost all of their mid-to-upper-scale lineup. Several third-party vendors also produced Alpha systems, including PC form factor motherboards.
DEC Alpha:
Operating systems that support Alpha included OpenVMS (formerly named OpenVMS AXP), Tru64 UNIX (formerly named DEC OSF/1 AXP and Digital UNIX), Windows NT (discontinued after NT 4.0; and prerelease Windows 2000 RC2), Linux (Debian, SUSE, Gentoo and Red Hat), BSD UNIX (NetBSD, OpenBSD and FreeBSD up to 6.x), Plan 9 from Bell Labs, and the L4Ka::Pistachio kernel. A port of Ultrix to Alpha was carried out during the initial development of the Alpha architecture, but was never released as a product. The Alpha architecture was sold, along with most parts of DEC, to Compaq in 1998. Compaq, already an Intel x86 customer, announced that they would phase out Alpha in favor of the forthcoming Hewlett-Packard/Intel Itanium architecture, and sold all Alpha intellectual property to Intel in 2001, effectively killing the product. Hewlett-Packard purchased Compaq in 2002, continued development of the existing product line until 2004, and sold Alpha-based systems, largely to the existing customer base, until April 2007.
History:
PRISM Alpha emerged from an earlier RISC project named Parallel Reduced Instruction Set Machine (PRISM), itself the product of several earlier projects. PRISM was intended to be a flexible design, supporting Unix-like applications as well as Digital's existing VAX/VMS software after minor conversion. A new operating system named MICA would support both ULTRIX and VAX/VMS interfaces on a common kernel, allowing software for both platforms to be easily ported to the PRISM architecture. Started in 1985, the PRISM design was continually changed during its development in response to changes in the computer market, leading to lengthy delays in its introduction. It was not until the summer of 1987 that it was decided that it would be a 64-bit design, among the earliest such designs in a microprocessor format. In October 1987, Sun Microsystems introduced the Sun-4, their first workstation using their new SPARC processor. The Sun-4 ran about three to four times as fast as both their latest Sun-3 designs using the Motorola 68020 and any Unix offering from DEC. The plans changed again; PRISM was realigned once again as a 32-bit part and aimed directly at the Unix market. This further delayed the design. Having watched the PRISM delivery date continue to slip, and facing the possibility of more delays, a team in the Palo Alto office decided to design their own workstation using another RISC processor. After due diligence, they selected the MIPS R2000 and built a working workstation running Ultrix in a period of 90 days. This sparked off an acrimonious debate within the company, which came to a head in a July 1988 management meeting. PRISM appeared to be faster than the R2000, but the R2000 machines could be in the market by January 1989, a year earlier than PRISM. When this proposal was accepted, one of the two original roles for PRISM disappeared. The plan for a VMS PRISM had already ended by this point, so there was no remaining role, and PRISM was cancelled at the meeting.
History:
RISCy VAX As the meeting broke up, Bob Supnik was approached by Ken Olsen, who stated that the RISC chips appeared to be a future threat to their VAX line. He asked Supnik to consider what might be done with VAX to keep it competitive with future RISC systems. This led to the formation of the "RISCy VAX" team. They initially considered three concepts. One was a cut-down version of the VAX instruction set architecture (ISA) that would run on a RISC-like system and leave more complex VAX instructions to system subroutines. Another concept was a pure RISC system that would translate existing VAX code into its own ISA on the fly and store it in a CPU cache. Finally, there was still the possibility of a much faster CISC processor running the complete VAX ISA. Unfortunately, all of these approaches introduced overhead and would not be competitive with a pure-RISC machine running native RISC code. The group then considered hybrid systems that combined one of their existing VAX one-chip solutions with a RISC chip used as a coprocessor for high-performance needs. These studies suggested that the system would inevitably be hamstrung by the lower-performance part and would offer no compelling advantage. It was at this point that Nancy Kronenberg pointed out that people ran VMS, not VAX, and that VMS only had a few hardware dependencies based on its modelling of interrupts and memory paging. There appeared to be no compelling reason why VMS could not be ported to a RISC chip as long as these small bits of the model were preserved. Further work on this concept suggested this was a workable approach. Supnik took the resulting report to the Strategy Task Force in February 1989. Two questions were raised: could the resulting RISC design also be a performance leader in the Unix market, and should the machine be an open standard? With that, the decision was made to adopt the PRISM architecture with the appropriate modifications. This became the "EVAX" concept, a follow-on to the successful CMOS CVAX implementation. When management accepted the findings, they decided to give the project a more neutral name, removing "VAX", eventually settling on Alpha. Soon after, work began on a port of VMS to the new architecture.
History:
Alpha The new design uses most of the basic PRISM concepts, but was re-tuned to allow VMS and VMS programs to run at reasonable speed with no conversion at all. The primary Alpha instruction set architects were Richard L. Sites and Richard T. Witek. The PRISM's Epicode was developed into the Alpha's PALcode, providing an abstracted interface to platform- and processor implementation-specific features.
History:
The main contribution of Alpha to the microprocessor industry, and the main reason for its performance, is not so much the architecture but rather its implementation. At that time (as it is now), the microchip industry was dominated by automated design and layout tools. The chip designers at Digital continued pursuing sophisticated manual circuit design in order to deal with the complex VAX architecture. The Alpha chips show that manual circuit design applied to a simpler, cleaner architecture allows for much higher operating frequencies than those that are possible with the more automated design systems. These chips caused a renaissance of custom circuit design within the microprocessor design community.
History:
Originally, the Alpha processors were designated the DECchip 21x64 series, with "DECchip" replaced in the mid-1990s with "Alpha". The first two digits, "21", signify the 21st century, and the last two digits, "64", signify 64 bits. The Alpha was designed as 64-bit from the start and there is no 32-bit version. The middle digit corresponds to the generation of the Alpha architecture. Internally, Alpha processors were also identified by EV numbers, EV officially standing for "Extended VAX" but having an alternative humorous meaning of "Electric Vlasic", giving homage to the Electric Pickle experiment at Western Research Lab. In May 1997, DEC sued Intel for allegedly infringing on its Alpha patents in designing the original Pentium, Pentium Pro, and Pentium II chips. As part of a settlement, much of DEC's chip design and fabrication business was sold to Intel. This included DEC's StrongARM implementation of the ARM computer architecture, which Intel marketed as the XScale processors commonly used in Pocket PCs. The core of Digital Semiconductor, the Alpha microprocessor group, remained with DEC, while the associated office buildings went to Intel as part of the Hudson fab.
History:
Improved models The first few generations of the Alpha chips were some of the most innovative of their time. The first version, the Alpha 21064 or EV4, is the first CMOS microprocessor whose operating frequency rivalled that of higher-powered ECL minicomputers and mainframes.
The second, 21164 or EV5, is the first microprocessor to place a large secondary cache on-chip.
The third, 21264 or EV6, is the first microprocessor to combine both high operating frequency and the more complicated out-of-order execution microarchitecture.
The 21364 or EV7 is the first high performance processor to have an on-chip memory controller.
History:
The unproduced 21464 or EV8 would have been the first to include simultaneous multithreading, but this version was canceled after the sale of DEC to Compaq. The Tarantula research project, which most likely would have been called EV9, would have been the first Alpha processor to feature a vector processor unit. A persistent report attributed to DEC insiders suggests the choice of the AXP tag for the processor was made by DEC's legal department, which was still smarting from the VAX trademark fiasco. After a lengthy search, the tag "AXP" was found to be entirely unencumbered. Within the computer industry, a joke got started that the acronym AXP meant "Almost eXactly PRISM".
Design principles:
The Alpha architecture was intended to be a high-performance design. Digital intended the architecture to support a one-thousandfold increase in performance over twenty-five years. To ensure this, any architectural feature that impeded multiple instruction issue, clock rate or multiprocessing was removed. As a result, the Alpha does not have:
Branch delay slots
Suppressed instructions
Byte load or store instructions (later added with the Byte Word Extensions (BWX))
Condition codes
The Alpha omits condition codes for integer instructions to remove a potential bottleneck at the condition status register. Instructions whose result overflows, such as adding two numbers whose result does not fit in 64 bits, write the 32 or 64 least significant bits to the destination register. The carry is generated by performing an unsigned compare of the result with either operand to see if the result is smaller than either operand. If the test is true, the value one is written to the least significant bit of the destination register to indicate the condition, as sketched below.
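As a hedged illustration of the unsigned-compare trick just described, here is a minimal Python sketch of a flag-free 64-bit add. The masking stands in for hardware wraparound, and the function name is ours, not an Alpha mnemonic.

```python
MASK64 = (1 << 64) - 1

def add64_with_carry(a: int, b: int):
    """64-bit add without a flags register: the carry is recovered by an
    unsigned compare of the wrapped result against one of the operands."""
    result = (a + b) & MASK64        # wraps exactly like a 64-bit register
    carry = 1 if result < a else 0   # result smaller than an operand => carry out
    return result, carry

print(add64_with_carry(MASK64, 1))   # (0, 1): the add wrapped, so carry is set
```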
Registers:
The architecture defines a set of 32 integer registers and a set of 32 floating-point registers, in addition to a program counter, two lock registers and a floating-point control register (FPCR). It also defines optional registers, implemented only if an implementation requires them. Lastly, registers for PALcode are defined.
Registers:
The integer registers are denoted by R0 to R31 and floating-point registers are denoted by F0 to F31. The R31 and F31 registers are hardwired to zero and writes to those registers by instructions are ignored. Digital considered using a combined register file, but a split register file was determined to be better, as it enables two-chip implementations to have a register file located on each chip and integer-only implementations to omit the floating-point register file containing the floating-point registers. A split register file was also determined to be more suitable for multiple instruction issue due to the reduced number of read and write ports. The number of registers per register file was also considered, with 32 and 64 being contenders. Digital concluded that 32 registers was more suitable as it required less die space, which improves clock frequencies. This number of registers was deemed not to be a major issue in respect to performance and future growth, as thirty-two registers could support at least eight-way instruction issue.
Registers:
The program counter is a 64-bit register which contains a longword-aligned virtual byte address, that is, the low two bits of the program counter are always zero. The PC is incremented by four to the address of the next instruction when an instruction is decoded. A lock flag and locked physical address register are used by the load-locked and store-conditional instructions for multiprocessor support. The floating-point control register (FPCR) is a 64-bit register defined by the architecture intended for use by Alpha implementations with IEEE 754-compliant floating-point hardware.
Data types:
In the Alpha architecture, a byte is defined as an 8-bit datum (octet), a word as a 16-bit datum, a longword as a 32-bit datum, a quadword as a 64-bit datum, and an octaword as a 128-bit datum.
Data types:
The Alpha architecture originally defined six data types:
Quadword (64-bit) integer
Longword (32-bit) integer
IEEE T-floating-point (double precision, 64-bit)
IEEE S-floating-point (single precision, 32-bit)
To maintain a level of compatibility with the VAX, the 32-bit architecture that preceded the Alpha, two other floating-point data types are included:
VAX G-floating point (double precision, 64-bit)
VAX F-floating point (single precision, 32-bit)
VAX H-floating point (quad precision, 128-bit) was not supported, but another 128-bit floating-point option, X-floating point, is available on Alpha, but not VAX. H and X have been described as similar, but not identical. Software emulation for H-floating is available from DEC, as is a source-code level converter named DECmigrate.
Memory:
The Alpha has a 64-bit linear virtual address space with no memory segmentation. Implementations can implement a smaller virtual address space with a minimum size of 43 bits. Although the unused bits were not implemented in hardware such as TLBs, the architecture required implementations to check whether they are zero to ensure software compatibility with implementations with a larger (or full) virtual address space.
Instruction formats:
The Alpha ISA has a fixed instruction length of 32 bits. It has six instruction formats.
Instruction formats:
The integer operate format is used by integer instructions. It contains a 6-bit opcode field, followed by the Ra field, which specifies the register containing the first operand, and the Rb field, which specifies the register containing the second operand. Next is a 3-bit field which is unused and reserved. A 1-bit field contains a "0", which distinguishes this format from the integer literal format. A 7-bit function field follows, which is used in conjunction with the opcode to specify an operation. The last field is the Rc field, which specifies the register to which the result of a computation should be written. The register fields are all 5 bits long, as required to address 32 unique locations, the 32 integer registers.
Instruction formats:
The integer literal format is used by integer instructions which use a literal as one of the operands. The format is the same as the integer operate format except for the replacement of the 5-bit Rb field and the 3 bits of unused space with an 8-bit literal field which is zero-extended to a 64-bit operand.
The floating-point operate format is used by floating-point instructions. It is similar to the integer operate format, but has an 11-bit function field made possible by using the literal and unused bits which are reserved in integer operate format.
The memory format is used mostly by load and store instructions. It has a 6-bit opcode field, a 5-bit Ra field, a 5-bit Rb field and a 16-bit displacement field.
Instruction formats:
Branch instructions have a 6-bit opcode field, a 5-bit Ra field and a 21-bit displacement field. The Ra field specifies a register to be tested by a conditional branch instruction, and if the condition is met, the program counter is updated by adding the contents of the displacement field to the program counter. The displacement field contains a signed integer: if the branch is taken, a positive value increments the program counter and a negative value decrements it. The range of a branch is thus ±1 Mi instructions, or ±4 MiB. This large range was part of the architecture's forward-looking design goals.
Instruction formats:
The CALL_PAL format is used by the CALL_PAL instruction, which is used to call PALcode subroutines. The format retains the opcode field but replaces the others with a 26-bit function field, which contains an integer specifying a PAL subroutine.
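The field layouts above can be summarized in a short decoder. The Python sketch below follows the bit positions implied by the text (6-bit opcode at the top of the word, 5-bit register fields, and so on); the helper names are ours, and the layout should be checked against the architecture handbook before use.

```python
def bits(word: int, lo: int, hi: int) -> int:
    """Extract bits lo..hi (inclusive, LSB = 0) from a 32-bit instruction word."""
    return (word >> lo) & ((1 << (hi - lo + 1)) - 1)

def decode_integer_operate(word: int) -> dict:
    # opcode[31:26] Ra[25:21] Rb[20:16] unused[15:13] lit?[12] func[11:5] Rc[4:0]
    return {
        "opcode": bits(word, 26, 31),
        "ra": bits(word, 21, 25),
        "rb": bits(word, 16, 20),          # replaced by an 8-bit literal in the literal form
        "is_literal": bits(word, 12, 12),  # 0 distinguishes the register form
        "func": bits(word, 5, 11),
        "rc": bits(word, 0, 4),
    }

def branch_target(pc: int, word: int) -> int:
    # Branch format: opcode[31:26] Ra[25:21] disp[20:0]; the signed 21-bit
    # displacement counts instructions (longwords), relative to the address
    # of the instruction following the branch.
    disp = bits(word, 0, 20)
    if disp & (1 << 20):          # sign-extend the 21-bit field
        disp -= 1 << 21
    return (pc + 4) + (disp << 2)
```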
Instruction set:
Control instructions The control instructions consist of conditional and unconditional branches, and jumps. The conditional and unconditional branch instructions use the branch instruction format, while the jump instructions use the memory instruction format.
Instruction set:
Conditional branches test whether the least significant bit of a register is set or clear, or compare a register as a signed quadword to zero, and branch if the specified condition is true. The conditions available for comparing a register to zero are equality, inequality, less than, less than or equal to, greater than or equal to, and greater than. The new address is computed by longword aligning and sign extending the 21-bit displacement and adding it to the address of the instruction following the conditional branch.
Instruction set:
Unconditional branches update the program counter with a new address computed in the same way as conditional branches. They also save the address of the instruction following the unconditional branch to a register. There are two such instructions, and they differ only in the hints provided for the branch prediction hardware.
There are four jump instructions. These all perform the same operation, saving the address of the instruction following the jump, and providing the program counter with a new address from a register. They differ in the hints provided to the branch prediction hardware. The unused displacement field is used for this purpose.
Instruction set:
Integer arithmetic The integer arithmetic instructions perform addition, multiplication, and subtraction on longwords and quadwords, and comparison on quadwords. There are no instructions for division, as the architects considered the implementation of division in hardware to be adverse to simplicity. In addition to the standard add and subtract instructions, there are scaled versions. These versions shift the second operand to the left by two or three bits before adding or subtracting. The Multiply Longword and Multiply Quadword instructions write the least significant 32 or 64 bits of a 64- or 128-bit result to the destination register, respectively. Since it is useful to obtain the most significant half, the Unsigned Multiply Quadword High (UMULH) instruction is provided, as sketched below. UMULH is used for implementing multi-precision arithmetic and division algorithms. The concept of a separate instruction for multiplication that returns the most significant half of a result was taken from PRISM.
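A minimal sketch of what the MULQ/UMULH pair computes — the low and high halves of the full 128-bit unsigned product — assuming unsigned operands. Python's unbounded integers make the split explicit; the function name is ours.

```python
MASK64 = (1 << 64) - 1

def mulq_umulh(a: int, b: int):
    """Return the (low, high) 64-bit halves of the 128-bit unsigned product,
    i.e. what MULQ and UMULH would each leave in their destination register."""
    full = (a & MASK64) * (b & MASK64)
    return full & MASK64, full >> 64

lo, hi = mulq_umulh(MASK64, MASK64)
assert (hi << 64) | lo == (2**64 - 1) ** 2   # halves reassemble the full product
```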
Instruction set:
The instructions that operate on longwords ignore the most significant half of the register and the 32-bit result is sign-extended before it is written to the destination register. By default, the add, multiply, and subtract instructions, with the exception of UMULH and scaled versions of add and subtract, do not trap on overflow. When such functionality is required, versions of these instructions that perform overflow detection and trap on overflow are provided.
Instruction set:
The compare instructions compare two registers or a register and a literal and write '1' to the destination register if the specified condition is true or '0' if not. The conditions are equality, inequality, less than or equal to, and less than. With the exception of the instructions that specify the former two conditions, there are versions that perform signed and unsigned compares.
Instruction set:
The integer arithmetic instructions use the integer operate instruction formats.
Instruction set:
Logical and shift The logical instructions consist of those for performing bitwise logical operations and conditional moves on the integer registers. The bitwise logical instructions perform AND, NAND, NOR, OR, XNOR, and XOR between two registers or a register and literal. The conditional move instructions test a register as a signed quadword to zero and move if the specified condition is true. The specified conditions are equality, inequality, less than or equal to, less than, greater than or equal to, and greater than. The shift instructions perform arithmetic right shift, and logical left and right shifts. The shift amount is given by a register or literal. Logical and shift instructions use the integer operate instruction formats.
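The conditional moves described above amount to a branchless select. Below is a minimal sketch of one of them, with move-if-less-than-zero semantics and the register value read as a signed quadword; the function names are ours, not Alpha mnemonics.

```python
def as_signed64(x: int) -> int:
    """Reinterpret a 64-bit pattern as a signed quadword."""
    return x - (1 << 64) if x & (1 << 63) else x

def cmovlt(ra: int, rb: int, rc: int) -> int:
    """If Ra < 0 (signed), Rc receives Rb; otherwise Rc keeps its old value."""
    return rb if as_signed64(ra) < 0 else rc

# Branchless absolute value in the style such instructions enable:
x = (1 << 64) - 5                         # the 64-bit pattern for -5
neg = (-as_signed64(x)) & ((1 << 64) - 1) # the negated value, wrapped to 64 bits
print(cmovlt(x, neg, x))                  # 5: the negated value was selected
```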
Extensions:
Byte-Word Extensions (BWX) Later Alphas include byte-word extensions, a set of instructions to manipulate 8-bit and 16-bit data types. These instructions were first introduced in the 21164A (EV56) microprocessor and are present in all subsequent implementations. These instructions perform operations that formerly required multiple instructions to implement, which improves code density and the performance of certain applications. BWX also makes the emulation of x86 machine code and the writing of device drivers easier.
Extensions:
Motion Video Instructions (MVI) Motion Video Instructions (MVI) was an instruction set extension to the Alpha ISA that added instructions for single instruction, multiple data (SIMD) operations. Alpha implementations that implement MVI, in chronological order, are the Alpha 21164PC (PCA56 and PCA57), Alpha 21264 (EV6) and Alpha 21364 (EV7). Unlike most other SIMD instruction sets of the same period, such as MIPS' MDMX or SPARC's Visual Instruction Set, but like PA-RISC's Multimedia Acceleration eXtensions (MAX-1, MAX-2), MVI was a simple instruction set composed of a few instructions that operate on integer data types stored in existing integer registers.
Extensions:
MVI's simplicity is due to two reasons. Firstly, Digital had determined that the Alpha 21164 was already capable of performing DVD decoding through software, therefore not requiring hardware provisions for the purpose, but was inefficient in MPEG-2 encoding. The second reason is the requirement to retain the fast cycle times of implementations. Adding many instructions would have complicated and enlarged the instruction decode logic, reducing an implementation's clock frequency.
Extensions:
MVI consists of 13 instructions.
Floating-point Extensions (FIX) Floating-point extensions (FIX) are an extension to the Alpha Architecture that introduces nine instructions for floating-point square root and for transferring data to and from the integer and floating-point registers. The Alpha 21264 (EV6) is the first microprocessor to implement these instructions.
Count Extensions (CIX) Count Extensions (CIX) is an extension to the architecture which introduces three instructions for counting bits. These instructions are categorized as integer arithmetic instructions. They were first implemented on the Alpha 21264A (EV67).
Implementations:
At the time of its announcement, Alpha was heralded as an architecture for the next 25 years. While this was not to be, Alpha has nevertheless had a reasonably long life. The first version, the Alpha 21064 (otherwise named the EV4), was introduced in November 1992 running at up to 192 MHz; a slight shrink of the die (the EV4S, shrunk from 0.75 µm to 0.675 µm) ran at 200 MHz a few months later. The 64-bit processor was a superpipelined and superscalar design, like other RISC designs, but nevertheless outperformed them all, and DEC touted it as the world's fastest processor. Careful attention to circuit design, a hallmark of the Hudson design team, such as its huge centralized clock circuitry, allowed them to run the CPU at higher speeds, even though the microarchitecture was fairly similar to other RISC chips. In comparison, the less expensive Intel Pentium ran at 66 MHz when it was launched the following spring.
Implementations:
The Alpha 21164 or EV5 became available in 1995 at processor frequencies of up to 333 MHz. In July 1996 the line was speed bumped to 500 MHz, in March 1998 to 666 MHz. Also in 1998 the Alpha 21264 (EV6) was released at 450 MHz, eventually reaching (in 2001 with the 21264C/EV68CB) 1.25 GHz. In 2003, the Alpha 21364 or EV7 Marvel was launched, essentially an EV68 core with four 1.6 GB/s inter-processor communication links for improved multiprocessor system performance, running at 1 or 1.15 GHz.
Implementations:
In 1996, the production of Alpha chips was licensed to Samsung Electronics Company. Following the purchase of Digital by Compaq the majority of the Alpha products were placed with API NetWorks, Inc. (formerly Alpha Processor Inc.), a private company funded by Samsung and Compaq. In October 2001, Microway became the exclusive sales and service provider of API NetWorks' Alpha-based product line.
Implementations:
On June 25, 2001, Compaq announced that Alpha would be phased out by 2004 in favor of Intel's Itanium, canceled the planned EV8 chip, and sold all Alpha intellectual property to Intel. Hewlett-Packard merged with Compaq in 2002; HP announced that development of the Alpha series would continue for a few more years, including the release of a 1.3 GHz EV7 variant named the EV7z. This would be the final iteration of Alpha, the 0.13 µm EV79 also being canceled.
Implementations:
Alpha is also implemented in the Piranha, a research prototype developed by Compaq's Corporate Research and Nonstop Hardware Development groups at the Western Research Laboratory and Systems Research Center. Piranha is a multicore design for transaction processing workloads that contains eight simple cores. It was described at the 27th Annual International Symposium on Computer Architecture in June 2000. Early revisions of the Sunway architecture are claimed to be based on Alpha; however, since the SW26010, Sunway has used a new instruction set architecture unrelated to Alpha.
Implementations:
Model history ISA extensions:
R – Hardware support for rounding to infinity and negative infinity.
B – BWX, the "Byte/Word Extension", adding instructions to allow 8- and 16-bit operations from memory and I/O.
M – MVI, "multimedia" instructions.
F – FIX, instructions to move data between integer and floating-point registers and for square root.
C – CIX, instructions for counting and finding bits.
T – Support for prefetch with modify intent, to improve the performance of the first attempt to acquire a lock.
Performance:
To illustrate the comparative performance of Alpha-based systems, some Standard Performance Evaluation Corporation (SPEC) performance numbers (SPECint95, SPECfp95) are listed below. Note that the SPEC results claim to report the measured performance of a whole computer system (CPU, bus, memory, compiler optimizer), not just the CPU. Also note that the benchmark and scale changed from 1992 to 1995. However, the figures give a rough impression of the performance of the Alpha architecture (64-bit), compared with the contemporary HP (64-bit) and Intel-based offerings (32-bit). Perhaps the most obvious trend is that while Intel could always get reasonably close to Alpha in integer performance, in floating-point performance the difference was considerable. On the other side, HP (PA-RISC) is also reasonably close to Alpha, but these CPUs are running at significantly lower clock rates (MHz). The tables lack two important values: the power consumption and the price of a CPU.
Alpha-based systems:
The first generation of DEC Alpha-based systems comprise the DEC 3000 AXP series workstations and low-end servers, DEC 4000 AXP series mid-range servers, and DEC 7000 AXP and 10000 AXP series high-end servers. The DEC 3000 AXP systems use the same TURBOchannel bus as the prior MIPS-based DECstation models, whereas the 4000 is based on Futurebus+ and the 7000/10000 share an architecture with corresponding VAX models.
Alpha-based systems:
DEC also produced a personal computer (PC) configuration Alpha workstation with an Extended Industry Standard Architecture (EISA) bus, the DECpc AXP 150 (codename Jensen, also named the DEC 2000 AXP). This is the first Alpha system to support Windows NT. DEC later produced Alpha versions of their Celebris XL and Digital Personal Workstation PC lines, with 21164 processors.
Alpha-based systems:
Digital also produced single-board computers based on the VMEbus for embedded and industrial use. The first generation includes the 21068-based AXPvme 64 and AXPvme 64LC, and the 21066-based AXPvme 160. These were introduced on March 1, 1994. Later models such as the AXPvme 100, AXPvme 166 and AXPvme 230 are based on the 21066A processor, while the Alpha VME 4/224 and Alpha VME 4/288 are based on the 21064A processor. The last models, the Alpha VME 5/352 and Alpha VME 5/480, are based on the 21164 processor.
Alpha-based systems:
The 21066 chip is used in the DEC Multia VX40/41/42 compact workstation and the ALPHAbook 1 laptop from Tadpole Technology.
In 1994, DEC launched a new range of AlphaStation and AlphaServer systems. These use 21064 or 21164 processors and introduced the PCI bus, VGA-compatible frame buffers and PS/2-style keyboards and mice. The AlphaServer 8000 series supersedes the DEC 7000/10000 AXP and also employs XMI and FutureBus+ buses.
The AlphaStation XP1000 is the first workstation based on the 21264 processor. Later AlphaServer/Station models based on the 21264 are categorised into DS (departmental server), ES (enterprise server) or GS (global server) families.
The final 21364 chip is used in the AlphaServer ES47, ES80 and GS1280 models and the AlphaStation ES47.
Alpha-based systems:
A number of OEM motherboards were produced by DEC, such as the 21066 and 21068-based AXPpci 33 "NoName", which was part of a major push into the OEM market by the company, the 21164-based AlphaPC 164 and AlphaPC 164LX, the 21164PC-based AlphaPC 164SX and AlphaPC 164RX and the 21264-based AlphaPC 264DP. Several third parties such as Samsung and API also produced OEM motherboards such as the API UP1000 and UP2000.
Alpha-based systems:
To assist third parties in developing hardware and software for the platform, DEC produced Evaluation Boards, such as the EB64+ and EB164 for the Alpha 21064A and 21164 microprocessors respectively.
The 21164 and 21264 processors were used by NetApp in various network-attached storage systems, while the 21064 and 21164 processors were used by Cray in their T3D and T3E massively parallel supercomputers.
Supercomputers The fastest supercomputer based on Alpha processors was the ASCI Q at Los Alamos National Laboratory. The machine was built as an HP AlphaServer SC45/GS Cluster. It had 4096 Alpha (21264 EV-68, 1.25 GHz) CPUs, and reached an Rmax of 7.727 TFLOPS.
**Milk equivalent**
Milk equivalent:
Milk equivalent is a measure of the quantity of fluid milk used in a processed dairy product. Measured on a milkfat basis, it takes about 21.8 pounds of farm milk to make a pound of butter, and about 9.2 pounds to make a pound of American cheese. Measured on a skim solids basis, it takes about 11.6 pounds of farm milk to make a pound of nonfat dry milk. Farm milk weighs about 8.6 pounds per gallon.
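As a worked example of these conversion factors, here is a small Python sketch using only the figures quoted above; the function name is ours.

```python
# Pounds of farm milk per pound of product, from the figures above.
MILK_PER_LB_BUTTER = 21.8           # milkfat basis
MILK_PER_LB_AMERICAN_CHEESE = 9.2   # milkfat basis
MILK_PER_LB_NONFAT_DRY_MILK = 11.6  # skim-solids basis
LB_PER_GALLON = 8.6                 # weight of a gallon of farm milk

def milk_equivalent_gallons(pounds_product: float, factor: float) -> float:
    """Farm-milk equivalent of a product, expressed in gallons."""
    return pounds_product * factor / LB_PER_GALLON

# One pound of butter represents about 21.8 lb of milk, roughly 2.5 gallons:
print(round(milk_equivalent_gallons(1.0, MILK_PER_LB_BUTTER), 2))  # 2.53
```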
**SimRefinery**
SimRefinery:
SimRefinery is a computer management simulation game designed to simulate Chevron's Richmond refinery operation. It was developed by the Maxis Business Simulations division of Maxis in 1993. John Hiles, who was the head of the Maxis division, was a lead designer on the project.
Development:
After the success of SimCity, Maxis received numerous requests from various companies to develop simulations for their industries. After rejecting many requests from other companies, the team eventually agreed to make a prototype of SimRefinery for Chevron: [SimRefinery was] a simulation of their refinery operation, for orienting people in the company as to how a refinery works. It wasn't so much for the engineers as it was for the accountants and managers who walked through this refinery every day and didn't know what these pipes were carrying.
Release and rediscovery:
As a commissioned business aid, it was not made available to the public. Until 2020, little information about the game existed, though Maxis had discussed its creation and some screenshots existed. Most of the assets stayed with Maxis Business Simulations, which Maxis eventually divested in 1996. The division rebranded itself as Thinking Tools Inc. and continued to develop similar corporate simulations, but eventually had to shut down, and most of its assets were destroyed. In May 2020, librarian Phil Salvador published a long-form investigative article about Maxis Business Simulations and SimRefinery featuring interviews with Hiles and other members of the division. Ars Technica reported on the article, which led to a commenter on the website uncovering a floppy disk containing an in-development build of the game. The anonymous commenter then uploaded a digital copy to the Internet Archive, where it works within the site's DOSBox emulator. This emulated version reveals more details about the "gameplay" of SimRefinery. The game resembles SimCity with different graphics, disasters, and rules, with the graphics changed to represent oil tanker ports, petroleum storage and piping systems. The user's role in the simulation was the plant manager of a refinery. Among other things, the user learned about supply and demand and how they affect the refinery's financial situation. The game was not intended to be an accurate representation of the chemical processes of a plant, as this would have been considered extremely dangerous. Instead, it was intended to show how disparate systems of a chemical plant may end up interacting at the larger scale, incorporating the finances, production, and logistics of operating a plant. The game allowed some "disasters" to be created by mixing explosive combinations of components that set off fires, as well as external events that could disrupt the plant. The build received by the Internet Archive was not considered a fully finished product.
**Anti-neutrophil cytoplasmic antibody**
Anti-neutrophil cytoplasmic antibody:
Anti-neutrophil cytoplasmic antibodies (ANCAs) are a group of autoantibodies, mainly of the IgG type, against antigens in the cytoplasm of neutrophils (the most common type of white blood cell) and monocytes. They are detected as a blood test in a number of autoimmune disorders, but are particularly associated with systemic vasculitis, so called ANCA-associated vasculitides (AAV).
ANCA IF patterns:
Immunofluorescence (IF) on ethanol-fixed neutrophils is used to detect ANCA, although formalin-fixed neutrophils may be used to help differentiate ANCA patterns. ANCA can be divided into four patterns when visualised by IF: cytoplasmic ANCA (c-ANCA), C-ANCA (atypical), perinuclear ANCA (p-ANCA) and atypical ANCA (a-ANCA), also known as x-ANCA. c-ANCA shows cytoplasmic granular fluorescence with central interlobular accentuation. C-ANCA (atypical) shows cytoplasmic staining that is usually uniform and has no interlobular accentuation. p-ANCA has three subtypes: classical p-ANCA, p-ANCA without nuclear extension, and granulocyte-specific antinuclear antibody (GS-ANA). Classical p-ANCA shows perinuclear staining with nuclear extension, p-ANCA without nuclear extension has perinuclear staining without nuclear extension, and GS-ANA shows nuclear staining on granulocytes only. a-ANCA often shows combinations of both cytoplasmic and perinuclear staining.
ANCA antigens:
The c-ANCA antigen is specifically proteinase 3 (PR3). p-ANCA antigens include myeloperoxidase (MPO) and bactericidal/permeability-increasing protein (BPI). Other antigens exist for c-ANCA (atypical), though many are as yet unknown. Classical p-ANCA occurs with antibodies directed to MPO. p-ANCA without nuclear extension occurs with antibodies to BPI, cathepsin G, elastase, lactoferrin and lysozyme. GS-ANA are antibodies directed to granulocyte-specific nuclear antigens. The antigens of atypical ANCA are thought to be similar to those of the p-ANCAs, but may arise through differences in neutrophil processing. Other less common antigens include HMG1 (p-ANCA pattern), HMG2 (p-ANCA pattern), alpha enolase (p- and c-ANCA pattern), catalase (p- and c-ANCA pattern), beta glucuronidase (p-ANCA pattern), azurocidin (p- and c-ANCA pattern), actin (p- and a-ANCA) and h-lamp-2 (c-ANCA).
ELISA:
Enzyme-linked immunosorbent assay (ELISA) is used in diagnostic laboratories to detect ANCAs. Although IF can be used to screen for many ANCAs, ELISA is used to detect antibodies to individual antigens. The most common antigens used on an ELISA microtitre plate are MPO and PR3, which are usually tested for after a positive IF test.
Development:
It is poorly understood how ANCA develop, although several hypotheses have been suggested. There is probably a genetic contribution, particularly in genes controlling the level of the immune response, although genetic susceptibility is likely linked to an environmental trigger, with possible factors including vaccination or exposure to silicates. Two possible mechanisms of ANCA development have been postulated, although neither theory answers the question of how the different ANCA specificities develop, and much research on the development of ANCA is still being undertaken.
Development:
Theory of molecular mimicry Microbial superantigens are molecules expressed by bacteria and other microorganisms that have the power to stimulate a strong immune response by activating T-cells. These molecules generally have regions that resemble self-antigens, which promote a residual autoimmune response – this is the theory of molecular mimicry. Staphylococcal and streptococcal superantigens have been characterized in autoimmune diseases – the classical example being post-group A streptococcal rheumatic heart disease, where there is similarity between the M proteins of Streptococcus pyogenes and cardiac myosin and laminin. It has also been shown that up to 70% of patients with granulomatosis with polyangiitis are chronic nasal carriers of Staphylococcus aureus, with carriers having an eightfold increased risk of relapse. This would therefore be considered a type II hypersensitivity reaction.
Development:
Theory of defective apoptosis Neutrophil apoptosis, or programmed cell death, is vital in controlling the duration of the early inflammatory response, thus restricting damage to tissues by the neutrophils. ANCA may be developed either via ineffective apoptosis or ineffective removal of apoptotic cell fragments, leading to the exposure of the immune system to molecules normally sequestered inside the cells. This theory solves the paradox of how it could be possible for antibodies to be raised against the intracellular antigenic targets of ANCA.
Role in disease:
Disease associations ANCAs are associated with small vessel vasculitides including granulomatosis with polyangiitis, microscopic polyangiitis, primary pauci-immune necrotizing crescentic glomerulonephritis (a type of renal-limited microscopic polyangiitis), eosinophilic granulomatosis with polyangiitis and drug-induced vasculitides. PR3-directed c-ANCA is present in 80–90% of granulomatosis with polyangiitis, 20–40% of microscopic polyangiitis, 20–40% of pauci-immune crescentic glomerulonephritis and 35% of eosinophilic granulomatosis with polyangiitis. c-ANCA (atypical) is present in 80% of cystic fibrosis (with BPI as the target antigen) and also in inflammatory bowel disease, primary sclerosing cholangitis and rheumatoid arthritis (with antibodies to multiple antigenic targets). p-ANCA with MPO specificity is found in 50% of microscopic polyangiitis, 50% of primary pauci-immune necrotizing crescentic glomerulonephritis and 35% of eosinophilic granulomatosis with polyangiitis. p-ANCA with specificity to other antigens is associated with inflammatory bowel disease, rheumatoid arthritis, drug-induced vasculitis, autoimmune liver disease, drug-induced syndromes and parasitic infections. Atypical ANCA is associated with drug-induced systemic vasculitis, inflammatory bowel disease and rheumatoid arthritis. The ANCA-positive rate is much higher in patients with type 1 diabetes mellitus than in healthy individuals. Levamisole, a common adulterant of cocaine, can cause an ANCA-positive vasculitis. The presence or absence of ANCA cannot by itself indicate the presence or absence of disease; results must be correlated with clinical features. The association between ANCA and disease activity remains controversial; however, the reappearance of ANCA after treatment can indicate a relapse.
Role in disease:
Pathogenesis Although the pathogenic role of ANCA is still controversial, in vitro and animal models support the idea that the antibodies have a direct pathological role in the formation of small vessel vasculitides. MPO- and PR3-specific ANCA can activate neutrophils and monocytes through their Fc and Fab'2 receptors, an effect that can be enhanced by cytokines which cause neutrophils to display MPO and PR3 on their surface. Aberrant glycosylation of the MPO- and PR3-specific ANCA enhances their ability to interact with activating Fc receptors on neutrophils. The activated neutrophils can then adhere to endothelial cells where degranulation occurs. This releases free oxygen radicals and lytic enzymes, resulting in damage to the endothelium via the induction of necrosis and apoptosis. Furthermore, neutrophils release chemoattractive signalling molecules that recruit more neutrophils to the endothelium, acting as a positive feedback loop. Animal models have shown that MPO antibodies can induce necrotizing crescentic glomerulonephritis and systemic small vessel vasculitis. In these animal models the formation of glomerulonephritis and vasculitis can occur in the absence of T-cells; however, neutrophils must be present. ANCA titres have been noted to have limited correlation with disease activity (except in kidney disease) and with risk of relapse, which is explained by differences in the epitopes and affinity of ANCAs. ANCAs induce excess activation of neutrophils, resulting in the production of neutrophil extracellular traps (NETs), which damage small blood vessels. In addition, in patients with active disease treated with rituximab, an anti-CD20 antibody which removes circulating B-cells, clinical remission correlates more closely with the decreasing number of circulating B-cells than with any decrease in ANCA titre, which in some patients does not change during treatment. The same study found that clinical relapse in some patients was associated with the return of circulating B-cells. Based on these observations, and on the fact that ANCA-reactive B-cells can be found in the circulation of patients with AAV, an alternative hypothesis has been proposed assigning a direct pathogenic role to these cells, whereby activated neutrophils and ANCA-reactive B-cells engage in intercellular cross-talk, leading not only to neutrophil degranulation and inflammation but also to the proliferation and differentiation of ANCA-reactive B-cells. However, this hypothesis remains to be tested.
Role in disease:
Treatment Avacopan was approved for medical use in the United States to treat anti-neutrophil cytoplasmic autoantibody-associated vasculitis in October 2021.
History:
ANCAs were originally described by Davies et al. in 1982 in segmental necrotising glomerulonephritis. The Second International ANCA Workshop, held in the Netherlands in May 1989, fixed the nomenclature of perinuclear versus cytoplasmic patterns; the antigens MPO and PR3 were discovered in 1988 and 1989, respectively. International ANCA Workshops have since been held every two years.
**Developer Certificate of Origin**
Developer Certificate of Origin:
The Developer Certificate of Origin (DCO) is a statement that a software developer agrees to, saying that "the contributor is allowed to make the contribution and that the project has the right to distribute it under its license." It was introduced in 2004 by the Linux Foundation to enhance the submission process for software used in the Linux kernel, shortly after the SCO–Linux disputes. DCOs are often used as an alternative to a Contributor License Agreement (CLA). Instead of a signed legal contract, a DCO is an affirmation that the source code being submitted originated from the developer, or that the developer has permission to submit the code. Proponents of the DCO contend that it reduces the barriers to entry introduced by a CLA.
Developer Certificate of Origin:
Developer Certificate of Origin Version 1.1 Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive Suite D4700 San Francisco, CA, 94129 Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Developer Certificate of Origin:
Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.
Developer Certificate of Origin:
(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
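In practice, assent to the DCO is recorded per commit rather than by a separate signature: the contributor adds a "Signed-off-by" trailer to each commit message, the convention used by the Linux kernel. With git, running `git commit -s` (or `--signoff`) appends the trailer automatically from the committer's configured identity; the commit subject, name, and address below are placeholders:

```
Fix off-by-one error in ring buffer wrap-around

Signed-off-by: Jane Developer <jane.developer@example.com>
```

Projects that enforce the DCO typically check for this trailer on every commit in a submitted change and reject contributions that lack it.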
**GO64**
GO64:
GO-64! was an early software emulation of the Commodore 64 computer, with a copyright date of 1988 for version 2.0.
The name most likely comes from the Commodore 128's ability to switch to a hardware emulation of the Commodore 64 by typing GO64 at the BASIC prompt and pressing the return key.
This software was created by Christopher P. Zura and Cliff Dugan of Software Insight Systems Inc.
It allowed the use of some software and hardware designed for Commodore 64 computers on Amiga computers.
It required a minimum of 512 KB of RAM to operate, but 1024 KB of RAM was needed to make use of all features. If a 68020 CPU was installed, it could, according to the developers, run faster than a real Commodore 64.
This software does not operate on Amiga Kickstart versions later than 1.3, and so does not run on the Amiga 3000, Amiga 500 Plus, Amiga 600, Amiga 4000 or Amiga 1200.
**McASP**
McASP:
McASP is an acronym for Multichannel Audio Serial Port, a communication peripheral found in Texas Instruments family of digital signal processors (DSPs) and Microcontroller Units (MCUs).
The McASP functions as a general-purpose audio serial port optimized for the needs of multichannel audio applications.
Depending on the implementation, the McASP may be useful for time-division multiplexed (TDM) streams, Inter-IC Sound (I2S) protocols, and intercomponent digital audio interface transmission (DIT). However, some implementations are limited to supporting just the I2S protocol.
McASP:
The McASP consists of transmit and receive sections that may operate synchronized, or completely independently with separate master clocks, bit clocks, and frame syncs, and using different transmit modes with different bit-stream formats. The McASP module also includes up to 16 serializers that can be individually enabled to either transmit or receive. In addition, all of the McASP pins can be configured as general-purpose input/output (GPIO) pins.
Features:
Features of the McASP include:
- Two independent clock generator modules for transmit and receive.
- Clocking flexibility that allows the McASP to receive and transmit at different rates. For example, the McASP can receive data at 48 kHz but output up-sampled data at 96 kHz or 192 kHz.
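The relationship between sample rate and serial clock can be made concrete: for a TDM or I2S stream, the bit clock must equal the sample rate multiplied by the number of time slots per frame and the bits per slot. The helper below is a small illustrative sketch (the function name is ours, not part of any McASP driver API):

```c
#include <stdio.h>

/* Bit clock needed for a TDM/I2S stream (hypothetical helper,
 * not part of any TI driver API): the serial clock must carry
 * every bit of every slot in each frame, once per sample. */
static unsigned long tdm_bit_clock_hz(unsigned long sample_rate_hz,
                                      unsigned long slots_per_frame,
                                      unsigned long bits_per_slot)
{
    return sample_rate_hz * slots_per_frame * bits_per_slot;
}

int main(void)
{
    /* Stereo I2S, 32-bit slots at 48 kHz -> 3,072,000 Hz */
    printf("%lu Hz\n", tdm_bit_clock_hz(48000, 2, 32));
    /* The same stream up-sampled to 96 kHz -> 6,144,000 Hz */
    printf("%lu Hz\n", tdm_bit_clock_hz(96000, 2, 32));
    return 0;
}
```

Because the transmit and receive clock generators are independent, the two rates in this example can run simultaneously on the same peripheral.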
Features:
Independent transmit and receive modules, each of which includes:
- Programmable clock and frame sync generator
- TDM streams from 2 to 32 time slots, and a 384-time-slot mode
- Support for time slot sizes of 8, 12, 16, 20, 24, 28, and 32 bits
- Data formatter for bit manipulation
- Individually assignable serial data pins (up to 16 pins)
- Glueless connection to audio analog-to-digital converters (ADC), digital-to-analog converters (DAC), codecs, digital audio interface receivers (DIR), and S/PDIF transmit physical layer components
Features:
- Wide variety of I2S and similar bit-stream formats
- Integrated digital audio interface transmitter (DIT) supporting: S/PDIF, IEC60958-1, and AES-3 formats; up to 16 transmit pins; enhanced channel status/user data RAM
- 384-slot TDM operation with an external digital audio interface receiver (DIR) device; for DIR reception, an external DIR receiver integrated circuit should be used with I2S output format and connected to the McASP receive section
Features:
Extensive error checking and recovery, covering:
- Transmit underruns and receive overruns due to the system not meeting real-time requirements
- Early or late frame sync in TDM mode
- Out-of-range high-frequency master clock, for both transmit and receive
- An external error signal coming into the AMUTEIN input
- DMA errors due to incorrect programming
Protocols:
The McASP supports a wide variety of protocols.
Protocols:
Transmit section supports:
- Wide variety of I2S and similar bit-stream formats
- TDM streams from 2 to 32 time slots
- S/PDIF, IEC60958-1, AES-3 formats

Receive section supports:
- Wide variety of I2S and similar bit-stream formats
- TDM streams from 2 to 32 time slots
- A TDM stream of 384 time slots, specifically designed for easy interfacing to an external digital audio interface receiver (DIR) device transmitting DIR frames to the McASP using the I2S protocol (one time slot for each DIR subframe)

The transmit and receive sections may each be individually programmed to support the following options on the basic serial protocol:
- Programmable clock and frame sync polarity (rising or falling edge): ACLKR/X, AHCLKR/X and AFSR/X
- Slot length (number of bits per time slot): 8, 12, 16, 20, 24, 28, 32 bits
- Word length (bits per word): 8, 12, 16, 20, 24, 28, 32 bits; always less than or equal to the slot length
- First-bit data delay: 0, 1, or 2 bit clocks
- Left/right alignment of the word inside the slot
- Bit order: MSB first or LSB first
- Bit mask/pad/rotate function: automatically aligns data for the DSP internally in either Q31 or integer formats, and automatically masks nonsignificant bits (setting them to 0, 1, or the extended value of another bit)

In DIT mode, additional features of the transmitter are:
- Transmit-only mode, 384 time slots (subframes) per frame
- Bi-phase encoded 3.3 V output
- Support for consumer and professional applications
- Channel status RAM (384 bits)
- User data RAM (384 bits)
- Separate valid bit (V) for subframes A and B
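To make these per-direction options concrete, the sketch below models them as a C configuration record for a common I2S-style setup. It is purely illustrative: the type and field names are hypothetical and do not correspond to the actual McASP register map, which is defined in TI's device documentation.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical serial-format descriptor mirroring the McASP's
 * documented per-direction options; not the real register layout. */
typedef enum { BIT_ORDER_MSB_FIRST, BIT_ORDER_LSB_FIRST } bit_order_t;

typedef struct {
    uint8_t     slot_bits;   /* 8, 12, 16, 20, 24, 28 or 32 */
    uint8_t     word_bits;   /* must be <= slot_bits */
    uint8_t     data_delay;  /* first-bit delay: 0, 1 or 2 bit clocks */
    uint16_t    num_slots;   /* TDM slots per frame: 2..32 (384 for DIR/DIT) */
    bit_order_t bit_order;
    bool        left_align;  /* word alignment inside the slot */
} serial_format_t;

/* Classic two-channel I2S: 32-bit slots carrying 16-bit words,
 * one bit-clock delay after frame sync, MSB first, left-aligned. */
static const serial_format_t i2s_stereo_16bit = {
    .slot_bits  = 32,
    .word_bits  = 16,
    .data_delay = 1,
    .num_slots  = 2,
    .bit_order  = BIT_ORDER_MSB_FIRST,
    .left_align = true,
};
```

A real driver would translate such a description into the device's format, frame-sync, and clock registers, programming transmit and receive independently.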
**Triple-negative breast cancer**
Triple-negative breast cancer:
Triple-negative breast cancer (TNBC) is any breast cancer that either lacks or shows low levels of estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2) overexpression and/or gene amplification (i.e. the tumor is negative on all three tests, giving the name triple-negative). Triple-negative is sometimes used as a surrogate term for basal-like. Triple-negative breast cancer comprises 15–20% of all breast cancer cases and affects more young women, or women with a mutation in the BRCA1 gene, than other breast cancers. Triple-negative breast cancers comprise a very heterogeneous group of cancers, and TNBC is the most challenging breast cancer type to treat. Hormone therapy that is used for other breast cancers does not work for TNBC. In its early stages, the cancer is typically treated with surgery, radiation and chemotherapy. In later stages, where surgery is not possible or the cancer has spread from the initial localised area, treatment is limited to chemotherapy and, in some cases, further targeted therapy. Triple-negative breast cancers have a relapse pattern very different from hormone-positive breast cancers: the risk of relapse is much higher for the first 3–5 years, but afterwards drops sharply and substantially below that of hormone-positive breast cancers.
Risk factors:
The overall proportion of TNBC is very similar in all age groups. Younger women have a higher rate of basal or BRCA-related TNBC, while older women have a higher proportion of apocrine, normal-like and rare subtypes, including neuroendocrine TNBC. A study in the US has shown that, among younger women, African American and Hispanic women have a higher risk of TNBC, with African Americans facing a worse prognosis than other ethnic groups. One known risk factor for triple-negative breast cancer is germline mutation: alterations within the heritable lineage that are passed down to offspring. Owing to their high predisposition for cancers of the breast, ovaries, pancreas, and prostate, the BRCA1 and BRCA2 genes were identified as high-risk for triple-negative disease. Changes or mutations in the 19p13.1 and MDM4 loci have also been associated with triple-negative breast cancer, but not with other forms of breast cancer. Thus, triple-negative tumors may be distinguished from other breast cancer subtypes by a unique pattern of common and rare germline alterations. In 2009, a case-control study of 187 triple-negative breast cancer patients described a 2.5-fold increased risk for triple-negative breast cancer in women who used oral contraceptives (OCs) for more than one year, compared to women who used OCs for less than one year or never. The increased risk was 4.2-fold among women 40 years of age or younger who used OCs for more than one year, while there was no increased risk for women between the ages of 41 and 45. Also, as the duration of OC use increased, triple-negative breast cancer risk increased.
Classification:
Breast cancer classification is used to assess the tumor to decide on treatment and prognosis. Classification can be performed using molecular, immunohistochemical, and clinical characteristics. One of the important classification types is receptor status, because it identifies those cancers that have specific targeted treatments available. Breast cancer tumors have traditionally been classed by immunohistochemistry as one of four types:
- estrogen receptor positive
- progesterone receptor positive
- HER2 overexpression positive
- triple-negative
There are targeted therapies for estrogen receptor, progesterone receptor and, more recently, HER2 receptor cancers, but there are no targeted therapies for TNBC as a whole. The threshold for hormone receptor positivity was changed in 2010 and now requires that more than 1% of tumor nuclei in the sample stain positive. Newer techniques for categorising breast cancer are based on gene expression in the tumor, which classifies breast cancer into:
- luminal A (HR+/HER2-): 68%
- luminal B (HR+/HER2+): 10%
- HER2 overexpressing (HR-/HER2+): 4%
- basal-like (HR-/HER2-): 10%
with 7% of unknown subtype. HR indicates hormone receptor, and +/- indicates whether the status is positive or negative.
Classification:
The basal-like subtype has many features overlapping with TNBC and, in addition to being receptor negative, shows increased expression of basal cytokeratins. 85% of basal-like tumors are TNBC. Subtypes are used to try to define better treatments or a more accurate prognosis. However, there is no standard classification for TNBC subtypes, and although TNBC comprises a variety of subtypes (whose boundaries vary depending on how they are determined), to date the disease is still uniformly treated with chemotherapy, sometimes with additional targeted treatments. One popular subtype classification for TNBC is:
- basal-like 1 (BL1): 35%
- basal-like 2 (BL2): 22%
- mesenchymal (M): 25%
- luminal androgen receptor (LAR): 16%
Most TNBC is invasive carcinoma of no special type. The following rarer breast tumors have a higher proportion of being TNBC:
- adenoid cystic carcinoma: 78.2% are TNBC
- metaplastic: 76.2% are TNBC
- medullary carcinoma: 60.5% are TNBC
- apocrine adenocarcinoma: 56.7% are TNBC
- inflammatory: 25.9% are TNBC
Prognosis:
TNBC is more likely to recur within the first five years after treatment than other breast cancers; after five years, however, the chance of recurrence is much lower than for other breast cancers. The risk of recurrence peaks at three years from diagnosis and declines after that. Cancer survival is typically expressed as a 5-year survival rate, i.e. survival relative to women without breast cancer, stratified by the stage at which the cancer was first diagnosed. These statistics do not apply if the cancer returns after treatment.
Prognosis:
Approximately 25% of those with localised disease will relapse with distant metastasis, also known as stage IV. Median survival from diagnosis of metastasis is around 12 months. Metastasis in TNBC differs from that of other breast cancers: there is a greater tendency to spread to the brain and to organs such as the lungs and liver, and less of a tendency to spread to the bones.
Treatment:
Early stage disease Standard treatment is surgery with adjuvant chemotherapy and radiotherapy.
Treatment:
Surgery is primarily used for early stage disease and may be either a lumpectomy or a mastectomy. Studies have found that overall survival after lumpectomy and radiotherapy was the same as, or higher than, after mastectomy for TNBC patients. Neoadjuvant chemotherapy (before surgery) is very frequently used for triple-negative breast cancers, as they are more susceptible to platinum-based regimens, allowing for a higher rate of breast-conserving surgeries. Important details on the responsiveness of an individual cancer can be gained from evaluating the response to this form of chemotherapy. However, the improvement in breast conservation is only 10–15%, and the clues to individual responsiveness have not been conclusively shown to improve outcomes.
Treatment:
Early stage TNBC is generally very susceptible to chemotherapy and can achieve a pathological complete response (pCR), i.e. no detectable cancer cells in the breast or lymph nodes, although this does not always translate into improved overall survival.
Chemotherapies used to treat early stage cancers are:
- anthracyclines
- alkylating agents such as cisplatin and carboplatin, which are particularly effective in BRCA-positive cases; these agents cause DNA damage which cannot be repaired when there are BRCA defects, leading to cell death
- taxanes
Late stage disease Late stage disease is known as metastatic TNBC (mTNBC).
Treatment:
Treatment depends on whether the tumour tests positive for the programmed death-ligand 1 (PD-L1) protein or a BRCA gene mutation. The rationale for immunotherapy is that PD-L1 on cancer cells binds the PD-1 receptor on the body's own killer T cells, which prevents the T cell from attacking the cancer cell. By blocking these receptors, the T cells can attack both cancer cells and healthy cells.
Treatment:
The following treatments are recommended by the American Society of Clinical Oncology (ASCO) for metastatic TNBC:
- mTNBC, PD-L1 positive: first line, chemotherapy plus an immune checkpoint inhibitor
- mTNBC, PD-L1 negative: first line, single-agent chemotherapy; third line, sacituzumab govitecan
- mTNBC, BRCA positive: patients previously treated with chemotherapy in the neoadjuvant, adjuvant, or metastatic setting should be offered a PARP inhibitor rather than chemotherapy
Sacituzumab govitecan (Trodelvy) is an anti-Trop-2 antibody linked to SN-38, developed by Immunomedics Inc. (now Gilead Sciences). It was approved by the FDA on 22 April 2020 for the treatment of metastatic TNBC. Sacituzumab govitecan had previously received FDA priority review, breakthrough therapy, and fast track designations.
Clinical trials:
Angiogenesis and EGFR (HER-1) inhibitors are frequently tested in experimental settings and have shown efficacy. These treatment modalities are not sufficiently established for routine use, and it is unclear at which stage they are best employed and which patients would benefit.
Clinical trials:
By 2009, a number of new strategies for TNBC were being tested in clinical trials, including the PARP inhibitor BSI-201 and NK012. A novel antibody-drug conjugate known as glembatumumab vedotin (CDX-011), which targets the protein GPNMB, also showed encouraging clinical trial results in 2009. PARP inhibitors had shown some promise in early trials but failed in some later trials. An accelerated-approval Phase II clinical trial (METRIC) investigating glembatumumab vedotin versus capecitabine began in November 2013, expected to enroll 300 patients with GPNMB-expressing metastatic TNBC. Three early stage trials reported TNBC results in June 2016, for IMMU-132, vantictumab, and atezolizumab in combination with the chemotherapy nab-paclitaxel. In 2019, CytoDyn initiated a Phase 1b/2 trial of its humanized monoclonal antibody, leronlimab (PRO 140), in combination with chemotherapy, following strong results in murine models. Among other mechanisms of action, leronlimab is believed to inhibit metastasis by blocking the CCR5 receptor on cell surfaces, which is commonly expressed in triple-negative breast cancer. On November 11, 2019, CytoDyn reported that the first TNBC patient injected under its naïve protocol (not previously treated for triple-negative breast cancer) demonstrated significantly reduced levels of circulating tumor cells (CTCs) and decreased tumor size at two-week and five-week observation intervals compared to baseline. CTCs are a potential surrogate endpoint in oncology trials, with reduced levels suggesting long-term clinical benefit.
Research:
Triple-negative breast cancers (TNBC) have, on average, significantly higher fluorine-18 fluorodeoxyglucose (FDG) uptake (measured by the SUVmax values) compared with uptake in ER+/PR+/HER2- tumors using fluorine-18 fluorodeoxyglucose-positron emission tomography (FDG-PET).
It is speculated that enhanced glycolysis in these tumors is probably related to their aggressive biology.
Research:
The widely used diabetes drug metformin holds promise for the treatment of triple-negative breast cancer. Metformin may influence cancer cells through indirect (insulin-mediated) effects, or it may directly affect the proliferation and apoptosis of cancer cells. Epidemiologic and preclinical laboratory studies indicate that metformin has anti-tumor effects via at least two mechanisms, both involving activation of the AMP-activated protein kinase (AMPK). As of 2009, a large-scale phase III trial of metformin in the adjuvant breast cancer setting was being planned. Triple-negative breast cancer cells rely on glutathione-S-transferase Pi1, and an inhibitor (LAS17) has shown encouraging results in a preclinical study.
**Adenylosuccinate synthase**
Adenylosuccinate synthase:
In molecular biology, adenylosuccinate synthase (or adenylosuccinate synthetase) (EC 6.3.4.4) is an enzyme that plays an important role in purine biosynthesis, by catalysing the guanosine triphosphate (GTP)-dependent conversion of inosine monophosphate (IMP) and aspartic acid to guanosine diphosphate (GDP), phosphate and N(6)-(1,2-dicarboxyethyl)-AMP. Adenylosuccinate synthetase has been characterised from various sources ranging from Escherichia coli (gene purA) to vertebrate tissues. In vertebrates, two isozymes are present: one involved in purine biosynthesis and the other in the purine nucleotide cycle.
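Written out as an equation, the reaction described above is (N(6)-(1,2-dicarboxyethyl)-AMP is also known as adenylosuccinate):

$$\mathrm{GTP} + \mathrm{IMP} + \text{L-aspartate} \longrightarrow \mathrm{GDP} + \mathrm{P_i} + N^{6}\text{-(1,2-dicarboxyethyl)-AMP}$$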
Structure:
The crystal structure of adenylosuccinate synthetase from E. coli reveals that the dominant structural element of each monomer of the homodimer is a central beta-sheet of 10 strands. The first nine strands of the sheet are mutually parallel, with right-handed crossover connections between the strands; the 10th strand is antiparallel with respect to the first nine. In addition, the enzyme has two antiparallel beta-sheets (composed of two strands and three strands, respectively), 11 alpha-helices and two short 3₁₀-helices. It has been suggested that the similarities between the GTP-binding domains of the synthetase and the p21ras protein are an example of convergent evolution of two distinct families of GTP-binding proteins. Comparison of the structures of adenylosuccinate synthetase from Triticum aestivum and Arabidopsis thaliana with the known structures from E. coli reveals that the overall fold is very similar to that of the E. coli protein.
Isozymes:
Humans express two adenylosuccinate synthase isozymes, encoded by the ADSS1 and ADSS2 genes.
**Zero ionic layer**
Zero ionic layer:
The zero ionic layer is the main site of interaction in the core SNARE complex. Dipole-dipole interactions take place between three glutamine (Q) residues and one arginine (R) residue exposed in this layer. Apart from this layer, the majority of the SNARE complex is hydrophobic because of its leucine-zipper character. The extensively studied layers within the SNARE alpha-helical bundle are designated from "-7" to "+8"; the zero ionic layer sits at the center of the bundle and is thus designated the "0" layer.
Structure:
The SNARE complex is a bundle formed by four alpha-helical proteins, including vesicle-associated synaptobrevin and cell-membrane-associated syntaxin and SNAP. When the bundle is viewed from the side, for every alpha-helical turn the alpha-carbons from each helix form a plane, which is designated a "layer". Along the helical bundle from N-terminus to C-terminus, layers are designated from "-7" to "+8" respectively; the "0" layer (i.e. the zero ionic layer) is at the center of the helical bundle. The zero ionic layer is an ionic domain within the otherwise largely hydrophobic alpha-helical SNARE complex. It is stabilized by attractive forces (dipole-dipole interactions) between three partially negatively charged carbonyl groups of glutamine residues and a positively charged arginine. Specifically, these interacting groups are Q226 on syntaxin, Q53 on SNAP-25 (Sn1), Q174 on SNAP-25 (Sn2) and R56 on synaptobrevin (v-SNARE). The four amino acids are asymmetrically arranged in the layer, yet their extensive interactions ensure the layer's stability: the end of the arginine side chain lies at the center of the asymmetry, and its amino groups form hydrogen bonds with the three glutamine residues. Thus, steric and electrostatic fit is well established.
Function and research interest:
SNARE proteins are a family of proteins located in cell membranes that mediate secretory pathways. The complex is formed during exocytosis, the process by which vesicles inside the cell fuse with the cell membrane to secrete molecules into the extracellular space. The zero ionic layer of the SNARE complex is of special interest to scientists studying SNARE because of three characteristics. Firstly, it is the only hydrophilic region in the otherwise hydrophobic SNARE complex; secondly, unlike most of the other layers, it displays asymmetry; thirdly, the 3Q:1R arrangement is found in almost all of the SNARE superfamily among eukaryotic cells. These unique aspects imply its importance to eukaryotic organisms in general. However, the exact functions of the zero ionic layer are still under investigation. Previous studies have focused on how mutations in this layer affect the functionality of the SNARE complex in secretory pathways. Even though the exact mechanism still awaits further investigation, these studies have revealed that the integrity of the zero ionic layer is not essential to proper alignment during complex formation, but it is essential to the disassociation of the SNARE complex and the recycling of its four constituent alpha-helical proteins after exocytosis. An ATPase (NSF), together with a cofactor (α-SNAP), facilitates the breakdown of the SNARE complex after the completion of exocytosis. Studies have suggested that, during the disassociation process, the NSF/α-SNAP complex acts specifically on the zero ionic layer, particularly the glutamine residue (Q226) in syntaxin. The glutamine residue transmits the conformational change of the NSF/α-SNAP complex to the SNARE complex in order to disrupt and thus disassociate the SNARE complex at the zero ionic layer. More specifically, even though the ionic layer is for the most part buried within the hydrophobic complex, during disassociation the NSF/α-SNAP complex may disturb the hydrophobic shielding and thus let water molecules into the core. This exposure to other hydrophilic molecules disturbs the original hydrogen-bonding equilibrium and thus facilitates disassembly of the alpha-helical bundle.
Mutation and alternation:
In studies that use exocytotic SNAREs of yeast as models, a mutation from glutamine to arginine in the zero ionic layer leads to yeast cells with deficient growth and protein secretion. However, a mutation from arginine to glutamine in this layer leads to yeast cells that are functionally wild-type. In the mutant where all four amino acids in the zero ionic layer are glutamine residues, the cells still exhibit normal secretory ability, but defects may become pronounced when other mutations are present. Complementary mutations, where a glutamine-to-arginine mutation is paired with an arginine-to-glutamine mutation in the zero ionic layer, have also resulted in functionally wild-type yeast cells, as judged by their secretory ability. These mutation studies were performed to probe the role of the four amino acids in the zero ionic layer. The underlying mechanisms by which these mutations lead to particular outcomes are not well characterised. In general, the glutamine residues in this layer are of critical importance to the functionality of mutated strains: as long as the glutamine is intact, or compensated for in some way during mutation, the functionality of the SNARE complex is retained.
**Congenital distal spinal muscular atrophy**
Congenital distal spinal muscular atrophy:
Congenital distal spinal muscular atrophy is a hereditary condition characterized by muscle wasting (atrophy), particularly of distal muscles in legs and hands, and by early-onset contractures (permanent shortening of a muscle or joint) of the hip, knee, and ankle. Affected individuals often have shorter lower limbs relative to the trunk and upper limbs. The condition is a result of a loss of anterior horn cells localized to lumbar and cervical regions of the spinal cord early in infancy, which in turn is caused by a mutation of the TRPV4 gene. The disorder is inherited in an autosomal dominant manner. Arm muscle and function, as well as cardiac and respiratory functions are typically well preserved.
Signs and symptoms:
The presentation is as follows:
- Neurogenic muscle weakness
- Atrophy (of lower and upper limbs)
- Club foot
- Arthrogryposis
- Scoliosis
- Platyspondyly
- Pes cavus
- Vocal cord paralysis
Causes:
Congenital distal spinal muscular atrophy is caused by a mutation of the TRPV4 gene, located at 12q23–q24.1. The mutation causes affected individuals to have lower levels of TRPV4 expression. This deficiency can lead to abnormal osmotic regulation. Congenital dSMA is genetically heterogeneous, meaning a mutation of this gene can cause a range of other phenotypically related or unrelated diseases, depending on the region that is mutated.
Pathophysiology:
The TRPV4 (transient receptor potential vanilloid 4) gene, located on chromosome 12, encodes for a protein that serves as an ion channel, typically found in the plasma membrane and is permeable to Ca2+. Abnormal regulation of Ca2+ can lead to inefficient muscle contraction. TRPV4 plays a major role in mechanosensation, as well as osmosensory functions in nerve endings, endothelia, and alveoli. The TRPV4 protein consists of 871 amino acids with its N- and C- terminals facing intracellularly. The protein also contains six alpha helices that pass through the plasma membrane. Mutations in TRPV4 can result in the loss of its normal function, or a toxic gain of function. In the latter case, intracellular Ca2+ levels are increased, which results in abnormal regulation.
Pathophysiology:
Mechanism The ankyrin repeat domain (ARD) is a region located near the intracellular N-terminal of the TRPV4 protein and consists of six ankyrin repeats. Four missense mutations have been identified within this region, all due to the swapping out of arginine for a different amino acid; arginine is highly polar and positively charged, while its replacements are less polar or nonpolar. The identified amino acid substitutions are:
- R296H, arginine to histidine
- R315W, arginine to tryptophan
- R316C, arginine to cysteine
- R594H, arginine to histidine
Diagnosis:
Electrophysiological evidence of denervation with intact motor and sensory nerve conduction findings must be made by using nerve conduction studies, usually in conjunction with EMG. The presence of polyphasic potentials and fibrillation at rest are characteristic of congenital dSMA.
The following are useful in diagnosis:
- Nerve conduction studies (NCS), to test for denervation
- Electromyography (EMG), also to detect denervation
- X-ray, to look for bone abnormalities
- Magnetic resonance imaging (MRI)
- Skeletal muscle biopsy examination
- Serum creatine kinase (CK) level in blood, usually elevated in affected individuals
- Pulmonary function test
Management:
Congenital dSMA has a relatively stable disease course, with disability mainly attributed to increased contractures rather than loss of muscle strength. Individuals frequently use crutches, knee, ankle, and/or foot orthoses, or wheelchairs. Orthopaedic surgery can be an option for some patients with severely impaired movement. Physical therapy and occupational therapy can help prevent further contractures from occurring, though they do not reverse the effects of preexisting ones. Some literature suggests the use of electrical stimulation or botulinum toxin to halt the progression of contractures.
**DT Carnage**
DT Carnage:
DT Carnage is a racing game developed by the South Korean studio Axis Entertainment. A planned Nintendo Wii release was cancelled.
Plot:
The game revolves around a fictional tournament called the DT Tournament where people race using modified cars and are allowed the use of weapons. The player's father is injured in one of the tournaments, and the player swears revenge against the one who injured him.
Features:
- Allows the player to use items to slow down opponents
- Crush other drivers against the side of the track
- RPG mode: use various items and techniques to excel on the track
**Congenital epulis**
Congenital epulis:
Congenital epulis is a proliferation of cells most frequently occurring on the alveolar ridge of the upper jaw at birth. Less frequently, the mass may arise from the mandibular alveolus; rare cases arise from the tongue. The lesion is more commonly found in female babies, suggesting hormonal involvement during embryonic development. The cause of this type of epulis is unknown. It is also known as congenital granular cell tumor or Neumann's tumor, and was historically referred to as granular cell myoblastoma.
Congenital epulis:
Multiple lesions occur in 10% of affected neonates. The tumor is typically pedunculated and varies in maximum size from 0.5 cm to 9 cm. The lesion is typically painless and does not increase in size after discovery. Some small lesions may regress over time. Treatment is surgical excision. Recurrence is extremely rare even after incomplete excision.
**DYNLT1**
DYNLT1:
Dynein light chain Tctex-type 1 is a protein that in humans is encoded by the DYNLT1 gene. Cytoplasmic dynein is the major motor protein complex responsible for minus-end, microtubule-based motile processes. Each dynein complex consists of two heavy chains that have ATPase and motor activities, plus a group of accessory polypeptides. TCTEX1 is a dynein light chain involved in cargo binding (Chuang et al., 2005).
**Radio propaganda**
Radio propaganda:
Radio propaganda is propaganda aimed at influencing attitudes towards a certain cause or position, delivered through radio broadcast. The power of radio propaganda came from its revolutionary nature. The radio, like later technological advances in the media, allowed information to be transmitted quickly and uniformly to vast populations. Internationally, the radio was an early and powerful recruiting tool for propaganda campaigns.
Radio propaganda:
Before television, radio was by far the most effective way to prevent or promote social change. In many areas, it still is. Radio propaganda can be broadcast over great distances to a large audience at a relatively low cost. Through radio, a propagandist can bring his voice and all the persuasive power of his emotions to millions of people. A similar approach is used in every war employing radio propaganda: aside from convincing those on the home front of the necessity of war, a different kind of propaganda must be directed towards the enemy. Radio became a powerful propaganda tool because it ignored national borders and made enemy lines more accessible. One of the most common ways hosts got the civilian and enemy populations to listen to their broadcasts was by dropping leaflets from hot air balloons or airplanes. Most programs were broadcast on selected stations at certain times of the day; the dropped leaflets explained exactly when and where the broadcasts could be heard.
World War II:
The use of radio as a wartime propaganda tool was made famous during World War II by broadcasting organizations such as Voice of America and by shows such as Tokyo Rose, Axis Sally, and Lord Haw Haw.
World War II:
Nazi Germany The radio was an important tool of Nazi propaganda efforts, and it has been argued that the Nazis pioneered the use of what was then still a relatively new technology. Within a few months of the outbreak of World War II, German propagandists were transmitting no fewer than eleven hours of programming a day, offering most of it in English as well. In the first year of Nazi propaganda programming, broadcasters attempted to destroy pro-British feeling rather than arouse pro-German sentiment. These propagandists targeted certain groups, including capitalists, Jews, and selected newspapers and politicians. By the summer of 1940, the Nazis had abandoned all attempts to win American sympathy, and the tone of German radio broadcasts towards the United States had become critical.
World War II:
German propaganda minister, Joseph Goebbels, claimed the radio was the "eighth great power" and he, along with the Nazi party, recognized the power of the radio in the propaganda machine of Nazi Germany. Recognizing the importance of radio in disseminating the Nazi message, Goebbels approved a mandate whereby millions of cheap radio sets were subsidized by the government and distributed to German citizens. It was Goebbels' job to propagate the anti-Bolshevik statements of Hitler and aim them directly at neighboring countries with German-speaking minorities. In Goebbels' "Radio as the Eighth Great Power" speech, he proclaimed: “It would not have been possible for us to take power or to use it in the ways we have without the radio...It is no exaggeration to say that the German revolution, at least in the form it took, would have been impossible without the airplane and the radio…[Radio] reached the entire nation, regardless of class, standing, or religion. That was primarily the result of the tight centralization, the strong reporting, and the up-to-date nature of the German radio.”As well as domestic broadcasts, the Nazi regime used the radio to deliver its message to both occupied territories and enemy states. One of the main targets was the United Kingdom to which William Joyce broadcast regularly, gaining him the nickname "Lord Haw-Haw" in the process. Broadcasts were also made to the United States, notably through Robert Henry Best and 'Axis Sally', Mildred Gillars.
World War II:
United Kingdom British propaganda during the First World War set a new benchmark that inspired the fascist and socialist regimes during the Second World War and the Cold War; Marshal Paul von Hindenburg stated, "This English propaganda was a new weapon, or rather a weapon which had never been employed on such a scale and so ruthlessly in the past." It was clear that large numbers of civilians could be mobilized for a massive war effort through persuasive techniques derived from the emerging disciplines of behavioral psychology and the social sciences. An example of effective radio propaganda by the United States for the United Kingdom is the news reports of Edward R. Murrow. When the United Kingdom was, in the autumn of 1940, the only remaining nation opposing Germany, Murrow covered the Battle of Britain and particularly the nightly bombing raids on London. His reports described the falling of the bombs, their impact, and the destruction they brought. As he described his approach to a London newspaper in 1941, "The official news is perhaps less important than the more intimate stories of life, work, and sacrifice." Murrow's objective was to reach the common man through his broadcasts, intimating to the world that the United Kingdom was fighting a "people's war," not a war for its colonies, as the American isolationists charged. He wanted Americans to know that the UK was standing tall, united in its cause and protecting Western liberties and European civilization. He wanted Americans to see the UK as their natural ally and to hurry to extend a helping hand. Many say he had far greater influence than the American ambassador to London; "He was an ambassador, in a double role, representing Britain in America as well as America in Britain.... He was a diplomat without a portfolio, a spokesman for a cause."
United States Historians believe the moment when American radio made its debut as the preeminent means of foreign news was the Munich Crisis in September 1938. Early that month, Hitler began implementing his plans to dominate Europe by demanding self-determination for Germans living in a region of Czechoslovakia known as the Sudetenland. He left few doubts that he meant to annex the Sudetenland as part of an enlarged German Reich. High-level negotiations ensued, during which Britain's Prime Minister, Neville Chamberlain, journeyed to Germany three times in less than three weeks in a desperate attempt to save the peace. Fearful that a European war would once again entangle them, Americans became glued to their radios for daily and sometimes hourly updates and interpretations of the latest developments of the crisis. Within a couple of days, American listeners were bombarded with news programs, special news bulletins, and expert commentary on the crisis. America's first real venture into international broadcasting came in 1940, after the Nazi victories in Europe, when the Roosevelt administration was becoming increasingly concerned about the effects of Nazi propaganda, both domestically and internationally. In August 1940, President Roosevelt issued an Executive Order establishing the Office of Coordination of Commercial and Cultural Relations, to promote the use of government/private radio, and the Office of the Coordinator of Information. By 1942, the most famous radio program airing overseas became known as the "Voice of America." Even before the Japanese attack on Pearl Harbor, the U.S. government's Office of the Coordinator of Information began providing war news and commentary to commercial American shortwave radio stations for use on a voluntary basis. A popular government wartime radio program, delivered by President Franklin D. Roosevelt, was the "fireside chats". Two of the most famous of these broadcasts were entitled "On National Security" and "On the Declaration of War with Japan". "The Arsenal of Democracy" was a slogan coined by President Roosevelt during his national security radio broadcast delivered on 29 December 1940. Roosevelt promised to help the UK fight Nazi Germany by providing military supplies, in a program known as Lend-Lease, while the United States stayed out of the actual fighting. This announcement was made a year before the attack on Pearl Harbor, at a time when Germany had occupied much of Europe and threatened the UK. The day after the attack on Pearl Harbor, Roosevelt delivered his famous Infamy Speech, broadcast to the American people, in which he called for a formal declaration of war on the Empire of Japan. The Infamy Speech was brief, running a little over seven minutes, and Roosevelt made a point of emphasizing that the United States and her interests were in grave danger. In so doing, he sought to end the isolationist stance the United States had previously advocated concerning involvement in the war. The overall tone of the speech was one of determined realism. Roosevelt made no attempt to gloss over the extensive damage that had been caused to the American armed forces, noting the number of American lives lost in the attack. However, he emphasized his confidence in the strength of the American people to face the challenge posed by Japan.
World War II:
"Yesterday, December 7th, 1941 – a date which will live in infamy – the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan. It will be recorded that the distance of Hawaii from Japan makes it obvious that the attack was deliberately planned many days or even weeks ago. During the intervening time, the Japanese Government has deliberately sought to deceive the United States by false statements and expressions of hope for continued peace.”"As Commander-in-Chief of the Army and Navy I have directed that all measures be taken for our defense, that always will our whole nation remember the character of the onslaught against us. No matter how long it may take us to overcome this premeditated invasion, the American people, in their righteous might, will win through to absolute victory.”"Hostilities exist. There is no blinking at the fact that our people, our territory and our interests are in grave danger. With confidence in our armed forces, with the un-bounding determination of our people, we will gain the inevitable triumph. So help us God.”President Franklin D. Roosevelt – 8 December 1941With this declaration of war, radio became part of the propaganda campaign. Throughout the war, the attack on Pearl Harbor was frequently used in American propaganda. Direct wartime programming began shortly after the United States entry into the war. The first live broadcast to Germany, called Stimmen aus Amerika ("Voices from America") took place on 1 February 1942. It was introduced by "The Battle Hymn of the Republic" and included the pledge: "Today, and every day from now on, we will be with you from America to talk about the war.... The news may be good or bad for us – We will always tell you the truth."The Armed Forces Radio Service created a number of radio shows for American GIs stationed overseas. The most popular of these "mosquito networks" was GI Jive. In Agra, India, Virginia C. Claudon Allen broadcast nightly to counter Tokyo Rose.
World War II:
Famous radio shows During World War II, American GIs in both the Pacific and European theaters of war heard anonymous voices on the radio playing carefully selected American music and extolling the virtues of Japanese and Nazi causes. The DJs continuously encouraged GIs to stop fighting and constantly made false claims of American defeats and Japanese or Nazi victories. They frequently referred to specific American units and individuals by name, and in rare cases mentioned the names of loved ones back home. GIs dubbed the voice from Japan "Tokyo Rose"; two popular voices from Germany were "Axis Sally" and "Lord Haw-Haw".
World War II:
"Tokyo Rose": After being stranded in Japan while visiting her sick aunt after the United States refused to let her reenter the country after the attack on Pearl Harbor, Iva Toguri wound up at Radio Tokyo as a typist, preparing English-language scripts drafted by Japanese authorities for broadcast to the Allied troops in the Pacific. At Radio Tokyo, Toguri met captured Australian Major Charles Cousens and his associates, American Captain Wallace Ince and Filipino Lieutenant Normando Reyes. A supporter of the Allies in the war, she was delighted to meet soldiers who had been fighting for her side. Put off by her overt friendliness and pro-Americanism, the POWs initially suspected her of being a Kempeitai spy, but over the next few months they eventually came to trust her. When Radio Tokyo directed Cousens to include a woman DJ in his Zero Hour program, he asked for Toguri by name. Since their capture and conscription into Radio Tokyo, the Allied POWs had waged a covert campaign to sabotage the Japanese propaganda effort through the use of on-air innuendos, satire, and sarcastic, rushed or muffled readings. Now they had to bring a fourth party into the conspiracy, and the only person they could trust to support their efforts was Toguri. She was dubbed with the name "Tokyo Rose" and listeners came to know her by that name.
World War II:
After the war, the Army Counter Intelligence Corps, the Federal Bureau of Investigation, and the press continued to refer to Toguri by that name as she was taken into custody and brought to trial. Those defending Toguri stated that she was clearly "forced" to broadcast for the Japanese and was always a loyal American, shown by her many attempts to return home, which were continuously rejected. They also pointed to the lack of "tangible" evidence; American investigators never discovered any Japanese documents with the name "Tokyo Rose" because "Tokyo Rose" was a name coined by American GIs. However, under the United States Constitution, treason is the act of providing "aid and comfort" to an enemy. It does not say that force, loneliness, trickery, coercion, or fright are mitigating factors in favor of traitors. On 6 October 1949, Toguri was sentenced to 10 years in prison and fined $10,000. She served less than half that time and was pardoned by President Gerald Ford.
World War II:
"Axis Sally" was the pseudonym of Mildred Gillars, an American broadcaster employed by the Third Reich in Nazi Germany to proliferate propaganda during World War II. By 1941, the U.S. State Department advised American nationals to return home, but Gillars chose to stay in Germany after her fiancé, a German citizen named Paul Karlson, refused to marry her if she returned to the United States. Shortly afterwards, Karlson was sent to the Eastern Front, where he died in action. In 1940 she obtained work as an announcer with the Reichs-Rundfunk-Gesellschaft (RRG), German State Radio. On 7 December 1941, Gillars was working in the studio when the Japanese attack on Pearl Harbor was announced. She broke down in front of her colleagues and announced her allegiance to the East. However, since she decided to stay in Germany, Gillar was faced with the prospect of joblessness or prison, so she produced a written oath of allegiance to Germany and returned to work, her duties being limited to announcing records and participating in chat shows. She soon acquired several names amongst her GI listeners, including Berlin Bitch, Berlin Babe, Olga, and Sally, but the one that became most common was "Axis Sally." Her most successful show was known as Home Sweet Home. Home Sweet Home attempted to exploit the fears of American soldiers about the home front. The broadcasts were designed to make the soldiers cast doubt on their mission, their leaders, and their prospects after the war. Another show was known as Midge at the Mike, broadcast to late fall 1943. Gillar played American songs interspersed with defeatist propaganda and anti-Semitic rhetoric, as well as G.I.’s Letter box and Medical Reports in 1944, in which Gillars used information on wounded and captured U.S. airmen to cause fear and worry in their families. She was convicted of treason by the United States in 1949 following her capture in post-war Berlin. Her arrest came about after the U.S. attorney general specially dispatched prosecutor Victor C. Woerheide to Berlin to find and arrest Gillars. He only had one solid lead: Raymond Kurtz, a B-17 pilot shot down by the Germans, recalled that a woman who had visited his prison camp seeking interviews was the broadcaster. Gillars was indicted on 10 September 1948, and charged with 10 counts of treason, but only eight were presented at her trial, which began on 25 January 1949. The prosecution relied on the large number of her programs recorded by the Federal Communications Commission to demonstrate her active participation in propaganda activities against the United States. It was also shown that she had made an oath of allegiance to Adolf Hitler. She was sentenced to 10 to 30 years in prison and a $10,000 fine.
World War II:
"Lord Haw-Haw" was a pseudonym for William Joyce, German radio's most prominent English-language speaker. He hosted a propaganda show on a radio program called Germany Calling, broadcast by Nazi German radio to audiences in the UK on the station Reichssender Hamburg. The program started on 18 September 1939 and continued until 30 April 1945, when Hamburg was overrun by the British Army. Through his broadcasts, the Reich Ministry of Public Enlightenment and Propaganda attempted to discourage and demoralize British, Canadian, Australian, and American troops and the British population within radio range to suppress the effectiveness of the Allied war effort through propaganda and to motivate the Allies to agree to peace terms leaving the Nazi regime intact and in power. The Nazi broadcasts prominently reported on the shooting down of Allied aircraft and the sinking of Allied ships, presenting discouraging reports of high losses and casualties among Allied forces. Although listening to his broadcasts was highly discouraged, many Britons did indeed tune into them. In 1940, at the height of his influence, Joyce had an estimated 6 million regular and 18 million occasional listeners in the United Kingdom.At the end of the war, Joyce was captured by British forces at Flensburg, near the German border with Denmark. Spotting a dishevelled figure resting from gathering firewood, intelligence soldiers engaged him in conversation, asked if he was Joyce, and when he reached in his pocket for his false passport, the soldiers, believing he was armed, shot him in the buttocks, leaving four wounds. Joyce was charged on the basis that, even though he had misstated his nationality to gain possession of a British passport, until it expired this entitled him to British diplomatic protection in Germany, and therefore he owed allegiance to the King of England at the time he commenced working for the Germans. Joyce was convicted and sentenced to death on 19 September 1945.
Cold War:
By 1946, it had become clear to the United States that the Soviet Union did not share the American vision of postwar collaboration for peace in Europe. Soviet authorities began installing Communist regimes in the liberated territories of Eastern Europe, in direct violation of the provisions of the Teheran and Yalta Conferences. Radio became crucial in the propaganda war between the two blocs and was the main concern of both sides' information agencies as the "war of ideas" began. In 1947, the Soviet Union organized the Communist Information Bureau (Cominform), formed to unite the Communist states in the coming struggle against "Anglo-American Imperialism." One of the earliest Western responses was Radio in the American Sector (RIAS), established in 1946 to serve the American sector of West Berlin. The station's importance was magnified during the 1948 Berlin blockade, when it carried the message of Allied determination to resist Soviet intimidation. For East Germany, its broadcasts included news, commentary, and cultural programs unavailable in the controlled media of the German Democratic Republic. RIAS's management developed many of the techniques later used by Radio Free Europe/Radio Liberty. Its broadcasts stressed the idea of democracy and the importance of breaking down the international communications barriers erected by the Communists. Programming was generally geared towards "special groups" within the East German population, including youth, women, and farmers. The station became known as the "bridge" from West to East Germany over the Berlin Wall.
Cold War:
Aside from RIAS, Voice of America (VOA) began broadcasting to the Soviet Union in 1947, as part of a U.S. foreign-policy effort to counter the propaganda of the Soviet Union and other countries. Initially there was only one hour per day of news and other features, broadcast on the pretext of countering "more harmful instances of Soviet propaganda directed against American leaders and policies" in the internal Soviet Russian-language media. The Soviet Union responded by initiating aggressive electronic jamming of VOA broadcasts on 24 April 1949, which led critics to question the broadcasts' actual impact. However, after the collapse of the Warsaw Pact and the Soviet Union, interviews with participants in anti-Soviet movements confirmed the effectiveness of VOA broadcasts in transmitting information to socialist societies. While many had long acknowledged the importance of propaganda as an instrument of foreign policy, it was the Cold War that institutionalized propaganda as a permanent instrument of U.S. foreign policy. The Soviets abruptly raised the tempo of the conflict by taking over Czechoslovakia and attempting to take complete control of Berlin. Realizing there was no further hope of treating the Soviet Union as an ally, the Western powers formed the North Atlantic Treaty Organization in April 1949, establishing the containment of Communism as the organization's priority. The escalation of the Cold War intensified America's interest in broadcasting and information policy. The United States National Security Council had already issued document NSC 10/2, approved by President Truman in June 1948, authorizing a comprehensive program of clandestine warfare, including black propaganda, psychological warfare, subversion, assistance to underground resistance movements, paramilitary operations, and economic warfare; a follow-up NSC study in 1949 concluded that the United States needed a major information program to counter Soviet aggression. The most famous form of anti-Soviet propaganda was the development of Radio Free Europe (RFE) and Radio Liberty (RL), which broadcast to Eastern Europe. The stations' purpose, above all, was to wage a political struggle against Communism and Sovietism and against the representatives of their terrorist regimes: to unmask Communist plans and expose the propagandists of Communist ideology. While other countries established international broadcasting entities, RFE/RL aimed to change the form of government in foreign nations by airing news not about the country from which the broadcasts originated, but about the countries that were the targets of the broadcasts. President Harry Truman announced in 1950 that the United States would launch an information program known as the "Campaign of Truth," a name strategically chosen to avoid any connotation of propaganda. The goals of the campaign were:
1) Establish a "healthy international community" with confidence in American leadership.
2) Present America fairly and counter "all the misrepresentations."
3) Discourage further Soviet encroachment by showing that America is desirous of peace but prepared for war.
4) Help "to roll back Soviet influence" by all means short of force, making captive peoples feel that they can identify with the West, weakening the morale of Soviet military personnel, and encouraging non-Communist forces.
In late 1950, RFE/RL began to assemble a full-fledged foreign broadcast staff, becoming more than a "mouthpiece for exiles." Teams of journalists were hired for each language service, and an elaborate system of intelligence gathering provided up-to-date broadcast material, most of it from a network of well-connected émigrés and from interviews with travelers and defectors. The Communist regimes devoted considerable resources to countering Western broadcasts. They organized radio jamming on a massive scale, spending more on jamming than the West did on broadcasting. They placed spies in Western radio stations to disrupt information sharing and organize counterpropaganda, while also attempting to gain access to top-level officials who could provide them with information controlled by Western media outlets or intelligence services. These countermeasures significantly drained the regimes' domestic resources yet failed to neutralize the Western broadcasts. During these years, the practice of propaganda became inextricably tied to the practice of psychological warfare. During World War II, psychological warfare had largely been seen as an accessory to military operations, but during the Cold War it was used to influence public opinion and advance foreign-policy interests; psychological warfare became, in essence, a synonym for the Cold War. It reflected the belief that the Cold War was an ideological, psychological, and cultural contest for hearts and minds that would be won or lost on the plane of public opinion. When President John F. Kennedy took office, his administration took a greater interest in the U.S. information effort than any before it. After Soviet premier Nikita Khrushchev's address to the Soviet Central Committee in 1961, U.S. leaders believed the Soviet Union would be ready to seek a more limited form of conflict, emphasizing the winning of hearts and minds, and the United States saw this as an opening to use psychological resources to its advantage. However, these components of propaganda were put on hold by the Bay of Pigs scandal, the Cuban Missile Crisis, and the abrupt end of the Kennedy administration.
Vietnam:
The first Vietnamese-language radio transmission was made on 2 September 1945, when Ho Chi Minh read out the Declaration of Independence. Before 1945, Vietnamese people were banned from owning radio receivers, and broadcasting was under the control of the French colonial government, which had established the first radio station in Vietnam, Radio Saigon, in the late 1920s. Vietnam's national radio station, now called the Voice of Vietnam, started broadcasting from Hanoi the week after the declaration of the Democratic Republic of Vietnam, stating: "This is the Voice of Vietnam, broadcasting from Hanoi, the capital of the Democratic Republic of Vietnam." During the Vietnam War, Radio Hanoi operated as a propaganda tool of North Vietnam. Following reunification, all radio stations were combined into the Voice of Vietnam, which became the national radio station in 1978.
Vietnam:
"Hanoi Hannah" or Trịnh Thị Ngọ, was a Vietnamese radio personality best known for her work during the Vietnam War, when she made English-language broadcasts for North Vietnam directed at U.S. troops. During the Vietnam War in the 1960s and 1970s, Ngo became famous among U.S. soldiers for her propaganda broadcasts on Radio Hanoi. She made three broadcasts a day, reading the list of the newly killed or imprisoned Americans, attempting to persuade U.S. GIs that the U.S. involvement in the Vietnam War was unjust and immoral and playing popular U.S. anti-war songs in an attempt to incite feelings of nostalgia and homesickness amongst U.S. troops. Although she used the alias Thu Huong, the GIs usually called her "Hanoi Hannah" or "the Dragon Lady". Few were believed to have been influenced by her propaganda work and the soldiers often mocked her tactics, but they were also impressed by her military intelligence, especially when she mentioned the location of their own unit or listed specific U.S. casualties. After the war, she returned to live in Ho Chi Minh City with her husband where her voice was better known in the U.S. than in her own country.
Iraq and Afghanistan:
The United States took the lead in broadcast psychological operations thanks to its superior technology and its ability to use aircraft to transmit AM, FM, and shortwave radio from directly above target audiences. America had previously dropped battery- and crank-powered radios over third-world nations such as Haiti so that the populace could hear U.S. broadcasts. In the more recent conflicts in Iraq and Afghanistan, the United States distributed battery- and solar-powered satellite radios so that its story could be heard. The U.S. also dropped leaflets, informing Afghans about the attacks of 11 September and the Taliban, and spreading word in Iraq of the anti-Saddam radio programs that would be broadcast.
Iraq and Afghanistan:
In the 2001 invasion of Afghanistan, psychological operations (PSYOP) tactics were employed to demoralize the Taliban and to win the sympathies of the Afghan population. At least six EC-130E Commando Solo aircraft were used to jam local radio transmissions and transmit replacement propaganda messages even before the United States invaded. The primary PSYOP objectives were to counter adversarial propaganda, to discourage interference with humanitarian activities, to support objectives against state and non-state supporters and sponsors of terrorism, and to disrupt support for and relationships among terrorist organizations. In Afghanistan, the U.S. military has long conducted propaganda campaigns to try to sway public opinion against insurgents. As of 2013, the U.S. was teaching Afghan army units how to counter Taliban propaganda, especially with local radio broadcasts. The idea is to counter the Taliban-sponsored stations, so-called "Mullah Radios," which operate mainly in the tribal areas along the Pakistani border and broadcast propaganda that turns public opinion against foreign troops and the pro-Western Afghan government. Radio is key to reaching the majority of Afghans; with only limited access to television, newspapers, and the Internet, most depend on radio programs for information. During the Iraq War, the U.S. practiced "black propaganda" by creating false radio personalities who disseminated pro-American information while posing as supporters of Saddam Hussein. Radio Tikrit was an Iraqi radio station whose programs initially reflected strong support for Saddam Hussein and his government; its name is that of the Iraqi town where Saddam and other members of his government were born. The tone of Radio Tikrit's programs then changed dramatically: one show reportedly described Iraqis as being so poor that they had to sell their windows and doors, and another reportedly encouraged Iraqi soldiers to refuse the "orders of the tyrant" and to "be brave before it is too late," suggesting that the United States had infiltrated the station. The U.S. also had success with Voice of America efforts once censorship of the Iraqi media was lifted with Saddam's removal from power.
Voice of America:
Voice of America, The Voice, or VOA is the official external broadcast institution of the United States federal government, producing programming for radio, television, and the Internet outside the U.S. in 43 languages. VOA currently produces about 1,500 hours of news and feature programming each week for a global audience, in order "to promote freedom and democracy and to enhance understanding through multimedia communication of accurate, objective, and balanced news, information and other programming about America and the world to audiences overseas." Under § 501 of the Smith–Mundt Act of 1948, the Voice of America was forbidden to broadcast directly to American citizens; the prohibition was intended to protect the American public from propaganda actions by its own government, and it remained in force until July 2013, when it was repealed by the Smith–Mundt Modernization Act provision of the National Defense Authorization Act for 2013. On 12 July 1976, President Gerald Ford signed the principles of the VOA Charter into law: "The long-range interests of the United States are served by communicating directly with the peoples of the world by radio. To be effective, the Voice of America must win the attention and respect of listeners. These principles will therefore govern Voice of America (VOA) broadcasts:
1. VOA will serve as a consistently reliable and authoritative source of news. VOA news will be accurate, objective, and comprehensive.
2. VOA will represent America, not any single segment of American society, and will therefore present a balanced and comprehensive projection of significant American thought and institutions.
3. VOA will present the policies of the United States clearly and effectively, and will also present responsible discussions and opinion on these policies."
Today, the VOA operates shortwave radio transmitters and antenna farms at a single site in the United States, near Greenville, North Carolina. From 1942 to 1945, VOA was part of the Office of War Information; from 1945 to 1953, a function of the State Department; and in 1953 it was placed under the U.S. Information Agency. When the USIA was abolished in 1999, the VOA was placed under the Broadcasting Board of Governors (BBG), where control remains today. The BBG was established as a buffer to protect VOA and other U.S.-sponsored, non-military, international broadcasters from political interference.
Voice of America:
In 1994, Voice of America became the first broadcast-news organization to offer continuously updated programs on the Internet in English and 44 other languages, using more than 20,000 servers across 71 countries. Since many listeners in Africa and other areas still receive much of their information via radio and have only limited access to computers, VOA continues to maintain regular shortwave-radio broadcasts.
Radio Free Europe/Radio Liberty:
Radio Free Europe/Radio Liberty is a broadcaster funded by the United States Congress that provides news, information, and analysis to countries in Eastern Europe, Central Asia, and the Middle East "where the free flow of information is either banned by government authorities or not fully developed". RFE/RL is supervised by the Broadcasting Board of Governors, alongside Voice of America.
Radio Free Europe/Radio Liberty:
Founded as an anti-Communist propaganda source during the Cold War, RFE/RL was headquartered in Munich, Germany, from 1949 to 1995. In 1995 the headquarters moved to Prague in the Czech Republic, and operations have been significantly reduced since the end of the Cold War. In addition to the headquarters, the service maintains 20 local bureaus in countries throughout its broadcast region, as well as a corporate office in Washington, D.C. RFE/RL broadcasts in 28 languages to 21 countries, including Russia, Iran, Afghanistan, Pakistan, and Iraq.
Radio Free Europe/Radio Liberty:
RFE/RL grew out of a belief that the Cold War would eventually be fought by political rather than military means. American policymakers such as George Kennan and John Foster Dulles acknowledged that the Cold War was essentially a "war of ideas". The United States, acting through the Central Intelligence Agency, funded a long list of projects to counter the Communist appeal in Europe and the developing world. RFE/RL's mission was distinct from that of Voice of America: VOA was meant to be the voice of America, reflecting American foreign policy and disseminating world news from an official American viewpoint, whereas RFE/RL was to speak to the captive peoples of Communist countries and stimulate non-cooperation with their regimes.
**EIF4E**
EIF4E:
Eukaryotic translation initiation factor 4E, also known as eIF4E, is a protein that in humans is encoded by the EIF4E gene.
Structure and function:
Most eukaryotic cellular mRNAs are blocked at their 5'-ends with the 7-methyl-guanosine five-prime cap structure, m7GpppX (where X is any nucleotide). This structure is involved in several cellular processes including enhanced translational efficiency, splicing, mRNA stability, and RNA nuclear export. eIF4E is a eukaryotic translation initiation factor involved in directing ribosomes to the cap structure of mRNAs as well as other steps in RNA metabolism that require cap-binding. It is a 24-kD polypeptide that exists as both a free form and as part of the eIF4F pre-initiation complex. Many cellular mRNAs require eIF4E in order to be translated into protein. The eIF4E polypeptide is considered by some to be the rate-limiting component of the eukaryotic translation apparatus and is involved in the mRNA-ribosome binding step of eukaryotic protein synthesis.
Structure and function:
The other subunits of eIF4F are a 47-kD polypeptide, termed eIF4A, which possesses ATPase and RNA helicase activities, and a 220-kD scaffolding polypeptide, eIF4G. Some viruses cleave eIF4G in such a way that the eIF4E-binding site is removed, allowing the virus to translate its proteins without eIF4E. Some cellular proteins, most notably the heat shock proteins, also do not require eIF4E in order to be translated. Both viruses and these cellular mRNAs achieve this through an internal ribosome entry site in the RNA or through other translation mechanisms, such as those going through eIF3d. eIF4E also plays roles outside of translation, and other cap-binding proteins can engage in cap-dependent translation in an eIF4E-independent manner, including factors such as eIF3D, eIF3I, PARN, and the nuclear cap-binding complex CBC. Many of these mechanisms appear to depend on both specific features of the transcripts and the cellular context.
Structure and function:
eIF4E is found in the nucleus of many cell types across species, including yeast, Drosophila, and mammals. It localizes to nuclear bodies, a subset of which colocalize with PML nuclear bodies, and is additionally found diffusely in parts of the nucleoplasm in mammalian cells. In the nucleus, eIF4E plays well-defined roles in the export of selected RNAs, which contributes to its oncogenic phenotypes. This function relies on the ability of eIF4E to bind the m7G cap of RNAs and on the presence of the 50-nucleotide eIF4E sensitivity element (4ESE) in the 3'UTR of sensitive transcripts, although other elements may also play a role. This form of export relies on the CRM1/XPO1 pathway. Nuclear eIF4E has also been shown to play roles in other aspects of RNA processing, including m7G capping, alternative polyadenylation, and splicing. Increased nuclear accumulation of eIF4E, together with increased eIF4E-dependent RNA export, m7G capping, and splicing of selected transcripts, is characteristic of high-eIF4E AML patient samples. RNAs are selected for specific levels of RNA processing on the basis of USER codes, or cis-acting elements, within the RNAs; thus not all transcripts are sensitive to all levels of regulation (including translation). For its RNA export function, eIF4E directly binds the leucine-rich pentatricopeptide repeat protein (LRPPRC), which binds the dorsal surface of eIF4E and simultaneously the 4ESE RNA, thereby acting as an assembly platform for the RNA export complex. In the current model, LRPPRC then binds CRM1/XPO1 to engage the nuclear pore and traffic the 4ESE RNA to the cytoplasm. In all, the nuclear functions of eIF4E can have potent impacts on the proteome, allowing eIF4E both to re-write the message and to increase production of proteins, through increased cytoplasmic accumulation of transcripts due to export and, in some cases, an increased number of ribosomes per transcript. Its multiple roles in RNA processing require its association with RNAs through the m7G cap, and thus eIF4E can be considered a cap-chaperone protein.
Regulation:
Since eIF4E is an initiation factor of relatively low abundance, it can be controlled at multiple levels. Regulation of eIF4E may be achieved at the levels of transcription, RNA stability, phosphorylation, subcellular localization, and partner proteins.
a. Regulation of eIF4E by Gene Expression and RNA stability
The mechanisms responsible for eIF4E transcriptional regulation are not entirely understood. However, several reports suggest a correlation between myc levels and eIF4E mRNA levels during the cell cycle. The basis of this relationship was further established by the characterization of two myc-binding sites (CACGTG E-box repeats) in the promoter region of the eIF4E gene. This sequence motif is shared with other in vivo targets of myc, and mutations in the E-box repeats of eIF4E inactivated the promoter, thereby diminishing its expression.
Regulation:
Recent studies have shown that eIF4E levels can also be regulated at the transcriptional level by NFkB and C/EBP. Interestingly, transduction of primary AML cells with IkB-SR resulted not only in reduced eIF4E mRNA levels but also in re-localization of the eIF4E protein. eIF4E mRNA stability is also regulated by the HuR and TIAR proteins, and eIF4E gene amplification has been observed in a subset of head and neck and breast cancer specimens.
b. Regulation of eIF4E by Phosphorylation
Stimuli such as hormones, growth factors, and mitogens that promote cell proliferation also enhance translation rates by phosphorylating eIF4E. Although eIF4E phosphorylation and translation rates are not always correlated, consistent patterns of eIF4E phosphorylation are observed throughout the cell cycle: low phosphorylation during G0 and M phase, and high phosphorylation during G1 and S phase. This evidence is further supported by the crystal structure of eIF4E, which suggests that phosphorylation of serine 209 may increase the affinity of eIF4E for capped mRNA.
Regulation:
eIF4E phosphorylation is also related to its RNA export activity and its oncogenic potential, as first shown in cell lines.
c. Regulation of eIF4E by Partner Proteins
Assembly of the eIF4F complex is inhibited by proteins known as eIF4E-binding proteins (4E-BPs), small heat-stable proteins that block cap-dependent translation. Non-phosphorylated 4E-BPs interact strongly with eIF4E, thereby preventing translation, whereas phosphorylated 4E-BPs bind weakly to eIF4E and thus do not interfere with translation. Furthermore, binding of the 4E-BPs inhibits phosphorylation of Ser209 on eIF4E. Of note, 4E-BP1 is found in both the nucleus and the cytoplasm, indicating that it likely modulates the nuclear functions of eIF4E as well; a recent study showed that 4E-BP3 regulates eIF4E-dependent nucleo-cytoplasmic mRNA export. There are also many cytoplasmic regulators of eIF4E that bind to the same site as 4E-BP1.
Regulation:
Many other partner proteins have been found that can either stimulate or repress eIF4E activity, including homeodomain-containing proteins such as HoxA9, Hex/PRH, Hox11, Bicoid, Emx-2, and Engrailed 2. While HoxA9 promotes the mRNA export and translation activities of eIF4E, Hex/PRH inhibits eIF4E's nuclear functions. The RNA helicase DDX3 directly binds eIF4E, modulates translation, and has potential functions in P-bodies and mRNA export. RING domain proteins also bind eIF4E. The promyelocytic leukemia protein PML is a potent suppressor of both the nuclear RNA export and the oncogenic activities of eIF4E: the RING domain of PML directly binds eIF4E on its dorsal surface, suppressing its oncogenic activity, and a subset of PML and eIF4E nuclear bodies co-localize. RNA–eIF4E complexes are never observed in PML bodies, consistent with the observation that PML suppresses the m7G cap-binding function of eIF4E. Structural studies show that a related arenavirus RING finger protein, the Lassa fever Z protein, can similarly bind eIF4E on the dorsal surface. eIF4E nuclear entry is mediated by direct interaction with Importin 8, which associates with the m7G cap-binding site of eIF4E. Indeed, reducing Importin 8 levels reduces the oncogenic potential of eIF4E-overexpressing cells and their RNA export function. Importin 8 binding to the cap-binding site of eIF4E is competed by excess m7G cap analogues, as observed by NMR. Interestingly, eIF4E also stimulates the RNA export of Importin 8 RNA, thereby producing more Importin 8 protein; additional importins may play this role depending on cell type. Although an initial study suggested that the eIF4E transporter protein 4E-T (eIF4ENIF1) facilitated nuclear entry, later studies showed that this factor instead relocalizes eIF4E to cytoplasmic processing bodies (P-bodies) and represses translation. The potyvirus viral protein genome-linked (VPg) was found to bind eIF4E directly in its cap-binding site. VPg is covalently linked to the viral genomic RNA, and this interaction allows VPg to act as a "cap"; the potyvirus VPg has no sequence or structural homology to other VPgs, such as those of poliovirus. In vitro, VPg-RNA conjugates were translated with efficiency similar to m7G-capped RNAs, indicating that VPg binds eIF4E and engages the translation machinery, while free VPg (in the absence of conjugated RNA) successfully competes for all the cap-dependent activities of eIF4E in the cell, inhibiting translation and RNA export.
d. Regulation of eIF4E cellular localization
Several factors that regulate eIF4E functions also modulate its subcellular localization. For instance, overexpression of PRH/Hex leads to cytoplasmic retention of eIF4E, and thus to loss of its mRNA export activity and suppression of transformation. PML overexpression sequesters eIF4E in nuclear bodies with PML and decreases the number of eIF4E nuclear bodies containing RNA, which correlates with repressed eIF4E-dependent mRNA export and can be modulated by stress. Interestingly, overexpression of LRPPRC reduces eIF4E's co-localization with PML in the nucleus and increases the mRNA export activity of eIF4E. As discussed above, Importin 8 brings eIF4E into the nucleus, and its overexpression stimulates the RNA export and oncogenic transformation activities of eIF4E in cell lines.
The Role of eIF4E in Cancer:
The role of eIF4E in cancer was established after Lazaris-Karatzas et al. discovered that over-expressing eIF4E causes tumorigenic transformation of fibroblasts. Since this initial observation, numerous groups have recapitulated these results in different cell lines. As a result, eIF4E activity is implicated in several cancers, including cancers of the breast, lung, and prostate. Indeed, transcriptional profiling of metastatic human tumors has revealed a distinct metabolic signature in which eIF4E is consistently up-regulated. eIF4E levels are increased in many cancers, including acute myeloid leukemia (AML), multiple myeloma, infant ALL, diffuse large B-cell lymphoma, breast cancer, prostate cancer, and head and neck cancer, and this elevation generally correlates with poor prognosis. In many of these cancers, such as AML, eIF4E is enriched in nuclei, and several of eIF4E's activities are elevated in primary patient specimens, including capping, splicing, RNA export, and translation.
The Role of eIF4E in Cancer:
In the first clinical trials targeting eIF4E, the old antiviral drug ribavirin was used as an m7G cap competitor; it had shown substantial activity in cancer cell lines and animal models associated with dysregulated eIF4E. In the first trial ever to target eIF4E, ribavirin monotherapy was demonstrated to inhibit eIF4E activity, leading to objective clinical responses, including complete remissions, in AML patients. Interestingly, relocalization of eIF4E from the nucleus to the cytoplasm correlated with clinical remission, indicating the relevance of its nuclear activities to disease progression. Subsequent ribavirin trials in AML, in combination with antileukemic drugs, again showed objective clinical responses, including remissions, and molecular targeting of eIF4E. In these AML studies, clinical responses correlated with reduced nuclear eIF4E, and clinical relapse with the re-emergence of nuclear eIF4E and its RNA export activity. Other combination studies with ribavirin showed similarly promising results in head and neck cancer. Ribavirin impairs all of the activities of eIF4E examined to date (splicing, capping, RNA export, and translation). Thus, eIF4E has been successfully targeted therapeutically in humans; however, drug resistance to ribavirin is an emerging obstacle to long-term disease control. eIF4E has also been targeted with antisense oligonucleotides, which were very potent in mouse models of prostate cancer but provided no clinical benefit in human monotherapy trials, likely because eIF4E levels were reduced less efficiently in humans than in mice. There is also an allosteric inhibitor of eIF4E, which binds between the cap-binding site and the dorsal surface, that is used experimentally.
FMRP represses translation through EIF4E binding:
Fragile X mental retardation protein (FMRP), the product of the FMR1 gene, regulates translation of specific mRNAs through its binding of eIF4E. FMRP acts by binding CYFIP1, which directly binds eIF4E at a domain structurally similar to those found in 4E-BPs, including EIF4EBP1, EIF4EBP2, and EIF4EBP3. The FMRP/CYFIP1 complex binds in such a way as to prevent the eIF4E–eIF4G interaction, which is necessary for translation to occur. The FMRP/CYFIP1/eIF4E interaction is strengthened by the presence of mRNA(s). In particular, BC1 RNA allows for an optimal interaction between FMRP and CYFIP1. BC1 is a non-translated dendritic RNA that binds FMRP to allow its association with a specific target mRNA, and it may function to regulate FMRP–mRNA interactions at synapses by recruiting FMRP to the appropriate mRNA. In addition, FMRP may recruit CYFIP1 to specific mRNAs in order to repress their translation. The FMRP–CYFIP1 translational inhibitor is regulated by neuronal stimulation: increased synaptic stimulation results in the dissociation of CYFIP1 from eIF4E, allowing the initiation of translation.
Interactions:
EIF4E has been shown to interact with numerous partners. Direct interactors include PML, the arenavirus Z protein, Importin 8, the potyvirus VPg protein, LRPPRC, RNMT, and others.
**Album cover**
Album cover:
An album cover (also referred to as album art) is the front packaging art of a commercially released studio album or other audio recording. The term can refer to the printed paperboard covers typically used to package sets of 10 in (25 cm) and 12 in (30 cm) 78-rpm records, singles and sets of 12 in (30 cm) LPs, and sets of 45-rpm records (either in several connected sleeves or a box); to the front-facing panel of a cassette J-card or CD package; and, increasingly, to the primary image accompanying a digital download of an album or of its individual tracks.
Album cover:
In the case of all types of tangible records, it also serves as part of the protective sleeve.
Early history:
Around 1910, 78-rpm records replaced the phonograph cylinder as the medium for recorded sound. The 78-rpm records were issued in both 10- and 12-inch diameters and were usually sold separately, in brown paper or cardboard sleeves that were sometimes plain and sometimes printed to show the producer's or retailer's name. These sleeves were invariably made of acidic paper, which limits their long-term preservation. Generally the sleeves had a circular cutout allowing the record label to be seen. Records could be laid on a shelf horizontally or stood upright on an edge, but because of their fragility, many broke in storage.
Early history:
German record company Odeon pioneered the "album" in 1909 when it released Tchaikovsky's Nutcracker Suite on four double-sided discs in a specially designed package. The practice of issuing albums does not seem to have been taken up by other record companies for many years.
Early history:
Beginning in the 1920s, bound collections of empty sleeves with a plain paperboard or leather cover were sold as "record albums" (similar to a photograph album) that customers could use to store their records. (The name "record album" was printed on some covers.) These empty albums were sold in both 10- and 12-inch sizes. The covers of these bound books were wider and taller than the records inside, allowing the record album to be placed on a shelf upright, like a book, and suspending the fragile records above the shelf, protecting them.
Early history:
Starting in the 1930s, record companies began issuing collections of 78-rpm records by one performer or of one type of music in specially assembled collections. These albums of several 78-rpm records could include a collection of popular songs related by either performer or style, or extended-length classical music, including complete symphonies.
Early history:
In 1938, Columbia Records hired Alex Steinweiss as its first art director. He is credited with inventing the concept of album covers and cover art, replacing the plain covers used before. After his initial efforts at Columbia, other record companies followed his lead. By the late 1940s, record albums for all the major companies featured their own colorful paper covers in both 10- and 12-inch sizes. Some featured reproductions of classic art while others utilized original designs.
Early history:
When the 10- and 12-inch long-playing records (LPs) came along in 1948, and box sets of 45-rpm records soon followed (see gramophone record), the name "album" was used for the new format of collections, and the creation of artistic original album covers continued.
Formats:
From the 1950s through to the 1980s, the 12" LP record and the 45 rpm record became the major formats for the distribution of popular music. The LP format remains in use for occasional new releases, though other formats have largely supplanted it. The size of the typical cardboard LP sleeve cover is 12.375 in (31.43 cm) square.
Formats:
From the mid-1990s, the compact disc (CD) was the most common form of physically distributed music product. Packaging formats vary, including the jewel case (since 1982 the most popular form of CD packaging) and the cardboard-and-plastic combination commonly known as a Digipak (a popular alternative in recent years, though it remains less common than the jewel case owing to higher manufacturing costs and lower durability). Typically the album cover component of these packages is approximately 4.75 in (12.1 cm) square.
Formats:
In the 1980s and early 1990s, CDs were often sold as jewel cases enclosed within cardboard longboxes measuring 12 in (30 cm) by 6 in (15 cm), which provided more space for album artwork than the jewel cases they contained, but were seen as harmful to the environment since the cardboard box was typically discarded by the buyer soon after purchase. Major record labels in the United States stopped distributing CDs in longboxes as of April 1, 1993.
Design:
Album covers are one of the ways in which first impressions shape an audience's perception of a musician or band and of the album's contents. An album cover's design can also influence how an audience forms an opinion of the artist and their music. Covers are visualized in various ways; for example, some artists choose to put a photograph of themselves on the cover, which becomes one of the factors shaping how listeners perceive the band, the musician, and the music.
Design:
The album cover eventually became an important part of the culture of music. Under the influence of designers like Bob Cato, who at various stages in his long music career was vice president of creative services at both Columbia Records and United Artists, album covers became renowned for being a marketing tool and an expression of artistic intent. Album art has also been discussed as an important postwar cultural expression. During the early 1960s, the Beatles' With the Beatles, Bob Dylan's The Times They Are a-Changin' and the Rolling Stones' self-titled debut album each contained a cover photograph designed to further the musical artist's public image. Author Peter Doggett also highlights the cover of Otis Redding's Otis Blue, containing a photo of a young white woman, as a design that "played a dual role: she represented the transcendent power of the music, and obscured the race of its creator." The standard portrait-based LP cover was further challenged over 1965–66 by Dylan's Bringing It All Back Home, through the inclusion of symbolic artefacts around the singer; the artificially stretched faces of the Beatles shown on their Rubber Soul album; and the darkened hues applied to the Rolling Stones on Aftermath. Gatefold covers (a folded double cover) and inserts, often with lyric sheets, made the album cover a desirable work in its own right. Notable examples are the Beatles' Sgt. Pepper's Lonely Hearts Club Band, which had cut-out inserts, printed lyrics, and a gatefold sleeve, even though it was a single album; the Rolling Stones' Exile on Main Street, which had a gatefold and a series of 12 perforated postcards as inserts (taken by photographer Norman Seeff); and Pink Floyd's The Dark Side of the Moon, which had a gatefold, lyrics, no title on the sleeve, and poster and sticker inserts. The Band's 1970 release Stage Fright, which included a photograph by Seeff as a poster insert, is an early example of LP artwork quickly becoming a collector's item. The move to the small (less than 1/4 the size of a record) CD format lost that impact, although attempts have been made to create more desirable packaging for the CD format, for example the reissue of Sgt. Pepper, which had a cardboard box and booklet, or the use of oversized packaging.
Design:
The importance of design was such that some cover artists specialised or gained fame through their work. Such people include the design team Hipgnosis, through their work on Pink Floyd albums and others; Roger Dean, famous for his Yes and Greenslade covers; Cal Schenkel, for Captain Beefheart's Trout Mask Replica and Frank Zappa's We're Only in It for the Money.
Design:
The talents of many photographers and illustrators from both inside and outside the music industry have been used to produce a vast array of memorable LP/CD covers. Photographer Mick Rock produced some of the most iconic album covers of the 1970s, including Queen's Queen II (recreated for their classic music video "Bohemian Rhapsody"), Syd Barrett's The Madcap Laughs, and Lou Reed's Transformer. From 1972 to 1975, photographer Norman Seeff was creative director at United Artists, and in addition to his many cover photographs (The Band, Kiss's Hotter than Hell, Joni Mitchell's Hejira, etc.), he art-directed dozens of album covers, including Exile on Main Street, many of which received Grammy Award nominations. In addition to the examples mentioned previously, a number of world-renowned graphic artists and illustrators such as Robert Crumb (Big Brother & the Holding Company), Shepard Fairey (Johnny Cash), Howard Finster (R.E.M., Talking Heads), Frank Frazetta (Molly Hatchet), Derek Riggs (Iron Maiden), H. R. Giger (Emerson, Lake & Palmer, Debbie Harry), Gottfried Helnwein (Marilyn Manson), Al Hirschfeld (Aerosmith), Ken Kelly (Kiss), Mati Klarwein (Santana, Miles Davis), Rex Ray (David Bowie), Jamie Reid (The Sex Pistols), Ed Repka (Megadeth), Norman Rockwell (Mike Bloomfield and Al Kooper), John Van Hamersveld (The Rolling Stones), Alberto Vargas (The Cars), and Andy Warhol (The Velvet Underground, The Rolling Stones) have applied their talents to memorable music packages. A number of record covers have also used images licensed (or borrowed from the public domain) from artists of bygone eras. Well-known examples include the cover of Derek and the Dominos' Layla and Other Assorted Love Songs (from the painting "La Fille au Bouquet" by French painter and sculptor Émile Théodore Frandsen de Schomberg); "The Downfall of Icarus" by Genisson on the cover of the first album by Renaissance; Bosch on the cover of Deep Purple; Bruegel on the cover of Fleet Foxes; the cover of Kansas's debut album, adapted from a mural by painter John Steuart Curry; Norman Rockwell's cowboy (Pure Prairie League); and Coldplay's Viva La Vida, which features Eugène Delacroix's painting Liberty Leading the People (a favorite in the Louvre) with the words "VIVA LA VIDA" brushed on top in white paint.
Design:
Legends from photography and video/film who have also produced record cover images include Drew Struzan (Black Sabbath, Alice Cooper, Iron Butterfly, The Beach Boys, and others), Annie Leibovitz (John Lennon, Bruce Springsteen, Patti Smith), Richard Avedon (Whitney Houston, Teddy Pendergrass), David LaChapelle (No Doubt, Elton John), Anton Corbijn (U2, The Killers, Depeche Mode), Karl Ferris (Jimi Hendrix, Donovan, The Hollies), Robert Mapplethorpe (Patti Smith, Peter Gabriel), Francesco Scavullo (Diana Ross, Edgar Winter), and David Michael Kennedy, among others.
Design:
A number of artists and bands feature members who are, in their own right, accomplished illustrators, designers, and photographers, and whose talents are exhibited in the artwork they produced for their own recordings. Examples include Jimmy Page (Led Zeppelin IV), Chris Mars (the Replacements' Pleased to Meet Me and others), Marilyn Manson (Lest We Forget...), Michael Stipe (R.E.M.'s Accelerate), Thom Yorke (credited as "Tchocky" on various Radiohead records), Michael Brecker (Ringorama), Freddie Mercury (Queen I), Lynsey De Paul (Surprise), John Entwistle (The Who by Numbers), Graham Coxon (13 and most of his solo albums), Mike Shinoda (various Linkin Park albums), Joni Mitchell (Miles of Aisles and several others, as well as So Far for Crosby, Stills, Nash & Young), M.I.A. (credited variously on Elastica's The Menace and her own records), Captain Beefheart, Cat Stevens (Mona Bone Jakon, Tea for the Tillerman, and Teaser and the Firecat), Mika (all albums released to date), Bob Dylan (Music from Big Pink for The Band, as well as his own Self Portrait and Planet Waves), and John Lennon (Walls and Bridges).
Design:
Reggae is one genre whose album covers have drawn criticism. Some reggae artists feel that the way they are displayed on their own album covers does not accurately describe themselves or their culture. The stereotypical Rasta lifestyle depicted on many reggae album covers is displayed that way because it is what white audiences seemed to appreciate most. This version of the reggae artist is what many people notice and what sets the genre apart visually, but such covers do not accurately represent the core values of typical people in Jamaica; artists tolerate the representation because they know audiences are familiar with the stereotypical Rasta depiction. These covers tend to display inauthentic versions of the artists' own ideas of style and sexuality and do not accurately portray "Uptown" Jamaica.
Design:
Album cover art was the subject of a 2013 documentary film, The Cover Story: Album Art, by Eric Christensen, a San Francisco Bay Area record collector. The physical design of album covers has been the subject of creative innovation. Ogden's Nut Gone Flake by the Small Faces was originally in a circular metal tin, and Happy to Meet – Sorry to Part by Horslips was in an octagonal package. Anyway by Family was originally issued in an opaque plastic package through which a design (a Leonardo sketch) could be seen. Magical Mystery Tour by the Beatles was first released as a double EP with a booklet between the records. Sgt. Pepper contained a cardboard sheet of images, and The Beatles (often referred to as the White Album) contained four large glossy photos of the individual Beatles along with a poster-sized collage. Live at Leeds by The Who also contained a generous supply of posters and printed material. Led Zeppelin III had a front cover that contained a revolving disc which brought different images into view through small cut-outs in the outer sleeve. A similar effect was used for the band's later album Physical Graffiti with cut-outs of the windows of a brownstone building. The original issue of Sticky Fingers by the Rolling Stones had an actual zipper incorporated into the picture of the crotch area of a pair of jeans. The Velvet Underground and Nico album had a Warhol-designed cardboard banana on the cover that could be peeled back. The record company Vertigo had a black-and-white design on the centre label that produced a hypnotic optical effect when the disc revolved on the turntable.
Packaging:
The album cover is a component of the overall packaging of an album. Especially in the case of vinyl records with paperboard sleeves, these packages are prone to wear and tear, although some wear does also occur on covers contained within plastic cases. A variety of treatments can be applied to improve both their appearance and durability, such as clear plastic wrap, and many products have been sold for storing vinyl albums, often clear plastic sleeves.
Packaging:
The surface of a vinyl record is readily damaged, so aside from the outer paperboard sleeve there is usually an inner protective sleeve to guard against dust and handling. This is shaped to slide readily within the outer cover. The inner sleeve is typically thin white paper, either plain or printed with information on other recordings available from the same company, or a paper sleeve supporting a thin plastic bag. These quite often have a circular cutout so that the record label can be read without directly handling the record, though when the inner sleeve is printed with lyrics, which became quite common, there is usually no hole. Decca Records used a system of colour-coding on these sleeves, in which blue denoted a stereophonic recording and red a monophonic one (the mono record players of the time were not always compatible with stereo records). The system was begun in the 1960s to reduce packaging costs.
Packaging:
Packaging formats for compact discs widened the variety of presentations as well, even as the size of the CD meant that album covers were no longer so large.
Packaging:
Besides the practicality of identifying specific records, album covers serve to advertise the musical contents of the LP through graphic design, photography, and/or illustration. An album cover normally carries the artist's name, sometimes in logo form, and the album title. Occasionally, and more commonly on historical vinyl records, the cover may include a reference number, the label's branding, and possibly a track listing. Other information is seldom included on the cover and is usually contained on the rear or interior of the packaging, such as a track listing together with a more detailed list of those involved in making the record: band members, guest performers, engineers, and producer. On the spine of the package, the artist, title, and reference number are usually repeated so that albums can be identified while tightly packed on a shelf.
Packaging:
Parental advisory labels are warning labels required on album covers when the music contains explicit content such as vulgar language. The labels have been controversial as a means of keeping underage audiences away from such content, and there are competing theories about their effect, notably the "forbidden fruit" and "tainted fruit" theories. The "forbidden fruit" theory holds that when a child sees the parental advisory label on an album cover, they will be more likely to listen to it because the label increases the music's attractiveness; many adolescents respond this way, using explicit music to rebel against their parents, to feel more mature, or to appear cooler to their friends, even when they are far too young for that kind of material. The "tainted fruit" theory holds instead that a child will see the label and know to avoid the content because it is inappropriate for their age; these children see the label and simply pass over the album because they know it is not made for them. The Recording Industry Association of America (RIAA) introduced the warning label, which is now required on any explicit release, but the RIAA cannot actually control whether adolescents listen to the music, and as yet there is no way to fully do so.
Album covers in the age of downloads and streaming:
In August 2008, album cover designer Peter Saville suggested that the album cover was dead, yet album art is still considered a vital part of the listening experience by many. MP3, WMA, and M4A (Apple's format) music files can all contain embedded album artwork (called cover images or simply covers), typically in JPEG format. One digital solution is the iTunes LP format for interactive album artwork, introduced by Apple in 2009. The resolution of digital album covers should be no lower than 800x800 pixels (a 1:1 aspect ratio); lower resolutions may not look good on newer devices.
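As a concrete illustration of how such embedded cover images work in practice, the minimal sketch below adds a front-cover JPEG to an MP3's ID3 tag using the third-party mutagen library; the file names are hypothetical examples, and the MP3 is assumed to already carry an ID3 tag.

```python
# Minimal sketch: embed a JPEG front cover into an MP3's ID3 tag with mutagen.
from mutagen.id3 import ID3, APIC

AUDIO_PATH = "track01.mp3"        # hypothetical audio file with an existing ID3 tag
COVER_PATH = "cover_800x800.jpg"  # hypothetical cover, at least 800x800 px as noted above

tags = ID3(AUDIO_PATH)            # load the file's existing ID3 tag

with open(COVER_PATH, "rb") as img:
    tags.add(APIC(
        encoding=3,               # UTF-8 for the description text
        mime="image/jpeg",        # covers are commonly stored as JPEG
        type=3,                   # ID3 picture type 3 = front cover
        desc="Cover",
        data=img.read(),
    ))

tags.save()                       # write the updated tag back into the MP3
```

Other container formats (WMA, M4A) store the image in their own tag structures, but the idea is the same: the cover travels inside the audio file itself, so players can display it without any separate image file.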
Album covers in the age of downloads and streaming:
Some artists have used Internet technology to generate even more cover art. For instance, Nine Inch Nails initially released its album The Slip as a free download on the band's website, attaching separate but thematically connected images to each individual track.
Banned covers:
Some album covers have been banned for depicting violence, nudity, or other offensive imagery. For instance, the cover of Guns N' Roses' 1987 album Appetite for Destruction depicted a robot rapist about to be punished by a metal avenger, and that of Kanye West's 2010 album My Beautiful Dark Twisted Fantasy depicted West naked, straddled by a phoenix with her bare breasts and buttocks showing.
**2,3-Bis(acetylmercaptomethyl)quinoxaline**
2,3-Bis(acetylmercaptomethyl)quinoxaline:
2,3-Bis(acetylmercaptomethyl)quinoxaline is an antiviral agent that inhibits poliovirus RNA synthesis in vitro and in vivo and inhibits human herpesvirus 1 multiplication in vitro. It does not interfere with attachment, penetration, or DNA synthesis, but interrupts a late stage in virus assembly and/or maturation.
**Mainspring**
Mainspring:
A mainspring is a spiral torsion spring of metal ribbon—commonly spring steel—used as a power source in mechanical watches, some clocks, and other clockwork mechanisms. Winding the timepiece, by turning a knob or key, stores energy in the mainspring by twisting the spiral tighter. The force of the mainspring then turns the clock's wheels as it unwinds, until the next winding is needed. The adjectives wind-up and spring-powered refer to mechanisms powered by mainsprings, which also include kitchen timers, metronomes, music boxes, wind-up toys and clockwork radios.
Modern mainsprings:
A modern watch mainspring is a long strip of hardened and blued steel, or specialised steel alloy, 20–30 cm long and 0.05–0.2 mm thick. The mainspring in the common 1-day movement is calculated to let the watch run for 36 to 40 hours, i.e. 24 hours between daily windings with a power reserve of 12 to 16 hours in case the owner is late winding the watch. This is the normal standard for hand-wound as well as self-winding watches. 8-day movements, used in clocks meant to be wound weekly, provide power for at least 192 hours but use longer mainsprings and bigger barrels. Clock mainsprings are similar to watch springs, only larger.
Modern mainsprings:
Since 1945, carbon steel alloys have been increasingly superseded by newer special alloys (iron, nickel and chromium with the addition of cobalt, molybdenum, or beryllium), and also by cold-rolled alloys ('structural hardening'). Known to watchmakers as 'white metal' springs (as opposed to blued carbon steel), these are stainless and have a higher elastic limit. They are less subject to permanent bending (becoming 'tired') and there is scarcely any risk of their breaking. Some of them are also practically non-magnetic.
Modern mainsprings:
In their relaxed form, mainsprings are made in three distinct shapes:
Spiral coiled: coiled in the same direction throughout, in a simple spiral.
Semi-reverse: the outer end of the spring is coiled in the reverse direction for less than one turn (less than 360°).
Reverse (resilient): the outer end of the spring is coiled in the reverse direction for one or more turns (exceeding 360°).
The semi-reverse and reverse types provide extra force at the end of the running period, when the spring is almost out of energy, to keep the timepiece running at a constant rate to the end.
Operation:
The mainspring is coiled around an axle called the arbor, with the inner end hooked to it. In many clocks, the outer end is attached to a stationary post. The spring is wound up by turning the arbor, and after winding its force turns the arbor the other way to run the clock. The disadvantage of this open spring arrangement is that while the mainspring is being wound, its drive force is removed from the clock movement, so the clock may stop. This type is often used on alarm clocks, music boxes and kitchen timers where it doesn't matter if the mechanism stops while winding. The winding mechanism always has a ratchet attached, with a pawl (called by clockmakers the click) to prevent the spring from unwinding.
Operation:
In the form used in modern watches, called the going barrel, the mainspring is coiled around an arbor and enclosed inside a cylindrical box called the barrel which is free to turn. The spring is attached to the arbor at its inner end, and to the barrel at its outer end. The attachments are small hooks or tabs, which the spring is hooked to by square holes in its ends, so it can be easily replaced.
Operation:
The mainspring is wound by turning the arbor, but it drives the watch movement through the barrel; this arrangement allows the spring to continue powering the watch while it is being wound. Winding the watch turns the arbor, which tightens the mainspring, wrapping it closer around the arbor. The arbor has a ratchet attached to it, with a click to prevent the spring from turning the arbor backward and unwinding. After winding, the arbor is stationary and the pull of the mainspring turns the barrel, which has a ring of gear teeth around it. This meshes with one of the clock's gears, usually the center wheel pinion, and drives the wheel train. The barrel usually rotates once every 8 hours, so the common 40-hour spring requires 5 turns to unwind completely.
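As a quick check of the arithmetic above (our own illustration, not from a horology source), the number of barrel turns follows directly from the running time and the barrel's rotation period:

```python
# Back-of-the-envelope check for the going barrel described above:
# a 40-hour mainspring driving a barrel that rotates once every 8 hours.
hours_per_barrel_turn = 8   # barrel turns once every 8 hours
running_time_hours = 40     # full-wind running time of a common 1-day movement

barrel_turns = running_time_hours / hours_per_barrel_turn
print(f"Turns to unwind completely: {barrel_turns:.0f}")  # prints 5
```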
Operation:
Hazards The mainspring contains a lot of energy. If precautions are not taken during disassembly the spring can release suddenly, causing potentially serious injury. Before servicing, mainsprings are “let down” gently by pulling the click back while holding the winding key, allowing the spring to slowly unwind. However, even in their “let down” state, mainsprings contain dangerous residual tension. Watchmakers and clockmakers use a tool called a "mainspring winder" to safely install and remove them. Large mainsprings in clocks are immobilized by "mainspring clamps" before removal.
History:
Mainsprings appeared in the first spring-powered clocks, in 15th-century Europe. The mainspring replaced the weight hanging from a cord wrapped around a pulley, which was the power source used in all previous mechanical clocks. Around 1400 coiled springs began to be used in locks, and many early clockmakers were also locksmiths. Springs were applied to clocks to make them smaller and more portable than previous weight-driven clocks, evolving into the first pocketwatches by 1600. Many sources erroneously credit the invention of the mainspring to the Nuremberg clockmaker Peter Henlein (also spelled Henle, or Hele) around 1511. However, many references in 15th-century sources to portable clocks 'without weights', and at least two surviving examples, show that spring-driven clocks existed by the early years of that century. The oldest surviving clock powered by a mainspring is the Burgunderuhr (Burgundy Clock), an ornate, gilt chamber clock, currently at the Germanisches Nationalmuseum in Nuremberg, whose iconography suggests that it was made around 1430 for Philip the Good, Duke of Burgundy.
The first mainsprings were made of steel without tempering or hardening processes. They didn't run very long, and had to be wound twice a day. Henlein was noted for making watches that would run 40 hours between windings. The 18th-century methods of making mainsprings are described by Berthoud and Blakey.
Constant force from a spring A problem throughout the history of spring-driven clocks and watches is that the force (torque) provided by a spring is not constant, but diminishes as the spring unwinds (see graph). However, timepieces have to run at a constant rate in order to keep accurate time. Timekeeping mechanisms are never perfectly isochronous, meaning their rate is affected by changes in the drive force. This was especially true of the primitive verge and foliot type used before the advent of the balance spring in 1657. So early clocks slowed down during their running period as the mainspring ran down, causing inaccurate timekeeping. Two solutions to this problem appeared in the early spring-powered clocks in the 15th century: the stackfreed and the fusee.
Stackfreed The stackfreed was an eccentric cam mounted on the mainspring arbor, with a spring-loaded roller that pressed against it. The cam had a 'snail' shape so that early in the running period when the mainspring was pushing strongly, the spring would bear against the wide part of the cam, providing a strong opposing force, while later in the running period as the force of the mainspring decreased, the spring would bear against the narrower part of the cam and the opposing force would also decrease. The stackfreed added a lot of friction and probably reduced a clock's running time substantially; it was only used in some German timepieces and was abandoned after about a century.
History:
Fusee The fusee was a much longer-lasting innovation. This was a cone-shaped pulley that was turned by a chain wrapped around the mainspring barrel. Its curving shape continuously changed the mechanical advantage of the linkage to even out the force of the mainspring as it ran down. Fusees became the standard method of getting constant torque from a mainspring. They were used in most spring-driven clocks and watches from their first appearance until the 19th century when the going barrel took over, and in marine chronometers until the 1970s.
History:
Stopwork Another early device which helped even out the spring's force was stopwork or winding stops, which prevented the mainspring from being wound up all the way, and prevented it from unwinding all the way. The idea was to use only the central part of the spring's 'torque curve', where its force was more constant. The most common form was the Geneva stop or 'Maltese cross'. Stopwork isn't needed in modern watches.
History:
Remontoire A fourth device used in a few precision timepieces was the remontoire. This was a small secondary spring or weight which powered the timepiece's escapement, and was itself rewound periodically by the mainspring. This isolated the timekeeping element from the varying mainspring force.
History:
Going barrel The modern going barrel, invented in 1760 by Jean-Antoine Lépine, produces a constant force by simply using a longer mainspring than needed, and coiling it under tension in the barrel. In operation, only a few turns of the spring at a time are used, with the remainder pressed against the outer wall of the barrel. Mathematically, the tension creates a 'flat' section in the spring's 'torque curve' (see graph) and only this flat section is used. In addition, the outer end of the spring is often given a 'reverse' curve, so it has an 'S' shape. This stores more tension in the spring's outer turns where it is available toward the end of the running period. The result is that the barrel provides approximately constant torque over the watch's designed running period; the torque doesn't decline until the mainspring has almost run down.
History:
The built-in tension of the spring in the going barrel makes it hazardous to disassemble even when not wound up.
Broken mainsprings:
Because they are subjected to constant stress cycles, up until the 1960s mainsprings generally broke from metal fatigue long before other parts of the timepiece. They were considered expendable items. This often happened at the end of the winding process, when the spring is wound as tightly as possible around the arbor, with no space between the coils. When manually winding, it is easy to reach this point unexpectedly and put excessive pressure on the spring. Another cause was temperature changes. If a watch was fully wound in the evening and the temperature dropped at night, without any slack between the coils the thermal contraction of the long spring could break it loose from its attachments at one end. In earlier times, watch repairers noted that changes in the weather brought in a rash of watches with broken mainsprings. Broken mainsprings were the largest cause of watch repairs until the 1960s. Since then, the improvements in spring metallurgy mentioned above have made broken mainsprings rare.
Broken mainsprings:
'Knocking' or 'banking' Even if the mainsprings were not prone to breakage, too much force during winding caused another problem in early watches, called 'knocking' or 'banking'. If very little slack was left in the spring after winding ('overwinding'), the pressure of the last turn of the winding knob put the end of the spring under excessive tension, which was locked in by the last click of the ratchet. So the watch ran with excessive drive force for several hours, until the extra tension in the end of the spring was relieved. This made the balance wheel rotate too far in each direction, causing the impulse pin on the wheel to knock against the back of the fork horns. This caused the watch to gain time, and could break the impulse pin. In older watches this was prevented with 'stopwork'. In modern watches this is prevented by designing the 'click' with some 'recoil' (backlash), to allow the arbor to rotate backward after winding by about two ratchet teeth, enough to remove excess tension.
Broken mainsprings:
Motor or safety barrel Around 1900, when broken watchsprings were more of a problem, some pocketwatches used a variation of the going barrel called the motor barrel or safety barrel. Mainsprings usually broke at their attachment to the arbor, where bending stresses are greatest. When the mainspring broke, the outer part recoiled and the momentum spun the barrel in the reverse direction. This applied great force to the delicate wheel train and escapement, often breaking pivots and jewels.
Broken mainsprings:
In the motor barrel, the functions of the arbor and barrel were reversed from the going barrel. The mainspring was wound by the barrel, and turned the arbor to drive the wheel train. Thus if the mainspring broke, the destructive recoil of the barrel would be applied not to the wheel train but to the winding mechanism, which was robust enough to take it.
Broken mainsprings:
Safety pinion A safety pinion was an alternate means of protection, used with the going barrel. In this, the center wheel pinion, which the barrel gear engages, was attached to its shaft with a reverse screw thread. If the spring broke, the reverse recoil of the barrel, instead of being passed on to the gear train, would simply unscrew the pinion.
The myth of 'overwinding':
Watches and clocks are often found stopped with the mainspring fully wound, which led to a myth that winding a spring-driven timepiece all the way up damages it. Several problems can cause this type of breakdown, but it is never due to "overwinding", as timepieces are designed to handle being wound up all the way. One cause of "overwinding" is dirt. Watch movements require regular cleaning and lubrication, and the normal result of neglecting to get a watch cleaned is a watch stopped at full wind. As the watch movement collects dirt and the oil dries up, friction increases, so that the mainspring doesn't have the force to turn the watch at the end of its normal running period, and it stops prematurely. If the owner continues to wind and use the watch without servicing, eventually the friction force reaches the 'flat' part of the torque curve, and quickly a point is reached where the mainspring doesn't have the force to run the watch even at full wind, so the watch stops with the mainspring fully wound. The watch needs service, but the problem is caused by a dirty movement or other defect, not "overwinding".
The myth of 'overwinding':
Another common cause of a watch stopped at full wind is a broken balance staff: if a watch is dropped, the balance staff can break, and the watch can no longer run even when the mainspring is fully wound.
Self-winding watches and 'unbreakable' mainsprings:
Self-winding or automatic watches, introduced widely in the 1950s, use the natural motions of the wrist to keep the mainspring wound. A semicircular weight, pivoted at the center of the watch, rotates with each wrist motion. A winder mechanism uses rotations in both directions to wind the mainspring.
Self-winding watches and 'unbreakable' mainsprings:
In automatic watches, motion of the wrist could continue winding the mainspring until it broke. This is prevented with a slipping clutch device. The outer end of the mainspring, instead of attaching to the barrel, is attached to a circular expansion spring called the bridle that presses against the inner wall of the barrel, which has serrations or notches to hold it. During normal winding the bridle holds by friction to the barrel, allowing the mainspring to wind. When the mainspring reaches its full tension, its pull is stronger than the bridle. Further rotation of the arbor causes the bridle to slip along the barrel, preventing further winding. In watch company terminology, this is often misleadingly referred to as an 'unbreakable mainspring'.
'Tired' or 'set' mainsprings:
After decades of use, mainsprings in older timepieces are found to deform slightly and lose some of their force, becoming 'tired' or 'set'. This condition is mostly found in springs in barrels. It causes the running time between windings to decrease. During servicing the mainspring should be checked for 'tiredness' and replaced if necessary. The British Horological Institute suggests these tests: In a mainspring barrel, when unwound and relaxed, most of a healthy spring's turns should be pressed flat against the wall of the barrel, with only 1 or 2 turns spiralling across the central space to attach to the arbor. If more than 2 turns are loose in the center, the spring may be 'tired'; with 4 or 5 turns it definitely is 'tired'.
'Tired' or 'set' mainsprings:
When removed from the barrel, if the diameter of the relaxed spring lying on a flat surface is less than 2½ times the barrel diameter, it is 'tired'.
Power reserve indicator:
Some high-grade watches have an extra dial on the face indicating how much power is left in the mainspring, often graduated in hours the watch has left to run. Since both the arbor and the barrel turn, this mechanism requires a differential gear that measures how far the arbor has been turned, compared to the barrel.
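The differential idea can be sketched numerically (a hypothetical model of ours, not a description of any actual caliber): the indicated reserve is proportional to the arbor's turns minus the barrel's turns since the last full unwind.

```python
# Hypothetical model of a power-reserve indicator: the differential gear
# effectively subtracts barrel turns (unwinding) from arbor turns (winding);
# the 8 hours per net turn matches the going-barrel figures quoted earlier.
def power_reserve_hours(arbor_turns: float, barrel_turns: float,
                        hours_per_turn: float = 8.0) -> float:
    return max(0.0, (arbor_turns - barrel_turns) * hours_per_turn)

# Wound 5 turns, barrel has since made 2 turns: 24 hours of reserve remain.
print(power_reserve_hours(arbor_turns=5, barrel_turns=2))  # 24.0
```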
Unusual forms of mainspring:
A mainspring is usually a coiled metal spring, however there are exceptions: The wagon spring clock: During a brief time in American clockmaking history, coilable spring steel was not available in the United States, and inventive clockmakers built clocks powered by a stack of leaf springs, similar to what has traditionally served as a suspension spring for wagons.
Other spring types are conceivable and have been used occasionally on experimental timepieces.
Occasionally one finds an odd clock with a spring made of material other than metal, such as synthetic elastic materials.
**A feather in your cap**
A feather in your cap:
The term a feather in your cap is an English idiomatic phrase believed to have derived from the general custom in some cultures of a warrior adding a new feather to their headgear for every enemy slain, or in other cases from the custom of establishing the success of a hunter as being the first to bag a game bird by plucking off the feathers of that prey and placing them in the hat band.
A feather in your cap:
The phrase today has altered to a more peaceful allusion, where it is used to refer to any laudable success or achievement by an individual that may help that person in the future.
Traditions involving feathers in headdress:
Examples of the use of feathers related to the killing of enemy combatants can be found in the traditional cultures of the Meunitarris of Alberta and the Mandan people (present-day North and South Dakota), both of whom wore feathers in their headdress, and also the Caufirs of Cabul, who are said to have stuck a feather in their turban for every enemy slain. Similar customs are thought to have been practiced by the Mongols, Incas, Caciques, Abyssinians, Tur’comans, Hungarians, Dayak people, and the ancient Lycians. Examples of the use of feathers related to hunting can be found in the cultures of highland peoples in Scotland and Wales, where it is still customary for the hunter who kills the first woodcock to pluck out a feather and stick it in his cap.
Traditions involving feathers in headdress:
Other examples of feathers in caps which appear to be related to hunters and warriors can be found in mythological stories of historical figures such as Albrecht Gessler, the Austrian bailiff of Altdorf, an aggressor who made the Swiss national hero William Tell shoot an apple from the head of his son. Indeed, the Tyrolean hat of today, worn in the Austrian Alps, has a cord wrapped around the base of the crown and a feather or brush on the side as trim.
**Somaesthetics**
Somaesthetics:
Somaesthetics is an interdisciplinary field of inquiry aimed at promoting and integrating the theoretical, empirical and practical disciplines related to bodily perception, performance and presentation.
Etymology:
The term ‘somaesthetics’ was coined by the American pragmatist philosopher Richard Shusterman in 1996 through the compounding of “soma”, an expression derived from the Greek word for body, and “aesthetics”, a word derived from the Greek aesthesis, meaning ‘sensory perception’. Shusterman has reported in a number of his works that he chose ‘soma’ over more familiar terms “to avoid problematic associations of body (which can be a lifeless, mindless thing) and flesh (which designates only the fleshly parts of the body and is strongly associated with Christian notions of sin)” and to emphasize that the project “concerns the lived, sentient, purposive body rather than merely a physical body”. As an amalgamation, ‘somaesthetics’ “implies a project of appreciating and cultivating the body not only as an object that externally displays beauty, sublimity, grace, and other aesthetic qualities, but also as a subjectivity that perceives these qualities and that experiences attendant aesthetic pleasures somatically”.
Origin and development:
Somaesthetics as a research project initially arose from the work of Richard Shusterman during the mid-1990s in response to what he perceived as needed developments within his two principal modes of inquiry: pragmatist aesthetics and philosophy as an embodied art of living. While pragmatist aesthetics, according to Shusterman, advocates for more active and creative engagement than traditional aesthetics, he believed it should also recognize that artistic, practical and political action requires humanity’s primary tool, the body, and that such action could be improved partly by improving this instrument. In the same way, the philosophical life could be improved through greater mastery of the soma -- our medium of living. He moreover lamented the reduction of aesthetics (as well as philosophy itself) from “a noble art of living into a minor, specialized university discipline” narrowly concerned with beauty and fine art. Shusterman thus argued for the revival of “Baumgarten’s idea of aesthetics as a life-improving cognitive discipline that extends far beyond questions of beauty and fine arts and that involves both theory and practical exercise” and for an end to “the neglect of the body that Baumgarten disastrously introduced into aesthetics”. As proposed, Shusterman’s project of somaesthetics would restore “the soma — the living, sentient, purposive body — as the indispensable medium for all perception". Such heightening of somatic consciousness would not only enhance artistic appreciation and creation, but increase the perceptual awareness of meanings and feelings that have the potential to elevate everyday experience into an art of living.
Origin and development:
Shusterman proposed three fundamental dimensions of his emerging field: • Analytic somaesthetics, as the most theoretically-oriented of the three, “describes the basic nature of bodily perceptions and practices and also of their function in our knowledge and construction of reality”.
• Pragmatic somaesthetics presupposes the analytic dimension and “has a distinctly normative, prescriptive character – by proposing specific methods of somatic improvement and engaging in their comparative critique”.
Origin and development:
• Practical somaesthetics focuses on practicing somatic care “through intelligently disciplined body work aimed at somatic self-improvement (whether in a representational, experiential, or performative mode)”. Over the past two decades, somaesthetics has become a truly interdisciplinary endeavor. Originally conceived by Shusterman as being under the umbrella of philosophy, or perhaps even a branch of aesthetics, somaesthetics has evolved into an “open field for collaborative, interdisciplinary, and transcultural inquiry” with applications “ranging from the arts, product design, and politics to fashion, health, sports, martial arts, and the use of hallucinogenic drugs in education”.
**Inflation-restriction exact sequence**
Inflation-restriction exact sequence:
In mathematics, the inflation-restriction exact sequence is an exact sequence occurring in group cohomology and is a special case of the five-term exact sequence arising from the study of spectral sequences.
Specifically, let G be a group, N a normal subgroup, and A an abelian group which is equipped with an action of G, i.e., a homomorphism from G to the automorphism group of A. The quotient group G/N acts on A^N = { a ∈ A : na = a for all n ∈ N }.
Inflation-restriction exact sequence:
Then the inflation-restriction exact sequence is:
0 → H^1(G/N, A^N) → H^1(G, A) → H^1(N, A)^{G/N} → H^2(G/N, A^N) → H^2(G, A).
In this sequence, there are maps:
inflation: H^1(G/N, A^N) → H^1(G, A)
restriction: H^1(G, A) → H^1(N, A)^{G/N}
transgression: H^1(N, A)^{G/N} → H^2(G/N, A^N)
inflation: H^2(G/N, A^N) → H^2(G, A)
The inflation and restriction are defined for general n:
inflation: H^n(G/N, A^N) → H^n(G, A)
restriction: H^n(G, A) → H^n(N, A)^{G/N}
The transgression H^n(N, A)^{G/N} → H^{n+1}(G/N, A^N) is defined for general n only if H^i(N, A)^{G/N} = 0 for i ≤ n − 1. The sequence for general n may be deduced from the case n = 1 by dimension-shifting or from the Lyndon–Hochschild–Serre spectral sequence.
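For readability, the same five-term sequence with the maps labelled, rendered in LaTeX (this restates the sequence above; nothing new is added):

```latex
\[
0 \longrightarrow H^{1}(G/N, A^{N}) \xrightarrow{\mathrm{inf}} H^{1}(G, A)
  \xrightarrow{\mathrm{res}} H^{1}(N, A)^{G/N}
  \xrightarrow{\mathrm{tra}} H^{2}(G/N, A^{N})
  \xrightarrow{\mathrm{inf}} H^{2}(G, A)
\]
```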
**Triphosphoribosyl-dephospho-CoA synthase**
Triphosphoribosyl-dephospho-CoA synthase:
In enzymology, a triphosphoribosyl-dephospho-CoA synthase (EC 2.7.8.25) is an enzyme that catalyzes the chemical reaction
ATP + 3-dephospho-CoA ⇌ 2'-(5''-triphosphoribosyl)-3'-dephospho-CoA + adenine
Thus, the two substrates of this enzyme are ATP and 3-dephospho-CoA, whereas its two products are 2'-(5''-triphosphoribosyl)-3'-dephospho-CoA and adenine.
This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. The systematic name of this enzyme class is ATP:3-dephospho-CoA 5"-triphosphoribosyltransferase. Other names in common use include 2'-(5"-triphosphoribosyl)-3-dephospho-CoA synthase, ATP:dephospho-CoA 5-triphosphoribosyl transferase, and CitG. This enzyme participates in two-component system - general.
**Walther graph**
Walther graph:
In the mathematical field of graph theory, the Walther graph, also called the Tutte fragment, is a planar bipartite graph with 25 vertices and 31 edges named after Hansjoachim Walther. It has chromatic index 3, girth 3 and diameter 8.
If the single vertex of degree 1 whose neighbour has degree 3 is removed, the resulting graph has no Hamiltonian path. This property was used by Tutte when combining three Walther graphs to produce the Tutte graph, the first known counterexample to Tait's conjecture that every 3-regular polyhedron has a Hamiltonian cycle.
Algebraic properties:
The Walther graph is an identity graph; its automorphism group is the trivial group.
The characteristic polynomial of the Walther graph is
x^3 (x^22 − 31x^20 + 411x^18 − 3069x^16 + 14305x^14 − 43594x^12 + 88418x^10 − 119039x^8 + 103929x^6 − 55829x^4 + 16539x^2 − 2040).
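As a way to verify such a polynomial (our own sketch; this article does not list the Walther graph's edges, so the edge list below is a small placeholder), one can build the adjacency matrix and ask sympy for its characteristic polynomial:

```python
# Sketch: characteristic polynomial of a graph from its edge list using sympy.
# The 4-cycle below is a placeholder, NOT the Walther graph; substitute the
# Walther graph's 25 vertices and 31 edges to check the polynomial above.
from sympy import Matrix, symbols

def graph_char_poly(n_vertices, edges):
    adj = [[0] * n_vertices for _ in range(n_vertices)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = 1  # undirected graph: symmetric 0/1 matrix
    x = symbols('x')
    return Matrix(adj).charpoly(x).as_expr()  # det(x*I - A)

print(graph_char_poly(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # x**4 - 4*x**2
```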
**Torsades de pointes**
Torsades de pointes:
Torsades de pointes, torsade de pointes or torsades des pointes (TdP) (French: [tɔʁsad də pwɛ̃t̪], translated as "twisting of peaks") is a specific type of abnormal heart rhythm that can lead to sudden cardiac death. It is a polymorphic ventricular tachycardia that exhibits distinct characteristics on the electrocardiogram (ECG). It was described by French physician François Dessertenne in 1966. Prolongation of the QT interval can increase a person's risk of developing this abnormal heart rhythm, occurring in between 1% and 10% of patients who receive QT-prolonging antiarrhythmic drugs.
Signs and symptoms:
Most episodes will revert spontaneously to a normal sinus rhythm. Symptoms and consequences include palpitations, dizziness, lightheadedness (during shorter episodes), fainting (during longer episodes), and sudden cardiac death.
Causes:
Torsades occurs as both an inherited form (linked to at least 17 genes) and an acquired form, caused most often by drugs and/or electrolyte disorders that cause excessive lengthening of the QT interval. Common causes for torsades de pointes include drug-induced QT prolongation and, less often, diarrhea, low serum magnesium, and low serum potassium or congenital long QT syndrome. It can be seen in malnourished individuals and chronic alcoholics, due to a deficiency in potassium and/or magnesium. Certain drugs and combinations of drugs resulting in drug interactions are common contributors to torsades de pointes risk. QT-prolonging medications such as clarithromycin, levofloxacin, or haloperidol, when taken concurrently with cytochrome P450 inhibitors, such as fluoxetine, cimetidine, or particular foods including grapefruit, can result in higher-than-normal levels of medications that prolong the QT interval in the bloodstream and therefore increase a person's risk of developing torsades de pointes. A TdP cardiac event precipitated by loperamide has been reported (although the dose was well beyond the therapeutic range of the medication).
Causes:
Medications as causes Knowledge that TdP may occur in patients taking certain prescription drugs has been both a major liability and reason for removal of 14 medications from the marketplace. Forty-nine drugs known to cause TdP and another 170 that are known to prolong QT remain on the market because the drugs provide medical benefit and the risk of TdP can be managed and mitigated by instructions in the drug label. Examples of compounds linked to clinical observations of TdP include amiodarone, most fluoroquinolones, methadone, lithium, chloroquine, erythromycin, azithromycin, pimozide, and phenothiazines. The anti-emetic agent ondansetron may also increase the risk of developing TdP. It has also been shown as a side effect of certain anti-arrhythmic medications, such as sotalol, procainamide, quinidine, ibutilide, and dofetilide. In one example, the gastrokinetic drug cisapride (Propulsid) was withdrawn from the US market in 2000 after it was linked to deaths caused by long QT syndrome-induced torsades de pointes. This effect can be directly linked to QT prolongation mediated predominantly by inhibition of the hERG channel and, in some cases, augmentation of the late sodium channel.
Risk factors:
The following is a partial list of factors associated with an increased tendency towards developing torsades de pointes:
Medications
Hypokalemia (low serum potassium)
Hypomagnesemia (low serum magnesium)
Hypocalcemia (low serum calcium)
Bradycardia (slow heartbeat)
Heart failure
Left ventricular hypertrophy
Hypothermia
Subarachnoid hemorrhage
Hypothyroidism
Pathophysiology:
Action potential of cardiac muscles can be broken down into five phases: Phase 0: Sodium channels open, resulting in the entrance of Na+ into the cells; this results in the depolarization of the cardiac muscles.
Phase 1: Sodium channels close; this stops depolarization. Potassium channels open, leading to an outward current of K+ out of the cells.
Phase 2: Potassium channels remain open (outward current of K+), and calcium channels now also open (inward current of Ca++), resulting in a plateau state.
Pathophysiology:
Phase 3: Calcium channels close (inward Ca++ stops), but potassium channels are still open (outward K+ current); this persists until the cells gain back normal polarization (repolarization achieved). Please note that phase 0 leads to a net gain of Na+, while phases 1–3 lead to a net loss of K+. This imbalance is corrected by the Na+/K+-ATPase channel that pumps K+ into the cell and sodium out of the cell; this does not change polarization of the cells, but does restore ionic content to its initial state.
Pathophysiology:
Phase 4: Exciting triggers (e.g. sinus node) will cause minor depolarization in the cells; this will result in increasing permeability of sodium channels, which triggers the opening of sodium channels. Repolarization of the cardiomyocytes occurs in phases 1–3, and is caused predominantly by the outward movement of potassium ions. In Torsades de pointes, however, the repolarization is prolonged; this can be due to electrolyte disturbances (hypokalemia, hypomagnesemia, hypocalcemia), bradycardia, certain drugs (disopyramide, sotalol, amiodarone, amitriptyline, chlorpromazine, erythromycin) and/or congenital syndromes. The prolongation of repolarisation may result in subsequent activation of an inward depolarisation current, known as an early after-depolarisation, which may promote triggered activity. Re-entry, due to a dispersion of refractory periods, is also possible; this is because M cells (found in the mid-myocardial layer) show a more prolonged repolarization phase in response to potassium blockage than other cells. In turn, this produces a zone of functional refractoriness (inability to depolarize) in the mid-myocardial layer. When a new action potential is generated, the mid-myocardial layer will remain in a refractory period, but the surrounding tissue will depolarize. As soon as the mid-myocardial layer is no longer in a refractory period, excitation from nearby tissue will cause a retrograde current and a reentry circuit that will result in a positive chronotropic cycle, leading to tachycardia.
Diagnosis:
The ECG tracing in torsades demonstrates a polymorphic ventricular tachycardia with a characteristic illusion of a twisting of the QRS complex around the isoelectric baseline (peaks, which are at first pointing up, appear to be pointing down for subsequent "beats" when looking at ECG traces of the "heartbeat"). It is hemodynamically unstable and causes a sudden drop in arterial blood pressure, leading to dizziness and fainting. Depending on their cause, most individual episodes of torsades de pointes revert to normal sinus rhythm within a few seconds; however, episodes may also persist and possibly degenerate into ventricular fibrillation, leading to sudden death in the absence of prompt medical intervention. Torsades de pointes is associated with long QT syndrome, a condition whereby prolonged QT intervals are visible on an ECG. Long QT intervals predispose the patient to an R-on-T phenomenon, wherein the R-wave, representing ventricular depolarization, occurs during the relative refractory period at the end of repolarization (represented by the latter half of the T-wave). An R-on-T can initiate torsades. Sometimes, pathologic T-U waves may be seen in the ECG before the initiation of torsades. A "short-coupled variant of torsade de pointes", which presents without long QT syndrome, was also described in 1994 as having the following characteristics:
Drastic rotation of the heart's electrical axis
Prolonged QT interval (LQTS) - may not be present in the short-coupled variant of torsade de pointes
Preceded by long and short RR-intervals - not present in the short-coupled variant of torsade de pointes
Triggered by a premature ventricular contraction (R-on-T PVC)
R-on-T phenomenon:
The R-on-T phenomenon is the superimposition of a premature ventricular contraction on the T wave of a preceding heart beat. Studies suggest that the R-on-T phenomenon is likely to start a sustained ventricular tachycardia and ventricular fibrillation. It is considered a cardiac arrhythmia in which the ventricles of the heart become excited again during the repolarization of the previous heart action. Because part of the heart muscle cannot be excited at this early point in time, a premature chamber action can trigger life-threatening cardiac arrhythmias (e.g. ventricular fibrillation or Torsades de pointes).
R-on-T phenomenon:
On the ECG, this phenomenon shows as a ventricular extrasystole (R) superimposed on the T-wave, i.e. during the repolarization phase of the previous action of the heart. Not all premature chamber actions can trigger these dangerous arrhythmias; the risk is increased with ischemia of the heart muscle or with prolonged repolarization time (long QT syndrome). The arrhythmia can also be triggered when an external stimulus such as cardioversion falls in the vulnerable phase of the cardiac cycle.
R-on-T phenomenon:
In the Lown grading system of ventricular arrhythmias, the R-on-T phenomenon is the fifth, most threatening class.
Treatment:
The treatment of torsades de pointes aims to restore a normal rhythm and to prevent the arrhythmia recurring. While torsades may spontaneously revert to a normal sinus rhythm, sustained torsades requires emergency treatment to prevent cardiac arrest. The most effective treatment to terminate torsades is electrical cardioversion, a procedure in which an electrical current is applied across the heart to temporarily stop and then resynchronise the heart's cells. Treatment to prevent recurrent torsades includes infusion of magnesium sulphate, correction of electrolyte imbalances such as low blood potassium levels (hypokalaemia), and withdrawal of any medications that prolong the QT interval. Treatments used to prevent torsades in specific circumstances include beta blockers or mexiletine in long QT syndrome. Occasionally a pacemaker may be used to accelerate the heart's own sinus rhythm, and those at risk of further torsades may be offered an implantable defibrillator to automatically detect and defibrillate further episodes of the arrhythmia.
History:
The phenomenon was originally described in a French medical journal by Dessertenne in 1966, when he observed this cardiac rhythm disorder in an 80-year-old female patient with complete intermittent atrioventricular block. In coining the term, he referred his colleagues to the "Dictionnaire Le Robert", a bilingual French English dictionary, of which his wife had just given him a copy. Here, "torsade" is defined as: a bundle of threads, twisted in a helix or spiral, for ornamental purposes (such as in an Aran sweater); long hair twisted together; an ornamental motif, as seen on architectural columns.
Terminology:
The singular and plural forms (torsade de pointes, torsades de pointes and torsades des pointes) have all often been used. The question of whether each one is grammatically "correct" and the others "incorrect" has repeatedly arisen. This is seen among major medical dictionaries, where one enters only the plural form, another enters the plural form as the headword but lists the singular as a variant, and yet another enters the singular form as the headword and gives a usage comment saying that the plural is not preferred. One group of physicians has suggested that it would make the most sense to use the singular form to refer to the arrhythmia entity (where an arrhythmia may involve one or multiple episodes), and that one might best reserve the plural form for describing repeated twisting during a single episode. Other authors have suggested all three words should be plural. Regarding the natural language variation, they concluded, in good nature, "Wasn't it the French who coined the term vive la difference?"
**Jell-O 1-2-3**
Jell-O 1-2-3:
Jell-O 1-2-3 was a Jell-O gelatin product introduced in 1969 and discontinued in 1996. The product was one 4.3 ounce (121 g) powdered mix that, when properly prepared, separated and solidified into three distinct layers: a creamy top, a mousse-like middle, and a regular Jell-O bottom.
In popular culture:
In season 1, episode 16 of the TV show The Nanny, Fran's mother prepared Jell-O 1-2-3 for her and the Sheffields. Fran notes that it hasn't been produced in a long time, and her mother claims she's been saving it for a "special occasion".
In season 1, episode 3 of the TV show The Kids in the Hall, Fran (Scott's character) mentions Jell-O 1-2-3 at the end of the "Salty Ham" skit.
In the episode "The Lost Art of Forehead Sweat" of the TV show The X-Files, Agent Scully remembers having it at family celebrations, but misremembers it as "Goop-O A-B-C" due to the Mandela effect.
**Valuation (geometry)**
Valuation (geometry):
In geometry, a valuation is a finitely additive function from a collection of subsets of a set X to an abelian semigroup. For example, Lebesgue measure is a valuation on finite unions of convex bodies of Rn.
Other examples of valuations on finite unions of convex bodies of Rn are surface area, mean width, and Euler characteristic.
In geometry, continuity (or smoothness) conditions are often imposed on valuations, but there are also purely discrete facets of the theory. In fact, the concept of valuation has its origin in the dissection theory of polytopes and in particular Hilbert's third problem, which has grown into a rich theory reliant on tools from abstract algebra.
Definition:
Let X be a set, and let S be a collection of subsets of X.
A function ϕ on S with values in an abelian semigroup R is called a valuation if it satisfies
ϕ(A ∪ B) + ϕ(A ∩ B) = ϕ(A) + ϕ(B)
whenever A, B, A∪B, and A∩B are elements of S.
If ∅∈S, then one always assumes ϕ(∅) = 0.
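As a quick sanity check (a standard computation, not taken from this article), Lebesgue measure on R satisfies the valuation identity for two overlapping intervals:

```latex
% Take A = [0,2] and B = [1,3], so A \cup B = [0,3] and A \cap B = [1,2]:
\[
\mathrm{vol}(A \cup B) + \mathrm{vol}(A \cap B) = 3 + 1 = 2 + 2
  = \mathrm{vol}(A) + \mathrm{vol}(B).
\]
```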
Examples Some common examples of S are:
the convex bodies in R^n
compact convex polytopes in R^n
convex cones
smooth compact polyhedra in a smooth manifold X
Let K(R^n) be the set of convex bodies in R^n. Then some valuations on K(R^n) are:
the Euler characteristic χ : K(R^n) → Z
Lebesgue measure restricted to K(R^n)
intrinsic volume (and, more generally, mixed volume)
the map A ↦ h_A, where h_A is the support function of A
Some other valuations are:
the lattice point enumerator P ↦ |Z^n ∩ P|, where P is a lattice polytope
cardinality, on the family of finite sets
Valuations on convex bodies:
From here on, let V = R^n, let K(V) be the set of convex bodies in V, and let ϕ be a valuation on K(V). We say ϕ is translation invariant if, for all K ∈ K(V) and x ∈ V, we have ϕ(K + x) = ϕ(K). Let (K, L) ∈ K(V)^2. The Hausdorff distance d_H(K, L) is defined as
d_H(K, L) = inf{ε > 0 : K ⊂ L_ε and L ⊂ K_ε},
where K_ε is the ε-neighborhood of K under some Euclidean inner product. Equipped with this metric, K(V) is a locally compact space.
Valuations on convex bodies:
The space of continuous, translation-invariant valuations from K(V) to C is denoted by Val (V).
The topology on Val (V) is the topology of uniform convergence on compact subsets of K(V).
Equipped with the norm
‖ϕ‖ = sup{|ϕ(K)| : K ⊂ B},
where B ⊂ V is a bounded subset with nonempty interior, Val(V) is a Banach space.
Homogeneous valuations A translation-invariant continuous valuation ϕ ∈ Val(V) is said to be i-homogeneous if
ϕ(λK) = λ^i ϕ(K)
for all λ > 0 and K ∈ K(V).
The subset Val i(V) of i -homogeneous valuations is a vector subspace of Val (V).
McMullen's decomposition theorem states that
Val(V) = ⊕_{i=0}^{n} Val_i(V), where n = dim V.
In particular, the degree of a homogeneous valuation is always an integer between 0 and dim V.
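As a concrete instance of the grading (standard facts about the plane, not from this article), the Euler characteristic, perimeter, and area on K(R^2) are homogeneous of degrees 0, 1 and 2 respectively:

```latex
% Scaling behaviour of three familiar valuations on convex bodies in R^2:
\[
\chi(\lambda K) = \chi(K), \qquad
\mathrm{per}(\lambda K) = \lambda\,\mathrm{per}(K), \qquad
\mathrm{area}(\lambda K) = \lambda^{2}\,\mathrm{area}(K), \qquad \lambda > 0.
\]
```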
Valuations are not only graded by the degree of homogeneity, but also by the parity with respect to the reflection through the origin, namely
Val_i(V) = Val_i^+(V) ⊕ Val_i^−(V),
where ϕ ∈ Val_i^ε(V) with ε ∈ {+, −} if and only if ϕ(−K) = ε ϕ(K) for all convex bodies K.
The elements of Val i+ and Val i− are said to be even and odd, respectively.
It is a simple fact that Val_0(V) is 1-dimensional and spanned by the Euler characteristic χ, that is, it consists of the constant valuations on K(V).
In 1957 Hadwiger proved that Val_n(V) (where n = dim V) coincides with the 1-dimensional space of Lebesgue measures on V.
A valuation ϕ ∈ Val(R^n) is simple if ϕ(K) = 0 for all convex bodies with dim K < n.
Schneider in 1996 described all simple valuations on R^n: they are given by
ϕ(K) = c vol(K) + ∫_{S^{n−1}} f(u) dσ_K(u),
where c ∈ C, f ∈ C(S^{n−1}) is an arbitrary odd function on the unit sphere S^{n−1} ⊂ R^n, and σ_K is the surface area measure of K.
In particular, any simple valuation is the sum of an n - and an (n−1) -homogeneous valuation. This in turn implies that an i -homogeneous valuation is uniquely determined by its restrictions to all (i+1) -dimensional subspaces.
Embedding theorems The Klain embedding is a linear injection of Val i+(V), the space of even i -homogeneous valuations, into the space of continuous sections of a canonical complex line bundle over the Grassmannian Gr i(V) of i -dimensional linear subspaces of V.
Valuations on convex bodies:
Its construction is based on Hadwiger's characterization of n-homogeneous valuations. If ϕ ∈ Val_i(V) and E ∈ Gr_i(V), then the restriction ϕ|_E is an element of Val_i(E), and by Hadwiger's theorem it is a Lebesgue measure. Hence
E ↦ ϕ|_E
defines a continuous section of the line bundle Dens over Gr_i(V) with fiber over E equal to the 1-dimensional space Dens(E) of densities (Lebesgue measures) on E.
Valuations on convex bodies:
Theorem (Klain). The linear map Kl : Val_i^+(V) → Γ(Gr_i(V), Dens) is injective.
A different injection, known as the Schneider embedding, exists for odd valuations. It is based on Schneider's description of simple valuations. It is a linear injection of Val i−(V), the space of odd i -homogeneous valuations, into a certain quotient of the space of continuous sections of a line bundle over the partial flag manifold of cooriented pairs (Fi⊂Ei+1).
Valuations on convex bodies:
Its definition is reminiscent of the Klain embedding, but more involved. Details can be found in the references. The Goodey-Weil embedding is a linear injection of Val_i into the space of distributions on the i-fold product of the (n−1)-dimensional sphere. It is nothing but the Schwartz kernel of a natural polarization that any ϕ ∈ Val_k(V) admits, namely as a functional on the k-fold product of C^2(S^{n−1}), the latter space of functions having the geometric meaning of differences of support functions of smooth convex bodies. For details, see the references.
Valuations on convex bodies:
Irreducibility Theorem The classical theorems of Hadwiger, Schneider and McMullen give fairly explicit descriptions of valuations that are homogeneous of degree 1, n − 1, and n = dim V.
But for degrees 1 < i < n−1 very little was known before the turn of the 21st century. McMullen's conjecture is the statement that the valuations
ϕ(K) = vol(K + A), A ∈ K(V),
span a dense subspace of Val(V).
McMullen's conjecture was confirmed by Alesker in a much stronger form, which became known as the Irreducibility Theorem: Theorem (Alesker). For every 0≤i≤n, the natural action of GL(V) on the spaces Val i+(V) and Val i−(V) is irreducible.
Here the action of the general linear group GL(V) on Val(V) is given by
(g ⋅ ϕ)(K) = ϕ(g^{−1} K).
The proof of the Irreducibility Theorem is based on the embedding theorems of the previous section and Beilinson-Bernstein localization.
Smooth valuations A valuation ϕ ∈ Val(V) is called smooth if the map g ↦ g⋅ϕ from GL(V) to Val(V) is smooth. In other words, ϕ is smooth if and only if ϕ is a smooth vector of the natural representation of GL(V) on Val(V).
The space of smooth valuations Val ∞(V) is dense in Val (V) ; it comes equipped with a natural Fréchet-space topology, which is finer than the one induced from Val (V).
For every (complex-valued) smooth function f on Gr_i(R^n),
ϕ(K) = ∫_{Gr_i(R^n)} vol_i(P_E K) f(E) dE,
where P_E : R^n → E denotes the orthogonal projection and dE is the Haar measure, defines a smooth even valuation of degree i.
It follows from the Irreducibility Theorem, in combination with the Casselman-Wallach theorem, that any smooth even valuation can be represented in this way. Such a representation is sometimes called a Crofton formula.
For any (complex-valued) smooth differential form ω ∈ Ω^{n−1}(R^n × S^{n−1}) that is invariant under all the translations (x, u) ↦ (x + t, u) and every number c ∈ C, integration over the normal cycle defines a smooth valuation:
ϕ(K) = c vol_n(K) + ∫_{N(K)} ω.    (1)
As a set, the normal cycle N(K) consists of the outward unit normals to K.
The Irreducibility Theorem implies that every smooth valuation is of this form.
Operations on translation-invariant valuations:
There are several natural operations defined on the subspace of smooth valuations Val^∞(V) ⊂ Val(V).
The most important one is the product of two smooth valuations. Together with pullback and pushforward, this operation extends to valuations on manifolds.
Exterior product Let V, W be finite-dimensional real vector spaces. There exists a bilinear map, called the exterior product,
⊠ : Val^∞(V) × Val^∞(W) → Val(V × W),
which is uniquely characterized by the following two properties: it is continuous with respect to the usual topologies on Val^∞ and Val.
Operations on translation-invariant valuations:
if ϕ = vol_V(∙+A) and ψ = vol_W(∙+B), where A ∈ K(V) and B ∈ K(W) are convex bodies with smooth boundary and strictly positive Gauss curvature, and vol_V and vol_W are densities on V and W, then
(ϕ ⊠ ψ)(K) = vol_{V×W}(K + (A × B)).
Product The product of two smooth valuations ϕ, ψ ∈ Val^∞(V) is defined by
(ϕ ⋅ ψ)(K) = (ϕ ⊠ ψ)(Δ(K)),
where Δ : V → V × V is the diagonal embedding. The product is a continuous map Val^∞(V) × Val^∞(V) → Val^∞(V). Equipped with this product, Val^∞(V) becomes a commutative associative graded algebra with the Euler characteristic as the multiplicative identity.
Operations on translation-invariant valuations:
Alesker-Poincaré duality By a theorem of Alesker, the restriction of the product to
Val_k^∞(V) × Val_{n−k}^∞(V) → Val_n^∞(V) ≅ Dens(V)
is a non-degenerate pairing. This motivates the definition of the k-homogeneous generalized valuation, denoted Val_k^{−∞}(V), as (Val_{n−k}^∞(V) ⊗ Dens(V))^*, topologized with the weak topology. By the Alesker-Poincaré duality, there is a natural dense inclusion Val_k^∞(V) ↪ Val_k^{−∞}(V).
Convolution Convolution is a natural product on Val^∞(V) ⊗ Dens(V^*).
Operations on translation-invariant valuations:
For simplicity, we fix a density vol on V to trivialize the second factor. Define, for fixed A, B ∈ K(V) with smooth boundary and strictly positive Gauss curvature,
vol(∙ + A) ∗ vol(∙ + B) = vol(∙ + A + B).
There is then a unique extension by continuity to a map Val^∞(V) × Val^∞(V) → Val^∞(V), called the convolution.
Unlike the product, the convolution respects the co-grading, namely if ϕ ∈ Val_{n−i}^∞(V) and ψ ∈ Val_{n−j}^∞(V), then ϕ ∗ ψ ∈ Val_{n−i−j}^∞(V).
For instance, let V(K1,…,Kn) denote the mixed volume of the convex bodies K1,…,Kn⊂Rn.
If convex bodies A1,…,An−i in Rn with a smooth boundary and strictly positive Gauss curvature are fixed, then ϕ(K)=V(K[i],A1,…,An−i) defines a smooth valuation of degree i.
The convolution of two such valuations, ϕ(K) = V(K[i], A_1, …, A_{n−i}) and ψ(K) = V(K[j], B_1, …, B_{n−j}), is
(ϕ ∗ ψ)(K) = c_{i,j} V(K[i + j − n], A_1, …, A_{n−i}, B_1, …, B_{n−j}),
where c_{i,j} is a constant depending only on i, j, n.
Fourier transform The Alesker-Fourier transform is a natural, GL(V) -equivariant isomorphism of complex-valued valuations discovered by Alesker and enjoying many properties resembling the classical Fourier transform, which explains its name.
It reverses the grading, namely F : Val_i^∞(V) → Val_{n−i}^∞(V^*) ⊗ Dens(V), and intertwines the product and the convolution:
F(ϕ ⋅ ψ) = Fϕ ∗ Fψ.
Fixing for simplicity a Euclidean structure to identify V = V^*, Dens(V) = C, we have the identity
(F^2 ϕ)(K) = ϕ(−K).
On even valuations, there is a simple description of the Fourier transform in terms of the Klain embedding:
Kl_{Fϕ}(E) = Kl_ϕ(E^⊥).
In particular, even real-valued valuations remain real-valued after the Fourier transform.
For odd valuations, the description of the Fourier transform is substantially more involved. Unlike the even case, it is no longer of purely geometric nature. For instance, the space of real-valued odd valuations is not preserved.
Pullback and pushforward Given a linear map f : U → V, there are induced operations of pullback f^∗ : Val(V) → Val(U) and pushforward f_∗ : Val(U) ⊗ Dens(U)^* → Val(V) ⊗ Dens(V)^*.
The pullback is the simpler of the two, given by f∗ϕ(K)=ϕ(f(K)).
It evidently preserves the parity and degree of homogeneity of a valuation. Note that the pullback does not preserve smoothness when f is not injective.
The pushforward is harder to define formally. For simplicity, fix Lebesgue measures on U and V.
The pushforward can be uniquely characterized by describing its action on valuations of the form vol(∙+A), for all A ∈ K(U), and then extended by continuity to all valuations using the Irreducibility Theorem; there are explicit formulas for a surjective map f, and for an inclusion f : U ↪ V after choosing a splitting V = U ⊕ W.
Informally, the pushforward is dual to the pullback with respect to the Alesker-Poincaré pairing: for ϕ ∈ Val(V) and ψ ∈ Val(U) ⊗ Dens(U)^*,
⟨f_∗ ψ, ϕ⟩ = ⟨ψ, f^∗ ϕ⟩.
However, this identity has to be carefully interpreted since the pairing is only well-defined for smooth valuations. For further details, see the references.
Valuations on manifolds:
In a series of papers beginning in 2006, Alesker laid down the foundations for a theory of valuations on manifolds that extends the theory of valuations on convex bodies. The key observation leading to this extension is that via integration over the normal cycle (1), a smooth translation-invariant valuation may be evaluated on sets much more general than convex ones. Also (1) suggests to define smooth valuations in general by dropping the requirement that the form ω be translation-invariant and by replacing the translation-invariant Lebesgue measure with an arbitrary smooth measure.
Valuations on manifolds:
Let X be an n-dimensional smooth manifold and let PX=P+(T∗X) be the co-sphere bundle of X, that is, the oriented projectivization of the cotangent bundle. Let P(X) denote the collection of compact differentiable polyhedra in X.
The normal cycle N(A) ⊂ P_X of A ∈ P(X), which consists of the outward co-normals to A, is naturally a Lipschitz submanifold of dimension n − 1.
Valuations on manifolds:
For ease of presentation we henceforth assume that X is oriented, even though the concept of smooth valuations in fact does not depend on orientability. The space of smooth valuations V^∞(X) on X consists of functions ϕ : P(X) → C of the form
ϕ(A) = ∫_A μ + ∫_{N(A)} ω,
where μ ∈ Ω^n(X) and ω ∈ Ω^{n−1}(P_X) can be arbitrary. It was shown by Alesker that the smooth valuations on open subsets of X form a soft sheaf over X.
Valuations on manifolds:
Examples The following are examples of smooth valuations on a smooth manifold X: Smooth measures on X.
The Euler characteristic; this follows from the work of Chern on the Gauss-Bonnet theorem, where such μ and ω were constructed to represent the Euler characteristic. In particular, μ is then the Chern-Gauss-Bonnet integrand, which is the Pfaffian of the Riemannian curvature tensor.
If X is Riemannian, then the Lipschitz-Killing valuations or intrinsic volumes V0X=χ,V1X,…,VnX=volX are smooth valuations. If f:X→Rm is any isometric immersion into a Euclidean space, then ViX=f∗ViRm, where ViRm denotes the usual intrinsic volumes on Rm (see below for the definition of the pullback). The existence of these valuations is the essence of Weyl's tube formula.
Let CPn be the complex projective space, and let GrkC denote the Grassmannian of all complex projective subspaces of fixed dimension k.
The function
ϕ(A) = ∫_{Gr_k^C} χ(A ∩ E) dE,
where the integration is with respect to the Haar probability measure on Gr_k^C, is a smooth valuation. This follows from the work of Fu.
Filtration The space V^∞(X) admits no natural grading in general; however, it carries a canonical filtration
V^∞(X) = W_0 ⊃ W_1 ⊃ ⋯ ⊃ W_n.
Here W_n consists of the smooth measures on X, and W_j is given by forms ω in the ideal generated by π^∗ Ω^j(X), where π : P_X → X is the canonical projection.
The associated graded vector space ⊕_{i=0}^{n} W_i / W_{i+1} is canonically isomorphic to the space of smooth sections
⊕_{i=0}^{n} C^∞(X, Val_i^∞(TX)),
where Val_i^∞(TX) denotes the vector bundle over X such that the fiber over a point x ∈ X is Val_i^∞(T_x X), the space of i-homogeneous smooth translation-invariant valuations on the tangent space T_x X.
Product The space V^∞(X) admits a natural product. This product is continuous, commutative, associative, compatible with the filtration,
W_i ⋅ W_j ⊂ W_{i+j},
and has the Euler characteristic as the identity element. It also commutes with the restriction to embedded submanifolds, and the diffeomorphism group of X acts on V^∞(X) by algebra automorphisms.
For example, if X is Riemannian, the Lipschitz-Killing valuations satisfy explicit multiplication rules. The Alesker-Poincaré duality still holds. For compact X it says that the pairing V^∞(X) × V^∞(X) → C, (ϕ, ψ) ↦ (ϕ ⋅ ψ)(X), is non-degenerate. As in the translation-invariant case, this duality can be used to define generalized valuations. Unlike the translation-invariant case, no good definition of continuous valuations exists for valuations on manifolds.
The product of valuations closely reflects the geometric operation of intersection of subsets.
Informally, consider the generalized valuation χA=χ(A∩∙).
The product is given by χA⋅χB=χA∩B.
Now one can obtain smooth valuations by averaging generalized valuations of the form χ_A; more precisely, ϕ = ∫_S χ_{s(A)} ds is a smooth valuation if S is a sufficiently large measured family of diffeomorphisms; see the references for details.
Pullback and pushforward Every smooth immersion f:X→Y of smooth manifolds induces a pullback map f∗:V∞(Y)→V∞(X).
If f is an embedding, then
(f^∗ ϕ)(A) = ϕ(f(A)) for A ∈ P(X).
The pullback is a morphism of filtered algebras.
Every smooth proper submersion f : X → Y defines a pushforward map f_∗ : V^∞(X) → V^∞(Y) by
(f_∗ ϕ)(A) = ϕ(f^{−1}(A)).
The pushforward is compatible with the filtration as well: f_∗(W_i(X)) ⊂ W_{i−(dim X − dim Y)}(Y).
For general smooth maps, one can define pullback and pushforward for generalized valuations under some restrictions.
Applications in Integral Geometry:
Let M be a Riemannian manifold and let G be a Lie group of isometries of M acting transitively on the sphere bundle SM.
Under these assumptions the space V∞(M)G of G -invariant smooth valuations on M is finite-dimensional; let ϕ1,…,ϕm be a basis. Let A,B∈P(M) be differentiable polyhedra in M.
Applications in Integral Geometry:
Then integrals of the form ∫_G ϕ_i(A ∩ gB) dg are expressible as linear combinations of ϕ_k(A) ϕ_l(B) with coefficients c_i^{kl} independent of A and B:
∫_G ϕ_i(A ∩ gB) dg = Σ_{k,l} c_i^{kl} ϕ_k(A) ϕ_l(B).    (2)
Formulas of this type are called kinematic formulas. Their existence in this generality was proved by Fu. For the three simply connected real space forms, that is, the sphere, Euclidean space, and hyperbolic space, they go back to Blaschke, Santaló, Chern, and Federer.
Applications in Integral Geometry:
Describing the kinematic formulas explicitly is typically a difficult problem. In fact already in the step from real to complex space forms, considerable difficulties arise and these have only recently been resolved by Bernig, Fu, and Solanes. The key insight responsible for this progress is that the kinematic formulas contain the same information as the algebra of invariant valuations V∞(M)G.
Applications in Integral Geometry:
For a precise statement, let
k_G : V^∞(M)^G → V^∞(M)^G ⊗ V^∞(M)^G
be the kinematic operator, that is, the map determined by the kinematic formulas (2). Let
pd : V^∞(M)^G → (V^∞(M)^G)^*
denote the Alesker-Poincaré duality, which is a linear isomorphism. Finally, let m_G^* be the adjoint of the product map
m_G : V^∞(M)^G ⊗ V^∞(M)^G → V^∞(M)^G.
The Fundamental theorem of algebraic integral geometry, relating operations on valuations to integral geometry, states that if the Poincaré duality is used to identify V^∞(M)^G with (V^∞(M)^G)^*, then k_G = m_G^*.
**Trial graphics**
Trial graphics:
Trial graphics are images that have been designed by expert graphic artists for use in legal trials and procedures. Graphs and other images can be created with current graphic design technology for use as evidential support in a court of law.
Effective jury presentations are a key part of building a strong legal case. High-quality legal graphics are a relatively new tool for lawyers looking to present clear analytic data or other designed images for jury review.
**Standard Geographical Classification code (Canada)**
Standard Geographical Classification code (Canada):
The Standard Geographical Classification (SGC) is a system maintained by Statistics Canada for categorizing and enumerating the census geographic units of Canada. Each geographic area receives a unique numeric code ranging from one to seven digits, which extend telescopically to refer to increasingly small areas. This geocode is roughly analogous to the ONS coding system in use in the United Kingdom.
Regions:
The SGC code format for regions is X, where X is a unique identifier incrementing from east to west, then north.
1: Atlantic Canada
2: Quebec
3: Ontario
4: Prairies
5: British Columbia
6: Northern Canada
Provinces and Territories:
The SGC code format for provinces and territories is XY, where X is the above regional prefix, and Y is a further identifier incrementing from east to west. Taken as a single digit, each value of Y is unique within the province group, or unique within the territory group.
10: Newfoundland and Labrador
11: Prince Edward Island
12: Nova Scotia
13: New Brunswick
24: Quebec
35: Ontario
46: Manitoba
47: Saskatchewan
48: Alberta
59: British Columbia
60: Yukon
61: Northwest Territories
62: Nunavut
Census divisions:
The SGC code format for census divisions is XX YY, where XX is the above province/territory code, and YY is the census division's code, unique within its own province. Census divisions are generally numbered from east to west. In some locations, a similar policy to American FIPS county codes has been adopted, with even-numbered slots being left vacant for future expansion.
Examples:
10 04: Division No. 4, Newfoundland and Labrador
10 05: Division No. 5, Newfoundland and Labrador
13 08: Kent County, New Brunswick
13 09: Northumberland County, New Brunswick
13 10: York County, New Brunswick
24 64: Les Moulins Regional County Municipality, Quebec
24 65: Territoire équivalent of Laval, Quebec
24 66: Territoire équivalent of Montreal, Quebec
24 67: Roussillon Regional County Municipality, Quebec
24 68: Les Jardins-de-Napierville Regional County Municipality, Quebec
35 07: Leeds and Grenville United Counties, Ontario
35 08: [vacant slot]
35 09: Lanark County, Ontario
35 10: Frontenac Census Division, Ontario
47 04: Division No. 4, Saskatchewan
48 05: Division No. 5, Alberta
59 01: Regional District of East Kootenay, British Columbia
59 02: [vacant slot]
59 03: Regional District of Central Kootenay, British Columbia
59 04: [vacant slot]
59 05: Regional District of Kootenay Boundary, British Columbia
Census subdivisions:
The SGC code format for census subdivisions is XX YY ZZZ, where XX is the province/territory code, YY is the census division code, and ZZZ is the census subdivision's code, unique within its own census division. Census subdivisions are again generally numbered from east to west, and the practice has been to leave even-numbered slots vacant for future expansion. A minimal parsing sketch follows the examples below.
Examples:
35 12 001: Tyendinaga, Ontario
35 12 002: Deseronto, Ontario
35 12 003: [vacant slot]
35 12 004: Tyendinaga Mohawk Territory, Ontario
35 12 005: Belleville, Ontario
35 12 006: [vacant slot]
62 04 001: Sanikiluaq, Nunavut
62 04 002: [vacant slot]
62 04 003: Iqaluit, Nunavut
62 04 004: [vacant slot]
62 04 005: Kimmirut, Nunavut
62 04 006: [vacant slot]
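Because the code telescopes by fixed digit widths (1 digit for the region, 2 for the province/territory, 4 for the census division, 7 for the census subdivision), splitting a code into its components reduces to string slicing. The sketch below is illustrative only; the function name and dictionary keys are assumptions, not part of the SGC standard.

```python
def parse_sgc(code: str) -> dict:
    """Split an SGC code into its telescoping components."""
    digits = code.replace(" ", "")
    if not digits.isdigit() or len(digits) not in (1, 2, 4, 7):
        raise ValueError("expected 1, 2, 4 or 7 digits")
    parts = {"region": digits[0]}                 # X: regional prefix
    if len(digits) >= 2:
        parts["province_territory"] = digits[:2]  # XY
    if len(digits) >= 4:
        parts["census_division"] = digits[:4]     # XX YY
    if len(digits) == 7:
        parts["census_subdivision"] = digits      # XX YY ZZZ
    return parts

# "35 10" -> region 3, province 35 (Ontario), division 35 10 (Frontenac)
print(parse_sgc("35 10"))
# "62 04 003" -> Iqaluit, Nunavut, per the examples above
print(parse_sgc("62 04 003"))
```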
**MAP3K8**
MAP3K8:
Mitogen-activated protein kinase kinase kinase 8 is an enzyme that in humans is encoded by the MAP3K8 gene.
Function:
The gene was identified by its oncogenic transforming activity in cells. The encoded protein is a member of the serine/threonine-specific protein kinase family. This kinase can activate ERK1, ERK2 and p38 MAP kinases. It has been shown to activate IkappaB kinases, and thus induce the nuclear production of NF-kappaB, and to promote the production of TNF-alpha and IL-2 during T lymphocyte activation. Studies of a similar gene in rat suggested the direct involvement of this kinase in the proteolysis of NF-kappaB1, p105 (NFKB1). This gene may also utilize a downstream in-frame translation start codon and thus produce an isoform with a shorter N-terminus; the shorter isoform has been shown to display weaker transforming activity. In mice the gene is known as TPL2, and it acts as a tumor-suppressor gene whose absence contributes to the development and progression of cancer; in other organs, however, it functions as an oncogene, promoting cancer.
Interactions:
MAP3K8 has been shown to interact with AKT1, CHUK, NFKB2, NFKB1, C22orf25 and TNIP2.
**Textile bleaching**
Textile bleaching:
Textile bleaching (or bleaching of textiles) is one of the steps in the textile manufacturing process. The objective of bleaching is to remove the natural color in preparation for subsequent steps such as dyeing or printing, or to achieve a full white. All raw textile materials in their natural form are known as 'greige' material; they have a natural color, odor, and impurities that are not suited to clothing materials. Besides these natural impurities, the greige material also carries add-ons acquired during cultivation, growth, and manufacture, in the form of pesticides, fungicides, worm killers, sizes, lubricants, and so on. The removal of these natural coloring matters and add-ons is called scouring and bleaching.
A continuous bleaching range is a set of machines for carrying out the bleaching action. It consists of several compartments in which fabric moves from one side to another with the help of guide rollers and is treated with chemicals, heated, rinsed, and squeezed. Continuous bleaching is possible for fabrics in open-width or rope form.
History:
Bleaching can be dated back to at least 1000 BC, from an Egyptian list found in the tomb of Rekh-mi-re at Thebes that mentions both bleached and unbleached linen. Mulrooney dates it as far back as 5000 BC, while Walton claims it was introduced to Egypt from Asia; it is plausible that it was discovered independently by different cultures. It is generally assumed to have developed after people noticed that garments are naturally bleached by sunlight and washing. Wood ash (potash, or impure potassium hydroxide) was an early form of soap, known to have been used in bleaching since at least AD 1. This process of washing cloth in a solution of ashes (lye) and leaving it in the sun, known as grassing, is one of the oldest methods of bleaching textile goods and has long been used for linen and cotton-based fabrics. In Europe, linens were laid out on the grass for over seven days after boiling with the 'lyes of ashes' and rinsing. A bleachfield was an open area near a watercourse, used by a bleachery to spread out cloth; bleachfields were common in and around mill towns during the British Industrial Revolution. The Dutch were bleaching by about the 12th century and are credited with soaking the bleached cloth in a bath of soured milk for 5-8 days, which softened and neutralised the harsh effects of the caustic lye. By the 17th century the Dutch were renowned for their bleaching skills, and much of their trade was for customers abroad. Around 1756 the Scottish doctor Francis Home proposed an alternative to soured milk: a weak solution of sulphuric acid. This was made commercially viable by John Roebuck's manufacture of sulphuric acid and reduced the soaking time to 12-24 hours. A final rinse and drying finished the bleaching process. The English East India Company imported bleached, painted, and printed calico from India during the 17th century. This disrupted the English silk and wool trades, and an act of parliament (11 Will III Cap X) was passed in 1700 prohibiting the wearing of printed calicos manufactured in China, India, or Persia. This inadvertently established a calico bleaching and printing industry using unbleached Indian calico. A second law in 1721 prohibited the use and wear of all printed, painted, stained, or dyed calicoes, which stimulated demand for linen and fustian. The calico acts were repealed in 1774, when cloth was made using imported cotton from America.
Discovery of Chlorine:
After the discovery of chlorine in the late 18th century, chemical bleaching came into existence and rose above grassing, as it was quicker and could be carried out indoors. The French chemist Claude Louis Berthollet first demonstrated the bleaching properties of chlorine and subsequently developed liquid bleaches around 1789. James Watt is credited with bringing the process to Britain, and a fellow Scot, Charles Tennant, patented a more practical bleaching powder that made chlorine-based bleaching a commercial success.
Scouring:
Scouring is the first process, carried out with or without chemicals, at room temperature or at suitably elevated temperatures, with the addition of suitable wetting agents, alkali, and so on. Scouring removes impurities such as waxes and pectins and makes the textile material hydrophilic, or water-absorbent. Scouring is then followed by the bleaching process.
Bleaching:
Bleaching is the process of decolorizing the material after it has been scoured. Textile bleaching can be classified as oxidative or reductive bleaching, carried out with oxidizing or reducing bleaching agents respectively. Bleaching agents attack the chromophores and alter the color-absorbing properties of the objects.
Oxidative bleaching:
Oxidative bleaching is generally carried out using sodium hypochlorite, sodium chlorite, or hydrogen peroxide.
Vegetable fibres, animal fibres, and mineral fibres are the three major types of natural fibres. Natural fibres such as cotton, ramie, jute, and wool, as well as regenerated fibres such as bamboo, are generally bleached with oxidative methods.
Oxygen bleaching action:
It is the conjugated double bonds of the substrate that make it capable of absorbing visible light; hence it looks yellowish and needs bleaching. Bleaching with oxygen removes the chromophoric sites and makes the cloth whiter. Oxygen is a degrading bleaching agent: its action is based on destroying the phenolic groups and the carbon-carbon double bonds. The major chemical bleaching agent is hydrogen peroxide (H2O2), which contains a single O-O bond; when this bond breaks down, it gives rise to a very reactive oxygen species, the active agent of the bleach. Around sixty percent of the world's hydrogen peroxide is used in the chemical bleaching of textiles and wood pulp.
Reductive bleaching:
Reductive bleaching is done with sodium hydrosulphite, a powerful reducing agent. Fibres like polyamides, polyacrylics and polyacetates can be bleached using reductive bleaching technology.
Textile whitening:
Bleaching of textiles may include an additional application of optical brightening agents (OBAs). Optical brightening agents are chemical compounds that absorb light in the ultraviolet and violet region (usually 340-370 nm) of the electromagnetic spectrum and re-emit light in the blue region (typically 420-470 nm) by fluorescence. After scouring and bleaching, optical brightening agents are applied to make the textile material appear a more brilliant white. These OBAs are available in different tints such as blue, violet, and red.
Whiteness:
Whiteness in colorimetry is the degree to which a surface is white. The term "whiteness" refers to the degree to which a surface resembles the properties of a perfect reflecting diffuser, i.e. an ideal reflecting surface that neither absorbs nor transmits light, but instead reflects it evenly in all directions.
CIE Whiteness:
CIE Whiteness is a formula that measures the degree of whiteness. The CIE Whiteness Index was developed by the International Commission on Illumination (CIE).
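To make the formula concrete, here is a minimal computational sketch. The form used, $W = Y + 800(x_n - x) + 1700(y_n - y)$, and the D65/10° neutral-point values are recalled from general colorimetry references rather than from the text above, so treat the constants as assumptions.

```python
# Assumed chromaticity of the perfect diffuser for D65 / 10-degree observer.
XN, YN = 0.3138, 0.3310

def cie_whiteness(Y: float, x: float, y: float) -> float:
    """CIE whiteness W = Y + 800*(xn - x) + 1700*(yn - y).

    Y is the tristimulus luminance (0-100); (x, y) is the sample's
    chromaticity. Higher W means a whiter appearance.
    """
    return Y + 800.0 * (XN - x) + 1700.0 * (YN - y)

# Illustrative numbers for a bleached, optically brightened fabric:
print(round(cie_whiteness(Y=92.0, x=0.3100, y=0.3250), 1))  # ~105.2
```

A sample treated with OBAs typically shows W above its luminance Y, because the fluorescent blue emission shifts (x, y) below the neutral point.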
**Occupancy grid mapping**
Occupancy grid mapping:
Occupancy Grid Mapping refers to a family of computer algorithms in probabilistic robotics for mobile robots which address the problem of generating maps from noisy and uncertain sensor measurement data, with the assumption that the robot pose is known. Occupancy grids were first proposed by H. Moravec and A. Elfes in 1985. The basic idea of the occupancy grid is to represent a map of the environment as an evenly spaced field of binary random variables, each representing the presence of an obstacle at that location in the environment. Occupancy grid algorithms compute approximate posterior estimates for these random variables.
Algorithm outline:
The occupancy grid mapping approach has four major components: interpretation, integration, position estimation, and exploration.
Occupancy grid mapping algorithm:
The goal of an occupancy mapping algorithm is to estimate the posterior probability over maps given the data, $p(m\mid z_{1:t},x_{1:t})$, where $m$ is the map, $z_{1:t}$ is the set of measurements from time $1$ to $t$, and $x_{1:t}$ is the set of robot poses from time $1$ to $t$. The controls and odometry data play no part in the occupancy grid mapping algorithm, since the path is assumed known.
Occupancy grid algorithms represent the map $m$ as a fine-grained grid over the continuous space of locations in the environment. The most common type of occupancy grid map is the 2D map that describes a slice of the 3D world.
If we let $m_i$ denote the grid cell with index $i$ (in 2D maps, two indices are often used to represent the two dimensions), then the notation $p(m_i)$ represents the probability that cell $i$ is occupied.
The computational problem with estimating the posterior $p(m\mid z_{1:t},x_{1:t})$ is the dimensionality of the problem: if the map contains 10,000 grid cells (a relatively small map), then the number of possible maps that can be represented by this gridding is $2^{10{,}000}$, so calculating a posterior probability for all such maps is infeasible.
The standard approach, then, is to break the problem down into smaller problems of estimating $p(m_i\mid z_{1:t},x_{1:t})$ for each grid cell $m_i$. Each of these estimation problems is then a binary problem. This breakdown is convenient but does lose some of the structure of the problem, since it does not enable modelling dependencies between neighboring cells. Instead, the posterior of a map is approximated by factoring it into $p(m\mid z_{1:t},x_{1:t})=\prod_i p(m_i\mid z_{1:t},x_{1:t})$. Due to this factorization, a binary Bayes filter can be used to estimate the occupancy probability for each grid cell. It is common to use a log-odds representation of the probability that each grid cell is occupied.
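The following sketch shows the per-cell log-odds update, $l_t = l_{t-1} + \text{inverse sensor model} - l_0$. The inverse-sensor-model constants and the simple hit/miss interface are illustrative assumptions, not taken from the source.

```python
import math

# Assumed inverse sensor model in log-odds form (illustrative values):
# evidence added when a beam endpoint hits a cell, or when a beam
# passes through it; L_PRIOR = 0.0 corresponds to a prior p = 0.5.
L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0

def update_cell(l_prev: float, endpoint_hit: bool) -> float:
    """Binary Bayes filter update for one grid cell in log odds."""
    return l_prev + (L_OCC if endpoint_hit else L_FREE) - L_PRIOR

def occupancy_probability(l: float) -> float:
    """Recover p(m_i | z_{1:t}, x_{1:t}) from the log odds l."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# A cell hit by two beam endpoints, then observed free once:
l = L_PRIOR
for hit in (True, True, False):
    l = update_cell(l, hit)
print(round(occupancy_probability(l), 2))  # ~0.79 with these constants
```

Because each cell is updated independently, the whole map update is a loop (or a vectorized pass) over the cells touched by each measurement, which is what makes the factorized posterior tractable.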