**Investigator's brochure** Investigator's brochure: In drug development and medical device development, the Investigator's Brochure (IB) is a comprehensive document summarizing the body of information about an investigational product ("IP" or "study drug") obtained during a drug trial. The IB is a document of critical importance throughout the drug development process and is updated with new information as it becomes available. The purpose of the IB is to compile data relevant to studies of the IP in human subjects gathered during preclinical and other clinical trials. Investigator's brochure: An IB is intended to provide the investigator with insights necessary for management of study conduct and study subjects throughout a clinical trial. An IB may introduce key aspects and safety measures of a clinical trial protocol, such as: dose (of the study drug), frequency of the dosing interval, methods of administration, and safety monitoring procedures. An IB contains a "Summary of Data and Guidance for the Investigator" section, of which the overall aim is to "provide the investigator with a clear understanding of the possible risks and adverse reactions, and of the specific tests, observations, and precautions that may be needed for a clinical trial. This understanding should be based on the available physical, chemical, pharmaceutical, pharmacological, toxicological, and clinical information on the investigational product(s). Guidance should also be provided to the clinical investigator on the recognition and treatment of possible overdose and adverse drug reactions that is based on previous human experience and on the pharmacology of the investigational product". The sponsor is responsible for keeping the information in the IB up to date. The IB should be reviewed annually and must be updated when any new and important information becomes available, such as when a drug has received marketing approval and can be prescribed for use commercially. Investigator's brochure: Owing to the importance of the IB in maintaining the safety of human subjects in clinical trials, and as part of its guidance on good clinical practice (GCP), the U.S. Food and Drug Administration (FDA) has written regulatory codes and guidances for authoring the IB, and the International Conference on Harmonisation (ICH) has prepared a detailed guidance for the authoring of the IB in the European Union (EU), Japan, and the United States (US). Guidance documents: As part of its guidance on good clinical practice (GCP), the International Conference on Harmonisation (ICH) has prepared a detailed guidance for the contents of the IB in the European Union (EU), Japan, and the United States (US).[1] If many clinical trials have been completed, tables that summarize findings across the various studies can be very useful to demonstrate outcomes in, e.g., different patient populations or different indications. Guidance documents: Code of Federal Regulations, Title 21, Part 312, Investigational New Drug Application [2]; Code of Federal Regulations, Title 21, Part 201.56 (and Part 201.57) [3]; CDER Guidance for Industry, Adverse Reactions Section of Labeling for Human Prescription Drug and Biological Products — Content and Format [4]; CDER Guidance for Industry, Clinical Studies Section of Labeling for Human Prescription Drug and Biological Products — Content and Format [5]; CDER Guidance for Industry, Estimating the Maximum Safe Starting Dose in Initial Clinical Trials for Therapeutics in Adult Healthy Volunteers [6]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Truncated dodecadodecahedron** Truncated dodecadodecahedron: In geometry, the truncated dodecadodecahedron (or stellatruncated dodecadodecahedron) is a nonconvex uniform polyhedron, indexed as U59. It is given a Schläfli symbol t0,1,2{5⁄3,5}. It has 54 faces (30 squares, 12 decagons, and 12 decagrams), 180 edges, and 120 vertices. The central region of the polyhedron is connected to the exterior via 20 small triangular holes. Truncated dodecadodecahedron: The name truncated dodecadodecahedron is somewhat misleading: truncation of the dodecadodecahedron would produce rectangular faces rather than squares, and the pentagram faces of the dodecadodecahedron would turn into truncated pentagrams rather than decagrams. However, it is the quasitruncation of the dodecadodecahedron, as defined by Coxeter, Longuet-Higgins & Miller (1954). For this reason, it is also known as the quasitruncated dodecadodecahedron. Coxeter et al. credit its discovery to a paper published in 1881 by Austrian mathematician Johann Pitsch. Cartesian coordinates: Cartesian coordinates for the vertices of a truncated dodecadodecahedron are all the triples of numbers obtained by circular shifts and sign changes from the following points (where τ = (1 + √5)/2 is the golden ratio): (1, 1, 3); (1/τ, 1/τ², 2τ); (τ, 2/τ, τ²); (τ², 1/τ², 2); (√5, 1, √5). Each of these five points has eight possible sign patterns and three possible circular shifts, giving a total of 120 different points. As a Cayley graph: The truncated dodecadodecahedron forms a Cayley graph for the symmetric group on five elements, as generated by two group members: one that swaps the first two elements of a five-tuple, and one that performs a circular shift operation on the last four elements. That is, the 120 vertices of the polyhedron may be placed in one-to-one correspondence with the 5! permutations on five elements, in such a way that the three neighbors of each vertex are the three permutations formed from it by swapping the first two elements or circularly shifting (in either direction) the last four elements. Related polyhedra: Medial disdyakis triacontahedron The medial disdyakis triacontahedron is a nonconvex isohedral polyhedron. It is the dual of the uniform truncated dodecadodecahedron.
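As a quick sanity check on the coordinate description above (using the golden-ratio values as reconstructed here), the short Python sketch below enumerates the eight sign patterns and three circular shifts of each of the five seed points and confirms that they give exactly 120 distinct vertices, all equidistant from the origin.

```python
from itertools import product
from math import isclose, sqrt

tau = (1 + sqrt(5)) / 2  # golden ratio

# The five seed points listed above (positive representatives).
seeds = [
    (1.0, 1.0, 3.0),
    (1 / tau, 1 / tau**2, 2 * tau),
    (tau, 2 / tau, tau**2),
    (tau**2, 1 / tau**2, 2.0),
    (sqrt(5), 1.0, sqrt(5)),
]

def circular_shifts(p):
    """The three circular shifts of a 3-tuple."""
    x, y, z = p
    return [(x, y, z), (z, x, y), (y, z, x)]

vertices = set()
for seed in seeds:
    for shifted in circular_shifts(seed):
        for signs in product((1.0, -1.0), repeat=3):   # eight sign patterns
            vertices.add(tuple(s * c for s, c in zip(signs, shifted)))

print(len(vertices))   # 120, matching the polyhedron's vertex count

# Sanity check: every vertex lies on the same circumsphere (squared radius 11,
# which follows from the first seed: 1 + 1 + 9 = 11).
assert all(isclose(sum(c * c for c in v), 11.0) for v in vertices)
```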
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**International Journal of Medical Sciences** International Journal of Medical Sciences: The International Journal of Medical Sciences is a peer-reviewed open access medical journal published by Ivyspring International Publisher covering research in basic medical sciences. Articles include original research papers, reviews, and short research communications. Full text of published articles is archived in PubMed Central. The current editor-in-chief is Dennis D. Taub (National Institute on Aging). Abstracting and indexing: The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2016 impact factor of 2.399.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ancestral graph** Ancestral graph: In statistics and Markov modeling, an ancestral graph is a type of mixed graph used to provide a graphical representation of the result of marginalizing out one or more vertices in a graphical model that takes the form of a directed acyclic graph. Definition: Ancestral graphs are mixed graphs with three kinds of edges: directed edges, drawn as an arrow from one vertex to another, bidirected edges, which have an arrowhead at both ends, and undirected edges, which have no arrowheads. An ancestral graph is required to satisfy some additional constraints: If there is an edge from a vertex u to another vertex v, with an arrowhead at v (that is, either an edge directed from u to v or a bidirected edge), then there does not exist a path from v to u consisting of undirected edges and/or directed edges oriented consistently with the path. Definition: If a vertex v is an endpoint of an undirected edge, then it is not also the endpoint of an edge with an arrowhead at v. Applications: Ancestral graphs are used to depict conditional independence relations between variables in Markov models.
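The two defining constraints above lend themselves to a direct check. The sketch below is a minimal illustration, not a standard library function: it assumes a hypothetical edge-list representation (separate lists of directed, bidirected, and undirected edges) and tests whether a mixed graph satisfies both ancestral-graph conditions.

```python
from collections import defaultdict

def is_ancestral(directed, bidirected, undirected):
    """Check the two ancestral-graph conditions for a mixed graph.

    directed   -- iterable of (u, v) pairs meaning u -> v
    bidirected -- iterable of (u, v) pairs meaning u <-> v
    undirected -- iterable of (u, v) pairs meaning u - v
    (A hypothetical representation chosen for this sketch.)
    """
    # Adjacency for paths "oriented consistently": undirected edges are
    # walkable both ways, directed edges only in their own direction.
    step = defaultdict(set)
    for u, v in directed:
        step[u].add(v)
    for u, v in undirected:
        step[u].add(v)
        step[v].add(u)

    def reaches(src, dst):
        # Depth-first search along such consistently oriented paths.
        stack, seen = [src], {src}
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            for nxt in step[node] - seen:
                seen.add(nxt)
                stack.append(nxt)
        return False

    # Condition 1: an arrowhead at v coming from u forbids such a path
    # from v back to u.
    arrowheads = list(directed)
    arrowheads += [(u, v) for u, v in bidirected] + [(v, u) for u, v in bidirected]
    if any(reaches(v, u) for u, v in arrowheads):
        return False

    # Condition 2: an endpoint of an undirected edge never carries an arrowhead.
    undirected_ends = {x for e in undirected for x in e}
    arrowhead_ends = {v for _, v in arrowheads}
    return not (undirected_ends & arrowhead_ends)

# Example: a -> b with b <-> c is ancestral; adding c -> a makes c an ancestor
# of b, creating an almost-directed cycle, so the graph is no longer ancestral.
print(is_ancestral([("a", "b")], [("b", "c")], []))              # True
print(is_ancestral([("a", "b"), ("c", "a")], [("b", "c")], []))  # False
```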
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Method of factors** Method of factors: The Method of Factors is a technique in cognitive behavioral therapy used to organise a session of exposure therapy. Rather than generating in advance a list of objects or situations (a static hierarchy) representing escalating levels of arousal and intensity of fear for a particular phobia, the Method of Factors involves identifying a fear-provoking stimulus and then identifying those features of the stimulus that control the intensity of fear. The hierarchy then emerges in the course of the exposure session as the patient seeks to maintain moderately high arousal. Because of this emergent nature, it is referred to as a Dynamic Hierarchy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Occasional poetry** Occasional poetry: Occasional poetry is poetry composed for a particular occasion. In the history of literature, it is often studied in connection with orality, performance, and patronage. Term: As a term of literary criticism, "occasional poetry" describes the work's purpose and the poet's relation to subject matter. It is not a genre, but several genres originate as occasional poetry, including epithalamia (wedding songs), dirges or funerary poems, paeans, and victory odes. Occasional poems may also be composed exclusive of or within any given set of genre conventions to commemorate single events or anniversaries, such as birthdays, foundings, or dedications. Term: Occasional poetry is often lyric because it originates as performance, in antiquity and into the 16th century even with musical accompaniment; at the same time, because performance implies an audience, its communal or public nature can place it in contrast with the intimacy or personal expression of emotion often associated with the term "lyric".Occasional poetry was a significant and even characteristic form of expression in ancient Greek and Roman culture, and has continued to play a prominent if sometimes aesthetically debased role throughout Western literature. Poets whose body of work features occasional poetry that stands among their highest literary achievements include Pindar, Horace, Ronsard, Jonson, Dryden, Milton, Goethe, Yeats, and Mallarmé. The occasional poem (French pièce d'occasion, German Gelegenheitsgedichte) is also important in Persian, Arabic, Chinese, and Japanese literature, and its ubiquity among virtually all world literatures suggests the centrality of occasional poetry in the origin and development of poetry as an art form. Term: Goethe declared that "Occasional Poetry is the highest kind," and Hegel gave it a central place in the philosophical examination of how poetry interacts with life: Poetry's living connection with the real world and its occurrences in public and private affairs is revealed most amply in the so-called pièces d'occasion. If this description were given a wider sense, we could use it as a name for nearly all poetic works: but if we take it in the proper and narrower sense we have to restrict it to productions owing their origin to some single present event and expressly devoted to its exaltation, embellishment, commemoration, etc. But by such entanglement with life poetry seems again to fall into a position of dependence, and for this reason it has often been proposed to assign the whole sphere of pièces d'occasion an inferior value although to some extent, especially in lyric poetry, the most famous works belong to this class." In the 19th and 20th centuries, newspapers in the United States often published occasional poems, and memorial poems for floods, train accidents, mine disasters and the like were frequently written as lyrics in ballad stanzas.A high-profile example of a 21st-century occasional poem is Elizabeth Alexander's "Praise Song for the Day," written for Barack Obama's 2009 US presidential inauguration, and read by the poet during the event to a television audience of around 38 million. Selected bibliography: Sugano, Marian Zwerling. The Poetics of the Occasion: Mallarmé and the Poetry of Circumstance. Stanford University Press, 1992. Limited preview online.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ocean heat content** Ocean heat content: Ocean heat content (OHC) is the energy absorbed and stored by oceans. To calculate the ocean heat content, measurements of ocean temperature at many different locations and depths are required. Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. Between 1971 and 2018, the rise in OHC accounted for over 90% of Earth's excess thermal energy from global heating. The main driver of this OHC increase was anthropogenic forcing via rising greenhouse gas emissions.: 1228  By 2020, about one third of the added energy had propagated to depths below 700 meters. In 2022, the world's oceans, as given by OHC, were again the hottest in the historical record and exceeded the previous 2021 record maximum. The four highest ocean heat observations occurred in the period 2019–2022, with the North Pacific, North Atlantic, the Mediterranean, and the Southern Ocean all recording their highest heat observations for more than sixty years. Ocean heat content and sea level rise are important indicators of climate change. Ocean water absorbs solar energy efficiently and has far greater heat capacity than atmospheric gases. As a result, the top few meters of the ocean contain more thermal energy than the entire Earth's atmosphere. Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Furthermore, since the year 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in OHC. OHC has been increasing at a steady or accelerating rate since at least 1990. The net rate of change in the upper 2000 meters from 2003 to 2018 was +0.58±0.08 W/m² (or an annual mean energy gain of 9.3 zettajoules). The uncertainty is primarily due to the challenges of making multidecadal measurements with sufficient accuracy and spatial coverage. Changes in ocean heat content have far-reaching consequences for the planet's marine and terrestrial ecosystems, including multiple impacts to coastal ecosystems and communities. Direct effects include variations in sea level and sea ice, shifts in intensity of the water cycle, and the migration and extinction of marine life. Calculations: Definition Ocean heat content is "the total amount of heat stored by the oceans". To calculate the ocean heat content, measurements of ocean temperature at many different locations and depths are required. Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. Thus, total ocean heat content is a volume integral of the product of temperature, density, and heat capacity over the three-dimensional region of the ocean for which data is available. The bulk of measurements have been performed at depths shallower than about 2000 m (1.25 miles). The areal density of ocean heat content between two depths is defined as a definite integral: H = c_p ∫_{h₂}^{h₁} ρ(z) T(z) dz, where c_p is the specific heat capacity of sea water, h₂ is the lower depth, h₁ is the upper depth, ρ(z) is the seawater density profile, and T(z) is the temperature profile. In SI units, H has units of joules per square metre (J·m⁻²). Calculations: In practice, the integral can be approximated by summation of a smooth and otherwise well-behaved sequence of temperature and density data. Seawater density is a function of temperature, salinity, and pressure.
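To make the definition concrete, the following sketch approximates the areal heat density H by a trapezoidal sum of c_p·ρ(z)·T(z) over discrete depth bins, as suggested by the "approximated by summation" remark above. The profile values and the constant specific heat are invented for illustration; real calculations use observed profiles and a full equation of state for seawater density.

```python
# Approximate H = c_p * integral of rho(z) * T(z) dz between two depths
# by a discrete sum over depth bins (illustrative values only).

CP_SEAWATER = 3850.0  # J/(kg·K), rough specific heat of seawater (assumed constant)

# Hypothetical profile: (depth in m, temperature in °C, density in kg/m^3)
profile = [
    (0,    18.0, 1024.0),
    (100,  15.0, 1025.0),
    (300,  10.0, 1026.5),
    (700,   6.0, 1027.5),
    (2000,  3.0, 1028.5),
]

def areal_heat_density(profile, cp=CP_SEAWATER):
    """Trapezoidal approximation of the areal ocean heat content, in J/m^2."""
    total = 0.0
    for (z0, t0, r0), (z1, t1, r1) in zip(profile, profile[1:]):
        dz = z1 - z0                       # layer thickness, m
        mean_rho_t = 0.5 * (r0 * t0 + r1 * t1)
        total += cp * mean_rho_t * dz
    return total

print(f"H ≈ {areal_heat_density(profile):.3e} J/m^2 over the top 2000 m")
```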
Despite the cold and great pressure at ocean depth, water is nearly incompressible and favors the liquid state for which its density is maximized. Calculations: Measurements of temperature versus ocean depth generally show an upper mixed layer (0–200 m), a thermocline (200–1500 m), and a deep ocean layer (>1500 m). These boundary depths are only rough approximations. Sunlight penetrates to a maximum depth of about 200 m; the top 80 m of which is the habitable zone for photosynthetic marine life covering over 70% of Earth's surface. Wave action and other surface turbulence help to equalize temperatures throughout the upper layer. Calculations: Unlike surface temperatures which decrease with latitude, deep-ocean temperatures are relatively cold and uniform in most regions of the world. About 50% of all ocean volume is at depths below 3000 m (1.85 miles), with the Pacific Ocean being the largest and deepest of five oceanic divisions. The thermocline is the transition between upper and deep layers in terms of temperature, nutrient flows, abundance of life, and other properties. It is semi-permanent in the tropics, variable in temperate regions (often deepest during the summer), and shallow to nonexistent in polar regions. Calculations: Measurements Ocean heat content measurements come with difficulties, especially before the deployment of the Argo profiling floats. Due to poor spatial coverage and poor quality of data, it has not always been easy to distinguish between long term global warming trends and climate variability. Examples of these complicating factors are the variations caused by El Niño–Southern Oscillation or changes in ocean heat content caused by major volcanic eruptions.Argo is an international program of robotic profiling floats deployed globally since the start of the 21st century. The program's initial 3000 units had expanded to nearly 4000 units by year 2020. At the start of each 10-day measurement cycle, a float descends to a depth of 1000 meters and drifts with the current there for nine days. It then descends to 2000 meters and measures temperature, salinity (conductivity), and depth (pressure) over a final day of ascent to the surface. At the surface the float transmits the depth profile and horizontal position data through satellite relays before repeating the cycle.Starting 1992, the TOPEX/Poseidon and subsequent Jason satellite series have observed vertically integrated OHC, which is a major component of sea level rise. The partnership between Argo and Jason measurements has yielded ongoing improvements to estimates of OHC and other global ocean properties. Causes for heat uptake: The more abundant equatorial solar irradiance which is absorbed by Earth's tropical surface waters drives the overall poleward propagation of ocean heat. The surface also exchanges energy with the lower troposphere, and thus responds to long-term changes in cloud albedo, greenhouse gases, and other factors in the Earth's energy budget. Over time, a sustained imbalance in the budget enables a net flow of heat either into or out of ocean depth via thermal conduction, downwelling, and upwelling.Oceans are Earth's largest thermal reservoir that function to regulate the planet's climate; acting as both a sink and a source of energy. Releases of OHC to the atmosphere occur primarily via evaporation and enable the planetary water cycle. 
Concentrated releases in association with high sea surface temperatures help drive tropical cyclones, atmospheric rivers, atmospheric heat waves and other extreme weather events that can penetrate far inland.The ocean also functions as a sink and source of carbon, with a role comparable to that of land regions in Earth's carbon cycle. In accordance with the temperature dependence of Henry's law, warming surface waters are less able to absorb atmospheric gases including the growing emissions of carbon dioxide and other greenhouse gases from human activity. Recent observations and changes: Numerous independent studies in recent years have found a multi-decadal rise in OHC of upper ocean regions that has begun to penetrate to deeper regions. The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased.: 1228  The heat uptake results from a persistent warming imbalance in Earth's energy budget that is most fundamentally caused by the anthropogenic increase in atmospheric greenhouse gases.: 41  The rate in which the ocean absorbs anthropogenic carbon dioxide has approximately tripled from the early 1960s to the late 2010s, a scaling proportional to the increase in atmospheric carbon dioxide. There is very high confidence that increased ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales.: 1233 Studies based on Argo measurements indicate that ocean surface winds, especially the subtropical trade winds in the Pacific Ocean, change ocean heat vertical distribution. This results in changes among ocean currents, and an increase of the subtropical overturning, which is also related to the El Niño and La Niña phenomenon. Depending on stochastic natural variability fluctuations, during La Niña years around 30% more heat from the upper ocean layer is transported into the deeper ocean. Furthermore, studies have shown that approximately one-third of the observed warming in the ocean is taking place in the 700-2000 meter ocean layer.Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation. Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO). This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake. Recent observations and changes: The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to temperature and salinity relation. Additionally, a study from 2022 on anthropogenic warming in the ocean indicates that 62% of the warming from the years between 1850 and 2018 in the North Atlantic along 25°N is kept in the water below 700 m, where a major percentage of the ocean's surplus heat is stored. 
Although the upper 2000 m of the oceans have experienced warming on average since the 1970s, the rate of ocean warming varies regionally, with the subpolar North Atlantic warming more slowly and the Southern Ocean taking up a disproportionately large amount of heat due to anthropogenic greenhouse gas emissions.: 1230 Deep-ocean warming below 2000 m has been largest in the Southern Ocean compared to other ocean basins.: 1230 Impacts: Warming oceans are one reason for coral bleaching and contribute to the migration of marine species. Marine heat waves are regions of life-threatening and persistently elevated water temperatures. Redistribution of the planet's internal energy by atmospheric circulation and ocean currents produces internal climate variability, often in the form of irregular oscillations, and helps to sustain the global thermohaline circulation. The increase in OHC accounts for 30–40% of global sea-level rise from 1900 to 2020 because of thermal expansion. Impacts: It is also an accelerator of sea ice, iceberg, and tidewater glacier melting. The ice loss reduces polar albedo, amplifying both the regional and global energy imbalances. The resulting ice retreat has been rapid and widespread for Arctic sea ice, and within northern fjords such as those of Greenland and Canada. Impacts: Impacts to Antarctic sea ice and the vast Antarctic ice shelves which terminate into the Southern Ocean have varied by region and are also increasing due to warming waters. Breakup of the Thwaites Ice Shelf and its West Antarctica neighbors contributed about 10% of sea-level rise in 2020. A study in 2015 concluded that ocean heat content increases by the Pacific Ocean were compensated by an abrupt distribution of OHC into the Indian Ocean. Warming of the deep ocean has the further potential to melt and release some of the vast store of frozen methane hydrate deposits that have naturally accumulated there.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Silver medal** Silver medal: A silver medal in sports and other similar areas involving competition is a medal made of, or plated with, silver awarded to the second-place finisher, or runner-up, of contests or competitions such as the Olympic Games, Commonwealth Games, etc. The outright winner receives a gold medal and the third place a bronze medal. More generally, silver is traditionally a metal sometimes used for all types of high-quality medals, including artistic ones. Sports: Olympic Games At the first modern Olympic Games in 1896, the winners' medals were in fact made of silver. The custom of gold-silver-bronze for the first three places dates from the 1904 games and has been copied for many other sporting events. Minting the medals is the responsibility of the host city. From 1928 to 1968 the design was always the same: the obverse showed a generic design by Florentine artist Giuseppe Cassioli with text giving the host city; the reverse showed another generic design of an Olympic champion. From 1972 to 2000, Cassioli's design (or a slight reworking) remained on the obverse with a custom design by the host city on the reverse. Noting that Cassioli's design showed a Roman amphitheatre for what was originally a Greek games, a new obverse design was commissioned for the 2004 Athens Games. Winter Olympics medals have been of more varied design. Sports: The Open Championship In The Open Championship golf tournament, the Silver Medal is an award presented to the lowest-scoring amateur player at the tournament. Sports: Rejection of silver medals In many sports with an elimination tournament, including those with a third place playoff (such as Olympic ice hockey, Olympic soccer, and the FIFA World Cup), silver is the only medal given to a team that loses its final game, whereas gold and bronze are earned by teams winning their final matches. Some notable athletes, such as Jocelyne Larocque at the 2018 Olympics, have removed their silver medals right after receiving them; Larocque was later ordered by an International Ice Hockey Federation official to put her silver medal back on. Military and government: Some countries present military and civilian decorations known as Silver Medals. These include: Austria's Silver Medal for Services to the Republic of Austria; Italy's Silver Medal of Military Valor; South Africa's Silver Medal for Merit; and the Civil Air Patrol's Silver Medal of Valor in the United States. Other awards: The Zoological Society of London awards a Silver Medal "to a Fellow of the Society or any other person for contributions to the understanding and appreciation of zoology, including such activities as public education in natural history, and wildlife conservation." The Royal Academy of Engineering awards a Silver Medal "for an outstanding and demonstrated personal contribution to UK engineering, which results in successful market exploitation, by an engineer with less than 22 years in full-time employment or equivalent."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Photon mapping** Photon mapping: In computer graphics, photon mapping is a two-pass global illumination rendering algorithm developed by Henrik Wann Jensen between 1995 and 2001 that approximately solves the rendering equation for integrating light radiance at a given point in space. Rays from the light source (like photons) and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. The algorithm is used to realistically simulate the interaction of light with different types of objects (similar to other photorealistic rendering techniques). Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water (including caustics), diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. Photon mapping can also be extended to more accurate simulations of light, such as spectral rendering. Progressive photon mapping (PPM) starts with ray tracing and then adds more and more photon mapping passes to provide a progressively more accurate render. Photon mapping: Unlike path tracing, bidirectional path tracing, volumetric path tracing, and Metropolis light transport, photon mapping is a "biased" rendering algorithm, which means that averaging infinitely many renders of the same scene using this method does not converge to a correct solution to the rendering equation. However, it is a consistent method, and the accuracy of a render can be increased by increasing the number of photons. As the number of photons approaches infinity, a render will get closer and closer to the solution of the rendering equation. Effects: Caustics Light refracted or reflected causes patterns called caustics, usually visible as concentrated patches of light on nearby surfaces. For example, as light rays pass through a wine glass sitting on a table, they are refracted and patterns of light are visible on the table. Photon mapping can trace the paths of individual photons to model where these concentrated patches of light will appear. Effects: Diffuse interreflection Diffuse interreflection is apparent when light from one diffuse object is reflected onto another. Photon mapping is particularly adept at handling this effect because the algorithm reflects photons from one surface to another based on that surface's bidirectional reflectance distribution function (BRDF), and thus light from one object striking another is a natural result of the method. Diffuse interreflection was first modeled using radiosity solutions. Photon mapping differs though in that it separates the light transport from the nature of the geometry in the scene. Color bleed is an example of diffuse interreflection. Effects: Subsurface scattering Subsurface scattering is the effect evident when light enters a material and is scattered before being absorbed or reflected in a different direction. Subsurface scattering can accurately be modeled using photon mapping. This was the original way Jensen implemented it; however, the method becomes slow for highly scattering materials, and bidirectional surface scattering reflectance distribution functions (BSSRDFs) are more efficient in these situations. Usage: Construction of the photon map (1st pass) With photon mapping, light packets called photons are sent out into the scene from the light sources. 
Whenever a photon intersects with a surface, the intersection point and incoming direction are stored in a cache called the photon map. Typically, two photon maps are created for a scene: one especially for caustics and a global one for other light. After intersecting the surface, a probability for either reflecting, absorbing, or transmitting/refracting is given by the material. A Monte Carlo method called Russian roulette is used to choose one of these actions. If the photon is absorbed, no new direction is given, and tracing for that photon ends. If the photon reflects, the surface's bidirectional reflectance distribution function is used to determine the ratio of reflected radiance. Finally, if the photon is transmitting, a function for its direction is given depending upon the nature of the transmission. Usage: Once the photon map is constructed (or during construction), it is typically arranged in a manner that is optimal for the k-nearest neighbor algorithm, as photon look-up time depends on the spatial distribution of the photons. Jensen advocates the usage of kd-trees. The photon map is then stored on disk or in memory for later usage. Rendering (2nd pass) In this step of the algorithm, the photon map created in the first pass is used to estimate the radiance of every pixel of the output image. For each pixel, the scene is ray traced until the closest surface of intersection is found. At this point, the rendering equation is used to calculate the surface radiance leaving the point of intersection in the direction of the ray that struck it. To facilitate efficiency, the equation is decomposed into four separate factors: direct illumination, specular reflection, caustics, and soft indirect illumination. For an accurate estimate of direct illumination, a ray is traced from the point of intersection to each light source. As long as a ray does not intersect another object, the light source is used to calculate the direct illumination. For an approximate estimate of indirect illumination, the photon map is used to calculate the radiance contribution. Specular reflection can be, in most cases, calculated using ray tracing procedures (as it handles reflections well). The contribution to the surface radiance from caustics is calculated using the caustics photon map directly. The number of photons in this map must be sufficiently large, as the map is the only source for caustics information in the scene. For soft indirect illumination, radiance is calculated using the photon map directly. This contribution, however, does not need to be as accurate as the caustics contribution and thus uses the global photon map. Calculating radiance using the photon map In order to calculate surface radiance at an intersection point, one of the cached photon maps is used. The steps are: Gather the N nearest photons using the nearest neighbor search function on the photon map. Let S be the sphere that contains these N photons. For each photon, divide the amount of flux (real photons) that the photon represents by the area of S and multiply by the BRDF applied to that photon. The sum of those results for each photon represents total surface radiance returned by the surface intersection in the direction of the ray that struck it. Usage: Optimizations To avoid emitting unneeded photons, the initial direction of the outgoing photons is often constrained. 
Instead of simply sending out photons in random directions, they are sent in the direction of a known object that is a desired photon manipulator to either focus or diffuse the light. There are many other refinements that can be made to the algorithm: for example, choosing the number of photons to send, and where and in what pattern to send them. It would seem that emitting more photons in a specific direction would cause a higher density of photons to be stored in the photon map around the position where the photons hit, and thus measuring this density would give an inaccurate value for irradiance. This is true; however, the algorithm used to compute radiance does not depend on irradiance estimates. Usage: For soft indirect illumination, if the surface is Lambertian, then a technique known as irradiance caching may be used to interpolate values from previous calculations. Usage: To avoid unnecessary collision testing in direct illumination, shadow photons can be used. During the photon mapping process, when a photon strikes a surface, in addition to the usual operations performed, a shadow photon is emitted in the same direction the original photon came from that goes all the way through the object. The next object it collides with causes a shadow photon to be stored in the photon map. Then during the direct illumination calculation, instead of sending out a ray from the surface to the light that tests collisions with objects, the photon map is queried for shadow photons. If none are present, then the object has a clear line of sight to the light source and additional calculations can be avoided. Usage: To optimize image quality, particularly of caustics, Jensen recommends use of a cone filter. Essentially, the filter gives weight to photons' contributions to radiance depending on how far they are from ray-surface intersections. This can produce sharper images. Image space photon mapping achieves real-time performance by computing the first and last scattering using a GPU rasterizer. Variations Although photon mapping was designed to work primarily with ray tracers, it can also be extended for use with scanline renderers.
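The radiance-estimation step described under "Calculating radiance using the photon map" can be sketched compactly. The code below is an illustrative toy, not Jensen's implementation: it gathers the N nearest photons by brute force (a real renderer would use the kd-tree mentioned above), takes the radius of the sphere bounding them, and divides the BRDF-weighted photon flux by the sphere's cross-sectional area πr², one common reading of "the area of S".

```python
from dataclasses import dataclass
from math import pi

@dataclass
class Photon:
    position: tuple      # (x, y, z) hit point
    direction: tuple     # incoming direction at the hit point
    power: tuple         # flux carried by the photon, per color channel (r, g, b)

def estimate_radiance(photon_map, x, brdf, n_nearest=50):
    """Density estimate of reflected radiance at point x from a photon map.

    photon_map -- list of Photon records (brute-force search, for clarity only)
    brdf       -- callable returning the BRDF value for a photon's incoming direction
    """
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p.position, x))

    nearest = sorted(photon_map, key=dist2)[:n_nearest]
    if not nearest:
        return (0.0, 0.0, 0.0)
    r2 = dist2(nearest[-1])              # squared radius of the bounding sphere

    radiance = [0.0, 0.0, 0.0]
    for p in nearest:
        f = brdf(p.direction)            # weight the photon's flux by the BRDF
        for i in range(3):
            radiance[i] += f * p.power[i]
    area = pi * r2 if r2 > 0.0 else 1.0  # cross-sectional area of the sphere
    return tuple(c / area for c in radiance)

# Usage sketch with a constant (Lambertian-like) BRDF of 1/pi and made-up photons:
photons = [Photon((0.1, 0.0, 0.0), (0.0, 0.0, -1.0), (0.02, 0.02, 0.02)) for _ in range(100)]
print(estimate_radiance(photons, (0.0, 0.0, 0.0), brdf=lambda d: 1.0 / pi))
```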
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lateral propriospinal tract** Lateral propriospinal tract: The lateral propriospinal tract is a collection of nerve fibers, ascending, descending, crossed and uncrossed, that interconnect various levels of the spinal cord. Its fibers are largely myelinated. It is a component of the lateral white columns. Most prominent in the cervical and lumbar regions, it is located close to the spinal central gray. Shorter fibers lie closer to the gray, and longer fibers lie further from it. The tract is one of three propriospinal tracts in which most pathways intrinsic to the spinal cord are located. The others are the ventral propriospinal tract and the dorsal propriospinal tract.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Unique hues** Unique hues: Unique hue is a term used in perceptual psychology of color vision and generally applied to the purest hues of blue, green, yellow and red. They cannot be described as a mixture of other hues, and are therefore pure, whereas all other hues are composite. The neural correlate of the unique hues are approximated by the extremes of the opponent channels in opponent process theory. In this context, unique hues are sometimes described as "psychological primaries" as they can be considered analogous to the primary colors of trichromatic color theory. Opponent Process Theory: The concept of certain hues as 'unique' came with the introduction of opponent process theory, which Ewald Hering introduced in 1878. Hering first proposed the idea that red, green, blue, and yellow were unique hues ("Urfarben"), based on the concept that these colors could not be simultaneously perceived. These hues represented the extremes of two perpendicular axes of color: a red-green axis and a blue-yellow axis. While this theory with 4 unique hues was initially considered contradictory to the Young-Helmholtz trichromatic theory's three primary colors, the two theories were reconciled theoretically by Erwin Schrödinger and the later discovery of color-opponent cells in the retina and lateral geniculate nucleus (LGN) related the two theories physiologically. Physiology: A physiological pathway from the cones in the retina to a neural correlate for the psychological unique hues has been elusive. Mollon and Jordan state: “...the nature of the unique hues remains mysterious and we do not know whether they tell us anything about the neural organisation of the visual system.” The first transformation of light to a neuronal signal (visual phototransduction) yields 3 channels, each proportional to the quantal catch of one cone type (L-, M- and S-), estimated by the LMS color space. The second transformation occurs in the color-opponent cells and produces the opponent process channels: L+M (luminance), L-M (red-green), and S-(L+M) (blue-yellow), the latter of which form the cardinal axes.Hering and researchers until the mid 20th century expected that the cardinal axes would correspond to the unique hues, i.e. the unique hues would exist when one opponent channel is maximally stimulated and the other opponent channel is in equilibrium. However, subsequent psychophysical tests demonstrated that while unique red lies on the extreme of the L-M axis, the other unique hues do not lie on the extremes of either opponent channel (L-M and S-(L+M) axes). Therefore, the cardinal axes are not a direct correlate of our experience of unique hues and a further (third) transformation must be applied to identify correlates, i.e. each unique hue is a synthesis of the opponent process channels. One theory suggests a conversion at a point later than the LGN, and that this produces non-linear combinations resulting in our experience of color being non-linear to the cardinal axes. However, while opponent-cells have been found in the LGN that respond to cone combinations other than those of the cardinal axes, such as M-S, there is no physiological understanding of this third transformation. 
An opposing theory therefore suggests that hues are learned based on variations in the visual environment; that unique hues represent an adaptation away from the cardinal axes and unique hues cannot be explained by relative numbers of excited L- and M-cones or their sensitivities.There is mixed evidence as to whether unique hues are perceptually privileged compared to other colors. Some research suggests that there is no greater sensitivity for unique hues compared to other colors, but other evidence suggests there is greater sensitivity for yellows and blues, which may be due to them coinciding with the daylight locus. There is no direct evidence that larger populations of neurons are dedicated to unique hues compared to other colors, but some EEG research suggests that the latency of some EEG components may be shorter for unique hues compared to non-unique hues, and that colors can be decoded with a higher accuracy from EEG signals when they are unique hues. Measurement: Unique hues are typically quantified as wavelength of monochromatic light, Munsell color, or hue degree derived from a RGB color space. The subject is asked to determine the hue that is not contaminated by neighboring unique hues, either by the method of adjustment, where the subject freely adjusts the color until they reach the unique hue, or two-alternative forced choice (2AFC) staircases. In the latter, the subject iteratively chooses which of two spectral color options is more pure. The unchosen color is replaced with a color on the opposite side of the chosen color. When the same color is chosen twice in a row, this constitutes a reversal, and the step size decreases. After a certain number of reversals, the wavelength/hue of the unique hue is determined. Variability: The unique hues have been experimentally determined to represent average hue angles of 353° (carmine-red), 128° (cobalt green), 228° (cobalt blue), 58° (yellow). However, the values have large inter-subject and slight intra-subject variability, depending on the state of adaptation of the visual system. For example, the wavelength attributed to unique green varies by up to 70 nm between subjects. The variance greatly exceeds the variance that would be expected from differing L:M cone ratios or spectral sensitivities, but the source of this variance has not been identified.Unique hues are a useful tool in measuring intra-subject variability in color perception. Neitz et al (2002) show that unique yellow shifts towards longer wavelengths following multi-day adaptation to red environments, and is also shifted for deuteranomalous colorblind observers. The researchers interpret these results as suggesting a long-term normalisation mechanism which can change the weighting of cone inputs to compensate for global changes in illumination, allowing color vision to remain optimal in a changing chromatic environment. Unique hues have also been shown to change over the course of the year as a result of adaptation to differences in the color spectrum of the environment in summer compared to winter, and have been shown to change after surgery to remove cataracts.Unique hues have played an important role in understanding linguistic relativity or the idea that language has a significant influence on thought. The way in which language and culture affects color naming is debated and not yet fully understood. 
The Universalist side of the debate argues that unique color terms are biologically tied to the human visual system and the visual environment and are the same regardless of language and culture. The Relativist side argues that language contextualizes thought and therefore perception, the idea being that having a different environment and culture causes the perception of the individual to be different. Variability: In CVD Unique hues have different meaning in subjects with color vision deficiency. Unique yellow was determined to skew to higher wavelengths for anomalous trichromats (deuteranomaly), approaching 700 nm for strong deutans. Dichromats, who possess a single chromatic opponent channel, thereby have unique hues at the extremes of their visible spectrum, where each cone is excited independently, which renders unique hues an ineffective tool for quantifying dichromatic color vision. However, it is common to use similar techniques for defining the wavelength corresponding to "unique white" (achromatic point) of dichromats as means for quantifying their color vision. While imbalance in the L:M cone ratio is linked to mild red-green CVD, there is no dependence of unique yellow on the L:M ratio. Likewise, there is no change to unique yellow for carriers of dichromacy.
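The two-alternative forced-choice staircase described under Measurement can be illustrated with a toy simulation. Everything below is invented for illustration (the simulated observer, the noise level, the starting hue angle); it only shows the mechanics of replacing the rejected alternative on the opposite side of the chosen hue and halving the step after a reversal.

```python
import random

def staircase_unique_hue(true_hue=128.0, start=100.0, step=16.0,
                         min_step=0.5, noise=2.0, seed=1, max_trials=200):
    """Toy simulation of the 2AFC staircase described above.

    A simulated observer picks whichever of two candidate hue angles looks
    "purer" (closer to an internal unique hue, plus noise); the rejected
    candidate is replaced on the opposite side of the chosen one; choosing
    the same hue twice in a row counts as a reversal and halves the step.
    All numbers are invented for illustration, not real psychophysics.
    """
    rng = random.Random(seed)
    chosen, other = start, start + step
    previous_choice = None
    for _ in range(max_trials):              # hard cap on trials
        if step <= min_step:
            break
        a, b = chosen, other
        # Noisy judgement of which candidate looks purer.
        chosen = min((a, b), key=lambda h: abs(h - true_hue) + rng.gauss(0.0, noise))
        rejected = b if chosen == a else a
        if chosen == previous_choice:
            step /= 2.0                      # reversal: same hue chosen twice in a row
        # Replace the rejected candidate on the opposite side of the chosen one.
        other = chosen - step if rejected > chosen else chosen + step
        previous_choice = chosen
    return chosen

print(f"Estimated unique hue angle: {staircase_unique_hue():.1f} deg")
```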
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interrupt** Interrupt: In digital computers, an interrupt (sometimes referred to as a trap) is a request for the processor to interrupt currently executing code (when permitted), so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is often temporary, allowing the software to resume normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error.Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implement computer multitasking, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven. History: Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops, waiting for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions.The UNIVAC 1103A computer is generally credited with the earliest use of interrupts in 1953. Earlier, on the UNIVAC I (1951) "Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop." The IBM 650 (1954) incorporated the first occurrence of interrupt masking. The National Bureau of Standards DYSEAC (1954) was the first to use interrupts for I/O. The IBM 704 was the first to use interrupts for debugging, with a "transfer trap", which could invoke a special routine when a branch instruction was encountered. The MIT Lincoln Laboratory TX-2 system (1957) was the first to provide multiple levels of priority interrupts. Types: Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture. Types: Hardware interrupts A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from the operating system (OS) or, if there is no OS, from the bare metal program running on the CPU. Such external devices may be part of the computer (e.g., disk controller) or they may be external peripherals. For example, pressing a keyboard key or moving a mouse plugged into a PS/2 port triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Types: Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all incoming hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries. In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device. 
Types: On some older systems, such as the 1964 CDC 3600, all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority outstanding unmasked interrupt. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt (or for each interrupt source), often implemented as one or more interrupt vector tables. Types: Masking To mask an interrupt is to disable it, so it is deferred or ignored by the processor, while to unmask an interrupt is to enable it.Processors typically have an internal interrupt mask register, which allows selective enabling (and disabling) of hardware interrupts. Each interrupt signal is associated with a bit in the mask register. On some systems, the interrupt is enabled when the bit is set, and disabled when the bit is clear. On others, the reverse is true, and a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal may be ignored by the processor, or it may remain pending. Signals which are affected by the mask are called maskable interrupts. Types: Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called non-maskable interrupts (NMIs). These indicate high-priority events which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer. Missing interrupts One failure mode is when the hardware does not generate the expected interrupt for a change in state, causing the operating system to wait indefinitely. Depending on the details, the failure might affect only a single process or might have global impact. Some operating systems have code specifically to deal with this. Types: As an example, IBM Operating System/360 (OS/360) relies on a not-ready to ready device-end interrupt when a tape has been mounted on a tape drive, and will not read the tape label until that interrupt occurs or is simulated. IBM added code in OS/360 so that the VARY ONLINE command will simulate a device end interrupt on the target device. Types: Spurious interrupts A spurious interrupt is a hardware interrupt for which no source can be found. The term "phantom interrupt" or "ghost interrupt" may also be used to describe this phenomenon. Spurious interrupts tend to be a problem with a wired-OR interrupt circuit attached to a level-sensitive processor input. Such interrupts may be difficult to identify when a system misbehaves. Types: In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt line's bias resistor will cause a small delay before the processor recognizes that the interrupt source has been cleared. If the interrupting device is cleared too late in the interrupt service routine (ISR), there won't be enough time for the interrupt circuit to return to the quiescent state before the current instance of the ISR terminates. The result is the processor will think another interrupt is pending, since the voltage at its interrupt request input will be not high or low enough to establish an unambiguous internal logic 1 or logic 0. The apparent interrupt will have no identifiable source, hence the "spurious" moniker. Types: A spurious interrupt may also be the result of electrical anomalies due to faulty circuit design, high noise levels, crosstalk, timing issues, or more rarely, device errata.A spurious interrupt may result in system deadlock or other undefined operation if the ISR doesn't account for the possibility of such an interrupt occurring. 
As spurious interrupts are mostly a problem with wired-OR interrupt circuits, good programming practice in such systems is for the ISR to check all interrupt sources for activity and take no action (other than possibly logging the event) if none of the sources is interrupting. They may even lead to crashing of the computer in adverse scenarios. Types: Software interrupts A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler. Types: A software interrupt may be intentionally caused by executing a special instruction which, by design, invokes an interrupt when executed. Such instructions function similarly to subroutine calls and are used for a variety of purposes, such as requesting operating system services and interacting with device drivers (e.g., to read or write storage media). Software interrupts may also be triggered by program execution errors or by the virtual memory system. Types: Typically, the operating system kernel will catch and handle such interrupts. Some interrupts are handled transparently to the program - for example, the normal resolution of a page fault is to make the required page accessible in physical memory. But in other cases such as a segmentation fault the operating system executes a process callback. On Unix-like operating systems this involves sending a signal such as SIGSEGV, SIGBUS, SIGILL or SIGFPE, which may either call a signal handler or execute a default action (terminating the program). On Windows the callback is made using Structured Exception Handling with an exception code such as STATUS_ACCESS_VIOLATION or STATUS_INTEGER_DIVIDE_BY_ZERO.In a kernel process, it is often the case that some types of software interrupts are not supposed to happen. If they occur nonetheless, an operating system crash may result. Types: Terminology The terms interrupt, trap, exception, fault, and abort are used to distinguish types of interrupts, although "there is no clear consensus as to the exact meaning of these terms". The term trap may refer to any interrupt, to any software interrupt, to any synchronous software interrupt, or only to interrupts caused by instructions with trap in their names. In some usages, the term trap refers specifically to a breakpoint intended to initiate a context switch to a monitor program or debugger. It may also refer to a synchronous interrupt caused by an exceptional condition (e.g., division by zero, invalid memory access, illegal opcode), although the term exception is more common for this. Types: x86 divides interrupts into (hardware) interrupts and software exceptions, and identifies three types of exceptions: faults, traps, and aborts. (Hardware) interrupts are interrupts triggered asynchronously by an I/O device, and allow the program to be restarted with no loss of continuity. A fault is restartable as well but is tied to the synchronous execution of an instruction - the return address points to the faulting instruction. A trap is similar to a fault except that the return address points to the instruction to be executed after the trapping instruction; one prominent use is to implement system calls. 
An abort is used for severe errors, such as hardware errors and illegal values in system tables, and often does not allow a restart of the program.Arm uses the term exception to refer to all types of interrupts, and divides exceptions into (hardware) interrupts, aborts, reset, and exception-generating instructions. Aborts correspond to x86 exceptions and may be prefetch aborts (failed instruction fetches) or data aborts (failed data accesses), and may be synchronous or asynchronous. Asynchronous aborts may be precise or imprecise. MMU aborts (page faults) are synchronous. Triggering methods: Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes. Triggering methods: Level-triggered A level-triggered interrupt is requested by holding the interrupt signal at its particular (high or low) active logic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it to do so, typically after the device has been serviced. The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs. Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR. Triggering methods: Edge-triggered An edge-triggered interrupt is an interrupt signaled by a level transition on the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. If the pulse is too short to be detected by polled I/O then special hardware may be required to detect it. The important part of edge triggering is that the signal must transition to trigger the interrupt; for example, if the signal was high-low-low, there would only be one falling edge interrupt triggered, and the continued low level would not trigger a further interrupt. The signal must return to the high level and fall again in order to trigger a further interrupt. This contrasts with a level trigger where the low level would continue to create interrupts (if they are enabled) until the signal returns to its high level. Triggering methods: Computers with edge-triggered interrupts may include an interrupt register that retains the status of pending interrupts. Systems with interrupt registers generally have interrupt mask registers as well. Processor response: The processor samples the interrupt trigger signals or interrupt register during each instruction cycle, and will process the highest priority enabled interrupt found. Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger, thus ensuring: The processor status is saved in a known manner. 
Typically the status is stored in a known location, but on some systems it is stored on a stack. All instructions before the one pointed to by the PC have fully executed. No instruction beyond the one pointed to by the PC has been executed, or any such instructions are undone before handling the interrupt. Processor response: The execution state of the instruction pointed to by the PC is known. There are several different architectures for handling interrupts. In some, there is a single interrupt handler that must scan for the highest priority enabled interrupt. In others, there are separate interrupt handlers for separate interrupt types, separate I/O channels or devices, or both (PDP-11 Peripherals and Interfacing Handbook, Digital Equipment Corporation, p. 4). Several interrupt causes may have the same interrupt type and thus the same interrupt handler, requiring the interrupt handler to determine the cause. System implementation: Interrupts may be implemented in hardware as a distinct component with control lines, or they may be integrated into the memory subsystem. System implementation: If implemented in hardware as a distinct component, an interrupt controller circuit such as the IBM PC's Programmable Interrupt Controller (PIC) may be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. System implementation: Shared IRQs Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive state, which is its default state. Devices signal an interrupt by briefly driving the line to its non-default state, letting the line float (not actively driving it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements. System implementation: Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it won't interfere with interrupt signaling of other devices. However, it is easy for an edge-triggered interrupt to be missed - for example, when interrupts are masked for a period - and unless there is some type of hardware latch that records the event it is impossible to recover. This problem caused many "lockups" in early computer hardware because the processor did not know it was expected to do something.
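The usual remedy, described in the next paragraph, is a status register that latches each detected edge until software acknowledges it. The following is a minimal software model of that pattern; the register layout, the two-device setup and all names are invented for illustration and do not correspond to any particular interrupt controller:

```python
class InterruptController:
    """Toy model of an edge-triggered controller with a latching status register."""

    def __init__(self, num_lines):
        self.status = 0                       # one latch bit per line, set on a rising edge
        self.mask = (1 << num_lines) - 1      # 1 = enabled
        self.prev_level = 0                   # last sampled level of each line

    def sample(self, levels):
        """Latch a status bit for every line that went 0 -> 1 since the last sample."""
        rising = levels & ~self.prev_level
        self.status |= rising                 # latched even while the CPU has interrupts masked
        self.prev_level = levels

    def pending(self):
        return self.status & self.mask


def isr(ctrl, handlers):
    """Service every latched source, acknowledging (clearing) each bit as it is handled."""
    pending = ctrl.pending()
    while pending:
        line = (pending & -pending).bit_length() - 1   # lowest set bit serviced first here
        ctrl.status &= ~(1 << line)                    # acknowledge before handling
        handlers[line]()
        pending = ctrl.pending()


# Two devices pulse line 0 and line 1 while the CPU is busy; nothing is lost
# because each edge was latched in the status register.
ctrl = InterruptController(num_lines=2)
handlers = {0: lambda: print("serviced device 0"), 1: lambda: print("serviced device 1")}
ctrl.sample(0b01); ctrl.sample(0b00)   # device 0 pulses
ctrl.sample(0b10); ctrl.sample(0b00)   # device 1 pulses
isr(ctrl, handlers)
```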
More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed. System implementation: The aging Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines generally work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them. System implementation: There are three ways in which multiple devices "sharing the same line" can be arranged. First is by exclusive conduction (switching) or exclusive connection (to pins). Next is by bus (all connected to the same line, listening): cards on a bus must know when they are to talk and not talk (e.g., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, thus no negotiated speed is required. Each has its speed versus distance advantages. A trigger, generally, is the method by which excitation is detected: rising edge, falling edge, or threshold (an oscilloscope can trigger on a wide variety of shapes and conditions). System implementation: Triggering for software interrupts must be built into the software (both in the OS and in the application). A C application, for example, may have a trigger table (a table of functions) in its header, which both the application and the OS know of and use appropriately; this table is not related to hardware. However, do not confuse this with hardware interrupts which signal the CPU (the CPU enacts software from a table of functions, similarly to software interrupts). System implementation: Difficulty with sharing interrupt lines Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the square of the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such as PCI Express) and relieve this problem to a considerable extent. System implementation: Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts. System implementation: Hybrid Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time. System implementation: A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input.
Because NMIs generally signal major – or even catastrophic – system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This 2-step approach helps to eliminate false interrupts from affecting the system. System implementation: Message-signaled A message-signaled interrupt does not use a physical interrupt line. Instead, a device signals its request for service by sending a short message over some communications medium, typically a computer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write. Message-signalled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge. Message-signalled interrupt vectors can be shared, to the extent that the underlying communication medium can be shared. No additional effort is required. Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines. PCI Express, a serial computer bus, uses message-signaled interrupts exclusively. System implementation: Doorbell In a push button analogy applied to computer systems, the term doorbell or doorbell interrupt is often used to describe a mechanism whereby a software system can signal or notify a computer hardware device that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed upon memory locations, and "ring the doorbell" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that "rings the bell" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to a hard disk drive, or send them over a network, or encrypt them, etc. System implementation: The term doorbell interrupt is usually a misnomer. It is similar to an interrupt, because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as a polled region, sometimes the doorbell region writes through to physical device registers, and sometimes the doorbell region is hardwired directly to physical device registers. When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processor unit (CPU), if it has one. System implementation: Doorbell interrupts can be compared to Message Signaled Interrupts, as they have some similarities. Multiprocessor IPI In multiprocessor systems, a processor may send an interrupt request to another processor via inter-processor interrupts (IPI). 
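As a concrete illustration of the doorbell mechanism described above, the sketch below models a driver that publishes work into a shared region and then writes to an agreed doorbell location; the names (ring, doorbell, Device) and the ring size are invented for illustration and do not correspond to any real device interface:

```python
# Toy software model of the doorbell pattern: the driver fills a shared buffer,
# then "rings the doorbell" by writing to one agreed-upon location.
RING_SIZE = 4
shared_memory = {
    "ring": [None] * RING_SIZE,   # descriptors the device will consume
    "head": 0,                    # next slot the driver will fill
    "doorbell": 0,                # written last; tells the device new work exists
}

class Device:
    """Pretend hardware that is notified only through the doorbell write."""
    def __init__(self, mem):
        self.mem = mem
        self.tail = 0             # next slot the device will consume

    def doorbell_written(self):
        # Triggered by the write to `doorbell`; drain everything published so far.
        while self.tail != self.mem["head"]:
            work = self.mem["ring"][self.tail % RING_SIZE]
            print("device processing:", work)
            self.tail += 1

def submit(mem, device, payload):
    mem["ring"][mem["head"] % RING_SIZE] = payload   # 1. publish the data first
    mem["head"] += 1
    mem["doorbell"] = mem["head"]                    # 2. then ring the doorbell
    device.doorbell_written()                        #    (models the notification)

dev = Device(shared_memory)
submit(shared_memory, dev, "encrypt block 42")
submit(shared_memory, dev, "send packet 7")
```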
Performance: Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called an interrupt storm. There are various forms of livelocks, when the system spends all of its time processing interrupts to the exclusion of other required tasks. Performance: Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. To avoid such problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution.With multi-core processors, additional performance improvements in interrupt handling can be achieved through receive-side scaling (RSS) when multiqueue NICs are used. Such NICs provide multiple receive queues associated to separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to as IRQ affinity) can be manually configured.A purely software-based implementation of the receiving traffic distribution, known as receive packet steering (RPS), distributes received traffic among cores later in the data path, as part of the interrupt handler functionality. Advantages of RPS over RSS include no requirements for specific hardware, more advanced traffic distribution filters, and reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate of inter-processor interrupts (IPIs). Receive flow steering (RFS) takes the software-based approach further by accounting for application locality; further performance improvements are achieved by processing interrupt requests by the same cores on which particular network packets will be consumed by the targeted application. Typical uses: Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g., UART, Ethernet), handle keyboard and mouse events, and to respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals and traps. Typical uses: Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS task scheduler to manage execution of running processes, or both. Periodic interrupts are also commonly used to invoke sampling from input devices such as analog-to-digital converters, incremental encoder interfaces, and GPIO inputs, and to program output devices such as digital-to-analog converters, motor controllers, and GPIO outputs. Typical uses: A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may cause a process to run which is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shut-down while there still remains enough power to do so. 
Keyboard interrupts typically cause keystrokes to be buffered so as to implement typeahead. Typical uses: Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family. For example floating point instructions may be implemented in hardware on some systems and emulated on lower-cost systems. In the latter case, execution of an unimplemented floating point instruction will cause an "illegal instruction" exception interrupt. The interrupt handler will implement the floating point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed. This provides application software portability across the entire line. Typical uses: Interrupts are similar to signals, the difference being that signals are used for inter-process communication (IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by the kernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples are SIGSEGV, SIGBUS, SIGILL and SIGFPE).
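A small demonstration of the interrupt-to-signal path just described, on a POSIX-style system: a user-space handler is registered for one of the signals named above, and the signal is raised synthetically so that the handler runs instead of the default terminating action. A real SIGSEGV delivered for an actual bad memory access could not, of course, be resumed this simply:

```python
import signal

def on_sigsegv(signum, frame):
    print(f"caught signal {signal.Signals(signum).name} in a user-space handler "
          f"(the default action would terminate the process)")

signal.signal(signal.SIGSEGV, on_sigsegv)   # register a handler instead of the default action
signal.raise_signal(signal.SIGSEGV)         # ask the kernel to deliver the signal to ourselves
print("process continues after the handler returns")
```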
**Pseudo-atoll** Pseudo-atoll: A pseudo-atoll, like an atoll, is an island that encircles a lagoon, either partially or completely. A pseudo-atoll differs from an atoll as established by several authorities, such as in how it is formed (not by subsidence, nor by coral). It is considered a preferable term to "near-atoll". There is a need for a rigorous definition of "pseudo-atoll" before it can be accepted as a general term. Definitions: Alexander Agassiz applied the term pseudo-atoll to "any ring-shaped reefs not formed as a result of subsidence", while Norman D. Newell and J. Keith Rigby called such reefs non-coral and concluded: "We conclude that almost-atoll should be retained as a descriptive term as defined by Davis and Tayama, and that the use of "near-atoll" as a synonym be abandoned. The value of terms such as "semi-atoll" and "pseudo-atoll" needs close examination and more rigorous definition before being generally accepted." H. Mergner, however, states that micro-atolls classify as pseudo-atolls. Professor David R. Stoddart of Berkeley states that an "almost-atoll" is an atoll with a central island of leftover residue. Usage: Dr. Edward J. Petuch, author of Cenozoic seas: the view from eastern North America, refers to pseudo-atolls as pseudoatolls, with the Everglades Pseudoatoll as an example.
**Leatt-Brace** Leatt-Brace: The Leatt-Brace is a neck brace designed to help reduce neck injuries in helmeted sports, including Supercross, motocross, enduro, roadracing, downhill-type mountain biking, BMX, ATV, street riding, karting, and snowmobiling. The brace is marketed and distributed worldwide by the Leatt Corporation, a Nevada corporation with its administrative office based in Cape Town, South Africa. History and Description: South African inventor Dr. Christopher Leatt filed his first neck-brace-related patent in 2003. The Leatt-Brace is designed to work only when worn in conjunction with the full-face helmets typically used in the aforementioned activities. The brace uses what the inventor calls Alternative Load Path Technology to help absorb and disperse injury-producing forces. The brace is designed to limit hyperflexion, hyperextension, lateral hyperflexion and posterior hypertranslation, which are extreme forward, backward, sideways, and rearward movements of the head on the neck. Although the brace cannot protect against pure axial compression of the spine, it is designed to help minimize such loading when coupled with one of the extreme movements above. In 2009, the Leatt-Brace received CE approval, which is granted under European Union law. Gaining CE approval for Personal Protective Equipment (PPE) includes adhering to basic health and safety requirements as well as performance requirements. The Leatt-Brace was approved based on a review of the concept, design, operation and testing of the brace, and chemical analysis of brace components.
**Schiffler point** Schiffler point: In geometry, the Schiffler point of a triangle is a triangle center, a point defined from the triangle that is equivariant under Euclidean transformations of the triangle. This point was first defined and investigated by Schiffler et al. (1985). Definition: A triangle △ABC with the incenter I has its Schiffler point at the point of concurrence of the Euler lines of the four triangles △BCI, △CAI, △ABI, △ABC. Schiffler's theorem states that these four lines all meet at a single point. Coordinates: Trilinear coordinates for the Schiffler point are $\frac{1}{\cos B+\cos C} : \frac{1}{\cos C+\cos A} : \frac{1}{\cos A+\cos B}$ or, equivalently, $\frac{b+c-a}{b+c} : \frac{c+a-b}{c+a} : \frac{a+b-c}{a+b}$, where a, b, c denote the side lengths of triangle △ABC.
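A short numerical sanity check of these coordinates for one concrete, arbitrarily chosen triangle: the trilinears are converted to Cartesian coordinates, and the resulting point is checked to lie on the Euler line of △ABC, which is one of the four concurrent Euler lines in Schiffler's theorem. The triangle and the tolerance below are illustrative choices only:

```python
import math

# Triangle vertices; side lengths a = |BC|, b = |CA|, c = |AB|
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)

def angle(opposite, s1, s2):
    return math.acos((s1**2 + s2**2 - opposite**2) / (2 * s1 * s2))

alpha, beta, gamma = angle(a, b, c), angle(b, c, a), angle(c, a, b)

# Trilinears 1/(cos B + cos C) : 1/(cos C + cos A) : 1/(cos A + cos B)
t = (1 / (math.cos(beta) + math.cos(gamma)),
     1 / (math.cos(gamma) + math.cos(alpha)),
     1 / (math.cos(alpha) + math.cos(beta)))

# Convert trilinears x:y:z to Cartesian via barycentrics (a*x, b*y, c*z)
w = (a * t[0], b * t[1], c * t[2]); s = sum(w)
schiffler = tuple(sum(wi * v[k] for wi, v in zip(w, (A, B, C))) / s for k in range(2))

# Euler line of ABC passes through the centroid G and the circumcenter O
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
d = 2 * (A[0]*(B[1]-C[1]) + B[0]*(C[1]-A[1]) + C[0]*(A[1]-B[1]))
O = (((A[0]**2+A[1]**2)*(B[1]-C[1]) + (B[0]**2+B[1]**2)*(C[1]-A[1]) + (C[0]**2+C[1]**2)*(A[1]-B[1])) / d,
     ((A[0]**2+A[1]**2)*(C[0]-B[0]) + (B[0]**2+B[1]**2)*(A[0]-C[0]) + (C[0]**2+C[1]**2)*(B[0]-A[0])) / d)

cross = (O[0]-G[0]) * (schiffler[1]-G[1]) - (O[1]-G[1]) * (schiffler[0]-G[0])
print("Schiffler point:", schiffler, " on the Euler line of ABC:", abs(cross) < 1e-9)
```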
**Conductive anodic filament** Conductive anodic filament: Conductive anodic filament, also called CAF, is a metallic filament that forms from an electrochemical migration process and is known to cause printed circuit board (PCB) failures. Mechanism: CAF formation is a process involving the transport of conductive chemistries across a nonmetallic substrate under the influence of an applied electric field. CAF is influenced by electric field strength, temperature (including soldering temperatures), humidity, laminate material, and the presence of manufacturing defects. The occurrence of CAF failures has been primarily driven by the electronics industry pushing for higher density circuit boards and the use of electronics in harsher environments for high reliability applications. Failure modes and detection: CAF commonly occurs between adjacent vias (i.e. plated through holes) inside a PCB, as the copper migrates along the glass/resin interface from anode to cathode. CAF failures can manifest as current leakage, intermittent electrical shorts, and even dielectric breakdown between conductors in printed circuit boards. This often makes CAF very difficult to detect, especially when it occurs as an intermittent issue. There are a few things that can be done to isolate the fault location and confirm CAF as a root cause of a failure. If the issue is intermittent then putting the sample of interest under combined temperature-humidity-bias (THB) may help recreate the failure mode. In addition, techniques such as cross sectioning or superconducting quantum interference device (SQUID) can be used to identify the failure. Considerations and mitigation: There are several design considerations and mitigation techniques that can be used to reduce the susceptibility to CAF. Certain material selection (i.e. laminate) and design rules (i.e. via spacing) can help reduce CAF risk. Poor adhesion between the resin and glass fibers in the PCB can create a path for CAF to occur. This may depend on parameters of the silane finish applied to the glass fibers, which is used to promote adhesion to the resin. There are also testing standards that can be performed to assess CAF risk. IPC TM-650 2.6.25 provides a test method to assess CAF susceptibility. Additionally, IPC TM-650 2.6.16 provides a pressure vessel test method to rapidly evaluate glass epoxy laminate integrity. This is helpful but it may often be better to use design rules and proper material selection to proactively mitigate the issue.
**Augeas (software)** Augeas (software): Augeas is a free software configuration-management library, written in the C programming language. It is licensed under the terms of the GNU Lesser General Public License. Augeas uses programs called lenses (in reference to the Harmony Project) to map a filesystem to an XML tree which can then be queried using an XPath syntax, using a bidirectional transformation. Writing such lenses extends the set of files Augeas can parse. Bindings: Augeas has bindings for Python, Ruby, OCaml, Perl, Haskell, Java, PHP, and Tcl. Programs using Augeas: Certbot, an ACME client; Puppet, which provides an Augeas module that makes use of the Ruby bindings; SaltStack, which provides an Augeas module that makes use of the Python bindings.
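A hedged sketch of how the Python bindings (python-augeas) are typically used: the configuration tree is read, queried and modified through path expressions. The exact paths depend on which lenses are installed; /etc/ssh/sshd_config and its PermitRootLogin key are used here only as a familiar illustration:

```python
# Requires the python-augeas bindings and an Augeas lens for the file in question.
import augeas

aug = augeas.Augeas()  # loads the default lenses against the real filesystem root

# Read: every parsed file appears under /files/<path-on-disk>/...
value = aug.get("/files/etc/ssh/sshd_config/PermitRootLogin")
print("PermitRootLogin is currently:", value)

# Query: match() returns all tree nodes matching an XPath-style pattern
for node in aug.match("/files/etc/ssh/sshd_config/*"):
    print(node, "=", aug.get(node))

# Write: set() edits the in-memory tree; save() pushes the change back through the lens
aug.set("/files/etc/ssh/sshd_config/PermitRootLogin", "no")
aug.save()
```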
**Polynomials calculating sums of powers of arithmetic progressions** Polynomials calculating sums of powers of arithmetic progressions: The polynomials calculating sums of powers of arithmetic progressions are polynomials in a variable that depend both on the particular arithmetic progression constituting the basis of the summed powers and on the constant exponent, a non-negative integer, chosen. Their degree always exceeds the constant exponent by one, and they have the property that when the polynomial variable coincides with the number of summed addends, the result of the polynomial function also coincides with that of the sum. Polynomials calculating sums of powers of arithmetic progressions: The problem therefore consists in finding $S_{h,d}^m(n)$, i.e. polynomials as a function of n calculating sums of n addends: $\sum_{k=0}^{n-1}(h+kd)^m = h^m + (h+d)^m + \cdots + (h+(n-1)d)^m$, with m and n positive integers, h the first term of an arithmetic progression and $d \neq 0$ the common difference. The two parameters can be not only integers but also rational, real and even complex. History: Ancient period The history of the problem begins in antiquity and coincides with that of some of its special cases. The case m = 1 coincides with that of the calculation of the arithmetic series, the sum of the first n values of an arithmetic progression. This problem is quite simple, but the case already known by the Pythagorean school for its connection with triangular numbers is historically interesting: $1+2+\cdots+n = \tfrac{1}{2}n^2 + \tfrac{1}{2}n$, the polynomial $S_{1,1}^1(n)$ calculating the sum of the first n natural numbers. For m > 1, the first cases encountered in the history of mathematics are: $1+3+\cdots+(2n-1) = n^2$, the polynomial $S_{1,2}^1(n)$ calculating the sum of the first n successive odd numbers, forming a square — a property probably well known to the Pythagoreans themselves who, in constructing their figured numbers, had to add each time a gnomon consisting of an odd number of points to obtain the next perfect square. History: $1^2+2^2+\ldots+n^2 = \tfrac{1}{3}n^3 + \tfrac{1}{2}n^2 + \tfrac{1}{6}n$, the polynomial $S_{1,1}^2(n)$ calculating the sum of the squares of the successive integers, a property demonstrated in On Spirals, a work of Archimedes; $1^3+2^3+\ldots+n^3 = \tfrac{1}{4}n^4 + \tfrac{1}{2}n^3 + \tfrac{1}{4}n^2$, the polynomial $S_{1,1}^3(n)$ calculating the sum of the cubes of the successive integers, a corollary of a theorem of Nicomachus of Gerasa. The set of cases $S_{1,1}^m(n)$, to which the two preceding polynomials belong, constitutes the classical problem of powers of successive integers. History: Middle period Over time, many other mathematicians became interested in the problem and made various contributions to its solution. These include Aryabhata, Al-Karaji, Ibn al-Haytham, Thomas Harriot, Johann Faulhaber, Pierre de Fermat and Blaise Pascal, who recursively solved the problem of the sum of powers of successive integers by considering an identity that allowed one to obtain a polynomial of degree m + 1 already knowing the previous ones. In 1713 the family of Jacob Bernoulli posthumously published his Ars Conjectandi, where the first 10 polynomials of this infinite series appear together with a general formula dependent on particular numbers that were soon named after him. The formula was instead attributed to Johann Faulhaber, for his worthy contributions recognized by Bernoulli himself. History: It was also immediately clear that the polynomials $S_{0,1}^m(n)$ calculating the sum of n powers of successive integers starting from zero were very similar to those starting from one.
This is because it is evident that $S_{1,1}^m(n) - S_{0,1}^m(n) = n^m$ and that therefore polynomials of degree m + 1 of the form $\tfrac{1}{m+1}n^{m+1} + \tfrac{1}{2}n^m + \cdots$, once the monomial difference $n^m$ is subtracted, become $\tfrac{1}{m+1}n^{m+1} - \tfrac{1}{2}n^m + \cdots$. However, a proof of Faulhaber's formula was missing; it was given more than a century later by Carl G. Jacobi, who benefited from the progress of mathematical analysis by using the development in infinite series of an exponential function generating the Bernoulli numbers. History: Modern period In 1982 A.W.F. Edwards published an article in which he shows that Pascal's identity can be expressed by means of triangular matrices containing Pascal's triangle deprived of the last element of each line:
$$\begin{pmatrix} n \\ n^2 \\ n^3 \\ n^4 \\ n^5 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 \\ 1 & 3 & 3 & 0 & 0 \\ 1 & 4 & 6 & 4 & 0 \\ 1 & 5 & 10 & 10 & 5 \end{pmatrix} \begin{pmatrix} n \\ \sum_{k=0}^{n-1}k^1 \\ \sum_{k=0}^{n-1}k^2 \\ \sum_{k=0}^{n-1}k^3 \\ \sum_{k=0}^{n-1}k^4 \end{pmatrix}$$
The example is limited by the choice of a fifth-order matrix but is easily extendable to higher orders. The equation can be written as $\vec N = A\vec S$, and multiplying the two sides of the equation on the left by $A^{-1}$, the inverse of the matrix A, we obtain $A^{-1}\vec N = \vec S$, which allows one to arrive directly at the polynomial coefficients without using the Bernoulli numbers. Other authors after Edwards, dealing with various aspects of the power-sum problem, take the matrix path and develop in their articles useful tools such as the Vandermonde vector. Other researchers continue to explore through the traditional analytic route and generalize the problem of the sum of successive integers to any geometric progression. The coefficients of the polynomials $S_{h,d}^m$ are found through recursive formulas and in other ways that are interesting for number theory, such as the expression of the result of the sum as a function of Bernoulli polynomials, or the formulas involving the Stirling numbers and the r-Whitney numbers of the first and second kind. Finally, Edwards' matrix approach was also generalized to any arithmetic progression. Solution by matrix method: The general problem has recently been solved through the use of binomial matrices, easily constructible knowing the binomial coefficients and Pascal's triangle. Solution by matrix method: It is shown that, having chosen the parameters h and d which determine the arithmetic progression and a positive integer m, one finds m + 1 polynomials corresponding to the sums of powers with exponents 0 through m, with the coefficients of the r-th polynomial given by row r of the triangular matrix $G(h,d) = T(h,d)A^{-1}$ of order m + 1. Here is the solving formula in the particular case m = 3, which gives the polynomials of a given arithmetic progression with exponents from 0 to 3:
$$\begin{pmatrix} S_{h,d}^0(n) \\ S_{h,d}^1(n) \\ S_{h,d}^2(n) \\ S_{h,d}^3(n) \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ h & d & 0 & 0 \\ h^2 & 2hd & d^2 & 0 \\ h^3 & 3h^2d & 3hd^2 & d^3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 1 & 3 & 3 & 0 \\ 1 & 4 & 6 & 4 \end{pmatrix}^{-1} \begin{pmatrix} n \\ n^2 \\ n^3 \\ n^4 \end{pmatrix}$$
The equation, which can easily be extended to different non-negative integer values of m, is summarized and generalized as follows: $\vec S_{h,d}(n) = T(h,d)\,A^{-1}\,n\vec V(n)$, or also, by setting $G(h,d) = T(h,d)A^{-1}$, $\vec S_{h,d}(n) = G(h,d)\,n\vec V(n)$. The matrices and the Vandermonde vector are defined as follows: matrix A is that of Edwards already seen, a lower triangular matrix that reproduces, in its non-zero elements, Pascal's triangle deprived of the last element of each row; the elements of T(h,d) are the monomials of the expansion of $(h+d)^{r-1}$, for $1 \le r \le m+1$; and $\vec V(n) = (1, n, n^2, \ldots, n^m)^T$, so that $n\vec V(n)$ is the column $(n, n^2, \ldots, n^{m+1})^T$ appearing above.
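A small sketch of this construction, assuming nothing beyond the definitions just given (the function names and the test progression are illustrative); it builds A and T(h, d), forms G(h, d) = T(h, d)A⁻¹ symbolically, and checks one row against direct summation:

```python
from math import comb
from sympy import Matrix, symbols, expand

def edwards_A(m):
    """Pascal's triangle without the last entry of each row, as a lower-triangular matrix."""
    size = m + 1
    return Matrix(size, size, lambda r, c: comb(r + 1, c) if c <= r else 0)

def T_matrix(h, d, m):
    """Row r holds the monomials of the expansion of (h + d)**r."""
    size = m + 1
    return Matrix(size, size, lambda r, c: comb(r, c) * h**(r - c) * d**c if c <= r else 0)

h, d, n = symbols('h d n')
m = 3
G = T_matrix(h, d, m) * edwards_A(m).inv()
V = Matrix([n**k for k in range(1, m + 2)])      # the column (n, n**2, ..., n**(m+1))
S = G * V                                        # S[r] is the polynomial S_{h,d}^r(n)

print(expand(S[2]))                              # polynomial for the sum of squares

# Brute-force check for one progression (h=1, d=2: the odd numbers)
poly = S[2].subs({h: 1, d: 2})
for N in range(1, 6):
    assert poly.subs(n, N) == sum((1 + 2 * k)**2 for k in range(N))
print("matrix method agrees with direct summation")
```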
Solution by matrix method: T(0,1) is the neutral element of the row-by-column product, so that the general equation in this case becomes $\vec S_{0,1}(n) = A^{-1}\,n\vec V(n)$, which is the one discovered by Edwards. To arrive from this particular case at a proof of the general one, it is sufficient to multiply the two members of the equation on the left by the matrix T(h,d), after having ascertained the identity $T(h,d)\,\vec V(n) = \vec V(h+dn)$. Sum of powers of successive odd numbers: We use the previous formula to solve the problem of adding powers of successive odd numbers. The odd numbers correspond to the arithmetic progression with first element h = 1 and common difference d = 2. Solution by matrix method: We set m = 4 to find the first five polynomials calculating sums of powers of odd numbers. Having computed T(1,2) and hence G(1,2) = T(1,2)A⁻¹, the general equation $\vec S_{1,2}(n) = G(1,2)\,n\vec V(n)$ for m = 4, once the product is carried out, gives the polynomial for each exponent: the last row (r = 5) yields the sum of fourth powers, and the other rows yield the sums for the lower exponents. Sum of successive integers starting with 1: one chooses m = 3 and computes A⁻¹ and T(1,1), which corresponds to Pascal's triangle. Sum of successive integers starting with 0: one chooses m = 3 and computes A⁻¹ and T(0,1), the unit matrix. Progression −1, 3, 7, 11, 15, ... Solution by matrix method: choosing again m = 3, one computes T(−1,4) and exploits the result of the previous paragraph together with the associative property of the matrix product. Generalization of Faulhaber's formula: The matrix G(h,d) can be expressed as a function of the Bernoulli polynomials, from which the generalized Faulhaber formula is derived: $S_{h,d}^m(n) = \sum_{k=0}^{n-1}(h+kd)^m = \frac{d^m}{m+1}\left[B_{m+1}\!\left(\frac{h}{d}+n\right) - B_{m+1}\!\left(\frac{h}{d}\right)\right]$, together with the well-known special cases for the progressions (h,d) = (0,1) and (h,d) = (1,1), where the Bernoulli polynomials evaluated at 0 give the Bernoulli numbers and those evaluated at 1 give the variant with $B_1$ changed in sign. Since $B_{m}\!\left(\frac{h}{d}+n\right) = \sum_{k=0}^{m}\binom{m}{k}B_{m-k}\!\left(\frac{h}{d}\right)n^{k}$ by the translation property of the Bernoulli polynomials, the generalized Faulhaber formula can be put into the form that is, unlike the other, very widespread in the literature. Generalization of Faulhaber's formula: Hence also follow the two special cases for the successive integers starting from 0 and from 1.
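As a check on the generalized Faulhaber formula quoted above, a short sketch using sympy's Bernoulli polynomials and the odd-number progression (h = 1, d = 2); the specific test ranges are arbitrary:

```python
# Checks sum_{k=0}^{n-1} (h + k d)^m == d^m / (m+1) * (B_{m+1}(h/d + n) - B_{m+1}(h/d)).
from sympy import bernoulli, Rational

def power_sum_bernoulli(h, d, m, n):
    x = Rational(h, d)
    return d**m * (bernoulli(m + 1, x + n) - bernoulli(m + 1, x)) / (m + 1)

for m in range(5):
    for n in range(1, 6):
        direct = sum((1 + 2 * k)**m for k in range(n))
        assert power_sum_bernoulli(1, 2, m, n) == direct
print("Bernoulli-polynomial form matches direct summation for h=1, d=2, m=0..4")
```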
**Dynamic range** Dynamic range: Dynamic range (abbreviated DR, DNR, or DYR) is the ratio between the largest and smallest values that a certain quantity can assume. It is often used in the context of signals, like sound and light. It is measured either as a ratio or as a base-10 (decibel) or base-2 (doublings, bits or stops) logarithmic value of the difference between the smallest and largest signal values. Electronically reproduced audio and video is often processed to fit the original material with a wide dynamic range into a narrower recorded dynamic range that can more easily be stored and reproduced; this processing is called dynamic range compression. Human perception: The human senses of sight and hearing have a relatively high dynamic range. However, a human cannot perform these feats of perception at both extremes of the scale at the same time. The human eye takes time to adjust to different light levels, and its dynamic range in a given scene is actually quite limited due to optical glare. The instantaneous dynamic range of human audio perception is similarly subject to masking so that, for example, a whisper cannot be heard in loud surroundings. Human perception: A human is capable of hearing (and usefully discerning) anything from a quiet murmur in a soundproofed room to the loudest heavy metal concert. Such a difference can exceed 100 dB, which represents a factor of 100,000 in amplitude and a factor of 10,000,000,000 in power. The dynamic range of human hearing is roughly 140 dB, varying with frequency, from the threshold of hearing (around −9 dB SPL at 3 kHz) to the threshold of pain (from 120–140 dB SPL). This wide dynamic range cannot be perceived all at once, however; the tensor tympani, stapedius muscle, and outer hair cells all act as mechanical dynamic range compressors to adjust the sensitivity of the ear to different ambient levels. A human can see objects in starlight or in bright sunlight, even though on a moonless night objects receive one billionth (10⁻⁹) of the illumination they would on a bright sunny day; a dynamic range of 90 dB. Change of sensitivity is achieved in part through adjustments of the iris and slow chemical changes, which take some time. Human perception: In practice, it is difficult for humans to achieve the full dynamic experience using electronic equipment. For example, a good quality liquid-crystal display (LCD) has a dynamic range limited to around 1000:1, and some of the latest CMOS image sensors now have measured dynamic ranges of about 23,000:1. Paper reflectance can produce a dynamic range of about 100:1. A professional video camera such as the Sony Digital Betacam achieves a dynamic range of greater than 90 dB in audio recording. Audio: Audio engineers use dynamic range to describe the ratio of the amplitude of the loudest possible undistorted signal to the noise floor, say of a microphone or loudspeaker. Dynamic range is therefore the signal-to-noise ratio (SNR) for the case where the signal is the loudest possible for the system. For example, if the ceiling of a device is 5 V (rms) and the noise floor is 10 µV (rms) then the dynamic range is 500000:1, or 114 dB: $\mathrm{DR} = 20\log_{10}\!\left(\frac{5\ \mathrm{V}}{10\ \mu\mathrm{V}}\right) \approx 114\ \mathrm{dB}$. In digital audio theory the dynamic range is limited by quantization error. The maximum achievable dynamic range for a digital audio system with Q-bit uniform quantization, calculated as the ratio of the largest sine-wave rms to the rms quantization noise, is $\mathrm{DR} = 20\log_{10}\!\left(2^{Q}\sqrt{3/2}\right) \approx (6.02\,Q + 1.76)\ \mathrm{dB}$. However, the usable dynamic range may be greater, as a properly dithered recording device can record signals well below the noise floor.
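The two figures above can be reproduced with a few lines of arithmetic; the second loop also shows the simpler "6 dB per bit" convention (full-scale range over one quantization step), which is the convention behind the 96/120/144 dB figures quoted below for 16-, 20- and 24-bit audio:

```python
import math

def db(ratio):
    return 20 * math.log10(ratio)

# The 5 V ceiling / 10 uV noise-floor example: a 500000:1 ratio
print(f"{db(5.0 / 10e-6):.1f} dB")              # ~114.0 dB

# Quantization-limited range for Q-bit audio, two common conventions:
for q in (16, 20, 24):
    per_bit  = db(2**q)                          # ~6.02 dB per bit: 96.3 / 120.4 / 144.5 dB
    sine_rms = db(2**q * math.sqrt(1.5))         # full-scale sine rms vs noise rms: +1.76 dB
    print(q, f"{per_bit:.1f} dB", f"{sine_rms:.1f} dB")
```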
Audio: The 16-bit compact disc has a theoretical undithered dynamic range of about 96 dB; however, the perceived dynamic range of 16-bit audio can be 120 dB or more with noise-shaped dither, taking advantage of the frequency response of the human ear. Digital audio with undithered 20-bit quantization is theoretically capable of 120 dB dynamic range, while 24-bit digital audio affords 144 dB dynamic range. Most digital audio workstations process audio with 32-bit floating-point representation, which affords even higher dynamic range, and so loss of dynamic range is no longer a concern in terms of digital audio processing. Dynamic range limitations typically result from improper gain staging, recording technique including ambient noise, and intentional application of dynamic range compression. Audio: Dynamic range in analog audio is the difference between low-level thermal noise in the electronic circuitry and high-level signal saturation resulting in increased distortion and, if pushed higher, clipping. Multiple noise processes determine the noise floor of a system. Noise can be picked up from microphone self-noise, preamp noise, wiring and interconnection noise, media noise, etc. Audio: Early 78 rpm phonograph discs had a dynamic range of up to 40 dB, soon reduced to 30 dB and worse due to wear from repeated play. Vinyl microgroove phonograph records typically yield 55-65 dB, though the first play of the higher-fidelity outer rings can achieve a dynamic range of 70 dB. German magnetic tape in 1941 was reported to have had a dynamic range of 60 dB, though modern-day restoration experts of such tapes note 45-50 dB as the observed dynamic range. Ampex tape recorders in the 1950s achieved 60 dB in practical usage. In the 1960s, improvements in tape formulation processes resulted in 7 dB greater range (p. 158), and Ray Dolby developed the Dolby A-Type noise reduction system that increased low- and mid-frequency dynamic range on magnetic tape by 10 dB, and high-frequency by 15 dB, using companding (compression and expansion) of four frequency bands (p. 169). The peak of professional analog magnetic recording tape technology reached 90 dB dynamic range in the midband frequencies at 3% distortion, or about 80 dB in practical broadband applications (p. 158). The Dolby SR noise reduction system gave a further 20 dB increase, resulting in 110 dB in the midband frequencies at 3% distortion (p. 172). Compact Cassette tape performance ranges from 50 to 56 dB depending on tape formulation, with type IV tapes giving the greatest dynamic range, and systems such as XDR, dbx and Dolby noise reduction increasing it further. Specialized bias and record-head improvements by Nakamichi and Tandberg, combined with Dolby C noise reduction, yielded 72 dB dynamic range for the cassette. A dynamic microphone is able to withstand high sound intensity and can have a dynamic range of up to 140 dB. Condenser microphones are also rugged but their dynamic range may be limited by the overloading of their associated electronic circuitry.
Practical considerations of acceptable distortion levels in microphones combined with typical practices in a recording studio result in a useful dynamic range of 125 dB.: 75 In 1981, researchers at Ampex determined that a dynamic range of 118 dB on a dithered digital audio stream was necessary for subjective noise-free playback of music in quiet listening environments.Since the early 1990s, it has been recommended by several authorities, including the Audio Engineering Society, that measurements of dynamic range be made with an audio signal present, which is then filtered out in the noise floor measurement used in determining dynamic range. This avoids questionable measurements based on the use of blank media, or muting circuits. Audio: The term dynamic range may be confusing in audio production because it has two conflicting definitions, particularly in the understanding of the loudness war phenomenon. Dynamic range may refer to micro-dynamics, related to crest factor, whereas the European Broadcasting Union, in EBU3342 Loudness Range, defines dynamic range as the difference between the quietest and loudest volume, a matter of macro-dynamics. Electronics: In electronics dynamic range is used in the following contexts: Specifies the ratio of a maximum level of a parameter, such as power, current, voltage or frequency, to the minimum detectable value of that parameter. (See Audio system measurements.) In a transmission system, the ratio of the overload level (the maximum signal power that the system can tolerate without distortion of the signal) to the noise level of the system. Electronics: In digital systems or devices, the ratio of maximum and minimum signal levels required to maintain a specified bit error ratio. Electronics: Optimization of bit width of digital data path (according to the dynamic ranges of signal) can reduce the area, cost, and power consumption of digital circuits and systems while improving their performance. Optimal bit width for a digital data path is the smallest bit width that can satisfy the required signal-to-noise ratio and also avoid overflow.In audio and electronics applications, the ratio involved is often large enough that it is converted to a logarithm and specified in decibels. Metrology: In metrology, such as when performed in support of science, engineering or manufacturing objectives, dynamic range refers to the range of values that can be measured by a sensor or metrology instrument. Often this dynamic range of measurement is limited at one end of the range by saturation of a sensing signal sensor or by physical limits that exist on the motion or other response capability of a mechanical indicator. The other end of the dynamic range of measurement is often limited by one or more sources of random noise or uncertainty in signal levels that may be described as defining the sensitivity of the sensor or metrology device. When digital sensors or sensor signal converters are a component of the sensor or metrology device, the dynamic range of measurement will be also related to the number of binary digits (bits) used in a digital numeric representation in which the measured value is linearly related to the digital number. For example, a 12-bit digital sensor or converter can provide a dynamic range in which the ratio of the maximum measured value to the minimum measured value is up to 212 = 4096. Metrology: Metrology systems and devices may use several basic methods to increase their basic dynamic range. 
These methods include averaging and other forms of filtering, correction of receivers characteristics, repetition of measurements, nonlinear transformations to avoid saturation, etc. In more advance forms of metrology, such as multiwavelength digital holography, interferometry measurements made at different scales (different wavelengths) can be combined to retain the same low-end resolution while extending the upper end of the dynamic range of measurement by orders of magnitude. Music: In music, dynamic range describes the difference between the quietest and loudest volume of an instrument, part or piece of music. In modern recording, this range is often limited through dynamic range compression, which allows for louder volume, but can make the recording sound less exciting or live.The dynamic range of music as normally perceived in a concert hall does not exceed 80 dB, and human speech is normally perceived over a range of about 40 dB.: 4 Photography: Photographers use dynamic range to describe the luminance range of a scene being photographed, or the limits of luminance range that a given digital camera or film can capture, or the opacity range of developed film images, or the reflectance range of images on photographic papers. Photography: The dynamic range of digital photography is comparable to the capabilities of photographic film and both are comparable to the capabilities of the human eye.There are photographic techniques that support even higher dynamic range. Graduated neutral density filters are used to decrease the dynamic range of scene luminance that can be captured on photographic film (or on the image sensor of a digital camera): The filter is positioned in front of the lens at the time the exposure is made; the top half is dark and the bottom half is clear. The dark area is placed over a scene's high-intensity region, such as the sky. The result is more even exposure in the focal plane, with increased detail in the shadows and low-light areas. Though this doesn't increase the fixed dynamic range available at the film or sensor, it stretches usable dynamic range in practice. Photography: High-dynamic-range imaging overcomes the limited dynamic range of the sensor by selectively combining multiple exposures of the same scene in order to retain detail in light and dark areas. Tone mapping maps the image differently in shadow and highlights in order to better distribute the lighting range across the image. The same approach has been used in chemical photography to capture an extremely wide dynamic range: A three-layer film with each underlying layer at one hundredth (10−2) the sensitivity of the next higher one has, for example, been used to record nuclear-weapons tests.Consumer-grade image file formats sometimes restrict dynamic range. The most severe dynamic-range limitation in photography may not involve encoding, but rather reproduction to, say, a paper print or computer screen. In that case, not only local tone mapping but also dynamic range adjustment can be effective in revealing detail throughout light and dark areas: The principle is the same as that of dodging and burning (using different lengths of exposures in different areas when making a photographic print) in the chemical darkroom. The principle is also similar to gain riding or automatic level control in audio work, which serves to keep a signal audible in a noisy listening environment and to avoid peak levels that overload the reproducing equipment, or which are unnaturally or uncomfortably loud. 
Photography: If a camera sensor is incapable of recording the full dynamic range of a scene, high-dynamic-range (HDR) techniques may be used in postprocessing, which generally involve combining multiple exposures using software. External list: Audible dynamic range (online test) Steven E. Schoenherr (2002). "Dynamic Range". Recording Technology History. Archived from the original on 2006-09-05. Vaughan Wesson (October 2004). "TN200410A - Dynamic Range". Archived from the original on 2004-12-21.
**Variation of parameters** Variation of parameters: In mathematics, variation of parameters, also known as variation of constants, is a general method to solve inhomogeneous linear ordinary differential equations. For first-order inhomogeneous linear differential equations it is usually possible to find solutions via integrating factors or undetermined coefficients with considerably less effort, although those methods leverage heuristics that involve guessing and do not work for all inhomogeneous linear differential equations. Variation of parameters: Variation of parameters extends to linear partial differential equations as well, specifically to inhomogeneous problems for linear evolution equations like the heat equation, wave equation, and vibrating plate equation. In this setting, the method is more often known as Duhamel's principle, named after Jean-Marie Duhamel (1797–1872) who first applied the method to solve the inhomogeneous heat equation. Sometimes variation of parameters itself is called Duhamel's principle and vice versa. History: The method of variation of parameters was first sketched by the Swiss mathematician Leonhard Euler (1707–1783), and later completed by the Italian-French mathematician Joseph-Louis Lagrange (1736–1813).A forerunner of the method of variation of a celestial body's orbital elements appeared in Euler's work in 1748, while he was studying the mutual perturbations of Jupiter and Saturn. In his 1749 study of the motions of the earth, Euler obtained differential equations for the orbital elements. In 1753, he applied the method to his study of the motions of the moon.Lagrange first used the method in 1766. Between 1778 and 1783, he further developed the method in two series of memoirs: one on variations in the motions of the planets and another on determining the orbit of a comet from three observations. During 1808–1810, Lagrange gave the method of variation of parameters its final form in a third series of papers. Description of method: Given an ordinary non-homogeneous linear differential equation of order n Let y1(x),…,yn(x) be a basis of the vector space of solutions of the corresponding homogeneous equation Then a particular solution to the non-homogeneous equation is given by where the ci(x) are differentiable functions which are assumed to satisfy the conditions Starting with (iii), repeated differentiation combined with repeated use of (iv) gives One last differentiation gives By substituting (iii) into (i) and applying (v) and (vi) it follows that The linear system (iv and vii) of n equations can then be solved using Cramer's rule yielding ci′(x)=Wi(x)W(x),i=1,…,n where W(x) is the Wronskian determinant of the basis y1(x),…,yn(x) and Wi(x) is the Wronskian determinant of the basis with the i-th column replaced by (0,0,…,b(x)). Description of method: The particular solution to the non-homogeneous equation can then be written as ∑i=1nyi(x)∫Wi(x)W(x)dx. Intuitive explanation: Consider the equation of the forced dispersionless spring, in suitable units: x″(t)+x(t)=F(t). Here x is the displacement of the spring from the equilibrium x = 0, and F(t) is an external applied force that depends on time. When the external force is zero, this is the homogeneous equation (whose solutions are linear combinations of sines and cosines, corresponding to the spring oscillating with constant total energy). We can construct the solution physically, as follows. 
Between times $t=s$ and $t=s+ds$, the momentum corresponding to the solution has a net change $F(s)\,ds$ (see: Impulse (physics)). A solution to the inhomogeneous equation, at the present time $t > 0$, is obtained by linearly superposing the solutions obtained in this manner, for $s$ going between 0 and $t$. The homogeneous initial-value problem, representing a small impulse $F(s)\,ds$ being added to the solution at time $t=s$, is $x''(t)+x(t)=0$, $x(s)=0$, $x'(s)=F(s)\,ds$. The unique solution to this problem is easily seen to be $x(t)=F(s)\sin(t-s)\,ds$. The linear superposition of all of these solutions is given by the integral $x(t)=\int_0^t F(s)\sin(t-s)\,ds$. To verify that this satisfies the required equation: $x'(t)=\int_0^t F(s)\cos(t-s)\,ds$ and $x''(t)=F(t)-\int_0^t F(s)\sin(t-s)\,ds=F(t)-x(t)$, as required (see: Leibniz integral rule). The general method of variation of parameters allows for solving an inhomogeneous linear equation $Lx(t)=F(t)$ by means of considering the second-order linear differential operator $L$ to be the net force, thus the total impulse imparted to a solution between time $s$ and $s+ds$ is $F(s)\,ds$. Denote by $x_s$ the solution of the homogeneous initial value problem $Lx(t)=0$, $x(s)=0$, $x'(s)=F(s)\,ds$. Then a particular solution of the inhomogeneous equation is $x(t)=\int_0^t x_s(t)\,ds$, the result of linearly superposing the infinitesimal homogeneous solutions. There are generalizations to higher order linear differential operators. In practice, variation of parameters usually involves the fundamental solution of the homogeneous problem, the infinitesimal solutions $x_s$ then being given in terms of explicit linear combinations of linearly independent fundamental solutions. In the case of the forced dispersionless spring, the kernel $\sin(t-s)=\sin t\,\cos s-\cos t\,\sin s$ is the associated decomposition into fundamental solutions. Examples: First-order equation $y'+p(x)y=q(x)$. The complementary solution to our original (inhomogeneous) equation is the general solution of the corresponding homogeneous equation $y'+p(x)y=0$. This homogeneous differential equation can be solved by different methods, for example separation of variables: $\frac{dy}{dx}=-p(x)y$, $\frac{dy}{y}=-p(x)\,dx$, $\int\frac{1}{y}\,dy=-\int p(x)\,dx$, $\ln|y|=-\int p(x)\,dx+C$, $y=\pm e^{-\int p(x)\,dx+C}=C_0 e^{-\int p(x)\,dx}$. The complementary solution to our original equation is therefore $y_c=C_0 e^{-\int p(x)\,dx}$. Now we return to solving the non-homogeneous equation $y'+p(x)y=q(x)$. Using the method of variation of parameters, the particular solution is formed by multiplying the complementary solution by an unknown function $C(x)$: $y_p=C(x)e^{-\int p(x)\,dx}$. By substituting the particular solution into the non-homogeneous equation, we can find $C(x)$: $C'(x)e^{-\int p(x)\,dx}-C(x)p(x)e^{-\int p(x)\,dx}+p(x)C(x)e^{-\int p(x)\,dx}=q(x)$, hence $C'(x)e^{-\int p(x)\,dx}=q(x)$, $C'(x)=q(x)e^{\int p(x)\,dx}$, and $C(x)=\int q(x)e^{\int p(x)\,dx}\,dx+C_1$. We only need a single particular solution, so we arbitrarily select $C_1=0$ for simplicity. Therefore the particular solution is $y_p=e^{-\int p(x)\,dx}\int q(x)e^{\int p(x)\,dx}\,dx$. The final solution of the differential equation is $y=y_c+y_p=C_0 e^{-\int p(x)\,dx}+e^{-\int p(x)\,dx}\int q(x)e^{\int p(x)\,dx}\,dx$. This recreates the method of integrating factors. Examples: Specific second-order equation Let us solve $y''+4y'+4y=\cosh x$. We want to find the general solution to the differential equation, that is, we want to find solutions to the homogeneous differential equation $y''+4y'+4y=0$. The characteristic equation is $\lambda^2+4\lambda+4=(\lambda+2)^2=0$. Since $\lambda=-2$ is a repeated root, we have to introduce a factor of $x$ for one solution to ensure linear independence: $u_1=e^{-2x}$ and $u_2=xe^{-2x}$. The Wronskian of these two functions is $W=\begin{vmatrix}e^{-2x}&xe^{-2x}\\-2e^{-2x}&-e^{-2x}(2x-1)\end{vmatrix}=-e^{-2x}e^{-2x}(2x-1)+2xe^{-2x}e^{-2x}=e^{-4x}$.
Because the Wronskian is non-zero, the two functions are linearly independent, so this is in fact the general solution for the homogeneous differential equation (and not a mere subset of it). We seek functions $A(x)$ and $B(x)$ so that $A(x)u_1 + B(x)u_2$ is a particular solution of the non-homogeneous equation. We need only calculate the integrals $A(x)=-\int\frac{1}{W}u_2(x)b(x)\,dx$ and $B(x)=\int\frac{1}{W}u_1(x)b(x)\,dx$. Recall that for this example $b(x)=\cosh x$. That is, $A(x)=-\int e^{4x}\,xe^{-2x}\cosh x\,dx=-\tfrac{1}{18}e^{x}\bigl(9(x-1)+e^{2x}(3x-1)\bigr)+C_1$ and $B(x)=\int e^{4x}\,e^{-2x}\cosh x\,dx=\tfrac{1}{6}e^{x}\bigl(3+e^{2x}\bigr)+C_2$, where $C_1$ and $C_2$ are constants of integration. General second-order equation We have a differential equation of the form $u''+p(x)u'+q(x)u=f(x)$ and we define the linear operator $L=D^2+p(x)D+q(x)$, where $D$ represents the differential operator. We therefore have to solve the equation $Lu(x)=f(x)$ for $u(x)$, where $L$ and $f(x)$ are known. We must solve first the corresponding homogeneous equation $u''+p(x)u'+q(x)u=0$ by the technique of our choice. Once we've obtained two linearly independent solutions to this homogeneous differential equation (because this ODE is second-order) — call them $u_1$ and $u_2$ — we can proceed with variation of parameters. Now, we seek the general solution to the differential equation $u_G(x)$, which we assume to be of the form $u_G(x)=A(x)u_1(x)+B(x)u_2(x)$. Here, $A(x)$ and $B(x)$ are unknown and $u_1(x)$ and $u_2(x)$ are the solutions to the homogeneous equation. (Observe that if $A(x)$ and $B(x)$ are constants, then $Lu_G(x)=0$.) Since the above is only one equation and we have two unknown functions, it is reasonable to impose a second condition. We choose the following: $A'(x)u_1(x)+B'(x)u_2(x)=0$. Now, $u_G'(x)=(A(x)u_1(x)+B(x)u_2(x))'=A'(x)u_1(x)+A(x)u_1'(x)+B'(x)u_2(x)+B(x)u_2'(x)=A'(x)u_1(x)+B'(x)u_2(x)+A(x)u_1'(x)+B(x)u_2'(x)=A(x)u_1'(x)+B(x)u_2'(x)$. Differentiating again (omitting intermediary steps), $u_G''(x)=A(x)u_1''(x)+B(x)u_2''(x)+A'(x)u_1'(x)+B'(x)u_2'(x)$. Now we can write the action of $L$ upon $u_G$ as $Lu_G=A(x)Lu_1(x)+B(x)Lu_2(x)+A'(x)u_1'(x)+B'(x)u_2'(x)$. Since $u_1$ and $u_2$ are solutions, $Lu_G=A'(x)u_1'(x)+B'(x)u_2'(x)$. We have the system of equations $\begin{bmatrix}u_1(x)&u_2(x)\\u_1'(x)&u_2'(x)\end{bmatrix}\begin{bmatrix}A'(x)\\B'(x)\end{bmatrix}=\begin{bmatrix}0\\f\end{bmatrix}$. Expanding, $\begin{bmatrix}A'(x)u_1(x)+B'(x)u_2(x)\\A'(x)u_1'(x)+B'(x)u_2'(x)\end{bmatrix}=\begin{bmatrix}0\\f\end{bmatrix}$. So the above system determines precisely the conditions $A'(x)u_1(x)+B'(x)u_2(x)=0$ and $A'(x)u_1'(x)+B'(x)u_2'(x)=Lu_G=f$. Examples: We seek $A(x)$ and $B(x)$ from these conditions, so, given $\begin{bmatrix}u_1(x)&u_2(x)\\u_1'(x)&u_2'(x)\end{bmatrix}\begin{bmatrix}A'(x)\\B'(x)\end{bmatrix}=\begin{bmatrix}0\\f\end{bmatrix}$ we can solve for $(A'(x), B'(x))^T$, so $\begin{bmatrix}A'(x)\\B'(x)\end{bmatrix}=\begin{bmatrix}u_1(x)&u_2(x)\\u_1'(x)&u_2'(x)\end{bmatrix}^{-1}\begin{bmatrix}0\\f\end{bmatrix}=\frac{1}{W}\begin{bmatrix}u_2'(x)&-u_2(x)\\-u_1'(x)&u_1(x)\end{bmatrix}\begin{bmatrix}0\\f\end{bmatrix}$, where $W$ denotes the Wronskian of $u_1$ and $u_2$. (We know that $W$ is nonzero, from the assumption that $u_1$ and $u_2$ are linearly independent.) So, $A'(x)=-\frac{1}{W}u_2(x)f(x)$ and $B'(x)=\frac{1}{W}u_1(x)f(x)$, hence $A(x)=-\int\frac{1}{W}u_2(x)f(x)\,dx$ and $B(x)=\int\frac{1}{W}u_1(x)f(x)\,dx$. While homogeneous equations are relatively easy to solve, this method allows the calculation of the coefficients of the general solution of the inhomogeneous equation, and thus the complete general solution of the inhomogeneous equation can be determined. Examples: Note that $A(x)$ and $B(x)$ are each determined only up to an arbitrary additive constant (the constant of integration). Adding a constant to $A(x)$ or $B(x)$ does not change the value of $Lu_G(x)$ because the extra term is just a linear combination of $u_1$ and $u_2$, which is a solution of $L$ by definition.
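The specific second-order example above can be verified symbolically; the following sketch uses sympy to rebuild u1, u2, the Wronskian, A(x) and B(x) (with the constants of integration set to zero) and to check that A·u1 + B·u2 really solves y'' + 4y' + 4y = cosh x:

```python
from sympy import symbols, exp, cosh, integrate, diff, simplify, Matrix

x = symbols('x')
u1, u2 = exp(-2*x), x*exp(-2*x)
b = cosh(x)

W = Matrix([[u1, u2], [diff(u1, x), diff(u2, x)]]).det()   # Wronskian, should be exp(-4x)
A = -integrate(u2 * b / W, x)                               # constants of integration taken as 0
B = integrate(u1 * b / W, x)
yp = A * u1 + B * u2

residual = simplify(diff(yp, x, 2) + 4*diff(yp, x) + 4*yp - b)
print("Wronskian:", simplify(W))
print("residual of the particular solution:", residual)     # expected: 0
```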
**360-day calendar** 360-day calendar: The 360-day calendar is a method of measuring durations used in financial markets, in computer models, in ancient literature, and in prophetic literary genres. It is based on merging the three major calendar systems into one complex clock, with the 360-day year derived from the average year of the lunar and the solar: (365.2425 (solar) + 354.3829 (lunar))/2 = 719.6254/2 = 359.8127 days, rounding to 360. A 360-day year consists of 12 months of 30 days each, so to derive such a calendar from the standard Gregorian calendar, certain days are skipped. 360-day calendar: For example, the 27th of June (Gregorian calendar) would be the 4th of July in the USA. Ancient Calendars: Ancient calendars around the world initially used a 360-day calendar. Rome According to Plutarch's Parallel Lives, Romans initially used a calendar which had 360 days, with varying lengths of months. However, Macrobius' Saturnalia and Censorinus' The Birthday Book claim that the original Roman calendar had 304 days split into 10 months. India The Rig Veda describes a calendar with twelve months and 360 days. Mesoamerica In the Mayan Long Count calendar, the equivalent of the year, the tun, was 360 days. Egypt Ancient Egyptians also used a 360-day calendar. One myth tells of how the extra 5 days were added. Ancient Calendars: A long time ago, Ra, who was god of the sun, ruled the earth. During this time, he heard of a prophecy that Nut, the sky goddess, would give birth to a son who would depose him. Therefore Ra cast a spell to the effect that Nut could not give birth on any day of the year, which was then itself composed of precisely 360 days. To help Nut to counter this spell, the wisdom god Thoth devised a plan. Ancient Calendars: Thoth went to the moon god Khonsu and asked that he play a game known as Senet, requesting that they play for the very light of the moon itself. Feeling confident that he would win, Khonsu agreed. However, in the course of playing he lost the game several times in succession, such that Thoth ended up winning from the moon a substantial measure of its light, equal to about five days. Ancient Calendars: With this in hand, Thoth then took this extra time, and gave it to Nut. In doing so this had the effect of increasing the earth's number of days per year, allowing Nut to give birth to a succession of children; one upon each of the extra 5 days that were added to the original 360. And as for the moon, losing its light had quite an effect upon it, for it became weaker and smaller in the sky. Being forced to hide itself periodically to recuperate, it could only show itself fully for a short period of time before having to disappear to regain its strength. Financial use: A duration is calculated as an integral number of days between a start date A and an end date B. The differences in years, months and days are usually calculated separately: $\text{days} = 360\,(B_y - A_y) + 30\,(B_m - A_m) + (B_d - A_d),\quad A \le B$. There are several methods commonly available which differ in the way that they handle the cases where the months are not 30 days long, i.e. how they adjust dates: European method (30E/360) If either date A or B falls on the 31st of the month, that date will be changed to the 30th. Financial use: Where date B falls on the last day of February, the actual date B will be used. All months are considered to last 30 days and hence a full year has 360 days, but another source says that February has its actual number of days.
US/NASD method (30US/360):
1. If both date A and date B fall on the last day of February, then date B will be changed to the 30th.
2. If date A falls on the 31st of a month or the last day of February, then date A will be changed to the 30th.
3. If date A falls on the 30th of a month after applying (2) above and date B falls on the 31st of a month, then date B will be changed to the 30th.
All months are considered to last 30 days and hence a full year has 360 days.

ISDA method:
1. If date A falls on the 31st of a month, then date A will be changed to the 30th.
2. If date A falls on the 30th of the month after applying the rule above, and date B falls on the 31st of the month, then date B will be changed to the 30th.
All months are considered to last 30 days except February, which has its actual length. Any full year, however, always counts for 360 days.

BMA/PSA method:
1. If date A falls on the 31st of a month or the last day of February, then date A will be changed to the 30th.
2. If date A falls on the 30th of the month after applying the rule above, and date B falls on the 31st of the month, then date B will be changed to the 30th.
All months are considered to last 30 days and hence a full year has 360 days.

Alternative European method (30E+/360):
1. If date A falls on the 31st of a month, then date A will be changed to the 30th.
2. If date B falls on the 31st of a month, then date B will be changed to the 1st of the following month.
3. Where date B falls on the last day of February, the actual date B will be used.
All months are considered to last 30 days and hence a full year has 360 days.
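To make one of these conventions concrete, here is a minimal Python sketch of the European (30E/360) rule described above; the function name is illustrative, and the February caveat mentioned for some sources is deliberately ignored.

```python
from datetime import date

def days_30e_360(start: date, end: date) -> int:
    """European 30E/360 day count, following the rules sketched above."""
    d1, d2 = start.day, end.day
    # If either date falls on the 31st of a month, treat it as the 30th.
    if d1 == 31:
        d1 = 30
    if d2 == 31:
        d2 = 30
    return (360 * (end.year - start.year)
            + 30 * (end.month - start.month)
            + (d2 - d1))

# A nominal half year is exactly 180 days under this convention.
print(days_30e_360(date(2023, 1, 31), date(2023, 7, 31)))  # 180
```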
**JLab** JLab: GroovyLab, formerly jLab, is a numerical computational environment implemented in Java. The main scripting engine of jLab is GroovySci, an extension of Groovy. Additionally, the interpreted J-Scripts (similar to MATLAB) and dynamic linking to Java class code are supported. The jLab environment aims to provide a MATLAB/Scilab-like scientific computing platform that is supported by scripting engines implemented in the Java language. JLab: In the current implementation of jLab two scripting engines coexist: the interpreted j-Script scripting engine and the compiled Groovy scripting engine. The latter (i.e. Groovy) seems to be the preferred choice, since it is much faster, can execute Java code directly using only the familiar Java packaging rules, and is a feature-rich language, i.e. Groovy enhanced with MATLAB-style matrix operations and a surrounding support environment.
**Aminolevulinate transaminase** Aminolevulinate transaminase: In enzymology, an aminolevulinate transaminase (EC 2.6.1.43) is an enzyme that catalyzes the chemical reaction 5-aminolevulinate + pyruvate ⇌ 4,5-dioxopentanoate + L-alanine. Thus, the two substrates of this enzyme are 5-aminolevulinate and pyruvate, whereas its two products are 4,5-dioxopentanoate and L-alanine. Aminolevulinate transaminase: This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is 5-aminolevulinate:pyruvate aminotransferase. Other names in common use include aminolevulinate aminotransferase, gamma,delta-dioxovalerate aminotransferase, gamma,delta-dioxovaleric acid transaminase, 4,5-dioxovalerate aminotransferase, 4,5-dioxovaleric acid transaminase, 4,5-dioxovaleric transaminase, 5-aminolevulinic acid transaminase, alanine-gamma,delta-dioxovalerate aminotransferase, alanine-dioxovalerate aminotransferase, alanine:4,5-dioxovalerate aminotransferase, aminolevulinic acid transaminase, dioxovalerate transaminase, L-alanine-4,5-dioxovalerate aminotransferase, L-alanine:4,5-dioxovaleric acid transaminase, L-alanine:dioxovalerate transaminase, DOVA transaminase, and 4,5-dioxovaleric acid aminotransferase. This enzyme participates in porphyrin and chlorophyll metabolism. It employs one cofactor, pyridoxal phosphate.
**Andrew M. Stuart** Andrew M. Stuart: Andrew M. Stuart is a British and American mathematician, working in applied and computational mathematics. In particular, his research has focused on the numerical analysis of dynamical systems, applications of stochastic differential equations and stochastic partial differential equations, the Bayesian approach to inverse problems, data assimilation, and machine learning. Education: Andrew Stuart graduated in Mathematics from Bristol University in 1983, and then obtained his DPhil from the Oxford University Computing Laboratory in 1986. Career: After postdoctoral research in applied mathematics at Oxford and MIT, Stuart held permanent positions at the University of Bath (1989–1992), in mathematics, at Stanford University (1991–1999), in engineering, and at Warwick University (1999–2016), in mathematics. He is currently Bren Professor of Computing and Mathematical Sciences at the California Institute of Technology. Honors and awards: Stuart has been honored with several awards, including the 1989 Leslie Fox Prize for Numerical Analysis, the Monroe H. Martin Prize from the Institute for Physical Science and Technology at the University of Maryland, College Park, the SIAM James Wilkinson Prize, the SIAM Germund Dahlquist Prize in 1997, the Whitehead Prize from the London Mathematical Society in 2000, and the SIAM J.D. Crawford Prize in 2007. He was an invited speaker at the International Council for Industrial and Applied Mathematics (ICIAM) in Zurich in 2007 and Tokyo in 2023, and at the International Congress of Mathematicians (ICM) in Seoul, 2014. In 2009 he was elected an inaugural fellow of the Society for Industrial and Applied Mathematics (SIAM), and in 2020 he was elected a Fellow of the Royal Society. In 2022, he was named a Vannevar Bush Faculty Fellow. Publications: The majority of Stuart's published work is in archived journals. In addition to mathematics research published in archival journals, Stuart is also the author of several books in mathematics, including a research monograph concerning Dynamical Systems and Numerical Analysis, a research text on Multiscale Methods, a graduate text on Continuum Mechanics, a text on Data Assimilation, and a text on Inverse Problems. Publications: Stuart, A., Humphries, A.R. (1998). Dynamical Systems and Numerical Analysis. Cambridge University Press. ISBN 978-0-521-64563-8 G. A. Pavliotis, Andrew Stuart (2008). Multiscale Methods: Averaging and Homogenization. Springer Science & Business Media. ISBN 978-0-387-73828-4 Gonzalez, O., & Stuart, A. (2008). A First Course in Continuum Mechanics (Cambridge Texts in Applied Mathematics). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511619571 Kody Law, Andrew Stuart, Konstantinos Zygalakis (2015). Data Assimilation: A Mathematical Introduction. Springer. ISBN 978-3-319-20325-6 Sanz-Alonso, D., Stuart, A., & Taeb, A. (2023). Inverse Problems and Data Assimilation (London Mathematical Society Student Texts). Cambridge: Cambridge University Press. ISBN 978-1-009-41431-9
**Intelligent driver model** Intelligent driver model: In traffic flow modeling, the intelligent driver model (IDM) is a time-continuous car-following model for the simulation of freeway and urban traffic. It was developed by Treiber, Hennecke and Helbing in 2000 to improve upon results provided with other "intelligent" driver models such as Gipps' model, which loses realistic properties in the deterministic limit. Model definition: As a car-following model, the IDM describes the dynamics of the positions and velocities of single vehicles. For vehicle α, x_α denotes its position at time t, and v_α its velocity. Furthermore, l_α gives the length of the vehicle. To simplify notation, we define the net distance s_α := x_{α−1} − x_α − l_{α−1}, where α−1 refers to the vehicle directly in front of vehicle α, and the velocity difference, or approaching rate, Δv_α := v_α − v_{α−1}. For a simplified version of the model, the dynamics of vehicle α are then described by the following two ordinary differential equations:

\dot{x}_\alpha = \frac{dx_\alpha}{dt} = v_\alpha
\dot{v}_\alpha = \frac{dv_\alpha}{dt} = a\left(1 - \left(\frac{v_\alpha}{v_0}\right)^{\delta} - \left(\frac{s^{*}(v_\alpha, \Delta v_\alpha)}{s_\alpha}\right)^{2}\right)

with

s^{*}(v_\alpha, \Delta v_\alpha) = s_0 + v_\alpha T + \frac{v_\alpha \Delta v_\alpha}{2\sqrt{ab}}.

v_0, s_0, T, a, and b are model parameters which have the following meaning:
desired velocity v_0: the velocity the vehicle would drive at in free traffic;
minimum spacing s_0: a minimum desired net distance (a car cannot move if the distance from the car in front is not at least s_0);
desired time headway T: the minimum possible time to the vehicle in front;
acceleration a: the maximum vehicle acceleration;
comfortable braking deceleration b: a positive number.
The exponent δ is usually set to 4. Model characteristics: The acceleration of vehicle α can be separated into a free road term and an interaction term:

a^{\mathrm{free}}_\alpha = a\left(1 - \left(\frac{v_\alpha}{v_0}\right)^{\delta}\right), \qquad a^{\mathrm{int}}_\alpha = -a\left(\frac{s^{*}(v_\alpha, \Delta v_\alpha)}{s_\alpha}\right)^{2} = -a\left(\frac{s_0 + v_\alpha T}{s_\alpha} + \frac{v_\alpha \Delta v_\alpha}{2\sqrt{ab}\, s_\alpha}\right)^{2}.

Free road behavior: On a free road, the distance to the leading vehicle s_α is large and the vehicle's acceleration is dominated by the free road term, which is approximately equal to a for low velocities and vanishes as v_α approaches v_0. Therefore, a single vehicle on a free road will asymptotically approach its desired velocity v_0. Behavior at high approaching rates: For large velocity differences, the interaction term is governed by

-a\left(\frac{v_\alpha \Delta v_\alpha}{2\sqrt{ab}\, s_\alpha}\right)^{2} = -\frac{(v_\alpha \Delta v_\alpha)^{2}}{4 b s_\alpha^{2}}.

This leads to a driving behavior that compensates velocity differences while trying not to brake much harder than the comfortable braking deceleration b. Behavior at small net distances: For negligible velocity differences and small net distances, the interaction term is approximately equal to −a(s_0 + v_α T)²/s_α², which resembles a simple repulsive force such that small net distances are quickly enlarged towards an equilibrium net distance. Solution example: Let's assume a ring road with 50 vehicles. Then, vehicle 1 will follow vehicle 50. Initial speeds are given and since all vehicles are considered equal, the vector ODEs are further simplified to:

\dot{x} = \frac{dx}{dt} = v
\dot{v} = \frac{dv}{dt} = a\left(1 - \left(\frac{v}{v_0}\right)^{\delta} - \left(\frac{s^{*}(v, \Delta v)}{s}\right)^{2}\right)

with

s^{*}(v, \Delta v) = s_0 + vT + \frac{v \Delta v}{2\sqrt{ab}}.

For this example, parameter values in line with the original calibrated model are used. The two ordinary differential equations are solved using Runge–Kutta methods of orders 1, 3, and 5 with the same time step, to show the effects of computational accuracy in the results. Solution example: This comparison shows that the IDM does not exhibit extremely unrealistic properties such as negative velocities or vehicles sharing the same space, even for a low-order method such as Euler's method (RK1).
However, traffic wave propagation is not represented as accurately as in the higher-order methods, RK3 and RK5. These last two methods show no significant differences, which leads to the conclusion that the IDM reaches acceptable results from RK3 upwards and that no additional computational effort is needed. Nonetheless, when heterogeneous vehicles and both jam distance parameters are introduced, this observation may no longer suffice.
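As a rough illustration of how the model equations above can be integrated on a ring road, here is a minimal Python sketch using forward Euler (RK1). The parameter values are typical IDM defaults assumed for illustration, not the calibrated values referred to in the example.

```python
import numpy as np

# Typical IDM parameters (illustrative defaults, not the calibrated values above)
v0, T, a, b, delta, s0, length = 30.0, 1.5, 1.0, 3.0, 4, 2.0, 5.0

def idm_acceleration(x, v, road_length):
    """IDM acceleration for every vehicle on a ring road (leader of car i is car i+1)."""
    s = np.roll(x, -1) - x - length   # net distance to the leader
    s[-1] += road_length              # close the ring for the last vehicle
    dv = v - np.roll(v, -1)           # approaching rate
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

# 50 vehicles spread evenly on a 2 km ring road, starting at rest
n, road = 50, 2000.0
x = np.linspace(0.0, road, n, endpoint=False)
v = np.zeros(n)

dt = 0.1
for _ in range(int(600 / dt)):        # simulate 10 minutes with forward Euler
    acc = idm_acceleration(x, v, road)
    v = np.maximum(v + acc * dt, 0.0) # crude guard against negative speeds
    x = (x + v * dt) % road

print(f"mean speed after 10 minutes: {v.mean():.2f} m/s")
```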
**Points per game** Points per game: Points per game, often abbreviated PPG, is the average number of points scored by a player per game played in a sport, over the course of a series of games, a whole season, or a career. It is calculated by dividing the total number of points by the number of games played. The terminology is often used in basketball and ice hockey. For a description of sports points, see points for ice hockey or points for basketball. In games divided into fixed time periods, especially those in which a player may exit and re-enter the game multiple or an unlimited number of times, a player may receive the same credit (in this context, a liability) for participation in a game regardless of how long (i.e., for what portion of the game clock's elapsing) they were actually on the field or court. For this reason, the points-per-game statistic may understate the contribution of players who are highly effective but used only in certain specific "pinch" or "clutch" scenarios, such that a points-per-unit-time figure (e.g., "points per 48 minutes" in the case of professional basketball) may better represent their effectiveness within the context in which a coach or manager plays them. Although the points-per-game statistic has the advantage of factoring in the breadth of scenarios in which the player is effective, in that a player effective in many different scenarios will play more minutes per game and therefore contribute more to the team's overall performance, it still fails to distinguish between an ineffective player, an effective "pinch"/"clutch" offensive player, and a player assuming a primarily defensive role in a position whose title does not necessarily make the nature of their role obvious (e.g., basketball forward and star rebounder Dennis Rodman). Points per game: PPG has also been used as an alternative method for ranking association football teams, particularly during the COVID-19 pandemic, as a way to better compare performance when there is a differential in matches played (and thus traditional point-scoring is unsuitable). Major League Soccer used it to decide the standings for the 2020 season, as some teams had played as few as 15 of their planned 23 regular season matches.
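As a trivial arithmetic sketch of the definition above (the numbers are illustrative only):

```python
def points_per_game(total_points: float, games_played: int) -> float:
    """Average points scored per game played."""
    if games_played <= 0:
        raise ValueError("games_played must be positive")
    return total_points / games_played

# Example: 2,832 points over 82 games is about 34.5 PPG.
print(round(points_per_game(2832, 82), 1))  # 34.5
```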
**Nyquist filter** Nyquist filter: A Nyquist filter is an electronic filter used in TV receivers to equalize the video characteristics. The filter is named after the Swedish-born American engineer Harry Nyquist (1889–1976). Vestigial Side Band (VSB): In analogue TV broadcasting the visual radio frequency (RF) signal is produced by amplitude modulation (AM), i.e., the video signal (VF) modulates the amplitude of the carrier. In AM two symmetric sidebands appear, containing identical information. So the RF bandwidth is two times the VF bandwidth. For example, the RF bandwidth of a VF signal with a bandwidth of 4.2 MHz is 8.4 MHz (System M). In order to use the broadcast band more efficiently, one sideband can be suppressed. However, it is impossible to suppress one sideband completely without affecting the other. Furthermore, a very sharp edge filter characteristic causes intolerable delay problems. So as a compromise, a standard filter is used which reduces a considerable portion of one sideband (the lower sideband in RF) without causing extensive delay problems. Such a filter is known as a vestigial sideband (VSB) filter. Example of a VSB: In System B the VF bandwidth is 5 MHz. Without any suppression, the corresponding visual RF bandwidth would be 10 MHz. (Here, the presence of the aural signal is omitted for the sake of simplification.) But by using a VSB filter, the visual RF bandwidth is reduced to 6.25 MHz; 5 MHz in one sideband and 1.25 MHz in the other sideband. (The filter characteristic in the suppressed sideband is such that between 0 and 0.75 MHz there is no suppression.) By this method, 3.75 MHz is economised, which means that for the same band allocated for broadcasting, the number of TV services increases approximately one-and-a-half-fold. Demodulation problems: When a VSB filter is used in broadcasting, a problem arises during demodulation. While the 0–0.75 MHz region has two sidebands, the region beyond 1.25 MHz has only one sideband (i.e., the 0–0.75 MHz region is double sideband and the region beyond 1.25 MHz is single sideband). Thus, the level of the demodulated signal in the 0–0.75 MHz region is 6 dB higher than the level in the region beyond 1.25 MHz. Since high-frequency components of the VF signal correspond to fine details and the color subcarrier, demodulation results in fading of the detailed portions and the color saturation of the picture with respect to less detailed portions of the picture. Nyquist filter: In order to equalise the low-frequency and high-frequency components of the VF signal, a filter named a Nyquist filter is used in receivers. This filter, which is used before demodulation, is actually a low-pass filter with 6 dB suppression at the intermediate frequency (IF) carrier. Thus, the level of the double-sideband portion of the VF signal is suppressed and the original band characteristic is reconstructed at the output of the demodulator. Tolerance masks of the Nyquist filter: The specifications are given for the sound-trap-off case. System B (G or H in the UHF band) refers to the broadcast system used in most countries; System M refers to the broadcast system used in America. Separate tolerance-mask tables are specified for System B and System M (not reproduced here).
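The bandwidth arithmetic in the System B example can be checked with a few lines of Python (values taken from the example above; the variable names are ours):

```python
# Bandwidth arithmetic for the System B example (illustrative only).
vf_bandwidth = 5.0      # MHz, video (VF) bandwidth
vestige = 1.25          # MHz, retained portion of the suppressed sideband

dsb_bandwidth = 2 * vf_bandwidth        # full double-sideband AM: 10 MHz
vsb_bandwidth = vf_bandwidth + vestige  # vestigial sideband: 6.25 MHz

print(dsb_bandwidth - vsb_bandwidth)    # 3.75 MHz saved per channel
print(dsb_bandwidth / vsb_bandwidth)    # 1.6, i.e. roughly 1.5x more channels
```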
**SPRED2** SPRED2: Sprouty-related, EVH1 domain-containing protein 2 is a protein that in humans is encoded by the SPRED2 gene. Function: SPRED2 is a member of the Sprouty (see SPRY1)/SPRED family of proteins that regulate growth factor-induced activation of the MAP kinase cascade (see MAPK1).
**Lists of stars by constellation** Lists of stars by constellation: All stars but one can be associated with an IAU (International Astronomical Union) constellation. IAU constellations are areas of the sky. Although there are only 88 IAU constellations, the sky is actually divided into 89 irregularly shaped boxes as the constellation Serpens is split into two separate sections, Serpens Caput (the snake's head) to the west and Serpens Cauda (the snake's tail) to the east. Lists of stars by constellation: The only star that does not belong to a constellation is the Sun. The Sun travels through the 13 constellations along the ecliptic, the 12 of the Zodiac and Ophiuchus. Lists of stars by constellation: Among the remaining stars, the nearer ones exhibit proper motion, so it is only a matter of time before some of them cross a constellation boundary and switch constellations as a consequence. In 1992, Rho Aquilae became the first star to have its Bayer designation "invalidated" by moving to a neighbouring constellation—it is now a star of the constellation Delphinus. Lists of stars by constellation: Stars are listed in the appropriate lists for the constellation, as follows: Criteria of inclusion: Stars named with a Bayer, Flamsteed, HR, or Draper (not from the supplements) designation. Stellar extremes or otherwise noteworthy stars. Notable variable stars (prototypes, rare or otherwise important). Nearest stars (<20 ly). Stars with planets. Notable neutron stars, black holes, and other exotic stellar objects/remnants.Note that these lists are currently unfinished, and there may be stars missing that satisfy these conditions. If you come across one, please feel free to add it.
**Theta1 Orionis B** Theta1 Orionis B: Theta1 Orionis B (θ1 Orionis B), also known as BM Orionis, is a multiple star system containing at least five members. It is also one of the main stars of the Trapezium Cluster, with the others being A, C, and D. The primary is an eclipsing variable and one of the youngest known eclipsing binary systems. Variability: θ1 Orionis B varies in brightness and has been given the variable star designation BM Orionis. Every 6.47 days, it drops from magnitude 7.90 to a minimum of magnitude 8.65 for 8–9 hours. It was quickly classified as an eclipsing variable showing total eclipses of the brighter component, an Algol-type variable. In between the primary eclipses, there are slight brightness variations attributed to reflection effects, and a shallow secondary eclipse of less than a tenth of a magnitude. Although the light curve appears straightforward, it shows variations in the shape of the eclipse from cycle to cycle and the properties of the eclipsing component cannot easily be reconciled with the light curve. Mini-cluster: θ1 Orionis B has been resolved into four stars. Conventionally, the brightest star is known as B1, while the companions are known as B2, B3, and B4. B2 and B3 are only just over 0.1" apart, and the two are 0.9" from B1. B2 is approximately two magnitudes fainter than B1, and B3 another magnitude fainter. In between, B4 is 0.6" from B1 and five magnitudes fainter. The brightest component, B1, is known to be an eclipsing binary and its unresolved companion is generally called B5. A third component of the eclipsing system has been proposed to account for unusual variations in the timing of the eclipses, but is not yet widely accepted. The unseen companion is likely to be a pre-main-sequence star with an age of between 10,000 and 100,000 years, making it one of the least-evolved stars known. As of 2013, the pair were considered to be the youngest known eclipsing binary. The stars making up θ1 Orionis B are gravitationally bound, but their configuration is likely to be unstable and will eventually decay. Only the close B1/B5 binary will remain after a few million years. Properties: θ1 Orionis B1 is a hot main sequence star with a spectral type of B1. Its spectroscopic companion B5 is estimated to have a spectral type of G2 III from observations during the total eclipses. The unusual and changeable eclipses are thought to be caused by a translucent disc surrounding the secondary star. It is seen nearly edge-on and variations in its opacity cause differences in the light curve shape.
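To put the quoted eclipse depth in perspective, the drop from magnitude 7.90 to 8.65 corresponds to the system fading to roughly half its out-of-eclipse brightness; a quick illustrative check in Python:

```python
# Convert the primary-eclipse depth of BM Orionis into a flux ratio.
m_out, m_min = 7.90, 8.65            # magnitudes quoted above
delta_m = m_min - m_out              # 0.75 mag eclipse depth
flux_ratio = 10 ** (-0.4 * delta_m)  # Pogson's magnitude-flux relation
print(f"{flux_ratio:.2f}")           # ~0.50: about half the normal brightness
```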
**Bittering agent** Bittering agent: A bittering agent is a flavoring agent added to a food or beverage to impart a bitter taste, possibly in addition to other effects. While many substances are bitter to a greater or lesser degree, a few substances are used specifically for their bitterness, especially to balance other flavors, such as sweetness. Notable beverage examples include caffeine, found naturally in tea and coffee and added to many soft drinks, hops in beer, and quinine in tonic water. Bittering agent: Food examples include bitter melon, which may be mixed into a stir fry or soup for its bitter flavor. Bittering agent: Potent bittering agents may also be added to dangerous products as aversive agents to make them foul tasting, so as to prevent accidental poisoning. Examples include anti-freeze, household cleaning products and pesticides such as slug pellets. In general, dangerous products with bright colours, which may be appealing to children, often contain agents such as denatonium. However, the efficacy of using bittering agents for this purpose is not conclusive. Beer: Prior to the introduction of hops, many other bitter herbs and flowers were used as bittering agents in beer, in a mixture called gruit, which could include dandelion, burdock root, marigold, horehound (the German name for horehound means "mountain hops"), ground ivy, heather, and also bog myrtle. More recently, some Chinese and Okinawan beers use bitter melon as a bittering agent. Other substances: Various other substances are used, including aloin and gesho (used in Tej, Ethiopian honey wine). Other uses: Other prominent uses of bittering agents include bitters (used as digestifs or flavorings) and dandelion and burdock (a traditional British soft drink).
**Gabriel synthesis** Gabriel synthesis: The Gabriel synthesis is a chemical reaction that transforms primary alkyl halides into primary amines. Traditionally, the reaction uses potassium phthalimide. The reaction is named after the German chemist Siegmund Gabriel. The Gabriel reaction has been generalized to include the alkylation of sulfonamides and imides, followed by deprotection, to obtain amines (see Alternative Gabriel reagents). The alkylation of ammonia is often an unselective and inefficient route to amines. In the Gabriel method, phthalimide anion is employed as a surrogate of H2N−. Traditional Gabriel synthesis: In this method, the sodium or potassium salt of phthalimide is N-alkylated with a primary alkyl halide to give the corresponding N-alkylphthalimide. Upon workup by acidic hydrolysis the primary amine is liberated as the amine salt. Alternatively the workup may be via the Ing–Manske procedure, involving reaction with hydrazine. This method produces a precipitate of phthalhydrazide (C6H4(CO)2N2H2) along with the primary amine: C6H4(CO)2NR + N2H4 → C6H4(CO)2N2H2 + RNH2. Gabriel synthesis generally fails with secondary alkyl halides. The first technique often produces low yields or side products. Separation of phthalhydrazide can be challenging. For these reasons, other methods for liberating the amine from the phthalimide have been developed. Even with the use of the hydrazinolysis method, the Gabriel method suffers from relatively harsh conditions. Alternative Gabriel reagents: Many alternative reagents have been developed to complement the use of phthalimides. Most such reagents (e.g. the sodium salt of saccharin and di-tert-butyl-iminodicarboxylate) are electronically similar to the phthalimide salts, consisting of imido nucleophiles. In terms of their advantages, these reagents hydrolyze more readily, extend the reactivity to secondary alkyl halides, and allow the production of secondary amines.
**Invincible ignorance fallacy** Invincible ignorance fallacy: The invincible ignorance fallacy, also known as argument by pigheadedness, is a deductive fallacy of circularity where the person in question simply refuses to believe the argument, ignoring any evidence given. It is not so much a fallacious tactic in argument as it is a refusal to argue in the proper sense of the word. The method used in this fallacy is either to make assertions with no consideration of objections or to simply dismiss objections by calling them excuses, conjecture, etc. or saying that they are proof of nothing, all without actually demonstrating how the objections fit these terms. It is similar to the ad lapidem fallacy, in which the person rejects all the evidence and logic presented, without providing any evidence or logic that could lead to a different conclusion. History: The term invincible ignorance has its roots in Catholic theology, where, as the opposite of the term vincible ignorance, it is used to refer to the state of persons (such as pagans and infants) who are ignorant of the Christian message because they have not yet had an opportunity to hear it. The first Pope to use the term officially seems to have been Pope Pius IX in the allocution Singulari Quadam (9 December 1854) and the encyclicals Singulari Quidem (17 March 1856) and Quanto Conficiamur Moerore (10 August 1863). The term, however, is far older than that. Aquinas, for instance, uses it in his Summa Theologica (written 1265–1274), and discussion of the concept can be found as far back as Origen (3rd century). When and how the term was taken by logicians to refer to the very different state of persons who deliberately refuse to attend to evidence remains unclear, but one of its first uses was in the book Fallacy: The Counterfeit of Argument by W. Ward Fearnside and William B. Holther in 1959.
**Halstead-Reitan Neuropsychological Battery** Halstead-Reitan Neuropsychological Battery: The Halstead-Reitan Neuropsychological Test Battery (HRNB) and allied procedures is a comprehensive suite of neuropsychological tests used to assess the condition and functioning of the brain, including etiology, type (diffuse vs. specific), localization and lateralization of brain injury. The HRNB was first constructed by Ward C. Halstead, who was chairman of the Psychology Department at the University of Chicago, together with his doctoral student, Ralph Reitan (who later extended Halstead's Test Battery at the Indiana University Medical Center). A major aim of administering the HRNB to patients was, if possible, to lateralize a lesion to either the left or right cerebral hemisphere by comparing the functioning on both sides of the body on a variety of tests such as the Suppression or Sensory Imperception Test, the Finger Agnosia Test, Finger Tip Writing, the Finger Tapping Test, and the Tactual Performance Test. One difficulty with the HRNB was its excessive administration time (up to 3 hours or more in some brain-injured patients). In particular, administration of the Halstead Category Test was lengthy, so subsequent attempts were made to construct reliable and valid short-forms. Included: The HRNB includes:
Wechsler Intelligence Scale
Aphasia Screening Test
Trail-Making Test, parts A and B (measures time to connect a sequence of numbers (Part A) or alternating numbers and letters (Part B))
Halstead Category Test (a test of abstract concept learning ability, comprising seven subtests which form several factors: a Counting factor (subtests I and II), a Spatial Positional Reasoning factor (subtests III, IV, and VII), a Proportional Reasoning factor (subtests V, VI, and VII), and an Incidental Memory factor (subtest VII))
Tactual Performance Test
Seashore Rhythm Test
Speech Sounds Perception Test
Finger Tapping Test
Sensory Perceptual Examination
Lateral Dominance Examination
**NLRP1** NLRP1: NLRP1 encodes NACHT, LRR, FIIND, CARD domain and PYD domains-containing protein 1 in humans. NLRP1 was the first protein shown to form an inflammasome. NLRP1 is expressed by a variety of cell types, which are predominantly epithelial or hematopoietic. The expression is also seen within glandular epithelial structures including the lining of the small intestine, stomach, airway epithelia and in hairless or glabrous skin. NLRP1 polymorphisms are associated with skin extra-intestinal manifestations in CD. Its highest expression was detected in human skin, in psoriasis and in vitiligo. Polymorphisms of NLRP1 were found in lupus erythematosus and diabetes type 1. Variants of mouse NLRP1 were found to be activated upon N-terminal cleavage by the anthrax lethal factor protease. Function: This gene encodes a member of the Ced-4 family of apoptosis proteins. Ced-family members contain a caspase recruitment domain (CARD) and are known to be key mediators of programmed cell death. The encoded protein contains a distinct N-terminal pyrin-like motif, which is possibly involved in protein-protein interactions. The NLRP1 protein interacts strongly with caspase 2 and weakly with caspase 9. Overexpression of this gene was demonstrated to induce pyroptosis in cells. Multiple alternatively spliced transcript variants encoding distinct isoforms have been found for this gene, but the biological validity of some variants has not been determined. Mechanism of activation: NLRP1 activates an antibacterial or antiviral immune response. The antibacterial immune response compensates for the loss of the MAP kinase response. Humans produce NLRP1, but human NLRP1 is not activated by lethal factor. NLRP1 can be activated by proteolytic cleavage, resulting in the removal of an auto-inhibitory PYD and release of the CARD domain, responsible for the recruitment and activation of pro-caspase-1 into the active form of caspase-1. Human NLRP1 activation can be elicited by several means including enteroviral 3C proteases. Its function in immunity is just beginning to be understood. Interactions: NLRP1 has been shown to interact with caspase 9 and APAF1. Via its FIIND domain, NLRP1 interacts directly with DPP9 and DPP8, which are needed to prevent NLRP1 activation. Loss of DPP9 in humans and mice results in NLRP1 activation. Variants of NLRP1 in human: As published by Bruno Reversade and colleagues, several Mendelian diseases caused by NLRP1 germline mutations have been described. These include Multiple Self-healing Palmoplantar Carcinoma, familial Nikam's disease and Autoinflammation with Arthritis and Dyskeratosis. Mutations in NLRP1, whether dominant or recessive, tend to be gain-of-function alleles that trigger inflammasome signaling with IL1B and IL18 release. Variants of NLRP1 in mice: Mice have three paralogs of the Nlrp1 gene (Nlrp1a, b, c). Nlrp1c is a pseudogene. Mouse NLRP1B is not activated by a receptor-ligand type mechanism. NLRP1B variants from certain inbred mouse strains, BALB/c and 129, can be activated by the lethal factor (LF) protease. The lethal factor protease is produced and secreted by Bacillus anthracis, the agent of anthrax. Together with protective antigen (PA), LF forms a bipartite toxin, Lethal Toxin. The role of PA is to form a translocation channel that delivers LF into the host cell cytosol, where LF disrupts the immune response by cleaving and inactivating MAP kinases.
LF also directly cleaves NLRP1B proximal to its N-terminus, and this cleavage is necessary and sufficient for NLRP1B inflammasome formation and CASP1 activation. Activation of NLRP1B-dependent inflammasome responses contributes to host defense through mechanisms involving IL-1β and neutrophils. NLRP1B can thus function as a sensor of bacterial proteases, so that immune responses are activated specifically by virulence factors. It is not clear what stimuli might activate NLRP1A, the other known functional murine NLRP1 paralog. One study identified a mouse carrying a missense gain-of-function mutation in NLRP1A (Q593P) that activates inflammasome responses. The mechanism of wild-type NLRP1A activation is unclear.
**Ripspeed** Ripspeed: Ripspeed is a sub-brand of Halfords, one of the leading automotive parts retailers in the United Kingdom. It began as an independent retailer in the 1970s; the business changed hands two decades later and was purchased in 1999 by Halfords, and it now operates as one of the five subsections of a store where present. History: Keith Ripp (1947–2020), the 1981, 1982 and 1983 hat-trick British Rallycross Champion with a Ford Fiesta 1600, started in a small shop in Pinner Green before moving to Hertford Road, Enfield Wash, specialising in tuning parts and accessories for Minis as Ripspeed International in 1973. The motorsport and road-car tuning and accessories side progressively grew over the years. By the 1990s, Ripspeed's main rivals were Demon Tweeks and Grand Prix Racewear, both owned by racing drivers, Alan Minshaw and Ray Bellm respectively. In 1996, Ripp sold Ripspeed to Tony Joseph. He then relocated the store from its original premises to a larger one on Fore Street, Edmonton, London, with a plan for expansion in other areas. Ripspeed then relocated for a second time to an industrial estate in Enfield, but this plan came to an abrupt end when the company collapsed due to financial problems after two years of ownership. History: Ripp, with his two sons Adrian and Jason, later started up Xtreme Motorsport, based in Harlow, Essex, which in turn was sold off in 2001. The Ripp brothers later founded R-Tec Auto Design, based in St Albans, Hertfordshire. In early 1999, Halfords took over the brand. One of the biggest changes was discarding the motorsport retail side of its business to concentrate on the lucrative boy racer market. History: Ripspeed itself has had only a limited number of demonstration cars out and about at shows and events. It started with a 1999 Vauxhall Corsa C SXI, which was followed in 1999 by a Ford Focus. Ripspeed's current project is a 1991 Nissan 200SX turbo, a drift car that was seen in action at Santa Pod raceway in 2007. These cars are used extensively for promoting the Ripspeed brand at car shows across the UK and at store openings or promotion weekends. History: Ripspeed sponsored the Doncaster Performance and Custom Car Show, which opens the UK outdoor car show season. The event was renamed Ripspeed Donny. It has since moved on to sponsor numerous events at Santa Pod raceway, including shows such as "The Jap Show" and "USC" (Ultimate Street Car).
**Dungeons & Dragons Companion Set** Dungeons & Dragons Companion Set: The Dungeons & Dragons Companion Set is an expansion boxed set for the Dungeons & Dragons (D&D) fantasy role-playing game. It was first published in 1984 as an expansion to the Dungeons & Dragons Basic Set. Publication history: The Dungeons & Dragons Basic Set was revised in 1983 by Frank Mentzer as Dungeons & Dragons Set 1: Basic Rules. Between 1983 and 1985, this system was revised and expanded by Mentzer as a series of five boxed sets, including the Basic Rules, Expert Rules (supporting character levels 4 through 14), Companion Rules (supporting levels 15 through 25), Master Rules (supporting levels 26 through 36), and Immortal Rules (supporting Immortals – characters who had transcended levels). The Companion Rules set was written by Mentzer, with art by Larry Elmore and Jeff Easley. It was published by TSR in 1984 as a boxed set containing a 64-page book and a 32-page book. The set contains two booklets: Player's Companion: Book One and Dungeon Master's Companion: Book Two, which were edited by Anne Gray. The 10th Anniversary Dungeons & Dragons Collector's Set boxed set, published by TSR in 1984, included the rulebooks from the Basic, Expert, and Companion sets; modules AC2, AC3, B1, B2, and M1, Blizzard Pass; Player Character Record Sheets; and dice. This set was limited to 1,000 copies and was sold by mail and at GenCon 17. Contents: The Player's Companion covers information on character levels 15-25. The book begins with commentary on the changes since a character began as an adventurer at level one. It introduces new weapons, armor types, and unarmed combat rules as well as providing details on running a stronghold and its recurrent costs, such as wages of the castle staff. The Player's Companion details the new abilities and increases in skills, spells, and other abilities that accrue to members of each character class as they rise in level. This section concentrates wholly on human characters, treating dwarves, elves, and halflings separately. The concept of "attack rank" is introduced for the three demi-human classes; although, per the Expert Set rules, they are capped at a specified maximum level, further accumulation of experience points increases their combat abilities. It also introduces the optional character class of druid, presented as a special progression for clerics of neutral alignment. Contents: The Dungeon Master's Companion begins with general guidelines on running a campaign and planning adventures for characters of level 15 and higher. The introduction also constructs a feudal system to provide a basis for the dominions, which will be granted to or conquered by the player characters. This section ends with notes on the organization and running of tournaments. The next section "The War Machine" was designed by Douglas Niles and Gary Spiegel as a method for coping with large-scale battles, especially those in the campaign's background. This book covers running high-level campaigns, including mass combat, other worlds and planes, and new monsters and treasure. It also contains three mini-scenarios. Reception: The Companion Set was reviewed by Megan C. Robertson in issue 61 of White Dwarf magazine (January 1985), rating it a 7 out of 10 overall.
Robertson noted that most characters that reach 15th level in the Basic D&D game should be thinking of settling down and retiring and felt that the D&D Companion Set provides: "some ideas for this to be a little more interesting than simple retirement".
**Flammé (yarn)** Flammé (yarn): See flammé (vexillology) for the flag design. Flammé yarns are a kind of novelty yarn, generally consisting of a loose or untwisted core wrapped by at least one other strand. The extra element can be a metallic thread, a much-thicker or much-narrower strand of yarn, or yarn that varies between thick and thin. Some companies have come to put twin yarns on the market to show off combinations of one regular yarn and novelty yarns in assorted colors, or even two different types of novelty yarns.
**Northern Hemisphere** Northern Hemisphere: The Northern Hemisphere is the half of Earth that is north of the Equator. For other planets in the Solar System, north is defined as being in the same celestial hemisphere relative to the invariable plane of the Solar System as Earth's North Pole. Due to Earth's axial tilt of 23.439281°, winter in the Northern Hemisphere lasts from the December solstice (typically December 21 UTC) to the March equinox (typically March 20 UTC), while summer lasts from the June solstice through to the September equinox (typically on 23 September UTC). The dates vary each year due to the difference between the calendar year and the astronomical year. Within the Northern Hemisphere, oceanic currents can change the weather patterns that affect many factors within the north coast. Such events include El Niño–Southern Oscillation. Northern Hemisphere: Trade winds blow from east to west just above the equator. The winds pull surface water with them, creating currents, which flow westward due to the Coriolis effect. The currents then bend to the right, heading north. At about 30 degrees north latitude, a different set of winds, the westerlies, push the currents back to the east, producing a closed clockwise loop. Its surface is 60.7% water, compared with 80.9% water in the case of the Southern Hemisphere, and it contains 67.3% of Earth's land. The continents of Europe and North America are located entirely on Earth's Northern Hemisphere, which also contains almost the entire continent of Asia, about two thirds of Africa, and a small part of South America. Geography and climate: During the 2.5 million years of the Pleistocene, numerous cold phases called glacials (Quaternary ice age), or significant advances of continental ice sheets, in Europe and North America, occurred at intervals of approximately 40,000 to 100,000 years. The long glacial periods were separated by more temperate and shorter interglacials which lasted about 10,000–15,000 years. The last cold episode of the last glacial period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene. The glaciations that occurred during the glacial period covered many areas of the Northern Hemisphere. Geography and climate: The Arctic is a region around the North Pole (90° latitude). Its climate is characterized by cold winters and cool summers. Precipitation mostly comes in the form of snow. Areas inside the Arctic Circle (66°34′ latitude) experience some days in summer when the Sun never sets, and some days during the winter when it never rises. The duration of these phases varies from one day for locations right on the Arctic Circle to several months near the Pole, which is the middle of the Northern Hemisphere. Geography and climate: Between the Arctic Circle and the Tropic of Cancer (23°26′ latitude) lies the Northern temperate zone. The changes in these regions between summer and winter are generally mild, rather than extreme hot or cold. However, a temperate climate can have very unpredictable weather. Tropical regions (between the Tropic of Cancer and the Equator, 0° latitude) are generally hot all year round and tend to experience a rainy season during the summer months, and a dry season during the winter months. Geography and climate: In the Northern Hemisphere, objects moving across or above the surface of the Earth tend to turn to the right because of the Coriolis effect.
As a result, large-scale horizontal flows of air or water tend to form clockwise-turning gyres. These are best seen in ocean circulation patterns in the North Atlantic and North Pacific oceans. Within the Northern Hemisphere, oceanic currents can change the weather patterns that affect many factors within the north coast, such as El Niño. For the same reason, flows of air down toward the northern surface of the Earth tend to spread across the surface in a clockwise pattern. Thus, clockwise air circulation is characteristic of high pressure weather cells in the Northern Hemisphere. Conversely, air rising from the northern surface of the Earth (creating a region of low pressure) tends to draw air toward it in a counterclockwise pattern. Hurricanes and tropical storms (massive low-pressure systems) spin counterclockwise in the Northern Hemisphere. The shadow of a sundial moves clockwise on latitudes north of the subsolar point and anticlockwise to the south. During the day at these latitudes, the Sun tends to rise to its maximum at a southerly position. Between the Tropic of Cancer and the Equator, the sun can be seen to the north, directly overhead, or to the south at noon, dependent on the time of year. In the Southern Hemisphere, the midday Sun is predominantly in the north. Geography and climate: When viewed from the Northern Hemisphere, the Moon appears inverted compared to a view from the Southern Hemisphere. The North Pole faces away from the Galactic Center of the Milky Way. This results in the Milky Way being sparser and dimmer in the Northern Hemisphere compared to the Southern Hemisphere, making the Northern Hemisphere more suitable for deep-space observation, as it is not "blinded" by the Milky Way. Demographics: As of 2015, the Northern Hemisphere is home to approximately 6.4 billion people, which is around 87.0% of the earth's total human population of 7.3 billion people.
**E-4031** E-4031: E-4031 is an experimental class III antiarrhythmic drug that blocks potassium channels of the hERG-type. Chemistry: E-4031 is a synthesized toxin that is a methanesulfonanilide class III antiarrhythmic drug. Target: E-4031 acts on a specific class of voltage-gated potassium channels mainly found in the heart, the hERG channels. hERG channels (Kv11.1) mediate the IKr current, which repolarizes the myocardial cells. The hERG channel is encoded by the ether-a-go-go related gene (hERG). Mode of action: E-4031 blocks hERG-type potassium channels by binding to the open channels. Its structural target within the hERG-channel is unclear, but some other methanesulfonanilide class III antiarrhythmic drugs are known to bind to the S6 domain or C-terminal of the hERG-channel. Reducing IKr in myocardial cells prolongs the cardiac action potential and thus prolongs the QT-interval. In non-cardiac cells, blocking IKr has a different effect: it increases the frequency of action potentials. Toxicity: As E-4031 can prolong the QT-interval, it can cause lethal arrhythmias. Therapeutic use: E-4031 is solely used for research purposes. So far, one clinical trial has been conducted to test the effect of E-4031 on prolongation of the QT-interval.
**Tampermonkey** Tampermonkey: Tampermonkey is a donationware userscript manager that is available as a browser extension. This software enables the user to add and use userscripts, which are JavaScript programs that can be used to modify web pages. History: Tampermonkey was first created in May 2010 by Jan Biniok. It first emerged as a Greasemonkey userscript that was wrapped to support Google Chrome. Eventually the code was re-used and published as a standalone extension for Chrome which had more features than Chrome's native script support. In 2011, Tampermonkey was ported to Android, enabling users to use userscripts on Android's internal browser. By 2019, Tampermonkey had over 10 million users. Tampermonkey is one of 33 extensions on the Chrome Web Store to have at least 10 million users. History: Chrome manifest V3: In January 2019, Biniok wrote in a Google Groups post that the new Chrome manifest V3 would break the extension. The new manifest would ban remotely accessed code, which Tampermonkey depends on. The userscripts use code that is created not by developers at Google but by third-party developers at places like Userscripts.org and Greasyfork. This code is inserted after the extension is installed; however, the manifest requires the code to be present at installation. Controversy: On January 6, 2019, Opera banned the Tampermonkey extension from being installed through the Chrome Web Store, claiming it had been identified as malicious. Later, Bleeping Computer was able to determine that a piece of adware called Gom Player would install the Chrome Web Store version of Tampermonkey and likely utilize the extension to facilitate the injection of ads or other malicious behavior. The site stated, "This does not mean that Tampermonkey is malicious, but rather that a malicious program is utilizing a legitimate program for bad behavior," going on to call Opera's blacklisting the extension for this reason a "strange decision".
**Mesonet** Mesonet: In meteorology and climatology, a mesonet, a portmanteau of mesoscale network, is a network of automated weather stations, often also including environmental monitoring stations, designed to observe mesoscale meteorological phenomena and/or microclimates. Dry lines, squall lines, and sea breezes are examples of phenomena observed by mesonets. Due to the space and time scales associated with mesoscale phenomena and microclimates, weather stations comprising a mesonet are spaced closer together and report more frequently than synoptic scale observing networks, such as the WMO Global Observing System (GOS) and US ASOS. The term mesonet refers to the collective group of these weather stations, which are usually owned and operated by a common entity. Mesonets generally record in situ surface weather observations but some involve other observation platforms, particularly vertical profiles of the planetary boundary layer (PBL). Other environmental parameters may include insolation and various variables of interest to particular users, such as soil temperature or road conditions (the latter notable in Road Weather Information System (RWIS) networks). Mesonet: The distinguishing features that classify a network of weather stations as a mesonet are station density and temporal resolution with sufficiently robust station quality. Depending upon the phenomena meant to be observed, mesonet stations use a spatial spacing of 1 to 40 kilometres (0.6 to 20 mi) and report conditions every 1 to 15 minutes. Micronets (see microscale and storm scale), such as those in metropolitan areas like Oklahoma City, St. Louis, and Birmingham, UK, are yet denser in spatial and sometimes temporal resolution. Purpose: Thunderstorms and other atmospheric convection, squall lines, drylines, sea and land breezes, mountain breeze and valley breezes, mountain waves, mesolows and mesohighs, wake lows, mesoscale convective vortices (MCVs), tropical cyclone and extratropical cyclone rainbands, macrobursts, gust fronts and outflow boundaries, heat bursts, urban heat islands (UHIs), and other mesoscale phenomena, as well as topographical features, can cause weather and climate conditions in a localized area to be significantly different from that dictated by the ambient large-scale conditions. As such, meteorologists must understand these phenomena in order to improve forecast skill. Observations are critical to understanding the processes by which these phenomena form, evolve, and dissipate. The long-term observing networks (ASOS, AWOS, COOP), however, are too sparse and report too infrequently for mesoscale research and forecasting. ASOS and AWOS stations are typically spaced 50 to 100 kilometres (30 to 60 mi) apart and report only hourly at many sites (though over time the frequency of reporting has increased, down to 5-15 minutes in the 2020s at major sites). The Cooperative Observer Program (COOP) database consists of only daily reports recorded manually. That network, like the more recent CoCoRaHS, is large but both are limited in reporting frequency and robustness of equipment. "Mesoscale" weather phenomena occur on spatial scales of a few to hundreds of kilometers and temporal (time) scales of minutes to hours. Thus, an observing network with finer temporal and spatial scales is needed for mesoscale research. This need led to the development of the mesonet.
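As a back-of-the-envelope illustration of what these spacing figures imply, the following Python sketch compares the number of stations needed to blanket an assumed 180,000 km² region (roughly a mid-sized US state; the area is an assumption chosen purely for illustration) at synoptic versus mesonet-like spacings:

```python
import math

# Assumed example area: ~180,000 km^2 (roughly a mid-sized US state).
area_km2 = 180_000

# One station per spacing x spacing cell, for synoptic vs. mesonet-like spacings.
for spacing_km in (100, 30, 10):
    stations = math.ceil(area_km2 / spacing_km**2)
    print(f"{spacing_km:>3} km spacing -> ~{stations} stations")
```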
Mesonet data is directly used by humans for decision making, but also boosts the skill of numerical weather prediction (NWP) and is especially beneficial for short-range mesoscale models. Mesonets, along with remote sensing solutions (data assimilation of weather radar, weather satellites, wind profilers), allow for much greater temporal and spatial resolution in a forecast model. As the atmosphere is a chaotic nonlinear dynamical system (i.e. subject to the Butterfly effect), this increase in data increases understanding of initial conditions and boosts model performance. In addition to meteorology and climatology users, hydrologists, foresters, wildland firefighters, transportation departments, energy producers and distributors, other utility interests, and agricultural entities are prominent in their need for fine scale weather information. These organizations operate dozens of mesonets within the US and globally. Environmental, outdoor recreational, emergency management and public safety, military, and insurance interests also are heavy users of mesonet information. In many cases, mesonet stations may, by necessity or sometimes by lack of awareness, be located in positions where accurate measurements may be compromised. For instance, this is especially true of citizen science and crowdsourced data systems, such as the stations built for WeatherBug's network, many of which are located on school buildings. The Citizen Weather Observer Program (CWOP) facilitated by the US National Weather Service (NWS) and other networks such as those collected by Weather Underground help fill gaps with resolutions sometimes meeting or exceeding that of mesonets, but many stations also exhibit biases due to improper siting, calibration, and maintenance. These consumer grade "personal weather stations" (PWS) are also less sensitive and rigorous than scientific grade stations. The potential bias that these stations may cause must be accounted for when ingesting the data into a model, lest the phenomenon of "garbage in, garbage out" occur. Operations: Mesonets were born out of the need to conduct mesoscale research. The nature of this research is such that mesonets, like the phenomena they were meant to observe, were (and sometimes still are) short-lived and may change rapidly. Long-term research projects and non-research groups, however, have been able to maintain a mesonet for many years. For example, the U.S. Army Dugway Proving Ground in Utah has maintained a mesonet for many decades. The research-based origin of mesonets led to the characteristic that mesonet stations may be modular and portable, able to be moved from one field program to another. Nonetheless, most large contemporary mesonets or nodes within consist of permanent stations comprising stationary networks. Some research projects, however, utilize mobile mesonets. Prominent examples include the VORTEX projects. The problems of implementing and maintaining robust fixed stations are exacerbated by lighter, compact mobile stations and are further worsened by various issues that arise when moving, such as vehicle slipstream effects, and particularly during rapid changes in the ambient environment associated with traversing severe weather. Whether the mesonet is temporary or semi-permanent, each weather station is typically independent, drawing power from a battery and solar panels.
An on-board computer records readings from several instruments measuring temperature, humidity, wind speed and direction, and atmospheric pressure, as well as soil temperature and moisture, and other environmental variables deemed important to the mission of the mesonet, solar irradiance being a common non-meteorological parameter. The computer periodically saves these data to memory, typically using data loggers, and transmits the observations to a base station via radio, telephone (wireless, such as cellular or landline), or satellite transmission. Advancements in computer technology and wireless communications in recent decades made possible the collection of mesonet data in real-time. Some stations or networks report using Wi-Fi and are grid-powered, with backups for redundancy. The availability of mesonet data in real-time can be extremely valuable to operational forecasters, particularly for nowcasting, as they can monitor weather conditions from many points in their forecast area. In addition to operational work, and weather, climate, and environmental research, mesonet and micronet data are often important in forensic meteorology. History: Early mesonets operated differently from modern mesonets. Each constituent instrument of the weather station was purely mechanical and fairly independent of the other sensors. Data were recorded continuously by an inked stylus that pivoted about a point onto a rotating drum covered by a sheath of graphed paper called a trace chart, much like a traditional seismograph station. History: One of the earliest mesonets operated in the summer of 1946 and 1947 and was part of a field campaign called The Thunderstorm Project. As the name implies, the objective of this program was to better understand thunderstorm convection. The earliest mesonets were typically funded and operated by government agencies for specific campaigns. In time, universities and other quasi-public entities began implementing permanent mesonets for a wide variety of uses, such as agricultural or maritime interests. Consumer-grade stations added to the professional-grade synoptic and mesoscale networks by the 1990s, and by the 2010s professional-grade station networks operated by private companies and public-private consortia had increased in prominence. Some of these privately implemented systems are permanent and at fixed locations, but many also service specific users and campaigns/events so may be installed for limited periods, and may also be mobile. History: The first known mesonet was operated by Germany from 1939 to 1941. Early mesonets with project-based purposes operated for limited periods of time, from seasons to a few years. The first permanently operating mesonet began in the United States in the 1970s, with more entering operation in the 1980s-1990s as numbers gradually increased, preceding a steeper expansion by the 2000s. By the 2010s there was also an increase in mesonets on other continents. Some wealthy, densely populated countries also deploy observation networks with the density of a mesonet, such as the AMeDAS in Japan. The US was an early adopter of mesonets, yet funding has long been scattered and meager.
By the 2020s, declining funding atop the earlier scarcity and uncertainty of funding was leading to understaffing, problems maintaining stations, the closure of some stations, and threats to the viability of entire networks. Mesonets capable of being moved for fixed station deployments in field campaigns came into use in the US by the 1970s, and fully mobile vehicle-mounted mesonets became fixtures of large field research projects following the field campaigns of Project VORTEX in 1994 and 1995, which deployed substantial mobile mesonets. Significant mesonets: Many mesonets have operated in the past and present; any list is necessarily incomplete. Not all stations in a given network are owned or operated by that network, and where networks incorporate private stations, even though QA/QC measures may be taken, those stations may not be scientific grade and may lack proper siting, calibration, sensitivity, durability, and maintenance. Although not labeled a mesonet, the Japan Meteorological Agency (JMA) also maintains a nationwide surface observation network with the density of a mesonet. JMA operates AMeDAS, consisting of approximately 1,300 stations at a spacing of 17 kilometres (11 mi). The network began operating in 1974.
**NSPACE** NSPACE: In computational complexity theory, non-deterministic space or NSPACE is the computational resource describing the memory space for a non-deterministic Turing machine. It is the non-deterministic counterpart of DSPACE. Complexity classes: The measure NSPACE is used to define the complexity class whose solutions can be determined by a non-deterministic Turing machine. The complexity class NSPACE(f(n)) is the set of decision problems that can be solved by a non-deterministic Turing machine, M, using space O(f(n)), where n is the length of the input. Several important complexity classes can be defined in terms of NSPACE. These include: REG = DSPACE(O(1)) = NSPACE(O(1)), where REG is the class of regular languages (nondeterminism does not add power in constant space). Complexity classes: NL = NSPACE(O(log n)) CSL = NSPACE(O(n)), where CSL is the class of context-sensitive languages. PSPACE = NPSPACE = ⋃k∈ℕ NSPACE(n^k) EXPSPACE = NEXPSPACE = ⋃k∈ℕ NSPACE(2^(n^k)) The Immerman–Szelepcsényi theorem states that NSPACE(s(n)) is closed under complement for every function s(n) ≥ log n. A further generalization is ASPACE, defined with alternating Turing machines. Relation with other complexity classes: DSPACE NSPACE is the non-deterministic counterpart of DSPACE, the class of memory space on a deterministic Turing machine. First by definition, then by Savitch's theorem, we have that: DSPACE[s(n)] ⊆ NSPACE[s(n)] ⊆ DSPACE[(s(n))²]. Time NSPACE can also be used to determine the time complexity of a deterministic Turing machine by the following theorem: If a language L is decided in space S(n) (where S(n) ≥ log n) by a non-deterministic TM, then there exists a constant C such that L is decided in time O(C^S(n)) by a deterministic one. Limitations: The measure of space complexity in terms of DSPACE is useful because it represents the total amount of memory that an actual computer would need to solve a given computational problem with a given algorithm. The reason is that DSPACE describes the space complexity used by deterministic Turing machines, which can represent actual computers. On the other hand, NSPACE describes the space complexity of non-deterministic Turing machines, which are not useful when trying to represent actual computers. For this reason, NSPACE is limited in its usefulness to real-world applications.
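As a worked illustration (a standard instantiation of the statements above, not text from the article), Savitch's theorem and the space-to-time bound give the familiar inclusions:

```latex
% Savitch's theorem: NSPACE(s(n)) \subseteq DSPACE((s(n))^2) for s(n) >= log n.
% Taking s(n) = log n, and then s(n) = n^k:
\[
  \mathrm{NL} = \mathrm{NSPACE}(\log n) \subseteq \mathrm{DSPACE}(\log^{2} n),
  \qquad
  \mathrm{NSPACE}(n^{k}) \subseteq \mathrm{DSPACE}(n^{2k})
  \;\Rightarrow\; \mathrm{NPSPACE} = \mathrm{PSPACE}.
\]
% The time bound: a language decided nondeterministically in space S(n) >= log n
% is decided deterministically in time O(C^{S(n)}); with S(n) = log n this gives
% time O(C^{\log n}) = O(n^{\log C}), hence NL \subseteq P.
\[
  \mathrm{NSPACE}(S(n)) \subseteq \mathrm{DTIME}\!\left(C^{S(n)}\right),
  \qquad
  \mathrm{NL} \subseteq \mathrm{P}.
\]
```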
**Chemoprotective agent** Chemoprotective agent: A chemo-protective agent is any drug that helps to reduce the side effects of chemotherapy. These agents protect specific body parts from harmful anti-cancer treatments that could potentially cause permanent damage to important bodily tissues. Chemo-protective agents have only recently been introduced as an adjunct to chemotherapy, with the intent of assisting cancer patients who require treatment and, as a result, improving their quality of life. Examples include: Amifostine, approved by the FDA in 1995, which helps prevent kidney damage in patients undergoing cisplatin and carboplatin chemotherapy; Mesna, approved by the FDA in 1988, which helps prevent hemorrhagic cystitis (bladder bleeding) in patients undergoing cyclophosphamide or ifosfamide chemotherapy; and Dexrazoxane, approved by the FDA in 1995, which helps prevent heart problems in patients undergoing doxorubicin chemotherapy. Risks: Chemo-protective agents are common drugs and, like many other drugs, may have side effects of their own. Each agent has different side effects, though the most common include dizziness, sleepiness, nausea, and fever. It is important to discuss the side effects of these drugs with a doctor before using them alongside any type of chemotherapy, to ensure the drug will benefit the patient.
**Metal carbonyl** Metal carbonyl: Metal carbonyls are coordination complexes of transition metals with carbon monoxide ligands. Metal carbonyls are useful in organic synthesis and as catalysts or catalyst precursors in homogeneous catalysis, such as hydroformylation and Reppe chemistry. In the Mond process, nickel tetracarbonyl is used to produce pure nickel. In organometallic chemistry, metal carbonyls serve as precursors for the preparation of other organometallic complexes. Metal carbonyl: Metal carbonyls are toxic by skin contact, inhalation or ingestion, in part because of their ability to carbonylate hemoglobin to give carboxyhemoglobin, which prevents the binding of oxygen. Nomenclature and terminology: The nomenclature of the metal carbonyls depends on the charge of the complex, the number and type of central atoms, and the number and type of ligands and their binding modes. They occur as neutral complexes, as positively-charged metal carbonyl cations or as negatively charged metal carbonylates. The carbon monoxide ligand may be bound terminally to a single metal atom or bridging to two or more metal atoms. These complexes may be homoleptic, containing only CO ligands, such as nickel tetracarbonyl (Ni(CO)4), but more commonly metal carbonyls are heteroleptic and contain a mixture of ligands.Mononuclear metal carbonyls contain only one metal atom as the central atom. Except vanadium hexacarbonyl, only metals with even atomic number, such as chromium, iron, nickel, and their homologs, build neutral mononuclear complexes. Polynuclear metal carbonyls are formed from metals with odd atomic numbers and contain a metal–metal bond. Complexes with different metals but only one type of ligand are called isoleptic.Carbon monoxide has distinct binding modes in metal carbonyls. They differ in terms of their hapticity, denoted η, and their bridging mode. In η2-CO complexes, both the carbon and oxygen are bonded to the metal. More commonly only carbon is bonded, in which case the hapticity is not mentioned.The carbonyl ligand engages in a wide range of bonding modes in metal carbonyl dimers and clusters. In the most common bridging mode, denoted μ2 or simply μ, the CO ligand bridges a pair of metals. This bonding mode is observed in the commonly available metal carbonyls: Co2(CO)8, Fe2(CO)9, Fe3(CO)12, and Co4(CO)12. In certain higher nuclearity clusters, CO bridges between three or even four metals. These ligands are denoted μ3-CO and μ4-CO. Less common are bonding modes in which both C and O bond to the metal, such as μ3η2. Structure and bonding: Carbon monoxide bonds to transition metals using "synergistic pi* back-bonding". The M-C bonding has three components, giving rise to a partial triple bond. A sigma (σ) bond arises from overlap of the nonbonding (or weakly anti-bonding) sp-hybridized electron pair on carbon with a blend of d-, s-, and p-orbitals on the metal. A pair of pi (π) bonds arises from overlap of filled d-orbitals on the metal with a pair of π*-antibonding orbitals projecting from the carbon atom of the CO. The latter kind of binding requires that the metal have d-electrons, and that the metal is in a relatively low oxidation state (0 or +1) which makes the back-donation of electron density favorable. As electrons from the metal fill the π-antibonding orbital of CO, they weaken the carbon–oxygen bond compared with free carbon monoxide, while the metal–carbon bond is strengthened. 
Because of the multiple bond character of the M–CO linkage, the distance between the metal and carbon atom is relatively short, often less than 1.8 Å, about 0.2 Å shorter than a metal–alkyl bond. The M-CO and MC-O distance are sensitive to other ligands on the metal. Illustrative of these effects are the following data for Mo-C and C-O distances in Mo(CO)6 and Mo(CO)3(4-methylpyridine)3: 2.06 vs 1.90 and 1.11 vs 1.18 Å. Structure and bonding: Infrared spectroscopy is a sensitive probe for the presence of bridging carbonyl ligands. For compounds with doubly bridging CO ligands, denoted μ2-CO or often just μ-CO, the bond stretching frequency νCO is usually shifted by 100–200 cm−1 to lower energy compared to the signatures of terminal CO, which are in the region 1800 cm−1. Bands for face capping (μ3) CO ligands appear at even lower energies. In addition to symmetrical bridging modes, CO can be found to bridge asymmetrically or through donation from a metal d orbital to the π* orbital of CO. The increased π-bonding due to back-donation from multiple metal centers results in further weakening of the C–O bond. Structure and bonding: Physical characteristics Most mononuclear carbonyl complexes are colorless or pale yellow volatile liquids or solids that are flammable and toxic. Vanadium hexacarbonyl, a uniquely stable 17-electron metal carbonyl, is a blue-black solid. Dimetallic and polymetallic carbonyls tend to be more deeply colored. Triiron dodecacarbonyl (Fe3(CO)12) forms deep green crystals. The crystalline metal carbonyls often are sublimable in vacuum, although this process is often accompanied by degradation. Metal carbonyls are soluble in nonpolar and polar organic solvents such as benzene, diethyl ether, acetone, glacial acetic acid, and carbon tetrachloride. Some salts of cationic and anionic metal carbonyls are soluble in water or lower alcohols. Analytical characterization: Apart from X-ray crystallography, important analytical techniques for the characterization of metal carbonyls are infrared spectroscopy and 13C NMR spectroscopy. These two techniques provide structural information on two very different time scales. Infrared-active vibrational modes, such as CO-stretching vibrations, are often fast compared to intramolecular processes, whereas NMR transitions occur at lower frequencies and thus sample structures on a time scale that, it turns out, is comparable to the rate of intramolecular ligand exchange processes. NMR data provide information on "time-averaged structures", whereas IR is an instant "snapshot". Illustrative of the differing time scales, investigation of dicobalt octacarbonyl (Co2(CO)8) by means of infrared spectroscopy provides 13 νCO bands, far more than expected for a single compound. This complexity reflects the presence of isomers with and without bridging CO ligands. The 13C NMR spectrum of the same substance exhibits only a single signal at a chemical shift of 204 ppm. This simplicity indicates that the isomers quickly (on the NMR timescale) interconvert. Analytical characterization: Iron pentacarbonyl exhibits only a single 13C NMR signal owing to rapid exchange of the axial and equatorial CO ligands by Berry pseudorotation. Analytical characterization: Infrared spectra An important technique for characterizing metal carbonyls is infrared spectroscopy. The C–O vibration, typically denoted νCO, occurs at 2143 cm−1 for carbon monoxide gas. 
The energy of the νCO band for metal carbonyls correlates with the strength of the carbon–oxygen bond, and correlates inversely with the strength of the π-backbonding between the metal and the carbon. The π-basicity of the metal center depends on many factors; in the isoelectronic series of hexacarbonyls (titanium to iron), the complexes show decreasing π-backbonding as one increases (makes more positive) the charge on the metal. π-Basic ligands increase π-electron density at the metal, and improved backbonding reduces νCO. The Tolman electronic parameter uses the Ni(CO)3 fragment to order ligands by their π-donating abilities. The number of vibrational modes of a metal carbonyl complex can be determined by group theory. Only vibrational modes that transform as the electric dipole operator will have nonzero direct products and are observed. The number of observable IR transitions (but not their energies) can thus be predicted. For example, the CO ligands of octahedral complexes, such as Cr(CO)6, transform as a1g, eg, and t1u, but only the t1u mode (antisymmetric stretch of the apical carbonyl ligands) is IR-allowed. Thus, only a single νCO band is observed in the IR spectra of the octahedral metal hexacarbonyls. Spectra for complexes of lower symmetry are more complex. For example, the IR spectrum of Fe2(CO)9 displays CO bands at 2082, 2019 and 1829 cm−1. The number of IR-observable vibrational modes has been tabulated for many metal carbonyls, and exhaustive tabulations are available. These rules apply to metal carbonyls in solution or the gas phase. Low-polarity solvents are ideal for high resolution. For measurements on solid samples of metal carbonyls, the number of bands can increase owing in part to site symmetry. Analytical characterization: Nuclear magnetic resonance spectroscopy Metal carbonyls are often characterized by 13C NMR spectroscopy. To improve the sensitivity of this technique, complexes are often enriched with 13CO. The typical chemical shift range for terminally bound ligands is 150 to 220 ppm. Bridging ligands resonate between 230 and 280 ppm. The 13C signals shift toward higher fields with an increasing atomic number of the central metal. Analytical characterization: NMR spectroscopy can be used for experimental determination of fluxionality. The activation energy of ligand exchange processes can be determined from the temperature dependence of the line broadening. Mass spectrometry Mass spectrometry provides information about the structure and composition of the complexes. Spectra for metal polycarbonyls are often easily interpretable, because the dominant fragmentation process is the loss of carbonyl ligands (m/z = 28). Analytical characterization: M(CO)n+ → M(CO)n−1+ + CO. Electron ionization is the most common technique for characterizing the neutral metal carbonyls. Neutral metal carbonyls can be converted to charged species by derivatization, which enables the use of electrospray ionization (ESI), instrumentation for which is often widely available. For example, treatment of a metal carbonyl with alkoxide generates an anionic metallaformate that is amenable to analysis by ESI-MS: LnM(CO) + RO− → [LnM−C(=O)OR]−. Some metal carbonyls react with azide to give isocyanato complexes with release of nitrogen. By adjusting the cone voltage or temperature, the degree of fragmentation can be controlled. 
The molar mass of the parent complex can be determined, as well as information about structural rearrangements involving loss of carbonyl ligands under ESI-MS conditions.Mass spectrometry combined with infrared photodissociation spectroscopy can provide vibrational informations for ionic carbonyl complexes in gas phase. Occurrence in nature: In the investigation of the infrared spectrum of the Galactic Center of the Milky Way, monoxide vibrations of iron carbonyls in interstellar dust clouds were detected. Iron carbonyl clusters were also observed in Jiange H5 chondrites identified by infrared spectroscopy. Four infrared stretching frequencies were found for the terminal and bridging carbon monoxide ligands.In the oxygen-rich atmosphere of the Earth, metal carbonyls are subject to oxidation to the metal oxides. It is discussed whether in the reducing hydrothermal environments of the prebiotic prehistory such complexes were formed and could have been available as catalysts for the synthesis of critical biochemical compounds such as pyruvic acid. Traces of the carbonyls of iron, nickel, and tungsten were found in the gaseous emanations from the sewage sludge of municipal treatment plants.The hydrogenase enzymes contain CO bound to iron. It is thought that the CO stabilizes low oxidation states, which facilitates the binding of hydrogen. The enzymes carbon monoxide dehydrogenase and acetyl-CoA synthase also are involved in bioprocessing of CO. Carbon monoxide containing complexes are invoked for the toxicity of CO and signaling. Synthesis: The synthesis of metal carbonyls is a widely studied subject of organometallic research. Since the work of Mond and then Hieber, many procedures have been developed for the preparation of mononuclear metal carbonyls as well as homo- and heterometallic carbonyl clusters. Synthesis: Direct reaction of metal with carbon monoxide Nickel tetracarbonyl and iron pentacarbonyl can be prepared according to the following equations by reaction of finely divided metal with carbon monoxide: Ni + 4 CO → Ni(CO)4 (1 bar, 55 °C) Fe + 5 CO → Fe(CO)5 (100 bar, 175 °C)Nickel tetracarbonyl is formed with carbon monoxide already at 80 °C and atmospheric pressure, finely divided iron reacts at temperatures between 150 and 200 °C and a carbon monoxide pressure of 50–200 bar. Other metal carbonyls are prepared by less direct methods. Synthesis: Reduction of metal salts and oxides Some metal carbonyls are prepared by the reduction of metal halides in the presence of high pressure of carbon monoxide. A variety of reducing agents are employed, including copper, aluminum, hydrogen, as well as metal alkyls such as triethylaluminium. Illustrative is the formation of chromium hexacarbonyl from anhydrous chromium(III) chloride in benzene with aluminum as a reducing agent, and aluminum chloride as the catalyst: CrCl3 + Al + 6 CO → Cr(CO)6 + AlCl3The use of metal alkyls, such as triethylaluminium and diethylzinc, as the reducing agent leads to the oxidative coupling of the alkyl radical to form the dimer alkane: WCl6 + 6 CO + 2 Al(C2H5)3 → W(CO)6 + 2 AlCl3 + 3 C4H10Tungsten, molybdenum, manganese, and rhodium salts may be reduced with lithium aluminium hydride. Vanadium hexacarbonyl is prepared with sodium as a reducing agent in chelating solvents such as diglyme. Synthesis: VCl3 + 4 Na + 6 CO + 2 diglyme → Na(diglyme)2[V(CO)6] + 3 NaCl [V(CO)6]− + H+ → H[V(CO)6] → 1/2 H2 + V(CO)6In the aqueous phase, nickel or cobalt salts can be reduced, for example by sodium dithionite. 
In the presence of carbon monoxide, cobalt salts are quantitatively converted to the tetracarbonylcobalt(−1) anion: Co2+ + 3/2 S2O42− + 6 OH− + 4 CO → [Co(CO)4]− + 3 SO32− + 3 H2O. Some metal carbonyls are prepared using CO directly as the reducing agent. In this way, Hieber and Fuchs first prepared dirhenium decacarbonyl from the oxide: Re2O7 + 17 CO → Re2(CO)10 + 7 CO2. If metal oxides are used, carbon dioxide is formed as a reaction product. In the reduction of metal chlorides with carbon monoxide, phosgene is formed, as in the preparation of osmium carbonyl chloride from the chloride salts. Carbon monoxide is also suitable for the reduction of sulfides, where carbonyl sulfide is the byproduct. Synthesis: Photolysis and thermolysis Photolysis or thermolysis of mononuclear carbonyls generates di- and polymetallic carbonyls such as diiron nonacarbonyl (Fe2(CO)9). On further heating, the products decompose eventually into the metal and carbon monoxide. Synthesis: 2 Fe(CO)5 → Fe2(CO)9 + CO. The thermal decomposition of triosmium dodecacarbonyl (Os3(CO)12) provides higher-nuclear osmium carbonyl clusters such as Os4(CO)13 and Os6(CO)18, up to Os8(CO)23. Mixed-ligand carbonyls of ruthenium, osmium, rhodium, and iridium are often generated by abstraction of CO from solvents such as dimethylformamide (DMF) and 2-methoxyethanol. Typical is the synthesis of IrCl(CO)(PPh3)2 from the reaction of iridium(III) chloride and triphenylphosphine in boiling DMF solution. Synthesis: Salt metathesis Salt metathesis reaction of salts such as KCo(CO)4 with [Ru(CO)3Cl2]2 leads selectively to mixed-metal carbonyls such as RuCo2(CO)11. Synthesis: 4 KCo(CO)4 + [Ru(CO)3Cl2]2 → 2 RuCo2(CO)11 + 4 KCl + 11 CO Metal carbonyl cations and carbonylates The synthesis of ionic carbonyl complexes is possible by oxidation or reduction of the neutral complexes. Anionic metal carbonylates can be obtained for example by reduction of dinuclear complexes with sodium. A familiar example is the sodium salt of iron tetracarbonylate (Na2Fe(CO)4, Collman's reagent), which is used in organic synthesis. The cationic hexacarbonyl salts of manganese, technetium and rhenium can be prepared from the carbonyl halides under carbon monoxide pressure by reaction with a Lewis acid. Synthesis: Mn(CO)5Cl + AlCl3 + CO → [Mn(CO)6]+[AlCl4]−. The use of strong acids has succeeded in preparing gold carbonyl cations such as [Au(CO)2]+, which is used as a catalyst for the carbonylation of alkenes. The cationic platinum carbonyl complex [Pt(CO)4]2+ can be prepared by working in so-called superacids such as antimony pentafluoride. Although CO is considered generally as a ligand for low-valent metal ions, the tetravalent iron complex [Cp*2Fe]2+ (a 16-valence-electron complex) quantitatively binds CO to give the diamagnetic Fe(IV) carbonyl [Cp*2FeCO]2+ (an 18-valence-electron complex). Reactions: Metal carbonyls are important precursors for the synthesis of other organometallic complexes. Common reactions are the substitution of carbon monoxide by other ligands, the oxidation or reduction of the metal center, and reactions at the carbon monoxide ligand. Reactions: CO substitution The substitution of CO ligands can be induced thermally or photochemically by donor ligands. The range of ligands is large, and includes phosphines, cyanide (CN−), nitrogen donors, and even ethers, especially chelating ones. Alkenes, especially dienes, are effective ligands that afford synthetically useful derivatives. 
Substitution of 18-electron complexes generally follows a dissociative mechanism, involving 16-electron intermediates: M(CO)n → M(CO)n−1 + CO; M(CO)n−1 + L → M(CO)n−1L. The dissociation energy is 105 kJ/mol (25 kcal/mol) for nickel tetracarbonyl and 155 kJ/mol (37 kcal/mol) for chromium hexacarbonyl. Substitution in 17-electron complexes, which are rare, proceeds via associative mechanisms with 19-electron intermediates. Reactions: M(CO)n + L → M(CO)nL; M(CO)nL → M(CO)n−1L + CO. The rate of substitution in 18-electron complexes is sometimes catalysed by catalytic amounts of oxidants, via electron transfer. Reactions: Reduction Metal carbonyls react with reducing agents such as metallic sodium or sodium amalgam to give carbonylmetalate (or carbonylate) anions: Mn2(CO)10 + 2 Na → 2 Na[Mn(CO)5]. For iron pentacarbonyl, one obtains the tetracarbonylferrate with loss of CO: Fe(CO)5 + 2 Na → Na2[Fe(CO)4] + CO. Mercury can insert into the metal–metal bonds of some polynuclear metal carbonyls: Co2(CO)8 + Hg → (CO)4Co−Hg−Co(CO)4 Nucleophilic attack at CO The CO ligand is often susceptible to attack by nucleophiles. For example, trimethylamine oxide and potassium bis(trimethylsilyl)amide convert CO ligands to CO2 and CN−, respectively. In the "Hieber base reaction", hydroxide ion attacks the CO ligand to give a metallacarboxylic acid, followed by the release of carbon dioxide and the formation of metal hydrides or carbonylmetalates. A well-studied example of this nucleophilic addition is the conversion of iron pentacarbonyl to the hydridoiron tetracarbonyl anion: Fe(CO)5 + NaOH → Na[Fe(CO)4CO2H]; Na[Fe(CO)4CO2H] + NaOH → Na[HFe(CO)4] + NaHCO3. Hydride reagents also attack CO ligands, especially in cationic metal complexes, to give the formyl derivative: [Re(CO)6]+ + H− → Re(CO)5CHO. Organolithium reagents add to metal carbonyls to give acylmetal carbonyl anions. O-Alkylation of these anions, such as with Meerwein salts, affords Fischer carbenes. Reactions: With electrophiles Despite being in low formal oxidation states, metal carbonyls are relatively unreactive toward many electrophiles. For example, they resist attack by alkylating agents, mild acids, and mild oxidizing agents. Most metal carbonyls do undergo halogenation. Iron pentacarbonyl, for example, forms ferrous carbonyl halides: Fe(CO)5 + X2 → Fe(CO)4X2 + CO. Metal–metal bonds are cleaved by halogens. Depending on the electron-counting scheme used, this can be regarded as an oxidation of the metal atoms: Mn2(CO)10 + Cl2 → 2 Mn(CO)5Cl Compounds: Most metal carbonyl complexes contain a mixture of ligands. Examples include the historically important IrCl(CO)(P(C6H5)3)2 and the antiknock agent (CH3C5H4)Mn(CO)3. The parent compounds for many of these mixed ligand complexes are the binary carbonyls, those species of the formula [Mx(CO)n]z, many of which are commercially available. The formulae of many metal carbonyls can be inferred from the 18-electron rule. Charge-neutral binary metal carbonyls Group 2 elements calcium, strontium, and barium can all form octacarbonyl complexes M(CO)8 (M = Ca, Sr, Ba). The compounds were characterized in cryogenic matrices by vibrational spectroscopy and in gas phase by mass spectrometry. Group 4 elements with 4 valence electrons are expected to form heptacarbonyls; while these are extremely rare, substituted derivatives of Ti(CO)7 are known. 
Group 5 elements with 5 valence electrons, again are subject to steric effects that prevent the formation of M–M bonded species such as V2(CO)12, which is unknown. The 17-VE V(CO)6 is however well known. Group 6 elements with 6 valence electrons form hexacarbonyls Cr(CO)6, Mo(CO)6, W(CO)6, and Sg(CO)6. Group 6 elements (as well as group 7) are also well known for exhibiting the cis effect (the labilization of CO in the cis position) in organometallic synthesis. Group 7 elements with 7 valence electrons form pentacarbonyl dimers Mn2(CO)10, Tc2(CO)10, and Re2(CO)10. Group 8 elements with 8 valence electrons form pentacarbonyls Fe(CO)5, Ru(CO)5 and Os(CO)5. The heavier two members are unstable, tending to decarbonylate to give Ru3(CO)12, and Os3(CO)12. The two other principal iron carbonyls are Fe3(CO)12 and Fe2(CO)9. Group 9 elements with 9 valence electrons and are expected to form tetracarbonyl dimers M2(CO)8. In fact the cobalt derivative of this octacarbonyl is the only stable member, but all three tetramers are well known: Co4(CO)12, Rh4(CO)12, Rh6(CO)16, and Ir4(CO)12. Co2(CO)8 unlike the majority of the other 18 VE transition metal carbonyls is sensitive to oxygen. Group 10 elements with 10 valence electrons form tetracarbonyls such as Ni(CO)4. Curiously Pd(CO)4 and Pt(CO)4 are not stable. Anionic binary metal carbonyls Group 3 elements scandium and yttrium form monoanions, [M(CO)8]− (M = Sc, Y) which are 20-electron carbonyls, as does the lanthanide lanthanum. Group 4 elements as dianions resemble neutral group 6 derivatives: [Ti(CO)6]2−. Group 5 elements as monoanions resemble again neutral group 6 derivatives: [V(CO)6]−. Group 7 elements as monoanions resemble neutral group 8 derivatives: [M(CO)5]− (M = Mn, Tc, Re). Group 8 elements as dianaions resemble neutral group 10 derivatives: [M(CO)4]2− (M = Fe, Ru, Os). Condensed derivatives are also known. Group 9 elements as monoanions resemble neutral group 10 metal carbonyl. [Co(CO)4]− is the best studied member.Large anionic clusters of nickel, palladium, and platinum are also well known. Many metal carbonyl anions can be protonated to give metal carbonyl hydrides. Cationic binary metal carbonyls Group 2 elements form [M(CO)8]+ (M = Ca, Sr, Ba), characterized in gas phase by mass spectrometry and vibrational spectroscopy. Group 3 elements form [Sc(CO)7]+ and [Y(CO)8]+ in gas phase. Group 7 elements as monocations resemble neutral group 6 derivative [M(CO)6]+ (M = Mn, Tc, Re). Group 8 elements as dications also resemble neutral group 6 derivatives [M(CO)6]2+ (M = Fe, Ru, Os). Nonclassical carbonyl complexes Nonclassical describes those carbonyl complexes where νCO is higher than that for free carbon monoxide. In nonclassical CO complexes, the C-O distance is shorter than free CO (113.7 pm). The structure of [Fe(CO)6]2+, with dC-O = 112.9 pm, illustrates this effect. These complexes are usually cationic, sometimes dicationic. Applications: Metallurgical uses Metal carbonyls are used in several industrial processes. Perhaps the earliest application was the extraction and purification of nickel via nickel tetracarbonyl by the Mond process (see also carbonyl metallurgy).By a similar process carbonyl iron, a highly pure metal powder, is prepared by thermal decomposition of iron pentacarbonyl. Carbonyl iron is used inter alia for the preparation of inductors, pigments, as dietary supplements, in the production of radar-absorbing materials in the stealth technology, and in thermal spraying. 
Applications: Catalysis Metal carbonyls are used in a number of industrially important carbonylation reactions. In the oxo process, an alkene, hydrogen gas, and carbon monoxide react together with a catalyst (such as dicobalt octacarbonyl) to give aldehydes. Illustrative is the production of butyraldehyde from propylene: CH3CH=CH2 + H2 + CO → CH3CH2CH2CHOButyraldehyde is converted on an industrial scale to 2-ethylhexanol, a precursor to PVC plasticizers, by aldol condensation, followed by hydrogenation of the resulting hydroxyaldehyde. The "oxo aldehydes" resulting from hydroformylation are used for large-scale synthesis of fatty alcohols, which are precursors to detergents. The hydroformylation is a reaction with high atom economy, especially if the reaction proceeds with high regioselectivity. Applications: Another important reaction catalyzed by metal carbonyls is the hydrocarboxylation. The example below is for the synthesis of acrylic acid and acrylic acid esters: Also the cyclization of acetylene to cyclooctatetraene uses metal carbonyl catalysts:In the Monsanto and Cativa processes, acetic acid is produced from methanol, carbon monoxide, and water using hydrogen iodide as well as rhodium and iridium carbonyl catalysts, respectively. Related carbonylation reactions afford acetic anhydride. Applications: CO-releasing molecules (CO-RMs) Carbon monoxide-releasing molecules are metal carbonyl complexes that are being developed as potential drugs to release CO. At low concentrations, CO functions as a vasodilatory and an anti-inflammatory agent. CO-RMs have been conceived as a pharmacological strategic approach to carry and deliver controlled amounts of CO to tissues and organs. Related compounds: Many ligands are known to form homoleptic and mixed ligand complexes that are analogous to the metal carbonyls. Nitrosyl complexes Metal nitrosyls, compounds featuring NO ligands, are numerous. In contrast to metal carbonyls, however, homoleptic metal nitrosyls are rare. NO is a stronger π-acceptor than CO. Well known nitrosyl carbonyls include CoNO(CO)3 and Fe(NO)2(CO)2, which are analogues of Ni(CO)4. Related compounds: Thiocarbonyl complexes Complexes containing CS are known but uncommon. The rarity of such complexes is partly attributable to the fact that the obvious source material, carbon monosulfide, is unstable. Thus, the synthesis of thiocarbonyl complexes requires indirect routes, such as the reaction of disodium tetracarbonylferrate with thiophosgene: Na2Fe(CO)4 + CSCl2 → Fe(CO)4CS + 2 NaClComplexes of CSe and CTe have been characterized. Related compounds: Isocyanide complexes Isocyanides also form extensive families of complexes that are related to the metal carbonyls. Typical isocyanide ligands are methyl isocyanide and t-butyl isocyanide (Me3CNC). A special case is CF3NC, an unstable molecule that forms stable complexes whose behavior closely parallels that of the metal carbonyls. Toxicology: The toxicity of metal carbonyls is due to toxicity of carbon monoxide, the metal, and because of the volatility and instability of the complexes, any inherent toxicity of the metal is generally made much more severe due to ease of exposure. Exposure occurs by inhalation, or for liquid metal carbonyls by ingestion or due to the good fat solubility by skin resorption. Most clinical experience were gained from toxicological poisoning with nickel tetracarbonyl and iron pentacarbonyl due to their use in industry. 
Nickel tetracarbonyl is considered as one of the strongest inhalation poisons.Inhalation of nickel tetracarbonyl causes acute non-specific symptoms similar to a carbon monoxide poisoning, such as nausea, cough, headache, fever, and dizziness. After some time, severe pulmonary symptoms such as cough, tachycardia, and cyanosis, or problems in the gastrointestinal tract occur. In addition to pathological alterations of the lung, such as by metalation of the alveoli, damages are observed in the brain, liver, kidneys, adrenal glands, and spleen. A metal carbonyl poisoning often necessitates a lengthy recovery.Chronic exposure by inhalation of low concentrations of nickel tetracarbonyl can cause neurological symptoms such as insomnia, headaches, dizziness and memory loss. Nickel tetracarbonyl is considered carcinogenic, but it can take 20 to 30 years from the start of exposure to the clinical manifestation of cancer. History: Initial experiments on the reaction of carbon monoxide with metals were carried out by Justus von Liebig in 1834. By passing carbon monoxide over molten potassium he prepared a substance having the empirical formula KCO, which he called Kohlenoxidkalium. As demonstrated later, the compound was not a carbonyl, but the potassium salt of benzenehexol (K6C6O6) and the potassium salt of acetylenediol (K2C2O2). History: The synthesis of the first true heteroleptic metal carbonyl complex was performed by Paul Schützenberger in 1868 by passing chlorine and carbon monoxide over platinum black, where dicarbonyldichloroplatinum (Pt(CO)2Cl2) was formed.Ludwig Mond, one of the founders of Imperial Chemical Industries, investigated in the 1890s with Carl Langer and Friedrich Quincke various processes for the recovery of chlorine which was lost in the Solvay process by nickel metals, oxides, and salts. As part of their experiments the group treated nickel with carbon monoxide. They found that the resulting gas colored the gas flame of a burner in a greenish-yellowish color; when heated in a glass tube it formed a nickel mirror. The gas could be condensed to a colorless, water-clear liquid with a boiling point of 43 °C. Thus, Mond and his coworker had discovered the first pure, homoleptic metal carbonyl, nickel tetracarbonyl (Ni(CO)4). The unusual high volatility of the metal compound nickel tetracarbonyl led Kelvin to the statement that Mond had "given wings to the heavy metals".The following year, Mond and Marcellin Berthelot independently discovered iron pentacarbonyl, which is produced by a similar procedure as nickel tetracarbonyl. Mond recognized the economic potential of this class of compounds, which he commercially used in the Mond process and financed more research on related compounds. Heinrich Hirtz and his colleague M. Dalton Cowap synthesized metal carbonyls of cobalt, molybdenum, ruthenium, and diiron nonacarbonyl. In 1906 James Dewar and H. O. Jones were able to determine the structure of diiron nonacarbonyl, which is produced from iron pentacarbonyl by the action of sunlight. After Mond, who died in 1909, the chemistry of metal carbonyls fell for several years in oblivion. BASF started in 1924 the industrial production of iron pentacarbonyl by a process which was developed by Alwin Mittasch. The iron pentacarbonyl was used for the production of high-purity iron, so-called carbonyl iron, and iron oxide pigment. Not until 1927 did A. Job and A. 
Cassal succeed in the preparation of chromium hexacarbonyl and tungsten hexacarbonyl, the first syntheses of other homoleptic metal carbonyls. In the years following 1928, Walter Hieber played a decisive role in the development of metal carbonyl chemistry. He systematically investigated and discovered, among other things, the Hieber base reaction, the first known route to metal carbonyl hydrides, and synthetic pathways leading to metal carbonyls such as dirhenium decacarbonyl. Hieber, who from 1934 was the Director of the Institute of Inorganic Chemistry at the Technical University of Munich, published 249 papers on metal carbonyl chemistry over four decades. History: Also in the 1930s, Walter Reppe, an industrial chemist and later board member of BASF, discovered a number of homogeneous catalytic processes, such as hydrocarboxylation, in which olefins or alkynes react with carbon monoxide and water to form products such as unsaturated acids and their derivatives. In these reactions, for example, nickel tetracarbonyl or cobalt carbonyls act as catalysts. Reppe also discovered the cyclotrimerization and tetramerization of acetylene and its derivatives to benzene and benzene derivatives with metal carbonyls as catalysts. In the 1960s BASF built a production facility for acrylic acid by the Reppe process, which was superseded only in 1996 by more modern methods based on catalytic propylene oxidation. History: For the rational design of new complexes, the concept of the isolobal analogy has been found useful. Roald Hoffmann was awarded the Nobel Prize in chemistry for the development of the concept. This describes metal carbonyl fragments of M(CO)n as parts of octahedral building blocks, in analogy to the tetrahedral CH3–, CH2– or CH– fragments in organic chemistry. For example, in terms of the isolobal analogy, dimanganese decacarbonyl is formed from two d7 Mn(CO)5 fragments, which are isolobal to the methyl radical CH3•. In analogy to how methyl radicals combine to form ethane, these can combine to give dimanganese decacarbonyl. The presence of isolobal analog fragments does not mean that the desired structures can be synthesized. In his Nobel Prize lecture Hoffmann emphasized that the isolobal analogy is a useful but simple model, and in some cases does not lead to success. The economic benefits of metal-catalysed carbonylations, such as Reppe chemistry and hydroformylation, led to growth of the area. Metal carbonyl compounds were discovered in the active sites of three naturally occurring enzymes.
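The electron bookkeeping behind the 18-electron rule cited throughout this article can be sketched in a few lines. The following is a minimal illustration using the neutral-ligand counting convention (metal group electrons, plus 2 per CO, plus 1 per metal–metal bond); the function and dictionary names are illustrative, not part of any established library.

```python
# Neutral-ligand electron counting for binary metal carbonyls:
# valence (group) electrons of the metal + 2 electrons per CO ligand
# + 1 electron per metal-metal bond - the overall charge.
GROUP_ELECTRONS = {"V": 5, "Cr": 6, "Mn": 7, "Fe": 8, "Co": 9, "Ni": 10}

def electron_count(metal: str, n_co: int, mm_bonds: int = 0, charge: int = 0) -> int:
    """Valence electron count of an M(CO)n fragment or complex."""
    return GROUP_ELECTRONS[metal] + 2 * n_co + mm_bonds - charge

assert electron_count("Cr", 6) == 18   # Cr(CO)6
assert electron_count("Fe", 5) == 18   # Fe(CO)5
assert electron_count("Ni", 4) == 18   # Ni(CO)4
assert electron_count("V", 6) == 17    # V(CO)6, the stable 17-electron exception

# A d7 Mn(CO)5 fragment has 17 electrons; the Mn-Mn bond in Mn2(CO)10 brings
# each metal to 18, mirroring the isolobal analogy with two methyl radicals.
assert electron_count("Mn", 5) == 17
assert electron_count("Mn", 5, mm_bonds=1) == 18
```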
**Rhodesian Brushstroke** Rhodesian Brushstroke: The Rhodesian Brushstroke is a brushstroke-type camouflage pattern used by the Rhodesian Security Forces from 1965 until its replacement by a vertical lizard stripe in 1980. It was the default camouflage appearing on battledress of the Rhodesian Army and British South Africa Police, although used in smaller quantities by INTAF personnel. The design was also used on uniforms issued to South African Special Forces for clandestine operations. A similar pattern is fielded by the Zimbabwe National Army. Development and history: Rhodesian Brushstroke consists of large, contrasting, shapes tailored to break up the outline of an object. Like most disruptive camouflage, the pattern is dependent on countershading, using hues with high-intensity contrast or noticeable differences in chromaticity.Prior to Rhodesia's Unilateral Declaration of Independence, enlisted personnel in the Rhodesian Army were issued with uniforms in khaki drill. The Battle of Sinoia and the outbreak of the Rhodesian Bush War prompted the security forces to devise a more appropriate uniform especially designed for the region. This incorporated a three colour, high contrast, disruptive fabric with green and brown strokes on a sandy background. Early shortages of textile and equipment were overcome with South African and Portuguese technical assistance, and a home industry for the new battledress developed.The pattern was supposedly designed by Di Cameron of David Whitehead Textiles. Users: Rhodesia The basic Rhodesian military battledress adopted universally between 1964 and 1966 consisted of a camouflage jacket, field cap, and trousers with wide belt loops for a stable belt and large cargo pockets. Ranks, name tapes, or unit patches were sewn on. In 1969, the jackets were largely superseded by shirts of a lighter material for combat operations in the hot African climate. Late in the bush war, Rhodesian battledress commonly took the form of one-piece coveralls, but uniform regulations remained quite lax in the field. Individual servicemen often modified their uniforms to shorten the sleeves while others wore privately purchased T-shirts with the same camouflage print. The long camouflage trousers were also discarded in large numbers in favour of running shorts.While the brushstroke pattern itself was considered very effective, the fabric in locally-made uniforms was of poor quality and the Rhodesian troops frequently envied foreign volunteers who brought their more durable foreign-produced clothing with them. Users: Zimbabwe The Zimbabwe Defence Forces initially discarded its preexisting stocks of Rhodesian battledress in favour of a Portuguese-designed vertical lizardstripe during the 1980s; however, the original brushstroke pattern was re-adopted during the 1990s just prior to the Second Congo War. Zimbabwe currently produces military uniforms in two variations of Rhodesian Brushstroke designed for the dry season and rainy season, respectively. The dry season variant uses a light khaki base while the rainy season variant is designed on a green base. The difference between the original Rhodesian camouflage and the ZNA version is that in the Zimbabwe pattern, brown is printed over the green, and not beneath it. Users: South Africa During the late 1970s, South African pilots, technical personnel, and special forces frequently operated alongside the Rhodesian security forces. 
Due to the covert nature of their presence, they were forbidden from wearing their regulation uniforms and were instead issued with Rhodesian battledress. South African units known to have received stocks of Rhodesian uniforms included 3 South African Infantry Battalion and 1 Parachute Battalion. South African special forces also wore Rhodesian battledress during raids in Mozambique during the Mozambican Civil War. This practice was largely discontinued following Zimbabwean independence in 1980. The Rhodesian battledress did continue to be issued to ex-Rhodesian service members serving with South African special forces units operating in Zimbabwe between 1981 and 1984. Users: Non-State actors Pilfered Rhodesian fatigues occasionally turned up in the hands of the Zimbabwe People's Revolutionary Army (ZIPRA), which used them to impersonate members of the Rhodesian security forces. Prior to standardising its uniforms during the mid 1970s, the People's Armed Forces for the Liberation of Angola (FAPLA) also adopted Rhodesian battledress uniforms in limited quantities. Users: Trials While developing a new disruptive camouflage pattern in the 2000s, the United States Marine Corps (USMC) evaluated Rhodesian Brushstroke as one of the three best military camouflage patterns previously developed, along with Canadian Pattern (CADPAT) and tigerstripe. None of the three patterns was adopted because the USMC desired a more distinctive design. In 2002, it adopted the MARPAT digital camouflage pattern, a re-coloured version of CADPAT.
**Vitrified clay pipe** Vitrified clay pipe: Vitrified clay pipe (VCP) is pipe made from a blend of clay and shale that has been subjected to high temperature to achieve vitrification, which results in a hard, inert ceramic. Vitrified clay pipe: VCP is commonly used in gravity sewer collection mains because of its long life and resistance to almost all domestic and industrial sewage, particularly the sulfuric acid that is generated by hydrogen sulfide, a common component of sewage. Only hydrofluoric acid and highly concentrated caustic wastes are known to attack VCP. Such wastes would not be permitted to be discharged into a municipal sewage collection system without adequate pretreatment. There are three main types of VCP produced in the U.S.: Bell & Spigot Pipe (with factory-applied compression joints), Band-Seal pipe (with rubber compression couplings) and NO-DIG(R) Pipe (for trenchless installation with an elastomeric gasket and stainless steel collar for a low-profile compression joint). All VCP manufactured in the U.S. must comply with ASTM C425 to provide a flexible leak-free joint. Vitrified clay pipe: Clay pipe has been in use in sanitary sewer systems for at least 5,000 years. Production: VCP is made by forming clay and then heating it to 2000 degrees Fahrenheit (1100 degrees Celsius), which vitrifies the pipe. In some areas the pipe is then glazed to ensure that it will be water-tight. Benefits: VCP products use clay as a major component in their production, making their raw materials environmentally friendly. The manufacturing process has been fine-tuned for centuries and was designed to be fiscally responsible, which had the added benefit of being environmentally responsible. But the primary benefit (both environmental and fiscal) of using VCP in sanitary sewers is its long service life. As Sanitary Sewer Overflows (SSOs) have become an area of concern for the US EPA and thus a very large potential liability for municipalities, cleaning sewers for condition assessment and maintenance has become a critical factor in system design. Flexible thermoplastic pipe limits the tools available for this cleaning because it is more easily damaged. VCP allows for aggressive cleaning methods, which prolong the service life of a sewer line and frequently eliminate the need for expensive dig-ups. Further, VCP's resistance to a wide variety of acids besides hydrofluoric acid makes it a long-lasting choice for use in underground sewers.
**Dorico** Dorico: Dorico is scorewriter software; along with Finale and Sibelius, it is one of the three leading professional-level music notation programs. Dorico's development team consists of most of the former core developers of a rival program, Sibelius. After the developers of Sibelius were laid off in a 2012 restructuring by their corporate owner, Avid, most of the team were re-hired by a competing company, Steinberg, to create a new program. They aimed to build a "next-generation" music notation program, and released Dorico four years later, in 2016. History: The project was unveiled on 20 February 2013 by the Product Marketing Manager, Daniel Spreadbury, on the blog Making Notes, and the software was first released on 19 October 2016. The program's title Dorico was revealed on the same blog on 17 May 2016. The name honours the 16th-century Italian music engraver Valerio Dorico (1500 – c. 1565), who printed first editions of sacred music by Giovanni Pierluigi da Palestrina and Giovanni Animuccia and pioneered the use of a single impression printing process first developed in England and France. The iPad version was released on 28 July 2021; it was the first major desktop scorewriter application to be made available on a mobile platform. It offers most of the functionality of the desktop app. Features: Dorico is known for its stability and reliability in creating aesthetically pleasing scores and its intuitive interface. User feedback influences Dorico's feature design, and the development team actively use the forum and Facebook group. Automation Reviews have claimed that Dorico has become more efficient than other notation software. For example, a signature time-saving feature is its automatic creation of instrumental part layouts. Another signature feature is its automated condensing, where it combines multiple players' parts onto a single staff, such as for a conductor's score. Keyboard input Dorico natively supports note input entirely from the computer keyboard without the need to use the mouse. It also supports MIDI input from a piano keyboard. Features: SMuFL music fonts The Standard Music Font Layout (SMuFL) standard was created by the Dorico development team at Steinberg. It provides a consistent standard way of mapping the thousands of musical symbols required by conventional music notation into a single font that can be used by a variety of software and font designers. It was first implemented in MuseScore, then in Dorico's first release and in Finale.
**Limit load (aeronautics)** Limit load (aeronautics): For aircraft specification calculation in aeronautics, limit load (LL) is the maximum load authorized during flight. Mathematically, LL = LLF × W, where LL = limit load, LLF = limit load factor, and W = weight of the aircraft. Limit load (aeronautics): Limit load is constant for all weights above design gross weight. The limit load factor is reduced if gross weight is increased, but the LLF cannot be increased if the gross weight is decreased below the design gross weight. Engine mounts and other structural members are designed for the nominal LLF. The nominal or limit load Bn is the load which should only occur once (or only a very few times) during the lifetime of an aircraft. Bn may therefore only occur once during (e.g.) 60,000 hours of flying. No plastic deformation is allowed at this load level. Limit load (aeronautics): The limit load can be found relatively easily by statistically analysing the data collected during the many hours of logged flights (which are continuously being gathered).
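A minimal numeric sketch of the LL = LLF × W relation and of how the usable load factor falls when gross weight exceeds the design gross weight. The limit load factor and weights below are assumed placeholder values for illustration, not figures from this article or from any certification standard.

```python
# LL = LLF x W: limit load from limit load factor and aircraft weight.
def limit_load(limit_load_factor: float, weight_n: float) -> float:
    """Limit load in newtons."""
    return limit_load_factor * weight_n

design_gross_weight_n = 12_000.0   # assumed design gross weight (N)
llf = 3.8                          # assumed limit load factor

# At the design gross weight the nominal LLF applies:
ll = limit_load(llf, design_gross_weight_n)
print(ll)                          # 45600.0

# Above the design gross weight the limit load stays constant, so the
# available load factor is reduced in proportion, as the text describes:
heavier_weight_n = 15_000.0
reduced_llf = ll / heavier_weight_n
print(round(reduced_llf, 2))       # 3.04
```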
**Fashion museum** Fashion museum: A fashion museum is dedicated to or features a significant collection of accessories or clothing. There is some overlap with textile museums. Fashion museum: Notable examples include the Costume Museum of Canada, the Fashion Museum, Bath, the Musée Galliera in Paris, and MoMu, the Fashion Museum of the Province of Antwerp. National museums with significant fashion collections include the Victoria and Albert Museum in London. The Metropolitan Museum of Art in New York contains a collection of more than 75,000 costumes and accessories. Another in London is the Fashion and Textile Museum, founded by designer Zandra Rhodes in 2003, and the only museum in Britain dedicated to showcasing developments in contemporary fashion, as well as providing inspiration, support and training for those working in the industry.
**OR3A2** OR3A2: Olfactory receptor 3A2 is a protein that in humans is encoded by the OR3A2 gene. Olfactory receptors interact with odorant molecules in the nose to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
**Press conference** Press conference: A press conference or news conference is a media event in which notable individuals or organizations invite journalists to hear them speak and ask questions. Press conferences are often held by politicians, corporations, non-governmental organizations, as well as organizers for newsworthy events. Practice: In a press conference, one or more speakers may make a statement, which may be followed by questions from reporters. Sometimes only questioning occurs; sometimes there is a statement with no questions permitted. Practice: A media event at which no statements are made, and no questions allowed, is called a photo op. A government may wish to open their proceedings for the media to witness events, such as the passing of a piece of legislation from the government in parliament to the senate, via a media availability.American television stations and networks especially value press conferences: because today's TV news programs air for hours at a time, or even continuously, assignment editors have a steady appetite for ever-larger quantities of footage. Practice: News conferences are often held by politicians; by sports teams; by celebrities or film studios; by commercial organizations to promote products; by attorneys to promote lawsuits; and by almost anyone who finds benefit in the free publicity afforded by media coverage. Some people, including many police chiefs, hold press conferences reluctantly in order to avoid dealing with reporters individually. A press conference is often announced by sending an advisory or news release to assignment editors, preferably well in advance. Sometimes they are held spontaneously when several reporters gather around a newsmaker. Practice: News conferences can be held just about anywhere, in settings as formal as the White House room set aside for the purpose or as informal as the street in front of a crime scene. Hotel conference rooms and courthouses are often used for press conferences. Sometimes such gatherings are recorded for press use and later released on an interview disc. Media day: Media day is a special press conference event where rather than holding a conference after an event to field questions about the event that has recently transpired, a conference is held for the sole purpose of making newsmakers available to the media for general questions and photographs often before an event or series of events (such as an athletic season) occur. In athletics, teams and leagues host media days prior to the season and may host them prior to special events during the season like all-star games and championship games.
**Hilbert's second problem** Hilbert's second problem: In mathematics, Hilbert's second problem was posed by David Hilbert in 1900 as one of his 23 problems. It asks for a proof that arithmetic is consistent – free of any internal contradictions. Hilbert stated that the axioms he considered for arithmetic were the ones given in Hilbert (1900), which include a second-order completeness axiom. In the 1930s, Kurt Gödel and Gerhard Gentzen proved results that cast new light on the problem. Some feel that Gödel's theorems give a negative solution to the problem, while others consider Gentzen's proof a partial positive solution. Hilbert's problem and its interpretation: In one English translation, Hilbert asks: "When we are engaged in investigating the foundations of a science, we must set up a system of axioms which contains an exact and complete description of the relations subsisting between the elementary ideas of that science. ... But above all I wish to designate the following as the most important among the numerous questions which can be asked with regard to the axioms: To prove that they are not contradictory, that is, that a definite number of logical steps based upon them can never lead to contradictory results. In geometry, the proof of the compatibility of the axioms can be effected by constructing a suitable field of numbers, such that analogous relations between the numbers of this field correspond to the geometrical axioms. ... On the other hand a direct method is needed for the proof of the compatibility of the arithmetical axioms." Hilbert's statement is sometimes misunderstood, because by the "arithmetical axioms" he did not mean a system equivalent to Peano arithmetic, but a stronger system with a second-order completeness axiom. The system Hilbert asked for a consistency proof of is more like second-order arithmetic than first-order Peano arithmetic. Hilbert's problem and its interpretation: In a common modern interpretation, a positive solution to Hilbert's second question would in particular provide a proof that Peano arithmetic is consistent. Hilbert's problem and its interpretation: There are many known proofs that Peano arithmetic is consistent that can be carried out in strong systems such as Zermelo–Fraenkel set theory. These do not provide a resolution to Hilbert's second question, however, because someone who doubts the consistency of Peano arithmetic is unlikely to accept the axioms of set theory (which is much stronger) to prove its consistency. Thus a satisfactory answer to Hilbert's problem must be carried out using principles that would be acceptable to someone who does not already believe PA is consistent. Such principles are often called finitistic because they are completely constructive and do not presuppose a completed infinity of natural numbers. Gödel's second incompleteness theorem (see Gödel's incompleteness theorems) places a severe limit on how weak a finitistic system can be while still proving the consistency of Peano arithmetic. Gödel's incompleteness theorem: Gödel's second incompleteness theorem shows that it is not possible for any proof that Peano arithmetic is consistent to be carried out within Peano arithmetic itself. This theorem shows that if the only acceptable proof procedures are those that can be formalized within arithmetic, then Hilbert's call for a consistency proof cannot be answered. 
However, as Nagel & Newman (1958) explain, there is still room for a proof that cannot be formalized in arithmetic: "This imposing result of Godel's analysis should not be misunderstood: it does not exclude a meta-mathematical proof of the consistency of arithmetic. What it excludes is a proof of consistency that can be mirrored by the formal deductions of arithmetic. Meta-mathematical proofs of the consistency of arithmetic have, in fact, been constructed, notably by Gerhard Gentzen, a member of the Hilbert school, in 1936, and by others since then. ... But these meta-mathematical proofs cannot be represented within the arithmetical calculus; and, since they are not finitistic, they do not achieve the proclaimed objectives of Hilbert's original program. ... The possibility of constructing a finitistic absolute proof of consistency for arithmetic is not excluded by Gödel’s results. Gödel showed that no such proof is possible that can be represented within arithmetic. His argument does not eliminate the possibility of strictly finitistic proofs that cannot be represented within arithmetic. But no one today appears to have a clear idea of what a finitistic proof would be like that is not capable of formulation within arithmetic." Gentzen's consistency proof: In 1936, Gentzen published a proof that Peano Arithmetic is consistent. Gentzen's result shows that a consistency proof can be obtained in a system that is much weaker than set theory. Gentzen's consistency proof: Gentzen's proof proceeds by assigning to each proof in Peano arithmetic an ordinal number, based on the structure of the proof, with each of these ordinals less than ε0. He then proves by transfinite induction on these ordinals that no proof can conclude in a contradiction. The method used in this proof can also be used to prove a cut elimination result for Peano arithmetic in a stronger logic than first-order logic, but the consistency proof itself can be carried out in ordinary first-order logic using the axioms of primitive recursive arithmetic and a transfinite induction principle. Tait (2005) gives a game-theoretic interpretation of Gentzen's method. Gentzen's consistency proof: Gentzen's consistency proof initiated the program of ordinal analysis in proof theory. In this program, formal theories of arithmetic or set theory are assigned ordinal numbers that measure the consistency strength of the theories. A theory will be unable to prove the consistency of another theory with a higher proof theoretic ordinal. Modern viewpoints on the status of the problem: While the theorems of Gödel and Gentzen are now well understood by the mathematical logic community, no consensus has formed on whether (or in what way) these theorems answer Hilbert's second problem. Simpson (1988) argues that Gödel's incompleteness theorem shows that it is not possible to produce finitistic consistency proofs of strong theories. Kreisel (1976) states that although Gödel's results imply that no finitistic syntactic consistency proof can be obtained, semantic (in particular, second-order) arguments can be used to give convincing consistency proofs. Detlefsen (1990) argues that Gödel's theorem does not prevent a consistency proof because its hypotheses might not apply to all the systems in which a consistency proof could be carried out. Dawson (2006) calls the belief that Gödel's theorem eliminates the possibility of a persuasive consistency proof "erroneous", citing the consistency proof given by Gentzen and a later one given by Gödel in 1958.
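In modern notation, the two results discussed above are often summarized as follows; this is a standard textbook formulation rather than a quotation from Gödel or Gentzen:

```latex
% Goedel's second incompleteness theorem: a consistent, recursively
% axiomatized theory T extending elementary arithmetic cannot prove
% its own consistency statement,
\[
  T \nvdash \mathrm{Con}(T), \qquad \text{in particular} \quad
  \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}).
\]
% Gentzen's consistency proof: primitive recursive arithmetic plus
% quantifier-free transfinite induction up to the ordinal epsilon_0
% suffices to prove the consistency of Peano arithmetic,
\[
  \mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \;\vdash\; \mathrm{Con}(\mathrm{PA}),
  \qquad
  \varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\}.
\]
```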
**Kripke–Platek set theory with urelements** Kripke–Platek set theory with urelements: The Kripke–Platek set theory with urelements (KPU) is an axiom system for set theory with urelements, based on the traditional (urelement-free) Kripke–Platek set theory. It is considerably weaker than the (relatively) familiar system ZFU. The purpose of allowing urelements is to allow large or high-complexity objects (such as the set of all reals) to be included in the theory's transitive models without disrupting the usual well-ordering and recursion-theoretic properties of the constructible universe; KP is so weak that this is hard to do by traditional means. Preliminaries: The usual way of stating the axioms presumes a two-sorted first-order language L∗ with a single binary relation symbol ∈. Letters of the sort p,q,r,... designate urelements, of which there may be none, whereas letters of the sort a,b,c,... designate sets. The letters x,y,z,... may denote both sets and urelements. Preliminaries: The letters for sets may appear on both sides of ∈, while those for urelements may only appear on the left, i.e. the following are examples of valid expressions: p∈a, b∈a. The statement of the axioms also requires reference to a certain collection of formulae called Δ0-formulae. The collection Δ0 consists of those formulae that can be built using the constants, ∈, ¬, ∧, ∨, and bounded quantification, that is, quantification of the form ∀x∈a or ∃x∈a where a is a given set. Axioms: The axioms of KPU are the universal closures of the following formulae: Extensionality: ∀x(x∈a↔x∈b)→a=b. Foundation: This is an axiom schema where for every formula ϕ(x) we have ∃a.ϕ(a)→∃a(ϕ(a)∧∀x∈a(¬ϕ(x))). Pairing: ∃a(x∈a∧y∈a). Union: ∃a∀c∈b.∀y∈c(y∈a). Δ0-Separation: This is again an axiom schema, where for every Δ0-formula ϕ(x) we have ∃a∀x(x∈a↔x∈b∧ϕ(x)). Δ0-Collection: This is also an axiom schema; for every Δ0-formula ϕ(x,y) we have ∀x∈a.∃y.ϕ(x,y)→∃b∀x∈a.∃y∈b.ϕ(x,y). Set Existence: ∃a(a=a). Additional assumptions: Technically these are axioms that describe the partition of objects into sets and urelements. Applications: KPU can be applied to the model theory of infinitary languages. Models of KPU considered as sets inside a maximal universe that are transitive as such are called admissible sets.
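For readability, the axiom schemas listed above can also be typeset in display form (same content as the list above; φ ranges over arbitrary formulae in Foundation and over Δ0-formulae in Separation and Collection):

```latex
\begin{align*}
\textbf{Extensionality:}\quad & \forall x\,(x \in a \leftrightarrow x \in b) \rightarrow a = b\\
\textbf{Foundation:}\quad & \exists a\,\varphi(a) \rightarrow \exists a\,\bigl(\varphi(a) \wedge \forall x \in a\,\neg\varphi(x)\bigr)\\
\textbf{Pairing:}\quad & \exists a\,(x \in a \wedge y \in a)\\
\textbf{Union:}\quad & \exists a\,\forall c \in b\ \forall y \in c\,(y \in a)\\
\Delta_0\textbf{-Separation:}\quad & \exists a\,\forall x\,\bigl(x \in a \leftrightarrow (x \in b \wedge \varphi(x))\bigr)\\
\Delta_0\textbf{-Collection:}\quad & \forall x \in a\ \exists y\,\varphi(x,y) \rightarrow \exists b\,\forall x \in a\ \exists y \in b\,\varphi(x,y)\\
\textbf{Set Existence:}\quad & \exists a\,(a = a)
\end{align*}
```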
**Tritium Systems Test Assembly** Tritium Systems Test Assembly: The Tritium Systems Test Assembly (TSTA) was a facility at Los Alamos National Laboratory dedicated to the development and demonstration of technologies required for fusion-relevant deuterium-tritium processing. Facility design was launched in 1977. It was commissioned in 1982, and the first tritium was processed in 1984. The maximum tritium inventory was 140 grams.
**Natural logarithm** Natural logarithm: The natural logarithm of a number is its logarithm to the base of the mathematical constant e, which is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is generally written as ln x, $\log_e x$, or sometimes, if the base e is implicit, simply log x. Parentheses are sometimes added for clarity, giving ln(x), $\log_e(x)$, or log(x). This is done particularly when the argument to the logarithm is not a single symbol, so as to prevent ambiguity. Natural logarithm: The natural logarithm of x is the power to which e would have to be raised to equal x. For example, ln 7.5 is 2.0149..., because $e^{2.0149\ldots} = 7.5$. The natural logarithm of e itself, ln e, is 1, because $e^1 = e$, while the natural logarithm of 1 is 0, since $e^0 = 1$. Natural logarithm: The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a (with the area being negative when 0 < a < 1). The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, leads to the term "natural". The definition of the natural logarithm can then be extended to give logarithm values for negative numbers and for all non-zero complex numbers, although this leads to a multi-valued function: see complex logarithm for more. Natural logarithm: The natural logarithm function, if considered as a real-valued function of a positive real variable, is the inverse function of the exponential function, leading to the identities: $e^{\ln x} = x$ if $x > 0$, and $\ln(e^x) = x$ for $x \in \mathbb{R}$. Like all logarithms, the natural logarithm maps multiplication of positive numbers into addition: $\ln(xy) = \ln x + \ln y$. Natural logarithm: Logarithms can be defined for any positive base other than 1, not only e. However, logarithms in other bases differ only by a constant multiplier from the natural logarithm, and can be defined in terms of the latter: $\log_b x = \ln x / \ln b = \ln x \cdot \log_b e$. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity. For example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and scientific disciplines, and are used to solve problems involving compound interest. History: The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa before 1649. Their work involved quadrature of the hyperbola with equation xy = 1, by determination of the area of hyperbolic sectors. Their solution generated the requisite "hyperbolic logarithm" function, which had the properties now associated with the natural logarithm. History: An early mention of the natural logarithm was by Nicholas Mercator in his work Logarithmotechnia, published in 1668, although the mathematics teacher John Speidell had already compiled a table of what in fact were effectively natural logarithms in 1619. It has been said that Speidell's logarithms were to the base e, but this is not entirely true due to complications with the values being expressed as integers. Notational conventions: The notations ln x and $\log_e x$ both refer unambiguously to the natural logarithm of x, and log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics, along with some scientific contexts as well as in many programming languages.
In some other contexts such as chemistry, however, log x can be used to denote the common (base 10) logarithm. It may also refer to the binary (base 2) logarithm in the context of computer science, particularly in the context of time complexity. Definitions: The natural logarithm can be defined in several equivalent ways. Definitions: Inverse of exponential The most general definition is as the inverse function of $e^x$, so that $e^{\ln x} = x$. Because $e^x$ is positive and invertible for any real input x, this definition of $\ln(x)$ is well defined for any positive x. For the complex numbers, $e^z$ is not invertible, so $\ln(z)$ is a multivalued function. In order to make $\ln(z)$ a proper, single-output function, we therefore need to restrict it to a particular principal branch, often denoted by $\operatorname{Ln}(z)$. As the inverse function of $e^z$, $\ln(z)$ can be defined by inverting the usual definition of $e^z$:
$$e^z = \lim_{n\to\infty}\left(1+\frac{z}{n}\right)^n.$$
Doing so yields:
$$\ln z = \lim_{n\to\infty} n\left(z^{1/n}-1\right).$$
This definition therefore derives its own principal branch from the principal branch of nth roots. Definitions: Integral definition The natural logarithm of a positive, real number a may be defined as the area under the graph of the hyperbola with equation y = 1/x between x = 1 and x = a. This is the integral
$$\ln a = \int_1^a \frac{1}{x}\,dx.$$
If a is in (0, 1) then the region has negative area and the logarithm is negative. This function is a logarithm because it satisfies the fundamental multiplicative property of a logarithm: $\ln(ab) = \ln a + \ln b$. This can be demonstrated by splitting the integral that defines ln ab into two parts, and then making the variable substitution x = at (so dx = a dt) in the second part, as follows:
$$\ln(ab) = \int_1^{ab}\frac{1}{x}\,dx = \int_1^{a}\frac{1}{x}\,dx + \int_a^{ab}\frac{1}{x}\,dx = \ln a + \int_1^{b}\frac{a}{at}\,dt = \ln a + \ln b.$$
In elementary terms, this is simply scaling by 1/a in the horizontal direction and by a in the vertical direction. Area does not change under this transformation, but the region between a and ab is reconfigured. Because the function a/(ax) is equal to the function 1/x, the resulting area is precisely ln b. The number e can then be defined to be the unique real number a such that ln a = 1. The natural logarithm also has an improper integral representation, which can be derived with Fubini's theorem as follows:
$$\ln(x) = \int_1^x \frac{1}{u}\,du = \int_1^x\!\!\int_0^\infty e^{-tu}\,dt\,du = \int_0^\infty\!\!\int_1^x e^{-tu}\,du\,dt = \int_0^\infty \frac{e^{-t}-e^{-tx}}{t}\,dt.$$
Properties: The natural logarithm has the following mathematical properties: $\ln 1 = 0$; $\ln e = 1$; $\ln(xy) = \ln x + \ln y$ for $x > 0$ and $y > 0$; $\ln(x/y) = \ln x - \ln y$; $\ln(x^y) = y\ln x$ for $x > 0$; $\ln x < \ln y$ for $0 < x < y$; $\lim_{x\to 0}\frac{\ln(1+x)}{x} = 1$; $\lim_{\alpha\to 0}\frac{x^{\alpha}-1}{\alpha} = \ln x$ for $x > 0$; $\frac{x-1}{x}\le\ln x\le x-1$ for $x > 0$; $\ln(1+x^{\alpha})\le\alpha x$ for $x\ge 0$ and $\alpha\ge 1$. Derivative: The derivative of the natural logarithm as a real-valued function on the positive reals is given by
$$\frac{d}{dx}\ln x = \frac{1}{x}.$$
How to establish this derivative of the natural logarithm depends on how it is defined firsthand. If the natural logarithm is defined as the integral $\ln x = \int_1^x\frac{1}{t}\,dt$, then the derivative immediately follows from the first part of the fundamental theorem of calculus. On the other hand, if the natural logarithm is defined as the inverse of the (natural) exponential function, then the derivative (for x > 0) can be found by using the properties of the logarithm and a definition of the exponential function. From the definition of the number $e = \lim_{u\to 0}(1+u)^{1/u}$, the exponential function can be defined as $e^x = \lim_{u\to 0}(1+u)^{x/u} = \lim_{h\to 0}(1+hx)^{1/h}$, where $u = hx$, $h = u/x$. The derivative can then be found from first principles:
$$\frac{d}{dx}\ln x = \lim_{h\to 0}\frac{\ln(x+h)-\ln x}{h} = \lim_{h\to 0}\frac{1}{h}\ln\!\left(\frac{x+h}{x}\right) = \lim_{h\to 0}\ln\!\left(1+\frac{h}{x}\right)^{1/h}$$
(all of the above by logarithmic properties),
$$= \ln\!\left(\lim_{h\to 0}\left(1+\frac{h}{x}\right)^{1/h}\right)$$
(by continuity of the logarithm),
$$= \ln\!\left(e^{1/x}\right)$$
(by the definition of $e^x$ as a limit),
$$= \frac{1}{x}$$
(by the definition of ln as the inverse function). Also, we have:
$$\frac{d}{dx}\ln(ax) = \frac{d}{dx}\bigl(\ln a + \ln x\bigr) = \frac{d}{dx}\ln x = \frac{1}{x},$$
so, unlike its inverse function $e^{ax}$, a constant in the function does not alter the differential.
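As a quick numerical check of the integral definition above, the following sketch approximates ln a as the signed area under 1/x between 1 and a with a simple midpoint rule and compares it with the library logarithm (the step count is an arbitrary choice):

```python
import math

def ln_by_area(a, steps=100_000):
    """Approximate ln(a) as the area under y = 1/x from 1 to a (midpoint rule)."""
    h = (a - 1.0) / steps
    total = 0.0
    for i in range(steps):
        x_mid = 1.0 + (i + 0.5) * h   # midpoint of the i-th subinterval
        total += h / x_mid            # signed area of one thin rectangle
    return total

for a in (0.5, 2.0, 7.5):
    print(a, ln_by_area(a), math.log(a))  # the two columns agree closely
```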
Series: Since the natural logarithm is undefined at 0, $\ln(x)$ itself does not have a Maclaurin series, unlike many other elementary functions. Instead, one looks for Taylor expansions around other points. For example, if $|x-1|\le 1$ and $x\ne 0$, then
$$\ln x = \int_1^x\frac{1}{t}\,dt = \int_0^{x-1}\frac{1}{1+u}\,du = \int_0^{x-1}\bigl(1-u+u^2-u^3+\cdots\bigr)\,du = (x-1)-\frac{(x-1)^2}{2}+\frac{(x-1)^3}{3}-\frac{(x-1)^4}{4}+\cdots = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}(x-1)^k}{k}.$$
This is the Taylor series for ln x around 1. A change of variables yields the Mercator series:
$$\ln(1+x) = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}x^k = x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots,$$
valid for |x| ≤ 1 and x ≠ −1. Series: Leonhard Euler, disregarding the restriction $x\ne -1$, nevertheless applied this series to x = −1 to show that the harmonic series equals the natural logarithm of 1/(1 − 1), that is, the logarithm of infinity. Nowadays, more formally, one can prove that the harmonic series truncated at N is close to the logarithm of N, when N is large, with the difference converging to the Euler–Mascheroni constant. Series: The figure is a graph of ln(1 + x) and some of its Taylor polynomials around 0. These approximations converge to the function only in the region −1 < x ≤ 1; outside this region, the higher-degree Taylor polynomials devolve to worse approximations for the function. A useful special case for positive integers n, taking $x=\tfrac{1}{n}$, is:
$$\ln\!\left(\frac{n+1}{n}\right) = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k n^k} = \frac{1}{n}-\frac{1}{2n^2}+\frac{1}{3n^3}-\frac{1}{4n^4}+\cdots$$
If $\operatorname{Re}(x)\ge 1/2$, then
$$\ln x = -\ln\frac{1}{x} = -\sum_{k=1}^{\infty}\frac{(-1)^{k-1}\bigl(\frac{1}{x}-1\bigr)^k}{k} = \sum_{k=1}^{\infty}\frac{(x-1)^k}{k x^k} = \frac{x-1}{x}+\frac{(x-1)^2}{2x^2}+\frac{(x-1)^3}{3x^3}+\frac{(x-1)^4}{4x^4}+\cdots$$
Now, taking $x=\tfrac{n+1}{n}$ for positive integers n, we get:
$$\ln\!\left(\frac{n+1}{n}\right) = \sum_{k=1}^{\infty}\frac{1}{k(n+1)^k} = \frac{1}{n+1}+\frac{1}{2(n+1)^2}+\frac{1}{3(n+1)^3}+\frac{1}{4(n+1)^4}+\cdots$$
If $\operatorname{Re}(x)\ge 0$ and $x\ne 0$, then
$$\ln x = \ln\frac{1+\frac{x-1}{x+1}}{1-\frac{x-1}{x+1}} = \ln\!\left(1+\frac{x-1}{x+1}\right)-\ln\!\left(1-\frac{x-1}{x+1}\right).$$
Since
$$\ln(1+y)-\ln(1-y) = \sum_{i=1}^{\infty}\frac{1}{i}\Bigl((-1)^{i-1}y^i-(-1)^{i-1}(-y)^i\Bigr) = \sum_{i=1}^{\infty}\frac{y^i}{i}\bigl((-1)^{i-1}+1\bigr) = y\sum_{i=1}^{\infty}\frac{y^{i-1}}{i}\bigl((-1)^{i-1}+1\bigr) \overset{i-1\to 2k}{=} 2y\sum_{k=0}^{\infty}\frac{y^{2k}}{2k+1},$$
we arrive at
$$\ln(x) = \frac{2(x-1)}{x+1}\sum_{k=0}^{\infty}\frac{1}{2k+1}\left(\frac{(x-1)^2}{(x+1)^2}\right)^{k} = \frac{2(x-1)}{x+1}\left(\frac{1}{1}+\frac{1}{3}\frac{(x-1)^2}{(x+1)^2}+\frac{1}{5}\left(\frac{(x-1)^2}{(x+1)^2}\right)^{2}+\cdots\right).$$
Using the substitution $x=\tfrac{n+1}{n}$ again for positive integers n, we get:
$$\ln\!\left(\frac{n+1}{n}\right) = \frac{2}{2n+1}\sum_{k=0}^{\infty}\frac{1}{(2k+1)\bigl((2n+1)^2\bigr)^{k}} = 2\left(\frac{1}{2n+1}+\frac{1}{3(2n+1)^3}+\frac{1}{5(2n+1)^5}+\cdots\right).$$
This is, by far, the fastest converging of the series described here. The natural logarithm can also be expressed as an infinite product:
$$\ln(x) = (x-1)\prod_{k=1}^{\infty}\left(\frac{2}{1+x^{2^{-k}}}\right).$$
For example:
$$\ln 2 = \frac{2}{1+\sqrt{2}}\cdot\frac{2}{1+\sqrt[4]{2}}\cdot\frac{2}{1+\sqrt[8]{2}}\cdot\frac{2}{1+\sqrt[16]{2}}\cdots$$
From this identity, we can easily get that:
$$\frac{1}{\ln(x)} = \frac{x}{x-1}-\sum_{k=1}^{\infty}\frac{2^{-k}\,x^{2^{-k}}}{1+x^{2^{-k}}}.$$
For example:
$$\frac{1}{\ln(2)} = 2-\frac{\sqrt{2}}{2+2\sqrt{2}}-\frac{\sqrt[4]{2}}{4+4\sqrt[4]{2}}-\frac{\sqrt[8]{2}}{8+8\sqrt[8]{2}}-\cdots$$
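A minimal sketch of the last, rapidly converging series above, used to compute ln((n+1)/n); the cutoff of 30 terms is an arbitrary choice:

```python
import math

def ln_ratio(n, terms=30):
    """ln((n+1)/n) = 2 * sum_{k>=0} 1 / ((2k+1) * (2n+1)**(2k+1))."""
    s = 0.0
    base = 2 * n + 1
    for k in range(terms):
        s += 1.0 / ((2 * k + 1) * base ** (2 * k + 1))
    return 2.0 * s

print(ln_ratio(1), math.log(2))      # ln(2/1)
print(ln_ratio(4), math.log(1.25))   # ln(5/4)
```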
The natural logarithm in integration: The natural logarithm allows simple integration of functions of the form g(x) = f′(x)/f(x): an antiderivative of g(x) is given by ln(|f(x)|). This is the case because of the chain rule and the following fact:
$$\frac{d}{dx}\ln|x| = \frac{1}{x},\qquad x\ne 0.$$
In other words, when integrating over an interval of the real line that does not include x = 0, then
$$\int\frac{1}{x}\,dx = \ln|x|+C,$$
where C is an arbitrary constant of integration. The natural logarithm in integration: Likewise, when the integral is over an interval where $f(x)\ne 0$,
$$\int\frac{f'(x)}{f(x)}\,dx = \ln|f(x)|+C.$$
For example, consider the integral of tan(x) over an interval that does not include points where tan(x) is infinite:
$$\int\tan x\,dx = \int\frac{\sin x}{\cos x}\,dx = -\int\frac{\frac{d}{dx}\cos x}{\cos x}\,dx = -\ln|\cos x|+C = \ln|\sec x|+C.$$
The natural logarithm can be integrated using integration by parts:
$$\int\ln x\,dx = x\ln x-x+C.$$
Let
$$u=\ln x\ \Rightarrow\ du=\frac{dx}{x},\qquad dv=dx\ \Rightarrow\ v=x,$$
then:
$$\int\ln x\,dx = x\ln x-\int\frac{x}{x}\,dx = x\ln x-\int 1\,dx = x\ln x-x+C.$$
Efficient computation: For ln(x) where x > 1, the closer the value of x is to 1, the faster the rate of convergence of its Taylor series centered at 1. The identities associated with the logarithm can be leveraged to exploit this:
$$\ln 123.456 = \ln(1.23456\times 10^2) = \ln 1.23456+\ln(10^2) = \ln 1.23456+2\ln 10 \approx \ln 1.23456+2\times 2.3025851.$$
Such techniques were used before calculators, by referring to numerical tables and performing manipulations such as those above. Natural logarithm of 10 The natural logarithm of 10, which has the decimal expansion 2.30258509..., plays a role for example in the computation of natural logarithms of numbers represented in scientific notation, as a mantissa multiplied by a power of 10:
$$\ln(a\times 10^n) = \ln a + n\ln 10.$$
This means that one can effectively calculate the logarithms of numbers with very large or very small magnitude using the logarithms of a relatively small set of decimals in the range [1, 10). Efficient computation: High precision To compute the natural logarithm with many digits of precision, the Taylor series approach is not efficient since the convergence is slow. Especially if x is near 1, a good alternative is to use Halley's method or Newton's method to invert the exponential function, because the series of the exponential function converges more quickly. For finding the value of y to give exp(y) − x = 0 using Halley's method, or equivalently to give exp(y/2) − x exp(−y/2) = 0 using Newton's method, the iteration simplifies to
$$y_{n+1} = y_n + 2\,\frac{x-\exp(y_n)}{x+\exp(y_n)},$$
which has cubic convergence to ln(x). Efficient computation: Another alternative for extremely high precision calculation is the formula
$$\ln x \approx \frac{\pi}{2\,M(1,4/s)} - m\ln 2,$$
where M denotes the arithmetic-geometric mean of 1 and 4/s, and $s = x\,2^m > 2^{p/2}$, with m chosen so that p bits of precision is attained. (For most purposes, the value of 8 for m is sufficient.) In fact, if this method is used, Newton inversion of the natural logarithm may conversely be used to calculate the exponential function efficiently. (The constants ln 2 and π can be pre-computed to the desired precision using any of several known quickly converging series.) Or, the following formula can be used:
$$\ln x = \frac{\pi}{M\!\bigl(\theta_2^2(1/x),\,\theta_3^2(1/x)\bigr)},\qquad x\in(1,\infty),$$
where
$$\theta_2(q) = \sum_{n\in\mathbb{Z}}q^{(n+1/2)^2},\qquad \theta_3(q) = \sum_{n\in\mathbb{Z}}q^{n^2}$$
are the Jacobi theta functions. Based on a proposal by William Kahan and first implemented in the Hewlett-Packard HP-41C calculator in 1979 (referred to under "LN1" in the display, only), some calculators, operating systems (for example Berkeley UNIX 4.3BSD), computer algebra systems and programming languages (for example C99) provide a special natural logarithm plus 1 function, alternatively named LNP1, or log1p, to give more accurate results for logarithms close to zero by passing arguments x, also close to zero, to a function log1p(x), which returns the value ln(1+x), instead of passing a value y close to 1 to a function returning ln(y). The function log1p avoids, in floating-point arithmetic, a near cancelling of the absolute term 1 with the second term from the Taylor expansion of the ln. This keeps the argument, the result, and intermediate steps all close to zero where they can be most accurately represented as floating-point numbers. In addition to base e, the IEEE 754-2008 standard defines similar logarithmic functions near 1 for binary and decimal logarithms: log2(1 + x) and log10(1 + x). Efficient computation: Similar inverse functions named "expm1", "expm" or "exp1m" exist as well, all with the meaning of expm1(x) = exp(x) − 1. An identity in terms of the inverse hyperbolic tangent,
$$\operatorname{log1p}(x) = \ln(1+x) = 2\,\operatorname{artanh}\!\left(\frac{x}{2+x}\right),$$
gives a high precision value for small values of x on systems that do not implement log1p(x). Computational complexity The computational complexity of computing the natural logarithm using the arithmetic-geometric mean (for both of the above methods) is O(M(n) ln n). Here n is the number of digits of precision at which the natural logarithm is to be evaluated and M(n) is the computational complexity of multiplying two n-digit numbers.
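A minimal sketch of the Halley/Newton-style iteration described under "High precision", here in ordinary double precision only (the starting guess and fixed iteration count are arbitrary choices; genuinely high-precision use would require arbitrary-precision arithmetic), together with a log1p call for arguments near zero:

```python
import math

def ln_by_inverting_exp(x, iterations=6):
    """Solve exp(y) = x with the iteration y <- y + 2*(x - exp(y)) / (x + exp(y))."""
    y = x - 1.0 if 0.5 < x < 2.0 else 1.0   # crude starting guess
    for _ in range(iterations):
        e = math.exp(y)
        y += 2.0 * (x - e) / (x + e)        # cubically convergent update
    return y

print(ln_by_inverting_exp(123.456), math.log(123.456))
print(math.log1p(1e-12), math.log(1.0 + 1e-12))  # log1p keeps accuracy near zero
```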
Continued fractions: While no simple continued fractions are available, several generalized continued fractions are, including:
$$\ln(1+x) = \frac{x^1}{1}-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}-\cdots = \cfrac{x}{1-0x+\cfrac{1^2x}{2-1x+\cfrac{2^2x}{3-2x+\cfrac{3^2x}{4-3x+\cfrac{4^2x}{5-4x+\ddots}}}}}$$
$$\ln\!\left(1+\frac{x}{y}\right) = \cfrac{x}{y+\cfrac{1x}{2+\cfrac{1x}{3y+\cfrac{2x}{2+\cfrac{2x}{5y+\cfrac{3x}{2+\ddots}}}}}} = \cfrac{2x}{2y+x-\cfrac{(1x)^2}{3(2y+x)-\cfrac{(2x)^2}{5(2y+x)-\cfrac{(3x)^2}{7(2y+x)-\ddots}}}}$$
These continued fractions—particularly the last—converge rapidly for values close to 1. However, the natural logarithms of much larger numbers can easily be computed, by repeatedly adding those of smaller numbers, with similarly rapid convergence. For example, since 2 = 1.25^3 × 1.024, the natural logarithm of 2 can be computed as:
$$\ln 2 = 3\ln 1.25 + \ln 1.024 = \cfrac{6}{9-\cfrac{1^2}{27-\cfrac{2^2}{45-\cfrac{3^2}{63-\ddots}}}} + \cfrac{6}{253-\cfrac{3^2}{759-\cfrac{6^2}{1265-\cfrac{9^2}{1771-\ddots}}}}$$
Furthermore, since 10 = 1.25^10 × 1.024^3, even the natural logarithm of 10 can be computed similarly as:
$$\ln 10 = 10\ln 1.25 + 3\ln 1.024 = \cfrac{20}{9-\cfrac{1^2}{27-\cfrac{2^2}{45-\cfrac{3^2}{63-\ddots}}}} + \cfrac{18}{253-\cfrac{3^2}{759-\cfrac{6^2}{1265-\cfrac{9^2}{1771-\ddots}}}}$$
The reciprocal of the natural logarithm can be also written in this way:
$$\frac{1}{\ln(x)} = \frac{2x}{x^2-1}\,\sqrt{\frac{1}{2}+\frac{x^2+1}{4x}}\cdot\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{x^2+1}{4x}}}\cdot\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{x^2+1}{4x}}}}\cdots$$
For example:
$$\frac{1}{\ln(2)} = \frac{4}{3}\,\sqrt{\frac{1}{2}+\frac{5}{8}}\cdot\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{5}{8}}}\cdot\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{5}{8}}}}\cdots$$
Complex logarithms: The exponential function can be extended to a function which gives a complex number as $e^z$ for any arbitrary complex number z; simply use the infinite series with x = z complex. This exponential function can be inverted to form a complex logarithm that exhibits most of the properties of the ordinary logarithm. There are two difficulties involved: no x has $e^x = 0$; and it turns out that $e^{2i\pi} = 1 = e^0$. Since the multiplicative property still works for the complex exponential function, $e^z = e^{z+2ki\pi}$ for all complex z and integers k. Complex logarithms: So the logarithm cannot be defined for the whole complex plane, and even then it is multi-valued—any complex logarithm can be changed into an "equivalent" logarithm by adding any integer multiple of 2iπ at will. The complex logarithm can only be single-valued on the cut plane. For example, ln i = iπ/2 or 5iπ/2 or −3iπ/2, etc.; and although $i^4 = 1$, 4 ln i can be defined as 2iπ, or 10iπ or −6iπ, and so on. Complex logarithms: Plots of the natural logarithm function on the complex plane (principal branch)
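Returning to the generalized continued fraction for ln(1 + x/y) above, a minimal sketch that evaluates it from the bottom up and uses it for ln 2 = 3 ln 1.25 + ln 1.024 as in the text (the truncation depth of 20 levels is an arbitrary choice):

```python
import math

def ln_one_plus_ratio(x, y, depth=20):
    """ln(1 + x/y) via the continued fraction
       2x / (2y+x - (1x)^2 / (3(2y+x) - (2x)^2 / (5(2y+x) - ...)))."""
    b = 2 * y + x
    tail = (2 * depth + 1) * b               # innermost partial denominator
    for k in range(depth, 0, -1):
        tail = (2 * k - 1) * b - (k * x) ** 2 / tail
    return 2 * x / tail

ln2 = 3 * ln_one_plus_ratio(1, 4) + ln_one_plus_ratio(3, 125)
print(ln2, math.log(2))   # agrees to full double precision at this depth
```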
**Diffusion capacitance** Diffusion capacitance: Diffusion capacitance is the capacitance that arises due to transport of charge carriers between two terminals of a device, for example, the diffusion of carriers from anode to cathode in a forward-biased diode or from emitter to base in a forward-biased junction of a transistor. In a semiconductor device with a current flowing through it (for example, an ongoing transport of charge by diffusion) at a particular moment there is necessarily some charge in the process of transit through the device. If the applied voltage changes to a different value and the current changes to a different value, a different amount of charge will be in transit in the new circumstances. The change in the amount of transiting charge divided by the change in the voltage causing it is the diffusion capacitance. The adjective "diffusion" is used because the original use of this term was for junction diodes, where the charge transport was via the diffusion mechanism. See Fick's laws of diffusion. Diffusion capacitance: To implement this notion quantitatively, at a particular moment in time let the voltage across the device be V. Now assume that the voltage changes with time slowly enough that at each moment the current is the same as the DC current that would flow at that voltage, say I = I(V) (the quasistatic approximation). Suppose further that the time to cross the device is the forward transit time $\tau_F$. In this case the amount of charge in transit through the device at this particular moment, denoted Q, is given by $Q = I(V)\,\tau_F$. Consequently, the corresponding diffusion capacitance $C_{\mathrm{diff}}$ is
$$C_{\mathrm{diff}} = \frac{dQ}{dV} = \frac{dI(V)}{dV}\,\tau_F.$$
In the event the quasi-static approximation does not hold, that is, for very fast voltage changes occurring in times shorter than the transit time $\tau_F$, the equations governing time-dependent transport in the device must be solved to find the charge in transit, for example the Boltzmann equation. That problem is a subject of continuing research under the topic of non-quasistatic effects. See Liu, and Gildenblat et al.
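As a rough numerical illustration of the formula above (the parameter values below are invented for the example and are not taken from the text): for an ideal diode with $I(V) = I_S(e^{V/V_T}-1)$, the derivative dI/dV in forward bias is essentially I/V_T, so the diffusion capacitance grows exponentially with the applied voltage.

```python
import math

# Illustrative (assumed) parameters: saturation current, thermal voltage, transit time
I_S   = 1e-14     # A
V_T   = 0.02585   # V  (kT/q near room temperature)
tau_F = 1e-9      # s  (assumed forward transit time)

def diffusion_capacitance(V):
    """C_diff = tau_F * dI/dV for an ideal-diode I-V characteristic."""
    dI_dV = (I_S / V_T) * math.exp(V / V_T)   # derivative of I_S*(exp(V/V_T) - 1)
    return tau_F * dI_dV

for V in (0.5, 0.6, 0.7):
    print(f"V = {V} V  ->  C_diff = {diffusion_capacitance(V):.3e} F")
```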
**DNA digital data storage** DNA digital data storage: DNA digital data storage is the process of encoding and decoding binary data to and from synthesized strands of DNA. While DNA as a storage medium has enormous potential because of its high storage density, its practical use is currently severely limited because of its high cost and very slow read and write times. In June 2019, scientists reported that all 16 GB of text from the English Wikipedia had been encoded into synthetic DNA. In 2021, scientists reported that a custom DNA data writer had been developed that was capable of writing data into DNA at 18 Mbps. Encoding methods: Countless methods for encoding data in DNA are possible. The optimal methods are those that make economical use of DNA and protect against errors. If the message DNA is intended to be stored for a long period of time, for example, 1,000 years, it is also helpful if the sequence is obviously artificial and the reading frame is easy to identify. Encoding methods: Encoding text Several simple methods for encoding text have been proposed. Most of these involve translating each letter into a corresponding "codon", consisting of a unique small sequence of nucleotides in a lookup table. Some examples of these encoding schemes include Huffman codes, comma codes, and alternating codes. Encoding methods: Encoding arbitrary data To encode arbitrary data in DNA, the data is typically first converted into ternary (base 3) data rather than binary (base 2) data. Each digit (or "trit") is then converted to a nucleotide using a lookup table. To prevent homopolymers (repeating nucleotides), which can cause problems with accurate sequencing, the result of the lookup also depends on the preceding nucleotide. Using the example lookup table below, if the previous nucleotide in the sequence is T (thymine), and the trit is 2, the next nucleotide will be G (guanine). Encoding methods: Various systems may be incorporated to partition and address the data, as well as to protect it from errors. One approach to error correction is to regularly intersperse synchronization nucleotides between the information-encoding nucleotides. These synchronization nucleotides can act as scaffolds when reconstructing the sequence from multiple overlapping strands.
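The example lookup table referred to above did not survive in this text. The sketch below therefore uses a hypothetical but consistent rotating table, chosen only so that a preceding T with trit 2 yields G, as in the example; actual published schemes differ in detail.

```python
# Trit -> nucleotide, keyed by the previously emitted nucleotide, so that the
# same nucleotide is never emitted twice in a row (no homopolymers).
# This particular table is an illustrative assumption, not the published one.
NEXT_BASE = {
    "A": ("C", "G", "T"),
    "C": ("G", "T", "A"),
    "G": ("T", "A", "C"),
    "T": ("A", "C", "G"),   # previous T, trit 2 -> G (matches the example above)
}

def encode_trits(trits, start="A"):
    """Encode a sequence of trits (0, 1, 2) as a homopolymer-free DNA string."""
    prev, out = start, []
    for t in trits:
        base = NEXT_BASE[prev][t]
        out.append(base)
        prev = base
    return "".join(out)

print(encode_trits([0, 1, 2, 2, 0, 1]))  # 'CTGCGA' with this table: no repeated bases
```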
In vivo: The genetic code within living organisms can potentially be co-opted to store information. Furthermore, synthetic biology can be used to engineer cells with "molecular recorders" to allow the storage and retrieval of information stored in the cell's genetic material. CRISPR gene editing can also be used to insert artificial DNA sequences into the genome of the cell. For encoding developmental lineage data (molecular flight recorder), roughly 30 trillion cell nuclei per mouse * 60 recording sites per nucleus * 7-15 bits per site yields about 2 TeraBytes per mouse written (but only very selectively read). In vivo: In-vivo light-based direct image and data recording A proof-of-concept in-vivo direct DNA data recording system was demonstrated through incorporation of optogenetically regulated recombinases as part of an engineered "molecular recorder", allowing for direct encoding of light-based stimuli into engineered E. coli cells. This approach can also be parallelized to store and write text or data in 8-bit form through the use of physically separated individual cell cultures in cell-culture plates. In vivo: This approach leverages the editing of a "recorder plasmid" by the light-regulated recombinases, allowing for identification of cell populations exposed to different stimuli. This approach allows for the physical stimulus to be directly encoded into the "recorder plasmid" through recombinase action. Unlike other approaches, this approach does not require manual design, insertion and cloning of artificial sequences to record the data into the genetic code. In this recording process, each individual cell population in each cell-culture plate culture well can be treated as a digital "bit", functioning as a biological transistor capable of recording a single bit of data. History: The idea of DNA digital data storage dates back to 1959, when the physicist Richard P. Feynman, in "There's Plenty of Room at the Bottom: An Invitation to Enter a New Field of Physics" outlined the general prospects for the creation of artificial objects similar to objects of the microcosm (including biological) and having similar or even more extensive capabilities. In 1964–65, Mikhail Samoilovich Neiman, the Soviet physicist, published 3 articles about microminiaturization in electronics at the molecular-atomic level, which independently presented general considerations and some calculations regarding the possibility of recording, storage, and retrieval of information on synthesized DNA and RNA molecules. After the publication of Neiman's first paper, and after the editor had received the manuscript of his second paper (on January 8, 1964, as indicated in that paper), an interview with the cybernetician Norbert Wiener was published. Wiener expressed ideas about the miniaturization of computer memory that were close to those proposed independently by Neiman, and Neiman mentioned Wiener's ideas in the third of his papers. This story has been described in detail. One of the earliest uses of DNA storage occurred in a 1988 collaboration between artist Joe Davis and researchers from Harvard University. The image, stored in a DNA sequence in E. coli, was organized in a 5 x 7 matrix that, once decoded, formed a picture of an ancient Germanic rune representing life and the female Earth. In the matrix, ones corresponded to dark pixels while zeros corresponded to light pixels. In 2007 a device was created at the University of Arizona using addressing molecules to encode mismatch sites within a DNA strand. These mismatches were then able to be read out by performing a restriction digest, thereby recovering the data. In 2011, George Church, Sri Kosuri, and Yuan Gao carried out an experiment that would encode a 659 kb book that was co-authored by Church. To do this, the research team used a two-to-one correspondence where a binary zero was represented by either an adenine or cytosine and a binary one was represented by a guanine or thymine. After examination, 22 errors were found in the DNA. In 2012, George Church and colleagues at Harvard University published an article in which DNA was encoded with digital information that included an HTML draft of a 53,400 word book written by the lead researcher, eleven JPEG images and one JavaScript program. Multiple copies for redundancy were added and 5.5 petabits can be stored in each cubic millimeter of DNA. The researchers used a simple code where bits were mapped one-to-one with bases, which had the shortcoming that it led to long runs of the same base, the sequencing of which is error-prone. This result showed that besides its other functions, DNA can also serve as another type of storage medium, like hard disk drives and magnetic tapes.
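A toy sketch of the two-to-one correspondence described above for the 2011 experiment (binary 0 as adenine or cytosine, binary 1 as guanine or thymine). The random choice between the two candidate bases is this sketch's own simplification, and, as noted above for the 2012 work, mapping bits directly to bases can still produce error-prone runs of the same base.

```python
import random

def encode_bits(bits, seed=0):
    """Map each bit to one of two bases: 0 -> A/C, 1 -> G/T."""
    rng = random.Random(seed)
    return "".join(rng.choice("AC" if b == 0 else "GT") for b in bits)

def decode_bases(dna):
    """Invert the mapping: A/C -> 0, G/T -> 1."""
    return [0 if base in "AC" else 1 for base in dna]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
dna = encode_bits(bits)
print(dna, decode_bases(dna) == bits)   # round-trips back to the original bits
```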
In 2013, an article led by researchers from the European Bioinformatics Institute (EBI) and submitted at around the same time as the paper of Church and colleagues detailed the storage, retrieval, and reproduction of over five million bits of data. All the DNA files reproduced the information with an accuracy between 99.99% and 100%. The main innovations in this research were the use of an error-correcting encoding scheme to ensure the extremely low data-loss rate, as well as the idea of encoding the data in a series of overlapping short oligonucleotides identifiable through a sequence-based indexing scheme. Also, the sequences of the individual strands of DNA overlapped in such a way that each region of data was repeated four times to avoid errors. Two of these four strands were constructed backwards, also with the goal of eliminating errors. The costs per megabyte were estimated at $12,400 to encode data and $220 for retrieval. However, it was noted that the exponential decrease in DNA synthesis and sequencing costs, if it continues into the future, should make the technology cost-effective for long-term data storage by 2023. In 2013, software called DNACloud was developed by Manish K. Gupta and co-workers to encode computer files to their DNA representation. It implements a memory efficient version of the algorithm proposed by Goldman et al. to encode (and decode) data to DNA (.dnac files). The long-term stability of data encoded in DNA was reported in February 2015, in an article by researchers from ETH Zurich. The team added redundancy via Reed–Solomon error correction coding and by encapsulating the DNA within silica glass spheres via Sol-gel chemistry. In 2016, research by Church and Technicolor Research and Innovation was published in which 22 MB of an MPEG compressed movie sequence were stored and recovered from DNA. The recovery of the sequence was found to have zero errors. In March 2017, Yaniv Erlich and Dina Zielinski of Columbia University and the New York Genome Center published a method known as DNA Fountain that stored data at a density of 215 petabytes per gram of DNA. The technique approaches the Shannon capacity of DNA storage, achieving 85% of the theoretical limit. The method was not ready for large-scale use, as it costs $7000 to synthesize 2 megabytes of data and another $2000 to read it. In March 2018, University of Washington and Microsoft published results demonstrating storage and retrieval of approximately 200MB of data. The research also proposed and evaluated a method for random access of data items stored in DNA. In March 2019, the same team announced they have demonstrated a fully automated system to encode and decode data in DNA. Research published by Eurecom and Imperial College in January 2019 demonstrated the ability to store structured data in synthetic DNA. The research showed how to encode structured or, more specifically, relational data in synthetic DNA and also demonstrated how to perform data processing operations (similar to SQL) directly on the DNA as chemical processes. In April 2019, due to a collaboration with TurboBeads Labs in Switzerland, Mezzanine by Massive Attack was encoded into synthetic DNA, making it the first album to be stored in this way. In June 2019, scientists reported that all 16 GB of Wikipedia have been encoded into synthetic DNA.
In 2021, CATALOG reported that they had developed a custom DNA writer capable of writing data at 18 Mbps into DNA. The first article describing data storage on native DNA sequences via enzymatic nicking was published in April 2020. In the paper, scientists demonstrate a new method of recording information in the DNA backbone which enables bit-wise random access and in-memory computing. Davos Bitcoin Challenge: On January 21, 2015, Nick Goldman from the European Bioinformatics Institute (EBI), one of the original authors of the 2013 Nature paper, announced the Davos Bitcoin Challenge at the World Economic Forum annual meeting in Davos. During his presentation, DNA tubes were handed out to the audience, with the message that each tube contained the private key of exactly one bitcoin, all coded in DNA. The first one to sequence and decode the DNA could claim the bitcoin and win the challenge. The challenge was set for three years and would close if nobody claimed the prize before January 21, 2018. Almost three years later, on January 19, 2018, the EBI announced that a Belgian PhD student, Sander Wuyts, of the University of Antwerp and Vrije Universiteit Brussel, was the first one to complete the challenge. Next to the instructions on how to claim the bitcoin (stored as a plain text and PDF file), the logo of the EBI, the logo of the company that printed the DNA (CustomArray), and a sketch of James Joyce were retrieved from the DNA. The Lunar Library: The Lunar Library, launched on the Beresheet Lander by the Arch Mission Foundation, carries information encoded in DNA, which includes 20 famous books and 10,000 images. This was one of the optimal choices of storage, as DNA can last a long time. The Arch Mission Foundation suggests that it can still be read after billions of years. DNA of things: The concept of the DNA of Things (DoT) was introduced in 2019 by a team of researchers from Israel and Switzerland, including Yaniv Erlich and Robert Grass. DoT encodes digital data into DNA molecules, which are then embedded into objects. This gives the ability to create objects that carry their own blueprint, similar to biological organisms. In contrast to the Internet of things, which is a system of interrelated computing devices, DoT creates objects which are independent storage objects, completely off-grid. DNA of things: As a proof of concept for DoT, the researchers 3D-printed a Stanford bunny which contains its blueprint in the plastic filament used for printing. By clipping off a tiny bit of the ear of the bunny, they were able to read out the blueprint, multiply it and produce a next generation of bunnies. In addition, the ability of DoT to serve for steganographic purposes was shown by producing non-distinguishable lenses which contain a YouTube video integrated into the material.
**Ectrodactyly** Ectrodactyly: Ectrodactyly, split hand, or cleft hand (derived from Greek ektroma "miscarriage" and daktylos "finger") involves the deficiency or absence of one or more central digits of the hand or foot and is also known as split hand/split foot malformation (SHFM). The hands and feet of people with ectrodactyly (ectrodactyls) are often described as "claw-like" and may include only the thumb and one finger (usually either the little finger, ring finger, or a syndactyly of the two) with similar abnormalities of the feet. It is a rare form of congenital disorder in which the development of the hand is disturbed. It is a type I failure of formation – longitudinal arrest. The central ray of the hand is affected and usually appears without proximal deficiencies of nerves, vessels, tendons, muscles and bones, in contrast to the radial and ulnar deficiencies. The cleft hand appears as a V-shaped cleft situated in the centre of the hand. The digits at the borders of the cleft might be syndactylized, and one or more digits can be absent. In most types, the thumb, ring finger and little finger are the less affected parts of the hand. The incidence of cleft hand varies from 1 in 90,000 to 1 in 10,000 births, depending on the classification used. Cleft hand can appear unilateral or bilateral, and can appear isolated or associated with a syndrome. Ectrodactyly: Split hand/foot malformation (SHFM) is characterized by underdeveloped or absent central digital rays, clefts of hands and feet, and variable syndactyly of the remaining digits. SHFM is a heterogeneous condition caused by abnormalities at one of multiple loci, including SHFM1 (SHFM1 at 7q21-q22), SHFM2 (Xq26), SHFM3 (FBXW4/DACTYLIN at 10q24), SHFM4 (TP63 at 3q27), and SHFM5 (DLX1 and DLX2 at 2q31). SHFM3 is unique in that it is caused by submicroscopic tandem chromosome duplications of FBXW4/DACTYLIN. SHFM3 is considered 'isolated' ectrodactyly and does not show a mutation of the tp63 gene. Presentation: Ectrodactyly can be caused by various changes to 7q. When 7q is altered by a deletion or a translocation, ectrodactyly can sometimes be associated with hearing loss. Ectrodactyly, or split hand/split foot malformation (SHFM) type 1, is the only form of split hand/split foot malformation associated with sensorineural hearing loss. Genetics: A large number of human gene defects can cause ectrodactyly. The most common mode of inheritance is autosomal dominant with reduced penetrance, while autosomal recessive and X-linked forms occur more rarely. Ectrodactyly can also be caused by a duplication on 10q24. Detailed studies of a number of mouse models for ectrodactyly have also revealed that a failure to maintain median apical ectodermal ridge (AER) signalling can be the main pathogenic mechanism in triggering this abnormality. A number of factors make the identification of the genetic defects underlying human ectrodactyly a complicated process: the limited number of families linked to each split hand/foot malformation (SHFM) locus, the large number of morphogens involved in limb development, the complex interactions between these morphogens, the involvement of modifier genes, and the presumed involvement of multiple gene or long-range regulatory elements in some cases of ectrodactyly.
In the clinical setting, these genetic characteristics can become problematic, making predictions of carrier status and of disease severity impossible. In 2011, a novel mutation in DLX5 was found to be involved in SHFM. Ectrodactyly is frequently seen with other congenital anomalies. Syndromes in which ectrodactyly is associated with other abnormalities can occur when two or more genes are affected by a chromosomal rearrangement. Disorders associated with ectrodactyly include Ectrodactyly-Ectodermal Dysplasia-Clefting (EEC) syndrome, which is closely correlated to the ADULT syndrome and Limb-mammary (LMS) syndrome, Ectrodactyly-Cleft Palate (ECP) syndrome, Ectrodactyly-Ectodermal Dysplasia-Macular Dystrophy syndrome, Ectrodactyly-Fibular Aplasia/Hypoplasia (EFA) syndrome, and Ectrodactyly-Polydactyly. More than 50 syndromes and associations involving ectrodactyly are distinguished in the London Dysmorphology Database. Pathophysiology: The pathophysiology of cleft hand is thought to be a result of a wedge-shaped defect of the apical ectoderm of the limb bud (AER: apical ectodermal ridge). Polydactyly, syndactyly and cleft hand can occur within the same hand, therefore some investigators suggest that these entities occur from the same mechanism. This mechanism is not yet defined. Pathophysiology: Genetics The cause of cleft hand lies, as far as is known, partly in genetics. The inheritance of cleft hand is autosomal dominant and has a variable penetrance of 70%. Cleft hand can be a spontaneous mutation during pregnancy (de novo mutation). The exact chromosomal defect in isolated cleft hand is not yet defined. However, the genetic causes of cleft hand related to syndromes have more clarity. Pathophysiology: The identified mutation for SHSF syndrome (split-hand/split-foot syndrome) is a duplication on 10q24, and not a mutation of the tp63 gene as in families affected by EEC syndrome (ectrodactyly–ectodermal dysplasia–cleft syndrome). The p63 gene plays a critical role in the development of the apical ectodermal ridge (AER); this was found in mutant mice with dactylaplasia. Embryology Some studies have postulated that polydactyly, syndactyly and cleft hand have the same teratogenic mechanism. In vivo tests showed that limb anomalies were found alone or in combination with cleft hand when Myleran was administered. These anomalies take place in humans around day 41 of gestation. Diagnosis: Classification There are several classifications for cleft hand, but the most used classification is that described by Manske and Halikis (see Table 3). This classification is based on the first web space. The first web space is the space between the thumb and the index finger. Table 3: Classification for cleft hand described by Manske and Halikis Treatment: The treatment of cleft hand is usually invasive and can differ from case to case because of the heterogeneity of the condition. The function of a cleft hand is mostly not restricted, yet improving the function is one of the goals when the thumb or first webspace is absent. The social and stigmatising aspects of a cleft hand require more attention. The hand is a part of the body which is usually shown during communication. When this hand is obviously different and deformed, stigmatisation or rejection can occur. Sometimes, in families with cleft hand with good function, operations for cosmetic aspects are considered marginal and the families choose not to have surgery.
Treatment: Indications Surgical treatment of the cleft hand is based on several indications. Improving function: an absent thumb; deforming syndactyly (mostly between digits of unequal length, like index and thumb); transverse bones (these will progress the deformity, since growth of these bones will widen the cleft); a narrowed first webspace; the feet. Aesthetical aspects: reducing deformity. Timing of surgical interventions The timing of surgical interventions is debatable. Parents have to decide about their child at a very vulnerable time of their parenthood. Indications for early treatment are progressive deformities, such as syndactyly between index and thumb or transverse bones between the digital rays. Other surgical interventions are less urgent and can wait for 1 or 2 years. Treatment: Classification and treatment When surgery is indicated, the choice of treatment is based on the classification. Table 4 shows the treatment of cleft hand divided according to the classification of Manske and Halikis. Treatment: Techniques described by Ueba, Miura and Komada and the procedure of Snow-Littler are guidelines; since clinical and anatomical presentation within the types differ, the actual treatment is based on the individual abnormality. Table 4: Treatment based on the classification of Manske and Halikis Snow-Littler The goal of this procedure is to create a wide first web space and to minimise the cleft in the hand. The index digit will be transferred to the ulnar side of the cleft. Simultaneously a correction of index malrotation and deviation is performed. To minimise the cleft, it is necessary to fix together the metacarpals which used to border the cleft. Through repositioning flaps, the wound can be closed. Treatment: Ueba Ueba described a less complicated surgery. Transverse flaps are used to resurface the palm, the dorsal side of the transposed digit and the ulnar part of the first web space. A tendon graft is used to connect the common extensor tendons of the border digits of the cleft to prevent digital separation during extension. The closure is simpler, but has a cosmetic disadvantage because of the switch between palmar and dorsal skin. Treatment: Miura and Komada The release of the first webspace has the same principle as the Snow-Littler procedure. The difference is the closure of the first webspace; this is done by simple closure or closure with Z-plasties. History: Literature shows that cleft hand has been described for centuries. In City of God (426 A.D.), St. Augustine remarks: At Hippo-Diarrhytus there is a man whose hands are crescent-shaped, and have only two fingers each, and his feet similarly formed. History: The first modern reference to what might be considered a cleft hand was by Ambroise Paré in 1575. Hartsink (1770) wrote the first report of true cleft hand. In 1896, the first operation of the cleft hand was performed by Doctor Charles N. Dowed of New York City. However, the first certain description of the cleft hand as we know it today dates from the end of the 19th century. History: Symbrachydactyly Historically, a U-type cleft hand was also known as atypical cleft hand. The classification in which typical and atypical cleft hand are described was mostly used for clinical aspects and is shown in Table 1. Nowadays, this "atypical cleft hand" is referred to as symbrachydactyly and is not a subtype of cleft hand.
Notable cases: Bree Walker, once a popular television anchor woman in Los Angeles, who has appeared in the television drama Nip/Tuck as an inspirational character who battles her condition and counsels another family who have children with ectrodactyly; Grady Stiles Sr. and Grady Stiles Jr., known publicly as Lobster Boy and family, famous side show acts, featured on the AMC reality show Freakshow. Notable cases: The Vadoma tribe in northern Zimbabwe; Mikhail Tal, Soviet chess player, World Chess Champion 1960–61; Lee Hee-ah, a Korean pianist with only two fingers on each hand; Cédric Grégoire (better known as Lord Lokhraed), the guitarist and lead vocalist of French black metal band Nocturnal Depression, who has ectrodactyly on his fretting hand, which has only two fingers; Black Scorpion, freak show performer; Sam Schröder, 2020 US Open Quad Champion; Francesca Jones, British pro tennis player, former #149 in WTA rankings. Other animals: Ectrodactyly is not only a genetic characteristic in humans, but can also occur in frogs and toads, mice, salamanders, cows, chickens, rabbits, marmosets, cats and dogs, and even West Indian manatees. The following examples are studies showing the natural occurrence of ectrodactyly in animals, without the disease being reproduced and tested in a laboratory. In all three examples we see how rare the actual occurrence of ectrodactyly is. Other animals: Wood frog The Department of Biological Sciences at the University of Alberta in Edmonton, Alberta performed a study to estimate deformity levels in wood frogs in areas of relatively low disturbance. After roughly 22,733 individuals were examined during field studies, it was found that only 49 wood frogs had the ectrodactyly deformity. Other animals: Salamanders In a study performed by the Department of Forestry and Natural Resources at Purdue University, approximately 2000 salamanders (687 adults and 1259 larvae) were captured from a large wetland complex and evaluated for malformations. Among the 687 adults, 54 (7.9%) were malformed. Of these 54 adults, 46 (85%) had missing (ectrodactyly), extra (polyphalangy) or dwarfed digits (brachydactyly). Among the 1259 larvae, 102 were malformed, with 94 (92%) of the malformations involving ectrodactyly, polyphalangy, and brachydactyly. Results showed few differences in the frequency of malformations among life-history changes, suggesting that malformed larvae do not have substantially higher mortality than their adult conspecifics. Other animals: Cats and dogs Davis and Barry 1977 tested allele frequencies in domestic cats. Among the 265 cats observed, there were 101 males and 164 females. Only one cat was recorded to have the ectrodactyly abnormality, illustrating this rare disease. According to M.P. Ferreira, a case of ectrodactyly was found in a two-month-old male mixed Terrier dog. In another study, Carrig and co-workers also reported a series of 14 dogs with this abnormality, proving that although ectrodactyly is an uncommon occurrence for dogs, it is not entirely unheard of.
**Sequential auction** Sequential auction: A sequential auction is an auction in which several items are sold, one after the other, to the same group of potential buyers. In a sequential first-price auction (SAFP), each individual item is sold using a first price auction, while in a sequential second-price auction (SASP), each individual item is sold using a second price auction. Sequential auction: A sequential auction differs from a combinatorial auction, in which many items are auctioned simultaneously and the agents can bid on bundles of items. A sequential auction is much simpler to implement and more common in practice. However, the bidders in each auction know that there are going to be future auctions, and this may affect their strategic considerations. Here are some examples. Sequential auction: Example 1. There are two items for sale and two potential buyers: Alice and Bob, with the following valuations: Alice values each item as 5, and both items as 10 (i.e., her valuation is additive). Sequential auction: Bob values each item as 4, and both items as 4 (i.e., his valuation is unit demand). In a SASP, each item is sold in a second-price auction. Usually, such an auction is a truthful mechanism, so if each item is sold in isolation, Alice wins both items and pays 4 for each item; her total payment is 4 + 4 = 8 and her net utility is 5 + 5 − 8 = 2. But, if Alice knows Bob's valuations, she has a better strategy: she can let Bob win the first item (e.g. by bidding 0). Then, Bob will not participate in the second auction at all, so Alice will win the second item and pay 0, and her net utility will be 5 − 0 = 5. Sequential auction: A similar outcome happens in a SAFP. If each item is sold in isolation, there is a Nash equilibrium in which Alice bids slightly above 4 and wins, and her net utility is slightly below 2. But, if Alice knows Bob's valuations, she can deviate to a strategy that lets Bob win in the first round so that in the second round she can win for a price slightly above 0. Sequential auction: Example 2. Multiple identical objects are auctioned, and the agents have budget constraints. It may be advantageous for a bidder to bid aggressively on one object with a view to raising the price paid by his rival and depleting his budget so that the second object may then be obtained at a lower price. In effect, a bidder may wish to "raise a rival's costs" in one market in order to gain advantage in another. Such considerations seem to have played a significant role in the auctions for radio spectrum licenses conducted by the Federal Communications Commission. Assessment of rival bidders' budget constraints was a primary component of the pre-bidding preparation of GTE's bidding team. Nash equilibrium: A sequential auction is a special case of a sequential game. A natural question to ask for such a game is when there exists a subgame perfect equilibrium in pure strategies (SPEPS). When the players have full information (i.e., they know the sequence of auctions in advance), and a single item is sold in each round, a SAFP always has a SPEPS, regardless of the players' valuations. The proof is by backward induction. In the last round, we have a simple first price auction. It has a pure-strategy Nash equilibrium in which the highest-value agent wins by bidding slightly above the second-highest value. Nash equilibrium: In each previous round, the situation is a special case of a first-price auction with externalities.
In such an auction, each agent may gain value, not only when he wins, but also when other agents win. In general, the valuation of agent i is represented by a vector $v_i[1],\dots,v_i[n]$, where $v_i[j]$ is the value of agent i when agent j wins. In a sequential auction, the externalities are determined by the equilibrium outcomes in the future rounds. In the introductory example, there are two possible outcomes: If Alice wins the first round, then the equilibrium outcome in the second round is that Alice buys an item worth $5 for $4, so her net gain is $1. Therefore, her total value for winning the first round is $v_{\mathrm{Alice}}[\mathrm{Alice}] = 5 + 1 = 6$. If Bob wins the first round, then the equilibrium outcome in the second round is that Alice buys an item worth $5 for $0, so her net gain is $5. Therefore, her total value for letting Bob win is $v_{\mathrm{Alice}}[\mathrm{Bob}] = 0 + 5 = 5$. Each first-price auction with externalities has a pure-strategy Nash equilibrium. In the above example, the equilibrium in the first round is that Bob wins and pays $1. Nash equilibrium: Therefore, by backward induction, each SAFP has a pure-strategy SPE. Notes: The existence result also holds for SASP. In fact, any equilibrium-outcome of a first-price auction with externalities is also an equilibrium-outcome of a second-price auction with the same externalities. The existence result holds regardless of the valuations of the bidders – they may have arbitrary utility functions on indivisible goods. In contrast, if all auctions are done simultaneously, a pure-strategy Nash equilibrium does not always exist, even if the bidders have subadditive utility functions. Social welfare: Once we know that a subgame perfect equilibrium exists, the next natural question is how efficient it is – does it obtain the maximum social welfare? This is quantified by the price of anarchy (PoA) – the ratio of the maximum attainable social welfare to the social welfare in the worst equilibrium. In the introductory Example 1, the maximum attainable social welfare is 10 (when Alice wins both items), but the welfare in equilibrium is 9 (Bob wins the first item and Alice wins the second), so the PoA is 10/9. In general, the PoA of sequential auctions depends on the utility functions of the bidders. Social welfare: The first five results apply to agents with complete information (all agents know the valuations of all other agents): Case 1: Identical items. There are several identical items. There are two bidders. At least one of them has a concave valuation function (diminishing returns). The PoA of SASP is at most 1.58. Numerical results show that, when there are many bidders with concave valuation functions, the efficiency loss decreases as the number of users increases. Social welfare: Case 2: Additive bidders. The items are different, and all bidders regard all items as independent goods, so their valuations are additive set functions. The PoA of SASP is unbounded – the welfare in a SPEPS might be arbitrarily small. Case 3: Unit-demand bidders. All bidders regard all items as pure substitute goods, so their valuations are unit demand. The PoA of SAFP is at most 2 – the welfare in a SPEPS is at least half the maximum (if mixed strategies are allowed, the PoA is at most 4). In contrast, the PoA in SASP is again unbounded. These results are surprising and they emphasize the importance of the design decision of using a first-price auction (rather than a second-price auction) in each round. Social welfare: Case 4: submodular bidders.
The bidders' valuations are arbitrary submodular set functions (note that additive and unit-demand are special cases of submodular). In this case, the PoA of both SAFP and SASP is unbounded, even when there are only four bidders. The intuition is that the high-value bidder might prefer to let a low-value bidder win, in order to decrease the competition that he might face in the future rounds. Social welfare: Case 5: Additive and unit-demand bidders. Some bidders have additive valuations while others have unit-demand valuations. The PoA of SAFP might be at least min(n, m), where m is the number of items and n is the number of bidders. Moreover, the inefficient equilibria persist even under iterated elimination of weakly dominated strategies. This implies linear inefficiency for many natural settings, including: bidders with gross substitute valuations, capacitated valuations, budget-additive valuations, and additive valuations with hard budget constraints on the payments. Case 6: Unit-demand bidders with incomplete information. The agents do not know the valuations of the other agents, but only the probability distribution from which their valuations are drawn. The sequential auction is then a Bayesian game, and its PoA might be higher. When all bidders have unit-demand valuations, the PoA of a Bayesian Nash equilibrium in a SAFP is at most 3. Revenue maximization: An important practical question for sellers selling several items is how to design an auction that maximizes their revenue. There are several questions: 1. Is it better to use a sequential auction or a simultaneous auction? Sequential auctions with bids announced between sales seem preferable because the bids may convey information about the value of objects to be sold later. The auction literature shows that this information effect increases the seller's expected revenue since it reduces the winner's curse. However, there is also a deception effect which develops in the sequential sales. If a bidder knows that his current bid will reveal information about later objects then he has an incentive to underbid. Revenue maximization: 2. If a sequential auction is used, in what order should the items be sold in order to maximize the seller's revenue? Suppose there are two items and there is a group of bidders who are subject to budget constraints. The objects have common values to all bidders but need not be identical, and may be either complement goods or substitute goods. In a game with complete information: 1. A sequential auction yields more revenue than a simultaneous ascending auction if: (a) the difference between the items' values is large, or (b) there are significant complementarities. A hybrid simultaneous-sequential form yields higher revenue than the sequential auction. Revenue maximization: 2. If the objects are sold by means of a sequence of open ascending auctions, then it is always optimal to sell the more valuable object first (assuming the objects' values are common knowledge). Moreover, budget constraints may arise endogenously. That is, a bidding company may tell its representative "you may spend at most X on this auction", although the company itself has much more money to spend. Limiting the budget in advance gives the bidders some strategic advantages. Revenue maximization: When multiple objects are sold, budget constraints can have some other unanticipated consequences. For example, a reserve price can raise the seller's revenue even though it is set at such a low level that it is never binding in equilibrium. 
Composable mechanisms: Sequential auctions and simultaneous auctions are both special cases of a more general setting, in which the same bidders participate in several different mechanisms. Syrgkanis and Tardos suggest a general framework for efficient mechanism design with guaranteed good properties even when players participate in multiple mechanisms simultaneously or sequentially. The class of smooth mechanisms – mechanisms that generate approximately market-clearing prices – results in high-quality outcomes both in equilibrium and in learning outcomes in the full-information setting, as well as in Bayesian equilibrium with uncertainty about participants. Smooth mechanisms compose well: smoothness locally at each mechanism implies global efficiency. For mechanisms where good performance requires that bidders do not bid above their value, weakly smooth mechanisms can be used, such as the Vickrey auction. They are approximately efficient under the no-overbidding assumption, and the weak smoothness property is also maintained by composition. Some of the results also hold when participants have budget constraints.
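To make the backward-induction argument of the existence proof concrete, here is a minimal Python sketch that recomputes the subgame-perfect outcome of Example 1. The two-bidder setting, the valuation functions, and the pricing convention (in the epsilon-to-zero limit, the winner of each first-price round pays the rival's willingness to pay) are illustrative assumptions for this sketch, not part of the cited results.

```python
# Valuations from Example 1: Alice is additive (each item worth 5),
# Bob is unit-demand (any non-empty bundle worth 4).
VALUATIONS = {
    "Alice": lambda bundle: 5 * len(bundle),
    "Bob": lambda bundle: 4 if bundle else 0,
}
ITEMS = ["item1", "item2"]

def spe(round_idx, bundles, payments):
    """Backward induction for a two-bidder sequential first-price auction.

    In each round, a bidder's cap is (continuation utility if she wins this
    item) minus (continuation utility if the rival wins it); the bidder with
    the larger cap wins and pays the rival's cap, i.e. the epsilon -> 0 limit
    of the first-price auction with externalities described above."""
    if round_idx == len(ITEMS):
        return bundles, payments
    item = ITEMS[round_idx]
    caps = {}
    for bidder in VALUATIONS:
        rival = next(b for b in VALUATIONS if b != bidder)
        utility = {}
        for winner in (bidder, rival):
            trial = {b: s | {item} if b == winner else s for b, s in bundles.items()}
            fin_bundles, fin_pay = spe(round_idx + 1, trial, payments)
            utility[winner] = VALUATIONS[bidder](fin_bundles[bidder]) - fin_pay[bidder]
        caps[bidder] = utility[bidder] - utility[rival]
    winner = max(caps, key=caps.get)
    loser = next(b for b in caps if b != winner)
    price = max(caps[loser], 0)
    new_bundles = {b: s | {item} if b == winner else s for b, s in bundles.items()}
    new_payments = {b: p + (price if b == winner else 0) for b, p in payments.items()}
    return spe(round_idx + 1, new_bundles, new_payments)

bundles, payments = spe(0, {b: frozenset() for b in VALUATIONS},
                        {b: 0 for b in VALUATIONS})
welfare = sum(VALUATIONS[b](bundles[b]) for b in VALUATIONS)
print(bundles, payments)   # Bob gets item1 and pays 1, Alice gets item2 and pays 0
print("welfare:", welfare, "PoA of this instance:", 10 / welfare)
```

Running the sketch reports that Bob wins the first item at price 1 and Alice wins the second at price 0, so the equilibrium welfare is 9 and the price of anarchy of this instance is 10/9, matching the numbers quoted above.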
**Mitotic recombination** Mitotic recombination: Mitotic recombination is a type of genetic recombination that may occur in somatic cells during their preparation for mitosis in both sexual and asexual organisms. In asexual organisms, the study of mitotic recombination is one way to understand genetic linkage because it is the only source of recombination within an individual. Additionally, mitotic recombination can result in the expression of recessive genes in an otherwise heterozygous individual. This expression has important implications for the study of tumorigenesis and lethal recessive genes. Mitotic recombination: Mitotic homologous recombination occurs mainly between sister chromatids subsequent to replication (but prior to cell division). Inter-sister homologous recombination is ordinarily genetically silent. During mitosis the incidence of recombination between non-sister homologous chromatids is only about 1% of that between sister chromatids. Discovery: The discovery of mitotic recombination came from the observation of twin spotting in Drosophila melanogaster. This twin spotting, or mosaic spotting, was observed in D. melanogaster as early as 1925, but it was only in 1936 that Curt Stern explained it as a result of mitotic recombination. Prior to Stern's work, it was hypothesized that twin spotting happened because certain genes had the ability to eliminate the chromosome on which they were located. Later experiments uncovered when mitotic recombination occurs in the cell cycle and the mechanisms behind recombination. Occurrence: Mitotic recombination can happen at any locus but is observable in individuals that are heterozygous at a given locus. If a crossover event between non-sister chromatids affects that locus, then both homologous chromosomes will have one chromatid containing each genotype. The resulting phenotype of the daughter cells depends on how the chromosomes line up on the metaphase plate. If the chromatids containing different alleles line up on the same side of the plate, then the resulting daughter cells will appear heterozygous and be undetectable, despite the crossover event. However, if chromatids containing the same alleles line up on the same side, the daughter cells will be homozygous at that locus. This results in twin spotting, where one cell presents the homozygous recessive phenotype and the other cell has the homozygous wild-type phenotype. If those daughter cells go on to replicate and divide, the twin spots will continue to grow and reflect the differential phenotype. Occurrence: Mitotic recombination takes place during interphase. It has been suggested that recombination takes place during G1, when the DNA is in its two-strand phase, and is then replicated during DNA synthesis. It is also possible for the DNA break that leads to mitotic recombination to occur during G1, with the repair taking place after replication. Occurrence: Response to DNA damage In the budding yeast Saccharomyces cerevisiae, mutations in several genes needed for mitotic (and meiotic) recombination cause increased sensitivity to inactivation by radiation and/or genotoxic chemicals. For example, the gene rad52 is required for mitotic recombination as well as meiotic recombination. Rad52 mutant yeast cells have increased sensitivity to killing by X-rays, methyl methanesulfonate and the DNA crosslinking agent 8-methoxypsoralen-plus-UV light, suggesting that mitotic recombinational repair is required for removal of the different types of DNA damage caused by these agents. 
Mechanisms: The mechanisms behind mitotic recombination are similar to those behind meiotic recombination. These include sister chromatid exchange and mechanisms related to DNA double-strand break repair by homologous recombination such as single-strand annealing, synthesis-dependent strand annealing (SDSA), and gene conversion through a double Holliday junction intermediate or SDSA. In addition, non-homologous mitotic recombination is a possibility and can often be attributed to non-homologous end joining. Method: There are several theories on how mitotic crossover occurs. In the simple crossover model, the two homologous chromosomes overlap on or near a common chromosomal fragile site (CFS). This leads to a double-strand break, which is then repaired using one of the two strands. This can lead to the two chromatids switching places. In another model, two overlapping sister chromatids form a double Holliday junction at a common repeat site and are later sheared in such a way that they switch places. In either model, the chromosomes are not guaranteed to trade evenly, or even to rejoin on opposite sides; thus most patterns of cleavage do not result in any crossover event. Uneven trading introduces many of the deleterious effects of mitotic crossover. Method: Alternatively, a crossover can occur during DNA repair if, due to extensive damage, the homologous chromosome is chosen to be the template over the sister chromatid. This leads to gene conversion, since one copy of the allele is copied across from the homologous chromosome and then synthesized into the breach on the damaged chromosome. The net effect of this would be one heterozygous chromosome and one homozygous chromosome. Advantages and disadvantages: Mitotic crossover is known to occur in D. melanogaster, some asexually reproducing fungi and in normal human cells, where the event may allow normally recessive cancer-causing genes to be expressed and thus predispose the cell in which it occurs to the development of cancer. Alternately, a cell may become a homozygous mutant for a tumor-suppressing gene, leading to the same result. For example, Bloom's syndrome is caused by a mutation in RecQ helicase, which plays a role in DNA replication and repair. This mutation leads to high rates of mitotic recombination in mice, and this recombination rate is in turn responsible for causing tumor susceptibility in those mice. At the same time, mitotic recombination may be beneficial: it may play an important role in repairing double-stranded breaks, and it may be beneficial to the organism if having homozygous dominant alleles is more functional than the heterozygous state. For use in experimentation with genomes in model organisms such as Drosophila melanogaster, mitotic recombination can be induced via X-rays and the FLP-FRT recombination system.
**Penile implant** Penile implant: A penile implant is an implanted device intended for the treatment of erectile dysfunction, Peyronie's disease, ischemic priapism, deformity and any traumatic injury of the penis, and for phalloplasty or metoidioplasty, including in gender-affirming surgery. Men also opt for penile implants for aesthetic purposes. Men's satisfaction and sexual function are influenced by discomfort over genital size, which leads them to seek surgical and non-surgical solutions for penis alteration. Although there are many distinct types of implants, most fall into one of two categories: malleable and inflatable implants. History: The first modern prosthetic reconstruction of a penis is attributed to NA Borgus, a German physician who performed the first surgical attempts in 1936 on soldiers with traumatic amputations of the penis. He used rib cartilages as prosthetic material and reconstructed the genitals for both micturition and intercourse purposes. Willard E. Goodwin and William Wallace Scott were the first to describe the placement of synthetic penile implants using an acrylic prosthesis in 1952. Silicone-based penile implants were developed by Harvey Lash, and the first case series was published in 1964. The development of a high-grade silicone that is currently used in penile implants is credited to NASA. The prototypes of the contemporary inflatable and malleable penile implants were presented in 1973 during the annual meeting of the American Urological Association by two groups of physicians from Baylor University (Gerald Timm, William E. Bradley and F. Brantley Scott) and University of Miami (Michael P. Small and Hernan M. Carrion). Small and Carrion pioneered the popularization of semi-rigid penile implants with the introduction of the Small-Carrion prosthesis (Mentor, USA) in 1975. Brantley Scott described the initial device as composed of two inflatable cylindrical bodies made up of silicone, a reservoir containing radiopaque fluid and two pumping units. The first-generation products were marketed through American Medical Systems (AMS; currently Boston Scientific), with which Brantley Scott was associated. Many device updates have been released by AMS since the first-generation implants. In 1983, Mentor (currently Coloplast) joined the market. In 2017, there were more than ten manufacturers of penile implants in the world; however, only a few now remain in the market. The latest additions to the market are Zephyr Surgical Implants and Rigicon Innovative Urological Solutions. Zephyr Surgical Implants, along with penile implants for biological men, introduced the first line of inflatable and malleable penile implants designed for sex reassignment for trans men. In recent years, Rigicon Innovative Urological Solutions, a US-based company, has made significant advancements in the field of penile implants. In 2017, they released the 'Rigi10,' a malleable implant that expanded the market's options. Following this, in 2019, they introduced both the 'Infla10' series, which includes the Infla10 AX, Infla10 X, and Infla10 models, and the 'Rigi10 Hydrophilic.' These inflatable and hydrophilic-coated malleable models, respectively, were important additions to the range of penile implant technologies available. These advancements have contributed to the diversity and progress in the development of penile implants, offering patients more varied and tailored treatment solutions. 
According to analysis of the 5% Medicare Public Use Files from 2001 to 2010, approximately 3% of patients diagnosed with erectile dysfunction opt for penile implantation. Each year nearly 25,000 inflatable penile prostheses are implanted in the USA. Types: Malleable penile implant The malleable (also known as non-inflatable or semi-rigid) penile prosthesis is a pair of rods implanted into the corpora of the penis. The rods are hard, but 'malleable' in the sense that they can be adjusted manually into the erect position. There are two types of malleable implants: one that is made of silicone and does not have a rod inside, also called soft implants, and another with a silver or steel spiral wire core inside coated with silicone. Some of the models have trimmable tails intended for length adjustment. Currently, a variety of malleable penile implants are available worldwide. Types: Inflatable penile implant The inflatable penile implant (IPP), more recently developed, is a set of inflatable cylinders and a pump system. Based on the differences in structure, there are two types of inflatable penile implants: two-piece and three-piece IPPs. Both types of inflatable devices are filled with sterile saline solution, which is pumped into the cylinders when the device is activated. The cylinders are implanted into the cavernous body of the penis. The pump system is attached to the cylinders and placed in the scrotum. Three-piece implants have a separate large reservoir connected to the pump. The reservoir is commonly placed in the retropubic space (Retzius' space); however, other locations have also been described, such as between the transverse muscle and rectus muscle. Three-piece implants provide more desirable rigidity and girth of the penis, resembling a natural erection. Additionally, due to the presence of a large reservoir, three-piece implants provide full flaccidity of the penis when deflated, thus bringing more comfort than two-piece inflatable and malleable implants. The saline solution is pumped manually from the reservoir into bilateral chambers of cylinders implanted in the shaft of the penis, which replaces the non- or minimally functioning erectile tissue. This produces an erection. The glans of the penis, however, remains unaffected. Ninety to ninety-five percent of inflatable prostheses produce erections suitable for sexual intercourse. In the United States, the inflatable prosthesis has largely replaced the malleable one, due to its lower rate of infections, high device survival rate and 80–90% satisfaction rate. The first IPP prototype presented in 1975 by Scott and colleagues was a three-piece prosthesis (two cylinders, two pumps and a fluid reservoir). Since then, the IPP has undergone multiple modifications and improvements for device reliability and durability, including changes in the materials used in implant manufacturing, hydrophilic and antibiotic-eluting coatings to reduce the rates of infection, one-touch release, etc. Surgical techniques used for the implantation of penile prostheses have also improved along with the evolution of the device. Inflatable penile implants were one of the first interventions in urology where the "no-touch" surgical technique was introduced. This has significantly reduced the rates of post-operative infections. 
Medical use: Erectile dysfunction In spite of the recent rapid and extensive development of non-surgical management options for erectile dysfunction, especially novel targeted medications and gene therapy, penile implants remain the mainstay and the gold-standard choice for the treatment of erectile dysfunction refractory to oral medications and injectable therapy. Additionally, penile implants can be a relevant option for those with erectile dysfunction who want to proceed with a permanent solution without medical therapy. Penile implants have been used for the treatment of erectile dysfunction with various etiologies, including vascular, cavernosal, neurogenic, psychological and post-surgical (e.g. prostatectomy). The American Urological Association recommends informing all men with erectile dysfunction about penile implants as a choice of treatment and discussing the potential outcomes with them. Medical use: Penile deformity Penile implants can help recover the natural shape of the penis in various conditions that have led to penile deformity. These include traumatic injuries, penile surgeries, and disfiguring and fibrosing diseases of the penis such as Peyronie's disease. In Peyronie's disease, the change in penile curvature affects normal sexual intercourse as well as causing erectile dysfunction due to disruption of blood flow in the cavernous bodies of the penis. Therefore, implantation of a penile prosthesis in Peyronie's disease addresses several mechanisms involved in the pathophysiology of the disease. Medical use: Female-to-male sex reassignment Although different models of penile prostheses have been reported to be implanted after phalloplasty procedures, with the first case described in 1978 by Pucket and Montie, the first penile implants designed and produced specifically for female-to-male gender reassignment surgery for trans men were introduced in 2015 by Zephyr Surgical Implants. Both malleable and inflatable models are available. These implants have a more realistic shape with an ergonomic glans at the tip of the prosthesis. The inflatable model has an attached pump resembling a testicle. The prosthesis is implanted with a sturdy fixation on the pubic bone. Another, thinner malleable implant is intended for metoidioplasty. Outcomes: Satisfaction The overall satisfaction rate with penile implants reaches over 90%. Both self- and partner-reported satisfaction rates are evaluated to assess the outcomes. It has been shown that implantation of an inflatable penile prosthesis brings more patient and partner satisfaction than medication therapy with PDE5 inhibitors or intracavernosal injections. Satisfaction rates are reported to be higher with inflatable rather than malleable implants, but there is no difference between two-piece and three-piece devices. The most frequent reasons for dissatisfaction are reduced penis length and girth, failed expectations and difficulties with device use. Thus, it is vital to provide patients and their partners with detailed preoperative counselling and instructions. Outcomes: Curvature correction In 33% to 90% of patients with Peyronie's disease who have had an inflatable penile implant procedure, the penile deformity is successfully corrected. The residual curvature after penile implant placement usually requires intraoperative surgical intervention. Complications: The most common complication associated with penile implant placement appears to be infection, with reported rates of 1–3%. Both surgical site and device infections are reported. 
When the infection involves the penile implant itself, implant removal and irrigation of the cavities with antiseptic solutions are required. In this scenario, placement of a new implant is needed to avoid further tissue fibrosis and shortening of the penis. The rate of repeat surgeries or device replacements ranges from 6% to 13%. Other reported complications include perforation of the corpus cavernosum and urethra (0.1–3%), commonly occurring in patients with previous fibrosis, prosthesis erosion or extrusion, change in glans shape, hematoma, shortening of penis length, and device malfunction. Due to continuous improvement of surgical techniques and modifications of implants, complication rates have dramatically decreased over time. To overcome post-operative penile shortening and to increase the perceived length of the penis and patient satisfaction, ventral and dorsal phalloplasty procedures in combination with penile implants have been described. Modified glanulopexy has been proposed to prevent supersonic transporter deformity and glandular hypermobility, which are possible complications of penile implants. Sliding techniques in which the penis is cut and elongated with penile implants have been performed in cases of severe penile shortening. However, these techniques had higher rates of complications and are currently avoided.
**Singmaster's conjecture** Singmaster's conjecture: Singmaster's conjecture is a conjecture in combinatorial number theory, named after the British mathematician David Singmaster who proposed it in 1971. It says that there is a finite upper bound on the multiplicities of entries in Pascal's triangle (other than the number 1, which appears infinitely many times). It is clear that the only number that appears infinitely many times in Pascal's triangle is 1, because any other number x can appear only within the first x + 1 rows of the triangle. Statement: Let N(a) be the number of times the number a > 1 appears in Pascal's triangle. In big O notation, the conjecture is: N(a) = O(1). Known bound: Singmaster (1971) showed that N(a) = O(log a). Abbott, Erdős, and Hanson (1974) refined the estimate to N(a) = O(log a / log log a). Known bound: The best currently known (unconditional) bound is N(a) = O((log a · log log log a) / (log log a)^3), and is due to Kane (2007). Abbott, Erdős, and Hanson note that, conditional on Cramér's conjecture on gaps between consecutive primes, N(a) = O((log a)^(2/3 + ε)) holds for every ε > 0. Singmaster (1975) showed that the Diophantine equation C(n+1, k+1) = C(n, k+2), writing C(n, k) for the binomial coefficient, has infinitely many solutions for the two variables n, k. It follows that there are infinitely many triangle entries of multiplicity at least 6: For any non-negative i, a number a with six appearances in Pascal's triangle is given by either of the above two expressions with n = F(2i+2)·F(2i+3) − 1, k = F(2i)·F(2i+3) − 1, where F(j) is the jth Fibonacci number (indexed according to the convention that F(0) = 0 and F(1) = 1). The above two expressions locate two of the appearances; two others appear symmetrically in the triangle with respect to those two; and the other two appearances are at C(a, 1) and C(a, a−1). Elementary examples: 2 appears just once; all larger positive integers appear more than once; 3, 4, 5 each appear two times; infinitely many appear exactly twice; all odd prime numbers appear two times; 6 appears three times, as do all central binomial coefficients except for 1 and 2 (it is in principle not excluded that such a coefficient would appear 5, 7 or more times, but no such example is known); all numbers of the form C(p, 2) for prime p > 3 appear four times; infinitely many appear exactly six times, including each of the following: 120 = C(120, 1) = C(120, 119) = C(16, 2) = C(16, 14) = C(10, 3) = C(10, 7); 210 = C(210, 1) = C(210, 209) = C(21, 2) = C(21, 19) = C(10, 4) = C(10, 6); 1540 = C(1540, 1) = C(1540, 1539) = C(56, 2) = C(56, 54) = C(22, 3) = C(22, 19); 7140 = C(7140, 1) = C(7140, 7139) = C(120, 2) = C(120, 118) = C(36, 3) = C(36, 33); 11628 = C(11628, 1) = C(11628, 11627) = C(153, 2) = C(153, 151) = C(19, 5) = C(19, 14); 24310 = C(24310, 1) = C(24310, 24309) = C(221, 2) = C(221, 219) = C(17, 8) = C(17, 9). The next number in Singmaster's infinite family (given in terms of Fibonacci numbers), and the next smallest number known to occur six or more times, is 61218182743304701891431482520 = C(104, 39) = C(104, 65) = C(103, 40) = C(103, 63). The smallest number to appear eight times – indeed, the only number known to appear eight times – is 3003, which is also a member of Singmaster's infinite family of numbers with multiplicity at least 6: 3003 = C(3003, 1) = C(78, 2) = C(15, 5) = C(14, 6) = C(14, 8) = C(15, 10) = C(78, 76) = C(3003, 3002). It is not known whether infinitely many numbers appear eight times, nor even whether any number other than 3003 appears eight times. The number of times n appears in Pascal's triangle is ∞, 1, 2, 2, 2, 3, 2, 2, 2, 4, 2, 2, 2, 2, 4, 2, 2, 2, 2, 3, 4, 2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 2, 
2, 2, 2, 2, 4, 2, 2, ... (sequence A003016 in the OEIS)By Abbott, Erdős, and Hanson (1974), the number of integers no larger than x that appear more than twice in Pascal's triangle is O(x1/2). Elementary examples: The smallest natural number (above 1) that appears (at least) n times in Pascal's triangle is 2, 3, 6, 10, 120, 120, 3003, 3003, ... (sequence A062527 in the OEIS)The numbers which appear at least five times in Pascal's triangle are 1, 120, 210, 1540, 3003, 7140, 11628, 24310, 61218182743304701891431482520, ... (sequence A003015 in the OEIS)Of these, the ones in Singmaster's infinite family are 1, 3003, 61218182743304701891431482520, ... (sequence A090162 in the OEIS) Open questions: It is not known whether any number appears more than eight times, nor whether any number besides 3003 appears that many times. The conjectured finite upper bound could be as small as 8, but Singmaster thought it might be 10 or 12. It is also unknown whether numbers appear exactly five or seven times.
**H.262/MPEG-2 Part 2** H.262/MPEG-2 Part 2: H.262 or MPEG-2 Part 2 (formally known as ITU-T Recommendation H.262 and ISO/IEC 13818-2, also known as MPEG-2 Video) is a video coding format standardised and jointly maintained by ITU-T Study Group 16 Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG), and developed with the involvement of many companies. It is the second part of the ISO/IEC MPEG-2 standard. The ITU-T Recommendation H.262 and ISO/IEC 13818-2 documents are identical. H.262/MPEG-2 Part 2: The standard is available for a fee from the ITU-T and ISO. MPEG-2 Video is very similar to MPEG-1, but also provides support for interlaced video (an encoding technique used in analog NTSC, PAL and SECAM television systems). MPEG-2 video is not optimized for low bit-rates (e.g., less than 1 Mbit/s), but somewhat outperforms MPEG-1 at higher bit rates (e.g., 3 Mbit/s and above), although not by a large margin unless the video is interlaced. All standards-conforming MPEG-2 Video decoders are also fully capable of playing back MPEG-1 Video streams. History: The ISO/IEC approval process was completed in November 1994. The first edition was approved in July 1995 and published by ITU-T and ISO/IEC in 1996. Didier LeGall of Bellcore chaired the development of the standard and Sakae Okubo of NTT was the ITU-T coordinator and chaired the agreements on its requirements. The technology was developed with contributions from a number of companies. Hyundai Electronics (now SK Hynix) developed the first MPEG-2 SAVI (System/Audio/Video) decoder in 1995. The majority of patents that were later asserted in a patent pool to be essential for implementing the standard came from three companies: Sony (311 patents), Thomson (198 patents) and Mitsubishi Electric (119 patents). In 1996, it was extended by two amendments to include the registration of copyright identifiers and the 4:2:2 Profile. ITU-T published these amendments in 1996 and ISO in 1997. There are also other amendments published later by ITU-T and ISO/IEC. The most recent edition of the standard was published in 2013 and incorporates all prior amendments. Video coding: Picture sampling An HDTV camera with 8-bit sampling generates a raw video stream of 25 × 1920 × 1080 × 3 = 155,520,000 bytes per second for 25 frame-per-second video (using the 4:4:4 sampling format). This stream of data must be compressed if digital TV is to fit in the bandwidth of available TV channels and if movies are to fit on DVDs. Video compression is practical because the data in pictures is often redundant in space and time. For example, the sky can be blue across the top of a picture and that blue sky can persist for frame after frame. Also, because of the way the eye works, it is possible to delete or approximate some data from video pictures with little or no noticeable degradation in image quality. Video coding: A common (and old) trick to reduce the amount of data is to separate each complete "frame" of video into two "fields" upon broadcast/encoding: the "top field", which is the odd-numbered horizontal lines, and the "bottom field", which is the even-numbered lines. Upon reception/decoding, the two fields are displayed alternately with the lines of one field interleaving between the lines of the previous field; this format is called interlaced video. The typical field rate is 50 (Europe/PAL) or 59.94 (US/NTSC) fields per second, corresponding to 25 (Europe/PAL) or 29.97 (North America/NTSC) whole frames per second. 
If the video is not interlaced, then it is called progressive scan video and each picture is a complete frame. MPEG-2 supports both options. Video coding: Digital television requires that these pictures be digitized so that they can be processed by computer hardware. Each picture element (a pixel) is then represented by one luma number and two chroma numbers. These describe the brightness and the color of the pixel (see YCbCr). Thus, each digitized picture is initially represented by three rectangular arrays of numbers. Video coding: Another common practice to reduce the amount of data to be processed is to subsample the two chroma planes (after low-pass filtering to avoid aliasing). This works because the human visual system better resolves details of brightness than details in the hue and saturation of colors. The term 4:2:2 is used for video with the chroma subsampled by a ratio of 2:1 horizontally, and 4:2:0 is used for video with the chroma subsampled by 2:1 both vertically and horizontally. Video that has luma and chroma at the same resolution is called 4:4:4. The MPEG-2 Video document considers all three sampling types, although 4:2:0 is by far the most common for consumer video, and there are no defined "profiles" of MPEG-2 for 4:4:4 video (see below for further discussion of profiles). Video coding: While the discussion below in this section generally describes MPEG-2 video compression, there are many details that are not discussed, including details involving fields, chrominance formats, responses to scene changes, special codes that label the parts of the bitstream, and other pieces of information. Aside from features for handling fields for interlaced coding, MPEG-2 Video is very similar to MPEG-1 Video (and even quite similar to the earlier H.261 standard), so the entire description below applies equally well to MPEG-1. Video coding: I-frames, P-frames, and B-frames MPEG-2 includes three basic types of coded frames: intra-coded frames (I-frames), predictive-coded frames (P-frames), and bidirectionally-predictive-coded frames (B-frames). Video coding: An I-frame is a separately-compressed version of a single uncompressed (raw) frame. The coding of an I-frame takes advantage of spatial redundancy and of the inability of the eye to detect certain changes in the image. Unlike P-frames and B-frames, I-frames do not depend on data in the preceding or the following frames, and so their coding is very similar to how a still photograph would be coded (roughly similar to JPEG picture coding). Briefly, the raw frame is divided into 8 pixel by 8 pixel blocks. The data in each block is transformed by the discrete cosine transform (DCT). The result is an 8×8 matrix of coefficients that have real number values. The transform converts spatial variations into frequency variations, but it does not change the information in the block; if the transform is computed with perfect precision, the original block can be recreated exactly by applying the inverse cosine transform (also with perfect precision). The conversion from 8-bit integers to real-valued transform coefficients actually expands the amount of data used at this stage of the processing, but the advantage of the transformation is that the image data can then be approximated by quantizing the coefficients. Many of the transform coefficients, usually the higher frequency components, will be zero after the quantization, which is basically a rounding operation. 
The penalty of this step is the loss of some subtle distinctions in brightness and color. The quantization may either be coarse or fine, as selected by the encoder. If the quantization is not too coarse and one applies the inverse transform to the matrix after it is quantized, one gets an image that looks very similar to the original image but is not quite the same. Next, the quantized coefficient matrix is itself compressed. Typically, one corner of the 8×8 array of coefficients contains only zeros after quantization is applied. By starting in the opposite corner of the matrix, then zigzagging through the matrix to combine the coefficients into a string, then substituting run-length codes for consecutive zeros in that string, and then applying Huffman coding to that result, one reduces the matrix to a smaller quantity of data. It is this entropy coded data that is broadcast or that is put on DVDs. In the receiver or the player, the whole process is reversed, enabling the receiver to reconstruct, to a close approximation, the original frame. Video coding: The processing of B-frames is similar to that of P-frames except that B-frames use the picture in a subsequent reference frame as well as the picture in a preceding reference frame. As a result, B-frames usually provide more compression than P-frames. B-frames are never reference frames in MPEG-2 Video. Typically, every 15th frame or so is made into an I-frame. P-frames and B-frames might follow an I-frame like this, IBBPBBPBBPBB(I), to form a Group Of Pictures (GOP); however, the standard is flexible about this. The encoder selects which pictures are coded as I-, P-, and B-frames. Video coding: Macroblocks P-frames provide more compression than I-frames because they take advantage of the data in a previous I-frame or P-frame – a reference frame. To generate a P-frame, the previous reference frame is reconstructed, just as it would be in a TV receiver or DVD player. The frame being compressed is divided into 16 pixel by 16 pixel macroblocks. Then, for each of those macroblocks, the reconstructed reference frame is searched to find a 16 by 16 area that closely matches the content of the macroblock being compressed. The offset is encoded as a "motion vector". Frequently, the offset is zero, but if something in the picture is moving, the offset might be something like 23 pixels to the right and 4-and-a-half pixels up. In MPEG-1 and MPEG-2, motion vector values can either represent integer offsets or half-integer offsets. The match between the two regions will often not be perfect. To correct for this, the encoder takes the difference of all corresponding pixels of the two regions, and on that macroblock difference then computes the DCT and strings of coefficient values for the four 8×8 areas in the 16×16 macroblock as described above. This "residual" is appended to the motion vector and the result sent to the receiver or stored on the DVD for each macroblock being compressed. Sometimes no suitable match is found. Then, the macroblock is treated like an I-frame macroblock. Video profiles and levels: MPEG-2 video supports a wide range of applications from mobile to high quality HD editing. For many applications, it is unrealistic and too expensive to support the entire standard. To allow such applications to support only subsets of it, the standard defines profiles and levels. A profile defines sets of features such as B-pictures, 3D video, chroma format, etc. 
The level limits the memory and processing power needed, defining maximum bit rates, frame sizes, and frame rates. An MPEG application then specifies the capabilities in terms of profile and level. For example, a DVD player may say it supports up to main profile and main level (often written as MP@ML). This means the player can play back any MPEG stream encoded as MP@ML or less. The tables below summarize the limitations of each profile and level, though there are constraints not listed here (see Annex E of the standard). Note that not all profile and level combinations are permissible, and scalable modes modify the level restrictions. A few common MPEG-2 Profile/Level combinations are presented below, with particular maximum limits noted. Applications: Some applications are listed below. DVD-Video - a standard definition consumer video format. Uses 4:2:0 color subsampling and variable video data rate up to 9.8 Mbit/s. MPEG IMX - a standard definition professional video recording format. Uses intraframe compression, 4:2:2 color subsampling and user-selectable constant video data rate of 30, 40 or 50 Mbit/s. HDV - a tape-based high definition video recording format. Uses 4:2:0 color subsampling and 19.4 or 25 Mbit/s total data rate. Applications: XDCAM - a family of tapeless video recording formats, which, in particular, includes formats based on MPEG-2 Part 2. These are: standard definition MPEG IMX (see above), high definition MPEG HD, high definition MPEG HD422. MPEG IMX and MPEG HD422 employ 4:2:2 color subsampling, MPEG HD employs 4:2:0 color subsampling. Most subformats use selectable constant video data rate from 25 to 50 Mbit/s, although there is also a variable bitrate mode with maximum 18 Mbit/s data rate. Applications: XF Codec - a professional tapeless video recording format, similar to MPEG HD and MPEG HD422 but stored in a different container file. HD DVD - defunct high definition consumer video format. Blu-ray Disc - high definition consumer video format. Broadcast TV - in some countries MPEG-2 Part 2 is used for digital broadcast in high definition. For example, ATSC specifies both several scanning formats (480i, 480p, 720p, 1080i, 1080p) and frame/field rates at 4:2:0 color subsampling, with up to 19.4 Mbit/s data rate per channel. Digital cable TV Satellite TV Patent holders: The following organizations have held patents for MPEG-2 video technology, as listed at MPEG LA. All of these patents are now expired.
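The intra-frame pipeline described earlier (block DCT, quantization, zigzag scan, and run-length coding of zero runs) can be sketched in a few lines of numpy. This is only a toy illustration under simplifying assumptions: it uses a uniform quantizer rather than the standard's quantization matrices, omits the variable-length code tables, and the function names are ours, not part of the standard.

```python
import numpy as np

# 1-D DCT-II basis (orthonormal); the 2-D transform of a block B is T @ B @ T.T.
T = np.array([[np.sqrt((1 if u == 0 else 2) / 8) *
               np.cos((2 * x + 1) * u * np.pi / 16)
               for x in range(8)] for u in range(8)])

def code_intra_block(block, qscale=16):
    """Toy I-frame block coder: DCT, uniform quantization, zigzag scan and
    (zero-run, level) pairs. Real MPEG-2 adds quantizer matrices and VLC tables."""
    coeffs = T @ (block - 128.0) @ T.T            # forward DCT of the centred block
    q = np.rint(coeffs / qscale).astype(int)      # quantization zeroes most high-frequency terms
    zigzag = sorted(((i, j) for i in range(8) for j in range(8)),
                    key=lambda p: (p[0] + p[1],
                                   p[0] if (p[0] + p[1]) % 2 else -p[0]))
    scan = [int(q[i, j]) for i, j in zigzag]
    pairs, run = [], 0
    for level in scan:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    return q, pairs                               # trailing zeros become "end of block"

def decode_intra_block(q, qscale=16):
    """Dequantize and apply the inverse DCT (the decoder side of the toy coder)."""
    return T.T @ (q * qscale) @ T + 128.0

x = np.arange(8)
block = 100.0 + 5 * x[None, :] + 3 * x[:, None]   # a smooth 8x8 luma gradient
q, pairs = code_intra_block(block)
print(pairs)                                          # only a handful of non-zero coefficients survive
print(np.max(np.abs(decode_intra_block(q) - block)))  # the difference is just quantization error
```

For a smooth gradient block the quantized zigzag sequence collapses to a few (run, level) pairs, which is exactly the redundancy the entropy coder exploits, and the reconstruction differs from the source only by quantization error.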
**Silver halide** Silver halide: A silver halide (or silver salt) is one of the chemical compounds that can form between the element silver (Ag) and one of the halogens. In particular, bromine (Br), chlorine (Cl), iodine (I) and fluorine (F) may each combine with silver to produce silver bromide (AgBr), silver chloride (AgCl), silver iodide (AgI), and four forms of silver fluoride, respectively. As a group, they are often referred to as the silver halides, and are often given the pseudo-chemical notation AgX. Although most silver halides involve silver atoms with oxidation states of +1 (Ag+), silver halides in which the silver atoms have oxidation states of +2 (Ag2+) are known, of which silver(II) fluoride is the only known stable one. Silver halide: Silver halides are light-sensitive chemicals, and are commonly used in photographic film and paper. Applications: Light sensitivity Silver halides are used in photographic film and photographic paper, including graphic art film and paper, where silver halide crystals in gelatin are coated on to a film base, glass or paper substrate. The gelatin is a vital part of the emulsion as the protective colloid of appropriate physical and chemical properties. The gelatin may also contain trace elements (such as sulfur) which increase the light sensitivity of the emulsion, although modern practice uses gelatin without such components. When a silver halide crystal is exposed to light, a sensitivity speck on the surface of the crystal is turned into a speck of metallic silver (these comprise the invisible or latent image). If the speck of silver contains approximately four or more atoms, it is rendered developable - meaning that it can undergo development which turns the entire crystal into metallic silver. Areas of the emulsion receiving larger amounts of light (reflected from a subject being photographed, for example) undergo the greatest development and therefore result in the highest optical density. Applications: Silver bromide and silver chloride may be used separately or combined, depending on the sensitivity and tonal qualities desired in the product. Silver iodide is always combined with silver bromide or silver chloride, except in the case of some historical processes such as the collodion wet plate and daguerreotype, in which the iodide is sometimes used alone (generally regarded as necessary if a daguerreotype is to be developed by the Becquerel method, in which exposure to strong red light, which affects only the crystals bearing latent image specks, is substituted for exposure to mercury fumes). Silver fluoride is not used in photography. Applications: The mechanism of the formation of the speck of metallic silver is as follows. When absorbed by an AgX crystal, photons cause electrons to be promoted to a conduction band (a de-localized electron orbital with higher energy than a valence band). Such an electron can be attracted by a sensitivity speck, a shallow electron trap which may be a crystalline defect or a cluster of silver sulfide, gold, other trace elements (dopants), or a combination thereof, and then combine with an interstitial silver ion to form a silver metal speck. Silver halides are also used to make corrective lenses darken when exposed to ultraviolet light (see photochromism). Applications: Chemistry Silver halides, except for silver fluoride, are very insoluble in water. Silver nitrate can be used to precipitate halides; this application is useful in quantitative analysis of halides. 
The three main silver halide compounds have distinctive colours that can be used to quickly identify halide ions in a solution. The silver chloride compound forms a white precipitate, silver bromide a creamy coloured precipitate and silver iodide a yellow coloured precipitate. Applications: However, close attention is necessary for other compounds in the test solution. Some compounds can considerably increase or decrease the solubility of AgX. Examples of compounds that increase the solubility include: cyanide, thiocyanate, thiosulfate, thiourea, amines, ammonia, sulfite, thioether, crown ether. Examples of compounds that reduce the solubility include many organic thiols and nitrogen compounds that do not possess a solubilizing group other than the mercapto group or the nitrogen site, such as mercaptooxazoles, mercaptotetrazoles, especially 1-phenyl-5-mercaptotetrazole, benzimidazoles, especially 2-mercaptobenzimidazole, benzotriazole, and these compounds further substituted by hydrophobic groups. Compounds such as thiocyanate and thiosulfate enhance solubility when they are present in a sufficiently large quantity, due to formation of highly soluble complex ions, but they also significantly depress solubility when present in a very small quantity, due to formation of sparingly soluble complex ions. Applications: Archival use Silver halide can be used to deposit fine details of metallic silver on surfaces, such as film. Because of the chemical stability of metallic silver, this can be used for archival purposes. For example, the Arctic World Archive uses film developed with silver halides to store data of historical and cultural interest, such as a snapshot of the open-source code in all active GitHub repositories as of 2020. Medical technology and use Scientists from Tel Aviv University are experimenting with silver halide optical fibers for transmitting mid-infrared light from carbon dioxide lasers. The fibers allow laser welding of human tissue, as an alternative to traditional sutures.
**Wide character** Wide character: A wide character is a computer character datatype that generally has a size greater than the traditional 8-bit character. The increased datatype size allows for the use of larger coded character sets. History: During the 1960s, mainframe and mini-computer manufacturers began to standardize around the 8-bit byte as their smallest datatype. The 7-bit ASCII character set became the industry standard method for encoding alphanumeric characters for teletype machines and computer terminals. The extra bit was used for parity, to ensure the integrity of data storage and transmission. As a result, the 8-bit byte became the de facto datatype for computer systems storing ASCII characters in memory. History: Later, computer manufacturers began to make use of the spare bit to extend the ASCII character set beyond its limited set of English alphabet characters. 8-bit extensions such as IBM code page 37, PETSCII and ISO 8859 became commonplace, offering terminal support for Greek, Cyrillic, and many others. However, such extensions were still limited in that they were region-specific and often could not be used in tandem. Special conversion routines had to be used to convert from one character set to another, often resulting in destructive translation when no equivalent character existed in the target set. History: In 1989, the International Organization for Standardization began work on the Universal Character Set (UCS), a multilingual character set that could be encoded using either a 16-bit (2-byte) or 32-bit (4-byte) value. These larger values required the use of a datatype larger than 8 bits to store the new character values in memory. Thus the term wide character was used to differentiate them from traditional 8-bit character datatypes. Relation to UCS and Unicode: A wide character refers to the size of the datatype in memory. It does not state how each value in a character set is defined. Those values are instead defined using character sets, with UCS and Unicode simply being two common character sets that encode more characters than an 8-bit wide numeric value (255 total) would allow. Relation to multibyte characters: Just as earlier data transmission systems suffered from the lack of an 8-bit clean data path, modern transmission systems often lack support for 16-bit or 32-bit data paths for character data. This has led to character encoding systems such as UTF-8 that can use multiple bytes to encode a value that is too large for a single 8-bit symbol. The C standard distinguishes multibyte encodings of characters, which use a fixed or variable number of bytes to represent each character (primarily used in source code and external files), from wide characters, which are run-time representations of characters in single objects (typically, greater than 8 bits). Size of a wide character: Early adoption of UCS-2 ("Unicode 1.0") led to common use of UTF-16 in a number of platforms, most notably Microsoft Windows, .NET and Java. In these systems, it is common to have a "wide character" (wchar_t in C/C++; char in Java) type of 16 bits. These types do not always map directly to one "character", as surrogate pairs are required to store the full range of Unicode (1996, Unicode 2.0). Unix-like systems generally use a 32-bit wchar_t to fit the 21-bit Unicode code point, as C90 prescribed. The size of a wide character type does not dictate what kind of text encodings a system can process, as conversions are available. 
(Old conversion code commonly overlooks surrogates, however.) The historical circumstances of their adoption also decide which types of encoding they prefer. A system influenced by Unicode 1.0, such as Windows, tends to mainly use "wide strings" made out of wide character units. Other systems such as the Unix-likes, however, tend to retain the 8-bit "narrow string" convention, using a multibyte encoding (almost universally UTF-8) to handle "wide" characters. Programming specifics: C/C++ The C and C++ standard libraries include a number of facilities for dealing with wide characters and strings composed of them. The wide characters are defined using datatype wchar_t, which in the original C90 standard was defined as "an integral type whose range of values can represent distinct codes for all members of the largest extended character set specified among the supported locales" (ISO 9899:1990 §4.1.5). Both C and C++ introduced fixed-size character types char16_t and char32_t in the 2011 revisions of their respective standards to provide unambiguous representation of 16-bit and 32-bit Unicode transformation formats, leaving wchar_t implementation-defined. The ISO/IEC 10646:2003 Unicode standard 4.0 says that: "The width of wchar_t is compiler-specific and can be as small as 8 bits. Consequently, programs that need to be portable across any C or C++ compiler should not use wchar_t for storing Unicode text. The wchar_t type is intended for storing compiler-defined wide characters, which may be Unicode characters in some compilers." Python According to Python 2.7's documentation, the language sometimes uses wchar_t as the basis for its character type Py_UNICODE. It depends on whether wchar_t is "compatible with the chosen Python Unicode build variant" on that system. This distinction has been deprecated since Python 3.3, which introduced a flexibly sized UCS1/2/4 storage for strings and formally aliased Py_UNICODE to wchar_t.
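As a concrete illustration of the code point versus code unit distinction discussed above, the following Python snippet (illustrative only, not tied to any particular platform) shows how a single character can need several UTF-8 bytes or a UTF-16 surrogate pair, and queries the platform's wchar_t width through ctypes.

```python
import ctypes

# Code points vs. code units: '𝄞' (U+1D11E) is one character, but it needs a
# surrogate pair in UTF-16 and four bytes in UTF-8.
s = "A€𝄞"
for ch in s:
    print(f"U+{ord(ch):06X}",
          "utf-8 bytes:", len(ch.encode("utf-8")),
          "utf-16 code units:", len(ch.encode("utf-16-le")) // 2)

# Since Python 3.3, len() counts code points, so the wide-character question
# (16-bit vs. 32-bit storage units) is hidden from the programmer.
print(len(s))          # 3

# ctypes exposes the platform's wchar_t width: typically 2 bytes on Windows
# (UTF-16 units) and 4 bytes on Unix-like systems (whole code points).
print(ctypes.sizeof(ctypes.c_wchar))
```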
**Glomus cell** Glomus cell: Glomus cells are the cell type mainly located in the carotid bodies and aortic bodies. Glomus type I cells are peripheral chemoreceptors which sense the oxygen, carbon dioxide and pH levels of the blood. When there is a decrease in the blood's pH, a decrease in oxygen (pO2), or an increase in carbon dioxide (pCO2), the carotid bodies and the aortic bodies signal the dorsal respiratory group in the medulla oblongata to increase the volume and rate of breathing. The glomus cells have a high metabolic rate and good blood perfusion and thus are sensitive to changes in arterial blood gas tension. Glomus type II cells are sustentacular cells having a similar supportive function to glial cells. Structure: The signalling within the chemoreceptors is thought to be mediated by the release of neurotransmitters by the glomus cells, including dopamine, noradrenaline, acetylcholine, substance P, vasoactive intestinal peptide and enkephalins. Vasopressin has been found to inhibit the response of glomus cells to hypoxia, presumably because the usual response to hypoxia is vasodilation, which in the case of hypovolemia should be avoided. Furthermore, glomus cells are highly responsive to angiotensin II through AT1 receptors, providing information about the body's fluid and electrolyte status. Function: Glomus type I cells are chemoreceptors which monitor arterial blood for the partial pressure of oxygen (pO2), partial pressure of carbon dioxide (pCO2) and pH. Glomus type I cells are secretory sensory neurons that release neurotransmitters in response to hypoxemia (low pO2), hypercapnia (high pCO2) or acidosis (low pH). Signals are transmitted to the afferent nerve fibers of the sinus nerve; the transmitters involved may include dopamine, acetylcholine, and adenosine. This information is sent to the respiratory center and helps the brain to regulate breathing. Innervation: The glomus type I cells of the carotid body are innervated by the sensory neurons found in the inferior ganglion of the glossopharyngeal nerve. The carotid sinus nerve is the branch of the glossopharyngeal nerve which innervates them. By contrast, the glomus type I cells of the aortic body are innervated by sensory neurons found in the inferior ganglion of the vagus nerve. Centrally, the axons of neurons which innervate glomus type I cells synapse in the caudal portion of the solitary nucleus in the medulla. Glomus type II cells are not innervated. Development: Glomus type I cells are embryonically derived from the neural crest. In the carotid body the respiratory chemoreceptors need a period of time postnatally in order to reach functional maturity. This maturation period is known as resetting. At birth the chemoreceptors express a low sensitivity to lack of oxygen, but this increases over the first few days or weeks of life. The mechanisms underlying the postnatal maturation of chemotransduction are obscure. Clinical significance: Clusters of glomus cells, of which the carotid bodies and aortic bodies are the most important, are called non-chromaffin or parasympathetic paraganglia. They are also present along the vagus nerve, in the inner ears, in the lungs, and at other sites. Neoplasms of glomus cells are known as paragangliomas, among other names; they are generally non-malignant. Research: The autotransplantation of glomus cells of the carotid body into the striatum, a nucleus in the forebrain, has been investigated as a cell-based therapy for people with Parkinson's disease.
**Immirzi parameter** Immirzi parameter: The Immirzi parameter (also known as the Barbero–Immirzi parameter) is a numerical coefficient appearing in loop quantum gravity (LQG), a nonperturbative theory of quantum gravity. The Immirzi parameter measures the size of the quantum of area in Planck units. As a result, its value is currently fixed by matching the semiclassical black hole entropy, as calculated by Stephen Hawking, and the counting of microstates in loop quantum gravity. The reality conditions: The Immirzi parameter arises in the process of expressing a Lorentz connection with noncompact group SO(3,1) in terms of a complex connection with values in a compact group of rotations, either SO(3) or its double cover SU(2). Although named after Giorgio Immirzi, the possibility of including this parameter was first pointed out by Fernando Barbero. The significance of this parameter remained obscure until the spectrum of the area operator in LQG was calculated. It turns out that the area spectrum is proportional to the Immirzi parameter. Black hole thermodynamics: In the 1970s Stephen Hawking, motivated by the analogy between the law of increasing area of black hole event horizons and the second law of thermodynamics, performed a semiclassical calculation showing that black holes are in equilibrium with thermal radiation outside them, and that black hole entropy (that is, the entropy of the black hole itself, not the entropy of the radiation in equilibrium with the black hole, which is infinite) equals S = A/4 (in Planck units). In 1997, Ashtekar, Baez, Corichi and Krasnov quantized the classical phase space of the exterior of a black hole in vacuum General Relativity. They showed that the geometry of spacetime outside a black hole is described by spin networks, some of whose edges puncture the event horizon, contributing area to it, and that the quantum geometry of the horizon can be described by a U(1) Chern–Simons theory. The appearance of the group U(1) is explained by the fact that two-dimensional geometry is described in terms of the rotation group SO(2), which is isomorphic to U(1). The relationship between area and rotations is explained by Girard's theorem relating the area of a spherical triangle to its angular excess. Black hole thermodynamics: By counting the number of spin-network states corresponding to an event horizon of area A, the entropy of black holes is seen to be S = γ0A/(4γ). Here γ is the Immirzi parameter and γ0 is either ln(2)/(π√3) or ln(3)/(π√8), depending on the gauge group used in loop quantum gravity. So, by choosing the Immirzi parameter to be equal to γ0, one recovers the Bekenstein–Hawking formula. This computation appears independent of the kind of black hole, since the given Immirzi parameter is always the same. However, Krzysztof Meissner and Marcin Domagala with Jerzy Lewandowski have corrected the assumption that only the minimal values of the spin contribute. Their result involves the logarithm of a transcendental number instead of the logarithms of integers mentioned above. The Immirzi parameter appears in the denominator because the entropy counts the number of edges puncturing the event horizon and the Immirzi parameter is proportional to the area contributed by each puncture. Immirzi parameter in spin foam theory: In late 2006, independently of the isolated horizon framework, Ansari reported that in loop quantum gravity the eigenvalues of the area operator are symmetric under the ladder symmetry. 
Corresponding to each eigenvalue there are a finite number of degenerate states. One application could be that, if the classical null character of a horizon is disregarded in the quantum sector, then in the absence of an energy condition and in the presence of gravitational propagation the Immirzi parameter tunes to ln(3)/(π√8), by the use of Olaf Dreyer's conjecture for identifying the evaporation of a minimal area cell with the corresponding area of the highly damped quanta. This proposes a kinematical picture for defining a quantum horizon via spin foam models; however, the dynamics of such a model has not yet been studied. Scale-invariant theory: For scale-invariant dilatonic theories of gravity with standard model-type matter couplings, Charles Wang and co-workers show that their loop quantization leads to a conformal class of Ashtekar–Barbero connection variables using the Immirzi parameter as a conformal gauge parameter without a preferred value. Accordingly, a different choice of the value for the Immirzi parameter for such a theory merely singles out a conformal frame without changing the physical descriptions. Interpretation: The parameter may be viewed as a renormalization of Newton's constant. Various speculative proposals to explain this parameter have been suggested: for example, an argument due to Olaf Dreyer based on quasinormal modes. Another more recent interpretation is that it is a measure of the value of parity violation in quantum gravity, analogous to the theta parameter of QCD, and its positive real value is necessary for the Kodama state of loop quantum gravity. As of 2004, no alternative calculation of this constant exists. If a second match with experiment or theory (for example, the value of Newton's force at long distance) were found requiring a different value of the Immirzi parameter, it would constitute evidence that loop quantum gravity cannot reproduce the physics of general relativity at long distances. On the other hand, the Immirzi parameter seems to be the only free parameter of vacuum LQG, and once it is fixed by matching one calculation to an "experimental" result, it could in principle be used to predict other experimental results. Unfortunately, no such alternative calculations have been made so far.
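Schematically, the entropy matching described above can be summarized as follows; this is a sketch in the notation used here, with γ0 denoting the gauge-group-dependent constant quoted above, not a new calculation.

```latex
% Sketch of the black-hole entropy matching that fixes the Immirzi parameter.
\begin{align}
  S_{\mathrm{LQG}} &= \frac{\gamma_0}{\gamma}\,\frac{A}{4},
  \qquad \gamma_0 = \frac{\ln 2}{\pi\sqrt{3}} \ \ \text{or}\ \ \frac{\ln 3}{\pi\sqrt{8}}
  \quad \text{(depending on the gauge group)}, \\
  \gamma = \gamma_0 \ &\Longrightarrow\ S_{\mathrm{LQG}} = \frac{A}{4}
  \quad \text{(the Bekenstein--Hawking value).}
\end{align}
```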
**Null hypersurface** Null hypersurface: In relativity and in pseudo-Riemannian geometry, a null hypersurface is a hypersurface whose normal vector at every point is a null vector (has zero length with respect to the local metric tensor). A light cone is an example. An alternative characterization is that the tangent space at every point of a hypersurface contains a nonzero vector such that the metric applied to such a vector and any vector in the tangent space is zero. Another way of saying this is that the pullback of the metric onto the tangent space is degenerate. Null hypersurface: For a Lorentzian metric, all the vectors in such a tangent space are space-like except in one direction, in which they are null. Physically, there is exactly one lightlike worldline contained in a null hypersurface through each point that corresponds to the worldline of a particle moving at the speed of light, and no contained worldlines that are time-like. Examples of null hypersurfaces include a light cone, a Killing horizon, and the event horizon of a black hole.
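As a concrete check of the definition, the standard light-cone example in Minkowski space can be written out explicitly; the signature convention below is an assumption for illustration, not something fixed by the text above.

```latex
% Future light cone of the origin in Minkowski space, metric signature (-,+,+,+):
% ds^2 = -dt^2 + dx^2 + dy^2 + dz^2.
\begin{align}
  \Sigma &= \{\, (t,x,y,z) : t = \sqrt{x^2+y^2+z^2},\ t > 0 \,\},
  \qquad f = t^2 - x^2 - y^2 - z^2, \\
  n_\mu &= \partial_\mu f = (2t,\,-2x,\,-2y,\,-2z),
  \qquad g^{\mu\nu} n_\mu n_\nu = -4t^2 + 4(x^2+y^2+z^2) = 0 \ \text{on } \Sigma,
\end{align}
```

so the normal to the cone is everywhere null, as the definition requires.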
**Virtual Pro Wrestling** Virtual Pro Wrestling: Virtual Pro Wrestling (Japanese: バーチャル・プロレスリング) is a professional wrestling video game series developed by AKI Corporation and published by Asmik Ace exclusively in Japan. The series started in 1996 with the release of the first Virtual Pro Wrestling for the PlayStation. It was localized in the West as WCW vs. the World. Two other games in the series were released exclusively for the Nintendo 64, Virtual Pro Wrestling 64 and Virtual Pro Wrestling 2. All games in the series feature characters largely based on real-life wrestlers working for Japanese professional wrestling promotions. The series has been highly regarded for its gameplay engine, featuring weak and strong attacks and maneuvers, and the Nintendo 64 games have been popular import titles. The games served as the basis for several games published by THQ and based on the American wrestling promotions World Championship Wrestling (WCW) and the World Wrestling Federation (WWF). The first game in the series was released outside Japan as WCW vs. the World. The last two games in the series had Western counterparts in WCW vs. nWo: World Tour and WWF WrestleMania 2000. Although AKI stopped producing Virtual Pro Wrestling titles, they continued to use tweaked versions of the gameplay system in newer titles such as Def Jam Vendetta, Def Jam: Fight for NY and games based on the Ultimate Muscle franchise such as Ultimate Muscle: Legends vs. New Generation.
**S3 Texture Compression** S3 Texture Compression: S3 Texture Compression (S3TC) (sometimes also called DXTn, DXTC, or BCn) is a group of related lossy texture compression algorithms originally developed by Iourcha et al. of S3 Graphics, Ltd. for use in their Savage 3D computer graphics accelerator. The method of compression is strikingly similar to the previously published Color Cell Compression, which is in turn an adaptation of Block Truncation Coding published in the late 1970s. Unlike some image compression algorithms (e.g. JPEG), S3TC's fixed-rate data compression coupled with the single memory access (cf. Color Cell Compression and some VQ-based schemes) made it well-suited for use in compressing textures in hardware-accelerated 3D computer graphics. Its subsequent inclusion in Microsoft's DirectX 6.0 and OpenGL 1.3 (via the GL_EXT_texture_compression_s3tc extension) led to widespread adoption of the technology among hardware and software makers. While S3 Graphics is no longer a competitor in the graphics accelerator market, license fees were levied and collected for the use of S3TC technology until October 2017, for example in game consoles and graphics cards. The wide use of S3TC has led to a de facto requirement for OpenGL drivers to support it, but the patent-encumbered status of S3TC presented a major obstacle to open source implementations, while implementation approaches which tried to avoid the patented parts existed. Patent: Some (e.g. US 5956431 A) of the multiple USPTO patents on S3 Texture Compression expired on October 2, 2017. At least one continuation patent, US6,775,417, however, had a 165-day extension. This continuation patent expired on March 16, 2018. Codecs: There are five variations of the S3TC algorithm (named DXT1 through DXT5, referring to the FourCC code assigned by Microsoft to each format), each designed for specific types of image data. All convert a 4×4 block of pixels to a 64-bit or 128-bit quantity, resulting in compression ratios of 6:1 with 24-bit RGB input data or 4:1 with 32-bit RGBA input data. S3TC is a lossy compression algorithm, resulting in image quality degradation, an effect which is minimized by the ability to increase texture resolutions while maintaining the same memory requirements. Hand-drawn cartoon-like images do not compress well, nor does normal map data, both of which usually generate artifacts. ATI's 3Dc compression algorithm is a modification of DXT5 designed to overcome S3TC's shortcomings with regard to normal maps. id Software worked around the normal map compression issues in Doom 3 by moving the red component into the alpha channel before compression and moving it back during rendering in the pixel shader. Like many modern image compression algorithms, S3TC only specifies the method used to decompress images, allowing implementers to design the compression algorithm to suit their specific needs, although the patent still covers compression algorithms. The nVidia GeForce 256 through to GeForce 4 cards also used 16-bit interpolation to render DXT1 textures, which resulted in banding when unpacking textures with color gradients. Again, this created an unfavorable impression of texture compression, not related to the fundamentals of the codec itself. DXT1: DXT1 (also known as Block Compression 1 or BC1) is the smallest variation of S3TC, storing 16 input pixels in 64 bits of output, consisting of two 16-bit RGB 5:6:5 color values c0 and c1, and a 4×4 two-bit lookup table.
DXT1: If c0 > c1 (comparing these colors by interpreting them as two 16-bit unsigned numbers), then two other colors are calculated, such that for each component c2 = (2c0 + c1)/3 and c3 = (c0 + 2c1)/3. This mode operates similarly to mode 0xC0 of the original Apple Video codec. Otherwise, if c0 ≤ c1, then c2 = (c0 + c1)/2 and c3 is transparent black, corresponding to a premultiplied alpha format. This color sometimes causes a black border surrounding the transparent area when linear texture filtering and alpha testing are used, due to colors being interpolated between the color of an opaque texel and a neighbouring black transparent texel. DXT1: The lookup table is then consulted to determine the color value for each pixel, with a value of 0 corresponding to c0 and a value of 3 corresponding to c3. DXT2 and DXT3: DXT2 and DXT3 (collectively also known as Block Compression 2 or BC2) convert 16 input pixels (corresponding to a 4×4 pixel block) into 128 bits of output, consisting of 64 bits of alpha channel data (4 bits for each pixel) followed by 64 bits of color data, encoded the same way as DXT1 (with the exception that the 4-color version of the DXT1 algorithm is always used, instead of deciding which version to use based on the relative values of c0 and c1). DXT2 and DXT3: In DXT2, the color data is interpreted as being premultiplied by alpha; in DXT3 it is interpreted as not having been premultiplied by alpha. Typically DXT2/3 are well suited to images with sharp alpha transitions between translucent and opaque areas. DXT4 and DXT5: DXT4 and DXT5 (collectively also known as Block Compression 3 or BC3) convert 16 input pixels into 128 bits of output, consisting of 64 bits of alpha channel data (two 8-bit alpha values and a 4×4 3-bit lookup table) followed by 64 bits of color data (encoded the same way as DXT1). DXT4 and DXT5: If α0 > α1, then six other alpha values are calculated, such that α2 = (6α0 + 1α1)/7, α3 = (5α0 + 2α1)/7, α4 = (4α0 + 3α1)/7, α5 = (3α0 + 4α1)/7, α6 = (2α0 + 5α1)/7, and α7 = (1α0 + 6α1)/7. Otherwise, if α0 ≤ α1, four other alpha values are calculated such that α2 = (4α0 + 1α1)/5, α3 = (3α0 + 2α1)/5, α4 = (2α0 + 3α1)/5, and α5 = (1α0 + 4α1)/5, with α6 = 0 and α7 = 255. The lookup table is then consulted to determine the alpha value for each pixel, with a value of 0 corresponding to α0 and a value of 7 corresponding to α7. DXT4's color data is premultiplied by alpha, whereas DXT5's is not. Because DXT4/5 use an interpolated alpha scheme, they generally produce superior results for alpha (transparency) gradients compared to DXT2/3. Further variants: BC4 and BC5 BC4 and BC5 (Block Compression 4 and 5) were added in Direct3D 10. They reuse the alpha channel encoding found in DXT4/5 (BC3). BC4 stores 16 input single-channel (e.g. greyscale) pixels into 64 bits of output, encoded in nearly the same way as BC3 alphas. The expanded palette provides higher quality. BC5 stores 16 input double-channel (e.g.
tangent space normal map) pixels into 128 bits of output, consisting of two halves each encoded like BC4. BC6H and BC7 BC6H (sometimes BC6) and BC7 (Block Compression 6H and 7) were added in Direct3D 11. BC6H encodes 16 input RGB HDR (float16) pixels into 128 bits of output. It essentially treats float16 as a 16-bit sign-magnitude integer value and interpolates such integers linearly. It works well for blocks without sign changes. A total of 14 modes are defined, though most differ minimally: only two prediction modes are really used. Further variants: BC7 encodes 16 input RGB8/RGBA8 pixels into 128 bits of output. It can be understood as a much-enhanced BC3. BC6H and BC7 have a much more complex algorithm with a selection of encoding modes. The quality is much better as a result. These two modes are also specified much more exactly, with ranges of accepted deviation. Earlier BCn modes decode slightly differently among GPU vendors. Data preconditioning: BCn textures can be further compressed for on-disk storage and distribution. An application would decompress this extra layer and send the BCn data to the GPU as usual. Data preconditioning: BCn can be combined with Oodle Texture, a lossy preprocessor that modifies the input texture so that the BCn output is more easily compressed by an LZ77 compressor (rate-distortion optimization). BC7 specifically can also use "bc7prep", a lossless pass to re-encode the texture in a more compressible form (requiring its inverse at decompression). crunch is another tool that performs RDO and optionally further re-encoding. In 2021, Microsoft produced a "BCPack" compression algorithm specifically for BCn-compressed textures. Xbox Series X and S have hardware support for decompressing BCPack streams.
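To make the DXT1/BC1 color-block format described above more concrete, here is a minimal decoding sketch in Python. It follows the rules given above (4-color mode when c0 > c1, 3-color mode plus transparent black otherwise); the 5:6:5 channel expansion and the LSB-first 2-bit index ordering are standard details of the on-disk format that are assumed here rather than taken from the text.

```python
import struct

def _expand565(v):
    """Expand a packed RGB 5:6:5 value to an (r, g, b) tuple of 8-bit values."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_dxt1_block(block):
    """Decode one 8-byte DXT1/BC1 block into a 4x4 grid of RGBA tuples."""
    c0_raw, c1_raw, bits = struct.unpack("<HHI", block)   # c0, c1, 32 index bits
    c0, c1 = _expand565(c0_raw), _expand565(c1_raw)
    if c0_raw > c1_raw:
        # 4-color mode: two interpolated colors at 1/3 and 2/3.
        c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1))
        c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))
        palette = [c0 + (255,), c1 + (255,), c2 + (255,), c3 + (255,)]
    else:
        # 3-color mode: midpoint color plus transparent black.
        c2 = tuple((a + b) // 2 for a, b in zip(c0, c1))
        palette = [c0 + (255,), c1 + (255,), c2 + (255,), (0, 0, 0, 0)]
    pixels = [[None] * 4 for _ in range(4)]
    for i in range(16):                      # 2 bits per pixel, LSB-first, row-major
        idx = (bits >> (2 * i)) & 0x3
        pixels[i // 4][i % 4] = palette[idx]
    return pixels

# Example block: c0 = pure red, c1 = pure blue, each row indexed 0,1,2,3
# so every row is the full 4-entry palette gradient.
example = struct.pack("<HHI", 0xF800, 0x001F, 0xE4E4E4E4)
for row in decode_dxt1_block(example):
    print(row)
```

Running the sketch prints four rows of RGBA tuples drawn from the four-entry palette, which is exactly the per-block lookup behaviour described above.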
**Nuclear timescale** Nuclear timescale: In astrophysics, the nuclear timescale is an estimate of the lifetime of a star based solely on its rate of fuel consumption. Along with the thermal and free-fall (also known as dynamical) time scales, it is used to estimate the length of time a particular star will remain in a certain phase of its life and its lifespan if hypothetical conditions are met. In reality, the lifespan of a star is greater than what is estimated by the nuclear time scale because as one fuel becomes scarce, another will generally take its place: hydrogen burning gives way to helium burning, etc. However, all the phases after hydrogen burning combined typically add up to less than 10% of the duration of hydrogen burning. Stellar astrophysics: Hydrogen generally determines a star's nuclear lifetime because it is used as the main source of fuel in a main sequence star. Hydrogen becomes helium in the nuclear reaction that takes place within stars; when the hydrogen has been exhausted, the star moves on to another phase of its life and begins burning the helium. The nuclear timescale is τ_nuc ≈ (total mass of fuel available)/(rate of fuel consumption) × (fraction of the star over which the fuel is burned) = (M X Q / L) × F, where M is the mass of the star, X is the fraction of the star (by mass) that is composed of the fuel, L is the star's luminosity, Q is the energy released per mass of the fuel from nuclear fusion (the nuclear reaction should be examined to get this value), and F is the fraction of the star where the fuel is burned (F is generally equal to 0.1 or so). As an example, the Sun's nuclear time scale is approximately 10 billion years.
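As a rough check of the figure quoted above, the formula can be evaluated with round solar numbers. The specific values of Q and F below are common textbook assumptions rather than numbers taken from this article.

```python
# Rough nuclear-timescale estimate for the Sun using tau = M * X * Q * F / L.
M = 2.0e30   # solar mass, kg (rounded)
L = 3.8e26   # solar luminosity, W (rounded)
X = 0.7      # hydrogen mass fraction (assumed)
Q = 6.3e14   # energy released per kg of hydrogen fused, J/kg (~0.7% of m*c^2, assumed)
F = 0.1      # fraction of the star in which hydrogen is actually burned (assumed)

tau_seconds = M * X * Q * F / L
tau_years = tau_seconds / 3.15e7          # seconds per year
print(f"nuclear timescale ~ {tau_years:.1e} years")   # on the order of 10^10 years
```

With these inputs the estimate comes out at roughly 7–10 billion years, consistent with the value quoted above.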
**Bayesian structural time series** Bayesian structural time series: The Bayesian structural time series (BSTS) model is a statistical technique used for feature selection, time series forecasting, nowcasting, inferring causal impact and other applications. The model is designed to work with time series data. Bayesian structural time series: The model also has promising applications in the field of analytical marketing. In particular, it can be used in order to assess how much different marketing campaigns have contributed to the change in web search volumes, product sales, brand popularity and other relevant indicators. Difference-in-differences models and interrupted time series designs are alternatives to this approach. "In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls." General model description: The model consists of three main components: a Kalman filter, the technique used for time series decomposition (in this step, a researcher can add different state variables: trend, seasonality, regression, and others); and the spike-and-slab method, in which the most important regression predictors are selected. General model description: The third component is Bayesian model averaging, which combines the results and computes the prediction. The model can be used to infer causal relationships by comparing its counterfactual predictions with the observed data. A possible drawback of the model can be its relatively complicated mathematical underpinning and difficult implementation as a computer program. However, the programming language R has ready-to-use packages for calculating the BSTS model, which do not require a strong mathematical background from the researcher.
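As a sketch of the first component listed above, the snippet below implements a Kalman filter for the simplest structural model, the local level (random-walk-plus-noise) model, in plain Python/NumPy. It illustrates only the decomposition step; the full BSTS approach adds spike-and-slab variable selection and Bayesian model averaging (the R package bsts is a reference implementation), and the variance parameters here are fixed by hand for illustration rather than estimated.

```python
import numpy as np

def local_level_kalman(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local-level model:
       y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t."""
    a, p = a0, p0                     # prior mean/variance of the level
    filtered = np.empty(len(y))
    for t, obs in enumerate(y):
        f = p + sigma_eps2            # innovation (one-step prediction) variance
        k = p / f                     # Kalman gain
        a = a + k * (obs - a)         # filtered level estimate
        p = p * (1.0 - k)             # filtered level variance
        filtered[t] = a
        p = p + sigma_eta2            # predict next period's level variance
    return filtered

# Toy data: a slowly drifting level observed with noise.
rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0.0, 0.1, 300))
y = level + rng.normal(0.0, 1.0, 300)
smoothed = local_level_kalman(y, sigma_eps2=1.0, sigma_eta2=0.01)
print(smoothed[-5:])
```

The filtered series tracks the latent level; in a full BSTS model the same recursion runs over a richer state vector (trend, seasonality, regression coefficients) with priors on the variances.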
**Dosimeter** Dosimeter: A radiation dosimeter is a device that measures dose uptake of external ionizing radiation. It is worn by the person being monitored when used as a personal dosimeter, and is a record of the radiation dose received. Modern electronic personal dosimeters can give a continuous readout of cumulative dose and current dose rate, and can warn the wearer with an audible alarm when a specified dose rate or a cumulative dose is exceeded. Other dosimeters, such as thermoluminescent or film types, require processing after use to reveal the cumulative dose received, and cannot give a current indication of dose while being worn. Personal dosimeters: The personal ionising radiation dosimeter is of fundamental importance in the disciplines of radiation dosimetry and radiation health physics and is primarily used to estimate the radiation dose deposited in an individual wearing the device. Personal dosimeters: Ionising radiation damage to the human body is cumulative, and is related to the total dose received, for which the SI unit is the sievert. Radiographers, nuclear power plant workers, doctors using radiotherapy, HAZMAT workers, and other people in situations that involve handling radionuclides are often required to wear dosimeters so a record of occupational exposure can be made. Such devices are known as "legal dosimeters" if they have been approved for use in recording personnel dose for regulatory purposes. Personal dosimeters: Dosimeters are typically worn on the outside of clothing; a "whole body" dosimeter is worn on the chest or torso to represent dose to the whole body. This location monitors exposure of most vital organs and represents the bulk of body mass. Additional dosimeters can be worn to assess dose to extremities or in radiation fields that vary considerably depending on orientation of the body to the source. Personal dosimeters: Modern types The electronic personal dosimeter, the most commonly used type, is an electronic device that has a number of sophisticated functions, such as continual monitoring, which allows alarm warnings at preset levels and a live readout of accumulated dose. These are especially useful in high dose areas where residence time of the wearer is limited due to dose constraints. The dosimeter can be reset, usually after taking a reading for record purposes, and thereby re-used multiple times. Personal dosimeters: MOSFET dosimeter Metal–oxide–semiconductor field-effect transistor dosimeters are now used as clinical dosimeters for radiotherapy radiation beams. The main advantages of MOSFET devices are: 1. The MOSFET dosimeter is direct reading with a very thin active area (less than 2 μm). 2. The physical size of the MOSFET when packaged is less than 4 mm. 3. The post-radiation signal is permanently stored and is dose rate independent. Personal dosimeters: The gate oxide of the MOSFET, which is conventionally silicon dioxide, is the active sensing material in MOSFET dosimeters. Radiation creates defects (acting like electron-hole pairs) in the oxide, which in turn affect the threshold voltage of the MOSFET. This change in threshold voltage is proportional to the radiation dose. Alternative high-k gate dielectrics such as hafnium dioxide and aluminum oxide have also been proposed as radiation dosimeters. Personal dosimeters: Thermoluminescent dosimeter A thermoluminescent dosimeter measures ionizing radiation exposure by measuring the intensity of light emitted from a Dy or B doped crystal in the detector when heated.
The intensity of light emitted is dependent upon the radiation exposure. These were once sold as surplus, and one format once used by submariners and nuclear workers resembled a dark green wristwatch containing the active components and a highly sensitive IR wire-ended diode mounted to the doped LiF glass chip; when the assembly is precisely heated (hence thermoluminescent), it emits the stored energy as narrow-band infrared light until it is depleted. The main advantage is that the chip records dosage passively until exposed to light or heat, so even a used sample kept in darkness can provide valuable scientific data. Personal dosimeters: Legacy type Film badge dosimeter Film badge dosimeters are for one-time use only. The level of radiation absorption is indicated by a change to the film emulsion, which is shown when the film is developed. They are now mostly superseded by electronic personal dosimeters and thermoluminescent dosimeters. Personal dosimeters: Quartz fiber dosimeter These use the property of a quartz fiber to measure the static electricity held on the fiber. Before use by the wearer, a dosimeter is charged to a high voltage, causing the fiber to deflect due to electrostatic repulsion. As the gas in the dosimeter chamber becomes ionized by radiation the charge leaks away, causing the fiber to straighten and thereby indicate the amount of dose received against a graduated scale, which is viewed by a small in-built microscope. Personal dosimeters: They are only used for short durations, such as a day or a shift, as they can suffer from charge leakage, which gives a false high reading. However, they are immune to EMP, so they were used during the Cold War as a failsafe method of determining radiation exposure. They are now largely superseded by electronic personal dosimeters for short term monitoring. Personal dosimeters: Geiger tube dosimeter These use a conventional Geiger–Muller tube, typically a ZP1301 or similar energy-compensated tube, requiring between 600 and 700 V and pulse detection components. The display on most is a bubble or miniature LCD type with 4 digits and a discrete counter integrated chip such as a 74C925/6. LED units usually have a button to turn the display on and off for longer battery life, and an infrared emitter for count verification and calibration. Personal dosimeters: The voltage is derived from a separate pinned or wire-ended module that often uses a unijunction transistor driving a small step-up coil and multiplier stage. While expensive, it is reliable over time and especially in high-radiation environments, sharing this trait with tunnel diodes, though the encapsulants, inductors and capacitors have been known to break down internally over time. Personal dosimeters: These have the disadvantage that the stored dose in becquerels or microsieverts is volatile and vanishes if the power supply is disconnected, though there can be a low-leakage capacitor to preserve the memory for short periods without a battery. Because of this, most units use long-life batteries and high-quality contacts. Recently designed units log dose over time to non-volatile memory, such as a 24C256 chip, so it may be read out via a serial port. Dosimetry dose quantities: The operational quantity for personal dosimetry is the personal dose equivalent, which is defined by the International Commission on Radiological Protection as the dose equivalent in soft tissue at an appropriate depth, below a specified point on the human body.
The specified point is usually given by the position where the individual's dosimeter is worn. Dosimetry dose quantities: Instrument and dosimeter response This is the actual reading obtained from, for example, an ambient dose gamma monitor or a personal dosimeter. The dosimeter is calibrated in a known radiation field to ensure display of accurate operational quantities and to allow a relationship to known health effects. The personal dose equivalent is used to assess dose uptake, and allow regulatory limits to be met. It is the figure usually entered into the records of external dose for occupational radiation workers. Dosimetry dose quantities: The dosimeter plays an important role within the international radiation protection system developed by the International Commission on Radiological Protection and the International Commission on Radiation Units and Measurements. This is shown in the accompanying diagram. Dosimeter calibration The "slab" phantom is used to represent the human torso for calibration of whole body dosimeters. This replicates the radiation scattering and absorption effects of the human torso. The International Atomic Energy Agency states "The slab phantom is 300 mm × 300 mm × 150 mm depth to represent the human torso". Process irradiation verification: Manufacturing processes that treat products with ionizing radiation, such as food irradiation, use dosimeters to calibrate doses deposited in the matter being irradiated. These usually must have a greater dose range than personal dosimeters, and doses are normally measured in the unit of absorbed dose: the gray (Gy). The dosimeter is located on or adjacent to the items being irradiated during the process as a validation of dose levels received.
**Lists of graphics cards** Lists of graphics cards: Lists of graphics cards follow. A graphics card, or graphics processing unit, is a specialized electronic circuit that rapidly manipulates and alters memory to build images in a frame buffer for output to a display. By manufacturer, they include: List of AMD graphics processing units Intel Graphics Technology List of Nvidia graphics processing units
**Degenerative lumbosacral stenosis** Degenerative lumbosacral stenosis: Degenerative lumbosacral stenosis (DLSS), also known as cauda equina syndrome, is a pathologic degeneration of the lumbosacral disk in dogs, affecting the articulation, nerve progression, tissue and joint connections of the disk. This degeneration causes compressions of soft tissues and nerve roots in the most caudal area of the medulla, causing neuropathic pain in the lumbar vertebrae. Signs and symptoms: DLSS has been found to affect dogs between the ages of 7 and 8, with males affected roughly twice as often as females in the research literature. Medium to large-sized working breeds with high rates of activity are most often affected by this disease, the German Shepherd being the breed most commonly diagnosed with DLSS. Common symptoms in dogs are physical difficulties in normal daily activities, such as: Mild to severe pain when walking (dragged hind limbs). Signs and symptoms: Discomfort when ascending or descending stairs. Lumbar disturbances when resting or lying down. Unwillingness to perform exercise. Signs and symptoms: Urinary and defecation discomfort. Behavioural problems will also be present in dogs affected by DLSS, due to the pain they suffer in their lower back. Research has shown a positive correlation between a dog's behavioural disturbances and the number of lumbar vertebrae affected by the disease: behavioural disturbances are more likely to appear in dogs that have three or more affected vertebrae. Symptoms such as anxiety, sudden loss of appetite, or mild aggressiveness when performing physical activities can become clear signs of this disease. Research: DLSS is associated with behavioural problems depending on how much the disease affects the dog; in other words, the more tissue and bone that is affected by DLSS, the more reluctant the dog will be to perform any kind of physical activity. Much of the research on this disease comes from the military, since dogs that take part in special forces (the German Shepherd, Dutch Shepherd, Labrador Retriever and Belgian Malinois being the most commonly used breeds) are widely studied as they progress through their highly active working lives. Those affected by DLSS, generally diagnosed in their retirement period, show markedly decreased activity when performing demanding tasks that require physical stress, thus becoming crucial exemplars for lumbar diseases. Diagnosis: DLSS is commonly identified through magnetic resonance imaging (MRI) or computed tomography (CT) due to their precision in recognising abnormalities in soft tissue and small bone structures. Treatment: Medical treatment is necessary to correct this lumbar disease, generally varying from non-steroidal drugs for pain management (such as tramadol and gabapentin) to surgical correction, surgery being the most effective. Dorsal laminectomy is the most common procedure for DLSS treatment, which involves the decompression and reduction of inflammation of soft tissues and nerve roots. Surgical fusion of the lumbosacral vertebrae has also been found to benefit affected dogs, since it reduces motion by eliminating certain nerve compressions located in the vertebral canal.
Specific facetectomy (removal of part of the facet joint) can also be performed in order to maintain stability in the affected joint tissue. Alternative conservative or non-surgical treatment is also a convenient option for dogs that have not fully developed degenerative lumbosacral stenosis, ranging from regular walks to underwater exercises that help decompress the affected lumbar vertebrae and tone the corresponding muscles. Statistically, physiotherapy has a success rate of 79% in all affected patients. If there is no surgical intervention, oral tramadol and, alternatively, gabapentin have been shown to decrease the neuropathic pain dogs suffer when affected by the disease.
**Idiographic image** Idiographic image: In the field of clinical human sciences, an idiographic image is the representation of a result obtained through a study or research method whose subject matter is specific cases, i.e. a portrayal which avoids nomothetic generalizations. Diagnostic formulation follows an idiographic criterion, while diagnostic classification follows a nomothetic criterion. In the fields of psychiatry, psychology and clinical psychopathology, the idiographic criterion is a method (also called the historical method) which involves evaluating past experiences and selecting and comparing information about a specific individual or event. An example of an idiographic image is a report, diagram or health history showing medical, psychological and pathological features which make the subject under examination unique. Where there is no prior detailed presentation of clinical data, the summary should present sufficient relevant information to support the diagnostic and aetiological components of the formulation. The term diagnostic formulation is preferable to diagnosis, because it emphasises that matters of clinical concern about which the clinician proposes aetiological hypotheses and targets of intervention include much more than just diagnostic category assignment, though this is usually an important component. Idiographic image: The expression idiographic image appeared for the first time in 1996 in the SESAMO research method manual. The term was coined to indicate that the test report provided an anamnestic account containing a family, relational and health history of the subject, and provided semiological data regarding both the psychosexual and the social-affective profile. These profiles were useful to the clinician in order to formulate pathogenetic and pathognomonic hypotheses.
**Electron capture** Electron capture: Electron capture (K-electron capture, also K-capture, or L-electron capture, L-capture) is a process in which the proton-rich nucleus of an electrically neutral atom absorbs an inner atomic electron, usually from the K or L electron shells. This process thereby changes a nuclear proton to a neutron and simultaneously causes the emission of an electron neutrino. p + e− → n + νe, or, when written as a nuclear reaction equation, $^{\;\,0}_{-1}\mathrm{e} + {}^{1}_{1}\mathrm{p} \rightarrow {}^{1}_{0}\mathrm{n} + {}^{0}_{0}\nu_{e}$. Since this single emitted neutrino carries the entire decay energy, it is emitted with a single characteristic energy. Similarly, the momentum of the neutrino emission causes the daughter atom to recoil with a single characteristic momentum. Electron capture: The resulting daughter nuclide, if it is in an excited state, then transitions to its ground state. Usually, a gamma ray is emitted during this transition, but nuclear de-excitation may also take place by internal conversion. Electron capture: Following capture of an inner electron from the atom, an outer electron replaces the electron that was captured, and one or more characteristic X-ray photons are emitted in this process. Electron capture sometimes also results in the Auger effect, where an electron is ejected from the atom's electron shell due to interactions between the atom's electrons in the process of seeking a lower energy electron state. Electron capture: Following electron capture, the atomic number is reduced by one, the neutron number is increased by one, and there is no change in mass number. Simple electron capture by itself results in a neutral atom, since the loss of the electron in the electron shell is balanced by a loss of positive nuclear charge. However, a positive atomic ion may result from further Auger electron emission. Electron capture: Electron capture is an example of the weak interaction, one of the four fundamental forces. Electron capture: Electron capture is the primary decay mode for isotopes with a relative superabundance of protons in the nucleus, but with insufficient energy difference between the isotope and its prospective daughter (the isobar with one less positive charge) for the nuclide to decay by emitting a positron. Electron capture is always an alternative decay mode for radioactive isotopes that do have sufficient energy to decay by positron emission. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In nuclear physics, beta decay is a type of radioactive decay in which a beta ray (fast energetic electron or positron) and a neutrino are emitted from an atomic nucleus. Electron capture is sometimes called inverse beta decay, though this term usually refers to the interaction of an electron antineutrino with a proton. If the energy difference between the parent atom and the daughter atom is less than 1.022 MeV (twice the electron rest energy), positron emission is forbidden as not enough decay energy is available to allow it, and thus electron capture is the sole decay mode. For example, rubidium-83 (37 protons, 46 neutrons) will decay to krypton-83 (36 protons, 47 neutrons) solely by electron capture (the energy difference, or decay energy, is about 0.9 MeV). History: The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed by Luis Alvarez, in vanadium, 48V, which he reported in 1937.
Alvarez went on to study electron capture in gallium (67Ga) and other nuclides. Reaction details: The electron that is captured is one of the atom's own electrons, and not a new, incoming electron, as might be suggested by the way the above reactions are written. A few examples of electron capture are: Radioactive isotopes that decay by pure electron capture can be inhibited from radioactive decay if they are fully ionized ("stripped" is sometimes used to describe such ions). It is hypothesized that such elements, if formed by the r-process in exploding supernovae, are ejected fully ionized and so do not undergo radioactive decay as long as they do not encounter electrons in outer space. Anomalies in elemental distributions are thought to be partly a result of this effect on electron capture. Inverse decays can also be induced by full ionisation; for instance, 163Ho decays into 163Dy by electron capture; however, a fully ionised 163Dy decays into a bound state of 163Ho by the process of bound-state β− decay. Chemical bonds can also affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. For example, in 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments. This relatively large effect is due to the fact that beryllium is a small atom that employs valence electrons that are close to the nucleus, and also in orbitals with no orbital angular momentum. Electrons in s orbitals (regardless of shell or primary quantum number) have a probability antinode at the nucleus, and are thus far more subject to electron capture than p or d electrons, which have a probability node at the nucleus. Around the elements in the middle of the periodic table, isotopes that are lighter than stable isotopes of the same element tend to decay through electron capture, while isotopes heavier than the stable ones decay by electron emission. Electron capture happens most often in the heavier neutron-deficient elements where the mass change is smallest and positron emission is not always possible. When the loss of mass in a nuclear reaction is greater than zero but less than $2m_{e}c^{2}$, the process cannot occur by positron emission but occurs spontaneously by electron capture. Common examples: Some common radionuclides that decay solely by electron capture include: For a full list, see the table of nuclides.
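The competition between electron capture and positron emission noted above comes down to a simple mass-energy bookkeeping in terms of atomic masses; this is the standard textbook relation rather than a calculation specific to this article.

```latex
% Decay energies in terms of atomic masses M(A,Z), with m_e c^2 = 0.511 MeV.
\begin{align}
  Q_{\mathrm{EC}}   &= \left[ M(A,Z) - M(A,Z-1) \right] c^2, \\
  Q_{\beta^{+}}     &= \left[ M(A,Z) - M(A,Z-1) \right] c^2 - 2 m_e c^2
                     = Q_{\mathrm{EC}} - 1.022\ \mathrm{MeV}.
\end{align}
% For 0 < Q_EC < 1.022 MeV (e.g. ^{83}Rb -> ^{83}Kr, Q_EC ~ 0.9 MeV),
% only electron capture is energetically allowed.
```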
**Assumptive mood** Assumptive mood: The assumptive mood (abbreviated ASS) is an epistemic grammatical mood found in some languages, which indicates that the statement is assumed to be true, because it usually is under similar circumstances, although there may not be any specific evidence that it is true in this particular case. An English example (although assumptive mood is not specially marked in English), would be, "That must be my mother. (She always comes at this time.)" Another example in English, using a different modal verb, would be, "He should be a good worker. (He has 15 years of prior experience.)"
**Walker–Warburg syndrome** Walker–Warburg syndrome: Walker–Warburg syndrome (WWS), also called Warburg syndrome, Chemke syndrome, HARD syndrome (Hydrocephalus, Agyria and Retinal Dysplasia), Pagon syndrome, cerebroocular dysgenesis (COD) or cerebroocular dysplasia-muscular dystrophy syndrome (COD-MD), is a rare form of autosomal recessive congenital muscular dystrophy. It is associated with brain (lissencephaly, hydrocephalus, cerebellar malformations) and eye abnormalities. This condition has a worldwide distribution. Walker–Warburg syndrome is estimated to affect 1 in 60,500 newborns worldwide. Presentation: The clinical manifestations present at birth are generalized hypotonia, muscle weakness, developmental delay with intellectual disability and occasional seizures. The congenital muscular dystrophy is characterized by hypoglycosylation of α-dystroglycan. Presentation: Those born with the disease also experience severe ocular and brain defects. Half of all children with WWS are born with encephalocele, which is a gap in the skull that will not seal. The meninges of the brain protrude through this gap due to the neural tube failing to close during development. A malformation of a baby's cerebellum is often a sign of this disease. Common ocular issues associated with WWS are abnormally small eyes and retinal abnormalities caused by an underdeveloped light-sensitive area in the back of the eye. Genetics: Several genes have been implicated in the etiology of Walker–Warburg syndrome, and others are as yet unknown. Several mutations were found in the protein O-mannosyltransferase genes POMT1 and POMT2, and one mutation was found in each of the fukutin and fukutin-related protein genes. Another gene that has been linked to this condition is Beta-1,3-N-acetylgalactosaminyltransferase 2 (B3GALNT2). Diagnosis: Laboratory investigations usually show elevated creatine kinase, myopathic/dystrophic muscle pathology and altered α-dystroglycan. Antenatal diagnosis is possible in families with known mutations. Prenatal ultrasound may be helpful for diagnosis in families where the molecular defect is unknown. Prognosis: No specific treatment is available. Management is only supportive and preventive. Those who are diagnosed with the disease often die within the first few months of life. Almost all children with the disease die by the age of three. Eponym: WWS is named for Arthur Earl Walker and Mette Warburg (1926-2015), a Danish ophthalmologist. Its alternative names include Chemke's syndrome and Pagon's syndrome, named after Juan M. Chemke and Roberta A. Pagon.
**Minimal subtraction scheme** Minimal subtraction scheme: In quantum field theory, the minimal subtraction scheme, or MS scheme, is a particular renormalization scheme used to absorb the infinities that arise in perturbative calculations beyond leading order, introduced independently by Gerard 't Hooft and Steven Weinberg in 1973. The MS scheme consists of absorbing only the divergent part of the radiative corrections into the counterterms. Minimal subtraction scheme: In the similar and more widely used modified minimal subtraction, or MS-bar scheme ($\overline{\mathrm{MS}}$), one absorbs the divergent part plus a universal constant that always arises along with the divergence in Feynman diagram calculations into the counterterms. When using dimensional regularization, i.e. $d^{4}p \to \mu^{4-d}\,d^{d}p$, it is implemented by rescaling the renormalization scale: $\mu^{2} \to \mu^{2}\,\dfrac{e^{\gamma_{E}}}{4\pi}$, with $\gamma_{E}$ the Euler–Mascheroni constant.
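To make the distinction concrete, a typical one-loop divergence in dimensional regularization (with d = 4 − 2ε) has the schematic form below; this is the generic textbook pattern, not a calculation taken from this article.

```latex
% Schematic one-loop pole structure in dimensional regularization, d = 4 - 2*epsilon.
\begin{align}
  I(\epsilon) &\sim \frac{1}{\epsilon} - \gamma_E + \ln 4\pi
                 + \ln\frac{\mu^2}{m^2} + \text{finite}, \\
  \text{MS:}\quad & \text{the counterterm removes only } \frac{1}{\epsilon}, \\
  \overline{\text{MS}}:\quad & \text{the counterterm removes }
    \frac{1}{\epsilon} - \gamma_E + \ln 4\pi
    \;\;\Longleftrightarrow\;\; \mu^2 \to \mu^2\,\frac{e^{\gamma_E}}{4\pi}.
\end{align}
```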
**Cognitive map** Cognitive map: A cognitive map is a type of mental representation which helps an individual to acquire, code, store, recall, and decode information about the relative locations and attributes of phenomena in their everyday or metaphorical spatial environment. The concept was introduced by Edward Tolman in 1948. He tried to explain the behavior of rats that appeared to learn the spatial layout of a maze, and subsequently the concept was applied to other animals, including humans. The term was later generalized by some researchers, especially in the field of operations research, to refer to a kind of semantic network representing an individual's personal knowledge or schemas. Overview: Cognitive maps have been studied in various fields, such as psychology, education, archaeology, planning, geography, cartography, architecture, landscape architecture, urban planning, management and history. Because of the broad use and study of cognitive maps, the term has become a colloquialism for almost any mental representation or model. As a consequence, these mental models are often referred to, variously, as cognitive maps, mental maps, scripts, schemata, and frames of reference. Overview: Cognitive maps are a function of the working brain that humans and animals use for movement in a new environment. They help us in recognizing places, computing directions and distances, and in thinking critically about shortcuts. They support us in wayfinding in an environment, and act as blueprints for new technology. Overview: Cognitive maps serve the construction and accumulation of spatial knowledge, allowing the "mind's eye" to visualize images in order to reduce cognitive load and enhance recall and learning of information. This type of spatial thinking can also be used as a metaphor for non-spatial tasks, where people performing non-spatial tasks involving memory and imaging use spatial knowledge to aid in processing the task. They include information about the spatial relations that objects have among each other in an environment and they help us in orienting and moving in a setting and in space. Overview: They are internal representations: not a fixed image, but a schema, dynamic and flexible, shaped to a degree by the individual. A spatial map needs to be acquired according to a frame of reference. Because it is independent of the observer's point of view, it is based on an allocentric reference system, with object-to-object relations. It codes configurational information, using a world-centred coding system. Overview: The neural correlates of a cognitive map have been speculated to be the place cell system in the hippocampus and the recently discovered grid cells in the entorhinal cortex. History: The idea of a cognitive map was first developed by Edward C. Tolman. Tolman, one of the early cognitive psychologists, introduced this idea when doing an experiment involving rats and mazes. In Tolman's experiment, a rat was placed in a cross-shaped maze and allowed to explore it. After this initial exploration, the rat was placed at one arm of the cross and food was placed at the next arm to the immediate right. The rat was conditioned to this layout and learned to turn right at the intersection in order to get to the food. When placed at different arms of the cross maze, however, the rat still went in the correct direction to obtain the food because of the initial cognitive map it had created of the maze.
Rather than just deciding to turn right at the intersection no matter what, the rat was able to determine the correct way to the food no matter where in the maze it was placed. Unfortunately, further research was slowed due to the behaviorist point of view prevalent in the field of psychology at the time. In later years, O'Keefe and Nadel attributed Tolman's research to the hippocampus, stating that it was the key to the rat's mental representation of its surroundings. This observation furthered research in this area, and consequently much of hippocampus activity is explained through cognitive map making. As time went on, the cognitive map was researched in other fields that found it useful, leading to broader and more differentiated definitions and applications. A very prominent researcher, Colin Eden, has specifically mentioned his application of cognitive mapping simply as any representation of thinking models. Mental map distinction: A cognitive map is a spatial representation of the outside world that is kept within the mind, until an actual manifestation (usually, a drawing) of this perceived knowledge is generated, a mental map. Cognitive mapping is the implicit part, and mental mapping the explicit part, of the same process. In most cases, a cognitive map exists independently of a mental map; an article covering just cognitive maps would remain limited to theoretical considerations. Mental map distinction: Mental mapping is typically associated with landmarks, locations, and geography when demonstrated. Creating mental maps depends on the individual and their perceptions, whether they are influenced by media, real life, or other sources. Because of their factual storage, mental maps can be useful when giving directions and navigating. As stated previously, this distinction is hard to identify when the definitions are almost identical; nevertheless, there is a distinction. In some uses, mental map refers to a practice done by urban theorists by having city dwellers draw a map, from memory, of their city or the place they live. This allows the theorist to get a sense of which parts of the city or dwelling are more substantial or imaginable. This, in turn, lends itself to a decisive idea of how well urban planning has been conducted. Acquisition of the cognitive maps: The cognitive map is generated from a number of sources, both from the visual system and elsewhere. Much of the cognitive map is created through self-generated movement cues. Inputs from senses like vision, proprioception, olfaction, and hearing are all used to deduce a person's location within their environment as they move through it. This allows for path integration, the creation of a vector that represents one's position and direction within one's environment, specifically in comparison to an earlier reference point. This resulting vector can be passed along to the hippocampal place cells where it is interpreted to provide more information about the environment and one's location within the context of the cognitive map. Directional cues and positional landmarks are also used to create the cognitive map. Among directional cues, both explicit cues, like markings on a compass, and gradients, like shading or magnetic fields, are used as inputs to create the cognitive map.
Directional cues can be used both statically, when a person does not move within his environment while interpreting it, and dynamically, when movement through a gradient is used to provide information about the nature of the surrounding environment. Positional landmarks provide information about the environment by comparing the relative position of specific objects, whereas directional cues give information about the shape of the environment itself. These landmarks are processed by the hippocampus together to provide a graph of the environment through relative locations. Alex Siegel and Sheldon White (1975) proposed a model of acquisition of spatial knowledge based on different levels. The first stage of the process is said to be limited to the landmarks available in a new environment. Then, as a second stage, information about the routes that connect landmarks will be encoded, at first in a non-metric representational form, and subsequently expanded with metric properties such as distances, durations and angular deviations. In the third and final step, the observer will be able to use a survey representation of the surroundings, using an allocentric point of view. All in all, the acquisition of cognitive maps is a gradual construction. This kind of knowledge is multimodal in nature and it is built up from different pieces of information coming from different sources that are integrated step by step. Neurological basis: Cognitive mapping is believed to largely be a function of the hippocampus. The hippocampus is connected to the rest of the brain in such a way that it is ideal for integrating both spatial and nonspatial information. Connections from the postrhinal cortex and the medial entorhinal cortex provide spatial information to the hippocampus. Connections from the perirhinal cortex and lateral entorhinal cortex provide nonspatial information. The integration of this information in the hippocampus makes the hippocampus a practical location for cognitive mapping, which necessarily involves combining information about an object's location and its other features. O'Keefe and Nadel were the first to outline a relationship between the hippocampus and cognitive mapping. Many additional studies have shown additional evidence that supports this conclusion. Specifically, pyramidal cells (place cells, boundary cells, and grid cells) have been implicated as the neuronal basis for cognitive maps within the hippocampal system. Neurological basis: Numerous studies by O'Keefe have implicated the involvement of place cells. Individual place cells within the hippocampus correspond to separate locations in the environment, with the sum of all cells contributing to a single map of an entire environment. The strength of the connections between the cells represents the distances between them in the actual environment. The same cells can be used for constructing several environments, though individual cells' relationships to each other may differ on a map-by-map basis. The possible involvement of place cells in cognitive mapping has been seen in a number of mammalian species, including rats and macaque monkeys. Additionally, in a study of rats by Manns and Eichenbaum, pyramidal cells from within the hippocampus were also involved in representing object location and object identity, indicating their involvement in the creation of cognitive maps.
However, there has been some dispute as to whether such studies of mammalian species indicate the presence of a cognitive map and not another, simpler method of determining one's environment. While not located in the hippocampus, grid cells from within the medial entorhinal cortex have also been implicated in the process of path integration, actually playing the role of the path integrator, while place cells display the output of the information gained through path integration. The results of path integration are then later used by the hippocampus to generate the cognitive map. The cognitive map likely exists on a circuit involving much more than just the hippocampus, even if it is primarily based there. Other than the medial entorhinal cortex, the presubiculum and parietal cortex have also been implicated in the generation of cognitive maps. Neurological basis: Parallel map theory There has been some evidence for the idea that the cognitive map is represented in the hippocampus by two separate maps. The first is the bearing map, which represents the environment through self-movement cues and gradient cues. The use of these vector-based cues creates a rough, 2D map of the environment. The second map would be the sketch map that works off of positional cues. The second map integrates specific objects, or landmarks, and their relative locations to create a 2D map of the environment. The cognitive map is thus obtained by the integration of these two separate maps. This leads to an understanding that it is not just one map but three that help us create this mental process. It should be clear that parallel map theory is still growing. The sketch map has foundations in previous neurobiological processes and explanations, while the bearing map has very little research to support it. Cognitive maps in animals: According to O'Keefe and Nadel (1978), spatial abilities are not required only by humans. Non-human animals need them as well to find food, shelter, and other animals, whether mates or predators. To do so, some animals establish relationships between landmarks, allowing them to make spatial inferences and detect positions. The first experiments on rats in a maze, conducted by Tolman, Ritchie, and Kalish (1946), showed that rats can form mental maps of spatial locations with a good comprehension of them. But these experiments, conducted again later by other researchers (for example by Eichenbaum, Stewart, & Morris, 1990 and by Singer et al. 2006), have not produced such clear results. Some authors tried to bring to light the way rats can take shortcuts. The results have demonstrated that in most cases, rats fail to use a shortcut when reaching for food unless they receive a preexposure to this shortcut route. In that case, rats use that route significantly faster and more often than those who were not preexposed. Moreover, they have difficulties making a spatial inference such as taking a novel shortcut route. In 1987, Chapuis and Varlet led an experiment on dogs to determine if they were able to infer shortcuts. The conclusion confirmed their hypothesis. Indeed, the results demonstrated that the dogs were able to go from the starting point to point A with food and then go directly to point B without returning to the starting point. But for Andrew T.D. Bennett (1996) it can simply mean that the dogs have seen some landmarks near point B, such as trees or buildings, and headed towards them because they associated them with the food. Later, in 1998, Cheng and Spetch did an experiment on gerbils.
When looking for the hidden food (goal), gerbils were using the relationship between the goal and one landmark at a time. Instead of deducing that the food was equidistant from two landmarks, gerbils searched for it by its position relative to each of two independent landmarks. This means that even though animals use landmarks to locate positions, they do it in a certain way. Another experiment, including pigeons this time, showed that they also use landmarks to locate positions. The task was for the pigeons to find hidden food in an arena. A part of the testing was to make sure that they were not using their smell to locate food. These results show and confirm other evidence of links present in those animals between one or multiple landmark(s) and hidden food (Cheng and Spetch, 1998, 2001; Spetch and Mondloch, 1993; Spetch et al., 1996, 1997). Criticism: In a review, Andrew T.D. Bennett noted two principal definitions for the "cognitive map" term. The first one, according to Tolman, O'Keefe, and Nadel, implies the capacity to take novel short-cuts thanks to robust memorization of the landmarks. The second one, according to Gallistel, considers a cognitive map as "any representation of space held by an animal". This concern about the lack of a proper definition is also shared by Thinus-Blanc (1996), who stated that the definition is not clear enough, making further experiments difficult to interpret. However, Bennett argued that there is no clear evidence for cognitive maps in non-human animals (i.e. a cognitive map according to Tolman's definition). This argument is based on analyses of studies where it has been found that simpler explanations can account for experimental results. Bennett highlights three simpler alternatives that cannot be ruled out in tests of cognitive maps in non-human animals: "These alternatives are (1) that the apparently novel short-cut is not truly novel; (2) that path integration is being used; and (3) that familiar landmarks are being recognised from a new angle, followed by movement towards them." This point of view is also shared by Grieves and Dudchenko (2013), who showed with their experiment on rats (briefly presented above) that these animals are not capable of making spatial inferences using cognitive maps. Heuristics: Heuristics were found to be used in the manipulation and creation of cognitive maps. These internal representations are used by our memory as a guide in our external environment. It was found that when questioned about map images, distances, etc., people commonly made distortions to images. These distortions took shape in the regularisation of images (i.e., images are represented as more like pure abstract geometric images, though they are irregular in shape). Heuristics: There are several ways that humans form and use cognitive maps, with visual intake being an especially key part of mapping: the first is by using landmarks, whereby a person uses a mental image to estimate a relationship, usually distance, between two objects. The second is route-road knowledge, and is generally developed after a person has performed a task and is relaying the information of that task to another person. The third is a survey, whereby a person estimates a distance based on a mental image that, to them, might appear like an actual map. This image is generally created when a person's brain begins making image corrections.
These are presented in five ways: Right-angle bias: when a person straightens out an image, like mapping an intersection, and begins to give everything 90-degree angles, when in reality it may not be that way. Heuristics: Symmetry heuristic: when people tend to think of shapes, or buildings, as being more symmetrical than they really are. Rotation heuristic: when a person takes a naturally (realistically) distorted image and straightens it out for their mental image. Alignment heuristic: similar to the previous, where people align objects mentally to make them straighter than they really are. Relative-position heuristic: people do not accurately distance landmarks in their mental image based on how well they remember them. Another method of creating cognitive maps is by means of auditory intake based on verbal descriptions. Using the mapping based on a person's visual intake, another person can create a mental image, such as directions to a certain location.
**Torpedo defence** Torpedo defence: Torpedo defence includes evasive maneuvers; passive defences such as torpedo belts, torpedo nets, and torpedo bulges; and active defences such as anti-torpedo torpedoes, which are similar in concept to missile defence systems. Surface Ship Torpedo Defense and Countermeasure Anti-Torpedo systems are highly experimental, and the US Navy ended trials on them in 2018.
**Stresslinux** Stresslinux: Stresslinux is a light-weight Linux distribution designed to test a computer's hardware by running the components at high load while monitoring their health. It is designed to be booted from CD-ROM or via PXE.
**Cipargamin** Cipargamin: Cipargamin (NITD609, KAE609) is an experimental synthetic antimalarial drug belonging to the spiroindolone class. The compound was developed at the Novartis Institute for Tropical Diseases in Singapore, through a collaboration with the Genomics Institute of the Novartis Research Foundation (GNF), the Biomedical Primate Research Centre and the Swiss Tropical Institute. Cipargamin: Cipargamin is a synthetic antimalarial molecule belonging to the spiroindolone class, awarded MMV Project of the Year 2009. It is structurally related to GNF 493, a compound first identified as a potent inhibitor of Plasmodium falciparum growth in a high throughput phenotypic screen of natural products conducted at the Genomics Institute of the Novartis Research Foundation in San Diego, California in 2006. Cipargamin: Cipargamin was discovered by screening the Novartis library of 12,000 natural products and synthetic compounds to find compounds active against Plasmodium falciparum. The first screen turned up 275 compounds and the list was narrowed to 17 potential candidates. The current spiroindolone was optimized to address its metabolic liabilities leading to improved stability and exposure levels in animals. As a result, cipargamin is one of only a handful of molecules capable of completely curing mice infected with Plasmodium berghei (a model of blood-stage malaria). Given its good physicochemical properties, promising pharmacokinetic and efficacy profile, the molecule was recently approved as a preclinical candidate and is now entering GLP toxicology studies with the aim of entering Phase I studies in humans in late 2010. If its safety and tolerability are acceptable, cipargamin would be the first antimalarial not belonging to either the artemisinin or peroxide class to go into a proof-of-concept study in malaria. If cipargamin behaves similarly in people to the way it works in mice, it may be possible to develop it into a drug that could be taken just once - far easier than current standard treatments in which malaria drugs are taken between one and four times a day for up to seven days. Cipargamin also has properties which could enable it to be manufactured in pill form and in large quantities. Further animal studies have been performed and researchers have begun human-stage trials.
**Cereal Milk** Cereal Milk: Cereal Milk is a flavor, beverage, and ingredient introduced commercially by Christina Tosi in 2006 while working at Momofuku. Cereal Milk is milk flavored with breakfast cereal. Cereal milk has inspired various food creations, including cereal milk ice cream and cereal milk-flavored beverages. Development: Tosi first created Cereal Milk in 2006 as an ingredient while working for David Chang at Momofuku. There were no desserts on the menu when Tosi came on board, and she created her own recipes inspired by the flavors of childhood favorites, including Cereal Milk panna cotta. The Los Angeles Times described Tosi's take on panna cotta as "[taking] something upscale...and [yanking] it down." Tosi further developed the concept at Momofuku Bakery and Milk Bar, now Milk Bar. In addition to the panna cotta, Tosi has developed an ice cream, cookies, a charlotte, and beverages such as milk, lattes, and milkshakes. The basic ingredient is also used as a mixer for coffees and cocktails. When Tosi opened Milk Bar, one of the first menu items was her Cereal Milk soft serve. Tosi has since expanded to packaged custard-style ice cream in Cereal Milk flavors. Preparation and ingredients: The beverage and basic ingredient Cereal Milk is prepared by toasting cornflakes, steeping them at room temperature in milk, draining off the cereal, and adding brown sugar and salt. Other cereals can also be used. Reception: The New York Times wrote that "Nothing bears the trademark of the pastry chef Christina Tosi more than her cereal milk flavor," and that she had made it a "household name". Axios called Cereal Milk a "cult favorite". Trade journal Restaurant Business called Tosi's cereal milk "iconic". Saveur's Megan Zhang theorized that the appeal of the cereal milk flavor was rooted in the nostalgia of recalling that breakfast cereal was 'the first thing many of us learned to “cook” and eat ourselves as youngsters'. Influence: In 2017 Ben & Jerry's introduced a line of flavors that Eater called a "blatant ripoff" of Tosi's creation. Burger King has also produced a Cereal Milk milkshake.
**Homogeneous catalysis** Homogeneous catalysis: In chemistry, homogeneous catalysis is catalysis in which the catalyst is in the same phase as the reactants, principally involving a soluble catalyst in a solution. In contrast, heterogeneous catalysis describes processes where the catalyst and substrate are in distinct phases, typically solid and gas, respectively. The term is used almost exclusively to describe solutions and implies catalysis by organometallic compounds. Homogeneous catalysis is an established technology that continues to evolve. An illustrative major application is the production of acetic acid. Enzymes are examples of homogeneous catalysts. Examples: Acid catalysis The proton is a pervasive homogeneous catalyst because water is the most common solvent. Water forms protons by the process of self-ionization of water. In an illustrative case, acids accelerate (catalyze) the hydrolysis of esters: CH3CO2CH3 + H2O ⇌ CH3CO2H + CH3OH. At neutral pH, aqueous solutions of most esters do not hydrolyze at practical rates. Examples: Transition metal catalysis Hydrogenation and related reactions A prominent class of reductive transformations is hydrogenation. In this process, H2 is added to unsaturated substrates. A related methodology, transfer hydrogenation, involves the transfer of hydrogen from one substrate (the hydrogen donor) to another (the hydrogen acceptor). Related reactions entail "HX additions" where X = silyl (hydrosilylation) and CN (hydrocyanation). Most large-scale industrial hydrogenations – margarine, ammonia, benzene-to-cyclohexane – are conducted with heterogeneous catalysts. Fine chemical syntheses, however, often rely on homogeneous catalysts. Examples: Carbonylations Hydroformylation, a prominent form of carbonylation, involves the addition of H and "C(O)H" across a double bond. This process is almost exclusively conducted with soluble rhodium- and cobalt-containing complexes. A related carbonylation is the conversion of alcohols to carboxylic acids. MeOH and CO react in the presence of homogeneous catalysts to give acetic acid, as practiced in the Monsanto and Cativa processes. Related reactions include hydrocarboxylation and hydroesterification. Examples: Polymerization and metathesis of alkenes A number of polyolefins, e.g. polyethylene and polypropylene, are produced from ethylene and propylene by Ziegler-Natta catalysis. Heterogeneous catalysts dominate, but many soluble catalysts are employed, especially for stereospecific polymers. Olefin metathesis is usually catalyzed heterogeneously in industry, but homogeneous variants are valuable in fine chemical synthesis. Oxidations Homogeneous catalysts are also used in a variety of oxidations. In the Wacker process, acetaldehyde is produced from ethene and oxygen. Many non-organometallic complexes are also widely used in catalysis, e.g. for the production of terephthalic acid from xylene. Alkenes are epoxidized and dihydroxylated by metal complexes, as illustrated by the Halcon process and the Sharpless dihydroxylation. Examples: Enzymes (including metalloenzymes) Enzymes are homogeneous catalysts that are essential for life but are also harnessed for industrial processes. A well-studied example is carbonic anhydrase, which catalyzes the release of CO2 into the lungs from the bloodstream. Enzymes possess properties of both homogeneous and heterogeneous catalysts. As such, they are usually regarded as a third, separate category of catalyst. Water is a common reagent in enzymatic catalysis.
Esters and amides are slow to hydrolyze in neutral water, but the rates are sharply affected by metalloenzymes, which can be viewed as large coordination complexes. Acrylamide is prepared by the enzyme-catalyzed hydrolysis of acrylonitrile. US demand for acrylamide was 253,000,000 pounds (115,000,000 kg) as of 2007. Advantages and disadvantages: Advantages Homogeneous catalysts are generally more selective than heterogeneous catalysts. For exothermic processes, homogeneous catalysts dissipate heat into the solvent. Homogeneous catalysts are easier to characterize precisely, so their reaction mechanisms are amenable to rational manipulation. Disadvantages The separation of homogeneous catalysts from products can be challenging. In some cases involving high-activity catalysts, the catalyst is not removed from the product. In other cases, organic products are sufficiently volatile that they can be separated by distillation. Homogeneous catalysts have limited thermal stability compared to heterogeneous catalysts. Many organometallic complexes degrade below 100 °C. Some pincer-based catalysts, however, operate near 200 °C.
**Insecure direct object reference** Insecure direct object reference: Insecure direct object reference (IDOR) is a type of access control vulnerability in digital security. This can occur when a web application or application programming interface uses an identifier for direct access to an object in an internal database but does not check for access control or authentication. For example, if the request URL sent to a web site directly uses an easily enumerated unique identifier (such as http://foo.com/doc/1234), that can provide an exploit for unintended access to all records. Insecure direct object reference: A directory traversal attack is considered a special case of an IDOR. The vulnerability is of such significant concern that for many years it was listed as one of the Open Web Application Security Project’s (OWASP) Top 10 vulnerabilities. Examples: In November 2020, the firm Silent Breach identified an IDOR vulnerability on the United States Department of Defense web site and privately reported it via the DOD's Vulnerability Disclosure Program. The bug was fixed by adding a user session mechanism to the account system, which would require authenticating on the site first. It was reported that the Parler social networking service used sequential post IDs, and that this had enabled the scraping of terabytes of data from the service in January 2021. The researcher responsible for the project has said this was inaccurate.
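To make the pattern concrete, here is a minimal sketch (not taken from the article) using Python and Flask; the route names, the DOCUMENTS store, and the session-based check are illustrative assumptions, not a description of any real site. The first handler trusts an easily enumerated identifier exactly as in the http://foo.com/doc/1234 example above; the second looks up the same identifier but also verifies authentication and ownership.

```python
# Illustrative sketch of an IDOR and one possible fix; names and data are hypothetical.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for Flask session support

# Hypothetical document store: id -> owner and content
DOCUMENTS = {
    1234: {"owner": "alice", "content": "alice's tax form"},
    1235: {"owner": "bob", "content": "bob's tax form"},
}

# Vulnerable: the identifier is easily enumerated and no access check is made,
# so any visitor can walk /doc/1, /doc/2, ... and read every record.
@app.route("/doc/<int:doc_id>")
def get_document_insecure(doc_id):
    doc = DOCUMENTS.get(doc_id) or abort(404)
    return doc["content"]

# Safer: still a direct lookup by id, but the handler requires a logged-in
# user and compares the record's owner against the session identity.
@app.route("/secure/doc/<int:doc_id>")
def get_document_checked(doc_id):
    user = session.get("user") or abort(401)  # must be authenticated
    doc = DOCUMENTS.get(doc_id) or abort(404)
    if doc["owner"] != user:                  # access-control check
        abort(403)
    return doc["content"]
```

The reported fix for the Department of Defense site mentioned above (adding a user session mechanism and requiring authentication first) corresponds to the checks in the second handler.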
**Secundative language** Secundative language: A secundative language is a language in which the recipients of ditransitive verbs (which take a subject and two objects: a theme and a recipient) are treated like the patients (targets) of monotransitive verbs (verbs that take only one object), and the themes get distinct marking. Secundative languages contrast with indirective languages, where the recipient is treated in a special way. Secundative language: While English is mostly not a secundative language, there are some examples. The sentence John gave Mary the ball uses this construction, where the ball is the theme and Mary is the recipient. Secundative language: The alternative wording John presented Mary with the ball is essentially analogous to the structure found in secundative languages; the ball is not the direct object here, but basically a secondary object marked by the preposition with. In German, the prefix be- (which is sometimes likened to an applicative voice) can be used to change the valency of verbs in a similar way: In John schenkte Mary den Ball, the theme Ball is the direct object and the recipient Mary the indirect object (in the dative case); in John beschenkte Mary mit dem Ball, the recipient Mary is now the direct object and the theme Ball is now an oblique argument (an oblique dative) marked by the preposition mit. Terminology: This language type was called dechticaetiative in an article by Edward L. Blansitt, Jr. (from Greek dekhomai 'take, receive' and an obscure second element, unlikely to be kaitoi 'and indeed'), but that term did not catch on. Such languages have also been called anti-ergative languages and primary object languages. Usage: Ditransitive verbs have two arguments other than the subject: a theme that undergoes the action and a recipient that receives the theme (see thematic relation). In a secundative language, the recipient of a ditransitive verb is treated in the same way as the single object of a monotransitive verb, and this syntactic category is called the primary object, which is equivalent to the indirect object in English. The theme of a ditransitive verb is treated separately and called the secondary object, which is equivalent to the direct object. Usage: English is not a true secundative language, since neither the theme nor the recipient is consistently primary; either can be primary depending on context. Usage: A true secundative construction is found in West Greenlandic, where the direct object of a monotransitive verb appears in the absolutive case. In a ditransitive sentence, the recipient appears in the absolutive case and the theme is marked with the instrumental case. Similarly, in Lahu, both the patient of a monotransitive verb and the recipient of a ditransitive verb are marked with the postposition thàʔ. In secundative languages with passive constructions, passivation promotes the primary object to subject. For example, in Swahili the recipient Fatuma is promoted to subject and not the theme zawadi 'gift'. Use in English: Many languages show mixed indirective/secundative behavior. English, which is primarily indirective, arguably contains secundative constructions, traditionally referred to as dative shift. For example, the passive of the sentence John gave Mary the ball is Mary was given the ball by John, in which the recipient rather than the theme is promoted to subject.
This is complicated by the fact that some dialects of English may promote either the recipient (Mary) or the theme (the ball) argument to subject status, and for these dialects The ball was given Mary by John (meaning that the ball was given to Mary) is also well-formed. In addition, the argument structure of verbs like provide is essentially secundative: in The project provides young people with work, the recipient argument is treated like a monotransitive direct object.
**PSB-SB-1202** PSB-SB-1202: PSB-SB-1202 is a coumarin derivative which is an agonist at the cannabinoid receptors CB1 and CB2, with a CB1 Ki of 32 nM and a CB2 Ki of 49 nM. It is also a weak antagonist at the related receptor GPR55, with an IC50 of 6350 nM, but has no significant affinity for GPR18.
**Hay's test** Hay's test: Hay's test, also known as Hay's sulphur powder test, is a chemical test used for detecting the presence of bile salts in urine. Procedure: Sulphur powder is sprinkled into a test tube with three millilitres of urine and if the test is positive, the sulphur powder sinks to the bottom of the test tube. Sulphur powder sinks because bile salts decrease the surface tension of urine.
**Phorbol-diester hydrolase** Phorbol-diester hydrolase: The enzyme phorbol-diester hydrolase (EC 3.1.1.51) catalyzes the reaction phorbol 12,13-dibutanoate + H2O ⇌ phorbol 13-butanoate + butanoate. This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is 12,13-diacylphorbate 12-acylhydrolase. Other names in common use include diacylphorbate 12-hydrolase, phorbol-12,13-diester 12-ester hydrolase, and PDEH.
**Interleukin 1 beta** Interleukin 1 beta: Interleukin-1 beta (IL-1β), also known as leukocytic pyrogen, leukocytic endogenous mediator, mononuclear cell factor, lymphocyte activating factor and other names, is a cytokine protein that in humans is encoded by the IL1B gene. There are two genes for interleukin-1 (IL-1): IL-1 alpha and IL-1 beta (this gene). The IL-1β precursor is cleaved by cytosolic caspase 1 (interleukin 1 beta convertase) to form mature IL-1β. Function: The fever-producing property of human leukocytic pyrogen (interleukin 1) was purified by Dinarello in 1977 with a specific activity of 10–20 nanograms/kg. In 1979, Dinarello reported that purified human leukocytic pyrogen was the same molecule that was described by Igal Gery in 1972. He named it lymphocyte-activating factor (LAF) because it was a lymphocyte mitogen. It was not until 1984 that interleukin 1 was discovered to consist of two distinct proteins, now called interleukin-1 alpha and interleukin-1 beta. IL-1β is a member of the interleukin 1 family of cytokines. This cytokine is produced by activated macrophages, monocytes, and a subset of dendritic cells known as slanDC, as a proprotein, which is proteolytically processed to its active form by caspase 1 (CASP1/ICE). This cytokine is an important mediator of the inflammatory response, and is involved in a variety of cellular activities, including cell proliferation, differentiation, and apoptosis. The induction of cyclooxygenase-2 (PTGS2/COX2) by this cytokine in the central nervous system (CNS) is found to contribute to inflammatory pain hypersensitivity. This gene and eight other interleukin 1 family genes form a cytokine gene cluster on chromosome 2. IL-1β, in combination with IL-23, induced expression of IL-17, IL-21 and IL-22 by γδ T cells. This induction of expression occurs in the absence of additional signals, which suggests that IL-1β is involved in the modulation of autoimmune inflammation. Different inflammasome complexes (cytosolic molecular complexes) have been described. Inflammasomes recognize danger signals and activate the proinflammatory process and the production of IL-1β and IL-18. The NLRP3 type of inflammasome (which contains three domains: a pyrin domain, a nucleotide-binding domain, and a leucine-rich repeat) is activated by various stimuli, and several diseases have been connected to NLRP3 activation, such as type 2 diabetes mellitus, Alzheimer's disease, obesity, and atherosclerosis. Properties: Before cleavage by caspase 1, pro-IL-1β has a molecular weight of 37 kDa. The molecular weight of the proteolytically processed IL-1β is 17.5 kDa. IL-1β has the following amino acid sequence: APVRSLNCTL RDSQQKSLVM SGPYELKALH LQGQDMEQQV VFSMSFVQGE ESNDKIPVAL GLKEKNLYLS CVLKDDKPTL QLESVDPKNY PKKKMEKRFV FNKIEINNKL EFESAQFPNW YISTSQAENM PVFLGGTKGG QDITDFTMQF VSS. The physiological activity determined from the dose-dependent proliferation of murine D10S cells is 2.5 × 10⁸ to 7.1 × 10⁸ units/mg. Clinical significance: Increased production of IL-1β causes a number of different autoinflammatory syndromes, most notably the monogenic conditions referred to as Cryopyrin-Associated Periodic Syndromes (CAPS), due to mutations in the inflammasome receptor NLRP3 which triggers processing of IL-1β. Intestinal dysbiosis has been observed to induce osteomyelitis in an IL-1β-dependent manner. The presence of IL-1β has also been found in patients with multiple sclerosis (a chronic autoimmune disease of the central nervous system). However, it is not known exactly which cells produce IL-1β.
Treatment of multiple sclerosis with glatiramer acetate or natalizumab has also been shown to reduce the presence of IL-1β or its receptor. Clinical significance: Role in carcinogenesis Several types of inflammasomes are suggested to play a role in tumorigenesis owing to their immunomodulatory properties, modulation of the gut microbiota, differentiation and apoptosis. Over-expression of IL-1β caused by the inflammasome may result in carcinogenesis. Some data suggest that NLRP3 inflammasome polymorphisms are connected to malignancies such as colon cancer and melanoma. It was reported that IL-1β secretion was elevated in the lung adenocarcinoma cell line A549. It has also been shown in another study that IL-1β, together with IL-8, plays an important role in the chemoresistance of malignant pleural mesothelioma by inducing expression of transmembrane transporters. Another study showed that inhibition of inflammasome and IL-1β expression decreased the development of cancer cells in melanoma. Furthermore, it has been found that in breast cancer cells, IL-1β activates the p38 and p42/44 MAPK pathways, which ultimately leads to the secretion of the RANK/RANKL inhibitor osteoprotegerin. Higher osteoprotegerin and IL-1β levels are a characteristic of breast cancer cells with a higher metastatic potential. Clinical significance: In HIV-1 infections The human immunodeficiency virus (HIV) infects cells of the immune system, such as macrophages, dendritic cells, and CD4+ T helper cells (TH). The latter can be infected by the virus in various ways, with different fates depending on the state of activation of the T helper cell. Firstly, TH cells can die of viral lysis due to an active infection that produces enough virions to kill the cell. Secondly, CD4+ T cells can be infected by the virus but, instead of producing more viral particles, the cell enters a latent phase. In this period, the T helper cell looks identical from the outside, but any stressor could lead to the renewed production of HIV and its propagation to new immune cells. Lastly, the TH cell can become abortively infected, where the virus is detected inside the cell and a programmed cell death, known as pyroptosis, kills the infected cell. Pyroptosis is mediated via caspase-1 and is characterized by cell lysis and the secretion of IL-1β, causing inflammation and attraction of more immune cells. This can create a cycle in which CD4+ T cells become abortively infected with HIV and die of pyroptosis, while new T helper cells arrive at the site of inflammation where they become infected in turn. The result is the depletion of T helper cells. Even though levels of IL-1β in blood are not markedly different between HIV-positive and HIV-negative individuals, studies have shown elevated levels of IL-1β in the lymphatic tissue of HIV-infected individuals. In fact, the gut-associated lymphoid tissue (GALT) has a high density of immune cells, as the gut is an interface between symbiotic gut microbes that should remain with the host and pathogenic bacteria that should not gain access to the circulatory system. If HIV infection leads to the secretion of IL-1β by monocytes and macrophages, it causes inflammation of this area. The mucosal epithelial layer responds to this by producing less of, or altering, the tight junction proteins, which makes it easier for pathogenic microbes to move into the lamina propria. Here, the pathogens can further activate local immune cells and amplify the inflammatory response.
Clinical significance: Retinal degeneration It has been shown that the IL-1 family plays an important role in inflammation in many degenerative diseases, such as age-related macular degeneration, diabetic retinopathy and retinitis pigmentosa. Significantly increased protein levels of IL-1β have been found in the vitreous of diabetic retinopathy patients. IL-1β has been investigated as a potential therapeutic target for the treatment of diabetic retinopathy; however, systemic use of canakinumab did not have a significant effect. The role of IL-1β in age-related macular degeneration has not been proven in patients, but many animal models and in vitro studies have demonstrated a role for IL-1β in damage to retinal pigmented epithelial cells and photoreceptor cells. The NLRP3 inflammasome activates caspase-1, which catalyzes cleavage of the inactive cytosolic precursor pro-IL-1β to its mature form, IL-1β. Retinal pigmented epithelial cells form the blood-retinal barrier in the human retina, which is important for retinal metabolic activity, integrity and inhibition of immune cell infiltration. It has been shown that human retinal pigmented epithelial cells can secrete IL-1β upon exposure to oxidative stress. The inflammatory reaction leads to damage of retinal cells and infiltration of cells of the immune system. The inflammatory process, including NLRP3 upregulation, is one of the causes of age-related macular degeneration and other retinal diseases that lead to vision loss. Additionally, it has been shown that caspase-1 is upregulated in the retina of diabetic patients, causing a higher production of IL-1β and subsequent death of retinal neurons. Clinical significance: Neuroinflammation Studies in mice on experimental autoimmune encephalomyelitis (EAE), a model for multiple sclerosis (MS) research, have found that blocking IL-1β could make the animals resistant to EAE. IL-1β led to the production of an antigen-specific pro-inflammatory subset of T helper cells (TH17). In combination with other cytokines, interleukin-1β can upregulate the production of the cytokine GM-CSF, which is correlated with neuroinflammation. Detailed mechanisms on this front are yet to be elucidated. Elevated levels of IL-1β have also been observed in the cerebrospinal fluid and brain tissues of Alzheimer patients. The amyloid-β plaques that are characteristic of Alzheimer disease are damage-associated molecular patterns (DAMPs) that are recognized by pattern recognition receptors (PRRs) and lead to the activation of microglia. Consequently, microglia release interleukin-1β among other cytokines. Nevertheless, the significance of IL-1β in Alzheimer disease and the onset of neuroinflammation still remains largely unknown. Lastly, in vitro studies have shown that IL-1β causes an increase in mitochondrial glutaminase activity. In response, there is excessive glutamate secretion, which has a neurotoxic effect. As a therapeutic target: Anakinra is a recombinant and slightly modified version of the human interleukin 1 receptor antagonist protein. Anakinra blocks the biologic activity of IL-1 alpha and beta by competitively inhibiting IL-1 binding to the interleukin type 1 receptor (IL-1RI), which is expressed in a wide variety of tissues and organs. Anakinra is marketed as Kineret and is approved in the US for treatment of RA, NOMID, DIRA. As a therapeutic target: Canakinumab is a human monoclonal antibody targeted at IL-1β, and approved in many countries for treatment of cryopyrin-associated periodic syndromes.
Rilonacept is an IL-1 trap developed by Regeneron targeting IL-1β, and approved in the US as Arcalyst. Orthographic note: Because many authors of scientific manuscripts make the minor error of using a homoglyph, sharp s (ß), instead of beta (β), mentions of "IL-1ß" [sic] often become "IL-1ss" [sic] upon automated transcoding (because ß transcodes to ss). This is why so many mentions of the latter appear in web search results.
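As a rough cross-check of the figures quoted under Properties (a sketch added here for illustration, not part of the article), the mature sequence listed above comes to 153 residues, and multiplying by an average residue mass of roughly 110 Da lands near the stated 17.5 kDa:

```python
# Sanity-check the stated ~17.5 kDa mass of mature IL-1beta from the sequence
# quoted under "Properties", using a rough average residue mass of ~110 Da.
# Approximate by construction; a real calculation would sum exact residue masses.
SEQUENCE = (
    "APVRSLNCTL RDSQQKSLVM SGPYELKALH LQGQDMEQQV VFSMSFVQGE "
    "ESNDKIPVAL GLKEKNLYLS CVLKDDKPTL QLESVDPKNY PKKKMEKRFV "
    "FNKIEINNKL EFESAQFPNW YISTSQAENM PVFLGGTKGG QDITDFTMQF VSS"
)
AVERAGE_RESIDUE_MASS_DA = 110  # rough average mass of one amino acid residue

residues = SEQUENCE.replace(" ", "")
print(len(residues))                            # 153 residues
print(len(residues) * AVERAGE_RESIDUE_MASS_DA)  # 16830 Da, i.e. roughly 17 kDa
```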
**LyX** LyX: LyX (styled as LYX; pronounced [ˈlɪks]) is an open source, graphical user interface document processor based on the LaTeX typesetting system. Unlike most word processors, which follow the WYSIWYG ("what you see is what you get") paradigm, LyX has a WYSIWYM ("what you see is what you mean") approach, where what shows up on the screen roughly depicts the semantic structure of the page and is only an approximation of the document produced by TeX. LyX: Since LyX relies on the typesetting system of LaTeX without being a full-fledged LaTeX editor itself, it has the power and flexibility of LaTeX, and can handle documents including books, notes, theses, academic papers, letters, etc. LyX's interface is structured so that while knowledge of the LaTeX markup language is not necessary for basic usage, new LaTeX directives can be added into the document to support more complex features during editing — though not at the level of full control a full-fledged LaTeX editor can provide.LyX is popular among technical authors and scientists for its advanced mathematical modes, though it is increasingly used by non-mathematically-oriented scholars as well for its bibliographic database integration and its ability to manage multiple files. LyX has also become a popular publishing tool among self-publishers.LyX is available for all major operating systems, including Windows, MacOS, Linux, UNIX, ChromeOS, OS/2 and Haiku. LyX can be redistributed and modified under the terms of the GNU General Public License and is thus free software. Features: LyX is a fully featured document processor. It provides structured document creation and editing, branches for having different versions of the same document, master and child documents, change tracking, support for writing documents in many languages and scripts, spell checking, graphics, table editing and automatic cross-reference (hyperlink) managing. LyX provides automatically numbered headings, titles, and paragraphs, with document outline. It features a powerful mathematical formula editor with point-and-click or keyboard-only interface. Features: LyX has native support for many document classes and templates available in LaTeX through \documentclass{theclass}. User layouts and modules can be made for those missing. Text is laid out according to standard typographic rules, including ligatures, kerning, indents, spacing, and hyphenation. It provides BibTeX/BibLaTeX citation support, comprehensive cross-referencing and PDF hyperlinks. LyX can import various common text formats. Documents can be processed in LaTeX, PdfLaTeX, XeTeX and LuaTeX typesetting systems or exported to DocBook SGML, XHTML and plain text. Versioning is provided through external control systems like SVN, Git, RCS, and CVS. LyX supports right-to-left languages like Arabic, Persian, and Hebrew, along with support for bi-directional text. Chinese, Japanese, and Korean languages are supported as well. Documents can embed calculations via Octave or Computer Algebra Systems (CAS) like Maple, Maxima and Mathematica. Commands will be forwarded to the external programs and results will be embedded in the document. History: Matthias Ettrich started developing a shareware program called Lyrix in 1995. It was then announced on Usenet, where it received a great deal of attention in the following years. Shortly after the initial release, Lyrix was renamed to LyX due to a name clash with a word processor produced by the company Santa Cruz Operation. 
The name LyX was chosen because of the file suffix .lyx for Lyrix files. Versions: LyX has no set release schedule. Releases occur when there are important bug fixes or significant improvements. The following table lists the dates of all major releases. For collaboration between different users, using the same major release is recommended, as the LyX file format remains fixed within each major release (e.g. all minor LyX versions 2.3.0, 2.3.1, 2.3.2, ... use strictly the same file format). Although the current major release, 2.3.0, came out in 2018, the current minor version, 2.3.7, dates from January 7, 2023.
**Isoquinoline** Isoquinoline: Isoquinoline is a heterocyclic aromatic organic compound. It is a structural isomer of quinoline. Isoquinoline and quinoline are benzopyridines, which are composed of a benzene ring fused to a pyridine ring. In a broader sense, the term isoquinoline is used to make reference to isoquinoline derivatives. 1-Benzylisoquinoline is the structural backbone in naturally occurring alkaloids including papaverine. The isoquinoline ring in these natural compounds derives from the aromatic amino acid tyrosine. Properties: Isoquinoline is a colorless hygroscopic liquid at temperatures above its melting point with a penetrating, unpleasant odor. Impure samples can appear brownish, as is typical for nitrogen heterocycles. It crystallizes in the form of platelets that have a low solubility in water but dissolve well in ethanol, acetone, diethyl ether, carbon disulfide, and other common organic solvents. It is also soluble in dilute acids as the protonated derivative. Properties: Being an analog of pyridine, isoquinoline is a weak base, with a pKa of 5.14. It protonates to form salts upon treatment with strong acids, such as HCl. It forms adducts with Lewis acids, such as BF3. Production: Isoquinoline was first isolated from coal tar in 1885 by Hoogewerf and van Dorp. They isolated it by fractional crystallization of the acid sulfate. Weissgerber developed a more rapid route in 1914 by selective extraction of coal tar, exploiting the fact that isoquinoline is more basic than quinoline. Isoquinoline can then be isolated from the mixture by fractional crystallization of the acid sulfate. Production: Although isoquinoline derivatives can be synthesized by several methods, relatively few direct methods deliver the unsubstituted isoquinoline. The Pomeranz–Fritsch reaction provides an efficient method for the preparation of isoquinoline. This reaction uses a benzaldehyde and aminoacetaldehyde diethyl acetal, which in an acid medium react to form isoquinoline. Alternatively, benzylamine and a glyoxal acetal can be used to produce the same result using the Schlittler-Müller modification. Production: Several other methods are useful for the preparation of various isoquinoline derivatives. In the Bischler–Napieralski reaction a β-phenylethylamine is acylated and cyclodehydrated by a Lewis acid, such as phosphoryl chloride or phosphorus pentoxide. The resulting 1-substituted 3,4-dihydroisoquinoline can then be dehydrogenated using palladium. A Bischler–Napieralski reaction of this type produces papaverine. Production: The Pictet–Gams reaction and the Pictet–Spengler reaction are both variations on the Bischler–Napieralski reaction. A Pictet–Gams reaction works similarly to the Bischler–Napieralski reaction; the only difference is that an additional hydroxy group in the reactant provides a site for dehydration under the same reaction conditions as the cyclization, giving the isoquinoline directly rather than requiring a separate reaction to convert a dihydroisoquinoline intermediate. Production: In a Pictet–Spengler reaction, a condensation of a β-phenylethylamine and an aldehyde forms an imine, which undergoes a cyclization to form a tetrahydroisoquinoline instead of the dihydroisoquinoline. In enzymology, (S)-norcoclaurine synthase (EC 4.2.1.78) is an enzyme that catalyzes a biological Pictet–Spengler synthesis. Intramolecular aza-Wittig reactions also afford isoquinolines.
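As a rough illustration (added here, not from the article) of the pKa of 5.14 quoted under Properties, the Henderson–Hasselbalch relation gives the fraction of isoquinoline carried as the protonated isoquinolinium ion at a given pH, treating the quoted value as the pKa of the conjugate acid (the usual convention for bases). The numbers are consistent with the note above that isoquinoline dissolves in dilute acids as the protonated derivative:

```python
# Sketch: fraction of isoquinoline protonated at a given pH, via the
# Henderson-Hasselbalch relation with the conjugate-acid pKa of 5.14 quoted
# above.  Illustrative only; activity effects are ignored.
PKA = 5.14  # pKa of the isoquinolinium ion (conjugate acid)

def fraction_protonated(ph, pka=PKA):
    ratio_base_to_acid = 10 ** (ph - pka)   # [B]/[BH+] = 10**(pH - pKa)
    return 1.0 / (1.0 + ratio_base_to_acid)

for ph in (2.0, 5.14, 7.0):
    print(f"pH {ph}: {fraction_protonated(ph):.1%} protonated")
# pH 2.0  -> ~99.9% protonated (dissolved as the salt in dilute acid)
# pH 5.14 -> 50.0%
# pH 7.0  -> ~1.4% (mostly the poorly water-soluble free base)
```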
Applications of derivatives: Isoquinolines find many applications, including: anesthetics, such as dimethisoquin; antihypertension agents, such as quinapril and debrisoquine (derived from 1,2,3,4-tetrahydroisoquinoline); antiretroviral agents, such as saquinavir, which carries an isoquinolyl functional group; and vasodilators, a well-known example being papaverine. Bisbenzylisoquinolinium compounds are similar in structure to tubocurarine. They have two isoquinolinium structures, linked by a carbon chain containing two ester linkages. In the human body: Parkinson's disease, a slowly progressing movement disorder, is thought to be caused by certain neurotoxins. A neurotoxin called MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine), the precursor to MPP+, was identified and linked to Parkinson's disease in the 1980s. The active neurotoxins destroy dopaminergic neurons, leading to parkinsonism and Parkinson's disease. Several tetrahydroisoquinoline derivatives have been found to have the same neurochemical properties as MPTP. These derivatives may act as precursors to active neurotoxins. Other uses: Isoquinolines are used in the manufacture of dyes, paints, insecticides and fungicides. Isoquinoline is also used as a solvent for the liquid–liquid extraction of resins and terpenes, and as a corrosion inhibitor.
**Ion** Ion: An ion is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention, and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons. Ion: A cation is a positively charged ion with fewer electrons than protons, while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds. Ion: Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization. History of discovery: The word ion was coined from the Greek neuter present participle of ienai (Greek: ἰέναι), meaning "to go". A cation is something that moves down (Greek: κάτω, pronounced kato, meaning "down") and an anion is something that moves up (Greek: ἄνω, pronounced ano, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that, since metals dissolved into a solution at one electrode and new metal came forth from the solution at the other electrode, some kind of substance must have moved through the solution in a current, conveying matter from one place to the other. In correspondence with Faraday, Whewell also coined the words anode and cathode, as well as anion and cation as ions that are attracted to the respective electrodes. Svante Arrhenius put forth, in his 1884 dissertation, the explanation of the fact that solid crystalline salts dissociate into paired charged particles when dissolved, for which he would win the 1903 Nobel Prize in Chemistry. Arrhenius' explanation was that in forming a solution, the salt dissociates into Faraday's ions; he proposed that ions formed even in the absence of an electric current. Characteristics: Ions in their gas-like state are highly reactive and will rapidly interact with ions of opposite charge to give neutral molecules or ionic salts. Ions are also produced in the liquid or solid state when salts interact with solvents (for example, water) to produce solvated ions, which are more stable, for reasons involving a combination of energy and entropy changes as the ions move away from each other to interact with the liquid. These stabilized species are more commonly found in the environment at low temperatures. A common example is the ions present in seawater, which are derived from dissolved salts. Characteristics: As charged objects, ions are attracted to opposite electric charges (positive to negative, and vice versa) and repelled by like charges.
When they move, their trajectories can be deflected by a magnetic field. Characteristics: Electrons, due to their smaller mass and thus larger space-filling properties as matter waves, determine the size of atoms and molecules that possess any electrons at all. Thus, anions (negatively charged ions) are larger than the parent molecule or atom, as the excess electron(s) repel each other and add to the physical size of the ion, because its size is determined by its electron cloud. Cations are smaller than the corresponding parent atom or molecule due to the smaller size of the electron cloud. One particular cation (that of hydrogen) contains no electrons, and thus consists of a single proton – much smaller than the parent hydrogen atom. Characteristics: Anions and cations Anion (−) and cation (+) indicate the net electric charge on an ion. An ion that has more electrons than protons, giving it a net negative charge, is named an anion, and a minus indication "Anion (−)" indicates the negative charge. With a cation it is just the opposite: it has fewer electrons than protons, giving it a net positive charge, hence the indication "Cation (+)". Characteristics: Since the electric charge on a proton is equal in magnitude to the charge on an electron, the net electric charge on an ion is equal to the number of protons in the ion minus the number of electrons. Characteristics: An anion (−) (ANN-eye-ən, from the Greek word ἄνω (ánō), meaning "up") is an ion with more electrons than protons, giving it a net negative charge (since electrons are negatively charged and protons are positively charged). A cation (+) (KAT-eye-ən, from the Greek word κάτω (káto), meaning "down") is an ion with fewer electrons than protons, giving it a positive charge. There are additional names used for ions with multiple charges. For example, an ion with a −2 charge is known as a dianion and an ion with a +2 charge is known as a dication. A zwitterion is a neutral molecule with positive and negative charges at different locations within that molecule. Cations and anions are measured by their ionic radius and they differ in relative size: "Cations are small, most of them less than 10⁻¹⁰ m (10⁻⁸ cm) in radius. But most anions are large, as is the most common Earth anion, oxygen. From this fact it is apparent that most of the space of a crystal is occupied by the anion and that the cations fit into the spaces between them." The terms anion and cation (for ions that respectively travel to the anode and cathode during electrolysis) were introduced by Michael Faraday in 1834 following his consultation with William Whewell. Characteristics: Natural occurrences Ions are ubiquitous in nature and are responsible for diverse phenomena from the luminescence of the Sun to the existence of the Earth's ionosphere. Atoms in their ionic state may have a different color from neutral atoms, and thus light absorption by metal ions gives the color of gemstones. In both inorganic and organic chemistry (including biochemistry), the interaction of water and ions is extremely important; an example is the energy that drives the breakdown of adenosine triphosphate (ATP). Related technology: Ions can be non-chemically prepared using various ion sources, usually involving high voltage or temperature. These are used in a multitude of devices such as mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters, and ion engines.
As reactive charged particles, they are also used in air purification by disrupting microbes, and in household items such as smoke detectors. As signalling and metabolism in organisms are controlled by a precise ionic gradient across membranes, the disruption of this gradient contributes to cell death. This is a common mechanism exploited by natural and artificial biocides, including the ion channels gramicidin and amphotericin (a fungicide). Inorganic dissolved ions are a component of total dissolved solids, a widely known indicator of water quality. Related technology: Detection of ionizing radiation The ionizing effect of radiation on a gas is extensively used for the detection of radiation such as alpha, beta, gamma, and X-rays. The original ionization event in these instruments results in the formation of an "ion pair": a positive ion and a free electron, created by ion impact by the radiation on the gas molecules. The ionization chamber is the simplest of these detectors, and collects all the charges created by direct ionization within the gas through the application of an electric field. The Geiger–Müller tube and the proportional counter both use a phenomenon known as a Townsend avalanche to multiply the effect of the original ionizing event by means of a cascade effect whereby the free electrons are given sufficient energy by the electric field to release further electrons by ion impact. Chemistry: Denoting the charged state When writing the chemical formula for an ion, its net charge is written in superscript immediately after the chemical structure for the molecule/atom. The net charge is written with the magnitude before the sign; that is, a doubly charged cation is indicated as 2+ instead of +2. However, the magnitude of the charge is omitted for singly charged molecules/atoms; for example, the sodium cation is indicated as Na+ and not Na1+. Chemistry: An alternative (and acceptable) way of showing a molecule/atom with multiple charges is by drawing out the signs multiple times; this is often seen with transition metals. Chemists sometimes circle the sign; this is merely ornamental and does not alter the chemical meaning. The three representations Fe2+, Fe++, and Fe⊕⊕ are thus equivalent. Chemistry: Monatomic ions are sometimes also denoted with Roman numerals. In spectroscopy, the numeral counts ionization stages starting from the neutral atom, so the doubly charged Fe2+ seen above is referred to as Fe III or FeIII (Fe I being a neutral Fe atom and Fe II a singly ionized Fe ion). In chemical (Stock) nomenclature, by contrast, the Roman numeral designates the formal oxidation state, so Fe2+ is iron(II); for a monatomic ion the oxidation state equals the net charge denoted by the superscripted Indo-Arabic numerals, so the two notations carry the same information. The Roman numerals cannot be applied to polyatomic ions, although it is possible to mix the notations for an individual metal centre within a polyatomic complex, as is done for the uranyl ion. Chemistry: Sub-classes If an ion contains unpaired electrons, it is called a radical ion. Just like uncharged radicals, radical ions are very reactive. Polyatomic ions containing oxygen, such as carbonate and sulfate, are called oxyanions. Molecular ions that contain at least one carbon to hydrogen bond are called organic ions. If the charge in an organic ion is formally centred on a carbon, it is termed a carbocation (if positively charged) or carbanion (if negatively charged).
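As a small, purely illustrative sketch of the charge bookkeeping and notation conventions described above (added here, not part of the article), the helper below computes the net charge as protons minus electrons and renders it with the magnitude written before the sign and omitted when it is 1; superscripting is left to typography:

```python
# Illustrative helper for the ion-notation convention described above:
# net charge = protons - electrons, written after the symbol, magnitude
# before the sign, and the magnitude omitted for singly charged ions.
def ion_label(symbol, protons, electrons):
    charge = protons - electrons          # net charge of the ion
    if charge == 0:
        return symbol                     # neutral atom or molecule
    sign = "+" if charge > 0 else "-"
    magnitude = abs(charge)
    return symbol + (sign if magnitude == 1 else f"{magnitude}{sign}")

# Examples matching the text: Na+ (11 p, 10 e), Cl- (17 p, 18 e), Fe2+ (26 p, 24 e)
assert ion_label("Na", 11, 10) == "Na+"
assert ion_label("Cl", 17, 18) == "Cl-"
assert ion_label("Fe", 26, 24) == "Fe2+"
```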
Chemistry: Formation Formation of monatomic ions Monatomic ions are formed by the gain or loss of electrons to the valence shell (the outer-most electron shell) in an atom. The inner shells of an atom are filled with electrons that are tightly bound to the positively charged atomic nucleus, and so do not participate in this kind of chemical interaction. The process of gaining or losing electrons from a neutral atom or molecule is called ionization. Chemistry: Atoms can be ionized by bombardment with radiation, but the more usual process of ionization encountered in chemistry is the transfer of electrons between atoms or molecules. This transfer is usually driven by the attaining of stable ("closed shell") electronic configurations. Atoms will gain or lose electrons depending on which action takes the least energy. Chemistry: For example, a sodium atom, Na, has a single electron in its valence shell, surrounding 2 stable, filled inner shells of 2 and 8 electrons. Since these filled shells are very stable, a sodium atom tends to lose its extra electron and attain this stable configuration, becoming a sodium cation in the process: Na → Na+ + e−. On the other hand, a chlorine atom, Cl, has 7 electrons in its valence shell, which is one short of the stable, filled shell with 8 electrons. Thus, a chlorine atom tends to gain an extra electron and attain a stable 8-electron configuration, becoming a chloride anion in the process: Cl + e− → Cl−. This driving force is what causes sodium and chlorine to undergo a chemical reaction, wherein the "extra" electron is transferred from sodium to chlorine, forming sodium cations and chloride anions. Being oppositely charged, these cations and anions form ionic bonds and combine to form sodium chloride, NaCl, more commonly known as table salt: Na+ + Cl− → NaCl. Chemistry: Formation of polyatomic and molecular ions Polyatomic and molecular ions are often formed by the gaining or losing of elemental ions such as a proton, H+, in neutral molecules. For example, when ammonia, NH3, accepts a proton, H+ (a process called protonation), it forms the ammonium ion, NH4+. Ammonia and ammonium have the same number of electrons in essentially the same electronic configuration, but ammonium has an extra proton that gives it a net positive charge. Chemistry: Ammonia can also lose an electron to gain a positive charge, forming the ion NH3+. However, this ion is unstable, because it has an incomplete valence shell around the nitrogen atom, making it a very reactive radical ion. Due to the instability of radical ions, polyatomic and molecular ions are usually formed by gaining or losing elemental ions such as H+, rather than gaining or losing electrons. This allows the molecule to preserve its stable electronic configuration while acquiring an electrical charge. Chemistry: Ionization potential The energy required to detach an electron in its lowest energy state from an atom or molecule of a gas with less net electric charge is called the ionization potential, or ionization energy. The nth ionization energy of an atom is the energy required to detach its nth electron after the first n − 1 electrons have already been detached. Chemistry: Each successive ionization energy is markedly greater than the last. Particularly great increases occur after any given block of atomic orbitals is exhausted of electrons. For this reason, ions tend to form in ways that leave them with full orbital blocks.
For example, sodium has one valence electron in its outermost shell, so in ionized form it is commonly found with one lost electron, as Na+. On the other side of the periodic table, chlorine has seven valence electrons, so in ionized form it is commonly found with one gained electron, as Cl−. Caesium has the lowest measured ionization energy of all the elements and helium has the greatest. In general, the ionization energy of metals is much lower than the ionization energy of nonmetals, which is why, in general, metals will lose electrons to form positively charged ions and nonmetals will gain electrons to form negatively charged ions. Chemistry: Ionic bonding Ionic bonding is a kind of chemical bonding that arises from the mutual attraction of oppositely charged ions. Ions of like charge repel each other, and ions of opposite charge attract each other. Therefore, ions do not usually exist on their own, but will bind with ions of opposite charge to form a crystal lattice. The resulting compound is called an ionic compound, and is said to be held together by ionic bonding. In ionic compounds there arise characteristic distances between ion neighbours from which the spatial extension and the ionic radius of individual ions may be derived. Chemistry: The most common type of ionic bonding is seen in compounds of metals and nonmetals (except noble gases, which rarely form chemical compounds). Metals are characterized by having a small number of electrons in excess of a stable, closed-shell electronic configuration. As such, they have the tendency to lose these extra electrons in order to attain a stable configuration. This property is known as electropositivity. Non-metals, on the other hand, are characterized by having an electron configuration just a few electrons short of a stable configuration. As such, they have the tendency to gain more electrons in order to achieve a stable configuration. This tendency is known as electronegativity. When a highly electropositive metal is combined with a highly electronegative nonmetal, the extra electrons from the metal atoms are transferred to the electron-deficient nonmetal atoms. This reaction produces metal cations and nonmetal anions, which are attracted to each other to form a salt.
**The SWORD Project** The SWORD Project: The SWORD Project is the CrossWire Bible Society's free software project. Its purpose is to create cross-platform open-source tools, covered by the GNU General Public License, that allow programmers and Bible societies to write new Bible software more quickly and easily. Overview: The core of The SWORD Project is a cross-platform library written in C++, providing access, search functions and other utilities to a growing collection of over 200 texts in over 50 languages. Any software based on this API can use this collection. JSword is a separate implementation, written in Java, which reproduces most of the API features of the C++ API and supports most SWORD data content. The project is one of the primary implementers of and contributors to the Open Scripture Information Standard (OSIS), a standardized XML language for the encoding of scripture. The software is also capable of utilizing certain resources encoded using the Text Encoding Initiative (TEI) format and maintains deprecated support for Theological Markup Language (ThML) and General Bible Format (GBF). Bible study front-end applications: A variety of front ends based on The SWORD Project are available: And Bible And Bible, based on JSword, is an Android application. Alkitab Bible Study Alkitab Bible Study, based on JSword, is a multiplatform application with binaries available for Windows, Linux, and OS X. It has been described as "an improved Windows front-end for JSword". The Bible Tool The Bible Tool is a web front end to SWORD. One instance of the tool is hosted at CrossWire's own site. BibleDesktop BibleDesktop is built on JSword featuring binaries for Windows (98SE and later), OS X, and Linux (and other Unix-like OSes). BibleTime BibleTime is a C++ SWORD front end using the Qt GUI toolkit, with binaries for Linux, Windows, FreeBSD, and OS X. BibleTime Mini BibleTime Mini is a multiplatform application for Android, BlackBerry, jailbroken iOS, MeeGo, Symbian, and Windows Mobile. BPBible BPBible is a SWORD front end written in Python, which supports Linux and Windows. A notable feature is that a PortableApps version of BPBible is available. Bible study front-end applications: Eloquent Eloquent (formerly MacSword) is a free open-source application for research and study of the Bible, developed specifically for Macintosh computers running macOS. It is a native OS X app built in Objective-C. Eloquent allows users to read and browse different Bible translations in many languages, devotionals, commentaries, dictionaries and lexicons. It also supports searching and advanced features such as services enabling users to access the Bible within other application programs. Bible study front-end applications: Eloquent is one of About.com's top 10 Bible programs. Version 2.3.5 of Eloquent continued Snow Leopard development; starting with version 2.4.0, Eloquent began testing on OS X Lion, implementing features that are specific to the Lion operating system. Ezra Bible App Ezra Bible App is an open source Bible study tool focussing on topical study based on keywords/tags. It is based on Electron and works on Windows, Linux, macOS and Android. FireBible FireBible is a Firefox extension that works on Windows, Linux, and OS X. PocketSword PocketSword is an iOS front end supporting iPad, iPhone, and iPod Touch, available in Apple's App Store.
STEPBible: STEPBible (STEP, Scripture Tools for Every Person) is an initiative by Tyndale House, Cambridge, to build an online Bible study tool based on The SWORD Project. The first public release (beta launch) of the software as an online platform was on 25 July 2013. The desktop version runs in any web browser; the STEPBible app can also be installed on iOS and Android phones and tablets, and on a Chromebook.
The SWORD Project for Windows: The SWORD Project for Windows (known internally as BibleCS) is a Windows application built in C++Builder.
Xiphos: Xiphos (formerly GnomeSword) is a C++ SWORD front end using GTK+, with binaries available for Linux, UNIX, and Windows (2000 and later). It has been described as "a top-of-the-line Bible study program."
xulsword: xulsword is a XUL-based front end for Windows and Linux. Portable versions of the application, intended to be run from a USB stick, are also available.
Others: Additional front ends to SWORD exist to support a number of legacy and niche platforms, including diatheke (CLI & CGI), SwordReader (Windows Mobile), and Rapier (Maemo).

Reviews: It is one of About.com's top 10 Bible programs. Bible Software Review, Review of MacSword version 1.2, June 13, 2005. Foster Tribe, SwordBible Review [5], November 25, 2008. Michael Hansen, Studying the Bible for Free, Stimulus, Volume 12, Number 3, August 2004, pages 33–38.
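The Overview above describes a C++ library that exposes module access and search functions to the front ends listed here. The sketch below illustrates, in broad strokes, how such a front end might look up a passage through that library; it is not taken from this article, and the header names, the "KJV" module identifier, and the exact method names are assumptions based on commonly documented SWORD usage.

```cpp
// Illustrative sketch only: retrieve and print a verse via the SWORD C++ library.
// Assumes the SWORD development headers are installed and that a module named
// "KJV" exists in the local module repository (both are assumptions).
#include <iostream>
#include <swmgr.h>
#include <swmodule.h>

int main() {
    sword::SWMgr library;                             // scans for installed modules
    sword::SWModule *kjv = library.getModule("KJV");  // hypothetical module name
    if (!kjv) {
        std::cerr << "Module not found\n";
        return 1;
    }
    kjv->setKey("John 3:16");                         // position the module on a verse
    std::cout << kjv->renderText() << std::endl;      // render the verse text
    return 0;
}
```

A front end such as BibleTime or Xiphos wraps this kind of call in a GUI, while JSword-based applications such as And Bible perform equivalent lookups through the Java reimplementation.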
**Polyacrylic acid** Polyacrylic acid: Poly(acrylic acid) (PAA; trade name Carbomer) is a polymer with the formula (CH2-CHCO2H)n. It is a derivative of acrylic acid (CH2=CHCO2H). In addition to the homopolymers, a variety of copolymers, crosslinked polymers, and partially deprotonated derivatives thereof are known and of commercial value. In aqueous solution at neutral pH, PAA is an anionic polymer, i.e., many of the side chains of PAA lose their protons and acquire a negative charge. Partially or wholly deprotonated PAAs are polyelectrolytes, with the ability to absorb and retain water and swell to many times their original volume. These properties – acid-base and water-attracting – are the basis of many applications.

Synthesis: PAA is produced by free radical polymerization. Initiators include potassium persulfate and AIBN. Polyacrylic acid is a polyolefin: it can be viewed as polyethylene with carboxylic acid (CO2H) substituents on alternating carbons. Owing to these groups, alternating carbon atoms in the backbone are stereogenic (colloquially: chiral). For this reason, polyacrylic acid exists in atactic, syndiotactic, and isotactic forms, although this aspect is rarely discussed. The polymerization is initiated with radicals and is assumed to be stereorandom. Crosslinking can be introduced in many ways.

Production: About 1.6 billion kilograms were produced in 2008.

Structure and derivatives: Polyacrylic acid is a weak anionic polyelectrolyte whose degree of ionisation depends on solution pH. In its non-ionised form at low pH, PAA may associate with various non-ionic polymers (such as polyethylene oxide, poly-N-vinyl pyrrolidone, polyacrylamide, and some cellulose ethers) and form hydrogen-bonded interpolymer complexes. In aqueous solutions PAA can also form polycomplexes with oppositely charged polymers such as chitosan, with surfactants, and with drug molecules (for example, streptomycin).

Physical properties: Dry PAAs are sold as white, fluffy powders. In the dry powder form of the sodium salt, the positively charged sodium ions are bound to the polyacrylate; in aqueous solution, however, the sodium ions can dissociate. The presence of many metal cations allows the polymer to absorb a large amount of water.

Applications:
Absorbent: PAA is widely used in dispersants; its molecular weight has a significant impact on rheological properties and dispersion capacity, and hence on applications. The dominant application for PAA is as a superabsorbent. About 25% of PAA is used for detergents and dispersants. Polyacrylic acid and its derivatives are used in disposable diapers. Acrylic acid is also the main component of superabsorbent polymers (SAPs), cross-linked polyacrylates that can absorb and retain more than 100 times their own weight in liquid. The US Food and Drug Administration authorised the use of SAPs in packaging with indirect food contact.
Cleaning: Detergents often contain copolymers of acrylic acid that assist in sequestering dirt. Cross-linked polyacrylic acid has also been used in the processing of household products, including floor cleaners. PAA may inactivate the antiseptic chlorhexidine gluconate.
Biocompatible materials: Neutralized polyacrylic acid gels are suitable biocompatible matrices for medical applications such as gels for skin care products. PAA films can be deposited on orthopaedic implants to protect them from corrosion. Crosslinked hydrogels of PAA and gelatin have also been used as medical glue.
Applications: Paints and cosmetics: Other applications involve paints and cosmetics, where PAAs stabilize suspended solids in liquids, prevent emulsions from separating, and control the flow and consistency of cosmetics. Carbomer codes (910, 934, 940, 941, and 934P) are an indication of molecular weight and the specific components of the polymer. For many applications PAAs are used in the form of alkali metal or ammonium salts, e.g. sodium polyacrylate.
Emerging applications: Hydrogels derived from PAA have attracted much study for use as bandages and aids for wound healing.
Drilling fluid and metal quenching: A few reports describe the use of PAA as a deflocculant (so-called alkaline polyacrylates) in the oil drilling industry. It has also been reported to be used for metal quenching in metalworking (see Sodium polyacrylate).
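The statement under Structure and derivatives that the degree of ionisation depends on solution pH can be made quantitative with the usual weak-acid relationship. The sketch below is an added illustration: it ignores polyelectrolyte effects, and the pKa of roughly 4.5 is an approximate literature value rather than a figure from this text.

$$\alpha \;=\; \frac{[\mathrm{COO^{-}}]}{[\mathrm{COOH}] + [\mathrm{COO^{-}}]} \;\approx\; \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}$$

With pKa ≈ 4.5, the carboxylic acid groups are mostly protonated (non-ionised) below about pH 3 and mostly deprotonated above about pH 6, consistent with PAA behaving as an anionic polymer at neutral pH and forming hydrogen-bonded interpolymer complexes only at low pH.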
**Cast bullet** Cast bullet: A cast bullet is made by allowing molten metal to solidify in a mold. Most cast bullets are made of lead alloyed with tin and antimony, but zinc alloys have been used when lead is scarce, and may be used again in response to concerns about lead toxicity. Most commercial bullet manufacturers use swaging in preference to casting, but bullet casting remains popular with handloaders.

History: Firearms projectiles were being cast in the 14th century. Iron was used for cannon, while lead was the preferred material for small arms. Lead was more expensive than iron, but it was softer and less damaging to the relatively weak iron barrels of early muskets. Lead could be cast in a ladle over a wood fire used for cooking or home heating, while casting iron required higher temperatures. The greater density of lead allowed lead bullets to retain velocity and energy better than iron bullets of the same weight and initial firing velocity.

Swaging, rather than casting, became a preferred manufacturing technique during the 19th-century industrial revolution, but cast bullets remained popular in early rimmed black powder cartridges like the .32-20 Winchester, .32-40 Ballard, .38-40 Winchester, .38-55 Winchester, .44-40 Winchester, .45 Colt, and .45-70. Disadvantages became evident as loadings shifted to smokeless powder in the late 19th century. Higher-velocity smokeless powder loadings caused lead to melt and be torn from soft bullets, leaving small deposits in the barrel after firing, a problem called leading. Manufacturers of high-velocity military ammunition modified their bullet swaging process to apply a thin sheet of stronger metal over the soft lead bullet. Although it took several decades to devise bullet jacket alloys and manufacturing procedures that duplicated the accuracy of cast bullets at lower velocities, jacketed bullets were more accurate at the velocities of 20th-century military rifle cartridges. Jacketed bullets also functioned more reliably and were less likely to be deformed in the mechanical loading process of self-loading pistols and machine guns.

Cast bullet advantages: Bullet casting remained popular for shooters accustomed to older weapons. Firearms were often sold with a mould designed for that particular weapon, so individuals living in remote areas would be able to manufacture their own ammunition rather than relying upon undependable supplies from local merchants. The uniform fit of bullets from an individual mould offered superior accuracy when early manufacturing tolerances were comparatively large. These basic advantages remain true today. Moulds can be obtained to uniformly cast bullets of a diameter producing optimum accuracy in a specific firearm, and a firearm owner possessing such a mould can obtain a supply of those bullets independent of unreliable manufacturers and distributors. Bullets cast over a fireplace or stove from readily obtainable scrap materials still offer excellent performance in subsonic revolver cartridges, and more sophisticated casting techniques can produce bullets suitable for loading at velocities up to about 2,000 feet per second (610 m/s). Recent advances in cast bullet lubricants have enabled shooters to push cast bullets past 2,800 feet per second (850 m/s) in slow-twist .30-caliber rifles.

Safety: Although some bullet casting procedures can be accomplished with heating elements used for cooking, care must be taken to avoid contaminating food preparation areas or utensils with lead alloys.
Most bullet casters prefer to use portable electric melting pots in areas with good ventilation. Molten metal can cause serious burns, and it can be sprayed around the working area by violently expanding steam if it comes into contact with water from spilled drinks or other sources. Bullet casters should wear protective clothing including eye protection, and should carefully wash their hands prior to eating, drinking, or smoking. Young children are especially vulnerable to lead poisoning and are unlikely to appreciate the danger of shiny molten metal and newly cast bullets. Bullet casting must be limited to times and locations where children are absent.

Safety: Particular risk comes from the oxides of lead and other metals present in lead alloys, as oxides are often more easily absorbed than the metallic forms. This means that the dross that is skimmed from the lead pot may pose a larger hazard than the metallic alloys.

Bullet shapes: Cast bullets require a longer bearing surface than jacketed bullets to maintain an equivalent alignment with the bore of the firearm, because the softer cast bullet can be more readily deformed. The most successful cast bullet designs have a round or flattened nose rather than a long, unsupported ogive. Bullet designs with a forward diameter designed to be supported on the rifling lands work best in barrels rifled with wide lands and narrow grooves, like the 2-groove M1903-A3 rifles. Forward bearing surfaces of full groove diameter provide more effective alignment in barrels with wide grooves and narrow lands, provided the chamber throat is long enough to accept such bullets.

Gas checks: One of the earlier efforts to obtain better high-velocity performance involved placing a very shallow cup of copper alloy over the base of the bullet. This cup resembles a very short jacket and is called a gas check. Cast bullets require a smaller diameter at the base to accept the gas check. Some gas checks are designed to crimp onto the base of the bullet, while others have a looser fit.

Bullet lubrication: Tallow or lard was used as a lubricant to ease the insertion of muzzle-loaded bullets. Elongated rifle bullets were designed to be cast with grooves encircling the bullet to provide a reservoir for lubricant. These lubricants softened the black powder fouling for easier removal and reduced the tendency of bullets to leave deposits of lead in the barrel as they were fired. The latter advantage continued to be significant with smokeless powder. Attempts to obtain satisfactory high-velocity performance with cast bullets have included experimentation with a variety of lubricant mixtures, including such things as beeswax, carnauba wax, Japan wax, bayberry wax, paraffin, petroleum jelly, sperm oil, castor oil, stearyl alcohol, lauryl alcohol, graphite, molybdenum disulphide, mica, zinc oxide, Teflon, cup grease, lithium soap, water pump grease, and a variety of more modern lubricating materials.

Bullet alloys: Pure lead was used to cast hollow-base bullets for Civil War era muskets. These bullets were designed to load easily and then expand into the grooves of the rifling when fired. Pure lead is undesirably soft for casting bullets not requiring such expansion. Tin is a common alloying element. Lead alloyed with a small amount of tin fills out moulds more uniformly than pure lead. Tin also increases the hardness of cast bullets, up to a maximum at about eight to ten percent tin.
Tin is relatively expensive, so many modern alloys rely upon antimony to increase hardness while retaining the casting advantages of a minimal addition of tin. Linotype metal is a eutectic alloy of 3% tin, 12% antimony, and 85% lead. It is a very satisfactory alloy for casting most bullets. However, bullets from Linotype alloy tend to be brittle and are not suitable for some game hunting.

Heat treating: Heat treating can increase the hardness of commonly used lead alloys. The basic procedure is to rapidly cool, or quench, hot bullets. Some suggest this can be done by dropping hot bullets from the mold into a tub of water, but this procedure carries the risk of splashing water onto the mold or into the molten casting metal and causing a steam explosion. An alternative procedure is to re-heat cast bullets (usually in a wire mesh basket) in a temperature-controlled oven and then remove and quench them. The oven temperature should be less than the melting temperature of the bullet alloy. This temperature will vary with the concentrations of alloying elements, but is often in the range of 450 to 500 degrees Fahrenheit (232 to 260 degrees Celsius).

Paper-patched bullets: As velocities increased and rifling was introduced, lead remaining in the bore became a concern. One of the earlier attempts to prevent leading and obtain potentially better velocity and performance with cast projectiles, still popular today with muzzle loaders and users of black powder rifles, involves the application of a paper jacket. Patching is the hand process of applying paper jackets. The projectile is cast to a diameter that is usually that of the bore, and needs to be brought up to groove diameter by a uniform number of paper wrappings. Some prefer a relatively strong paper precisely cut to wrap exactly twice around the bullet with no overlap where the ends meet. Others substitute a range of papers, from wax-doped rice paper used for rolling cigarettes, through greased cooking paper, waxed confectionery paper bags, and printer labels, to silicone-impregnated baking paper. The width of the piece of paper is slightly greater than the length of the bearing surface of the projectile, so some paper extends past the base and is folded or twisted under. Some projectiles have a base cavity into which the twisted end fits. The paper patch is moistened slightly with water to make it more pliable and slightly sticky. The patch is carefully wrapped around the bearing surface of the bullet. The lip of paper extending past the base of the bullet is then twisted together, and may be pushed into a depression cast into the base of the bullet. The lubricant may be allowed to evaporate after the jacket has been applied, and a different lubricant may be applied after the formed paper has dried. Very good accuracy has been obtained with paper-patched bullets, but the assembly procedure is relatively labor-intensive. There is some question about whether the accuracy improvements result from the paper jackets or from the greater uniformity of shooting procedures by people with the patience to apply the patches. A small number of dedicated target shooters still load paper-patched bullets at velocities up to about 2,000 feet per second (610 m/s).
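As a quick check of the oven temperatures quoted under Heat treating above (a worked conversion added for convenience, not part of the source text):

$$T_{\mathrm{C}} = (T_{\mathrm{F}} - 32)\times\tfrac{5}{9}, \qquad (450 - 32)\times\tfrac{5}{9} \approx 232\ ^{\circ}\mathrm{C}, \qquad (500 - 32)\times\tfrac{5}{9} = 260\ ^{\circ}\mathrm{C}$$

These agree with the 232 to 260 degrees Celsius range given in the text.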
**Currier** Currier: A currier is a specialist in the leather processing industry. After the tanning process, the currier applies techniques of dressing, finishing and colouring to a tanned hide to make it strong, flexible and waterproof. The leather is stretched and burnished to produce a uniform thickness and suppleness, and dyeing and other chemical finishes give the leather its desired colour. After currying, the leather is then ready to pass to the fashioning trades such as saddlery, bridlery, shoemaking and glovemaking.