**Kinesis (keyboard)**
Kinesis is a company based near Seattle that offers computer keyboards with ergonomic designs as alternatives to the traditional keyboard design. Most widely known among these is the contoured Advantage line, which features recessed keys in two bucket-like hollows that let the user's fingers reach keys with less effort. Moreover, the keys are laid out in straight vertical columns to avoid the need for lateral finger movements during typing. In addition, frequently used keys such as Enter, Alt, Backspace, and Control are moved to a central location so they can be pressed with the stronger thumbs rather than the little fingers.
Corporate history:
Kinesis was founded in 1991 with its headquarters in Bothell, Washington, a suburb of Seattle. The company released its first keyboard, the Model 100 (first in the long-running Contoured/Advantage line), in 1992. Kinesis's first adjustable keyboard, the Maxim, was released in 1997. In 2000, Kinesis entered a strategic alliance with Cramer, Inc. of Kansas City, which manufactured ergonomic seating. Kinesis took over production of the Cramer Interfaces chair arm-mounted split keyboard, releasing a revised version as the Kinesis Evolution in 2001.
Products:
Contoured / Advantage:
The original Model 100, released in 1992, featured a single-piece contoured design similar to the Maltron keyboard, with the keys laid out in a traditional QWERTY arrangement, separated into two clusters for the left and right hands. A 1993 article in PC Magazine described the US$690 (equivalent to $1,400 in 2022) keyboard's arrangement as having "the alphabet keys in precisely vertical (not diagonal) columns in two concave depressions. The Kinesis Keyboard also puts the Backspace, Delete, Enter, Space, Ctrl, Alt, Home, End, Page Up, and Page Down keys under your thumbs in the middle". The top row of keys, including the escape key and function keys, are small soft-touch keys with membrane dome switches. The remaining keys are standard size, and each has its own Cherry MX brown key switch, providing a tactile feel but no click. A piezo buzzer provides optional key click. All Kinesis contoured keyboards (except the Essential) support re-mapping of individual keys. Recent models can also switch to the Dvorak layout with the press of a special key combination, though keycaps printed with dual-legend QWERTY/Dvorak letters are included only on specific models.
By 1995, Kinesis had released its fourth-generation Model 130. In 1996, Kinesis added two sub-lines to its contoured keyboards: the Essential (introduced that June), which was not programmable but could be upgraded, and the Professional (November), which included on-board macro programming, a foot switch, and Keyware software for Windows. The Essential had a suggested retail price of US$265 (equivalent to $490 in 2022) while the Professional was more expensive at US$395 (equivalent to $740 in 2022). In July 1997, the mid-range Classic sub-line was launched to replace the Model 130, offering key remapping and macro programmability, but with half the memory of the Professional. Specific model numbers included the Essential (KB132PC), Classic (KB133PC), and Professional (KB134PC); Classic and Professional models also were available with QWERTY/Dvorak dual-legend keys (KB133PC/QD and KB134PC/QD, respectively). The Ergo Elan (KB333PC) was launched in 1999 with a revised (U.S. International) layout to accommodate European and Japanese users; it had basic programmability, corresponding to the capabilities of the Classic.
In 2002, Kinesis released the Advantage line, updating its contoured keyboards with a USB interface. The non-programmable model was dropped and the USB contoured keyboards were available in two versions: the Advantage was offered in black or white, and the Advantage Pro had a metallic silver finish with black keys. Specific models included the Advantage MPC (KB500USB) and Advantage Pro MPC (KB510USB); MPC indicates Macintosh and PC compatibility. The Advantage MPC model also was available with QWERTY/Dvorak dual-legend keys (KB500USB/QD) and in a version (KB500USB-LF) that uses Cherry MX red linear keyswitches. The Advantage2 line was released in 2016; one major change was implementing mechanical Cherry ML keyswitches for the function key row. Internally, the Advantage2 had a new "SmartSet" engine and expanded memory for macro programming. Like the preceding Advantage, the Advantage2 has options for dual-printed keycaps and low-force Cherry MX red keyswitches.
Maxim:
The Maxim adjustable keyboard was launched in 1997 to compete with similar adjustable staggered-column keyboards with split halves and/or tenting, such as the Apple Adjustable, Microsoft Natural, and IBM Adjustable (M15)/Lexmark Select-Ease keyboards. The front/back slope (6° and 12°), rotation (0–30°), and lateral tenting (0°, 8°, 14°) were adjustable to accommodate a wider variety of users; Kinesis advised new users to gradually increase split opening and tenting angles to acclimate to a more relaxed hand/wrist position. The Maxim is a tenkeyless QWERTY layout with the split halves bordered by F6, 6, T, G, and B on the left and F7, 7, Y, H, and N on the right; a numeric keypad with PS/2 interface was available separately. In addition, the Maxim is compatible with the Kinesis 3-action footswitch via a daisychain connection. As originally released, the Maxim was fitted with a PS/2 mini-DIN interface; versions compatible with Macintosh and Sun were released in bundles with interface conversion boxes. The keyboard could be reconfigured to use the Dvorak layout by selecting the appropriate driver in Windows. The accessory keypad connected using an RJ11 connector. A USB version was released in 2004; it uses a passive adapter for PS/2 compatibility. With the USB update, the keypad was revised to incorporate a USB hub. The Maxim was named a PC Magazine Editor's Choice that year in a comparison of keyboards.
Savant:
The Savant and Savant Professional were keypads with 20 and 58 unlabeled keys, respectively; each key could be programmed with up to 480 (Savant) or 915 (Savant Professional) keystrokes. They were designed and produced by P.I. Engineering and marketed by Kinesis; P.I. Engineering marketed identical products as the X-keys Desktop and Pro.
Interfaces / Evolution:
The Interfaces keyboard was developed by Cramer, Inc. as a split keyboard mounted on the user's chair arms. Kinesis assumed responsibility for manufacturing the Interfaces in 2000, then released an updated version in 2001 as the Kinesis Evolution. The Evolution added programming capabilities and included a bundled touchpad pointing device that could be mounted on either the left or right half. The cable connecting the keyboard to the computer was 10 ft (3.0 m) long, while the cable linking the halves was 5 ft (1.5 m); mechanical, tactile keyswitches were used. To improve flexibility, track-mounted (taking the place of an underdesk tray) and desktop configurations of the Evolution were released. The specific model number depends on the mount style chosen and the trackpad options, which were for the left, right, or both sides.
Freestyle:
The Freestyle keyboard line was launched in 2007; like the Maxim, the Freestyle was a staggered-column keyboard split into two halves, but each half was now an independent module, linked by a cable. The keyboard halves were sold as the Freestyle Solo, bundled with the PivotTether, which allowed any arbitrary split angle. Initially, several additional accessories were released with the Solo, including palm rests (AC706), the Incline (AC710, a base providing a fixed 10° tenting angle and adjustable split angle) and the VIP (AC720, adjustable 10°/15° tenting angle via lifters on each half). The next year, V3 (AC730) and Ascent (AC740) tenting accessories were also offered; the V3 allowed tenting angles of 5, 10, and 15° without palm rests, and the Ascent provided tenting from 20–90° in 10° increments. Also that year, Kinesis released a version of the Freestyle for Macintosh. The different versions could be distinguished by model number (KB700PB for the PC version, and KB700MW for the Macintosh version) and color (black for PC, white for Mac). In 2012, the Freestyle2 (KB800) was introduced in PC and Mac versions; externally, the most visible improvement was a reduction in keyboard height/thickness. A matching low-profile numeric keypad was available. The Freestyle2 Blue was released in 2015, updating the Freestyle2 with a multi-channel Bluetooth wireless connection. The Freestyle2 was released with an updated VIP3 (AC820) tenting accessory, but remained compatible with the V3 (AC730) and Ascent (AC740).
The Freestyle Edge was created by the Kinesis Gaming brand as a Kickstarter project in 2017; it incorporated mechanical keyswitches into the split-module Freestyle design. Backers had a choice of Cherry MX blue, brown, or red switches in a keyboard with blue backlighting. Updates followed in 2018 (Freestyle Pro, which deleted the backlighting and was aimed at business/office users with brown and red keyswitches) and 2019 (Freestyle Edge RGB, which added RGB backlighting). The Edge and Pro were released with updated VIP3 (AC910/AC925) and V3 (AC930) tenting accessories, the former bundled with palm rests.
Advantage 360:
In 2021, Kinesis announced the Advantage 360, which combined aspects of the Advantage (contoured, linear key layout) and Freestyle (individual split left/right key modules). It is available as the standard version, which uses USB-C ports to link the halves to each other and to the computer, and the Professional version, which uses Bluetooth Low Energy to link the halves together and to the computer; for the Professional, the link to the computer can also be made with a USB-C cable if desired. In addition, the Professional model uses the open-source ZMK programming engine, while the regular 360 uses the Kinesis SmartSet engine. The Professional has white backlighting, while the regular 360 has none. Preorders opened in December 2021, limited to 360 reservations.
Applications:
The Kinesis line of keyboards is marketed to those who type throughout the work day, and who thus perceive a higher risk of injuries such as repetitive strain injury (RSI). Kinesis keyboards were first used among computer programmers, who continue to be the primary market for the devices.
In popular media:
Kinesis keyboards have appeared in films and television shows, including Contact (1997), Men in Black (1997), Flubber (1997), and NetForce (1999).
**ALS Functional Rating Scale - Revised**
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease that typically affects adults around 54–67 years of age, although anyone can be diagnosed with the disease. People diagnosed with ALS live on average 2–4 years after diagnosis because of the disease's rapid progression. The progression and severity of ALS is rated by doctors on the ALS Functional Rating Scale, which has been revised and is now referred to as the ALSFRS-R.
Criteria:
The ALSFRS-R includes 12 questions, each scored from 0 to 4. A score of 0 on a question indicates no function, while a score of 4 indicates full function. The scale has been useful to doctors for diagnosing patients and measuring disease progression, and to researchers for selecting patients for a study and measuring the potential effects of a clinical trial. The ALSFRS-R has limitations, however, since it is not useful for comparing scores of people who present with different onset. Onset type describes the region of motor neurons first affected: most patients present with bulbar-onset or limb-onset ALS, while respiratory-onset ALS occurs only rarely. Since there are three different types of onset, ALSFRS-R scores are often grouped into categories depending on the type of onset, and the questions are likewise divided by onset type: questions 1 to 3 relate to bulbar onset, questions 4 to 9 to limb onset, and questions 10 to 12 to respiratory onset. Further developments of the ALSFRS-R include an extended version (ALSFRS-EX) to mitigate the floor effect and a version with explanatory notes, which is particularly suitable for self-assessment (ALSFRS-R-SE, self-explanatory).
Progression:
ALSFRS-R scores calculated at diagnosis can be compared with scores over time to determine the speed of progression. The rate of change, called the ALSFRS-R slope, can be used as a prognostic indicator. Although the ALSFRS-R score is a recognized prognostic indicator, it is more useful to compare several indicators, including forced vital capacity (FVC%) and the Sickness Impact Profile (SIP), to increase the accuracy of a given prognosis.
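As a rough illustration of how the total score and slope are computed, a minimal Java sketch follows. The 12-item, 0–4 scoring comes from the description above; the slope convention used here, points lost per month since symptom onset, is one common formulation and is an assumption rather than a detail given in this article.

```java
// Minimal sketch of ALSFRS-R scoring. Assumes 12 items scored 0-4
// (maximum 48) and the commonly used slope convention
// (48 - current score) / months since symptom onset; the latter is an
// illustrative assumption, not a definition taken from this article.
public class AlsfrsR {
    static int totalScore(int[] items) {
        if (items.length != 12) throw new IllegalArgumentException("12 items expected");
        int total = 0;
        for (int s : items) {
            if (s < 0 || s > 4) throw new IllegalArgumentException("scores are 0-4");
            total += s;
        }
        return total; // 0 (no function) .. 48 (full function)
    }

    // Estimated decline in points per month since symptom onset.
    static double slope(int currentScore, double monthsSinceOnset) {
        return (48 - currentScore) / monthsSinceOnset;
    }

    public static void main(String[] args) {
        int[] items = {4, 4, 3, 4, 3, 4, 4, 3, 3, 4, 4, 4};
        int total = totalScore(items);                      // 44
        System.out.println("total = " + total);
        System.out.println("slope = " + slope(total, 12));  // ~0.33 points/month
    }
}
```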
Relating the ALSFRS-R score to staging criteria is also useful in determining prognosis. King's system relies on the clinical spread of disease as a measure of progression while Milano-Torino Staging (MiToS) utilizes the subscores produced by the ALSFRS-R to define stages.
Questions:
The questions used to determine an individual's ALSFRS-R score are listed below.
**Quifenadine**
Quifenadine (Russian: хифенадин; trade name Phencarol, Фенкарол) is a second-generation antihistamine, marketed mainly in post-Soviet countries. Chemically, it is a quinuclidine derivative.
The drug has antiarrhythmic properties, probably due to the presence of a quinuclidine nucleus in the molecule's core. It acts as a calcium channel blocker and influences the activity of potassium channels. In children with cardiac arrhythmia, combination therapy with quifenadine and either amiodarone or propafenone was found to be more effective than monotherapy with either drug alone. Quifenadine is a derivative of quinuclidylcarbinol, which reduces the effects of histamine on organs and systems. Quifenadine is a competitive blocker of H1 receptors. In addition, it activates the diamine oxidase enzyme, which breaks down about 30% of endogenous histamine. This explains the effectiveness of quifenadine in patients insensitive to other antihistamines. The antihistaminic qualities of quifenadine are associated with the presence of a cyclic quinuclidine core in the structure and with the distance between the diphenylcarbinol group and the nitrogen atom. In antihistaminic activity and duration of action, quifenadine is superior to diphenhydramine. Quifenadine reduces the toxic effect of histamine, eliminates or weakens its bronchoconstrictor effect and its spasmodic effect on the smooth muscles of the intestines, has a moderate antiserotonin and a weak anticholinergic effect, and has well-defined antipruritic and desensitizing properties. Quifenadine weakens the hypotensive effect of histamine and its effect on capillary permeability; it does not directly affect cardiac activity or blood pressure, and it does not have a protective effect in aconitine-induced arrhythmias.
Indications:
Allergic rhinitis
Acute and chronic urticaria
Angioedema
Dermatitis
Atopic dermatitis
Pruritus
Synthesis:
Quifenadine is made from the same precursor as mequitazine. According to the use patent, a Grignard reaction between methyl quinuclidine-3-carboxylate (CAS 38206-86-9) (1) and phenylmagnesium bromide (2) gives the benzhydryl alcohol product in about 29% yield. The corresponding ethyl ester is CAS 6238-33-1.
**Machine tool**
A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". In other words, a machine tool is a power-driven cutting machine that manages the relative motion between cutting tool and workpiece needed to change the size and shape of the work material. The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools.
Today machine tools are typically powered by means other than human muscle (e.g., electrically, hydraulically, or via line shaft) and are used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation.
With their inherent precision, machine tools enabled the economical production of interchangeable parts.
Nomenclature and key concepts, interrelated:
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels may or may not fall under this definition, depending on how one views the headstock spindle itself; but the earliest historical records of a lathe with direct mechanical control of the cutting tool's path are of a screw-cutting lathe dating to about 1483. This lathe "produced screw threads out of wood and employed a true compound slide rest".
The mechanical toolpath guidance grew out of various root concepts: First is the spindle concept itself, which constrains workpiece or tool movement to rotation around a fixed axis. This ancient concept predates machine tools per se; the earliest lathes and potter's wheels incorporated it for the workpiece, but the movement of the tool itself on these machines was entirely freehand.
Second is the machine slide (tool way), which has many forms, such as dovetail ways, box ways, or cylindrical column ways. Machine slides constrain tool or workpiece movement linearly. If a stop is added, the length of the line can also be accurately controlled. (Machine slides are essentially a subset of linear bearings, although the language used to classify these various machine elements may be defined differently by some users in some contexts, and some elements may be distinguished by contrasting with others.)
Third is tracing, which involves following the contours of a model or template and transferring the resulting motion to the toolpath.
Fourth is cam operation, which is related in principle to tracing but can be a step or two removed from the traced element's matching the reproduced element's final shape. For example, several cams, no one of which directly matches the desired output shape, can actuate a complex toolpath by creating component vectors that add up to a net toolpath.
Van der Waals force between like materials is high. Freehand manufacture of square plates produces square, flat reference components for machine tool building, accurate to millionths of an inch, but of nearly no variety. The process of feature replication allows the flatness and squareness of a milling machine cross slide assembly, or the roundness, lack of taper, and squareness of the two axes of a lathe, to be transferred to a machined workpiece with accuracy and precision better than a thousandth of an inch, though not as fine as millionths of an inch. As the fit between sliding parts of a made product, machine, or machine tool approaches this critical thousandth of an inch, lubrication and capillary action combine to prevent Van der Waals force from welding like metals together, extending the lubricated life of sliding parts by a factor of thousands to millions; the damage caused by oil depletion in a conventional automotive engine is an accessible demonstration of the need, and in aerospace design, like-to-unlike material pairings are used along with solid lubricants to prevent Van der Waals welding from destroying mating surfaces. Given the modulus of elasticity of metals, the range of fit tolerances near one thousandth of an inch spans the range of constraint from, at one extreme, permanent assembly of two mating parts to, at the other, a free sliding fit of those same two parts.
Abstractly programmable toolpath guidance began with mechanical solutions, such as in musical box cams and Jacquard looms. The convergence of programmable mechanical control with machine tool toolpath control was delayed many decades, in part because the programmable control methods of musical boxes and looms lacked the rigidity for machine tool toolpaths. Later, electromechanical solutions (such as servos) and soon electronic solutions (including computers) were added, leading to numerical control and computer numerical control.
When considering the difference between freehand toolpaths and machine-constrained toolpaths, the concepts of accuracy and precision, efficiency, and productivity become important in understanding why the machine-constrained option adds value.
Matter-additive, matter-preserving, and matter-subtractive manufacturing can proceed in sixteen ways. Firstly, the work may be held either in a hand or in a clamp; secondly, the tool may be held either in a hand or in a clamp; thirdly, the energy can come either from the hand(s) holding the tool and/or the work, or from some external source, such as a foot treadle operated by the same worker, or a motor; and finally, the control can come either from the hand(s) holding the tool and/or the work, or from some other source, including computer numerical control. With two choices for each of four parameters, this enumerates sixteen (2⁴) types of manufacturing, where matter-additive might mean painting on canvas as readily as 3D printing under computer control, matter-preserving might mean forging at the coal fire as readily as stamping license plates, and matter-subtractive might mean casually whittling a pencil point as readily as precision grinding the final form of a laser-deposited turbine blade.
A precise description of what a machine tool is and does at an instant in time is given by a 12-component vector relating the linear and rotational degrees of freedom of the single workpiece and the single tool contacting that workpiece. To visualize this vector, it helps to arrange it as four rows of three columns, with the columns labeled x, y, and z and the rows labeled spin work, move work, spin tool, and move tool. The ordering of the labels is arbitrary; there is no agreement in the mechanical engineering literature on their order, but there are 12 degrees of freedom in a machine tool. This vector describes an instant in time, which may be a preparatory moment before the tool makes contact with the workpiece, or an engaged moment during which contact between work and tool requires a rather large input of power to get work done, which is why machine tools are large, heavy, and stiff. Because these vectors describe instantaneous degrees of freedom, the vector structure can express the changing mode of a machine tool as well as its fundamental structure. For example, imagine a lathe spinning a cylinder on a horizontal axis, with a tool ready to cut a face on that cylinder, in some preparatory moment. The operator of such a lathe would lock the x-axis on the carriage, establishing a new vector condition with a zero in the x slide position for the tool. The operator would then unlock the y-axis on the cross slide (assuming the example lathe is equipped with one), and apply some method of traversing the facing tool across the face of the cylinder, at a depth of cut and rotational speed that keep the cutting within the power range of the motor driving the lathe. The answer to what a machine tool is, then, is simple but highly technical, and it is unrelated to the history of machine tools.
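The 4×3 arrangement described above can be made concrete with a small data structure. The following is a minimal, hypothetical Java sketch (the class name, the choice of z as the spindle axis, and the field names are illustrative assumptions, not drawn from any machine tool standard); it models each of the 12 degrees of freedom as locked or free and walks through the lathe facing example.

```java
// Hypothetical sketch of the 12-degree-of-freedom state of a machine tool:
// 4 rows (spin work, move work, spin tool, move tool) x 3 columns (x, y, z).
// true = the degree of freedom is free (unlocked), false = locked.
public class MachineToolState {
    static final String[] ROWS = {"spin work", "move work", "spin tool", "move tool"};
    static final String[] COLS = {"x", "y", "z"};

    private final boolean[][] free = new boolean[4][3];

    void set(int row, int col, boolean isFree) { free[row][col] = isFree; }

    void print() {
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 3; c++)
                System.out.printf("%-9s %s: %s%n", ROWS[r], COLS[c],
                        free[r][c] ? "free" : "locked");
    }

    public static void main(String[] args) {
        // Facing on a lathe, as in the example above: the work spins about
        // one axis; the carriage (tool x travel) is locked; the cross slide
        // (tool y travel) is free so the tool can traverse the face.
        MachineToolState lathe = new MachineToolState();
        lathe.set(0, 2, true);   // spin work about z (assumed spindle axis)
        lathe.set(3, 0, false);  // move tool in x: carriage locked
        lathe.set(3, 1, true);   // move tool in y: cross slide free
        lathe.print();
    }
}
```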
The preceding gives an answer for what machine tools are; we may also consider what they do. Machine tools produce finished surfaces. They may produce any finish, from an arbitrary degree of very rough work to a specular, optical-grade finish whose further improvement is moot. Machine tools produce the surfaces comprising the features of machine parts by removing chips. These chips may be very rough or even as fine as dust. Every machine tool supports its removal process with a stiff, redundant, and therefore vibration-resistant structure, because each chip is removed in a semi-synchronous way, creating multiple opportunities for vibration to interfere with precision.
Humans are generally quite talented in their freehand movements; the drawings, paintings, and sculptures of artists such as Michelangelo or Leonardo da Vinci, and of countless other talented people, show that human freehand toolpath has great potential. The value that machine tools added to these human talents is in the areas of rigidity (constraining the toolpath despite thousands of newtons (pounds) of force fighting against the constraint), accuracy and precision, efficiency, and productivity. With a machine tool, toolpaths that no human muscle could constrain can be constrained; and toolpaths that are technically possible with freehand methods, but would require tremendous time and skill to execute, can instead be executed quickly and easily, even by people with little freehand talent (because the machine takes care of it). The latter aspect of machine tools is often referred to by historians of technology as "building the skill into the tool", in contrast to the toolpath-constraining skill being in the person who wields the tool. As an example, it is physically possible to make interchangeable screws, bolts, and nuts entirely with freehand toolpaths. But it is economically practical to make them only with machine tools.
In the 1930s, the U.S. National Bureau of Economic Research (NBER) referenced the definition of a machine tool as "any machine operating by other than hand power which employs a tool to work on metal". The narrowest colloquial sense of the term reserves it only for machines that perform metal cutting—in other words, the many kinds of [conventional] machining and grinding. These processes are a type of deformation that produces swarf. However, economists use a slightly broader sense that also includes metal deformation of other types that squeeze the metal into shape without cutting off swarf, such as rolling, stamping with dies, shearing, swaging, riveting, and others. Thus presses are usually included in the economic definition of machine tools. For example, this is the breadth of definition used by Max Holland in his history of Burgmaster and Houdaille, which is also a history of the machine tool industry in general from the 1940s through the 1980s; he was reflecting the sense of the term used by Houdaille itself and other firms in the industry. Many reports on machine tool export and import and similar economic topics use this broader definition.
The colloquial sense implying [conventional] metal cutting is also growing obsolete because of changing technology over the decades. The many more recently developed processes labeled "machining", such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, or even plasma cutting and water jet cutting, are often performed by machines that could most logically be called machine tools. In addition, some of the newly developed additive manufacturing processes, which are not about cutting away material but rather about adding it, are done by machines that are likely to end up labeled, in some cases, as machine tools. In fact, machine tool builders are already developing machines that include both subtractive and additive manufacturing in one work envelope, and retrofits of existing machines are underway. The natural language use of the terms varies, with subtle connotative boundaries. Many speakers resist using the term "machine tool" to refer to woodworking machinery (joiners, table saws, routing stations, and so on), but it is difficult to maintain any true logical dividing line, and therefore many speakers accept a broad definition. It is common to hear machinists refer to their machine tools simply as "machines". Usually the mass noun "machinery" encompasses them, but sometimes it is used to imply only those machines that are being excluded from the definition of "machine tool". This is why the machines in a food-processing plant, such as conveyors, mixers, vessels, dividers, and so on, may be labeled "machinery", while the machines in the factory's tool and die department are instead called "machine tools" in contradistinction.
Regarding the 1930s NBER definition quoted above, one could argue that its specificity to metal is obsolete, as it is quite common today for particular lathes, milling machines, and machining centers (definitely machine tools) to work exclusively on plastic cutting jobs throughout their whole working lifespan. Thus the NBER definition above could be expanded to say "which employs a tool to work on metal or other materials of high hardness". And its specificity to "operating by other than hand power" is also problematic, as machine tools can be powered by people if appropriately set up, such as with a treadle (for a lathe) or a hand lever (for a shaper). Hand-powered shapers are clearly "the 'same thing' as shapers with electric motors except smaller", and it is trivial to power a micro lathe with a hand-cranked belt pulley instead of an electric motor. Thus one can question whether power source is truly a key distinguishing concept; but for economics purposes, the NBER's definition made sense, because most of the commercial value of the existence of machine tools comes about via those that are powered by electricity, hydraulics, and so on. Such are the vagaries of natural language and controlled vocabulary, both of which have their places in the business world.
History:
Forerunners of machine tools included bow drills and potter's wheels, which had existed in ancient Egypt prior to 2500 BC, and lathes, known to have existed in multiple regions of Europe since at least 1000 to 500 BC. But it was not until the later Middle Ages and the Age of Enlightenment that the modern concept of a machine tool—a class of machines used as tools in the making of metal parts, and incorporating machine-guided toolpath—began to evolve. Clockmakers of the Middle Ages and renaissance men such as Leonardo da Vinci helped expand humans' technological milieu toward the preconditions for industrial machine tools. During the 18th and 19th centuries, and even in many cases in the 20th, the builders of machine tools tended to be the same people who would then use them to produce the end products (manufactured goods). However, from these roots also evolved an industry of machine tool builders as we define them today, meaning people who specialize in building machine tools for sale to others.
Historians of machine tools often focus on a handful of major industries that most spurred machine tool development. In order of historical emergence, they have been firearms (small arms and artillery); clocks; textile machinery; steam engines (stationary, marine, rail, and otherwise) (the story of how Watt's need for an accurate cylinder spurred Wilkinson's boring machine is discussed by Roe); sewing machines; bicycles; automobiles; and aircraft. Others could be included in this list as well, but they tend to be connected with the root causes already listed. For example, rolling-element bearings are an industry of themselves, but this industry's main drivers of development were the vehicles already listed—trains, bicycles, automobiles, and aircraft; and other industries, such as tractors, farm implements, and tanks, borrowed heavily from those same parent industries.
Machine tools filled a need created by textile machinery during the Industrial Revolution in England in the middle to late 1700s. Until that time, machinery was made mostly from wood, often including gearing and shafts. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron. Cast iron could be cast in molds for larger parts, such as engine cylinders and gears, but was difficult to work with a file and could not be hammered. Red hot wrought iron could be hammered into shapes. Room temperature wrought iron was worked with a file and chisel and could be made into gears and other complex parts; however, hand working lacked precision and was a slow and expensive process.
James Watt was unable to obtain an accurately bored cylinder for his first steam engine, trying for several years until John Wilkinson invented a suitable boring machine in 1774, which bored Boulton & Watt's first commercial engine in 1776. The advance in the accuracy of machine tools can be traced to Henry Maudslay and was refined by Joseph Whitworth. That Maudslay had established the manufacture and use of master plane gages in his shop (Maudslay & Field), located on Westminster Road south of the Thames in London, about 1809 was attested to by James Nasmyth, who was employed by Maudslay in 1829 and documented their use in his autobiography.
The process by which the master plane gages were produced dates back to antiquity but was refined to an unprecedented degree in the Maudslay shop. The process begins with three square plates, each given an identification (e.g., 1, 2, and 3). The first step is to rub plates 1 and 2 together with a marking medium (called bluing today), revealing the high spots, which are removed by hand scraping with a steel scraper until no irregularities are visible. This alone would not produce true plane surfaces but a "ball and socket" concave-concave and convex-convex fit, since such a mechanical fit, like two perfect planes, can slide over its mate and reveal no high spots. The rubbing and marking are repeated after rotating plate 2 relative to plate 1 by 90 degrees, to eliminate concave-convex "potato-chip" curvature. Next, plate 3 is compared and scraped to conform to plate 1 in the same two trials; in this manner plates 2 and 3 become identical. Next, plates 2 and 3 are checked against each other to determine what condition exists, whether both plates are "balls" or "sockets" or "chips" or a combination. These are then scraped until no high spots exist, and then compared again to plate 1. Repeating this process of comparing and scraping the three plates could produce plane surfaces accurate to within millionths of an inch (the thickness of the marking medium).
The traditional method of producing the surface gages used an abrasive powder rubbed between the plates to remove the high spots, but it was Whitworth who contributed the refinement of replacing the grinding with hand scraping. Sometime after 1825, Whitworth went to work for Maudslay, and it was there that Whitworth perfected the hand scraping of master surface plane gages. In his paper presented to the British Association for the Advancement of Science at Glasgow in 1840, Whitworth pointed out the inherent inaccuracy of grinding, due to the lack of control over, and thus the unequal distribution of, the abrasive material between the plates, which produced uneven removal of material from the plates.
With the creation of master plane gages of such high accuracy, all critical components of machine tools (i.e., guiding surfaces such as machine ways) could then be compared against them and scraped to the desired accuracy.
The first machine tools offered for sale (i.e., commercially available) were constructed by Matthew Murray in England around 1800. Others, such as Henry Maudslay, James Nasmyth, and Joseph Whitworth, soon followed the path of expanding their entrepreneurship from manufactured end products and millwright work into the realm of building machine tools for sale.
Important early machine tools included the slide rest lathe, screw-cutting lathe, turret lathe, milling machine, pattern tracing lathe, shaper, and metal planer, which were all in use before 1840. With these machine tools the decades-old objective of producing interchangeable parts was finally realized. An important early example of something now taken for granted was the standardization of screw fasteners such as nuts and bolts. Before about the beginning of the 19th century, these were used in pairs, and even screws of the same machine were generally not interchangeable. Methods were developed to cut screw thread to a greater precision than that of the feed screw in the lathe being used. This led to the bar length standards of the 19th and early 20th centuries.
American production of machine tools was a critical factor in the Allies' victory in World War II. Production of machine tools tripled in the United States in the war. No war was more industrialized than World War II, and it has been written that the war was won as much by machine shops as by machine guns. The production of machine tools is concentrated in about 10 countries worldwide: China, Japan, Germany, Italy, South Korea, Taiwan, Switzerland, the US, Austria, Spain, and a few others. Machine tool innovation continues in several public and private research centers worldwide.
Drive power sources:
[A]ll the turning of the iron for the cotton machinery built by Mr. Slater was done with hand chisels or tools in lathes turned by cranks with hand power.
Machine tools can be powered from a variety of sources. Human and animal power (via cranks, treadles, treadmills, or treadwheels) were used in the past, as was water power (via water wheel); however, following the development of high-pressure steam engines in the mid 19th century, factories increasingly used steam power. Factories also used hydraulic and pneumatic power. Many small workshops continued to use water, human, and animal power until electrification after 1900. Today most machine tools are powered by electricity; hydraulic and pneumatic power are sometimes used, but this is uncommon.
Automatic control:
Machine tools can be operated manually, or under automatic control. Early machines used flywheels to stabilize their motion and had complex systems of gears and levers to control the machine and the piece being worked on. Soon after World War II, the numerical control (NC) machine was developed. NC machines used a series of numbers punched on paper tape or punched cards to control their motion. In the 1960s, computers were added to give even more flexibility to the process. Such machines became known as computerized numerical control (CNC) machines. NC and CNC machines could precisely repeat sequences over and over, and could produce much more complex pieces than even the most skilled tool operators. Before long, the machines could automatically change the specific cutting and shaping tools that were being used. For example, a drill machine might contain a magazine with a variety of drill bits for producing holes of various sizes. Previously, machine operators would usually have to either manually change the bit or move the work piece to another station to perform these different operations. The next logical step was to combine several different machine tools together, all under computer control. These are known as machining centers, and they have dramatically changed the way parts are made.
Examples:
Examples of machine tools are:
Broaching machine
Drill press
Gear shaper
Hobbing machine
Hone
Lathe
Screw machines
Milling machine
Shear (sheet metal)
Shaper
Bandsaw
Saws
Planer
Stewart platform mills
Grinding machines
Multitasking machines (MTMs)—CNC machine tools with many axes that combine turning, milling, grinding, and material handling into one highly automated machine tool
When fabricating or shaping parts, several techniques are used to remove unwanted metal. Among these are:
Electrical discharge machining
Grinding (abrasive cutting)
Multiple edge cutting tools
Single edge cutting tools
Other techniques are used to add desired material. Devices that fabricate components by selective addition of material are called rapid prototyping machines.
Machine tool manufacturing industry:
The worldwide market for machine tools was approximately $81 billion in production in 2014, according to a survey by market research firm Gardner Research. The largest producer of machine tools was China, with $23.8 billion of production, followed by Germany and Japan neck and neck at $12.9 billion and $12.88 billion respectively. South Korea and Italy rounded out the top 5 producers with revenue of $5.6 billion and $5 billion respectively.
**Twin pattern**
In software engineering, the Twin pattern is a software design pattern that allows developers to model multiple inheritance in programming languages that do not support multiple inheritance. This pattern avoids many of the problems with multiple inheritance.
Definition:
Instead of having a single class which is derived from two super-classes, have two separate sub-classes, each derived from one of the two super-classes. These two sub-classes are closely coupled, so both can be viewed together as a Twin object having two ends.
Applicability:
The twin pattern can be used:
to model multiple inheritance in a language in which multiple inheritance is not supported
to avoid some problems of multiple inheritance
Structure:
There will be two or more parent classes to be inherited from. There will be sub-classes, each of which is derived from one of the super-classes. The sub-classes are mutually linked via fields, and each sub-class may override the methods inherited from its super-class. New methods and fields are usually declared in one sub-class.
(Diagrams contrasting the typical multiple-inheritance structure with the corresponding Twin pattern structure are omitted here.)
Collaborations:
Each child class is responsible for the protocol inherited from its parent. It handles the messages from this protocol and forwards other messages to its partner class.
Clients of the twin pattern reference one of the twin objects directly and the other via its twin field. Clients that rely on the protocols of parent classes communicate with objects of the respective child class.
Sample code:
The following code is a sketched implementation of a computer game board with moving balls.
The sketch comprises a class for the game board (GameBoard), the GameItem base class, the BallItem class, and the BallThread class.
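The original code sketches are not reproduced in this copy of the article. The following is a minimal Java reconstruction under the assumptions stated in the prose above: GameItem supplies the game-item protocol, thread behavior comes from java.lang.Thread, and BallItem/BallThread are the mutually linked twins. Method bodies and the movement logic are illustrative assumptions, not the original code.

```java
// Minimal sketch of the Twin pattern for a board game with moving balls.
// BallItem forwards thread-related requests to its BallThread twin, and
// BallThread forwards game-related requests (movement) to its BallItem twin.

abstract class GameItem {
    protected int x, y;                 // position on the board
    abstract void draw();               // each item knows how to draw itself
    abstract void click();              // react to a mouse click
}

// First twin: inherits the game-item protocol, delegates thread behavior.
class BallItem extends GameItem {
    BallThread twin;                    // link to the partner object
    private boolean suspended = false;

    @Override void draw() {
        System.out.println("ball at (" + x + ", " + y + ")");
    }

    // Clicking a ball toggles its movement by forwarding to the thread twin.
    @Override void click() {
        suspended = !suspended;
        if (suspended) twin.suspendBall(); else twin.resumeBall();
    }

    void move(int dx, int dy) { x += dx; y += dy; draw(); }
}

// Second twin: inherits thread behavior, delegates game-item behavior.
class BallThread extends Thread {
    BallItem twin;                      // link back to the partner object
    private volatile boolean paused = false;

    void suspendBall() { paused = true; }
    synchronized void resumeBall() { paused = false; notify(); }

    @Override public void run() {
        while (true) {
            synchronized (this) {
                while (paused) {
                    try { wait(); } catch (InterruptedException e) { return; }
                }
            }
            twin.move(1, 1);            // forward movement to the item twin
            try { Thread.sleep(100); } catch (InterruptedException e) { return; }
        }
    }
}

public class GameBoard {
    public static void main(String[] args) {
        BallItem item = new BallItem();
        BallThread thread = new BallThread();
        item.twin = thread;             // wire the two ends of the twin
        thread.twin = item;
        thread.start();
    }
}
```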
Implementation of the Twin pattern:
The following issues should be considered: Data abstraction - the partner classes of the twin class have to be tightly coupled, as they probably have to access each other's private fields and methods. In Java, this can be achieved by placing the partner classes into a common package and providing package visibility for the required fields and methods. In Modula-3 and in Oberon, partner classes can be placed in a common module.
Efficiency - Since the Twin pattern uses composition which requires message forwarding, the Twin pattern may be less efficient than inheritance. However, since multiple inheritance is slightly less efficient than single inheritance anyway, the overhead will not be a major problem.
Cyclic reference - The Twin pattern relies on each twin referencing the other twin, which creates a cyclic reference. Some languages may require such cyclic references to be handled specially to avoid a memory leak; for example, one reference may need to be made "weak" to allow the cycle to break, as sketched below.
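A minimal Java sketch of the weak-reference technique follows. The class names are hypothetical, and note the caveat in the comments: breaking the cycle matters chiefly in reference-counted languages, while Java's tracing garbage collector collects cycles on its own, so the sketch only illustrates the shape of the technique.

```java
import java.lang.ref.WeakReference;

// One twin holds only a weak reference to its partner, so the pair does not
// form a strong reference cycle. This is what prevents a leak under reference
// counting; Java's tracing GC collects cycles anyway, so this is illustrative.
class WeakTwinA {
    private WeakReference<WeakTwinB> twin;
    void setTwin(WeakTwinB b) { twin = new WeakReference<>(b); }
    WeakTwinB twin() { return twin.get(); }  // may return null once collected
}

class WeakTwinB {
    WeakTwinA twin;                          // the strong direction of the link
}
```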
**Chromosome 5q deletion syndrome**
Chromosome 5q deletion syndrome is an acquired, hematological disorder characterized by loss of part of the long arm (q arm, band 5q33.1) of human chromosome 5 in bone marrow myelocyte cells. This chromosome abnormality is most commonly associated with the myelodysplastic syndrome.
It should not be confused with "partial trisomy 5q", though both conditions have been observed in the same family.
This should not be confused with the germline cri du chat (5p deletion) syndrome, which is a deletion of the short arm of the fifth chromosome.
Presentation:
The 5q- syndrome is characterized by macrocytic anemia, often a moderate thrombocytosis, erythroblastopenia, megakaryocyte hyperplasia with nuclear hypolobation, and an isolated interstitial deletion of chromosome 5. The 5q- syndrome is found predominantly in females of advanced age.
Causes:
Several genes in the deleted region appear to play a role in the pathogenesis of the 5q- syndrome. Haploinsufficiency of RPS14 plays a central role and contributes to the anemia via both p53-dependent and p53-independent tumor suppressor effects. Other genes in this region include miR-145 and miR-146a, whose deletion is associated with the megakaryocytic dysplasia and thrombocytosis seen in the 5q- syndrome; SPARC, which has antiproliferative and antiangiogenic effects; and the candidate tumor suppressors EGR1, CTNNA1, and CDC25C.
Histology:
This syndrome affects bone marrow cells causing treatment-resistant anemia and myelodysplastic syndromes that may lead to acute myelogenous leukemia. Examination of the bone marrow shows characteristic changes in the megakaryocytes. They are more numerous than usual, small and mononuclear. There may be accompanying erythroid hypoplasia in the bone marrow.
Treatment:
Lenalidomide has activity in 5q- syndrome and is FDA approved for red blood cell (RBC) transfusion-dependent anemia due to low or intermediate-1 (int-1) risk myelodysplastic syndrome (MDS) associated with chromosome 5q deletion with or without additional cytogenetic abnormalities. There are several possible mechanisms that link the haploinsufficiency molecular lesions with lenalidomide sensitivity.
Prognosis:
Most affected people have a stable clinical course but are often transfusion dependent.
**Freemium**
Freemium, a portmanteau of the words "free" and "premium", is a pricing strategy by which a basic product or service is provided free of charge, but money (a premium) is charged for additional features, services, or virtual (online) or physical (offline) goods that expand the functionality of the free version of the software. This business model has been used in the software industry since the 1980s. A subset of this model used by the video game industry is called free-to-play.
Origin:
The business model has been in use for software since the 1980s. The term freemium to describe this model appears to have been created much later, in response to a 2006 blog post by venture capitalist Fred Wilson summarizing the model: "Give your service away for free, possibly ad supported but maybe not, acquire a lot of customers very efficiently through word of mouth, referral networks, organic search marketing, etc., then offer premium-priced value-added services or an enhanced version of your service to your customer base." Jarid Lukin of Alacra, one of Wilson's portfolio companies, then suggested the term "freemium" for this model.
In 2009, Chris Anderson published the book Free, which examines the popularity of this business model. As well as for traditional proprietary software and services, it is now also often used by Web 2.0 and open source companies. In 2014, Eric Seufert published the book Freemium Economics, which attempts to deconstruct the economic principles of the freemium model and prescribe a framework for implementing them into software products. The freemium model is closely related to tiered services. Notable examples include LinkedIn, Badoo, Discord, and, in the form of a "soft" paywall, The New York Times and La Presse+. This is often in a time-limited or feature-limited version to promote a paid-for full version. The model is particularly suited to software, as the cost of distribution is negligible.
A freemium model is sometimes used to build a consumer base when the marginal cost of producing extra units is low. Thus little is lost by giving away free software licenses as long as significant cannibalization is avoided. Other examples include free-to-play games – video games that can be downloaded without paying. Video game publishers of free-to-play games rely on other means to generate revenue – such as optional in-game virtual items that can be purchased by players to enhance gameplay or aesthetics.
Types of product limitations:
Ways in which the product or service may be limited or restricted in the free version include: Limited features: A free video chat client may not include three-way video calling. Most free-to-play games fall into this category, as they offer virtual items that are either impossible or very slow to purchase with in-game currency but can be instantly purchased with real-world money.
Limited capacity: For example, SQL Server Express is restricted to databases of 10 GB or less.
Limited use license: For example, most Autodesk or Microsoft software products with full features are free for students with an educational license. (See: Microsoft Imagine.) Some apps, like CCleaner, are free for personal use only.
Limited use time: Most free-to-play games permit the user to play the game consecutively for a limited number of levels or turns; the player must either wait a period of time to play more or purchase the right to play more.
Limited support: Priority or real-time technical support may not be available for non-paying users. For example, Comodo offers all its software products free of charge. Its premium offerings only add various kinds of technical support.
Limited or no access to online services that are only available by purchasing periodic subscriptions.
Some software and services make all of the features available for free for a trial period, and then at the end of that period revert to operating as a feature-limited free version (e.g. Online Armor Personal Firewall). The user can unlock the premium features on payment of a license fee, as per the freemium model. Some businesses use a variation of the model known as "open core", in which the unsupported, feature-limited free version is also open-source software, but versions with additional features and official support are commercial software.
Significance:
In June 2011, PC World reported that traditional anti-virus software had started to lose market share to freemium anti-virus products. By September 2012, all but two of the 50 highest-grossing apps in the Games section of Apple's iTunes App Store supported in-app purchases, leading Wired to conclude that game developers were now required to choose between including such purchases or foregoing a very substantial revenue stream. Beginning in 2013, the digital distribution platform Steam began to add numerous free-to-play and early-access games to its library, many of which utilized freemium marketing for their in-game economies. Due to criticism that the multiplayer games falling under this category were pay-to-win in nature or were low-quality and never finished development, Valve has since added stricter rules to its early-access and free-to-play policies.
Criticism of freemium games:
Freemium games have come under criticism from players and critics. Many are labelled with the derogatory term "pay-to-win", which criticizes freemium games for giving an advantage to players who pay more money, as opposed to those who have more skill. Criticism also extends to the way the business model can appear unregulated, to the point of encouraging prolific spending. Freemium games are often designed so that players who are not using premium features are actively frustrated, delayed, or required to invest much more time to acquire in-game currency or upgrades.
In November 2014, the animated TV series South Park aired an episode entitled "Freemium Isn't Free". The episode satirized the business model for encouraging predatory game design tactics. In 2015, Nintendo released two of its own freemium games in the Pokémon series, based on other standalone purchasable titles. With Pokémon Rumble World, Nintendo took a different approach, making it possible to complete the entire game without buying premium credits but retaining them as an option so players can proceed through the game at a pace that suits them.
**Biomesh**
Biomesh (or biologic mesh) is a type of surgical mesh made from an organic biomaterial (such as porcine dermis, porcine small intestine submucosa, bovine dermis or pericardium, and the dermis or fascia lata of a cadaveric human). Biologic mesh is primarily indicated for several types of hernia repair, including inguinal and ventral hernias, hernia prophylaxis, and contaminated hernia repairs. However, it has also been used in pelvic floor dysfunction, parotidectomy, and reconstructive plastic surgery. The development of biologic mesh largely has derived from the need of a biocompatible material that addresses "the problems associated with a permanent synthetic mesh, including chronic inflammation, foreign body reaction, fibrosis, and mesh infection." As of 2015, however, the efficacy and optimal use of biological mesh products remains in question.
Development, benefits, and drawbacks:
The idea of using organic materials for surgical mesh has been around since at least the late 1950s, though researchers soon learned the materials they tested weren't biocompatible. Research into more compatible biomaterials occurred in the succeeding decades, including the search for cellular-based materials extracted from humans and animals. For example, in 1980, research presented at the first World Biomaterials Congress examined the use of sheep dermal collagen to construct biological mesh for reconstructive surgery. Since then, "research for developing and improvising the biological material required for the production of these meshes" has been ongoing.
Typical advantages attributed to biologic meshes include a reduced risk of infection compared to synthetic meshes, and the absorption of the mesh into the resulting scar as part of cellular ingrowth. Commonly described drawbacks include the high cost of the material and its uncertain clinical effectiveness, particularly when the high cost is considered. An August 2015 literature review published by the Canadian Agency for Drugs and Technologies in Health addressed these drawbacks, concluding that "there remains a lack of sufficient evidence to guide clinical practice regarding the use of biological mesh products... Further rigorously designed RCTs are required to clarify comparative clinical effectiveness and safety of the many available biological mesh products for most surgical indications in which their use has been suggested."
Contamination considerations:
The presence of contamination may limit the applicability of permanent synthetic mesh in some procedures such as hernia repair. Biologic mesh may be acceptable for this purpose or for placement in open wounds as a staged closure in complex abdominal wall reconstruction. There is limited data in both of these areas, with some noting a high risk of hernia recurrence and associated infection. The data is mostly limited to animal models and case series. However, the lack of suitable alternatives has made biologic mesh attractive for contaminated field hernia repair.
**Euler operator (digital geometry)**
Euler operator (digital geometry):
In solid modeling and computer-aided design, the Euler operators modify the graph of connections to add or remove details of a mesh while preserving its topology. They were named by Baumgart after the Euler–Poincaré characteristic. He chose a set of operators sufficient to create useful meshes; some lose information and so are not invertible.
Euler operator (digital geometry):
The boundary representation for a solid object, its surface, is a polygon mesh of vertices, edges and faces. Its topology is captured by the graph of the connections between faces. A given mesh may actually contain multiple unconnected shells (or bodies); each body may be partitioned into multiple connected components each defined by their edge loop boundary. To represent a hollow object, the inside and outside surfaces are separate shells.
Euler operator (digital geometry):
Let the number of vertices be V, edges E, faces F, components H, shells S, and let the genus be G (S and G correspond to the b0 and b2 Betti numbers respectively). Then, to denote a meaningful geometric object, the mesh must satisfy the generalized Euler–Poincaré formula V − E + F = H + 2(S − G). The Euler operators preserve this characteristic. The Eastman paper lists a set of basic operators and their effects on the various terms.
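To make the invariant concrete, here is a minimal C++ sketch (the struct, the choice of operator, and the cube census are illustrative assumptions, not Eastman's exact formulation): an operator that adds one vertex and one edge leaves V − E unchanged and therefore preserves the formula.

```cpp
#include <cassert>

// Mesh census using the article's symbols: V vertices, E edges, F faces,
// H components, S shells, G genus.
struct MeshCounts {
    int V, E, F, H, S, G;
    // The generalized Euler-Poincare formula: V - E + F = H + 2(S - G).
    bool eulerPoincareHolds() const {
        return V - E + F == H + 2 * (S - G);
    }
};

// One classic Euler operator, "make edge and vertex" (MEV): it adds one
// vertex and one edge, so V - E is unchanged and the invariant is preserved.
void makeEdgeVertex(MeshCounts& m) {
    ++m.V;
    ++m.E;
}

int main() {
    // A cube: 8 vertices, 12 edges, 6 faces, one shell, genus 0,
    // and (taking H = 0 here) no extra components.
    MeshCounts cube{8, 12, 6, 0, 1, 0};
    assert(cube.eulerPoincareHolds());   // 8 - 12 + 6 = 2 = 0 + 2(1 - 0)
    makeEdgeVertex(cube);
    assert(cube.eulerPoincareHolds());   // still holds after the operator
}
```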
Geometry:
Euler operators modify the mesh's graph, creating or removing faces, edges, and vertices according to simple rules while preserving the overall topology, thus maintaining a valid boundary (i.e. not introducing holes). The operators themselves do not define how geometric or graphical attributes (e.g. position, gradient, uv texture coordinates) map to the new graph; these depend on the particular implementation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nipple wrench (black powder)**
Nipple wrench (black powder):
In black-powder firearms, a nipple wrench is a tool used to unscrew the nipples that hold percussion caps. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lxrun**
Lxrun:
In Unix computing, lxrun is a compatibility layer to allow Linux binaries to run on UnixWare, SCO OpenServer and Solaris without recompilation. It was created by Mike Davidson. It has been an open source software project since 1997, and is available under the Mozilla Public License. Both SCO and Sun Microsystems began officially supporting lxrun in 1999.
Timeline:
August 22, 1997: lxrun is cited as a proof of concept of cross-platform binary compatibility at the 86open conference hosted by SCO in Santa Cruz, CA.
August 29, 1997: lxrun's first mention on Usenet, in comp.unix.sco.misc. Most notably, the post mentions lxrun's availability in source and binary form from the SCO Skunkware FTP site. A later post in the thread mentions contributions by various authors, both inside and outside of SCO.
October 1, 1997: The official lxrun website is established.
June 19, 1998: Ronald Joe Record, Michael Hopkirk, and Steven Ginzburg present a paper on lxrun at the USENIX 1998 Technical Conference in New Orleans, LA.
March 1, 1999: SCO announces Linux compatibility in UnixWare 7 and demonstrates lxrun at LinuxWorld Expo and Conference in San Jose, CA.
May 12, 1999: Sun Microsystems announces support for Linux binaries on Solaris using lxrun.
Status:
According to the official lxrun website, as of 2003 lxrun is in "maintenance" mode, meaning that it is no longer being actively developed. Reasons cited for the declining interest in lxrun include the wide availability of real Linux machines, and the availability of more capable emulation systems, such as SCO's Linux Kernel Personality (LKP), OpenSolaris BrandZ, and various virtual machine solutions. Newer Linux applications and host operating systems are not officially supported by lxrun. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HD 103774**
HD 103774:
HD 103774 is a star with a close orbiting planetary companion in the southern constellation of Corvus. With an apparent visual magnitude of 7.13, it is too faint to be readily visible to the naked eye. Parallax measurements provide a distance estimate of 184 light years from the Sun. It is drifting closer with a radial velocity of −3 km/s. The star has an absolute magnitude of 3.41. The stellar classification of HD 103774 is F6 V, indicating this is an F-type main-sequence star that is generating energy through core hydrogen fusion. It is a young star with age estimates ranging from 260 million up to 2 billion years of age. The star is mildly active and is spinning with a projected rotational velocity of 8 km/s. It has 1.4 times the mass and 1.56 times the radius of the Sun. The star is radiating 3.7 times the luminosity of the Sun from its photosphere at an effective temperature of 6,391 K.
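As a sanity check, the quoted absolute magnitude follows from the apparent magnitude and distance via the standard distance modulus; with 184 ly ≈ 56.4 pc, the small discrepancy with the quoted 3.41 comes from rounding of the distance:

```latex
M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
  \approx 7.13 - 5\log_{10}(5.64)
  \approx 7.13 - 3.76
  \approx 3.37
```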
Planetary system:
This star has been under observation as part of a survey using the HARPS spectrogram for a period of 7.5 years. In 2012, the detection of an exoplanetary companion using the radial velocity method was announced. This result was published in January 2013. The object is orbiting close to the host star at a distance of 0.07 AU (10 Gm) with a period of just 5.9 days and an eccentricity (ovalness) of 0.09. As the inclination of the orbital plane is unknown, only a lower limit on the mass can be determined; this lower bound is about equal to the mass of Saturn.
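The reason only a lower bound follows is that the radial velocity method measures just the line-of-sight component of the star's reflex motion, so the fit constrains only the product of the true mass and the sine of the unknown orbital inclination i:

```latex
m_{\min} = m_{\text{true}}\,\sin i
\quad\Longrightarrow\quad
m_{\text{true}} = \frac{m_{\min}}{\sin i} \;\ge\; m_{\min},
\qquad 0 < \sin i \le 1
```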
Planetary system:
There is marginal evidence for an infrared excess at a wavelength of 12 μm, which would constrain the likely grain size of any circumstellar dust. More measurements are needed to confirm this signal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spontaneous order**
Spontaneous order:
Spontaneous order, also called self-organization in the hard sciences, is the spontaneous emergence of order out of seeming chaos. The term "self-organization" is more often used for physical changes and biological processes, while "spontaneous order" is typically used to describe the emergence of various kinds of social orders in human social networks from the behavior of a combination of self-interested individuals who are not intentionally trying to create order through planning. Proposed examples of systems which evolved through spontaneous order or self-organization include the evolution of life on Earth, language, crystal structure, the Internet, Wikipedia, and a free market economy. Spontaneous orders are to be distinguished from organizations as being scale-free networks, while organizations are hierarchical networks. Further, organizations can be (and often are) a part of spontaneous social orders, but the reverse is not true. While organizations are created and controlled by specific individuals or groups, spontaneous orders are created and controlled by no one in particular. In economics and the social sciences, spontaneous order is defined as "the result of human actions, not of human design". In economics, spontaneous order is an equilibrium behavior among self-interested individuals, which is most likely to evolve and survive, obeying the natural selection process "survival of the likeliest".
History:
According to Murray Rothbard, the philosopher Zhuangzi (369–286 BCE) was the first to propose the idea of spontaneous order. Zhuangzi rejected the authoritarianism of Confucianism, writing that there "has been such a thing as letting mankind alone; there has never been such a thing as governing mankind [with success]." He articulated an early form of spontaneous order, asserting that "good order results spontaneously when things are let alone", a concept later "developed particularly by Proudhon in the nineteenth [century]". The thinkers of the Scottish Enlightenment developed and inquired into the idea of the market as a spontaneous order. In 1767, the sociologist and historian Adam Ferguson described society as the "result of human action, but not the execution of any human design". However, the term "spontaneous order" seems to have been coined by Michael Polanyi in his essay "The Growth of Thought in Society", Economica 8 (November 1941): 428–56. The Austrian School of Economics, led by Carl Menger, Ludwig von Mises, and Friedrich Hayek, made it a centerpiece of its social and economic thought. Hayek's theory of spontaneous order is the product of two related but distinct influences that do not always tend in the same direction. As an economic theorist, he offers rational explanations; as a legal and social theorist, he leans, by contrast, very heavily on a conservative and traditionalist approach that instructs us to submit blindly to a flow of events over which we can have little control.
Proposed examples:
Markets:
Many classical-liberal theorists, such as Hayek, have argued that market economies are a spontaneous order, and that they represent "a more efficient allocation of societal resources than any design could achieve." They claim this spontaneous order (referred to as the extended order in Hayek's The Fatal Conceit) is superior to any order a human mind can design due to the specifics of the information required. Centralized statistical data, they suppose, cannot convey this information because the statistics are created by abstracting away from the particulars of the situation. According to Norman P. Barry, this is illustrated in the concept of the invisible hand proposed by Adam Smith in The Wealth of Nations. Lawrence Reed, president of the Foundation for Economic Education, a libertarian think tank in the United States, argues that spontaneous order "is what happens when you leave people alone—when entrepreneurs... see the desires of people... and then provide for them." He further claims that "[entrepreneurs] respond to market signals, to prices. Prices tell them what's needed and how urgently and where. And it's infinitely better and more productive than relying on a handful of elites in some distant bureaucracy."
Anarchism:
Anarchists argue that the state is in fact an artificial creation of the ruling elite, and that true spontaneous order would arise if it were eliminated. This is construed by some but not all as the ushering in of organization by anarchist law. In the anarchist view, such spontaneous order would involve the voluntary cooperation of individuals. According to the Oxford Dictionary of Sociology, "the work of many symbolic interactionists is largely compatible with the anarchist vision, since it harbours a view of society as spontaneous order."
Sobornost:
The concept of spontaneous order can also be seen in the works of the Russian Slavophile movements and specifically in the works of Fyodor Dostoyevsky. In Russia, the concept of an organic social manifestation was expressed under the idea of sobornost. Sobornost was also used by Leo Tolstoy as an underpinning to the ideology of Christian anarchism. The concept was used to describe the uniting force behind the peasant or serf Obshchina in pre-Soviet Russia.
Proposed examples:
Other examples:
Perhaps the most prominent exponent of spontaneous order is Friedrich Hayek. In addition to arguing the economy is a spontaneous order, which he termed a catallaxy, he argued that common law and the brain are also types of spontaneous orders. In The Republic of Science, Michael Polanyi also argued that science is a spontaneous order, a theory further developed by Bill Butos and Thomas McQuade in a variety of papers. Gus DiZerega has argued that democracy is the spontaneous order form of government, David Emmanuel Andersson has argued that religion in places like the United States is a spontaneous order, and Troy Camplin argues that artistic and literary production are spontaneous orders. Paul Krugman has also contributed to spontaneous order theory in his book The Self-Organizing Economy, in which he claims that cities are self-organizing systems. The credibility thesis suggests that the credibility of social institutions is the driving factor behind the endogenous self-organization of institutions and their persistence. Different rules of the game give rise to different types of spontaneous order. If an economic society obeys equal-opportunity rules, the resulting spontaneous order is reflected in an exponential income distribution; that is, for an equal-opportunity economic society, the exponential income distribution is the most likely to evolve and survive. By analyzing datasets of household income from 66 countries and Hong Kong SAR, ranging from Europe to Latin America, North America, and Asia, Tao et al. found that, for all of these countries, the income structure for the great majority of the population (the low and middle income classes) follows an exponential income distribution.
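A minimal Monte Carlo sketch of how an exponential distribution can emerge from equal-opportunity random exchange (an illustration in the spirit of random-exchange models, not Tao et al.'s actual methodology; all parameters are arbitrary):

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// Agents start with equal wealth and repeatedly split randomly chosen
// pairwise pots; total wealth is conserved. The stationary distribution
// of such a process is approximately exponential (Boltzmann-Gibbs).
int main() {
    std::mt19937 rng(42);
    std::vector<double> wealth(10000, 100.0);
    std::uniform_int_distribution<std::size_t> pick(0, wealth.size() - 1);
    std::uniform_real_distribution<double> frac(0.0, 1.0);

    for (long t = 0; t < 5000000; ++t) {
        std::size_t i = pick(rng), j = pick(rng);
        if (i == j) continue;
        double pot = wealth[i] + wealth[j];
        double share = frac(rng) * pot;      // random, unbiased split
        wealth[i] = share;
        wealth[j] = pot - share;
    }

    // Crude histogram in 50-unit bins: counts fall off roughly exponentially.
    int bins[10] = {0};
    for (double w : wealth)
        ++bins[std::min(9, static_cast<int>(w / 50.0))];
    for (int b = 0; b < 10; ++b)
        std::printf("[%3d, %3d): %d\n", b * 50, (b + 1) * 50, bins[b]);
}
```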
Criticism:
Roland Kley writes about Hayek's theory of spontaneous order that "the foundations of Hayek's liberalism are so incoherent" because the "idea of spontaneous order lacks distinctness and internal structure." The three components of Hayek's theory are lack of intentionality, the "primacy of tacit or practical knowledge", and the "natural selection of competitive traditions." While the first feature, that social institutions may arise in some unintended fashion, is indeed an essential element of spontaneous order, the second two are only implications, not essential elements. Hayek's theory has also been criticized for not offering a moral argument, and his overall outlook contains "incompatible strands that he never seeks to reconcile in a systematic manner." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vidya (philosophy)**
Vidya (philosophy):
Vidya (Sanskrit: विद्या, IAST: vidyā) figures prominently in all texts pertaining to Indian philosophy – meaning science, learning, knowledge, and scholarship. Most importantly, it refers to valid knowledge, which cannot be contradicted, and true knowledge, which is the intuitively-gained knowledge of the self. Vidya is not mere intellectual knowledge, for the Vedas demand understanding.
Meaning:
Vidya primarily means "correct knowledge" in any field of science, learning, philosophy, or any factual knowledge that cannot be disputed or refuted. Its root is vid (Sanskrit: विद्), which means "to reason upon", knower, finding, knowing, acquiring or understanding.
Hinduism:
In Hindu philosophy, vidyā refers to the knowledge of the soul or spiritual knowledge; it refers to the study of the six schools of Hindu philosophy: Nyaya, Yoga, Vaisheshika, Samkhya, Purvamimamsa and Uttaramimamsa. The process of gaining the knowledge of the Atman cannot commence unless one has explored the Prānavidya or Agnividya to the full in all its numerous phases; the progression through vidyā or upasana to jnana was always the eternal order indicated by the Upanishads. Jnāna dawns after the completion and perfection of the being through the vidyās; then, one crosses over beyond birth and death, having already destroyed the bonds of death.
Hinduism:
Vedas:
During the Vedic period, vidyādāna or the gift for the sake of education was considered to be the best of gifts, possessing a higher religious efficacy than even the gift of land. Vidyā comes from the root vid ("to know"); it therefore means knowledge, science, learning, lore, scholarship and philosophy.
Hinduism:
There are basically four Vidyas: Trayi (triple), which is the study of the Vedas and their auxiliary texts; Anviksiki, which is logic and metaphysics; Dandaniti, which is the science of governance; and Vārttā, the practical arts such as agriculture, commerce, medicine etc. Vidyā gives insight; in the spiritual sphere it leads to salvation, in the mundane sphere to progress and prosperity. Vidyā illuminates the mind and shatters illusions, increases intelligence, power and efficiency; develops the intellect and makes it more refined; it effects a complete transformation as the root of all happiness and as the source of illumination and power. The word Vidyā does not occur in the Rig Veda; it occurs in the Atharvaveda, in the Brahmana portions of the Yajurveda, and in the Upanishads.
Hinduism:
Agni Vidyā:
Agni Vidyā, or the science of fire, is said to be the greatest discovery of the ancient Indians, who gained direct experience of divine fire through continuous research, contemplation, observation and experimentation; their experience led them to discover ways of using this knowledge to heal and nurture the outer and the inner worlds. To them fire is sacred, and because of the pervasive nature of fire all things are sacred. Body and mind, which are extensions of the fire that the soul spontaneously emits, are also sacred. Within the body, the most significant centres of fire are more subtle than those of the sense organs. They are called the chakras, which are seven fields of sacred fire. The understanding of the role of fire without and within gives proper self-understanding, an understanding gained through yogic practices. The performance of yajnas is the karma-kānda aspect of agni vidyā. All rituals follow set rules and conditions. The main function of the fire ritual is to make an offering to nature's finest forces and divinities that fill the space of inner consciousness; fire carries oblations to these forces and divinities. The fire has seven tongues, all having unique qualities. The gods, goddesses, divinities and nature's forces are grouped in seven main categories which match the qualities of the seven tongues of fire.
Hinduism:
In Vedanta and the Upanishads:
Atmaikatva, or the absolute oneness of the self, is the theme of the entire Advaita Vedanta, which distinguishes six pramanas or means of valid knowledge; but this vidyā or knowledge of Brahman is guhahita, gahavareshta, i.e. set in the secret place and hidden in its depth, unattainable except through adhyātma-yoga, the meditation centering upon the nature of the self. Vedanta literature is only preparatory to it; it dispels ignorance and makes the mind receptive but does not reveal the truth, and is therefore an indirect means of knowledge. The oneness of the self, which is self-established and self-shining, is called vidyā in cosmic reference, which reveals the true nature of Brahman, the self-shining pure consciousness which is not a visaya ('object matter or content') but the one subject, transcendent of all conventional subjects and objects. The Self or the Atman is to be sought; the Self is to be enquired into, known and understood.
Hinduism:
Hierarchy of knowledge:
The sage of the Mundaka Upanishad (Verse I.1.4), more in the context of ritualistic than of epistemological concerns, states that there are two kinds of knowledge (vidyā) to be attained, the higher (para) and the lower (apara). Para vidyā, the higher knowledge, is knowledge of the Absolute (Brahman, Atman); Apara, the lower knowledge, is knowledge of the world – of objects, events, means, ends, virtues and vices. Para vidyā has Reality as its content; Apara vidyā, the phenomenal world. According to Advaita Vedanta, Para vidyā, by the nature of its content, possesses a unique quality of ultimacy that annuls any supposed ultimacy attached to any other form of knowledge, and is intuitively gained as self-certifying. Once Brahman is realized, all other modes of knowledge are seen to be touched by avidyā, the root of ignorance. In this context, Vidyā means true knowledge.
Hinduism:
However, it is argued that the Advaita Vedanta interpretation does not answer the final question: what is the reality or truth-value of avidyā, or what is the substratum that is the basis or cause of avidyā?
Valid knowledge:
The Upanishads teach that the knowledge of difference is avidyā or ignorance, and the knowledge of identity is true knowledge or vidyā or valid knowledge, which leads to life eternal. For the Cārvākas, perception is the only means of valid knowledge (pramana). Vadi Deva Suri of the Jaina school defines valid knowledge as determinate cognition which apprehends itself and an object and which is capable of prompting activity that attains a desirable object or rejects an undesirable object; the result of valid knowledge is the cessation of ignorance. Vaisheshikas recognized four kinds of valid knowledge – Perception, Inference, Recollection and Intuition. The Mimamsa schools introduced the concepts of the intrinsic validity of knowledge (svatah-pramanya) and the extrinsic validity of knowledge (paratah-pramanya), but agreed that the validity of knowledge cannot be determined by the knowledge of any special excellence in its cause, or the knowledge of its harmony with the real nature of its object, or the knowledge of a fruitful action. Sankara accepted perception, inference, scriptural testimony, comparison, presumption and non-apprehension as the six sources of knowledge, and concluded that the knowledge which corresponds with the real nature of its object is valid. The Atman is the reality in the empirical self as the ever-present foundational subject-objectless universal consciousness which sustains the empirical self.
Hinduism:
Further significance:
In upāsanā the movement starts from the outer extremities and gradually penetrates into the inmost recesses of the soul, and the whole investigation is conducted in two spheres, in the subject as well as in the object, in the individual as well as in the world, in the aham as also in the idam, in the adhyātma and also in the adhidaiva spheres, and conducted synthetically as well as analytically, through apti as well as samrddhi, which the Bhagavad Gita calls yoga and vibhooti. The vidyās do not rest content in knowing the reality simply as a whole but proceed further to comprehend it in all its infinite details too. The higher includes the lower grades and adds something more to it and never rejects it; the lower has its fulfilment in the higher and finds its consummation there but never faces extinction. All forms of contemplation have only one aim: to lead to the Supreme Knowledge, and hence they are termed vidyās; through vidyā, which is amrta, one attains immortality (Shvetashvatara Upanishad Verse V.1). Dahara Vidyā, Udgitha Vidyā and Madhu Vidyā are the synthetic way, whereas the analytic way is signified by the Sleeping Man of the Garga-Ajātsatru episode and by the Five Sheaths, which ways show that the world and the individual spring from the same eternal source.
Hinduism:
In Hindu Tantra:
In Hinduism, goddesses are personifications of the deepest level of power and energy. The concept of Shakti, in its most abstract terms, relates to the energetic principle of ultimate reality, the dynamic aspect of the divine. This concept surfaces in the Kena Upanishad as the Goddess Umā bestowing Brahma-vidya on Indra; when linked with shakti and maya, she embodies the power of illusion (maya), encompassing ignorance (avidya) and knowledge (vidyā), and is thereby presented with a dual personality. According to the Saktas, Māyā is basically a positive, creative, magical energy of the Goddess that brings forth the universe. The ten Mahāvidyās are bestowers or personifications of transcendent and liberating religious knowledge; the term Vidyā in this context refers to power, the essence of reality, and the mantras. The gentle and motherly forms of the Goddess Sri Vidyā are 'right-handed'. When the awareness of the 'exterior' (Shiva) combined with the "I" encompasses the entire space as "I", it is called sada-siva-tattva. When later, discarding the abstraction of the Self and the exterior, clear identification with the insentient space takes place, it is called isvara-tattva; the investigation of these two last steps is pure vidyā (knowledge). Māyā, which has been identified with Prakrti in the Shvetashvatara Upanishad, represents its three gunas; also identified with avidyā, a term that primarily means the dark abyss of non-being and secondarily the mysterious darkness of the unmanifest state, Māyā binds through avidyā and releases through vidyā.
Buddhism:
In Theravada Buddhism, vidyā means 'non-dual awareness' of the three marks of existence. In Tibetan Buddhism, the word rigpa, meaning vidyā, similarly refers to non-dualistic awareness or intrinsic awareness.
Buddhism:
Vidyā mantras:
In Vajrayana texts, mantras exist in three forms: guhyā (secret), vidyā (knowledge), and dhāraṇī (mnemonic). Male Buddhist tantric deities are represented by the grammatically masculine vidyā, while female Buddhist tantric deities are represented by the grammatically feminine dhāraṇī. The vidyā mantras constitute the knowledge and the mind of all the Buddhas and that which possesses the dharma-dhātu (essence of dhamma), and it is this knowledge, according to Cabezon, which "pacifies the suffering experienced in the existential world (saṃsāra) and the heaps of faults such as desire".
Buddhism:
Pañcavidyā:
In Buddhism, the pañcavidyā (Sanskrit; Chinese: 五明; pinyin: wǔ-míng) or "five sciences" are the five major classes of knowledge (vidyā) which bodhisattvas are said to have mastered. A recognised master of all five sciences is afforded the title paṇḍita. The five sciences are: the "science of language" (śabda vidyā; shēng-míng, 聲明); the "science of logic" (hetu vidyā; yīn-míng, 因明); the "science of medicine" (cikitsā vidyā; yào-míng, 藥明); the "science of fine arts and crafts" (śilpa-karma-sthāna vidyā; gōngqiǎo-míng, 工巧明); and the "inner science" of spirituality (adhyātma vidyā; nèi-míng, 內明), which relates to the study of the Tripiṭaka. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Canine influenza**
Canine influenza:
Canine influenza (dog flu) is influenza occurring in canine animals. Canine influenza is caused by varieties of influenzavirus A, such as equine influenza virus H3N8, which was discovered to cause disease in canines in 2004. Because of the lack of previous exposure to this virus, dogs have no natural immunity to it. Therefore, the disease is rapidly transmitted between individual dogs. Canine influenza may be endemic in some regional dog populations of the United States. It is a disease with a high morbidity (incidence of symptoms) but a low incidence of death. A newer form was identified in Asia during the 2000s and has since caused outbreaks in the US as well. It is a mutation of H3N2 that adapted from its avian influenza origins. Vaccines have been developed for both strains.
History:
The highly contagious equine influenza A virus subtype H3N8 was found to have been the cause of Greyhound race dog fatalities from a respiratory illness at a Florida racetrack in January 2004. The exposure and transfer apparently occurred at horse-racing tracks, where dog racing had also occurred. This was the first evidence of an influenza A virus causing disease in dogs. However, serum collected from racing Greyhounds between 1984 and 2004 and tested for canine influenza virus (CIV) in 2007 had positive tests going as far back as 1999. CIV possibly caused some of the respiratory disease outbreaks at tracks between 1999 and 2003. H3N8 was also responsible for a major dog-flu outbreak in New York state in all breeds of dogs. From January to May 2005, outbreaks occurred at 20 racetracks in 10 states (Arizona, Arkansas, Colorado, Florida, Iowa, Kansas, Massachusetts, Rhode Island, Texas, and West Virginia). As of August 2006, dog flu has been confirmed in 22 U.S. states, including pet dogs in Wyoming, California, Connecticut, Delaware, and Hawaii. Three areas in the United States may now be considered endemic for CIV due to continuous waves of cases: New York, southern Florida, and northern Colorado/southern Wyoming. No evidence shows the virus can be transferred to people, cats, or other species. H5N1 (avian influenza) was also shown to cause death in one dog in Thailand, following ingestion of an infected duck. The H3N2 virus made its first appearance in Canada at the start of 2018, following the importation of two unknowingly infected dogs from South Korea. Following this incident, reports were made public of the virus possibly spreading, with two other dogs showing alarming symptoms. By March 5, 25 cases of infection had reportedly spread, although the number is thought to be closer to approximately 100. Influenza A viruses are enveloped, negative-sense, single-stranded RNA viruses. Genome analysis has shown that H3N8 was transferred from horses to dogs and then adapted to dogs through point mutations in the genes. The incubation period is two to five days, and viral shedding may occur for seven to ten days following the onset of symptoms. It does not induce a persistent carrier state. In late 2022, together with Bordetella bronchiseptica and other respiratory pathogens, the H3N2 canine flu virus experienced a surge in canine infections. This was partially due to increased human travel and reopened offices following the relaxation of COVID-19 pandemic public health measures, leading to large numbers of dogs being placed together in kennels and doggy day care centers. Changing pet ownership behaviors also led to overcrowded animal shelters, which had been emptied at the height of the pandemic.
Symptoms:
About 80% of dogs infected with H3N8 show symptoms, usually mild (the other 20% have subclinical infections), and the fatality rate for Greyhounds in early outbreaks was 5 to 8%, although the overall fatality rate in the general pet and shelter population is probably less than 1%. Symptoms of the mild form include a cough that lasts for 10 to 30 days and possibly a greenish nasal discharge. Dogs with the more severe form may have a high fever and pneumonia. Pneumonia in these dogs is not caused by the influenza virus, but by secondary bacterial infections. The fatality rate of dogs that develop pneumonia secondary to canine influenza can reach 50% if not given proper treatment. Necropsies in dogs that die from the disease have revealed severe hemorrhagic pneumonia and evidence of vasculitis.
Diagnosis:
The presence of an upper respiratory tract infection in a dog that has been vaccinated for the other major causes of kennel cough increases suspicion of infection with canine influenza, especially in areas where the disease has been documented. A serum sample from a dog suspected of having canine influenza can be submitted to a laboratory that performs PCR tests for this virus.
Vaccine:
In June 2009, the United States Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) approved the first canine influenza vaccine. This vaccine must be given twice initially with a two-week break, then annually thereafter.
H3N2 version:
A second form of canine influenza was first identified during 2006 in South Korea and southern China. The virus is an H3N2 variant that adapted from its avian influenza origins. An outbreak in the US was first reported in the Chicago area during 2015. Outbreaks were reported in several US states during the spring and summer of 2015 and had been reported in 25 states by late 2015. As of April 2015, the question of whether vaccination against the earlier strain offered protection had not been resolved. The US Department of Agriculture granted conditional approval for a canine H3N2-protective vaccine in December 2015. In March 2016, researchers reported that this strain had infected cats and suggested that it may be transmitted between them.
Human risk:
The H3N2 virus as a stand-alone virus is deemed harmless to humans. According to the Windsor-Essex County Health Unit, it is only when the H3N2 virus strain combines with a human strain of flu that "those strains could combine to create a new virus." This possibility is considered unlikely; however, if an infected dog also contracts a human flu, a slight chance exists. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Immunoadsorption**
Immunoadsorption:
Immunoadsorption is a procedure that removes specific antibodies, such as blood group antibodies, from the blood; it is used to remove pathogenic antibodies. The procedure generally takes about three to four hours. Immunoadsorption was developed in the 1990s as a method of extracorporeal removal of molecules from the blood, in particular molecules of the immune system.
A number of different devices/columns exist on the market, each with a different active component to which the molecule of interest attaches, allowing for selectivity. Immunoadsorption may be used as an alternative to plasma exchange in certain conditions. Evidence of benefit is lacking in those with kidney problems. Concerns include that it is expensive.
Procedure:
Dual column system:
Blood first passes to a plasma filter. The plasma then passes on to an immunoadsorption column before returning to the patient. While the plasma is passing through one column, the second column is being regenerated. Once the first column is saturated, the flow switches to the second column while the first is regenerated.
The procedure thus involves two steps: first, the separation of plasma from the blood cells; second, passage of the plasma through the immunoadsorption column. Treatment prescriptions for immunoadsorption are based on plasma volumes, with different recommendations for each condition; depending on the condition being treated, sessions can be daily or intermittent.
The therapy:
Immunoadsorption can be used in various autoimmune-mediated neurological diseases in order to remove autoimmune antibodies and other pathological constituents from the patient's blood. It is increasingly recognized as a more specific alternative and generally appreciated for its potentially advantageous safety profile. Immunoadsorption is also used in kidney transplantation, either for the preparation of the ABO-incompatible or highly sensitized kidney transplant candidate before transplantation, or for the treatment of antibody-mediated rejection after transplantation.
Indication:
The most frequently encountered complication of immunoadsorption is an allergic reaction to the filter or adsorption column. Medication may be given before the procedure to minimize the risk.
Indication:
Other side effects during the treatment can include dizziness, nausea, or feeling cold. The use of immunoadsorption as a medical procedure is still limited in some countries of the world, especially in North America. The additional costs of immunoadsorption are balanced by the reduced length of stay as well as the reduced need for plasma-substituting solutions and for the handling of side effects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Robbins algebra**
Robbins algebra:
In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by ∨, and a single unary operation, usually denoted by ¬, satisfying the following axioms for all elements a, b, and c:
Associativity: a ∨ (b ∨ c) = (a ∨ b) ∨ c
Commutativity: a ∨ b = b ∨ a
Robbins equation: ¬(¬(a ∨ b) ∨ ¬(a ∨ ¬b)) = a
For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra".
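The easy direction, that every Boolean algebra satisfies the Robbins equation, can be checked mechanically: an equational identity holds in all Boolean algebras exactly when it holds in the two-element one, so a brute-force check over {0, 1} suffices. A small C++ sketch:

```cpp
#include <cstdio>

// Verify the Robbins equation not(not(a or b) or not(a or not b)) = a
// over the two-element Boolean algebra {0, 1}. (The hard converse, that
// the Robbins axioms imply Boolean algebra, is McCune's 1996 result.)
int main() {
    bool ok = true;
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b) {
            int lhs = !(!(a | b) | !(a | !b));
            std::printf("a=%d b=%d lhs=%d\n", a, b, lhs);
            if (lhs != a) ok = false;
        }
    if (ok) std::printf("Robbins equation holds on {0,1}\n");
    else    std::printf("violated!\n");
}
```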
History:
In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of the associativity and commutativity axioms above, plus Huntington's equation: ¬(¬a ∨ b) ∨ ¬(¬a ∨ ¬b) = a.
From these axioms, Huntington derived the usual axioms of Boolean algebra.
History:
Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra. ∨ would interpret Boolean join and ¬ Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins algebra." Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem, but failed to find a proof or counterexample.
History:
William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998) simplified McCune's machine proof. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RF switch**
RF switch:
An RF switch or microwave switch is a device to route high frequency signals through transmission paths. RF (radio frequency) and microwave switches are used extensively in microwave test systems for signal routing between instruments and devices under test (DUT). Incorporating a switch into a switch matrix system enables signals to be routed from multiple instruments to one or more DUTs. This allows multiple tests to be performed with the same setup, eliminating the need for frequent connects and disconnects. The entire testing process can be automated, increasing throughput in high-volume production environments.
RF switch:
Like other electrical switches, RF and microwave switches provide different configurations for many different applications. Below is a list of typical switch configurations and their usage:
Single pole, double throw (SPDT or 1:2) switches route signals from one input to two output paths.
Multiport switches or single pole, multiple throw (SPnT) switches route a single input to multiple (three or more) output paths.
RF switch:
Transfer switches or double pole, double throw (DPDT) switches can serve various purposes.
Bypass switches insert or remove a test component from a signal path. RF CMOS switches are crucial to modern wireless telecommunication, including wireless networks and mobile communication devices. Infineon's bulk CMOS RF switches sell over 1 billion units annually, reaching a cumulative 5 billion units as of 2018.
Technologies:
The two main kinds of RF and microwave switches have different capabilities:
Electromechanical switches are based on the simple theory of electromagnetic induction. They rely on mechanical contacts as their switching mechanism.
Solid state switches are electronic switching devices based on semiconductor technology (e.g. MOSFET, PIN diode). They function similarly to electromechanical switches except that they have no moving parts.
Parameters:
Frequency range:
RF and microwave applications range in frequency from 100 MHz for semiconductor applications to 60 GHz for satellite communications. Broadband accessories increase test system flexibility by extending frequency coverage. However, frequency is always application dependent, and a broad operating frequency may be sacrificed to meet other critical parameters. For example, a network analyzer may perform a 1 ms sweep for an insertion loss measurement, so for this application settling time or switching speed becomes the critical parameter for ensuring measurement accuracy.
Parameters:
Insertion loss:
In addition to proper frequency selection, insertion loss is critical to testing. Losses greater than 1 or 2 dB will attenuate peak signal levels and increase rising and falling edge times. A low insertion loss system can be achieved by minimizing the number of connectors and through-paths, or by selecting low insertion loss devices for system configuration. As power is expensive at higher frequencies, electromechanical switches provide the lowest possible loss along the transmission path.
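For intuition, insertion losses expressed in decibels simply add along a path, and the total converts back to a delivered-power ratio; a short sketch with hypothetical connector and switch losses:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical path: two connectors at 0.3 dB each and a switch at 0.5 dB.
    const double lossesDb[] = {0.3, 0.5, 0.3};
    double totalDb = 0.0;
    for (double l : lossesDb) totalDb += l;        // dB losses add in cascade
    double powerRatio = std::pow(10.0, -totalDb / 10.0);
    std::printf("total insertion loss = %.1f dB -> %.1f%% of power delivered\n",
                totalDb, powerRatio * 100.0);      // 1.1 dB -> about 77.6%
}
```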
Parameters:
Return loss:
Return loss is caused by impedance mismatch between circuits. At microwave frequencies, the material properties as well as the dimensions of a network element play a significant role in determining the impedance match or mismatch caused by the distributed effect. Switches with excellent return loss performance ensure optimum power transfer through the switch and the entire network.
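The standard relations can be sketched directly (the impedance values here are hypothetical): the reflection coefficient follows from the mismatch against the 50-ohm reference, and return loss and VSWR follow from it:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double z0 = 50.0;                          // reference impedance, ohms
    const double zl = 55.0;                          // hypothetical port impedance
    double gamma = std::fabs((zl - z0) / (zl + z0)); // reflection coefficient
    double returnLossDb = -20.0 * std::log10(gamma);
    double vswr = (1.0 + gamma) / (1.0 - gamma);
    std::printf("|Gamma| = %.4f, return loss = %.1f dB, VSWR = %.2f\n",
                gamma, returnLossDb, vswr);          // ~0.048, ~26.4 dB, ~1.10
}
```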
Repeatability:
Low insertion loss repeatability reduces sources of random errors in the measurement path, which improves measurement accuracy. The repeatability and reliability of a switch guarantee measurement accuracy and can cut the cost of ownership by reducing calibration cycles and increasing test system uptime.
Parameters:
Isolation:
Isolation is the degree of attenuation from an unwanted signal detected at the port of interest. Isolation becomes more important at higher frequencies. High isolation reduces the influence of signals from other channels, sustains the integrity of the measured signal, and reduces system measurement uncertainties. For instance, an RF switch matrix may need to route a signal to a spectrum analyzer for measurement at −70 dBm and to simultaneously route another signal at +20 dBm. In this case, switches with high isolation, 90 dB or more, will keep the measurement integrity of the low-power signal.
Parameters:
Switching speed:
Switching speed is defined as the time needed to change the state of a switch port (arm) from "ON" to "OFF" or from "OFF" to "ON".
Parameters:
Settling time:
As switching time only specifies an end value of 90% of the settled/final value of the RF signal, settling time is often highlighted in solid state switch performance, where the need for accuracy and precision is more critical. Settling time is measured to a level closer to the final value. Widely used margin-to-final values for settling time are 0.01 dB (99.77% of the final value) and 0.05 dB (98.86% of the final value). This specification is commonly used for GaAs FET switches because they have a gate lag effect caused by electrons becoming trapped on the surface of the GaAs.
Parameters:
Power handling:
Power handling defines the ability of a switch to handle power and is very dependent on the design and materials used. There are different power handling ratings for switches, such as hot switching, cold switching, average power and peak power. Hot switching occurs when RF/microwave power is present at the ports of the switch at the time of switching. Cold switching occurs when the signal power is removed before switching. Cold switching results in lower contact stress and longer life.
Parameters:
Termination:
A 50-ohm load termination is critical in many applications, since each open unused transmission line has the possibility to resonate. This is important when designing a system that works up to 26 GHz or higher frequencies, where switch isolation drops considerably. When the switch is connected to an active device, the reflected power of an unterminated path could possibly damage the source.
Parameters:
Electromechanical switches are categorized as terminated or unterminated. Terminated switches: when a selected path is closed, all other paths are terminated with 50 ohm loads, and the current to all the solenoids is cut off. Unterminated switches reflect power. Solid state switches are categorized as absorptive or reflective. Absorptive switches incorporate a 50 ohm termination in each of the output ports to present a low VSWR in both the OFF and ON states. Reflective switches conduct RF power when the diode is reverse biased and reflect RF power when forward biased.
Parameters:
Video leakage:
Video leakage refers to the spurious signals present at the RF ports of the switch when it is switched without an RF signal present. These signals arise from the waveforms generated by the switch driver and, in particular, from the leading edge voltage spike required for high-speed switching of PIN diodes. The amplitude of the video leakage depends on the design of the switch and the switch driver.
Parameters:
Operating life:
A long operating life reduces cost per cycle and budgetary constraints, allowing manufacturers to be more competitive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Translabyrinthine approach**
Translabyrinthine approach:
The translabyrinthine approach is a surgical approach for treating serious disorders of the cerebellopontine angle (CPA), the most common location of posterior fossa tumors, especially acoustic neuroma. In this approach, the semicircular canals and vestibule, including the utricle and the saccule of the inner ear, are removed, causing complete hearing loss in the operated ear. The procedure is typically performed by a team of surgeons, including a neurotologist (an ear, nose, and throat surgeon specializing in skull base surgery) and a neurosurgeon.
Background:
The translabyrinthine approach was developed by William F. House, M.D., who began doing dissections in the laboratory with the aid of magnification and subsequently developed the first middle cranial fossa and then the translabyrinthine approach for the removal of acoustic neuroma.
In this approach, the semicircular canals and vestibule, including the utricle and the saccule of the inner ear, are removed with a surgical drill, causing complete sensorineural hearing loss in the operated ear. The facial nerve, which innervates the muscles of the face, is preserved in a higher percentage of cases than with other approaches.
Prior to the translabyrinthine approach, in the early 1960s acoustic neuromas were treated using a suboccipital approach without the aid of an operating microscope. With the introduction of the translabyrinthine approach, mortality rates in the State of California decreased from 40% to 1%. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hole.io**
Hole.io:
Hole.io is a 2018 arcade physics puzzle game with battle royale mechanics created by French studio Voodoo for Android and iOS.
Players control a hole in the ground that can move around the map. By consuming various objects, holes will increase in size, allowing players to consume larger objects as well as the smaller holes of other players.
Hole.io:
Critics praised the game when it debuted, and it took the top spot in the free apps section on the App Store and on Google Play. Some critics, however, characterized the game as a clone of the 2018 independent game Donut County. It has also been criticized for being promoted as a multiplayer game when the other "players" are likely computer-controlled NPCs.
Gameplay:
Hole.io combines several gameplay mechanics. In Classic mode, the player's objective is to become the largest hole by the end of a two-minute round by traveling around the area and consuming trees, humans, cars, and other objects, which fall into the hole if appropriately sized. Gradually the hole becomes larger and capable of sucking in buildings and smaller holes. If an object is too big, it will not fall in and might block the way, preventing other objects from going through. Players need to use the game's real-time physics to their advantage and optimize their path for effective growth. Other holes can consume the player's hole, resulting in "death" and respawning several seconds later. "Battle" mode is a battle royale mode that pits the player against multiple opponents with the goal of being the last hole standing. While players can still consume the environment, the goal is to eliminate all other holes. In both Classic and "Battle" modes, the player competes against computer-controlled opponents rather than other players. Additionally, a solo mode exists which allows players to play alone with the goal of consuming as close to 100% of the city as possible within two minutes. The simple mechanics of the game put it in the hyper-casual genre.
Comparison to Donut County:
Donut County is a 2018 independent video game that was in development for at least six years before its release on 28 August 2018, sparking allegations by that game's developer that Voodoo copied his idea. Both games use the same mechanic of a hole in the ground swallowing objects to grow bigger; however, Donut County additionally features a storyline and a cast of characters which Hole.io lacks. Conversely, Hole.io adds a cityscape. According to Variety, Hole.io developer Voodoo's entire range of games consists of clones of other games. Voodoo managed to secure a $200 million investment from Goldman Sachs shortly after releasing Hole.io. In an August 2018 interview, Donut County creator Ben Esposito remarked that developers like Voodoo who clone were on one side of the game-making spectrum, while he was on the other side coming up with new ideas.
Reception:
Soon after its release, Hole.io made it to the top of the free games section on the App Store and Google Play, receiving over 10 million downloads on Google Play alone. While some reviewers criticized it for copying core mechanics of Donut County, others characterized the game as being "oddly satisfying and addictive". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**XDI**
XDI:
XDI (short for "eXtensible Data Interchange") is a semantic data interchange format and protocol under development by the OASIS XDI Technical Committee. The name comes from the addressable graph model XDI uses: every node in the XDI graph is its own RDF graph that is uniquely addressable.
Background:
The main features of XDI are: the ability to link and nest RDF graphs to provide context; full addressability of all nodes in the graph at any level of context; representation of XDI operations as graph statements so authorization can be built into the graph; a standard JSON serialization format; and a simple ontology language for defining shared semantics using XDI dictionary services.
Background:
The XDI protocol is based on an exchange of XDI messages which themselves are XDI graphs. Since the semantics of each message is fully contained within the XDI graph of that message, the XDI protocol can be bound to multiple transport protocols. The XDI TC is defining bindings to HTTP and HTTPS; however, it is also exploring bindings to XMPP and potentially directly to TCP/IP.
Background:
XDI also provides a standardized portable authorization format called XDI link contracts. Link contracts are XDI subgraphs that express the permissions that one XDI actor (person, organization, or thing) grants to another for access to and usage of an XDI data graph. XDI link contracts enable these permissions to be expressed in a standard machine-readable format understood by any XDI endpoint.
Background:
This approach to globally distributed data sharing models the real-world mechanisms of social contracts and legal contracts that bind civilized people and organizations in the world today. Thus XDI can be a key enabler of a distributed Social Web. It has also been cited as a mechanism to support a new legal concept, Virtual Rights, which are based on a new legal entity, the "virtual identity", and a new fundamental right: "to have or not to have virtual identities".
Background:
Public services based on the OASIS XDI specification are under development by an international non-profit organization, XDI.org. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Precompiled header**
Precompiled header:
In computer programming, a precompiled header (PCH) is a (C or C++) header file that is compiled into an intermediate form that is faster to process for the compiler. Usage of precompiled headers may significantly reduce compilation time, especially when applied to large header files, header files that include many other header files, or header files that are included in many translation units.
Rationale:
In the C and C++ programming languages, a header file is a file whose text may be automatically included in another source file by the C preprocessor by the use of a preprocessor directive in the source file.
Rationale:
Header files can sometimes contain very large amounts of source code (for instance, the header files windows.h and Cocoa/Cocoa.h on Microsoft Windows and OS X, respectively). This is especially true with the advent of large "header" libraries that make extensive use of templates, like the Eigen math library and Boost C++ libraries. They are written almost entirely as header files that the user #includes, rather than being linked at runtime. Thus, each time the user compiles their program, the user is essentially recompiling numerous header libraries as well. (These would be precompiled into shared objects or dynamic link libraries in non "header" libraries.) To reduce compilation times, some compilers allow header files to be compiled into a form that is faster for the compiler to process. This intermediate form is known as a precompiled header, and is commonly held in a file named with the extension .pch or similar, such as .gch under the GNU Compiler Collection.
Usage:
For example, given a C++ file source.cpp that includes header.hpp (a sketch of such a pair appears below): when compiling source.cpp for the first time with the precompiled header feature turned on, the compiler will generate a precompiled header, header.pch. The next time, if the timestamp of this header has not changed, the compiler can skip the compilation phase relating to header.hpp and instead use header.pch directly.
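A minimal sketch of that workflow, shown as a single listing with the two files and the commands in comments (file contents and commands are illustrative; this uses GCC's .gch convention described below, while MSVC uses .pch):

```cpp
// header.hpp -- an expensive-to-parse header (contents elided):
//     #pragma once
//     #include <vector>
//     ...

// source.cpp:
#include "header.hpp"

int main() { return 0; }

// Shell commands (a sketch):
//     g++ -x c++-header header.hpp    # writes header.hpp.gch
//     g++ source.cpp -o program       # reuses header.hpp.gch if it is
//                                     # present and still up to date
```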
Common implementations:
Microsoft Visual C and C++:
Microsoft Visual C++ (version 6.0 and newer) can precompile any code, not just headers.
Common implementations:
It can do this in two ways: either precompiling all code up to a file whose name matches the /Yc filename option, or (when /Yc is specified without any filename) precompiling all code up to the first occurrence of #pragma hdrstop in the code. The precompiled output is saved in a file named after the filename given to the /Yc option, with a .pch extension, or in a file named according to the name supplied by the /Fp filename option.
Common implementations:
The /Yu option, subordinate to the /Yc option if used together, causes the compiler to make use of already precompiled code from such a file. pch.h (named stdafx.h before Visual Studio 2017) is a file generated by the Microsoft Visual Studio IDE wizard that describes both standard system and project-specific include files that are used frequently but hardly ever change.
Common implementations:
The afx in stdafx.h stands for application framework extensions. AFX was the original abbreviation for the Microsoft Foundation Classes (MFC). While the name stdafx.h was used by default in MSVC projects prior to version 2017, any alternative name may be manually specified.
Compatible compilers will precompile this file to reduce overall compile times. Visual C++ will not compile anything before the #include "pch.h" in the source file unless the compile option /Yu"pch.h" is disabled (it is enabled by default); it assumes all code in the source up to and including that line is already compiled.
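A sketch of the Visual C++ convention (project layout and names are illustrative): one designated source file creates the .pch via /Yc, and the remaining sources consume it via /Yu, each beginning with the same include:

```cpp
// pch.h -- project-wide precompiled header (stdafx.h before VS 2017):
//     #pragma once
//     #include <vector>
//     // ...frequently used, rarely changing headers...

// pch.cpp -- compiled once with /Yc"pch.h" to create the .pch file:
//     #include "pch.h"

// widget.cpp -- compiled with /Yu"pch.h"; everything up to and including
// the next line is assumed to be already compiled:
#include "pch.h"

int widgetCount() { return 0; }
```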
GCC:
Precompiled headers are supported in GCC (3.4 and newer). GCC's approach is similar to that of VC and compatible compilers. GCC saves precompiled versions of header files using a ".gch" suffix. When compiling a source file, the compiler checks whether this file is present in the same directory and uses it if possible.
GCC can only use the precompiled version if the same compiler switches are set as when the header was compiled, and it may use at most one precompiled header per compilation. Further, only preprocessor instructions may be placed before the precompiled header (because the precompiled header must be included, directly or indirectly through another normal header, before any compilable code).
GCC automatically identifies most header files by their extension. However, if this fails (e.g. because of non-standard header extensions), the -x switch can be used to ensure that GCC treats the file as a header.
Common implementations:
clang:
The clang compiler added support for PCH in Clang 2.5 / LLVM 2.5 of 2009. The compiler both tokenizes the input source code and performs syntactic and semantic analyses of headers, writing out the compiler's internally generated abstract syntax tree (AST) and symbol table to a precompiled header file. clang's precompiled header scheme, with some improvements such as the ability for one precompiled header to reference another, internally used, precompiled header, also forms the basis for its modules mechanism.
Common implementations:
It uses the same bitcode file format that is employed by LLVM, encapsulated in clang-specific sections within Common Object File Format or Extensible Linking Format files.
Common implementations:
C++Builder:
In the default project configuration, the C++Builder compiler implicitly generates precompiled headers for all headers included by a source module until the line #pragma hdrstop is found. Precompiled headers are shared for all modules of the project if possible. For example, when working with the Visual Component Library, it is common to include the vcl.h header first, which contains most of the commonly used VCL header files. Thus, the precompiled header can be shared across all project modules, which dramatically reduces the build times.
Common implementations:
In addition, C++Builder can be configured to use a specific header file as a precompiled header, similar to the mechanism provided by Visual C++.
C++Builder 2009 introduces a "Precompiled Header Wizard" which parses all source modules of the project for included header files, classifies them (i.e. excludes header files if they are part of the project or do not have an include guard) and generates and tests a precompiled header for the specified files automatically.
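A sketch of the implicit scheme in a single source module (this assumes a C++Builder VCL project, so it will not build with other compilers):

```cpp
// unit1.cpp
#include <vcl.h>        // large, stable umbrella header: good PCH material
#pragma hdrstop         // implicit precompilation of headers stops here

#include "unit1.h"      // volatile project header, deliberately after the stop
```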
Pretokenized header:
A pretokenized header (PTH) is a header file stored in a form that has been run through lexical analysis, but on which no semantic operations have been performed. PTH was present in Clang before it supported PCH, and has also been tried in a branch of GCC. Compared to a full PCH mechanism, PTH has the advantages of language (and dialect) independence, as lexical analysis is similar for the C-family languages, and architecture independence, as the same stream of tokens can be used when compiling for different target architectures. It has the disadvantage, however, of not going any further than simple lexical analysis, requiring that syntactic and semantic analysis of the token stream be performed with every compilation. In addition, the time to compile scales linearly with the size, in lexical tokens, of the pretokenized file, which is not necessarily the case for a fully-fledged precompilation mechanism (PCH in clang allows random access). Clang's pretokenization mechanism includes several minor mechanisms for assisting the pre-processor: caching of file existence and datestamp information, and recording inclusion guards so that guarded code can be quickly skipped over.
**3-fumarylpyruvate hydrolase**
3-fumarylpyruvate hydrolase:
3-Fumarylpyruvate hydrolase (EC 3.7.1.20, nagK (gene), naaD (gene)) is an enzyme with systematic name 3-fumarylpyruvate hydrolyase. This enzyme catalyses the following chemical reaction: 3-fumarylpyruvate + H2O ⇌ fumarate + pyruvate. The enzyme is involved in the bacterial degradation of 5-substituted salicylates.
**Torque**
Torque:
In physics and mechanics, torque is the rotational analogue of linear force. It is also referred to as the moment of force (also abbreviated to moment). It describes the rate of change of angular momentum that would be imparted to an isolated body. The concept originated with the studies by Archimedes of the usage of levers, which is reflected in his famous quote: "Give me a lever and a place to stand and I will move the Earth". Just as a linear force is a push or a pull applied to a body, a torque can be thought of as a twist applied to an object with respect to a chosen point. Torque is defined as the product of the magnitude of the perpendicular component of the force and the distance of the line of action of the force from the point around which it is being determined. The law of conservation of energy can also be used to understand torque. The symbol for torque is typically τ, the lowercase Greek letter tau. When being referred to as moment of force, it is commonly denoted by M. In three dimensions, the torque is a pseudovector; for point particles, it is given by the cross product of the displacement vector and the force vector. The magnitude of torque applied to a rigid body depends on three quantities: the force applied, the lever arm vector connecting the point about which the torque is being measured to the point of force application, and the angle between the force and lever arm vectors. In symbols: τ = r × F, with magnitude τ = rF sin θ, where τ is the torque vector and τ is the magnitude of the torque, r is the position vector (a vector from the point about which the torque is being measured to the point where the force is applied) and r is the magnitude of the position vector, F is the force vector and F is the magnitude of the force vector, × denotes the cross product, which produces a vector that is perpendicular both to r and to F following the right-hand rule, and θ is the angle between the force vector and the lever arm vector. The SI unit for torque is the newton-metre (N⋅m). For more on the units of torque, see § Units.
History:
The term torque (from Latin torquēre, 'to twist') is said to have been suggested by James Thomson and appeared in print in April, 1884. Usage is attested the same year by Silvanus P. Thompson in the first edition of Dynamo-Electric Machinery. Thompson motivates the term as follows: Just as the Newtonian definition of force is that which produces or tends to produce motion (along a line), so torque may be defined as that which produces or tends to produce torsion (around an axis). It is better to use a term which treats this action as a single definite entity than to use terms like "couple" and "moment", which suggest more complex ideas. The single notion of a twist applied to turn a shaft is better than the more complex notion of applying a linear force (or a pair of forces) with a certain leverage.
Today, torque is referred to using different vocabulary depending on geographical location and field of study. This article follows the definition used in US physics in its usage of the word torque. In the UK and in US mechanical engineering, torque is referred to as moment of force, usually shortened to moment. This terminology can be traced back to at least 1811 in Siméon Denis Poisson's Traité de mécanique. An English translation of Poisson's work appeared in 1842.
Definition and relation to angular momentum:
A force applied perpendicularly to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand are curled from the direction of the lever arm to the direction of the force, then the thumb points in the direction of the torque. More generally, the torque on a point particle (which has the position r in some reference frame) can be defined as the cross product: τ = r × F, where F is the force acting on the particle. The magnitude τ of the torque is given by τ = rF sin θ, where F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively, τ = rF⊥, where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque. It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. Conversely, the torque vector defines the plane in which the position and force vectors lie. The resulting torque vector direction is determined by the right-hand rule. The net torque on a body determines the rate of change of the body's angular momentum: τ = dL/dt, where L is the angular momentum vector and t is time.
For the motion of a point particle, L = Iω, where I is the moment of inertia and ω is the orbital angular velocity pseudovector. It follows that τnet = dL/dt = Iω̇ + ω × (Iω), which, expanded along principal axes with versors ê1, ê2, ê3, reads τnet = I1ω̇1ê1 + I2ω̇2ê2 + I3ω̇3ê3 + I1ω1(dê1/dt) + I2ω2(dê2/dt) + I3ω3(dê3/dt), using the fact that the time derivative of a versor is dê/dt = ω × ê. This equation is the rotational analogue of Newton's second law for point particles, and is valid for any type of trajectory. In some simple cases, such as a rotating disc where only the moment of inertia about the rotating axis matters, the rotational Newton's second law can be written τ = Iα, where I = mr² and α = ω̇. Proof of the equivalence of definitions: the definition of angular momentum for a single point particle is L = r × p, where p is the particle's linear momentum and r is the position vector from the origin. The time derivative of this is dL/dt = (dr/dt) × p + r × (dp/dt), a result that can easily be proven by splitting the vectors into components and applying the product rule. Now, using the definition of force F = dp/dt (whether or not mass is constant) and the definition of velocity dr/dt = v, this becomes dL/dt = v × p + r × F. The cross product of momentum p with its associated velocity v is zero because velocity and momentum are parallel, so the term v × p vanishes.
By definition, torque τ = r × F. Therefore, torque on a particle is equal to the first derivative of its angular momentum with respect to time.
If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that τnet = r × Fnet = dL/dt. This is a general proof for point particles.
The proof can be generalized to a system of point particles by applying the above proof to each of the point particles and then summing over all the point particles. Similarly, the proof can be generalized to a continuous mass by applying the above proof to each point within the mass, and then integrating over the entire mass.
Units:
Torque has the dimension of force times distance, symbolically ML²T⁻². Although those fundamental dimensions are the same as those for energy or work, official SI literature suggests using the unit newton-metre (N⋅m) and never the joule. The traditional imperial and U.S. customary units for torque are the pound foot (lbf-ft), or for small values the pound inch (lbf-in). In the US, torque is most commonly referred to as the foot-pound (denoted as either lb-ft or ft-lb) and the inch-pound (denoted as in-lb). Practitioners depend on context and the hyphen in the abbreviation to know that these refer to torque and not to energy or moment of mass (as the symbolism ft-lb would properly imply).
Special cases and other facts:
Moment arm formula A very useful special case, often given as the definition of torque in fields other than physics, is as follows: τ = (moment arm) × (force).
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it gives only the magnitude of the torque and not the direction, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force is: τ = (distance to centre) × (force).
For example, if a person places a force of 10 N at the terminal end of a wrench that is 0.5 m long (or a force of 10 N acting 0.5 m from the twist point of a wrench of any length), the torque will be 5 N⋅m – assuming that the person moves the wrench by applying force in the plane of movement and perpendicular to the wrench.
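The wrench numbers can be checked with a direct cross-product computation; a minimal sketch (the cross helper is written out by hand for illustration, not taken from any library):

```cpp
#include <array>
#include <cstdio>

// Hand-rolled cross product of two 3-vectors.
std::array<double, 3> cross(const std::array<double, 3>& r,
                            const std::array<double, 3>& F) {
    return {r[1] * F[2] - r[2] * F[1],
            r[2] * F[0] - r[0] * F[2],
            r[0] * F[1] - r[1] * F[0]};
}

int main() {
    std::array<double, 3> r{0.5, 0.0, 0.0};   // 0.5 m along the wrench
    std::array<double, 3> F{0.0, 10.0, 0.0};  // 10 N perpendicular to it
    auto tau = cross(r, F);
    // Prints (0, 0, 5): the 5 N*m of the example, along the rotation axis.
    std::printf("tau = (%g, %g, %g) N*m\n", tau[0], tau[1], tau[2]);
}
```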
Static equilibrium For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations, ΣH = 0 and ΣV = 0, and the torque a third equation, Στ = 0. That is, to solve statically determinate equilibrium problems in two dimensions, three equations are used.
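A numerical check of the moment condition, as a minimal sketch (the beam, weights, and sign convention are invented for illustration; positive x is to the right of the pivot, and each weight contributes a signed torque w·x):

```cpp
#include <cstdio>

int main() {
    // Two weights on a massless beam pivoted at x = 0.
    double w1 = 30.0, x1 = -2.0;  // 30 N placed 2 m left of the pivot
    double w2 = 20.0, x2 =  3.0;  // 20 N placed 3 m right of the pivot
    double net = w1 * x1 + w2 * x2;  // signed net torque about the pivot, N*m
    std::printf("net torque: %g N*m (%s)\n", net,
                net == 0.0 ? "balanced" : "not in equilibrium");
}
```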
Net force versus torque When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of the point of reference. If the net force F is not zero, and τ1 is the torque measured from r1, then the torque measured from r2 is τ2 = τ1 + (r1 − r2) × F.
Machine torque:
Torque forms part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the angular speed of the drive shaft. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). One can measure the varying torque output over that range with a dynamometer, and show it as a torque curve.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam-engines and electric motors can start heavy loads from zero rpm without a clutch.
Relationship between torque, power, and energy:
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through an angular displacement, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, the work W can be expressed as W=∫θ1θ2τdθ, where τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.
Proof The work done by a variable force acting over a finite linear displacement s is given by integrating the force with respect to an elemental linear displacement ds: W = ∫s1s2 F⋅ds. However, the infinitesimal linear displacement ds is related to a corresponding angular displacement dθ and the radius vector r as ds = dθ × r. Substitution in the above expression for work gives W = ∫s1s2 F⋅(dθ × r). The expression F⋅(dθ × r) is a scalar triple product, [F, dθ, r]. An alternate expression for the same scalar triple product is [F, dθ, r] = (r × F)⋅dθ. But as per the definition of torque, τ = r × F. Corresponding substitution in the expression of work gives W = ∫s1s2 τ⋅dθ. Since the parameter of integration has been changed from linear displacement to angular displacement, the limits of the integration also change correspondingly, giving W = ∫θ1θ2 τ⋅dθ. If the torque and the angular displacement are in the same direction, the scalar product reduces to a product of magnitudes, τ⋅dθ = τ dθ cos 0 = τ dθ, giving W = ∫θ1θ2 τ dθ. It follows from the work–energy principle that W also represents the change in the rotational kinetic energy Er of the body, given by Er = ½Iω², where I is the moment of inertia of the body and ω is its angular speed. Power is the work per unit time, given by P = τ⋅ω, where P is power, τ is torque, ω is the angular velocity, and ⋅ represents the scalar product.
Algebraically, the equation may be rearranged to compute torque for a given angular speed and power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any).
In practice, this relationship can be observed in bicycles: Bicycles are typically composed of two road wheels, front and rear gears (referred to as sprockets) meshing with a chain, and a derailleur mechanism if the bicycle's transmission system allows multiple gear ratios to be used (i.e. a multi-speed bicycle), all of which are attached to the frame. A cyclist, the person who rides the bicycle, provides the input power by turning pedals, thereby cranking the front sprocket (commonly referred to as the chainring). The input power provided by the cyclist is equal to the product of angular speed (i.e. the number of pedal revolutions per minute times 2π) and the torque at the spindle of the bicycle's crankset. The bicycle's drivetrain transmits the input power to the road wheel, which in turn conveys the received power to the road as the output power of the bicycle. Depending on the gear ratio of the bicycle, a (torque, angular speed) input pair is converted to a (torque, angular speed) output pair. By using a larger rear gear, or by switching to a lower gear in multi-speed bicycles, angular speed of the road wheels is decreased while the torque is increased, the product of which (i.e. power) does not change.
For SI units, the unit of power is the watt, the unit of torque is the newton-metre and the unit of angular speed is the radian per second (not rpm and not revolutions per second).
The unit newton-metre is dimensionally equivalent to the joule, which is the unit of energy. In the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. This means that the dimensional equivalence of the newton-metre and the joule may be applied in the former, but not in the latter case. This problem is addressed in orientational analysis, which treats the radian as a base unit rather than as a dimensionless unit.
Conversion to other units A conversion factor may be necessary when using different units of power or torque. For example, if rotational speed (unit: revolution per minute or second) is used in place of angular speed (unit: radian per second), we must multiply by 2π radians per revolution. In the following formulas, P is power, τ is torque, and ν (Greek letter nu) is rotational speed.
P = τ ⋅ 2π ⋅ ν. Showing units: P [W] = τ [N⋅m] ⋅ 2π [rad/rev] ⋅ ν [rev/s]. Dividing by 60 seconds per minute gives P = τ ⋅ 2π ⋅ ν / 60, where rotational speed ν is in revolutions per minute (rpm, rev/min).
Some people (e.g., American automotive engineers) use horsepower (mechanical) for power, foot-pounds (lbf⋅ft) for torque and rpm for rotational speed. This results in the formula changing to P [hp] = τ [lbf⋅ft] ⋅ 2π ⋅ ν [rpm] / 33,000.
The constant below (in foot-pounds per minute) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
The use of other units (e.g., BTU per hour for power) would require a different custom conversion factor.
Derivation For a rotating object, the linear distance covered at the circumference of rotation is the product of the radius with the angle covered. That is: linear distance = radius × angular distance. And by definition, linear distance = linear speed × time = radius × angular speed × time.
By the definition of torque: torque = radius × force. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power: power = force ⋅ (linear distance / time) = (torque ÷ radius) ⋅ (radius ⋅ angular speed) = torque ⋅ angular speed.
The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit of time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give: power = torque ⋅ 2π ⋅ rotational speed.
If torque is in newton-metres and rotational speed in revolutions per second, the above equation gives power in newton-metres per second or watts. If Imperial units are used, and if torque is in pounds-force feet and rotational speed in revolutions per minute, the above equation gives power in foot pounds-force per minute. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft⋅lbf/min per horsepower: power [hp] = torque [lbf⋅ft] ⋅ 2π ⋅ rotational speed [rpm] / 33,000 ≈ torque [lbf⋅ft] ⋅ rotational speed [rpm] / 5,252, because 5,252.113122... = 33,000 / (2π).
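These conversions are easy to get wrong by a factor of 2π or 60, so a small sketch may help (the function names are illustrative, and the horsepower here is mechanical horsepower):

```cpp
#include <cstdio>

const double PI = 3.141592653589793;

// P [W] = torque [N*m] * 2*pi * rpm / 60
double watts(double torque_newton_metres, double rev_per_min) {
    return torque_newton_metres * 2.0 * PI * rev_per_min / 60.0;
}

// P [hp] = torque [lbf*ft] * rpm / 5252, with 5252.113... = 33,000 / (2*pi)
double horsepower(double torque_lbf_ft, double rev_per_min) {
    return torque_lbf_ft * rev_per_min / 5252.113122;
}

int main() {
    std::printf("%.1f hp\n", horsepower(300.0, 5252.0));  // ~300.0 hp
    std::printf("%.0f W\n", watts(100.0, 3000.0));        // ~31416 W
}
```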
Principle of moments:
The principle of moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name), states that the resultant torque due to several forces applied about a point is equal to the sum of the contributing torques: τ = r1×F1 + r2×F2 + … + rN×FN.
From this it follows that the torques resulting from two forces acting around a pivot on an object are balanced when r1×F1+r2×F2=0.
Torque multiplier:
Torque can be multiplied via three methods: by locating the fulcrum such that the length of a lever is increased; by using a longer lever; or by the use of a speed-reducing gearset or gear box. Such a mechanism multiplies torque, as rotation rate is reduced.
**Polar opposite**
Polar opposite:
A polar opposite is the diametrically opposite point of a circle or sphere. It is mathematically known as an antipodal point, or antipode when referring to the Earth. It is also an idiom often used to describe people and ideas that are opposites.
Polar Opposite or Polar Opposites may also refer to:
Polar Opposite, a 2011 EP by Sick Puppies
Polar Opposites, a 2000 album by Junior Pantherz
"Polar Opposites", an episode of the television series The Wild Thornberrys
"Polar Opposites", an episode of the television series Tanked
Polar Opposites, a 2008 film written by Paolo Mazzucato and directed by Fred Olen Ray
"Polar Opposites", a song by Modest Mouse from the album The Lonesome Crowded West
**Anaconda (poker)**
Anaconda (poker):
Anaconda is a variety of stud poker. Other names for this game include "pass the trash", "screw your neighbor", "fuck your neighbor", and "3,2,1 left".
Play:
Each player is dealt seven cards. They then each select three cards to be passed to the player on their left. These cards are simply set on the table near their left-most opponent. No players get to see their new three cards until everyone has made a pass. Afterward, the players repeat the process, only with two cards, then again with one card. Players then discard two cards to make their best five-card poker hand.
In this version of the game, up to seven people can play, passing out a total of 49 cards and having three left over.
Betting:
A round of betting occurs before the first pass of three cards, then again after every card pass is made. Once players have set their hands, one card at a time is exposed, with a round of betting following each card.
Variations:
Anaconda can be changed in many ways, such as: Altering the number of starting cards (six cards is common).
Altering the number of cards passed.
Altering to whom the cards are passed.
Incorporating joker cards.
Including only one betting round & showdown after all passing rounds.
High-low split.
Designating certain cards as wild.
Removing all betting rounds.
**Iron-55**
Iron-55:
Iron-55 (55Fe) is a radioactive isotope of iron with a nucleus containing 26 protons and 29 neutrons. It decays by electron capture to manganese-55 and this process has a half-life of 2.737 years. The emitted X-rays can be used as an X-ray source for various scientific analysis methods, such as X-ray diffraction. Iron-55 is also a source for Auger electrons, which are produced during the decay.
Decay:
Iron-55 decays via electron capture to manganese-55 with a half-life of 2.737 years. The electrons around the nucleus rapidly adjust themselves to the lowered charge without leaving their shell, and shortly thereafter the vacancy in the "K" shell left by the nuclear-captured electron is filled by an electron from a higher shell. The difference in energy is released by emitting Auger electrons of 5.19 keV, with a probability of about 60%, K-alpha-1 X-rays with energy of 5.89875 keV and a probability of about 16.2%, K-alpha-2 X-rays with energy of 5.88765 keV and a probability of about 8.2%, or K-beta X-rays with nominal energy of 6.49045 keV and a probability of about 2.85%. The energies of the K-alpha-1 and -2 X-rays are so similar that they are often specified as mono-energetic radiation with 5.9 keV photon energy. Its probability is about 28%. The remaining 12% is accounted for by lower-energy Auger electrons and a few photons from other, minor transitions.
Use:
The K-alpha X-rays emitted by the manganese-55 after the electron capture have been used as a laboratory source of X-rays in various X-ray scattering techniques. The advantages of the emitted X-rays are that they are monochromatic and are continuously produced over a years-long period. No electrical power is needed for this emission, which is ideal for portable X-ray instruments, such as X-ray fluorescence instruments. The ExoMars mission of ESA used, in 2016, such an iron-55 source for its combined X-ray diffraction/X-ray fluorescence spectrometer. The 2011 Mars mission MSL used a functionally similar spectrometer, but with a traditional, electrically powered X-ray source. The Auger electrons can be applied in electron capture detectors for gas chromatography. The more widely used nickel-63 sources provide electrons from beta decay.
Occurrence:
Iron-55 is most effectively produced by irradiation of iron with neutrons. The reactions 54Fe(n,γ)55Fe and 56Fe(n,2n)55Fe of the two most abundant isotopes, iron-54 and iron-56, with neutrons yield iron-55. Most of the observed iron-55 is produced in these irradiation reactions; it is not a primary fission product. As a result of atmospheric nuclear tests in the 1950s, and until the test ban in 1963, considerable amounts of iron-55 were released into the biosphere. People close to the test ranges, for example Iñupiat (Alaska Natives) and inhabitants of the Marshall Islands, accumulated significant amounts of radioactive iron. However, the short half-life and the test ban decreased, within several years, the available amount of iron-55 nearly to the pre-nuclear-test levels.
**AirTrain**
AirTrain:
AirTrain is the name of several passenger railway operations connecting airports to existing rapid transit systems and city centres:
Airtrain Citylink, a railway infrastructure company in Brisbane, Australia
Airport railway line, Brisbane, Australia
AirTrain (San Francisco International Airport), serving San Francisco, California
Operated by the Port Authority of New York and New Jersey:
AirTrain JFK, serving New York City's John F. Kennedy International Airport
AirTrain Newark, serving Newark Liberty International Airport in New Jersey
AirTrain LaGuardia, a proposed service to New York City's LaGuardia Airport
**Engineering cybernetics**
Engineering cybernetics:
Engineering cybernetics, also known as technical cybernetics or cybernetic engineering, is the branch of cybernetics concerned with applications in engineering, in fields such as control engineering and robotics.
History:
Qian Xuesen (Hsue-Shen Tsien) defined engineering cybernetics as a theoretical field of "engineering science", the purpose of which is to "study those parts of the broad science of cybernetics which have direct engineering applications in designing controlled or guided systems". Published in 1954, Qian's work "Engineering Cybernetics" describes the mathematical and engineering concepts of cybernetic ideas as understood at the time, breaking them down into granular scientific concepts for application. Qian's work is notable for going beyond model-based theories and arguing for the necessity of a new design principle for types of system whose properties and characteristics are largely unknown. In the 2020s, concerns with the social consequences of cyber-physical systems have led to calls to develop "a new branch of engineering", "drawing on the history of cybernetics and reimagining it for our 21st century challenges".
Popular usage:
1960s - An example of engineering cybernetics is a device designed in the mid-1960s by General Electric Company. Referred to as a CAM (cybernetic anthropomorphous machine), this machine was designed for use by US Army ground troops. Operated by one man in a "cockpit" at the front end, the machine's "legs" duplicated the leg movements of the harnessed operator.
A common use includes the treatment of neurological disorders with the purposeful application of neuromuscular electrical stimulation (NMES), or more precisely the use of functional electrical stimulation (FES). The most commonly used therapy is FES cycling, introduced in the 1980s. Additional research is attempting to implement applications from control systems to improve FES cycling. New research is being conducted using computer-controlled FES, where the musculoskeletal system is viewed as a cybernetic system.
In Media:
1990s - Neon Genesis Evangelion, the Japanese animation (anime) TV series, featured giant robots piloted by humans that had a connection to the host machine via biological impulses.
**Linopirdine**
Linopirdine:
Linopirdine is a putative cognition-enhancing drug with a novel mechanism of action. Linopirdine blocks the KCNQ2/3 heteromeric M current with an IC50 of 2.4 micromolar, disinhibiting acetylcholine release and increasing hippocampal CA3–Schaffer collateral mediated glutamate release onto CA1 pyramidal neurons. In a murine model, linopirdine is able to nearly completely reverse the senescence-related decline in cortical c-FOS, an effect which is blocked by atropine and MK-801, suggesting linopirdine can compensate for the age-related decline in acetylcholine release. Linopirdine also blocks homomeric KCNQ1 and KCNQ4 voltage-gated potassium channels, which contribute to vascular tone, with substantially less selectivity than for KCNQ2/3. Linopirdine also acts as a glycine receptor antagonist at concentrations typical for Kv7 studies in the brain.
Synthesis:
The amide formation between diphenylamine (1) and oxalyl chloride [79-37-8] gives the intermediate CID 11594101 (2). Haworth-type intramolecular cyclization of the acid chloride occurs on heating to afford 1-phenylisatin [723-89-7] (3). The reaction with 4-picoline (4) under phase-transfer catalysis with a quaternary ammonium salt affords the carbinol CID 10358387 (5). Dehydration of the alcohol using acetic anhydride gives [33546-08-6] (6). Reduction of the olefin then affords the indolone CID 10470081 (7). The 3-position is now activated by the adjacent benzene ring on one side and the carbonyl group on the other. Alkylation with 4-picolyl chloride [10445-91-7] (8) proceeds with hydroxide as the base to afford linopirdine (9).
**Predicate variable**
Predicate variable:
In mathematical logic, a predicate variable is a predicate letter which functions as a "placeholder" for a relation (between terms), but which has not been specifically assigned any particular relation (or meaning). Common symbols for denoting predicate variables include capital roman letters such as P, Q and R, or lower case roman letters, e.g., x. In first-order logic, they can be more properly called metalinguistic variables. In higher-order logic, predicate variables correspond to propositional variables which can stand for well-formed formulas of the same logic, and such variables can be quantified by means of (at least) second-order quantifiers.
Notation:
Predicate variables should be distinguished from predicate constants, which could be represented either with a different (exclusive) set of predicate letters, or by their own symbols which really do have their own specific meaning in their domain of discourse: e.g. =,∈,≤,<,⊂,...
If letters are used for both predicate constants and predicate variables, then there must be a way of distinguishing between them. One possibility is to use letters W, X, Y, Z to represent predicate variables and letters A, B, C,..., U, V to represent predicate constants. If these letters are not enough, then numerical subscripts can be appended after the letter in question (as in X1, X2, X3). Another option is to use Greek lower-case letters to represent such metavariable predicates. Then, such letters could be used to represent entire well-formed formulae (wff) of the predicate calculus: any free variable terms of the wff could be incorporated as terms of the Greek-letter predicate. This is the first step towards creating a higher-order logic.
Usage:
If the predicate variables are not defined as belonging to the vocabulary of the predicate calculus, then they are predicate metavariables, whereas the rest of the predicates are just called "predicate letters". The metavariables are thus understood to be used to code for axiom schemata and theorem schemata (derived from the axiom schemata). Whether the "predicate letters" are constants or variables is a subtle point: they are not constants in the same sense that =, ∈, ≤, <, ⊂ are predicate constants, or that 1, 2, 3, √2, π, e are numerical constants.
If "predicate variables" are only allowed to be bound to predicate letters of zero arity (which have no arguments), where such letters represent propositions, then such variables are propositional variables, and any predicate logic which allows second-order quantifiers to be used to bind such propositional variables is a second-order predicate calculus, or second-order logic.
If predicate variables are also allowed to be bound to predicate letters which are unary or have higher arity, and when such letters represent propositional functions, such that the domain of the arguments is mapped to a range of different propositions, and when such variables can be bound by quantifiers to such sets of propositions, then the result is a higher-order predicate calculus, or higher-order logic.
**Circadian advantage**
Circadian advantage:
A circadian advantage is an advantage gained when an organism's biological cycles are in tune with its surroundings. It is not a well-studied phenomenon, but it is known to occur in certain types of cyanobacteria, whose endogenous cycles, or circadian rhythm, "resonate" or align with their environment. It is known to occur in plants also, suggesting that any organism which is able to attune its natural growth cycles with its environment will have a competitive advantage over those that do not. Circadian advantage may also refer to sporting teams gaining an advantage by acclimatizing to the time zone where a match is played.
In organisms:
In the context of bacterial circadian rhythms, specifically in cyanobacteria, circadian advantage refers to the improved survival of strains of cyanobacteria whose endogenous cycles "resonate" or align with the environmental circadian rhythm. For example, consider a strain with a free-running period (FRP) of 24 hours that is co-cultured with a strain that has a free-running period (FRP) of 30 hours in a light-dark cycle of 12 hours light and 12 hours dark (LD 12:12). The strain that has a 24 hour FRP will out-compete the 30 hour strain over time.
Competition studies in plants provide another example of circadian advantage. These studies have shown that an endogenous clock that resonates with environmental cycles leads to a competitive advantage in Arabidopsis thaliana. Experiments with wild type, short circadian period mutants, and long circadian period mutants demonstrated that plants with a circadian period that is optimally synchronized to the environment grew fastest. The same study also showed that photosynthetic carbon fixation was directly correlated to “circadian resonance”. A different study discovered that genes involved in photosynthetic reactions of A. thaliana are under clock control. mRNAs that encode chlorophyll binding proteins and the enzyme protoporphyrin IX magnesium chelatase involved in chlorophyll synthesis were cycling. The “circadian resonance” increase in productivity may arise from appropriate anticipation of sunrise and sunset, allowing for timely synthesis of light-harvesting complex proteins and chlorophyll. Therefore, the competitive advantage in A. thaliana further supports the idea that anticipation of environmental changes leads to enhanced fitness.
Rhodopseudomonas palustris is another example of the advantage in having a biological timing system that interacts with the environmental cycles. While the only prokaryotic group with a well-known circadian timekeeping mechanism is the cyanobacteria, recent discoveries involving R. palustris have suggested alternative timekeeping mechanisms among the prokaryotes. R. palustris is a purple non-sulfur bacterium that has kaiB and kaiC genes and exhibits adaptive kaiC-dependent growth in 24-hour cyclic environments. However, R. palustris was reported to show a poorly self-sustained intrinsic rhythm, and kaiC-dependent growth enhancement was not present under constant conditions. The R. palustris system was proposed as a "proto" circadian timekeeper that exhibits some parts of circadian systems (kaiB and kaiC homologs), but not all.
Likewise, research on the endogenous circadian timekeeping mechanisms in mice further supports that “circadian resonance” is evolutionarily adaptive. One study in particular compared the fitness of wild-type mice with mutant mice which had a short free-running circadian cycle. These mice had a mutation in the casein kinase 1Ɛ gene, which encodes an enzyme that is integral in controlling circadian cycle length. A mixed group of wild-type and mutant mice were then released in an outdoor experimental enclosure and, following a fourteen month timespan, the mice were monitored. The wild-type mice both survived longer and reproduced at a greater rate than the mutant mice. In fact, the mutant genotype was strongly selected against, thereby suggesting natural selection towards those genotypes that are resonant with the natural LD cycle.
It is possible that circadian clocks play a role in gut microbiota behavior. These microorganisms experience daily changes correlated with daily light/dark and temperature cycles, through behaviors such as eating rhythms on a daily routine (consumption in the day for diurnal animals and in the night for nocturnal animals). The presence of a daily timekeeper might give those bacteria a competitive advantage over others, by allowing the bacteria to sense resources coming from the host and to prepare to metabolize them faster. There are bacteria that have daily timekeepers, and it may be possible that the microbiota have endogenous clocks which communicate with the biological clocks of the host. For instance, if there are some time-keeping qualities of the microorganisms within the intestines, it might be possible that they can affect the circadian system of the host. An endogenous clock may be present in some microbial species, and the presence of such an intrinsic timekeeper could be beneficial both in the gut (which experiences daily changes in nutrient availability) and in the environment outside of the host (which experiences daily cycles of light and temperature).
In sport:
In competitive sport, a circadian advantage is a team's advantage over another by virtue of its relative degree of acclimation to a time zone versus their opponent. While this concept was explored by researchers at Stanford in 1997, and at the University of Massachusetts, the term was coined in 2004 by Dr. W. Christopher Winter, a sleep specialist and neurologist studying the effects of travel between time zones on Major League Baseball (MLB) performance.
This study was expanded into a ten-year retrospective study with a grant through MLB that was completed by Dr. Winter and his research assistant Noah H. Green, then an undergraduate student at the University of Virginia. The work was presented in 2008 at the 22nd Annual Meeting of the Associated Professional Sleep Societies in Baltimore, Maryland. Using the convention that for every time zone crossed, synchronization to that time zone requires one day, teams can be analyzed during a season to see where they are in terms of being acclimated to their time zone of play. For example, consider the Washington Nationals. If they have been competing at home for the last 3 days or more, they would be completely acclimated to Eastern Standard Time (EST). If they were to travel to Los Angeles, upon arrival they would be 3 hours off, because they traveled 3 time zones west. Every 24 hours spent on the west coast would bring them 1 hour closer to acclimation. So after 24 hours in Los Angeles, they would be 2 hours off. After 48 hours, they would be 1 hour off, and after 72 hours, they would be acclimated to west coast time and would stay that way until they left their time zone.
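The one-hour-per-day convention is simple enough to express directly; here is a toy sketch of it (the function and names are invented for illustration, not taken from the cited studies):

```cpp
#include <algorithm>
#include <cstdio>

// Toy model of the one-day-per-time-zone acclimation convention.
int hours_off(int zones_crossed, int full_days_at_destination) {
    return std::max(0, zones_crossed - full_days_at_destination);
}

int main() {
    // East-coast team flying 3 zones west, checked daily after arrival:
    for (int day = 0; day <= 3; ++day)
        std::printf("after %d day(s): %d hour(s) off\n",
                    day, hours_off(3, day));
}
```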
Unlike home field advantage, which is present any time two teams play a game that is not held at a neutral site, circadian advantage does not apply to all games. In a typical MLB season, it applies to approximately 20% of games played, with the other 80% featuring teams at equal circadian advantage. In sports that allow more time between games, it may apply to significantly fewer games. Circadian advantage is much more of an issue in sports that feature significant international travel.
Circadian advantage is most significant when a team holds a 3-hour advantage (or more) over another. This matchup is only encountered after very long flights where the traveling team plays soon after arrival, most commonly coast-to-coast flights in major North American and Australian leagues. As the magnitude of time zone differences between two teams becomes smaller, so too does circadian advantage.
In 2018, pilot data collected by the Walter Reed Army Institute of Research and presented at the American Academy of Sleep Medicine's annual SLEEP meeting suggested that National Football League teams perform better at night than during the day as a result of circadian advantage. It also indicated that teams had fewer turnovers at night.
**Arrows (Unicode block)**
Arrows (Unicode block):
Arrows is a Unicode block containing line, curve, and semicircle symbols terminating in barbs or arrows.
Emoji:
The Arrows block contains eight emoji: U+2194–U+2199 and U+21A9–U+21AA. The block has sixteen standardized variants defined to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for the eight emoji, all of which default to a text presentation.
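A minimal sketch of how these variation selectors are applied in practice (this assumes a UTF-8 execution character set and a terminal whose font supports both presentations):

```cpp
#include <cstdio>

int main() {
    // U+2194 (LEFT RIGHT ARROW), one of the block's eight emoji, defaults
    // to text presentation. A trailing variation selector picks the style:
    // U+FE0E (VS15) forces text, U+FE0F (VS16) requests emoji rendering.
    std::printf("text presentation:  \u2194\uFE0E\n");
    std::printf("emoji presentation: \u2194\uFE0F\n");
}
```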
History:
The following Unicode-related documents record the purpose and process of defining specific characters in the Arrows block:
**Paleosurface**
Paleosurface:
In geology and geomorphology a paleosurface is a surface made by erosion of considerable antiquity. Paleosurfaces might be flat or uneven, in some cases having considerable relief. Flat and large paleosurfaces —that is, planation surfaces— have higher potential to be preserved than small and irregular surfaces and are thus the most studied kind of paleosurfaces. Irregular paleosurfaces, albeit usually smaller than flat ones, occur across the globe, one example being the Sudetes etchsurfaces. In the case of peneplains it is argued that they become paleosurfaces once they are detached from the base level they grade to. Paleosurfaces form an important part of the geologic record in that they represent geological and geomorphological events. Traditionally, geologists and geomorphologists view paleosurfaces differently. Geologists look into the endogenic or constructive processes occurring to create that surface, such as crustal uplift and igneous activity. The stratigraphic record is valued by geologists, allowing for a broader range of surface types to be considered. When paleosurfaces are viewed by geomorphologists, however, the exogenic or deconstructive processes are considered, because geomorphologists are primarily concerned with erosional and weathering processes. Geomorphologist Richard Huggett lists paleosurfaces as one of various categorizations of paleoplains.
**Lutetium phthalocyanine**
Lutetium phthalocyanine:
Lutetium phthalocyanine (LuPc2) is a coordination compound derived from lutetium and two phthalocyanines. It was the first known example of a molecule that is an intrinsic semiconductor. It exhibits electrochromism, changing color when subject to a voltage.
Structure:
LuPc2 is a double-decker sandwich compound consisting of a Lu3+ ion coordinated to the conjugate bases of two phthalocyanines. The rings are arranged in a staggered conformation. The extremities of the two ligands are slightly distorted outwards. The complex features a non-innocent ligand, in the sense that the macrocycles carry an extra electron. It is a free radical with the unpaired electron sitting in a half-filled molecular orbital between the highest occupied and lowest unoccupied orbitals, allowing its electronic properties to be finely tuned.
Properties:
LuPc2, along with many substituted derivatives like the alkoxy-methyl derivative Lu[(C8H17OCH2)8Pc]2, can be deposited as a thin film with intrinsic semiconductor properties; said properties arise due to its radical nature and its low reduction potential compared to other metal phthalocyanines. This initially green film exhibits electrochromism; the oxidized form LuPc+2 is red, whereas the reduced form LuPc−2 is blue and the next two reduced forms are dark blue and violet, respectively. The green/red oxidation cycle can be repeated over 10,000 times in aqueous solution with dissolved alkali metal halides, before it is degraded by hydroxide ions; the green/blue redox degrades faster in water.
Electrical properties:
LuPc2 and other lanthanide phthalocyanines are of interest in the development of organic thin-film field-effect transistors. LuPc2 derivatives can be selected to change color in the presence of certain molecules, such as in gas detectors; for example, the thioether derivative Lu[(C6H13S)8Pc]2 changes from green to brownish-purple in the presence of NADH.
**Ubiquitin—calmodulin ligase**
Ubiquitin—calmodulin ligase:
In enzymology, a ubiquitin—calmodulin ligase (EC 6.3.2.21) is an enzyme that catalyzes the chemical reaction: n ATP + calmodulin + n ubiquitin ⇌ n AMP + n diphosphate + (ubiquitin)n-calmodulin. The 3 substrates of this enzyme are ATP, calmodulin, and ubiquitin, whereas its 3 products are AMP, diphosphate, and (ubiquitin)n-calmodulin.
This enzyme belongs to the family of ligases, specifically those forming carbon-nitrogen bonds as acid-D-amino-acid ligases (peptide synthases). The systematic name of this enzyme class is calmodulin:ubiquitin ligase (AMP-forming). Other names in common use include ubiquityl-calmodulin synthase, ubiquitin-calmodulin synthetase, ubiquityl-calmodulin synthetase, and uCaM-synthetase.
**Amygdalin**
Amygdalin:
Amygdalin (from Ancient Greek: ἀμυγδαλή amygdalē "almond") is a naturally occurring chemical compound found in many plants, most notably in the seeds (kernels) of apricots, bitter almonds, apples, peaches, cherries and plums, and in the roots of manioc.
Amygdalin is classified as a cyanogenic glycoside, because each amygdalin molecule includes a nitrile group, which can be released as the toxic cyanide anion by the action of a beta-glucosidase. Eating amygdalin will cause it to release cyanide in the human body, and may lead to cyanide poisoning. Since the early 1950s, both amygdalin and a chemical derivative named laetrile have been promoted as alternative cancer treatments, often under the misnomer vitamin B17 (neither amygdalin nor laetrile is a vitamin). Scientific study has found them to not only be clinically ineffective in treating cancer, but also potentially toxic or lethal when taken by mouth due to cyanide poisoning. The promotion of laetrile to treat cancer has been described in the medical literature as a canonical example of quackery, and as "the slickest, most sophisticated, and certainly the most remunerative cancer quack promotion in medical history".
Chemistry:
Amygdalin is a cyanogenic glycoside derived from the aromatic amino acid phenylalanine. Amygdalin and prunasin are common among plants of the family Rosaceae, particularly the genus Prunus, Poaceae (grasses), Fabaceae (legumes), and in other food plants, including flaxseed and manioc. Within these plants, amygdalin and the enzymes necessary to hydrolyze it are stored in separate locations, and only mix as a result of tissue damage. This provides a natural defense system. Amygdalin is contained in stone fruit kernels, such as almonds, apricot (14 g/kg), peach (6.8 g/kg), and plum (4–17.5 g/kg depending on variety), and also in the seeds of the apple (3 g/kg). Benzaldehyde released from amygdalin provides a bitter flavor. Because of a difference in a recessive gene called Sweet kernel [Sk], much less amygdalin is present in nonbitter (or sweet) almond than bitter almond. In one study, bitter almond amygdalin concentrations ranged from 33 to 54 g/kg depending on variety; semibitter varieties averaged 1 g/kg and sweet varieties averaged 0.063 g/kg, with significant variability based on variety and growing region. For one method of isolating amygdalin, the stones are removed from the fruit and cracked to obtain the kernels, which are dried in the sun or in ovens. The kernels are boiled in ethanol; on evaporation of the solution and the addition of diethyl ether, amygdalin is precipitated as minute white crystals. Natural amygdalin has the (R)-configuration at the chiral phenyl center. Under mild basic conditions, this stereogenic center isomerizes; the (S)-epimer is called neoamygdalin. Although the synthesized version of amygdalin is the (R)-epimer, the stereogenic center attached to the nitrile and phenyl groups easily epimerizes if the manufacturer does not store the compound correctly. Amygdalin is hydrolyzed by intestinal β-glucosidase (emulsin) and amygdalin beta-glucosidase (amygdalase) to give gentiobiose and L-mandelonitrile. Gentiobiose is further hydrolyzed to give glucose, whereas mandelonitrile (the cyanohydrin of benzaldehyde) decomposes to give benzaldehyde and hydrogen cyanide. Hydrogen cyanide in sufficient quantities (allowable daily intake: ~0.6 mg) causes cyanide poisoning, which has a fatal oral dose range of 0.6–1.5 mg/kg of body weight.
Laetrile:
Laetrile (patented 1961) is a simpler semisynthetic derivative of amygdalin. Laetrile is synthesized from amygdalin by hydrolysis. The usual preferred commercial source is from apricot kernels (Prunus armeniaca). The name is derived from the separate words "laevorotatory" and "mandelonitrile". Laevorotatory describes the stereochemistry of the molecule, while mandelonitrile refers to the portion of the molecule from which cyanide is released by decomposition.
A 500 mg laetrile tablet may contain between 2.5 and 25 mg of hydrogen cyanide. Like amygdalin, laetrile is hydrolyzed in the duodenum (alkaline) and in the intestine (enzymatically) to D-glucuronic acid and L-mandelonitrile; the latter hydrolyzes to benzaldehyde and hydrogen cyanide, which in sufficient quantities causes cyanide poisoning. Claims for laetrile were based on three different hypotheses: The first hypothesis proposed that cancerous cells contained copious beta-glucosidases, which release HCN from laetrile via hydrolysis. Normal cells were reportedly unaffected, because they contained low concentrations of beta-glucosidases and high concentrations of rhodanese, which converts HCN to the less toxic thiocyanate. Later, however, it was shown that both cancerous and normal cells contain only trace amounts of beta-glucosidases and similar amounts of rhodanese. The second proposed that, after ingestion, amygdalin was hydrolyzed to mandelonitrile, transported intact to the liver and converted to a beta-glucuronide complex, which was then carried to the cancerous cells, hydrolyzed by beta-glucuronidases to release mandelonitrile and then HCN. Mandelonitrile, however, dissociates to benzaldehyde and hydrogen cyanide, and cannot be stabilized by glycosylation. Finally, the third asserted that laetrile is the discovered vitamin B-17, and further suggests that cancer is a result of "B-17 deficiency". It postulated that regular dietary administration of this form of laetrile would, therefore, actually prevent all incidences of cancer. There is no evidence supporting this conjecture in the form of a physiologic process, nutritional requirement, or identification of any deficiency syndrome. The term "vitamin B-17" is not recognized by the Committee on Nomenclature of the American Institute of Nutrition Vitamins. Ernst T. Krebs (not to be confused with Hans Adolf Krebs, the discoverer of the citric acid cycle) branded laetrile as a vitamin in order to have it classified as a nutritional supplement rather than as a pharmaceutical.
History of laetrile Early usage Amygdalin was first isolated in 1830 from bitter almond seeds (Prunus dulcis) by Pierre-Jean Robiquet and Antoine Boutron-Charlard. Liebig and Wöhler found three hydrolysis products of amygdalin: sugar, benzaldehyde, and prussic acid (hydrogen cyanide, HCN). Later research showed that sulfuric acid hydrolyzes it into D-glucose, benzaldehyde, and prussic acid, while hydrochloric acid gives mandelic acid, D-glucose, and ammonia. In 1845 amygdalin was used as a cancer treatment in Russia, and in the 1920s in the United States, but it was considered too poisonous. In the 1950s, a purportedly non-toxic, synthetic form was patented for use as a meat preservative, and later marketed as laetrile for cancer treatment. The U.S. Food and Drug Administration prohibited the interstate shipment of amygdalin and laetrile in 1977. Thereafter, 27 U.S. states legalized the use of amygdalin within those states.
Subsequent results In a 1977 controlled, blinded trial, laetrile showed no more activity than placebo. Subsequently, laetrile was tested on 14 tumor systems without evidence of effectiveness. The Memorial Sloan–Kettering Cancer Center (MSKCC) concluded that "laetrile showed no beneficial effects." Mistakes in an earlier MSKCC press release were highlighted by a group of laetrile proponents led by Ralph Moss, former public affairs official of MSKCC who had been fired following his appearance at a press conference accusing the hospital of covering up the benefits of laetrile. These mistakes were considered scientifically inconsequential, but Nicholas Wade in Science stated that "even the appearance of a departure from strict objectivity is unfortunate." The results from these studies were published all together.
A 2015 systematic review from the Cochrane Collaboration found: The claims that laetrile or amygdalin have beneficial effects for cancer patients are not currently supported by sound clinical data. There is a considerable risk of serious adverse effects from cyanide poisoning after laetrile or amygdalin, especially after oral ingestion. The risk–benefit balance of laetrile or amygdalin as a treatment for cancer is therefore unambiguously negative.
The authors also recommended, on ethical and scientific grounds, that no further clinical research into laetrile or amygdalin be conducted. Given the lack of evidence, laetrile has not been approved by the U.S. Food and Drug Administration or the European Commission.
The U.S. National Institutes of Health evaluated the evidence separately and concluded that clinical trials of amygdalin showed little or no effect against cancer. For example, a 1982 trial by the Mayo Clinic of 175 patients found that tumor size had increased in all but one patient. The authors reported that "the hazards of amygdalin therapy were evidenced in several patients by symptoms of cyanide toxicity or by blood cyanide levels approaching the lethal range." The study concluded "Patients exposed to this agent should be instructed about the danger of cyanide poisoning, and their blood cyanide levels should be carefully monitored. Amygdalin (Laetrile) is a toxic drug that is not effective as a cancer treatment".
Additionally, "No controlled clinical trials (trials that compare groups of patients who receive the new treatment to groups who do not) of laetrile have been reported." The side effects of laetrile treatment are the symptoms of cyanide poisoning. These symptoms include: nausea and vomiting, headache, dizziness, cherry red skin color, liver damage, abnormally low blood pressure, droopy upper eyelid, trouble walking due to damaged nerves, fever, mental confusion, coma, and death.
The European Food Safety Agency's Panel on Contaminants in the Food Chain has studied the potential toxicity of the amygdalin in apricot kernels. The Panel reported, "If consumers follow the recommendations of websites that promote consumption of apricot kernels, their exposure to cyanide will greatly exceed" the dose expected to be toxic. The Panel also reported that acute cyanide toxicity had occurred in adults who had consumed 20 or more kernels and that in children "five or more kernels appear to be toxic".
Laetrile:
Advocacy and legality of laetrile Advocates for laetrile assert that there is a conspiracy between the US Food and Drug Administration, the pharmaceutical industry and the medical community, including the American Medical Association and the American Cancer Society, to exploit the American people, and especially cancer patients. Advocates of laetrile have also repeatedly changed the rationale for its use: first as a treatment for cancer, then as a vitamin, then as part of a "holistic" nutritional regimen, or as a treatment for cancer pain, among others; none of these uses is supported by significant evidence. Despite the lack of evidence for its use, laetrile developed a significant following due to its wide promotion as a "pain-free" treatment of cancer and an alternative to surgery and chemotherapy, which have significant side effects. The use of laetrile led to a number of deaths.
Laetrile:
The FDA and AMA crackdown, begun in the 1970s, effectively escalated prices on the black market, played into the conspiracy narrative and enabled unscrupulous profiteers to foster multimillion-dollar smuggling empires. Some American cancer patients have traveled to Mexico for treatment with the substance, for example at the Oasis of Hope Hospital in Tijuana. The actor Steve McQueen died in Mexico following surgery to remove a stomach tumor, having previously undergone extended treatment for pleural mesothelioma (a cancer associated with asbestos exposure) under the care of William D. Kelley, a de-licensed dentist and orthodontist who claimed to have devised a cancer treatment involving pancreatic enzymes, 50 daily vitamins and minerals, frequent body shampoos, enemas, and a specific diet, as well as laetrile. Laetrile advocates in the United States include Dean Burk, a former chief chemist of the National Cancer Institute cytochemistry laboratory, and national arm wrestling champion Jason Vale, who falsely claimed that his kidney and pancreatic cancers were cured by eating apricot seeds. Vale was convicted in 2004 for, among other things, fraudulently marketing laetrile as a cancer cure. The court also found that Vale had made at least $500,000 from his fraudulent sales of laetrile. In the 1970s, court cases in several states challenged the FDA's authority to restrict access to what plaintiffs claimed were potentially lifesaving drugs. More than twenty states passed laws making the use of laetrile legal. After the unanimous Supreme Court ruling in United States v. Rutherford, which established that interstate transport of the compound was illegal, usage fell off dramatically. The US Food and Drug Administration continues to seek jail sentences for vendors marketing laetrile for cancer treatment, calling it a "highly toxic product that has not shown any effect on treating cancer."
In popular culture:
The Law & Order episode "Second Opinion" is about a nutritional counselor named "Doctor" Haas giving patients laetrile as a cancer treatment for breast cancer as an alternative to getting a mastectomy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Building the Colossus**
Building the Colossus:
Building the Colossus (1994) is the eighth album by American singer-songwriter Happy Rhodes.
Track listing:
All music, lyrics, voices, instruments and arrangements by Happy Rhodes (except as noted in credits).
1. "Hold Me" - 4:40
2. "Just Like Tivoli" - 6:04
3. "Dying" - 5:44
4. "Collective Heart" - 4:43
5. "Building the Colossus" - 4:18
6. "Omar" - 4:47
7. "Pride" - 2:39
8. "You Never Told Me" - 5:03
9. "If I Ever See the Girl Again" - 5:33
10. "Down, Down" - 6:15
11. "Big Dreams, Big Life" - 2:31
12. "Glory" - 6:00
Produced by Happy Rhodes and Kevin Bartlett. Engineered by Pat Tessitore at Cathedral Sound Studios, Rensselaer, NY.
Personnel:
Happy Rhodes: Vocals, Electronic Percussion, Keyboards, Nylon String Guitar, 12-String Guitar, Acoustic Guitar, Synth Organ
Kevin Bartlett: Electric Guitar, E-Bow Guitar, 12-String Guitar, 6-String Guitar, Nukelele Island Guitar, Acoustic Guitar, Electronic Percussion, Bass, Electric Bass, Synth Bass, Wacka-Wacka, Keyboards
David Torn: Electronic Guitar, Electric Guitar, Seismic Anomalic Electric Guitar, Loops, Subliminal Guitar Loop
Jerry Marotta: Drums, Toms, Percussion
Dave Sepowski: Electric Guitar
Chuck D'Aloia: Slide Guitars, Nylon String Guitar
Peter Sheehan: Additional Percussion
Monica Wilson: Cello
Chatter in "Glory" by: Happy Rhodes, Kelly Bird, Karen Campbell, Rachael Cooper, Theresa Burns Parkhurst, Amy Abdou, Abba Rage
**.DS_Store**
.DS_Store:
In the Apple macOS operating system, .DS_Store is a file that stores custom attributes of its containing folder, such as folder view options, icon positions, and other visual information. The name is an abbreviation of Desktop Services Store, reflecting its purpose. It is created and maintained by the Finder application in every folder, and has functions similar to the file desktop.ini in Microsoft Windows. Because it starts with a period character, it is hidden in Finder and many Unix utilities. Its internal structure is proprietary, but has since been reverse-engineered. Starting with macOS 10.12 (build 16A238m), Finder will not display .DS_Store files (even with com.apple.finder AppleShowAllFiles YES set).
Purpose and location:
The file .DS_Store is created in any directory (folder) accessed by the Finder application, even on remote file systems mounted from servers that share files (for example, via the Server Message Block (SMB) protocol or the Apple Filing Protocol (AFP)). Remote file systems, however, can be excluded by operating system settings (such as permissions). Although primarily used by the Finder, these files were envisioned as a more general-purpose store of metadata about the display options of folders, such as icon positions and view settings. For example, on Mac OS X 10.4 "Tiger" and later, the .DS_Store files contain the Spotlight comments of the folder's files. These comments are also stored in the extended file attributes, but Finder does not read those. In earlier Apple operating systems, Finder applications created similar files, but at the root of the volume being accessed, including on foreign file systems, collecting all settings for all files on the volume (instead of having separate files for each respective folder).
Problems:
The complaints of many users prompted Apple to publish a way to disable the creation of these files on remotely mounted network file systems; these instructions do not, however, apply to local drives, including USB flash drives, although there are some workarounds. Since macOS High Sierra (10.13), Apple delays the metadata gathering for .DS_Store for folders sorted alphanumerically, to improve browsing speed. Before Mac OS X 10.5, .DS_Store files were visible on remote filesystems. .DS_Store files may impose additional burdens on a revision control process, since they are frequently changed and can therefore appear in commits unless specifically excluded. .DS_Store files are included in archives, such as ZIP, created by OS X users, along with other hidden files and directories such as the AppleDouble "._" files. .DS_Store files have been known to adversely affect copy operations: if multiple files are selected for file transfer, the copy operation will retroactively cancel all progress upon reaching a (duplicate) .DS_Store file, forcing the user to restart the copy operation from the beginning. Some Google Drive users on macOS reported that .DS_Store files were being flagged for copyright violations. Google stated that it had addressed an issue that "impacted a small number of Drive files" to prevent this from occurring.
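For instance, excluding these files from version control and disabling their creation on network shares are both one-liners; the sketch below assumes Git and a stock macOS setup, and uses the network-volumes preference that Apple has documented:

```
# .gitignore rule: keep Finder metadata out of commits
.DS_Store

# Apple-documented Terminal command: stop Finder writing .DS_Store
# files to remotely mounted network volumes (log out and back in
# for it to take effect)
defaults write com.apple.desktopservices DSDontWriteNetworkStores true
```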
**Benzaldehyde**
Benzaldehyde:
Benzaldehyde (C6H5CHO) is an organic compound consisting of a benzene ring with a formyl substituent. It is the simplest aromatic aldehyde and one of the most industrially useful.
It is a colorless liquid with a characteristic almond-like odor. A component of bitter almond oil, benzaldehyde can be extracted from a number of other natural sources. Synthetic benzaldehyde is the flavoring agent in imitation almond extract, which is used to flavor cakes and other baked goods.
History:
Benzaldehyde was first extracted in 1803 by the French pharmacist Martrès. His experiments focused on elucidating the nature of amygdalin, the poisonous compound found in bitter almonds, the fruit of Prunus dulcis. Further work on the oil by Pierre Robiquet and Antoine Boutron Charlard, two French chemists, produced benzaldehyde. In 1832, Friedrich Wöhler and Justus von Liebig first synthesized benzaldehyde.
Production:
As of 1999, 7000 tonnes of synthetic and 100 tonnes of natural benzaldehyde were produced annually. Liquid-phase chlorination and oxidation of toluene are the main routes. Numerous other methods have been developed, such as the partial oxidation of benzyl alcohol, alkali hydrolysis of benzal chloride, and the carbonylation of benzene. A significant quantity of natural benzaldehyde is produced from cinnamaldehyde obtained from cassia oil by the retro-aldol reaction: the cinnamaldehyde is heated in an aqueous/alcoholic solution between 90 °C and 150 °C with a base (most commonly sodium carbonate or bicarbonate) for 5 to 80 hours, followed by distillation of the formed benzaldehyde. This reaction also yields acetaldehyde. The natural status of benzaldehyde obtained in this way is controversial.
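Schematically, the retro-aldol step amounts to the following overall equation (the base acts catalytically and is omitted):

$$\mathrm{C_6H_5CH{=}CHCHO} + \mathrm{H_2O} \longrightarrow \mathrm{C_6H_5CHO} + \mathrm{CH_3CHO}$$

that is, cinnamaldehyde and water give benzaldehyde and acetaldehyde.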
Occurrence:
Benzaldehyde and similar chemicals occur naturally in many foods. Most of the benzaldehyde that people eat comes from natural plant foods, such as almonds. Almonds, apricots, apples, and cherry seeds contain significant amounts of amygdalin. This glycoside breaks up under enzyme catalysis into benzaldehyde, hydrogen cyanide and two equivalents of glucose.
Benzaldehyde contributes to the scent of oyster mushrooms (Pleurotus ostreatus).
Reactions:
Benzaldehyde is easily oxidized to benzoic acid in air at room temperature, making benzoic acid a common impurity in laboratory samples. Since the boiling point of benzoic acid is much higher than that of benzaldehyde, benzaldehyde may be purified by distillation. Benzyl alcohol can be formed from benzaldehyde by means of hydrogenation. Reaction of benzaldehyde with anhydrous sodium acetate and acetic anhydride yields cinnamic acid, while alcoholic potassium cyanide can be used to catalyze the condensation of benzaldehyde to benzoin. Benzaldehyde undergoes disproportionation upon treatment with concentrated alkali (the Cannizzaro reaction): one molecule of the aldehyde is reduced to benzyl alcohol and another molecule is simultaneously oxidized to benzoic acid.
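Written out with potassium hydroxide assumed as the alkali (any concentrated strong base behaves analogously), the Cannizzaro disproportionation is:

$$2\,\mathrm{C_6H_5CHO} + \mathrm{KOH} \longrightarrow \mathrm{C_6H_5COOK} + \mathrm{C_6H_5CH_2OH}$$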
Reactions:
With diols, including many sugars, benzaldehyde condenses to form benzylidene acetals.
Uses:
Benzaldehyde is commonly employed to confer almond flavor to foods and scented products, including e-cigarette liquids. It is sometimes used in cosmetic products. In industrial settings, benzaldehyde is used chiefly as a precursor to other organic compounds, ranging from pharmaceuticals to plastic additives. The aniline dye malachite green is prepared from benzaldehyde and dimethylaniline. Benzaldehyde is also a precursor to certain acridine dyes. Via aldol condensations, benzaldehyde is converted into derivatives of cinnamaldehyde and styrene. The synthesis of mandelic acid starts with the addition of hydrocyanic acid to benzaldehyde; the resulting cyanohydrin is hydrolysed to mandelic acid (the reaction forms both enantiomers).
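Schematically, and assuming hydrochloric acid for the hydrolysis step, the mandelic acid route is:

$$\mathrm{C_6H_5CHO} + \mathrm{HCN} \longrightarrow \mathrm{C_6H_5CH(OH)CN}$$

$$\mathrm{C_6H_5CH(OH)CN} + 2\,\mathrm{H_2O} + \mathrm{HCl} \longrightarrow \mathrm{C_6H_5CH(OH)COOH} + \mathrm{NH_4Cl}$$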
Uses:
Niche uses Benzaldehyde is used as a bee repellent. A small amount of benzaldehyde solution is placed on a fume board near the honeycombs. The bees then move away from the honeycombs to avoid the fumes. The beekeeper can then remove the honey frames from the beehive with less risk to both bees and beekeeper.
Safety:
As used in food, cosmetics, pharmaceuticals, and soap, benzaldehyde is "generally recognized as safe" (GRAS) by the US FDA and FEMA. This status was reaffirmed after a review in 2005. It is accepted in the European Union as a flavoring agent. Toxicology studies indicate that it is safe and non-carcinogenic in the concentrations used for foods and cosmetics, and may even have anti-carcinogenic (anti-cancer) properties. For a 70 kg human, the lethal dose is estimated at 50 mL. An acceptable daily intake of 15 mg/day has been identified for benzaldehyde by the United States Environmental Protection Agency. Benzaldehyde does not accumulate in human tissues. It is metabolized and then excreted in urine.
**Stacking velocity**
Stacking velocity:
In reflection seismology, stacking velocity, or normal moveout (NMO) velocity, is the value of the seismic velocity obtained from the best fit of the traveltime curve by a hyperbola. The hyperbolic approximation to the traveltime curve (two-way travel time versus offset) is known as normal moveout (NMO). The procedure of finding the best fit on common midpoint (CMP) seismic gathers is known as NMO velocity analysis.
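The hyperbola in question is the standard NMO traveltime relation

$$t^2(x) = t_0^2 + \frac{x^2}{V_{\mathrm{NMO}}^2}$$

where t(x) is the two-way traveltime at offset x, t0 is the zero-offset traveltime, and V_NMO is the stacking (NMO) velocity. Because t² is linear in x², the best-fit velocity can be estimated with a simple linear regression; the sketch below uses synthetic, noise-free picks with assumed values:

```python
import numpy as np

# Synthetic CMP picks from an assumed model: v = 2000 m/s, t0 = 1.0 s
v_true, t0_true = 2000.0, 1.0
x = np.linspace(0.0, 3000.0, 31)            # offsets, m
t = np.sqrt(t0_true**2 + (x / v_true)**2)   # two-way traveltimes, s

# t^2 = t0^2 + x^2 / v^2 is linear in x^2, so fit a straight line
slope, intercept = np.polyfit(x**2, t**2, deg=1)
v_nmo = 1.0 / np.sqrt(slope)  # stacking velocity estimate
t0_est = np.sqrt(intercept)   # zero-offset time estimate
print(f"V_NMO ~ {v_nmo:.0f} m/s, t0 ~ {t0_est:.2f} s")
```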
**Comparison of Office Open XML software**
Comparison of Office Open XML software:
The Office Open XML format (OOXML) is an open and free document file format for saving and exchanging editable office documents such as text documents (including memos, reports, and books), spreadsheets, charts, and presentations. The following tables list applications supporting a version of the Office Open XML standard (ECMA-376 and ISO/IEC 29500:2008).
Text documents:
Word processors Word processors listed on a light purple background are discontinued.
Other applications Besides word processors, other programs can and do support the Office Open XML text format. See the list of software that supports Office Open XML for more.
**Bergisch Gladbach Formation**
Bergisch Gladbach Formation:
The Bergisch Gladbach Formation is a geologic formation in Germany. It preserves fossils dating back to the Devonian period.
**Precordial concordance**
Precordial concordance:
Precordial concordance, also known as QRS concordance, is when all precordial leads on an electrocardiogram are either positive (positive concordance) or negative (negative concordance). Negative concordance almost always represents a life-threatening condition called ventricular tachycardia, because no other condition produces such abnormal conduction from the apex of the heart to the upper parts. In positive concordance, however, other rare conditions, such as left-sided accessory pathways or blocks, are also possible.
**Vortex (Amiga video game)**
Vortex (Amiga video game):
Vortex is a 1989 video game for the Amiga. The game is similar to Ebonstar.
Gameplay:
The player must keep anything from entering the swirling vortex in the middle of the screen, using a mouse to control movement and avoid walls.
Development:
The game was developed by Visionary Design Technologies (VDT for short), a company based in Canada that was founded by Randy Linden. It was announced in May 1989. The title was written by Andy Hook, who also worked on Datastorm, a game that was also developed by VDT.
Reception:
Juris Graney of the Australian Commodore and Amiga Review said: "Overall the game is great. Sound effects and music are tastefully done and the graphics are excellent. If you, like me, are getting a little tired of the continuous line of look-alike punch-em-ups and shoot-em-ups that are being paraded to us, then Vortex may be just what you are looking for".
**MW DX**
MW DX:
MW DX, short for mediumwave DXing, is the hobby of receiving distant mediumwave (also known as AM) radio stations. MW DX is similar to TV and FM DX in that broadcast band (BCB) stations are the reception targets. However, the nature of the lower frequencies (530 – 1710 kHz) used by mediumwave radio stations is very much different from that of the VHF and UHF bands used by FM and TV broadcast stations, and therefore involves different receiving equipment, signal propagation, and reception techniques.
Propagation:
During the daytime, medium and high-powered mediumwave AM radio stations have a normal reception range of about 20 to 250 miles (32 to 400+ km), depending on the transmitter power, location, and the quality of the receiving equipment, including the amount of man-made and natural electromagnetic noise present. Long-distance reception is normally impeded by the D layer of the ionosphere, which during the daylight hours absorbs signals in the mediumwave range.
Propagation:
As the sun sets, the D layer weakens, allowing medium wave radio waves from such stations to bounce off the F layer of the ionosphere, producing reliable long-distance reception of (especially) high-powered stations up to about 1,200 miles (2,000 km) away on a nightly basis. Aside from the more or less regular reception of certain high-powered transmitters, variable conditions allow reception of different stations at different times - for example, on one night a medium-powered broadcaster from Cleveland, Ohio may be audible in Duluth, Minnesota, but not on the following night. Much of the hobby consists of trying to receive and log as many of these stations as possible, identifying target stations and frequencies to listen for.
Propagation:
Near or on the coastlines, trans-oceanic reception is quite common and a favored target of DXers in those areas. Very distant inter-continental DX from stations several thousands of miles away is possible even far inland, but may require exceptionally good conditions and a good receiver and antenna on the listening side.
DX stations evaporate from the dial as the sun rises. However, sunrise and sunset ("SRS" and "SSS") periods can provide interesting loggings.
MW DX in North America:
In the United States and Canada, stations on the mediumwave dial are spaced at 10 kHz intervals from 520 to 1710 kHz, as prescribed since 1941 by the North American Regional Broadcasting Agreement. The tremendous number of radio stations in this region of the world and the limited number of available frequencies mean that congestion is very common, and DXers may hear two, three, or more stations on the same frequency (especially on certain "graveyard" frequencies where many lower-powered stations operate). The most powerful stations in the two countries are clear-channel stations, which can transmit with 50 kilowatts of power. Examples of stations in this category from the List of clear-channel stations are: WLS in Chicago on 890 kHz, KMOX in St. Louis on 1120 kHz, WSB in Atlanta on 750 kHz, WCCO in Minneapolis on 830 kHz, WWL in New Orleans on 870 kHz, CJBC in Toronto on 860 kHz, WABC in New York City on 770 kHz, WLW in Cincinnati on 700 kHz, WCBS on 880 kHz in New York City, and WTAM in Cleveland on 1100 kHz, all of which can be heard over much of the United States and Canada east of the Rocky Mountains. In the southern half of the United States, several Mexican stations can be heard. Many of these are called border blaster stations because they program in English to reach the American market. Some of these operate with over 100 kW of power and highly directional antennas aimed northward to avoid interference in the rest of Mexico. Many can be heard on a similar night-to-night basis. Many of these stations are also treaty-allocated clear-channel stations, ensuring that there will be no interference or limited interference on the same frequency.
MW DX in North America:
Although some distant listeners may rely on such stations for non-DX purposes, such as to hear a certain talk show or sporting event, DX'ers generally log these stations when they begin the hobby and afterwards pay little attention to them while seeking out new, less powerful and less well-heard stations, often with a few kilowatts of power or less, or unusually distant stations. Especially prized in the former category are receptions of distant traveler information service (TIS) stations, operated by transportation departments to give visitors information. These stations typically run at very low power (limited to 10 watts) and are only intended to cover small areas, but their signals may travel thousands of miles in certain instances. Similarly prized are the tiny radio stations operated by high schools.
MW DX in North America:
On the East Coast of the United States, it is not unusual for DX'ers to hear high-powered European stations from countries such as Spain and Norway; these stations operate at 9 kHz intervals rather than the 10 kHz used in the United States, which helps reduce co-channel interference from domestic stations. Stations from Africa and the Middle East are also often heard. The Pacific Coast of the US provides a similar opportunity with stations from Asian countries and Australia / New Zealand, although a considerably longer distance must be covered. On both coasts, as well as in the middle portion of the country, "Pan-American" DX from Latin American and Caribbean nations is often sought and logged.
MW DX in North America:
The AM expanded band, or "X-Band" as MW DXers often call it (not to be confused with the range of microwave frequencies), runs from 1610 kHz to 1710 kHz. This is a relatively new portion of the mediumwave broadcast spectrum, with the first two applications for frequencies granted in 1997. The lower density of stations in this area of the spectrum, as well as the lack of stations with more than 10 kW of power in the United States, has led many DX'ers to take an interest here.
MW DX in Europe:
Stations in Europe often run higher power than American stations, sometimes several hundreds of kilowatts. Synchronous networks are also commonly used, with local transmitter stations often having less of a local identity than those in the United States and Canada. The wide variety of languages spoken over the DX'ing range, from Spanish to Arabic, adds an element of challenge to DXing in the region. Some stations in Europe have taken to Digital Radio Mondiale transmissions, requiring a receiver capable of demodulating such signals, or a computer loaded with special software coupled to the receiver.
MW DX in Europe:
DX reception of North American stations has been observed on many occasions. CJYQ 930 kHz and VOCM 590 kHz (both from St. John's, Newfoundland and Labrador) are generally the easiest to receive, and their presence is taken as an indication that the reception of more distant stations is possible. North American stations whose frequencies are furthest from the 9 kHz multiples used in Europe are easier to receive, particularly since 24-hour broadcasting is normal in Europe.
MW DX in Asia:
In the southern half of China, Japanese stations, some of which operate with over 200 kW of power, may be heard on a similar night-to-night basis. Many of these stations are also clear-channel stations, ensuring that there will be no interference or limited interference on the same frequency.
Equipment:
While any radio covering the mediumwave (AM radio) band can be used for DX purposes, serious DXers generally invest in a higher-quality receiver, and often a specialised indoor tuned box loop or outdoor longwire antenna.
At the lower end of the price range, a portable radio with a larger-than-normal internal ferrite core antenna designed for long-distance AM radio reception may be used, such as the discontinued GE Superadio, the CC Radio, or the Panasonic RF-2200. The Sony ICF-SW7600G and the newer GR model are also excellent for budget-minded MW DXing.
More serious DXers may spend much more for a tabletop shortwave communications receiver with good performance on the lower mediumwave frequencies using an external antenna, such as the AOR 7030+, Drake R8/R8A/R8B, Icom R-75, or Palstar R-30. Various models by Hallicrafters and Hammarlund, and even home-built models from Heathkit, have been popular.
In recent years, software-defined radios have become more popular for mediumwave DX. Radios like the Microtelecom Perseus and the Elad FDM-S2 can record the entire mediumwave band to a computer hard drive, which can then be played back and tuned later.
With any such receiver, a high-performance loop antenna may be employed, or in the alternative, one or more outdoor longwire Beverage antennas, sometimes many hundreds of meters long. In order to cancel out reception of unwanted stations, some DX listeners employ elaborate phased arrays of multiple Beverage antennas.
For trans-Atlantic or trans-Pacific reception, where the target station is on a 9 kHz rather than a 10 kHz multiple or vice versa, receivers with narrow RF filters are useful in rejecting adjacent broadcasts on the listener's own continent. To combat noise, DXers may use an outboard noise attenuation device, or a radio with built-in digital signal processing capabilities.
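The value of narrow filters follows from the arithmetic of the two channel rasters; this short sketch (band edges as given earlier in this article) lists how far the first few European channels sit from the nearest domestic 10 kHz channel:

```python
# Compare the European 9 kHz channel raster with the American 10 kHz raster;
# non-zero offsets mark "split" frequencies that a narrow IF filter can
# separate from adjacent domestic signals.
european = list(range(531, 1603, 9))    # 531-1602 kHz in 9 kHz steps
american = list(range(530, 1711, 10))   # 530-1710 kHz in 10 kHz steps

for f in european[:12]:
    nearest = min(american, key=lambda a: abs(a - f))
    offset = f - nearest
    if offset:  # 0 kHz means the channels coincide (e.g. 540 kHz)
        print(f"{f} kHz sits {offset:+d} kHz from the domestic {nearest} kHz channel")
```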
A personal computer with specialized logging software or simply a paper notebook is used to write logs. Recording devices can be used to archive memorable DX moments, or identify hard-to-hear station receptions after the fact.
**Slush flow**
Slush flow:
A slushflow is a rapid mass movement of water and snow, categorized as a type of debris flow. Slushflows are caused when water reaches a critical concentration in the snowpack due to more water inflow than outflow. The high water concentration weakens the cohesion of the snow crystals and increases the weight of the snowpack. A slushflow is released when the component of the force of gravity parallel to the slope generates a hydraulic pressure gradient exceeding the tensile strength and basal friction of the snowpack. While frequently compared to avalanches, slushflows have some key differences: they are lower in frequency, have higher water content, are more laminar in flow, and have lower velocity. They most commonly occur at higher latitudes, in the winter during high precipitation and in the spring during high snowmelt. Because of their high water content, they can flow down gentle slopes of only a few degrees at low altitudes. They are a significant hazard in Norway and Iceland and have been responsible for the deaths of dozens of people as well as the destruction of buildings and the closure of roads.
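The release condition can be sketched with the usual infinite-slope formulation (a generic simplification, not taken from the source above): the gravity-driven shear stress along the slope is

$$\tau_{\parallel} = \rho\, g\, h\, \sin\theta$$

where ρ is the bulk density of the water-saturated snowpack, g is gravitational acceleration, h is the snowpack thickness, and θ is the slope angle. A slushflow releases when this stress exceeds the combined tensile strength and basal friction of the snowpack; because saturation raises ρ while weakening cohesion, the condition can be met even at the small sin θ of gentle slopes.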
**Subgrouping**
Subgrouping:
Subgrouping in linguistics is the division of a language family into its constituent branches. In standard linguistic theory, subgroupings are determined based on shared innovations between languages.
**Paediatric Glasgow Coma Scale**
Paediatric Glasgow Coma Scale:
The Paediatric Glasgow Coma Scale (British English) or the Pediatric Glasgow Coma Score (American English) or simply PGCS is the equivalent of the Glasgow Coma Scale (GCS) used to assess the level of consciousness of child patients. As many of the assessments for an adult patient would not be appropriate for infants, the Glasgow Coma Scale was modified slightly to form the PGCS. As with the GCS, the PGCS comprises three tests: eye, verbal and motor responses. The three values separately as well as their sum are considered. The lowest possible PGCS (the sum) is 3 (deep coma or death) whilst the highest is 15 (fully awake and aware person). The pediatric GCS is commonly used in emergency medical services.
Paediatric Glasgow Coma Scale:
In patients who are intubated, unconscious, or preverbal, the motor response is considered the most important component of the scale.
Coma scale:
Any combined score of less than 8 represents a significant risk of mortality. A score of 12 or below indicates a severe head injury. A score of less than 8 indicates that intubation and ventilation may be necessary. A score of 6 or below indicates that intracranial pressure monitoring may be necessary.
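As a minimal illustration of how the composite score and the thresholds quoted above combine (the component ranges follow the standard GCS convention of eye 1-4, verbal 1-5, motor 1-6; this is a sketch, not a clinical tool):

```python
def pgcs_assessment(eye: int, verbal: int, motor: int) -> dict:
    """Sum the three PGCS components and apply the thresholds quoted above."""
    assert 1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6
    total = eye + verbal + motor  # ranges from 3 (deep coma) to 15 (fully awake)
    return {
        "total": total,
        "significant_mortality_risk": total < 8,
        "severe_head_injury": total <= 12,
        "consider_intubation_ventilation": total < 8,
        "consider_icp_monitoring": total <= 6,
    }

print(pgcs_assessment(eye=2, verbal=2, motor=3))  # total 7: high-risk flags set
```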
**Footwork (dance)**
Footwork (dance):
Footwork refers to dance technique aspects related to feet: foot position and foot action.
The following aspects of footwork may be considered: Dance technique: proper footwork may be vital for proper posture and movement of a dancer.
Aesthetic value: some foot positions and actions are traditionally considered appealing, while others are considered unappealing, although this depends on the culture.
Artistic expression: sophisticated footwork may in itself be the goal of the dance expression. Different dances place different emphasis on the above aspects.
Ballet:
There are five basic dance positions that are necessary to dance ballet. Each dance move in ballet starts and ends in one of the five positions.
First Position- The backs of the heels touch while the balls of the feet face completely outwards.
Second Position- The same as first position but with the length of a foot in between the heels.
Third Position- Start off in second position and slide one foot back so that the arch of one foot is touching the heel of the other.
Fourth Position- The same as third position but with the heel of one foot at the level of the toes on the other, instead of at the arch, and a foot apart.
Fifth Position- Similar to fourth position but with the toes of each foot touching the heel of the other foot. Complete contact, if possible.
Ballroom:
In a narrow sense, e.g., in descriptions of ballroom dance figures, the term refers to the behavior of the foot when it meets the floor. In particular, it describes which part of the foot is in contact with the floor: ball, heel, flat, toe, high toe, inside/outside edge, etc.
Breaking:
In breakdance, moves performed on one's hands and feet may be referred to as downrock or (especially in the southern United States) as footwork. Typical moves in this type of dance include the "1-step", "2-steps", "3-steps", "4-steps", "5-steps", "6-steps", "coffee grinders", "Valdez", "C-C's", and "front C-C's". Additionally, in breakdancing there exist moves specifically known as footwork which place emphasis on one's foot movement. Examples of such moves include the Indian step, Salsa step, and Crossovers.
Hip Hop:
The two basic movements in hip hop are popping and locking, produced by the way the dancer moves the legs, arms, and torso, contracting and relaxing the muscles. When the dancer flexes, pauses, and stays in the same position, that is considered "locking". When the dancer quickly snaps into another position after the pause of locking, that is considered "popping".
Jazz:
Balance and grace are key components of proficiency in jazz. For that reason, jazz dancers usually have a strong background in ballet. Jazz consists of many walks, turns, and jumps, and personality is shown through the dance as each dancer adds a personal touch to each step. Instead of stopping and balancing in positions, jazz dancers move through them, a quality known as suspension.
Latin:
Bachata- Three steps to one side, pause, then three steps to the other side, pause. This is practiced in a four-beat pattern.
Cha Cha Cha- The basic movement is a step to the front or back, then three quick steps between the feet.
Mambo- Shifting weight between the feet while moving towards the front and then back; a three-beat step.
Merengue- Step left and right while moving the hips when shifting weight.
Salsa- Two quick steps followed by a pause or slow step. Turns are usually added for fun.
**Olfactory nerve**
Olfactory nerve:
The olfactory nerve, also known as the first cranial nerve, cranial nerve I, or simply CN I, is a cranial nerve that contains sensory nerve fibers relating to the sense of smell.
Olfactory nerve:
The afferent nerve fibers of the olfactory receptor neurons transmit nerve impulses about odors to the central nervous system (olfaction). Derived from the embryonic nasal placode, the olfactory nerve is somewhat unusual among cranial nerves because it is capable of some regeneration if damaged. The olfactory nerve is sensory in nature and originates on the olfactory mucosa in the upper part of the nasal cavity. From the olfactory mucosa, the nerve (actually many small nerve fascicles) travels up through the cribriform plate of the ethmoid bone to reach the surface of the brain. Here the fascicles enter the olfactory bulb and synapse there; from the bulbs (one on each side) the olfactory information is transmitted into the brain via the olfactory tract. The fascicles of the olfactory nerve are not visible on a cadaver brain because they are severed upon removal.
Structure:
The specialized olfactory receptor neurons of the olfactory nerve are located in the olfactory mucosa of the upper parts of the nasal cavity. The olfactory nerves consist of a collection of many sensory nerve fibers that extend from the olfactory epithelium to the olfactory bulb, passing through the many openings of the cribriform plate, a sieve-like structure of the ethmoid bone.
Structure:
The sense of smell arises from the stimulation of receptors by small molecules in inspired air of varying spatial, chemical, and electrical properties that reach the nasal epithelium in the nasal cavity during inhalation. These stimulants are transduced into electrical activity in the olfactory neurons, which then transmit these impulses to the olfactory bulb and from there they reach the olfactory areas of the brain via the olfactory tract.
Structure:
The olfactory nerve is the shortest of the twelve cranial nerves and, similar to the optic nerve, does not emanate from the brainstem.
Function:
The afferent nerve fibers of the olfactory receptor neurons transmit nerve impulses about odors to the central nervous system, where they are perceived as smell (olfaction).
The olfactory nerve is special visceral afferent (SVA).
Clinical significance:
Examination Damage to this nerve leads to impairment or total loss (anosmia) of the sense of smell. To test the function of the olfactory nerve simply, each nostril is tested with a pungent odor. If the odor is smelled, the olfactory nerve is likely functioning. On the other hand, the nerve is only one of several reasons that could explain the odor not being smelled. There are olfactory testing packets in which strong odors are embedded into cards, and the responses of the patient to each odor can be determined.
Clinical significance:
Lesions Lesions to the olfactory nerve can occur because of "blunt trauma", such as coup-contrecoup damage, meningitis, and tumors of the frontal lobe of the brain. These injuries often lead to a reduced ability to taste and smell. Lesions of the olfactory nerve do not lead to a reduced ability to sense pain from the nasal epithelium. This is because pain from the nasal epithelium is not carried to the central nervous system by the olfactory nerve - it is carried to the central nervous system by the trigeminal nerve.
Clinical significance:
Aging and smell A decrease in the ability to smell is a normal consequence of human aging, and is usually more pronounced in men than in women. It is often unrecognized in patients, except that they may note a decreased ability to taste (much of taste is actually based on reception of food odor). Some of this decrease results from repeated damage to the olfactory nerve receptors, likely due to repeated upper respiratory infections. Patients with Alzheimer's disease almost always have an abnormal sense of smell when tested.
Clinical significance:
Pathway to the brain Some nanoparticles entering the nose are transported to the brain via the olfactory nerve. This can be useful for nasal administration of medications. It can be harmful when the particles are soot or magnetite in air pollution. In naegleriasis, "brain-eating" amoebae enter through the olfactory mucosa of the nasal tissues and follow the olfactory nerve fibers into the olfactory bulbs and then the brain.
**Reinforced concrete structures durability**
Reinforced concrete structures durability:
The durability design of reinforced concrete structures has recently been introduced in national and international regulations. Structures are required to be designed to preserve their characteristics during the service life, avoiding premature failure and the need for extraordinary maintenance and restoration works. Considerable efforts have therefore been made in recent decades to define useful models describing the degradation processes affecting reinforced concrete structures, to be used during the design stage to assess the material characteristics and the structural layout of the structure.
Service life of a reinforced concrete structure:
Initially, the chemical reactions that normally occur in the cement paste generate an alkaline environment, bringing the solution in the cement paste pores to pH values around 13. In these conditions, passivation of the steel rebar occurs, due to the spontaneous generation of a thin film of oxides able to protect the steel from corrosion. Over time, this thin film can be damaged, and corrosion of the steel rebar starts. The corrosion of steel rebar is one of the main causes of premature failure of reinforced concrete structures worldwide, mainly as a consequence of two degradation processes: carbonation and penetration of chlorides. With regard to the corrosion degradation process, a simple and accredited model for the assessment of the service life is the one proposed by Tuutti in 1982. According to this model, the service life of a reinforced concrete structure can be divided into two distinct phases.
Service life of a reinforced concrete structure:
ti, the initiation time: from the moment the structure is built to the moment corrosion initiates on the steel rebar. More particularly, it is the time required for the aggressive agents (carbon dioxide and chlorides) to penetrate the concrete cover, reach the embedded steel rebar, alter the initial passivation condition of the steel surface and cause corrosion initiation.
Service life of a reinforced concrete structure:
tp, the propagation time: defined as the time from the onset of active corrosion until an ultimate limit state is reached, i.e. until corrosion propagation reaches a limit value corresponding to unacceptable structural damage, such as cracking and detachment of the concrete cover. The identification of initiation time and propagation time is useful to further identify the main variables and processes influencing the service life of the structure, which are specific to each service-life phase and to the degradation process considered.
Service life of a reinforced concrete structure:
Carbonation-induced corrosion The initiation time is related to the rate at which carbonation propagates through the concrete cover. Once carbonation reaches the steel surface, altering the local pH of the environment, the protective thin film of oxides on the steel surface becomes unstable, and corrosion initiates over an extended portion of the steel surface. One of the simplest and most accredited models describing the propagation of carbonation in time considers the penetration depth proportional to the square root of time, following the correlation x = K√t, where x is the carbonation depth, t is time, and K is the carbonation coefficient. Corrosion onset takes place when the carbonation depth reaches the concrete cover thickness, and the initiation time can therefore be evaluated as ti = (c/K)^2, where c is the concrete cover thickness. K is the key design parameter for assessing initiation time in the case of carbonation-induced corrosion. It is expressed in mm/year^1/2 and depends on the characteristics of the concrete and the exposure conditions. The penetration of gaseous CO2 into a porous medium such as concrete occurs via diffusion. The humidity content of the concrete is one of the main factors influencing CO2 diffusion in concrete. If the concrete pores are completely and permanently saturated (for instance in submerged structures), CO2 diffusion is prevented. On the other hand, in completely dry concrete the chemical reaction of carbonation cannot occur. Another influencing factor for the CO2 diffusion rate is concrete porosity. Concrete produced with a higher w/c ratio, or with an incorrect curing process, presents higher porosity in the hardened state and is therefore subject to a higher carbonation rate. The influencing factors concerning the exposure conditions are the environmental temperature, the humidity and the concentration of CO2. The carbonation rate is higher in environments with higher humidity and temperature, and increases in polluted environments such as urban centres and inside enclosed spaces such as tunnels. To evaluate the propagation time in the case of carbonation-induced corrosion, several models have been proposed. In a simplified but commonly accepted method, the propagation time is evaluated as a function of the corrosion propagation rate. If the corrosion rate is considered constant, tp can be estimated as tp = plim/vcorr, where plim is the limit corrosion penetration into the steel and vcorr is the corrosion propagation rate.
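As a quick worked example of the initiation-time formula (the cover depth and carbonation coefficient below are assumed values, chosen only for illustration):

$$t_i = \left(\frac{c}{K}\right)^2 = \left(\frac{30\ \mathrm{mm}}{6\ \mathrm{mm/year^{1/2}}}\right)^2 = 25\ \mathrm{years}$$

so a structure with a 30 mm cover and a moderately permeable concrete would be expected to reach corrosion initiation by carbonation after roughly 25 years.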
Service life of a reinforced concrete structure:
plim must be defined as a function of the limit state considered. Generally, for carbonation-induced corrosion, cracking of the concrete cover is considered as the limit state, and in this case a plim equal to 100 μm is adopted. vcorr depends on the environmental factors in the proximity of the corrosion process, such as the availability of oxygen and water at the depth of the concrete cover. Oxygen is generally available at the steel surface, except in submerged structures. If the pores are constantly fully saturated, a very low amount of oxygen reaches the steel surface and the corrosion rate can be considered negligible. For very dry concrete, vcorr is negligible due to the absence of the water needed for the chemical reaction of corrosion. For intermediate concrete humidity contents, the corrosion rate increases with increasing humidity content. Since the humidity content of a concrete can vary significantly over the year, it is generally not possible to define a constant vcorr; one possible approach is to consider a mean annual value of vcorr. Chloride-induced corrosion The presence of chlorides at the steel surface, above a certain critical amount, can locally break the protective thin film of oxides on the steel surface, even if the concrete is still alkaline, causing a very localized and aggressive form of corrosion known as pitting. Current regulations forbid the use of chloride-contaminated raw materials, so the main factor influencing the initiation time is the rate of chloride penetration from the environment. Modelling this is a complex task, because chloride solutions penetrate concrete through the combination of several transport phenomena, such as diffusion, the capillary effect and hydrostatic pressure. Chloride binding is another phenomenon affecting the kinetics of chloride penetration: part of the total chloride ions can be absorbed or can chemically react with some constituents of the cement paste, leading to a reduction of free chlorides in the pore solution (the chlorides still able to penetrate into the concrete). The ability of a concrete to bind chlorides is related to the cement type, being higher for blended cements containing silica fume, fly ash or furnace slag. Since the modelling of chloride penetration in concrete is particularly complex, a simplified correlation is generally adopted, first proposed by Collepardi in 1972: C(x,t) = Cs [1 − erf(x / (2√(Dt)))], where Cs is the chloride concentration at the exposed surface, x is the chloride penetration depth, D is the chloride diffusion coefficient, and t is time.
Service life of a reinforced concrete structure:
This equation is a solution of Fick's second law of diffusion under the hypotheses that the initial chloride content is zero, that Cs is constant in time over the whole surface, and that D is constant in time and through the concrete cover. With Cs and D known, the equation can be used to evaluate the temporal evolution of the chloride concentration profile in the concrete cover and to evaluate the initiation time as the moment at which the critical chloride threshold (Ccl) is reached at the depth of the steel rebar. However, there are many critical issues related to the practical use of this model. For existing reinforced concrete structures in chloride-bearing environments, Cs and D can be identified by calculating the best-fit curve for measured chloride concentration profiles. From concrete samples retrieved in the field it is therefore possible to define the values of Cs and D for the evaluation of residual service life.
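As a rough numerical sketch of this procedure (all input values below are assumed, not measurements from a real structure), the initiation time is obtained by inverting Collepardi's equation for the time at which C(c, t) = Ccl at the cover depth c:

```python
import numpy as np
from scipy.special import erfinv

# Assumed inputs, for illustration only
c   = 30.0   # concrete cover, mm
Cs  = 3.0    # surface chloride content, % by cement weight
Ccl = 0.4    # critical chloride threshold, % by cement weight
D   = 20.0   # apparent chloride diffusion coefficient, mm^2/year

# Solve Cs * (1 - erf(c / (2*sqrt(D*t)))) = Ccl for t
t_i = (c / (2.0 * np.sqrt(D) * erfinv(1.0 - Ccl / Cs))) ** 2
print(f"chloride-induced initiation time ~ {t_i:.0f} years")  # ~10 years
```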
Service life of a reinforced concrete structure:
On the other hand, for new structures it is more complicated to define Cs and D. These parameters depend on the exposure conditions, on the properties of the concrete, such as porosity (and therefore the w/c ratio and the curing process), and on the type of cement used. Furthermore, for the evaluation of the long-term behaviour of the structure, a critical issue is that Cs and D cannot be considered constant in time, and that the transport of chlorides can be considered pure diffusion only for submerged structures. A further issue is the assessment of Ccl. There are various influencing factors, such as the potential of the steel rebar and the pH of the solution in the concrete pores. Moreover, the initiation of pitting corrosion is a phenomenon of stochastic nature, so Ccl too can be defined only on a statistical basis.
Corrosion prevention:
The durability assessment has been implemented in European design codes since the beginning of the 1990s. Designers are required to include the effects of long-term corrosion of the steel rebar during the design stage, in order to avoid unacceptable damage during the service life of the structure. Different approaches are available for durability design.
Corrosion prevention:
Standard approach This is the standardized method of dealing with durability, also known as the deem-to-satisfy approach, provided by the current European standard EN 206. The designer is required to identify the environmental exposure conditions and the expected degradation process, assessing the correct exposure class. Once this is defined, the design code gives standard prescriptions for the w/c ratio, the cement content, and the thickness of the concrete cover. This approach represents an improvement in the durability design of reinforced concrete structures and is suitable for the design of ordinary structures built with traditional materials (Portland cement, carbon steel rebar) and with an expected service life of 50 years. Nevertheless, it is not considered completely exhaustive in some cases. The simple prescriptions do not allow the design to be optimized for different parts of the structure with different local exposure conditions, nor do they allow the effects on service life of special measures, such as additional protections, to be considered.
Corrosion prevention:
Performance-based approach Performance-based approaches provide a real design for durability, based on models describing the evolution of the degradation processes in time and on the definition of the times at which defined limit states will be reached. To take into account the wide variety of factors influencing service life and their variability, performance-based approaches address the problem from a probabilistic or semi-probabilistic point of view. The performance-based service life model proposed by the European project DuraCrete, and by the FIB Model Code for Service Life Design, is based on a probabilistic approach similar to the one adopted for structural design. Environmental factors are considered as loads S(t), while material properties, such as the resistance to chloride penetration, are considered as resistances R(t). For each degradation process, design equations are set up to evaluate the probability of failure of predefined performances of the structure, with the acceptable probability selected on the basis of the limit state considered. The degradation processes are still described with the models previously defined for carbonation-induced and chloride-induced corrosion, but, to reflect the statistical nature of the problem, the variables are treated as probability distribution curves over time. To assess some of the durability design parameters, the use of accelerated laboratory tests is suggested, such as the so-called Rapid Chloride Migration Test to evaluate the chloride penetration resistance of concrete. Through the application of corrective parameters, the long-term behaviour of the structure under real exposure conditions may be evaluated.
Corrosion prevention:
The use of probabilistic service life models allows a real durability design to be implemented at the design stage of structures. This approach is of particular interest when an extended service life is required (>50 years) or when the environmental exposure conditions are particularly aggressive. However, the applicability of this kind of model is still limited. The main critical issues concern, for instance, the identification of accelerated laboratory tests able to characterize concrete performance, reliable corrective factors to be used for the evaluation of long-term durability performance, and the validation of these models against real long-term durability performance.
**Rice transplanter**
Rice transplanter:
A rice transplanter is a specialized transplanter fitted to transplant rice seedlings onto paddy fields. The two main types of rice transplanter are the riding type and the walking type. The riding type is power-driven and can usually transplant six lines in one pass; the walking type is manually driven and can usually transplant four lines in one pass. Although rice is grown in areas other than Asia, rice transplanters are used mainly in East, Southeast, and South Asia. This is because rice can be grown without transplanting, by simply sowing seeds on the field, and farmers outside Asia prefer this fuss-free method at the expense of reduced yield. A common rice transplanter comprises a seedling tray, like a shed roof, on which a mat-type rice nursery is set; a seedling tray shifter that shifts the seedling tray like a typewriter carriage; and several pickup forks that pick up a seedling from the mat-type nursery on the seedling tray and put the seedling into the earth, as if the seedling were taken between one's fingers. Machine transplanting using rice transplanters requires considerably less time and labour than manual transplanting. It increases the approximate area that a person can plant from 700 to 10,000 square metres per day.
Rice transplanter:
However, rice transplanters are considerably expensive for almost all Asian small-hold farmers. Rice transplanters are popular in industrialized countries where labor costs are high, for example in South Korea. They are now also becoming more popular in South Asian countries because, at transplanting time, labor shortage is at peak levels. Rice transplanters were first developed in Japan in the 1960s, whereas the earliest attempts to mechanize rice transplanting date back to the late 19th century. In Japan, the development and spread of rice transplanters progressed rapidly during the 1970s and 1980s.
**Host controller interface (USB, Firewire)**
Host controller interface (USB, Firewire):
A host controller interface (HCI) is a register-level interface that enables a host controller for USB or IEEE 1394 hardware to communicate with a host controller driver in software. The driver software is typically provided with an operating system of a personal computer, but may also be implemented by application-specific devices such as a microcontroller.
On the expansion card or motherboard controller, this involves much custom logic, with digital logic engines in the motherboard's controller chip, plus analog circuitry managing the high-speed differential signals. On the software side, it requires a device driver (called a Host Controller Driver, or HCD).
IEEE 1394:
Open Host Controller Interface Open Host Controller Interface (OHCI) is an open standard.
IEEE 1394:
When applied to an IEEE 1394 (also known as FireWire, i.LINK or Lynx) card, OHCI means that the card supports a standard interface to the PC and can be used by the OHCI IEEE 1394 drivers that come with all modern operating systems. Because the card has a standard OHCI interface, the OS does not need to know in advance exactly who makes the card or how it works; it can safely assume that the card understands the set of well-defined commands that are defined in the standard protocol.
USB:
Open Host Controller Interface The OHCI standard for USB is similar to the OHCI standard for IEEE 1394, but supports USB 1.1 (full and low speeds) only; so as a result its register interface looks completely different. Compared with UHCI, it moves more intelligence into the controller, and thus is accordingly much more efficient; this was part of the motivation for defining it. If a computer provides non-x86 USB 1.1, or x86 USB 1.1 from a USB controller that is not made by Intel or VIA, it probably uses OHCI (e.g. OHCI is common on add-in PCI Cards based on an NEC chipset). It has many fewer intellectual property restrictions than UHCI. It only supports 32-bit memory addressing, so it requires an IOMMU or a computationally expensive bounce buffer to work with a 64-bit operating system. OHCI interfaces to the rest of the computer only with memory-mapped I/O.
USB:
Universal Host Controller Interface Universal Host Controller Interface (UHCI) is a proprietary interface created by Intel for USB 1.x (full and low speeds). It requires a license from Intel. A USB controller using UHCI does little in hardware and requires a software UHCI driver to do much of the work of managing the USB bus. It only supports 32-bit memory addressing, so it requires an IOMMU or a computationally expensive bounce buffer to work with a 64-bit operating system. UHCI is configured with port-mapped I/O and memory-mapped I/O, and also requires memory-mapped I/O for status updates and for data buffers needed to hold data that needs to be sent or data that was received.
USB:
Enhanced Host Controller Interface The Enhanced Host Controller Interface (EHCI) is a high-speed controller standard applicable to USB 2.0. The earlier UHCI- and OHCI-based systems entailed greater complexity and cost than necessary. Consequently, the USB Implementers Forum (USB-IF) insisted on a public specification for EHCI. Intel hosted EHCI conformance testing, and this helped to prevent the incursion of proprietary features.
USB:
Originally a PC providing high-speed ports had two controllers, one handling low- and full-speed devices and the second handling high-speed devices. Typically such a system had EHCI and either OHCI or UHCI drivers. The UHCI driver provides low- and full-speed interfaces for Intel or VIA chipsets' USB host controllers on the motherboard, or for any VIA discrete host controllers attached to the computer's expansion bus. The OHCI driver provides low- and full-speed functions for the integrated USB host controllers of all other motherboard chipset vendors, or for discrete host controllers attached to the computer's expansion bus. The EHCI driver provides high-speed functions for USB ports on the motherboard or on a discrete USB controller. More recent hardware routes all ports through an internal "rate-matching" hub (RMH), which translates between the high-speed traffic presented to the EHCI controller and the full-speed or low-speed traffic that ports operating at those speeds expect, allowing the EHCI controller to handle these devices.
USB:
The EHCI software interface specification defines both 32-bit and 64-bit versions of its data structures, so it does not need a bounce buffer or IOMMU to work with a 64-bit operating system if a rate-matching hub is implemented to provide full-speed and low-speed connectivity instead of companion controllers using either the UHCI specification or OHCI specification, both of which are 32-bit only specifications.
USB:
Extensible Host Controller Interface Extensible Host Controller Interface (xHCI) is the newest host controller standard that improves speed, power efficiency and virtualization over its predecessors. The goal was also to define a USB host controller to replace UHCI/OHCI/EHCI. It supports all USB device speeds (USB 3.1 SuperSpeed+, USB 3.0 SuperSpeed, USB 2.0 Low-, Full-, and High-speed, USB 1.1 Low- and Full-speed).
USB:
Virtual Host Controller Interface Virtual Host Controller Interface (VHCI) refers to a virtual controller that may export virtual USB devices not backed by physical devices. For instance, on Linux, VHCI controllers are used to expose USB devices from other machines, attached using the USB/IP protocol.
USB4 Host Interface The host interface defined in the USB4 specification. It allows the operating system to manage the USB4 host router for USB, DisplayPort, PCI Express, Thunderbolt, or host-to-host communication.
**Devicetree**
Devicetree:
In computing, a devicetree (also written device tree) is a data structure describing the hardware components of a particular computer so that the operating system's kernel can use and manage those components, including the CPU or CPUs, the memory, the buses and the integrated peripherals.
The device tree was derived from SPARC-based computers via the Open Firmware project. The current Devicetree specification is targeted at smaller systems, but is still used with some server-class systems (for instance, those described by the Power Architecture Platform Reference).
Devicetree:
Personal computers with the x86 architecture generally do not use device trees, relying instead on various auto-configuration protocols (e.g. ACPI) to discover hardware. Systems that use device trees usually pass a static device tree (perhaps stored in EEPROM, or in a NAND device such as eUFS) to the operating system, but can also generate a device tree in the early stages of booting. As an example, Das U-Boot and kexec can pass a device tree when launching a new operating system. On systems with a boot loader that does not support device trees, a static device tree may be installed along with the operating system; the Linux kernel supports this approach.
Devicetree:
The Devicetree specification is currently managed by a community named devicetree.org, which is associated with, among others, Linaro and Arm.
Formats:
A device tree can hold any kind of data as internally it is a tree of named nodes and properties. Nodes contain properties and child nodes, while properties are name–value pairs.
Device trees have both a binary format for operating systems to use and a textual format for convenient editing and management.
Usage:
Linux Given the correct device tree, the same compiled kernel can support different hardware configurations within a wider architecture family. The Linux kernel for the ARC, ARM, C6x, H8/300, MicroBlaze, MIPS, NDS32, Nios II, OpenRISC, PowerPC, RISC-V, SuperH, and Xtensa architectures reads device tree information; on ARM, device trees have been mandatory for all new SoCs since 2012. This can be seen as a remedy to the vast number of forks (of Linux and Das U-Boot) that have historically been created to support (marginally) different ARM boards. The purpose is to move a significant part of the hardware description out of the kernel binary and into the compiled device tree blob, which is handed to the kernel by the boot loader, replacing a range of board-specific C source files and compile-time options in the kernel.

A device tree is specified in a Devicetree Source file (.dts) and is compiled into a Devicetree Blob, or device tree binary (.dtb), file by the Devicetree compiler (DTC). Device tree source files can include other files, referred to as device tree source includes.

It has been customary for ARM-based Linux distributions to include a boot loader that was necessarily customized for specific boards, for example the Raspberry Pi or Hackberry A10. This has created problems for the creators of Linux distributions, as some part of the operating system must be compiled specifically for every board variant, or updated to support new boards. However, some modern SoCs (for example, the Freescale i.MX6) have a vendor-provided boot loader with the device tree on a separate chip from the operating system.

A proprietary configuration file format used for similar purposes, the FEX file format, is a de facto standard among Allwinner SoCs.
Usage:
Windows Windows (except Windows CE) does not use a devicetree (DTB file) as described here. Instead, it uses ACPI to discover and manage devices.
Coreboot The coreboot project makes use of device trees, but they are different from the flattened device trees used in the Linux kernel.
Example:
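Example of Devicetree Source (DTS) format: a minimal listing reconstructed to match the description below (the node names, labels, and property names follow the text; the concrete values, such as the register span, erase-block size, and label string, are illustrative assumptions):

```
/dts-v1/;

/ {
        soc {
                flash_controller: flash-controller@4001e000 {
                        reg = <0x4001e000 0x1000>;

                        flash0: flash@0 {
                                label = "SOC_FLASH";
                                erase-block = <4096>;
                        };
                };
        };
};
```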
In the example above, the line /dts-v1/; signifies version 1 of the DTS syntax.
The tree has four nodes: / (root node), soc (stands for "system on a chip"), flash-controller@4001e000 and flash@0 (instance of flash which uses the flash controller). Besides these node names, the latter two nodes have labels flash_controller and flash0 respectively.
Example:
The latter two nodes have properties, which represent name/value pairs. The property label has string type, the property erase-block has integer type, and the property reg is an array of integers (32-bit unsigned values). Property values can refer to other nodes in the devicetree by their phandles. A phandle for the node labeled flash0 would be written as &flash0, as illustrated below. Phandles are also 32-bit values.
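As a small illustration (the consumer node and its property name here are hypothetical, not part of the example above), a phandle-valued property referring to the flash node would look like:

```
storage-user {
        flash-device = <&flash0>;  /* compiled to the 32-bit phandle of flash0 */
};
```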
Example:
Parts of the node names after the "at" sign (@) are unit addresses. Unit addresses specify a node's address in the address space of its parent node.
The above tree could be compiled by the standard DTC compiler to binary DTB format or assembly. In Zephyr RTOS, however, DTS files are compiled into C header files (.h), which are then used by the build system to compile code for a specific board.
**Libsigc++**
Libsigc++:
libsigc++ is a C++ library for typesafe callbacks.
Libsigc++:
libsigc++ implements a callback system for use in abstract interfaces and general programming. It is one of the earliest implementations of the signals and slots concept built with C++ template metaprogramming, created as an alternative to the use of a meta compiler such as the one found in the signals and slots implementation in Qt. libsigc++ originated as part of the gtkmm project in 1997 and was later rewritten to be a standalone library.

Each signal has a particular function profile, which designates the number of arguments and argument types associated with the callback. Functions and methods are wrapped using template calls to produce function objects (functors), which can be bound to a signal. Each signal can be connected to multiple functors, creating an observer pattern through which a message can be distributed to multiple anonymous listener objects. Reference-counting-based object lifespan tracking is used to disconnect functors from signals as objects are deleted. The use of templates allows compile-time type-safe verification of connections; this strict compile-time checking required the addition of template typecasting adapters that convert a functor's callback profile to match the required signal pattern.

libsigc++ was a natural expansion of the C++ standard library functors to the object tracking necessary to implement the observer pattern. It inspired multiple C++ template-based signal and slot implementations, including the signal implementation used in the Boost C++ libraries. libsigc++ is released as free software under the GNU Lesser General Public License (LGPL).
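A minimal sketch of how this looks in practice, assuming the libsigc++ 3.x API (the signal template syntax differs in the older 2.x series):

```cpp
#include <sigc++/sigc++.h>
#include <iostream>
#include <string>

// A free function whose profile matches the signal below: void(const std::string&).
void on_message(const std::string& msg) {
    std::cout << "free function received: " << msg << '\n';
}

// Deriving from sigc::trackable lets libsigc++ track this object's lifespan,
// so slots bound to its methods disconnect automatically on destruction.
class Listener : public sigc::trackable {
public:
    void on_message(const std::string& msg) {
        std::cout << "member function received: " << msg << '\n';
    }
};

int main() {
    // The template argument is the signal's function profile.
    sigc::signal<void(const std::string&)> sig;

    Listener listener;

    // Multiple functors can be connected to one signal (observer pattern);
    // the connections are type-checked against the profile at compile time.
    sig.connect(sigc::ptr_fun(&on_message));
    sig.connect(sigc::mem_fun(listener, &Listener::on_message));

    // Emitting the signal invokes every connected slot in turn.
    sig.emit("hello");
}
```

Compiled against the library (for example via pkg-config's sigc++-3.0 flags), emitting the signal prints one line per connected slot.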
**Realcompact space**
Realcompact space:
In mathematics, in the field of topology, a topological space is said to be realcompact if it is completely regular Hausdorff and it contains every point of its Stone–Čech compactification which is real (meaning that the quotient field at that point of the ring of real functions is the reals). Realcompact spaces have also been called Q-spaces, saturated spaces, functionally complete spaces, real-complete spaces, replete spaces and Hewitt–Nachbin spaces (named after Edwin Hewitt and Leopoldo Nachbin). Realcompact spaces were introduced by Hewitt (1948).
Properties:
A space is realcompact if and only if it can be embedded homeomorphically as a closed subset in some (not necessarily finite) Cartesian power of the reals, with the product topology. Moreover, a (Hausdorff) space is realcompact if and only if it has the uniform topology and is complete for the uniform structure generated by the continuous real-valued functions (Gillman, Jerison, p. 226).
Properties:
For example, Lindelöf spaces are realcompact; in particular, all subsets of R^n are realcompact.
The (Hewitt) realcompactification υX of a topological space X consists of the real points of its Stone–Čech compactification βX. A topological space X is realcompact if and only if it coincides with its Hewitt realcompactification.
Write C(X) for the ring of continuous real-valued functions on a topological space X. If Y is a real compact space, then ring homomorphisms from C(Y) to C(X) correspond to continuous maps from X to Y. In particular the category of realcompact spaces is dual to the category of rings of the form C(X).
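In one direction the correspondence is simply precomposition; a brief sketch of the standard formulation (under the usual conventions, following Gillman and Jerison):

$$C(f)\colon C(Y)\to C(X),\qquad C(f)(g)=g\circ f \quad\text{for a continuous map } f\colon X\to Y,$$

and realcompactness of Y is what guarantees that every such ring homomorphism from C(Y) to C(X) arises from a unique continuous map f.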
A Hausdorff space X is compact if and only if it is realcompact and pseudocompact (see Engelking, p. 153).
**Thrust-to-weight ratio**
Thrust-to-weight ratio:
Thrust-to-weight ratio is the dimensionless ratio of thrust to weight for a rocket, jet engine, propeller engine, or a vehicle propelled by such an engine; it is an indicator of the performance of the engine or vehicle.
The instantaneous thrust-to-weight ratio of a vehicle varies continually during operation due to progressive consumption of fuel or propellant and in some cases a gravity gradient. The thrust-to-weight ratio based on initial thrust and weight is often published and used as a figure of merit for quantitative comparison of a vehicle's initial performance.
Calculation:
The thrust-to-weight ratio is calculated by dividing the thrust (in newtons) by the weight (in newtons) of the engine or vehicle. The weight in newtons is the mass in kilograms multiplied by the acceleration due to gravity in m/s². The thrust can also be measured in pound-force (lbf), provided the weight is measured in pounds (lb); division using these two values still gives the numerically correct, dimensionless thrust-to-weight ratio. For a valid comparison of the initial thrust-to-weight ratio of two or more engines or vehicles, thrust must be measured under controlled conditions.
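As an illustrative calculation (the figures are hypothetical): for an engine producing 100 kN of thrust on a vehicle of mass 5,000 kg,

$$W = 5000\ \text{kg}\times 9.81\ \text{m/s}^2 \approx 49{,}000\ \text{N},\qquad \frac{T}{W}=\frac{100{,}000\ \text{N}}{49{,}000\ \text{N}}\approx 2.0,$$

a dimensionless value, since both numerator and denominator are in newtons.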
Aircraft:
The thrust-to-weight ratio and lift-to-drag ratio are the two most important parameters in determining the performance of an aircraft.
Aircraft:
The thrust-to-weight ratio varies continually during a flight. Thrust varies with throttle setting, airspeed, altitude, and air temperature, among other factors; weight varies with fuel burn and payload changes. For aircraft, the quoted thrust-to-weight ratio is often the maximum static thrust at sea level divided by the maximum takeoff weight. Aircraft with a thrust-to-weight ratio greater than 1:1 can pitch straight up and maintain airspeed until performance decreases at higher altitude.

A plane can take off even if the thrust is less than its weight because, unlike a rocket, the lifting force is produced by lift from the wings, not directly by thrust from the engine. As long as the aircraft can produce enough thrust to travel at a horizontal speed above its stall speed, the wings will produce enough lift to counter the weight of the aircraft.
Aircraft:
Propeller-driven aircraft For propeller-driven aircraft, the thrust-to-weight ratio can be calculated as follows:

$$\left(\frac{T}{W}\right)_{\text{cruise}} = \frac{550\,\eta_p}{V_{\text{cruise}}}\left(\frac{\text{hp}}{W}\right)_{\text{cruise}}$$

where $\eta_p$ is propulsive efficiency (typically 0.8), hp is the engine's shaft horsepower, V is true airspeed in feet per second, and the factor 550 converts horsepower to foot-pounds-force per second.
Rockets:
The thrust-to-weight ratio of a rocket, or rocket-propelled vehicle, is an indicator of its acceleration expressed in multiples of gravitational acceleration g.

Rockets and rocket-propelled vehicles operate in a wide range of gravitational environments, including the weightless environment. The thrust-to-weight ratio is usually calculated from initial gross weight at sea level on Earth and is sometimes called the thrust-to-Earth-weight ratio. The thrust-to-Earth-weight ratio of a rocket or rocket-propelled vehicle is an indicator of its acceleration expressed in multiples of Earth's gravitational acceleration, g0.

The thrust-to-weight ratio of a rocket improves as the propellant is burned. With constant thrust, the maximum ratio (maximum acceleration of the vehicle) is achieved just before the propellant is fully consumed. Each rocket has a characteristic thrust-to-weight curve, or acceleration curve, not just a scalar quantity.
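In idealized form (assuming constant thrust F and constant propellant mass flow rate, and neglecting thrust variation with altitude), the instantaneous ratio can be sketched as

$$\frac{T}{W}(t)=\frac{F}{\left(m_0-\dot{m}\,t\right)g_0},$$

where $m_0$ is the initial mass and $\dot{m}$ the mass flow rate; the ratio rises monotonically as propellant is consumed and peaks just before burnout.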
Rockets:
The thrust-to-weight ratio of an engine is greater than that of the complete launch vehicle, but is nonetheless useful because it determines the maximum acceleration that any vehicle using that engine could theoretically achieve with minimum propellant and structure attached.
For a takeoff from the surface of the Earth using thrust and no aerodynamic lift, the thrust-to-weight ratio for the whole vehicle must be greater than one. In general, the thrust-to-weight ratio is numerically equal to the g-force that the vehicle can generate, and take-off can occur when the vehicle's g-force exceeds local gravity (expressed as a multiple of g0), as the relation below shows.
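In equation form, for an initial vertical ascent with drag neglected, the net upward acceleration is

$$a = \frac{F}{m}-g_0 = g_0\left(\frac{T}{W}-1\right),$$

which is positive, making lift-off possible, only when T/W exceeds one.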
The thrust-to-weight ratio of rockets typically greatly exceeds that of airbreathing jet engines because the comparatively far greater density of rocket fuel eliminates the need for much engineering material to pressurize it.
Rockets:
Many factors affect thrust-to-weight ratio. The instantaneous value typically varies over the duration of flight with the variations in thrust due to speed and altitude, together with changes in weight due to the amount of remaining propellant, and payload mass. Factors with the greatest effect include freestream air temperature, pressure, density, and composition. Depending on the engine or vehicle under consideration, the actual performance will often be affected by buoyancy and local gravitational field strength.
Examples:
Aircraft, jet and rocket engines, and fighter aircraft (comparison tables omitted). Notes for the jet and rocket engine table: jet thrust is measured at sea level, and the fuel density used in the calculations is 0.803 kg/L. For the metric table, the T/W ratio is calculated by dividing the thrust by the product of the full-fuel aircraft weight and the acceleration of gravity.
The J-10's engine rating is that of the AL-31FN.
**Tiffeneau–Demjanov rearrangement**
Tiffeneau–Demjanov rearrangement:
The Tiffeneau–Demjanov rearrangement (TDR) is the chemical reaction of a 1-aminomethyl-cycloalkanol with nitrous acid to form an enlarged cycloketone.
Tiffeneau–Demjanov rearrangement:
The Tiffeneau–Demjanov ring expansion, Tiffeneau–Demjanov rearrangement, or TDR, provides an easy way to enlarge amino-substituted cycloalkanes and cycloalkanols by one carbon. Ring sizes from cyclopropane through cyclooctane are able to undergo Tiffeneau–Demjanov ring expansion with some degree of success. Yields decrease as the initial ring size increases, and the ideal use of the TDR is for the synthesis of five-, six-, and seven-membered rings. A principal synthetic application of the Tiffeneau–Demjanov ring expansion is to bicyclic or polycyclic systems. Several reviews on this reaction have been published.
Discovery:
The reaction now known as the Tiffeneau–Demjanov rearrangement (TDR) was discovered in two steps. The first occurred in 1901, when Russian chemist Nikolai Demyanov discovered that aminomethylcycloalkanes produce novel products upon treatment with nitrous acid. When this product was identified in 1903 as the ring-expanded alcohol, the term Demjanov rearrangement was coined.
The Demjanov rearrangement itself has since been used successfully in industry and in synthetic organic chemistry. However, its scope is limited: it is best suited to expanding four-, five-, and six-membered aminomethylcycloalkanes. Moreover, alkenes and unexpanded cyclic alcohols form as by-products, and yields diminish as the starting cycloalkane becomes larger.
A discovery by French scientists a few years before World War II produced the modern TDR. In 1937, Tiffeneau, Weill, and Tchoubar published in Comptes Rendus their finding that 1-aminomethylcyclohexanol converts readily to cycloheptanone upon treatment with nitrous acid.
Discovery:
Perhaps because such a large ring was being expanded, the authors did not immediately relate the reaction to the Demjanov rearrangement. Instead, they envisioned that it was similar to one discovered by Wallach in 1906, in which, upon oxidation with permanganate, cycloglycols dehydrate to yield an aldehyde via an epoxide intermediate. The authors postulated that deamination produced a similar epoxide intermediate that subsequently formed a ring-enlarged cycloketone. In the time that followed, however, scientists began to realize that these reactions were related, and by the early 1940s the TDR was part of the organic vernacular. Tiffeneau's discovery enlarged the synthetic scope of the Demjanov rearrangement, as seven- and eight-carbon rings could now be enlarged. Since the resulting cycloketone could easily be converted to a cyclic amino alcohol again, the new method quickly became popular among organic chemists.
Basic mechanism:
The basic reaction mechanism is a diazotization of the amino group by nitrous acid, followed by expulsion of nitrogen and formation of a primary carbocation. A rearrangement reaction with ring expansion forms a more stable oxonium ion, which is deprotonated.
Early development of mechanism:
Although chemists at the time knew very well what the product of a symmetrical 1-aminomethylcycloalkanol would be when exposed to nitrous acid, there was significant debate over the reaction's mechanism that lasted into the 1980s. Scientists were puzzled by the array of products they obtained when the reaction was performed on unsymmetrical 1-aminomethylcycloalkanols and bridged cyclic systems. Even today, experiments continue that are designed to shed light on the more subtle mechanistic features of this reaction and to increase yields of the desired expanded products.
Early development of mechanism:
In 1960, Peter A. S. Smith and Donald R. Baer, both of the University of Michigan, published a treatise on the TDR. The mechanism proposed within provides an excellent perspective on scientists' understanding of the TDR at that time.
Early development of mechanism:
The mechanism proposed by Baer and Smith was the summation of several sources of information. Since the early 1950s, it had been postulated by many that the TDR mechanism involved a carbonium ion. However, a major breakthrough in the development of the TDR mechanism came with the improved understanding of the phenomenon behind amine groups reacting with nitrous acid. Meticulous kinetic studies throughout the late 1950s led scientists to believe that nitrous acid reacts with an amine by first producing a nitrous acid derivative, potentially N2O3. While this derivative would prove incorrect as it relates to TDR, scientists of the time still correctly came to the conclusion that the derivative would react with the amine to produce the diazonium ion. The inferred instability of this diazonium ion gave solid evidence for the existence of a carbocation in the TDR mechanism.
Early development of mechanism:
Another piece of information with implications for the mechanism of the TDR was the simple fact that the reaction proceeds more easily under the conditions discovered by Tiffeneau. With an alcohol on the carbinol carbon of the reagent, reaction rates and yields are much improved over those of the simple Demjanov rearrangement, and few unwanted by-products, such as olefins, are formed. These observations were the center around which Smith and Baer's mechanism was constructed. It is easy to see that hydrogen's presence would mean that hydride shifts occur in competition with carbon shifts during the course of the reaction; moreover, such a shift is likely, as it would move positive charge from a 1° carbon to a 3° carbon. In a mildly basic solvent such as water, this new intermediate could easily produce an olefin via an E1-like reaction.
Early development of mechanism:
Such olefins are typically seen in simple Demjanov rearrangements but not in the TDR. The alcohol's presence explains why this E1 reaction does not occur: having an alcohol present puts the developing positive charge of the ring-enlarged intermediate next to an oxygen, which is more favorable than hydrogen because oxygen can lend electron density to the carbonium ion via resonance.
Early development of mechanism:
This again favors ring expansion and further accounts for the higher yields of the TDR over the Demjanov rearrangement.
Smith and Baer's mechanism was also able to account for other observations of the time. Tiffeneau–Demjanov rearrangements of 1-aminomethylcycloalkanols with alkyl substitutions on the aminomethyl side chain had been accomplished by many scientists before 1960. Smith and Baer investigated how such substitution affects the TDR by synthesizing various 1-hydroxycyclohexylbenzylamines and exposing them to TDR conditions.
Early development of mechanism:
Seeing as six-membered rings are routinely enlarged by the TDR, one might expect the reaction to occur. Instead of the anticipated ring enlargements, however, only diols are seen as products. Five-membered analogues of the above substituted reagents do enlarge under TDR conditions, and alkyl substitutions, as opposed to aryl substitutions, result in diminished TDRs. Smith and Baer asserted that these observations support their mechanism: since substitution stabilizes the carbonium ion formed after deamination, the resulting carbonium ion is more likely to react with a nucleophile present (water in this case) and not undergo rearrangement. Five-membered rings rearrange because ring strain encourages the maneuver; this strain makes the carbocation unstable enough to cause a carbon to shift.
Early development of mechanism:
Problems with the early mechanism As definitive as Smith and Baer's early mechanism seems, there are several phenomena it did not account for, mainly involving TDR precursors that have alkyl substituents on the ring. When such a substituent is placed on the ring so as to keep the molecule symmetric, one product is formed upon exposure to TDR conditions. However, if the alkyl group is placed on the ring so as to make the molecule unsymmetric, several products can form.
Early development of mechanism:
The principal method for synthesizing the starting amino alcohols is the addition of cyanide anion to a cyclic ketone; the resulting hydroxynitrile is then reduced, forming the desired amino alcohol. This method forms diastereomers, possibly affecting the regioselectivity of the reaction. For nearly all asymmetric precursors, one product isomer is formed preferentially over another. As the TDR was routinely being used to synthesize various steroids and bicyclic compounds, its precursors were rarely symmetric, and as a result much time was spent identifying and separating products. At the time, this phenomenon baffled chemists: because of spectroscopic and separation limitations, it was very difficult to probe this aspect of the TDR in a sophisticated way. Most believed, however, that the preferential product formation was governed by the migratory aptitudes of competing carbons and/or by steric control. Migratory aptitude referred to the possibility that the preferred product resulted from one carbon migrating in preference to another owing to an inherent stability difference; this possibility was the belief and research subject of earlier scientists, including Marc Tiffeneau himself. By the early 1960s, however, more and more scientists were starting to think that steric factors were the driving force behind the reaction's selectivity.
Early development of mechanism:
Sterics and stereochemistry in the mechanism As chemists continued to probe this reaction with increasingly advanced technology and methods, other factors were proposed as controlling product formation from unsymmetrical amino alcohols. In 1963, Jones and Price of the University of Toronto demonstrated how remote substituents in steroids play a role in product distribution. In 1968, Carlson and Behn of the University of Kansas discovered that experimental conditions also play a role, establishing that in ring extension via the TDR, the initial temperature and the concentrations of reagents all affect the eventual product distribution. Other avenues of the TDR were thus being explored and charted.
Early development of mechanism:
However, Carlson and Behn did manage to report a significant breakthrough in the realm of sterics and migratory aptitudes as they relate to the TDR. As might be expected on electronic grounds, the more highly substituted carbon should migrate preferentially over a less substituted carbon; this is not always seen, and accounts of migratory aptitudes often show fickle preferences. The authors thus asserted that such aptitudes are of little importance. Sterically, thanks chiefly to improved spectroscopic methods, they were able to confirm that having the amine group equatorial to the alkane ring corresponded to drastically different product yields.
Early development of mechanism:
According to the authors, the preferential formation of D from A does not reflect a preferred conformation of A. Their modeling indicates that A and B are initially equally likely to become C. They conclude that a steric interaction must develop in the transition state during migration that makes A preferentially form D when exposed to TDR conditions. The idea that sterics played a role during migration, and not just at the beginning of the reaction, was new. Carlson and Behn speculated that the factor might lie in transannular hydrogen interactions along the migration path; their modeling suggested that this interaction may be more severe for A forming C. However, they were not certain enough to offer this as a definitive explanation, conceding that more subtle conformational and/or electronic effects could be at work as well.
Early development of mechanism:
At this point, the mechanism proposed by Smith and Baer seemed to be on its way out: if steric interactions relating to carbon migration during the reaction's transition state were important, this did not support the carbocation envisioned by Smith and Baer. Research on bicyclics during the 1970s shed even more light on the TDR mechanism. In 1973, McKinney and Patel of Marquette University published an article in which they used the TDR to expand norcamphor and dehydronorcamphor. Two of their observations are important; one centers on the expansion of exo- and endo-2-norbornylcarbinyl systems.
Early development of mechanism:
One might expect in (I) that A would migrate in preference to B, since such a migration would place the developing charge on a 2° carbon and pass the species through a more favorable chair-like intermediate. This is not seen: only 38% of the product exhibits A migration. To account for why A migration is not dominant in the expansion of I, the authors assert a least-movement argument. Simply put, migration of the non-bridgehead carbon involves the least total atom movement, which plays into the energetics of the reaction. This least-movement consideration would prove important in the TDR mechanism, as it accounts for products whose intermediates pass through unfavorable conformations.
Early development of mechanism:
However, McKinney and Patel also confirm that traditional factors such as developing positive charge stability still play a crucial role in the direction of expansion. They accomplish this by expanding 2-norbornenyl carbinyl systems.
Early development of mechanism:
By adding a simple double bond to these systems, the authors saw a significant increase in the migration of the bridgehead carbon A (50% in this case). They attribute this jump in migration to the fact that migration of the bridgehead carbon allows the developing positive charge to be stabilized by resonance from the double bond. Therefore, carbocation and positive-charge effects cannot be ignored in discussing the factors that influence product distribution.
Later mechanistic studies:
As evidence continued to mount in the years after Smith and Baer's 1960 publication, it was obvious that the TDR mechanism needed revisiting. The new mechanism would have to de-emphasize the carbocation, since other factors influence ring expansion: the orientation of the developing diazonium ion, possible steric interactions during the reaction, and atomic movement would all have to be included. In 1982, Cooper and Jenner published such a mechanism, which has stood to this day as the current understanding of the TDR.
Later mechanistic studies:
The most obvious departure from Smith and Baer's mechanism is that Cooper and Jenner represent the diazonium departure and subsequent alkyl shift as a concerted step. Such a feature allows for sterics, orientations, and atomic movement to be factors. However, distribution of positive charge is still important in this mechanism as it does explain much of the observed behavior of the TDR. Another observation that should be made is that there is no preference given to these aforementioned factors in the mechanism. That is to say, even today it is very difficult to predict which carbon will migrate preferentially. Indeed, the TDR has become more useful as spectroscopic and separation techniques have advanced. Such advancements allows for the quick identification and isolation of desired products.
Later mechanistic studies:
Since the mid-1980s, most organic chemists have resigned themselves to accepting the fact that the TDR is governed by several factors that often seem fickle in importance. As a result, much research is now being directed towards the development of techniques to increase migration of a specific carbon. One example of such an effort has recently come out of the University of Melbourne.
Later mechanistic studies:
Noting that group 4 metal substituents can stabilize positive charge that is β to them, Chow, McClure, and White attempted in 2004 to use this effect to direct TDRs. They hypothesized that placing a trimethylsilyl group β to a carbon that can migrate would increase that carbon's migration.
Later mechanistic studies:
Their results show that this does occur to a small extent. The authors believe that the reason why the carbon migration increases only slightly is that positive charge is not a large factor in displacing the diazonium ion. Since this ion is such a good leaving group, it requires very little 'push' from the developing carbon-carbon bond. Their results again highlight the fact that multiple factors determine the direction of carbon migration.
**G protein-gated ion channel**
G protein-gated ion channel:
G protein-gated ion channels are a family of transmembrane ion channels in neurons and atrial myocytes that are directly gated by G proteins.
Overview of mechanisms and function:
Generally, G protein-gated ion channels are specific ion channels located in the plasma membrane of cells that are directly activated by a family of associated proteins. Ion channels allow for the selective movement of certain ions across the plasma membrane in cells. More specifically, in nerve cells, along with ion transporters, they are responsible for maintaining the electrochemical gradient across the cell.
Overview of mechanisms and function:
G proteins are a family of intracellular proteins capable of mediating signal transduction pathways. Each G protein is a heterotrimer of three subunits: the α-, β-, and γ-subunits. The α-subunit (Gα) typically binds the G protein to a transmembrane receptor protein known as a G protein-coupled receptor, or GPCR. This receptor protein has a large extracellular binding domain, which binds its respective ligands (e.g. neurotransmitters and hormones). Once the ligand is bound to its receptor, a conformational change occurs; this change is relayed to the G protein and allows Gα to bind GTP, which leads to yet another conformational change in the G protein, resulting in the separation of the βγ-complex (Gβγ) from Gα. At this point, both Gα and Gβγ are active and able to continue the signal transduction pathway. Different classes of G protein-coupled receptors have many known functions, including the cAMP and phosphatidylinositol signal transduction pathways. A class known as metabotropic glutamate receptors plays a large role in indirect ion channel activation by G proteins. These pathways are activated by second messengers which initiate signal cascades involving various proteins that are important to the cell's response.
Overview of mechanisms and function:
G protein-gated ion channels are associated with a specific type of G protein-coupled receptor. These channels are transmembrane ion channels with selectivity filters and a G protein binding site. The GPCRs associated with G protein-gated ion channels are not involved in signal transduction pathways; they directly activate these ion channels using effector proteins or the G protein subunits themselves. Unlike most effectors, not all G protein-gated ion channels have their activity mediated by the Gα of their corresponding G proteins; for instance, the opening of inwardly rectifying K+ (GIRK) channels is mediated by the binding of Gβγ.

G protein-gated ion channels are primarily found in CNS neurons and atrial myocytes, and affect the flow of potassium (K+), calcium (Ca2+), sodium (Na+), and chloride (Cl−) across the plasma membrane.
Types of G Protein-gated ion channels:
Potassium channels Structure Four G protein-gated inwardly rectifying potassium (GIRK) channel subunits have been identified in mammals: GIRK1, GIRK2, GIRK3, and GIRK4. The GIRK subunits come together to form GIRK ion channels. These ion channels, once activated, allow for the flow of potassium ions (K+) from the extracellular space surrounding the cell across the plasma membrane and into the cytoplasm. Each channel consists of domains that span the plasma membrane, forming the K+-selective pore region through which the K+ ions will flow. Both the N- and C-terminal ends of the GIRK channels are located within the cytoplasm. These domains interact directly with the βγ-complex of the G protein, leading to activation of the K+ channel. The domains on the N- and C-terminal ends that interact with the G proteins contain certain residues critical for proper activation of the GIRK channel. In GIRK4, the N-terminal residue is His-64 and the C-terminal residue is Leu-268; in GIRK1 they are His-57 and Leu-262, respectively. Mutations in these domains make the channel insensitive to the βγ-complex and therefore reduce the activation of the GIRK channel.

The four GIRK subunits are 80-90% similar in their pore-forming and transmembrane domains, a feature accounted for by the similarities in their structures and sequences. GIRK2, GIRK3, and GIRK4 share an overall identity of 62% with each other, while GIRK1 only shares 44% identity with the others. Because of their similarity, the GIRK channel subunits can come together easily to form heteromultimers (a protein with two or more different polypeptide chains). GIRK1, GIRK2, and GIRK3 show abundant and overlapping distribution in the central nervous system (CNS) while GIRK1 and GIRK4 are found primarily in the heart. GIRK1 combines with GIRK2 in the CNS and GIRK4 in the atrium to form heterotetramers; each final heterotetramer contains two GIRK1 subunits and two GIRK2 or GIRK4 subunits. GIRK2 subunits can also form homotetramers in the brain, while GIRK4 subunits can form homotetramers in the heart. GIRK1 subunits have not been shown to be able to form functional homotetramers. Though GIRK3 subunits are found in the CNS, their role in forming functional ion channels is still unknown.
Types of G Protein-gated ion channels:
Subtypes and respective functions GIRKs found in the heart One G protein-gated potassium channel is the inward-rectifying potassium channel (IKACh) found in cardiac muscle (specifically, the sinoatrial node and atria), which contributes to the regulation of heart rate. These channels are almost entirely dependent on G protein activation, making them unique among G protein-gated channels. Activation of the IKACh channels begins with release of acetylcholine (ACh) from the vagus nerve onto pacemaker cells in the heart. ACh binds to the M2 muscarinic acetylcholine receptors, which interact with G proteins and promote the dissociation of the Gα subunit and Gβγ-complex. IKACh is composed of two homologous GIRK channel subunits: GIRK1 and GIRK4. The Gβγ-complex binds directly and specifically to the IKACh channel through interactions with both the GIRK1 and GIRK4 subunits. Once the ion channel is activated, K+ ions flow out of the cell and cause it to hyperpolarize. In its hyperpolarized state, the cell cannot fire action potentials as quickly, which slows the heartbeat.
Types of G Protein-gated ion channels:
GIRKs found in the brain The G protein inward rectifying K+ channel found in the CNS is a heterotetramer composed of GIRK1 and GIRK2 subunits and is responsible for maintaining the resting membrane potential and excitability of the neuron. Studies have shown the largest concentrations of the GIRK1 and GIRK2 subunits to be in the dendritic areas of neurons in the CNS. These areas, which are both extrasynaptic (exterior to a synapse) and perisynaptic (near a synapse), correlate with the large concentration of GABAB receptors in the same areas. Once the GABAB receptors are activated by their ligands, they allow the G protein to dissociate into its individual α-subunit and βγ-complex, which in turn activates the K+ channels. The G proteins couple the inward rectifying K+ channels to the GABAB receptors, mediating a significant part of the GABA postsynaptic inhibition.

Furthermore, GIRKs have been found to play a role in a group of serotonergic neurons in the dorsal raphe nucleus, specifically those associated with the neuropeptide hormone orexin. The 5-HT1A receptor, a serotonin receptor and type of GPCR, has been shown to be coupled directly with the α-subunit of a G protein, while the βγ-complex activates GIRK without use of a second messenger. The subsequent activation of the GIRK channel mediates hyperpolarization of orexin neurons, which regulate the release of many other neurotransmitters including noradrenaline and acetylcholine.
Types of G Protein-gated ion channels:
Calcium channels Structure In addition to the subset of potassium channels that are directly gated by G proteins, G proteins can also directly gate certain calcium ion channels in neuronal cell membranes. Although membrane ion channels and protein phosphorylation are typically indirectly affected by G protein-coupled receptors via effector proteins (such as phospholipase C and adenylyl cyclase) and second messengers (such as inositol triphosphate, diacylglycerol, and cyclic AMP), G proteins can short-circuit the second-messenger pathway and gate the ion channels directly. Such bypassing of the second-messenger pathways is observed in mammalian cardiac myocytes and associated sarcolemmal vesicles, in which Ca2+ channels are able to survive and function in the absence of cAMP, ATP, or protein kinase C when in the presence of the activated α-subunit of the G protein. For example, Gα, which is stimulatory to adenylyl cyclase, acts on the Ca2+ channel directly as an effector. This short circuit is membrane-delimited, allowing direct gating of calcium channels by G proteins to produce effects more quickly than the cAMP cascade could. This direct gating has also been found in specific Ca2+ channels in the heart and in skeletal muscle T-tubules.
Types of G Protein-gated ion channels:
Function Several high-threshold, slowly inactivating calcium channels in neurons are regulated by G proteins. Activation of the α-subunits of G proteins has been shown to cause rapid closing of voltage-dependent Ca2+ channels, which hampers the firing of action potentials. This inhibition of voltage-gated calcium channels by G protein-coupled receptors has been demonstrated in the chick dorsal root ganglion, among other cell lines. Further studies have indicated roles for both the Gα and Gβγ subunits in the inhibition of Ca2+ channels. However, the research aimed at defining the involvement of each subunit has not uncovered the specificity or mechanisms by which Ca2+ channels are regulated.
Types of G Protein-gated ion channels:
The acid-sensing ion channel ASIC1a is a specific G protein-gated Ca2+ channel. The upstream M1 muscarinic acetylcholine receptor binds to Gq-class G proteins, and stimulation of this receptor with the agonist oxotremorine methiodide was shown to inhibit ASIC1a currents. ASIC1a currents have also been shown to be inhibited in the presence of oxidizing agents and potentiated in the presence of reducing agents; a decrease and an increase in acid-induced intracellular Ca2+ accumulation were found, respectively.
Types of G Protein-gated ion channels:
Sodium channels Patch clamp measurements suggest a direct role for Gα in the inhibition of fast Na+ current within cardiac cells. Other studies have found evidence for a second-messenger pathway that may indirectly control these channels. Whether G proteins activate Na+ ion channels directly or indirectly has not been determined with complete certainty.
Types of G Protein-gated ion channels:
Chloride channels Chloride channel activity in epithelial and cardiac cells has been found to be G protein-dependent. However, the cardiac channel that is directly gated by the Gα subunit has not yet been identified. As with Na+ channel inhibition, second-messenger pathways cannot be discounted in Cl− channel activation.

Studies on specific Cl− channels show differing roles for G protein activation. G proteins have been shown to directly activate one type of Cl− channel in skeletal muscle. Other studies, in CHO cells, have demonstrated a large-conductance Cl− channel that is activated differentially by CTX- and PTX-sensitive G proteins. The role of G proteins in the activation of Cl− channels is a complex area of ongoing research.
Clinical significance and ongoing research:
Mutations in G proteins associated with G protein-gated ion channels have been shown to be involved in diseases such as epilepsy, muscular diseases, neurological diseases, and chronic pain, among others.

Epilepsy, chronic pain, and addictive drugs such as cocaine, opioids, cannabinoids, and ethanol all affect neuronal excitability and heart rate. GIRK channels have been shown to be involved in seizure susceptibility, cocaine addiction, and increased tolerance for pain induced by opioids, cannabinoids, and ethanol. This connection suggests that GIRK channel modulators may be useful therapeutic agents in the treatment of these conditions: GIRK channel inhibitors may serve to treat addiction to cocaine, opioids, cannabinoids, and ethanol, while GIRK channel activators may serve to treat withdrawal symptoms.
Clinical significance and ongoing research:
Alcohol intoxication Alcohol intoxication has been shown to be directly connected to the actions of GIRK channels. GIRK channels have a hydrophobic pocket that is capable of binding ethanol, the type of alcohol found in alcoholic beverages. When ethanol acts as an agonist, GIRK channels in the brain experience prolonged opening. This causes decreased neuronal activity, the result of which manifests as the symptoms of alcohol intoxication. The discovery of the hydrophobic pocket capable of binding ethanol is significant in the field of clinical pharmacology. Agents that can act as agonists to this binding site can be potentially useful in the creation of drugs for the treatment of neurological disorders such as epilepsy in which neuronal firing exceeds normal levels.
Clinical significance and ongoing research:
Breast cancer Studies have shown that a link exists between channels with GIRK1 subunits and the beta-adrenergic receptor pathway in breast cancer cells responsible for growth regulation of the cells. Approximately 40% of primary human breast cancer tissues have been found to carry the mRNA which codes for GIRK1 subunits. Treatment of breast cancer tissue with alcohol has been shown to trigger increased growth of the cancer cells. The mechanism of this activity is still a subject of research.
Clinical significance and ongoing research:
Down syndrome Altered cardiac regulation is common in adults diagnosed with Down syndrome and may be related to G protein-gated ion channels. The KCNJ6 gene is located on chromosome 21 and encodes the GIRK2 protein subunit of G protein-gated K+ channels. People with Down syndrome have three copies of chromosome 21, resulting in overexpression of the GIRK2 subunit. Studies have found that recombinant mice overexpressing GIRK2 subunits show altered responses to drugs that activate G protein-gated K+ channels. These altered responses were limited to the sinoatrial node and atria, both areas that contain many G protein-gated K+ channels. Such findings could potentially lead to the development of drugs that help regulate the cardiac sympathetic-parasympathetic imbalance in adults with Down syndrome.
Clinical significance and ongoing research:
Chronic atrial fibrillation Atrial fibrillation (abnormal heart rhythm) is associated with shorter action potential duration and is believed to be affected by the G protein-gated K+ channel IK,ACh. The IK,ACh channel, when activated by G proteins, allows the flow of K+ across the plasma membrane and out of the cell; this current hyperpolarizes the cell, thus terminating the action potential. It has been shown that in chronic atrial fibrillation there is an increase in this inwardly rectifying current because of constantly activated IK,ACh channels. The increased current shortens the action potential duration experienced in chronic atrial fibrillation and leads to the subsequent fibrillating of the cardiac muscle. Blocking IK,ACh channel activity could be a therapeutic target in atrial fibrillation and is an area under study.
Clinical significance and ongoing research:
Pain management GIRK channels have been demonstrated in vivo to be involved in opioid- and ethanol-induced analgesia. These specific channels have been the target of recent studies dealing with genetic variance and sensitivity to opioid analgesics due to their role in opioid-induced analgesia. Several studies have shown that when opioids are prescribed to treat chronic pain, GIRK channels are activated by certain GPCRs, namely opioid receptors, which leads to the inhibition of nociceptive transmission, thus functioning in pain relief. Furthermore, studies have shown that G proteins, specifically the Gi alpha subunit, directly activate GIRKs which were found to participate in propagation of morphine-induced analgesia in inflamed spines of mice. Research pertaining to chronic pain management continues to be performed in this field.
**Links LS 2000**
Links LS 2000:
Links LS 2000 is a golf video game developed by Access Software and published by Microsoft. It is part of the Links series and was released in 1999 for Microsoft Windows, and in 2000 for Macintosh. It was followed by Links 2001.
Links LS 2000:
Links LS 2000 was viewed by critics as a minimally upgraded version of its predecessor, Links LS 1999. It was praised for its multiplayer, variety, and game physics, but critics felt that rival golf games such as Jack Nicklaus 6: Golden Bear Challenge were superior, in part because of their inclusion of a golf course designer. Links LS 2000 was the sixth best-selling computer sports game of 1999, with 104,225 units sold.
Links LS 2000:
An add-on program with additional courses, titled Links LS 2000 10-Course Pack, was released in 2000. Links LS Classic, released later in 2000, is a version of Links LS 2000 that includes 21 championship courses.
Gameplay:
Links LS 2000 features six golf courses: St. Andrews Old Course, St. Andrews New Course, and St. Andrews Jubilee; Hawaii's Mauna Kea and Hapuna; and Indiana's Covered Bridge. The game also allows various courses to be imported from previously released Links add-on disks. St. Andrews Old and Mauna Kea were featured in earlier entries in the Links series. The game includes short movies that discuss each course.

Compared to its predecessor, Links LS 1999, the game adds four new golfers, including Arnold Palmer and Fuzzy Zoeller. It features five new game modes, including Fuzzy Zoeller's Wolf Challenge, which is a skins game variant. The game featured new online multiplayer options over its predecessor, including the MSN Gaming Zone, and was compatible with the online LS Tour 2000. The game also introduces a feature called "SkyScape" that allows the player to change the amount of cloud coverage, and includes other adjustable options relating to wind, fog, haze, and camera angles. The player can also create and edit sounds. Links LS 2000 features commentary from David Feherty and Craig Bolerjack. The game includes three different ways to hit a ball, including Easy Swing and the complex PowerStroke; with the latter, the player uses computer mouse motion to perform a shot.

In addition to LS Tour 2000, several other golfing events were held for online users of the game, including tournaments by the Virtual Golfing Association. Other online events included e-World Shotgun 2000 and the World Links Championship.
Development and release:
Links LS 2000 was developed by Access Software. Because of a limited development period, few new features were added to the game relative to its predecessor. Links LS 2000 was completed in September 1999 and released in North America the following month. The game was released as a set of three CDs and was published by Microsoft, which had purchased Access Software earlier in 1999.

Links LS 2000 10-Course Pack is an add-on program for Links LS 2000 and its predecessor, featuring additional courses for tournaments. It was completed in January 2000 and released the following month. Links LS Classic, released in November 2000, is a version of Links LS 2000 that includes 21 championship courses.

In November 2000, a Macintosh version of the game was released. It was ported to Macintosh by Green Dragon Creations and published by MacSoft, also as a set of three CDs.
Contests:
In November 1999, Microsoft launched the "Links LS 2000 Hole-in-One Sweepstakes," in which players would try to make a hole-in-one on the seventh hole of the game's Mauna Kea course. The winner would receive a two-person trip to Hawaii, a five-night stay at the Mauna Kea Beach Hotel, $500 of spending money, and a round of golf on the real Mauna Kea course. In June 2000, Microsoft announced its "Father's Day on the Fairway Sweepstakes", in which two grand-prize winners would compete against each other in real golf and a game of Links LS 2000. The winner would get to play a round of golf with Arnold Palmer, while 100 first-prize winners would receive a free copy of Links LS 2000.
Reception:
The game received favorable reviews according to the review aggregation website GameRankings. Critics viewed it as a minimally upgraded version of its predecessor, and were hopeful that Links 2001 would be more of a substantial update in the series. Tony Wyss of GameSpy described the game as Links LS 1999 but with more multiplayer options, as well as "minimal feature improvements that keep it from being great." Wyss considered it "an extremely difficult game to assess as a reviewer," recommending it for first-time players of a golf video game while stating that owners of Links LS 1999 might want to reconsider before purchasing it. Sports Gaming Network also considered it a difficult game to review because of its similarities to the previous game, and likewise recommended it for beginning computer golfers. William Abner of Computer Games Strategy Plus considered it a "terribly difficult game to rate," praising its gameplay and its features and calling it "one of the finer" golf simulations available, while noting its similarities to the previous game. Paul Rosano of Hartford Courant wrote that because of its $50 price, the game was difficult to recommend to owners of its predecessor, but stated that for players who "haven't updated since 'Links 98' or earlier, there is definitely enough to warrant a purchase."

Tom Ham of PC Accelerator wrote that the game "feels more like an upgrade than a full-fledged product – still not a bad thing." Stephen Poole of GameSpot wrote that the game's additional features "are so unimpressive both in quantity and quality that there's simply not much reason for owners of the previous version to get excited," further stating that the game felt more like an add-on course bundle. Jeff Lackey of Computer Gaming World wrote that the game "appears to be a meager repackaging of its predecessor" with "just the barest minimum of additions necessary to justify the new package and name."

Critics believed that competitors such as Jack Nicklaus 6: Golden Bear Challenge and PGA Championship Golf 1999 Edition were superior to the game, with Poole writing that "the days of simply assuming the latest iteration of Links is the best golf sim around are long gone - especially if the series doesn't begin to evolve more quickly than it has in the past couple of installments." Lackey cited the game's lack of a course designer as one of its disadvantages against competitors, and noted that additional courses could not be downloaded for free, as with rival games. Martin Korda of PC Zone considered the lack of a course designer to be significant, calling its absence almost unforgivable. Sports Gaming Network considered the absence of a course designer to be the game's "biggest, most glaring omission," stating that such a feature "has become virtually mandatory in the golf sim world these days" and that it would have significantly improved the game's replay value, while writing, "Its absence is sorely felt, particularly as this version of the game offers so little else that is new."

The graphics received a mixed reception, with criticism going towards the game's golfer animations and the lack of moving water. Some praised the graphics, including Michael L. House of AllGame, who also praised the SkyScape feature. Ham wrote that the game "is gorgeous and the overall look of the courses is great," but he stated that the digitized golf players resembled cardboard cutouts and that the trees and scenery "look like cheap bitmap leftovers from the Sega Genesis days." Erik Peterson of IGN was critical of the trees and bushes for resembling cardboard cutouts, while stating that they "are so pixelated at times that they are nearly indistinguishable". Peterson also wrote that the golf crowd resembled a series of "blocky wax reproductions". Michael Phillips of Inside Mac Games called the graphics "quite nice, but not perfect," criticizing the pixelated golfers and writing that "it's a tad disappointing to watch clouds that are still and water that doesn't move." Poole stated that the game's digitized graphics looked dated in comparison to rival golf games, writing "it simply looks like a photo and doesn't feel like a golf course." Wyss considered the courses beautiful, but felt the graphics could be better; he stated that the 3D golfer characters looked awkward against the backdrop of the golf courses. Sports Gaming Network called it a "great looking game" but considered its graphics dated, while mentioning minor graphical improvements over the previous game. Michael Lafferty of GameZone considered the graphics average, and stated that the game would not appeal to golfers.

The sound received some praise, although Lafferty considered it average. Philip Michaels of Macworld noted minor glitches in which the sound would cut out at the end of golfing holes. The multiplayer options were praised, as were the variety and the game physics. Abner believed that Links LS 2000 had more realistic putting physics than its predecessor, while Peterson stated that the physics were "probably the best yet seen on a golf-sim." Some criticized the Easy Swing and PowerStroke options respectively for being too simple and too difficult, although Korda considered PowerStroke the most fun of the three swing options, while Sports Gaming Network called it "the best non-real-time mouse swing in the business." Lackey considered the lack of a real-time aspect to be a "major drawback" for the PowerStroke swing. Ham felt that the game's commentary could have been better, while Poole described it as having "huge gaps of silence followed by failed attempts at humor or insipid post-shot observations". Wyss praised the commentary for its humor. Nash Werner of GamePro stated that the game "does everything right," while the St. Louis Post-Dispatch called it "the most realistic golf experience possible for the PC" and praised its "sharp and crisp" imagery.

Links LS 2000 was the sixth best-selling computer sports game of 1999, with 104,225 units sold, bringing in revenue of $4.6 million. In March 2000, the game won a Codie Award from the Software and Information Industry Association for "Sports Games - Best Product." The Houston Chronicle's Bob LeVitus, also known as "Dr. Mac," included the Macintosh version in his "Dr. Mac Game Hall of Fame Awards" in 2001; it won "Best Sports Game For Big Kids".
**API key**
API key:
An application programming interface (API) key is a unique identifier used to authenticate and authorize a user, developer, or calling program to an API. In practice, however, such keys are typically used to authenticate and authorize a project with the API rather than a human user.
Usage:
The API key often acts as both a unique identifier and a secret token for authentication and authorization, and will generally have a set of access rights on the API associated with it.
HTTP APIs API keys for HTTP-based APIs can be sent in multiple ways: in the query string, as a request header, or as a cookie.
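As a rough illustration of the three transport options, here is a minimal sketch using Python's requests library. The endpoint URL, the query parameter name api_key, and the header name X-API-Key are common conventions assumed for the example, not a universal standard.

```python
import requests

API_KEY = "my-secret-key"  # illustrative placeholder; never hard-code real keys
URL = "https://api.example.com/v1/resource"  # hypothetical endpoint

# 1. In the query string: ?api_key=...
r1 = requests.get(URL, params={"api_key": API_KEY})

# 2. As a request header (X-API-Key is a common, but not universal, convention)
r2 = requests.get(URL, headers={"X-API-Key": API_KEY})

# 3. As a cookie
r3 = requests.get(URL, cookies={"api_key": API_KEY})
```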
Security:
API keys are generally not considered secure; they are typically accessible to clients, making it easy for someone to steal an API key. Once the key is stolen, it has no expiration, so it may be used indefinitely unless the project owner revokes or regenerates the key. Because an API key must remain secret between the client and the server, authentication using API keys is only considered secure when used in conjunction with other security mechanisms such as HTTPS.
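On the server side, one common mitigation is to compare a presented key against the stored one in constant time, so that string comparison timing does not leak information. The sketch below uses Python's standard-library secrets.compare_digest; it is a minimal illustration, not a prescribed implementation, and a real system would store a hash of the key rather than the key itself.

```python
import secrets

# Hypothetical stored key; production systems would store a hash instead.
STORED_KEY = "my-secret-key"

def is_authorized(presented_key: str) -> bool:
    # Constant-time comparison avoids leaking how many leading
    # characters of the key an attacker has guessed correctly.
    return secrets.compare_digest(presented_key, STORED_KEY)

# Example usage: reject a bad key, accept the right one (over HTTPS only).
assert not is_authorized("guess")
assert is_authorized("my-secret-key")
```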
Incidents:
In 2017, Fallible, a Delaware-based security firm, examined 16,000 Android apps and identified over 300 that contained hard-coded API keys for services such as Dropbox, Twitter, and Slack.
**Dressed herring**
Dressed herring:
Dressed herring, colloquially known as herring under a fur coat (Russian: "сельдь под шубой", tr. "sel'd pod shuboy" or "селёдка под шубой", "selyodka pod shuboy"), is a layered salad composed of diced pickled herring covered with layers of grated boiled eggs, vegetables (potatoes, carrots, beetroots), chopped onions, and mayonnaise. Some variations of this dish include a layer of fresh grated apple while some do not. A final layer of grated boiled beetroot covered with mayonnaise is what gives the salad its characteristic rich purple color. Dressed herring salad is often decorated with grated boiled eggs (whites, yolks, or both).
Dressed herring:
Dressed herring salad is popular in Russia, Ukraine (Ukrainian: Оселедець під шубою, romanized: oseledets pid shuboyu), Belarus (Belarusian: Селядзец пад футрам, romanized: Sieliadziec pad futram) and other countries such as Lithuania and Latvia (Lithuanian: Silkė pataluose, Latvian: Siļķe kažokā). It is especially popular for holidays, and is commonly served as a "zakuska" at New Year (Novy God) and Christmas celebrations in Belarus, Ukraine, Russia and Kazakhstan.
**Indium (111In) capromab pendetide**
Indium (111In) capromab pendetide:
Indium (111In) capromab pendetide (trade name ProstaScint) is used to image the extent of prostate cancer. Capromab is a mouse monoclonal antibody which recognizes a protein found on both prostate cancer cells and normal prostate tissue. It is linked to pendetide, a derivative of DTPA. Pendetide acts as a chelating agent for the radionuclide indium-111. Following an intravenous injection of ProstaScint, imaging is performed using single-photon emission computed tomography (SPECT). Early trials with yttrium (90Y) capromab pendetide were also conducted.
**Swainsonine**
Swainsonine:
Swainsonine is an indolizidine alkaloid. It is a potent inhibitor of Golgi alpha-mannosidase II, an immunomodulator, and a potential chemotherapy drug. As the likely primary toxin in locoweed, it is also a significant cause of economic losses in livestock industries, particularly in North America. It was first isolated from Swainsona canescens.
Pharmacology:
Swainsonine inhibits glycoside hydrolases, specifically those involved in N-linked glycosylation. Disruption of Golgi alpha-mannosidase II with swainsonine induces hybrid-type glycans. These glycans have a Man5GlcNAc2 core with processing on the 3-arm that resembles so-called complex-type glycans. The pharmacological properties of swainsonine have not been fully investigated.
Sources:
Some plants do not produce the toxic compound themselves; they are hosts of endophytic fungi which produce swainsonine.
Biosynthesis:
The biosynthesis of swainsonine has been investigated in the fungus Rhizoctonia leguminicola, and it initially involves the conversion of lysine into pipecolic acid. The pyrrolidine ring is then formed via retention of the carbon atom of the pipecolate's carboxyl group, as well as the coupling of two more carbon atoms from either acetate or malonate to form a pipecolylacetate. The retention of the carboxyl carbon is striking, since it is normally lost in the biosynthesis of most other alkaloids. The resulting oxoindolizidine is then reduced to (1R,8aS)-1-hydroxyindolizidine, which is subsequently hydroxylated at the C2 carbon atom to yield 1,2-dihydroxyindolizidine. Finally, an 8-hydroxyl group is introduced through epimerization at C-8a to yield swainsonine. Schneider et al. have suggested that oxidation occurs at C-8a to give an iminium ion. Reduction from the β face would then yield the R configuration of swainsonine, as opposed to the S configuration of slaframine, another indolizidine alkaloid whose biosynthesis parallels that of swainsonine during the first half of the pathway. The point at which oxidation and reduction occur with regard to the introduction of the hydroxyl groups at the C2 and C8 positions is still under investigation. The biosynthetic pathway of swainsonine has also been investigated in the Diablo locoweed. Through detection of (1,8a-trans)-1-hydroxyindolizidine and (1,8a-trans-1,2-cis)-1,2-dihydroxyindolizidine—two precursors of swainsonine in the fungal pathway—in the shoots of the plant, Harris et al. proposed that the biosynthetic pathway of swainsonine in the locoweed is nearly identical to that of the fungus.
Synthesis:
Despite the small size of swainsonine, the synthesis of this molecule and its analogues is quite challenging due to the presence of four chiral centers. In most cases, synthesis relies on sugars or chiral amino acids as starting compounds, or on chiral catalysts to induce chirality. Syntheses of swainsonine have been systematized around three common precursors: 8-oxy-hexahydroindolizines, N-protected-3-oxy-2-substituted-piperidines, and 2-substituted-pyrrolidine-3,4-protected-diols.
Livestock losses:
Because chronic intoxication with swainsonine causes a variety of neurological disorders in livestock, these plant species are known collectively as locoweeds. Other effects of intoxication include reduced appetite and consequent reduced growth in young animals and loss of weight in adults, and cessation of reproduction (loss of libido, loss of fertility, and abortion).
Potential uses:
Swainsonine has potential for treating cancers such as glioma and gastric carcinoma. However, a phase II clinical trial of GD0039 (a hydrochloride salt of swainsonine) in 17 patients with renal carcinoma was discouraging. Swainsonine's activity against tumors is attributed to its stimulation of macrophages. Swainsonine also has potential uses as an adjuvant for anti-cancer drugs and other therapies in use. In mice, swainsonine reduces the toxicity of doxorubicin, suggesting that swainsonine might enable use of higher doses of doxorubicin. Swainsonine may promote restoration of bone marrow damaged by some types of cancer treatments.
Molecular mechanism:
The inhibitory effect of swainsonine on Golgi Mannosidase II (GMII) was proposed to be due to its ability to bind in the GMII binding pocket in a similar fashion to the natural GMII substrate in its transition state. Later, it was shown that the binding pattern of the swainsonine molecule resembles that of the Michaelis complex of mannose, and that only the protonated, positively charged swainsonine molecule binds similarly to the substrate in its transition state. The actual state in which swainsonine binds in the mannosidase remains undetermined and is most likely dependent on the pH at which the enzyme operates.
**Newel**
Newel:
A newel, also called a central pole or support column, is the central supporting pillar of a staircase. It can also refer to an upright post that supports and/or terminates the handrail of a stair banister (the "newel post"). In stairs having straight flights it is the principal post at the foot of the staircase, but the term can also be used for the intermediate posts on landings and at the top of a staircase. Although its primary purpose is structural, newels have long been adorned with decorative trim and designed in different architectural styles. Newel posts turned on a lathe are solid pieces that can be highly decorative, and they typically need to be fixed to a square newel base for installation. These are sometimes called solid newels in distinction from hollow newels due to varying techniques of construction. Hollow newels are known more accurately as box newel posts. In historic homes, folklore holds that the house plans were placed in the newel upon completion of the house before the newel was capped. The most common means of fixing a newel post to the floor is to use a newel post fastener, which secures a newel post to a timber joist through either concrete or wooden flooring.
In popular culture:
A loose ball finial on the newel post at the base of the stairway is a plot device in the 1946 classic It's a Wonderful Life. The same device is used in jest in the 1989 film Christmas Vacation, in which Clark Griswold, in an emotional meltdown, cuts a loose finial off a newel post with a chainsaw. He casually exclaims, "Fixed the newel post!" and carries on. In Family Guy Season 17, Episode 16, “You Can’t Handle the Booth”, Stewie and Brian argue over the semantics of Peter getting stuck in what Stewie calls “banister slats” and Brian corrects him by saying they are called “baluster slats”. Stewie then asks if the “baluster” is the big, round thing at the bottom of the stairs where the staircase begins, to which Brian laughs and corrects him by saying “I believe what you are referring to is a newel post”.
**Genetic vaccine**
Genetic vaccine:
A genetic vaccine (also gene-based vaccine) is a vaccine that contains nucleic acids such as DNA or RNA that lead to protein biosynthesis of antigens within a cell. Genetic vaccines thus include DNA vaccines, RNA vaccines and viral vector vaccines.
Properties:
Most vaccines other than live attenuated vaccines and genetic vaccines are not taken up by MHC-I-presenting cells, but act outside of these cells, producing only a strong humoral immune response via antibodies. In the case of intracellular pathogens, an exclusively humoral immune response is ineffective. Genetic vaccines are based on the principle of uptake of a nucleic acid into cells, whereupon a protein is produced according to the nucleic acid template. This protein is usually the immunodominant antigen of the pathogen or a surface protein that enables the formation of neutralizing antibodies that inhibit the infection of cells. Subsequently, the protein is broken down at the proteasome into short fragments (peptides) that are imported into the endoplasmic reticulum via the transporter associated with antigen processing, allowing them to bind to MHC-I molecules that are subsequently transported to the cell surface. The presentation of the peptides on MHC-I complexes on the cell surface is necessary for a cellular immune response. As a result, genetic vaccines and live vaccines generate cytotoxic T-cells in addition to antibodies in the vaccinated individual. In contrast to live vaccines, only parts of the pathogen are used, which means that a reversion to an infectious pathogen cannot occur, as happened during polio vaccinations with the Sabin vaccine.
Administration:
Genetic vaccines are most commonly administered by injection (intramuscular or subcutaneous) or infusion, and less commonly, and only for DNA, by gene gun or electroporation. While viral vectors have their own mechanisms to be taken up into cells, DNA and RNA must be introduced into cells via a method of transfection. In humans, the ionizable cationic lipids SM-102 and ALC-0315 are used together with the PEGylated lipid ALC-0159 and electrically neutral helper lipids. This allows the nucleic acid to be taken up by endocytosis and then released into the cytosol.
Applications:
Examples of genetic vaccines approved for use in humans include the RNA vaccines tozinameran and mRNA-1273, the DNA vaccine ZyCoV-D as well as the viral vectors AZD1222, Ad26.COV2.S, Ad5-nCoV, and Sputnik V. In addition, genetic vaccines are being investigated against proteins of various infectious agents, protein-based toxins, as cancer vaccines, and as tolerogenic vaccines for hyposensitization of type I allergies.
History:
The first use of a viral vector for vaccination – a Modified Vaccinia Ankara virus expressing HBsAg – was published by Bernard Moss and colleagues. DNA was used as a vaccine by Jeffrey Ulmer and colleagues in 1993. The first use of RNA for vaccination purposes was described in 1993 by Frédéric Martinon, Pierre Meulien, and colleagues, and in 1994 by X. Zhou, Peter Liljeström, and colleagues, in mice. Martinon demonstrated that a cellular immune response was induced by vaccination with an RNA vaccine. In 1995, Robert Conry and colleagues described that a humoral immune response was also elicited after vaccination with an RNA vaccine. DNA vaccines were more frequently researched in the early years due to their ease of production, low cost, and high stability against degrading enzymes, but they sometimes produced weak vaccine responses despite containing immunostimulatory CpG sites. More research was later conducted on RNA vaccines, whose immunogenicity was often better due to inherent adjuvant effects and which, unlike DNA vaccines, cannot insert into the genome of the vaccinated. Accordingly, the first RNA- and DNA-based vaccines approved for humans were the RNA and DNA vaccines used as COVID vaccines. Viral vectors had previously been approved as Ebola vaccines.
**Capillary lamina of choroid**
Capillary lamina of choroid:
The capillary lamina of choroid or choriocapillaris is a part of the choroid of the eye. It is a layer of capillaries immediately adjacent to Bruch's membrane of the choroid. The choriocapillaris consists of a dense network of freely anastomosing highly permeable fenestrated large-calibre capillaries. It nourishes the outer avascular layers of the retina.
Structure:
Microstructure In the capillaries that compose the choriocapillaris, the fenestrations are densest at the aspect of the capillaries that faces the retina, whereas pericytes are situated at the obverse aspect. The choroidal blood vessels can be divided into two categories: the choriocapillaris, and the larger caliber arteries and veins that lie just posterior to the choriocapillaris (these can easily be seen in an albino fundus because there is minimal pigment obscuring the vessels). The choriocapillaris forms a single layer of anastomosing, fenestrated capillaries having wide lumina, with most of the fenestrations facing toward the retina. The lumen is approximately three to four times that of ordinary capillaries, such that two or three red blood cells can pass through the capillary abreast, whereas in ordinary capillaries the cells usually course in single file. The cell membrane is reduced to a single layer at the fenestrations, facilitating the movement of material through the vessel walls. Occasional pericytes (Rouget cells), which may have a contractile function, are found around the capillary wall. Pericytes have the ability to alter local blood flow. The choriocapillaris is densest in the macular area, where it is the sole blood supply for a small region of the retina. The choriocapillaris is unique to the choroid and does not continue into the ciliary body.
Function:
The choriocapillaris serves multiple functions, including sustaining the photoreceptors, filtering waste produced in the outer retina, and regulating the temperature of the macula. The capillary wall is permeable to plasma proteins, which is probably of great importance for the supply of vitamin A to the pigment epithelium.
History:
The choriocapillaris was first described in man by Hovius in 1702, although it was not so named until 1838, by Eschricht. Passera (1896) described its form as star-shaped, radiating capillaries beneath the pigment epithelium of the retina, and Duke-Elder and Wybar (1961) have emphasized its nature as a network of capillaries in one plane.
**Lead(II) perchlorate**
Lead(II) perchlorate:
Lead(II) perchlorate is a chemical compound with the formula Pb(ClO4)2·xH2O, where x is 0, 1, or 3. It is an extremely hygroscopic white solid that is very soluble in water.
Preparation:
Lead perchlorate trihydrate is produced by the reaction of lead(II) oxide, lead carbonate, or lead nitrate with perchloric acid:
Pb(NO3)2 + 2 HClO4 → Pb(ClO4)2 + 2 HNO3
Excess perchloric acid is removed by first heating the solution to 125 °C and then heating it under moist air at 160 °C, which converts the acid to its dihydrate. The anhydrous salt, Pb(ClO4)2, is produced by heating the trihydrate to 120 °C under water-free conditions over phosphorus pentoxide. The trihydrate melts at 83 °C. The anhydrous salt decomposes into lead(II) chloride and lead(II) oxide at 250 °C. The monohydrate is produced by only partially dehydrating the trihydrate, and this salt undergoes hydrolysis at 103 °C. A solution of anhydrous lead(II) perchlorate in methanol is explosive.
**TMEM211**
TMEM211:
Transmembrane protein 211 (TMEM211; also known as bA9F11.1, Q6ICI0, and LHFPL7) is a tetraspan membrane protein in the LHFPL subfamily. It primarily plays a role in the perception of sound but may have secondary roles in insulin signaling. It is encoded by the TMEM211 gene and is found in most animals.
Expression and localization:
Human TMEM211 RNA is expressed at relatively low levels, but displays clear spikes in the tissues of the brain, stomach, lungs, breasts, ovaries, prostate, trachea, and salivary glands. In the fetus, TMEM211 RNA is again expressed in the brain and stomach, but is also expressed in the intestines. TMEM211 is also known to be expressed in skin, in significantly greater amounts in sun-exposed skin than in non-sun-exposed skin, and overexpressed in triple-negative breast cancer. In a compendium of healthy canine tissues, TMEM211 displayed the highest expression in the pancreas (Figure 1). Within the pancreas, TMEM211 displays highly biased abundance in the islets of Langerhans, and is absent from other pancreatic tissue (Figure 2). TMEM211 is localized to the plasma membrane as a result of its four transmembrane helices. Despite the protein's presence in the uterus, ovaries, and breast milk, oral administration of estradiol to menopausal women does not produce a significant change in the level of TMEM211 expression. However, it did significantly reduce the variability of TMEM211 between samples, indicating that estrogen does exert a controlling effect on TMEM211, likely through an indirect mechanism (Figure 3). TMEM211 was also shown to be expressed at higher levels in obese individuals (Figure 4). This result may be explained by the finding that obesity, especially non-diabetic obesity, is correlated with an increase in both the volume and number of islets of Langerhans. Alzheimer's patients likewise displayed higher TMEM211 expression compared to non-Alzheimer's individuals, as did women when compared to men (Figure 5).
Translations and homologs:
Figure 6 displays select regions of interest of the human TMEM211 protein sequence. Most notably, it shows the location and presence of the transmembrane regions and the exon boundary. The RNA transcript is made of four exons, but only two of these exons contain coding sequence. Uniquely, the transcript has two start codons, both of which can begin translation and lead to protein products. This is the main cause of TMEM211 isoforms.
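To illustrate how two alternative start codons yield two protein isoforms, here is a toy sketch using Biopython. The sequence is a made-up placeholder, not the real TMEM211 transcript, and both starts are assumed to be in the same reading frame for simplicity.

```python
from Bio.Seq import Seq

# Made-up transcript with two start codons, standing in for the real
# TMEM211 mRNA (which this sketch does not reproduce).
mrna = "ATGGCCATGTTTAAACCCGGGTAA"

start_positions = [i for i in range(len(mrna) - 2) if mrna[i:i + 3] == "ATG"]
for start in start_positions:
    # Translating from each start codon to the first stop codon
    # yields one candidate isoform per start.
    isoform = Seq(mrna[start:]).translate(to_stop=True)
    print(start, isoform)
# Output:
# 0 MAMFKPG
# 6 MFKPG
```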
Translations and homologs:
Ortholog space TMEM211 is present in the majority of animal species, but does not exist outside of animals (Figure 7). The oldest group of organisms in which the gene is found ubiquitously is anemones, which diverged from humans 824 million years ago. The gene is not found in organisms that diverged 934 million years ago, indicating that the gene is between 824 and 934 million years old. Following the evolutionary tree forward, the gene is found in fish, amphibians, reptiles, birds, and mammals. However, there is one clade with a different pattern of retention: arthropods. The gene is readily found in crustaceans, but is missing from all other arthropods. For unknown reasons, marine arthropods maintain the TMEM211 gene while terrestrial arthropods have lost it.
Translations and homologs:
There is one possible exception to this: the White Butterfly Parasite. This gene is found by BLAST when searching for human TMEM211, and searching for the White Butterfly Parasite's TMEM211 sequence did return results in other parasitic wasps, indicating that this was not likely a sequencing or contamination error. This small clade may have preserved the gene while the rest of the terrestrial arthropods lost it. However, the TMEM211 genes found within this clade are significantly less similar than those of every other organism with the same ancestral distance from humans. While the gene may be an ortholog, it also may be serving a new function for this group of wasps, which is why this clade retained the gene while other insects lost it. This would also explain the higher percent divergence. Similarly, it is possible that this group does not express the gene, which would mean that no evolutionary forces work against its mutation, allowing it to change faster than in organisms where the gene must serve as a template for a functional protein. Finally, it has been shown that the caterpillar species that this group of wasps preys upon uses an entomopathogenic virus as a defense mechanism. This virus does not cause illness to the caterpillar, but does infect the wasp eggs and wasp larvae and has been shown to cause horizontal gene transfer. It is plausible that the TMEM211 gene was transferred to this subset of wasps by one of these viruses. Then, as the gene was artificially introduced into the genome and does not serve a real purpose, it mutates freely, leading to the observed divergence that is higher than expected. This explanation is supported by the fact that the intron that separates the two halves of the protein is missing. Alternatively, the BLAST result might be misleading, and what is flagged as TMEM211 is actually a member of the paralogous LHFPL family, which is known to be found in both wasps and caterpillars, and includes members that do not share the TMEM211 intron.
Translations and homologs:
Sequence alignments The MSA of distant orthologs highlights several conserved regions (Figure 8). However, the human gene does not partake in the majority of this conservation, nor do the other mammals. This is consistent with the protein's presence in breastmilk; non-mammalian species do not have breastmilk, and thus the protein may have slightly different functions in mammals and non-mammals. Much of the protein's identity that is found to match appears scattered, but there is one clear region of highest conservation. In humans, this region spans amino acids 132–150. The other two regions with high conservation shared by humans span amino acids 59–71 and 110–117. There are only six amino acids with complete conservation across all species: Cys27, Gly65, Gln100, Pro115, Cys127, and Cys138. Half of the completely conserved amino acids are cysteine; thus cysteine's sulfhydryl group and ability to form disulfide bridges and covalent bonds may be important to the protein's structure and/or function. Overall, the amino acids with the highest conservation with respect to the human sequence are tyrosine (81%) and tryptophan (72%). These two amino acids are vital to transmembrane proteins as they can interact with both the hydrophobic membrane and the aqueous environment inside or outside of the cell, and thus it is logical that these amino acids are highly conserved. Looking at the mammals’ alignment, the majority of the protein is conserved (Figure 9). In fact, every position that is wholly unconserved is found embedded within the membrane, where exact identity likely matters less to the function of the protein. This pattern remains true in the broader MSA of orthologs, where the regions of least conservation are mostly inside the membrane. While keeping this conservation, TMEM211 has evolved and mutated with the organisms that carry it. The divergence of each organism's gene follows normal phylogeny (Figure 10).
Translations and homologs:
Conserved regions There is evidence suggesting that the transmembrane segments are conserved to a lesser degree than are the other segments of the protein, a notion supported by the frequent swapping of small, nonpolar amino acids within these regions. To test this hypothesis, Shannon Variability bits were computed for each position within the protein (Figure 11). While this correlation is indeed observed, the relationship is not strong enough to be conclusive. Using the Shannon bits, Figure 12 converts the annotated conceptual translation to an annotated representation that better highlights the proximity of the conserved regions and amino acids in three-dimensional space. The first and third conserved regions are located adjacent to one another on the extracellular side of the protein, possibly creating an active site. The orientation of the transmembrane domains also brings several cysteines into close proximity, allowing them to form disulfide bridges.
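As a rough sketch of the per-position variability computation described above, the following assumes a plain Shannon entropy over the residues in each alignment column, with gap characters skipped; the toy alignment is a placeholder for the actual TMEM211 ortholog MSA.

```python
from collections import Counter
from math import log2

def column_entropies(alignment: list[str]) -> list[float]:
    """Shannon entropy (bits) for each column of an aligned set of sequences."""
    entropies = []
    for column in zip(*alignment):
        residues = [aa for aa in column if aa != "-"]  # ignore gap characters
        counts = Counter(residues)
        total = len(residues)
        h = -sum((n / total) * log2(n / total) for n in counts.values())
        entropies.append(h)
    return entropies

# Toy alignment standing in for the TMEM211 ortholog MSA.
msa = ["MCWLV", "MCWIV", "MCFIV"]
print(column_entropies(msa))  # fully conserved columns score 0.0 bits
```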
Translations and homologs:
Paralogs A BLAST query of the human TMEM211 sequence within the human genome returns no results. This would seem to indicate that humans have no paralog. However, submitting non-mammalian TMEM211 sequences as queries returns Homo sapiens Lipoma high-mobility group I C protein fusion partner-like tetraspan subfamily member 3 protein isoform 2 (LHFPL3). Additionally, BLAST queries of non-mammalian TMEM211 sequences searching within those organisms’ genomes return numerous results within the LHFPL family. This protein is similar to TMEM211 in the fact that it is a tetraspan membrane protein. To evaluate whether these four transmembrane regions are the cause of the high similarity, a global alignment between human TMEM211 and human LHFPL3 was created (Figure 13). The similarity does not appear confined to the transmembrane regions, indicating that these two proteins may be paralogous. Additionally, 4 of the 6 amino acids completely conserved in TMEM211 orthologs are aligned between TMEM211 and LHFPL3, furthering the evidence that these proteins are paralogs (p<.01).
The LHFPL family is of known function; members of the family aid protein binding in the brain and are vital to the perception of sound. Knockouts or loss of function mutations to these family members cause complete or partial deafness in humans and mice that is then inherited in an autosomal recessive pattern. Other mutations to LHFPL have been linked to tumors, autism, Alzheimer's, and other neurological conditions. The conserved cysteine residues in TMEM211 indicate that this could be a possible function for TMEM211 as well. The conserved regions that align in three-dimensional space may form a binding site that uses the conserved cysteine residues to recognize and bind substrates. Similar to TMEM211, the LHFPL family members show biased expression in the brain and salivary glands. LHFPL was not shown to be expressed in the pancreas, but per the aforementioned discussion, exclusion of the islets of Langerhans from pancreatic samples would lead to this result. LHFPL RNA, like TMEM211, was present in lower levels in the GI tract, kidneys, prostate, thyroid, and sex organs. LHFPL family members have been studied more than TMEM211, and protein abundance data shows that each family member, with the exception of LHFPL2, displays its highest abundance level in healthy tissues in the brain. While still found in brain tissue at higher concentrations than the majority of other brain proteins, LHFPL2 displays an abundance in platelets that is 60 times greater, placing it in the top 10% of platelet proteins. Each family member also displayed heightened abundance in lung cancer cell lines, as did TMEM211. Several LHFPL family members were also found in breastmilk, but in far lower quantities than TMEM211. Despite the association with hearing and sound perception, neither TMEM211 nor any LHFPL family member shows a pattern of localization biased to the thalamus, temporal lobe, or auditory cortex.
Fibrinogen alpha is understood to be a rapidly evolving protein, while cytochrome C is a model of a slowly evolving protein. The linear trendline for the corrected divergence of TMEM211 is closer to that of fibrinogen alpha chain than that of cytochrome C (Figure 14). Looking at the data itself, TMEM211 and fibrinogen alpha chain have diverged at almost exactly the same rate over the last 400 million years. Thus, TMEM211 is a rapidly changing protein as well.
This may be due to patterns observed earlier, whereby the segments of the protein inside the membrane are free to change with great liberty. It is also possible that the divergence accelerated with the rise of the first mammal 200 million years ago, assuming that the protein's existence in breastmilk is indicative of a different function. Finally, the human LHFPL3 gene shares 17.8% identity with human TMEM211, which corresponds to a corrected divergence of 173%. This divergence aligns with that of the anemones, the most distant human ancestor to share the TMEM211 gene, something that would be expected if the two genes are paralogs born from the same ancestral gene. Using the trendline to predict LHFPL's date of divergence results in an estimate of 634–903 mya (95% confidence interval). This overlaps the estimated range of the emergence of the TMEM211 gene (824–934 million years ago), supporting the notion that the two are paralogs.
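The 173% figure quoted above is consistent with a simple Poisson correction of percent identity, d = −ln(p), where p is the fraction of identical residues. This is an assumption about the method used, sketched below, not a statement of the original analysis's exact procedure.

```python
from math import log

def poisson_corrected_divergence(fraction_identity: float) -> float:
    """Poisson-corrected divergence d = -ln(p) for fraction identity p."""
    return -log(fraction_identity)

# 17.8% identity between human TMEM211 and human LHFPL3:
d = poisson_corrected_divergence(0.178)
print(f"{d:.0%}")  # ~173%, matching the corrected divergence quoted above
```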
Transcription:
Promoter TMEM211 is understood to be under the control of the GXP_6044388 promoter (Figure 15). The GXP_6044388 promoter is widely found across species, and maintains distinct regions of conservation (Figure 16). In humans, there is a 40-base-pair overlap between this promoter and the first exon of the TMEM211 RNA transcript, though this exon does not directly contribute to the resulting protein sequence. The promoter does not appear near enough to other genes to influence the expression of anything other than TMEM211. Orthologs for this promoter sequence are found before TMEM211 orthologs in a multitude of species. There are sequences with high shared identity with the GXP_6044388 promoter sequence found on chromosomes 5, 6, 7, 13, 14, 15, 16, 17, 18, 19, 20, and Y, though many of these sequences are not located near other genes and may not be expressed. Interestingly, GXP_6044388 shares an average of 43.83% identity with the six promoters of the LHFPL family members. This shared identity is high enough to conclude that these promoters are homologous.
Transcription:
Transcription factors FEZF1.02 and VDR_RXR.03 are the most likely candidates to control TMEM211 gene expression. FEZF1.02 was selected as a likely candidate because it was the only identified possible transcription factor to be specific to the brain, where TMEM211 is known to be expressed in high quantities relative to other sites of expression. FEZF1.02 increases transcription, and binds opposite the VDR_RXR.03, a heterodimer transcriptional coactivator that is dependent on calcitriol. Calcitriol is a form of vitamin D that is partly produced in skin in response to sunlight, which would explain TMEM211's higher expression in sun-exposed skin than in non-sun-exposed skin. Additionally, the main site of calcitriol production is the kidney, to be used as a hormone that signals the thyroid gland, and both of these locations were amongst the highest areas of TMEM211 expression. Calcitriol is also passed from nursing mothers in breastmilk. Furthermore, oral administration of estrogen has been shown to increase the bioavailability of consumed calcitriol, and leads to higher levels of calcitriol in circulation. Thus, the VDR_RXR.03 can explain the effects of oral estrogen administration on TMEM211 expression. Calcitriol is also a molecule relied on by the lungs for proper function. Finally, TMEM211 displayed an extreme pattern of localization to the islets of Langerhans in pancreatic tissue. Pancreatic beta cells are known to partake in calcitriol signaling, responding with an increase in insulin secretion from the islets of Langerhans and an increased resistance to cellular stresses. Overall, VDR_RXR.03 can single-handedly explain the majority of TMEM211 expression and localization. It is also of interest that calcitriol treatment has been shown to increase the expression of LHFPL RNA.
mRNA:
The 5’UTR of TMEM211 is conserved to a lesser degree than is the coding sequence (Figure 17). The 3’UTR is not conserved at all, to the point that BLAST cannot locate other orthologous sequences. This may be because the 3’UTR is not thermodynamically favorable, and has many possible conformations within each species. Thus, the only structural elements of note are present in the 5’UTR. There is a stable, highly conserved stem-loop present in the mRNA that is located near the start of translation that may sterically hinder translation, or, conversely, be recognized by initiation factors or transport proteins (Figure 18, left). There is one structural element that is conserved across orthologs in both sequence and structure, a pair of adjacent, highly stable stem-loops (Figure 18, middle). This structure is located far from other sites along the linear sequence, but may be recognized by transport proteins or signaling pathways. Finally, there is a less stable stem-loop present at the 5’ end of the 5’UTR (Figure 18, right). While this stem-loop is less stable than the other structural elements, the sequence and structure of the loop itself is completely conserved across the orthologs, including the cytosine that is bumped out from the stem. This suggests that this structure is recognized and bound by some protein, in which case the low level of base-pairing along the stem would allow the structure to be easily undone while the mRNA remains bound to the protein. In fact, the conserved and exposed loop is recognized as a binding site by MEIS1.01, a homeobox protein involved in neural crest development and neural differentiation. Moreover, MEIS1 deficiency causes hearing loss in mice, which is also the result observed when LHFPL paralogs lose function.
Structure:
Internal repeats There are only two local alignments within TMEM211 that produce positive scores (Figure 19). Neither of these alignments is impressively similar, and both are likely the result of random chance. This dismissal is supported by the fact that neither of these repeats is conserved across other animals.
Structure:
Composition Human TMEM211 has a molecular weight of 20.4 kDa and an isoelectric point of 9.64. This MW is smaller than the average human protein, while the pI is far higher than the average pI of human proteins (6.5), but still within the normal range for a transmembrane protein. There are no clusters of charge despite this elevated pI. The protein sequence does not significantly deviate from average human proteins in its amino acid composition (Figure 20). The human TMEM211 protein sequence has been analyzed by a number of software tools to look for structural elements, domains, and localization information. The presence of the four transmembrane domains was confirmed by PSORT II, Eukaryotic Linear Motif, and InterPro. These tools also confirmed the orientation of the protein within the membrane and its localization. Additionally, PSORT II and PROSITE identified a leucine zipper pattern starting at position 56. Most notably, MyHits Motif Scan identified a domain shared with the LHFPL family spanning positions 100–157. Similarly, MotifFinder also reported a domain shared between TMEM211 and the LHFPL family, though it expanded this shared region to span positions 53–158. This expansion includes the previously identified leucine zipper pattern.
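Figures like the molecular weight and isoelectric point above can be reproduced with standard tools; the sketch below uses Biopython's ProtParam module on a placeholder peptide. Substituting the actual human TMEM211 sequence (UniProt Q6ICI0) would be required to reproduce the ~20.4 kDa and pI 9.64 values; the printed numbers here are for the placeholder only.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder peptide, not the real TMEM211 sequence.
sequence = "MKWVTFISLLLLFSSAYS"

analysis = ProteinAnalysis(sequence)
print(f"MW: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"pI: {analysis.isoelectric_point():.2f}")
```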
Structure:
Post-translational modifications The human TMEM211 protein sequence has been analyzed with multiple software tools to search for sites of post-translational modification. First, the Eukaryotic Linear Motif tool identified several statistically significant phosphorylation sites and a glycosaminoglycan attachment site, but all were contained within transmembrane domains where the amino acid sequence would not be exposed to these factors. Furthermore, these locations are not conserved across orthologs, not even in other mammals. Negative results were similarly obtained from Marcoil, Sulfinator, PhosphoSitePlus, GPS-SUMO, NetNGlyc, and NetOGlyc. NetPhos predicts very strongly that S93 would be phosphorylated, but this serine does not exist in orthologs outside of primates. However, it is possible that this is part of a mammal-specific function. This serine is on the opposite side of the membrane from the hypothesized active site, so it is unlikely that it modulates protein activity. This phosphorylation site could be part of a signal pathway that the protein conducts. DiANNA also returns results, highlighting the probable formation of a disulfide bond between C127 and C138. Other results that involved a bond across the cell membrane were dismissed. The cysteines in the predicted bond are conserved completely across all organisms in the MSA, and thus this bond is likely of vital importance to TMEM211's structure and resulting function.
Structure:
Annotated structure Identified domains of interest were added to the previous model of human TMEM211 (Figure 21). The leucine zipper pattern occurs over a highly conserved portion of the protein, both of which are included in the expanded domain shared with the LHFPL family. This expanded domain covers the entirety of the predicted active site, all of which is also highly conserved across species. The shorter shared domain is displayed for clarity, but even this shorter domain overlaps the majority of the active site and is highly conserved. While leucine zippers are better understood with regard to intracellular functions, extracellular leucine zippers have been shown to assist protein binding and be functional components of signal receptors. These are both identified functions of the LHFPL family and could be functions for TMEM211.
Structure:
The transmembrane domains are each held in place by the protein's sequence, modifications, or secondary structures, in addition to their composition of mostly hydrophobic residues. The first transmembrane domain is held in place on the intracellular side by positively charged arginine residues that prevent the domain from slipping into the membrane. Transmembrane domains 1 and 2 are both stabilized on the extracellular side by the formation of beta sheets in the sequence between them. This structure would not be able to pass through the membrane. On the intracellular side, transmembrane domain 2 is prevented from moving by charged arginine residues and polar serine residues. These residues are also the predicted phosphorylation site, which would further inhibit movement of the transmembrane domain. Transmembrane domain 3 likely can shift further towards the extracellular space, as the seven-amino-acid sequence that precedes it is entirely hydrophobic. Shortly after, a threonine residue and a completely conserved glutamine residue would prevent the sequence from moving further into the membrane. A shift of seven positions is also what would be required for the completely conserved C127 on the other side of transmembrane domain 3 to move out of the membrane and form the predicted disulfide bond with C138. Once formed, this bond would prevent the domain from slipping back into its original position. Transmembrane domain 4 can likely shift more freely, as no matter its position, it will force some quantity of charged or polar residues into the membrane. It may be able to slide all the way until it encounters the disulfide bond of C138, as this positioning forces the fewest polar and charged amino acids into the membrane. This region of polar and charged residues inside a hydrophobic membrane is one of the least conserved areas of the protein, likely because it is unstable and away from the active site.
Structure:
Tertiary structure The structure of TMEM211 can be predicted with very high confidence (Figure 22). The only region with low confidence is the region preceding the C-terminus, which is highly non-conserved and unstable. This is also the only major portion of the protein's structure that varies across different species. As displayed, the four transmembrane domains cluster together, but space-filling models clearly show that they do not form a passable channel.
Structure:
Although no clusters of charge were detected in the linear TMEM211 sequence, there is a clear cluster of charge, positive and negative, located inside the binding site (Figure 23). This would allow the TMEM211 active site to ionically interact with its ligand, as well as grant the protein greater substrate specificity than structure alone would. Importantly, the predicted binding site of TMEM211 is solvent accessible, more so than the rest of the protein (Figure 23). This accessibility will allow the ligand to enter and interact with the protein. The combination of charges, structure, and conserved cysteine residues likely all assist TMEM211 in binding a ligand. The binding sites of LHFPL family members bind other proteins, but no functional protein-protein interactions have been discovered for TMEM211. The only interaction that has been observed is between TMEM211 and P29996 of Hepatitis Delta. However, P29996 is known to interact with 156 other human proteins of various types, and TMEM211 is likely caught up in this promiscuity and not targeted for any specific functional role.
Variation:
Excluding silent mutations, there are 7 SNPs in TMEM211 genes known to occur with heterozygosity in the human population. These mutations were added to the representation of TMEM211 (Figure 25). Mutation W67R presents with 50% heterozygosity, indicating that this mutation likely has little or no effect on protein function. However, several other variations have significant correlations with phenotypes. Variations to G140, which becomes R or A, are associated with patterns in BMI. If these changes affect the protein's function, which is known to exist in the islets of Langerhans, it could exert influence on BMI through changes to levels of secreted insulin. Variation M145I is associated with an increased risk for Alzheimer's, which aligns with known variations to LHFPL that also increase Alzheimer's risk. Most notably, variation C127W is associated with deafness and hearing loss. C127 is located in the binding pocket and is predicted to form a disulfide bond with C138. Tryptophan would not be able to engage in a disulfide bond and would likely alter the functional capabilities of the binding site. C127W is likely a loss of function mutation, which then exactly mirrors the known resulting deafness and hearing loss from loss of function mutations in LHFPL family members. Furthering the similarity, this pair of cysteines is completely conserved across the LHFPL family. There were no mutations occurring within the binding sites of possible transcription factors or within the mRNA structural elements, but there was not enough available data to conclude that this is due to necessary preservation of these sequences.
Function:
Currently, there is no known function of TMEM211. An informative experiment would be a knockout experiment in mice, where the TMEM211 gene is removed or replaced using CRISPR. Assuming the protein is serving some function, the mouse will then be deficient in some aspect. Specific attention should be paid to the mouse's perception of sound, as this is the result of knockout experiments on LHFPL family members. While this experiment would likely not prove specific functions of TMEM211, it could identify regions or pathways to focus on in future experiments. This would narrow the field of possible functions suggested by the protein's abundance in pancreas, breast, brain, and sex organ tissue. Insulin is a vital aspect of cellular glucose uptake, and is needed by most cells in the human body. It is released in response to high blood glucose levels. Antibody staining has shown that TMEM211 displays an extreme bias for localization to the insulin-secreting islets of Langerhans in pancreatic tissue, and a suggested function of TMEM211 and the LHFPL family is signal relay. Thus, a possible function of TMEM211's extracellular binding site may be to bind to glucose and relay a signal that glucose is present in the bloodstream. This is the known function of an adenosine triphosphate-sensitive K+ channel, but signal pathways are often redundant and it would be logical for the human body to have developed redundancy for a process that is vital to life. To test this hypothesis, a surface plasmon resonance experiment should be conducted using stabilized TMEM211 and glucose solution. This method of experimentation is effective at assessing binding between small ligands and membrane proteins. If the binding rate determined is non-zero, then glucose is likely binding to the active site. If the binding rate determined is zero, the experiment should be repeated with non-glucose nutrients that affect insulin secretion, namely free fatty acids and free amino acids, and repeated with hormones that affect insulin secretion, including melatonin, estrogen, leptin, growth hormone, and glucagon-like peptide-1. If any of these result in a non-zero binding rate, an experiment can then be conducted to determine if that interaction is responsible for signaling insulin release. It is also possible that, unlike the glucose pathway, the binding of TMEM211 and the resulting signal suppress insulin release. The results of this experiment can then be applied elsewhere. Many other cells and tissues with TMEM211 also respond to glucose or hormones, and while the signal may cause a different response in each cell type, the extracellular binding site should maintain its specificity for its ligand no matter the tissue type. For instance, the salivary gland produces extra saliva in the presence of glucose and has high expression of TMEM211. If results from pancreatic experiments show that TMEM211 is detecting and signaling glucose levels, it could be hypothesized that TMEM211 is modulating saliva production in the salivary gland.
Function:
Finally, neither TMEM211 nor any LHFPL family member showed biased abundance for the parts of the brain associated with auditory processing, yet mutations affect an organism's ability to hear. However, the 5’UTR of TMEM211 mRNA is highly conserved and is bound by MEIS1.01, an embryonic protein involved in neural differentiation. It is possible that mutations to these genes do not cause deafness or loss of hearing by breaking the pathway for the perception of sound, but by preventing the appropriate development of structures needed to process auditory signals. The mutations could prevent some aspect of the auditory pathway from assembling or differentiating correctly during development. This is the mechanism by which mutation to other inner ear transmembrane proteins causes deafness. These mutations could be induced in mice, allowing for the examination of differences to structure and sound processing between mice who have TMEM211 mutations, mice with LHFPL mutations, and control mice. fMRI would allow researchers to observe where in the auditory processing pathway the defect was occurring. To separate these mechanisms, the experimental mice should be bred with wild type mice to create a generation of mice that are heterozygous for the mutation. These mice would still show some aspects of auditory processing and could direct researchers to the specific breakdown in the pathway that is caused by the mutation. Included in this study should be the physical examination of the inner ear.
Function:
Function in the hearing pathway also explains the patterns observed in which organisms possess the gene. Cnidaria were the first animals able to detect vibrations in the fluid surrounding their bodies. Anemones even use hair-like structures that resemble those of the human inner ear, a structure that likely gave rise to modern ears. Vertebrates and arthropods are the only animals to have what would be labeled as ears, but many invertebrates still use structures similar to those of the anemone. This explains the point on the divergence graph where the gene begins to rapidly change, which exactly aligns with the rise of terrestrial animals. The gene began to change faster as organisms began adapting to sound perception in air as opposed to in water. This also explains why the amphibians display much lower identity with human TMEM211 than is expected based on their date of divergence; their protein did not need to change as much because they remained in aquatic environments. Terrestrial arthropods do not possess TMEM211, even though they have ears. However, terrestrial vertebrates and terrestrial arthropods do not share a terrestrial ancestor. Vertebrates evolved to inhabit land as tetrapods, who adapted the TMEM211 gene to allow them to process sound in air. Terrestrial arthropods arose from the marine arthropods, and although they developed something labeled as an ear, it is the product of convergent evolution, not shared ancestry. The arthropods’ evolution of ears could have stopped using the TMEM211 gene as they found an alternate pathway to perceive sound in air.
**Hidden Mickey**
Hidden Mickey:
A Hidden Mickey is a representation of Mickey Mouse that has been inserted subtly into the design of a ride, attraction, or other location in a Disney theme park, Disney properties, animated film, feature-length movie, TV series, or other Disney product. The most common Hidden Mickey is a formation of three circles that may be perceived as the silhouette of the head and ears of Mickey Mouse, often referred to by Disney aficionados as a "Classic Mickey". Mickeys may be painted, made up of objects (such as rocks, or three plates on a table), or be references such as someone wearing Mickey Mouse Club ears in a painting. Hidden Mickeys can take on many sizes and forms.
Hidden Mickey:
Hidden Mickeys are slipped into many Disney animated films as Easter eggs. They are also hidden in architecture and attractions in Disney parks and resorts, and in studio buildings and many other Disney-related features.
History:
The first published sighting of a Hidden Mickey was made by Arlen Miller, who wrote an article on Hidden Mickeys for WDW's Eyes and Ears (a Cast Member weekly publication) in 1989. The article listed Hidden Mickeys found in the Disney theme parks. Months later the author was contacted by Disney News for more information, and the resulting article made the news of Hidden Mickeys spread worldwide.
History:
The history of Hidden Mickeys can be traced back to when the Imagineers were designing Epcot in the late 1970s and early 1980s. The Disney Company had decided that EPCOT Center would be a more adult themed park, including selling alcohol. As alcohol and Disney characters were deemed to be an improper combination, it was decided that none of the Disney characters, including Mickey Mouse and Minnie Mouse, would ever be seen at EPCOT Center. To some of the Imagineers working on EPCOT Center, this was taken as a challenge. They started including hidden Mickey Mouse profiles into various design elements of that park. As the park began to grow, guest comments led Disney to include the characters in EPCOT Center, but tradition was well-established by that point. Hidden Mickeys (as well as other Disney characters like Minnie Mouse) have become a staple of all theme park designs since. Because of the popularity of Hidden Mickeys, Imagineers are encouraged to place them in new constructions.
History:
Throughout the years, Hidden Mickeys spread in popularity as a pop-culture phenomenon. They have also appeared in animated movies.
History:
Hidden Mickey 50 Ears As part of the Happiest Homecoming on Earth at Disneyland, the park had been decorated with 50 Hidden Mickey 50 Ears, the official symbol of the 50th anniversary of Disneyland. The symbol is the traditional Mickey face and ears, but with the number 50 in the center. Before the 50th anniversary of Disneyland ended on September 30, 2006, the Hidden Mickey 50 Ears were gradually removed.
Locations:
Common locations for deliberate Hidden Mickeys include the Walt Disney Parks and Resorts, where they are most commonly found in attractions, stores, and decor around the environment. Although approximately 1,000 Hidden Mickeys have been recorded, The Walt Disney Company has never compiled a complete list of all the "known" or "deliberate" Mickeys (whether created by an Imagineer or a Disney Cast Member), so there is no way to confirm or disprove any reported Mickey sightings.
Locations:
The book Discovering the Magic Kingdom: An Unofficial Disneyland Vacation Guide - Second Edition has the largest printed listing of Hidden Mickeys for the Disneyland Resort in Anaheim. The book lists 419 Hidden Mickeys that can be found at Disneyland Park, Downtown Disney, the three Disneyland hotels, and Disney's California Adventure.
In media:
In the George Lopez episode "George Goes to Disneyland" there was a contest to see how many Hidden Mickeys a viewer could find. The winner won $10,015 and a trip to Disneyland.
In media:
The Kingdom Hearts series has several Hidden Mickeys throughout different games, with Kingdom Hearts III placing a larger emphasis on finding them in several different Disney worlds; they are referred to in-game as "Lucky Emblems". Finding these Lucky Emblems and taking in-game pictures of them are important, as depending on the difficulty, a certain number are required to unlock the game's secret ending.
In media:
In Super Smash Bros. Ultimate, where Kingdom Hearts’ Sora is a downloadable fighter, he retains the Hidden Mickey token attached to his keyblade, the Kingdom Key; it is the only reference to a Disney property, as everything belonging solely to Disney was otherwise excluded (characters and elements co-owned by Square Enix and Disney were included, however).
In Epic Mickey 2: The Power of Two, there are many Hidden Mickeys that Mickey can photograph throughout the game. There are also hidden Oswalds.
**Metal acetylacetonates**
Metal acetylacetonates:
Metal acetylacetonates are coordination complexes derived from the acetylacetonate anion (CH3COCHCOCH3−) and metal ions, usually transition metals. The bidentate ligand acetylacetonate is often abbreviated acac. Typically both oxygen atoms bind to the metal to form a six-membered chelate ring. The simplest complexes have the formula M(acac)3 and M(acac)2. Mixed-ligand complexes, e.g. VO(acac)2, are also numerous. Variations of acetylacetonate have also been developed with myriad substituents in place of methyl (RCOCHCOR′−). Many such complexes are soluble in organic solvents, in contrast to the related metal halides. Because of these properties, acac complexes are sometimes used as catalyst precursors and reagents. Applications include their use as NMR "shift reagents" and as catalysts for organic synthesis, and precursors to industrial hydroformylation catalysts. C5H7O2− in some cases also binds to metals through the central carbon atom; this bonding mode is more common for the third-row transition metals such as platinum(II) and iridium(III).
Synthesis:
The usual synthesis involves treatment of a metal salt with acetylacetone, acacH:
Mz+ + z Hacac ⇌ M(acac)z + z H+
Addition of base assists the removal of a proton from acetylacetone and shifts the equilibrium in favour of the complex. Both oxygen centres bind to the metal to form a six-membered chelate ring. In some cases the chelate effect is so strong that no added base is needed to form the complex. Some complexes are prepared by metathesis using Tl(acac).
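As a concrete instance of this general equilibrium (a representative example chosen here, not one drawn from the text above), setting z = 3 gives the well-known preparation of iron(III) acetylacetonate:

```latex
\mathrm{Fe^{3+} + 3\,Hacac \;\rightleftharpoons\; Fe(acac)_3 + 3\,H^+}
```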
Structure and bonding:
In the majority of its complexes acac forms six-membered C3O2M chelate rings. The M(acac) ring is planar with a symmetry plane bisecting the ring.
Structure and bonding:
The acacM ring generally exhibits aromatic character, consistent with delocalized bonding in the monoanionic C3O2 portion. Consistent with this scenario, in some complexes the acac ligand is susceptible to electrophilic substitution, akin to electrophilic aromatic substitution (in this equation Me = CH3): Co(O2C3Me2H)3 + 3 NO2+ → Co(O2C3Me2NO2)3 + 3 H+. In terms of electron counting, the neutral bidentate O,O-bonded acac ligand is an "L-X ligand", i.e. a combination of a Lewis base (L) and a pseudohalide (X).
Structure and bonding:
In an exception to the classical description presented above, the bis(pyridine) adduct of chromium(II) acetylacetonate features a noninnocent acac2− ligand.
Classification by triad:
Titanium triad Treatment of TiCl4 with acetylacetone gives TiCl2(acac)2, a red-coloured, octahedral complex with C2 symmetry: TiCl4 + 2 Hacac → TiCl2(acac)2 + 2 HCl. This reaction requires no base. The complex TiCl2(acac)2 is fluxional in solution, the NMR spectrum exhibiting a single methyl resonance at room temperature. Unlike Ti(IV), both Zr(IV) and Hf(IV) bind four bidentate acetylacetonates, reflecting the larger radius of these metals. Hafnium acetylacetonate and zirconium acetylacetonate adopt square antiprismatic structures.
Classification by triad:
Regarding acetylacetonates of titanium(III), Ti(acac)3 is well studied. This blue-colored compound forms from titanium trichloride and acetylacetone.
Classification by triad:
Vanadium triad Vanadyl acetylacetonate is a blue complex with the formula V(O)(acac)2. This complex features the vanadyl(IV) group, and many related compounds are known. The molecule is square pyramidal, with idealized C2v symmetry. The complex catalyzes epoxidation of allylic alcohols by peroxides. Vanadium(III) acetylacetonate is a dark-brown solid. Vanadium β-diketonate complexes are used as precatalysts in the commercial production of ethylene-propylene-diene elastomers (EPDM). They are also evaluated for other applications, including redox flow batteries, the treatment of diabetes by enhancing the activity of insulin, and use as precursors to inorganic materials by CVD.
Classification by triad:
Chromium triad Chromium(III) acetylacetonate, Cr(acac)3, is a typical octahedral complex containing three acac− ligands. Like most such compounds, it is highly soluble in nonpolar organic solvents. This particular complex, which has three unpaired electrons, is used as a spin relaxation agent to improve the sensitivity of quantitative carbon-13 NMR spectroscopy. Chromium(II) acetylacetonate is a highly oxygen-sensitive, light brown compound. The complex adopts a square planar structure, weakly associated into stacks in the solid state. It is isomorphous with Pd(acac)2 and Cu(acac)2.
Classification by triad:
Manganese triad Mn(acac)3 has been prepared by the comproportionation of the manganese(II) compound Mn(acac)2 with potassium permanganate in the presence of additional acetylacetone. Alternatively, it can be prepared by the direct reaction of acetylacetone with potassium permanganate. In terms of electronic structure, Mn(acac)3 is high spin. Its distorted octahedral structure reflects geometric distortions due to the Jahn–Teller effect. The two most common structures for this complex include one with tetragonal elongation and one with tetragonal compression. For the elongation, two Mn–O bonds are 2.12 Å while the other four are 1.93 Å. For the compression, two Mn–O bonds are 1.95 Å and the other four are 2.00 Å. The effects of the tetragonal elongation are noticeably more significant than the effects of the tetragonal compression.
Classification by triad:
In organic chemistry, Mn(acac)3 has been used as a one-electron oxidant for coupling phenols.
Iron triad Iron(III) acetylacetonate, Fe(acac)3, is a red, high-spin complex with five unpaired electrons that is highly soluble in organic solvents. It has occasionally been investigated as a catalyst precursor. Fe(acac)3 has been partially resolved into its Δ and Λ isomers. The ferrous complex Fe(acac)2 is oligomeric.
Like iron, Ru(III) forms a stable tris(acetylacetonate). Reduction of this Ru(III) derivative in the presence of other ligands affords mixed ligand complexes, e.g. Ru(acac)2(alkene)2.
Cobalt triad Tris(acetylacetonato)cobalt(III), Co(acac)3, is a low-spin, diamagnetic complex. Like other compounds of the type M(acac)3, this complex is chiral (has a non-superimposable mirror image).
Classification by triad:
The synthesis of Co(acac)3 involves the use of an oxidant, since the cobalt precursors are divalent: 2 CoCO3 + 6 Hacac + H2O2 → 2 Co(acac)3 + 4 H2O + 2 CO2. The complex "Co(acac)2", like the nickel complex with analogous stoichiometry, is typically isolated with two additional ligands, i.e. octahedral Co(acac)2L2. The anhydrous form exists as the tetramer [Co(acac)2]4. Like the trimeric nickel complex, this tetramer shows ferromagnetic interactions at low temperatures. Ir(acac)3 and Rh(acac)3 are known. A second linkage isomer of the iridium complex is known, trans-Ir(acac)2(CH(COMe)2)(H2O). This C-bonded derivative is a precursor to homogeneous catalysts for C–H activation and related chemistries. Two well-studied acetylacetonates of rhodium(I) and iridium(I) are Rh(acac)(CO)2 and Ir(acac)(CO)2. These complexes are square-planar, with C2v symmetry.
Classification by triad:
Nickel triad Nickel(II) bis(acetylacetonate) exists as the trimetallic complex [Ni(acac)2]3. Bulky beta-diketonates give red, monomeric, square-planar complexes. Nickel(II) bis(acetylacetonate) reacts with water to give the octahedral adduct [Ni(acac)2(H2O)2], a chalky green solid.
In contrast to the complicated magnetism and structures of Ni(acac)2, platinum(II) bis(acetylacetonate) and palladium(II) bis(acetylacetonate) are diamagnetic monometallic species.
Copper triad Cu(acac)2 is prepared by treating acetylacetone with aqueous [Cu(NH3)4]2+. It is available commercially and catalyzes coupling and carbene transfer reactions.
Unlike the copper(II) derivative, copper(I) acetylacetonate is an air-sensitive oligomeric species. It is employed to catalyze Michael additions.
Zinc triad The monoaquo complex Zn(acac)2·H2O (m.p. 138–140 °C) is pentacoordinate, adopting a square pyramidal structure. The complex is of some use in organic synthesis. Dehydration of this species gives the hygroscopic anhydrous derivative (m.p. 127 °C). This more volatile derivative has been used as a precursor to films of ZnO.
Acetylacetonates of the other elements:
Colourless, diamagnetic Al(acac)3 is structurally similar to other tris complexes, e.g. [Fe(acac)3]. The trisacetylacetonates of the lanthanides often adopt coordination numbers above 8.
Variants of acac:
Many variants of acetylacetonates are well developed. Hexafluoroacetylacetonates and trifluoroacetylacetonates form complexes that are often structurally related to regular acetylacetonates, but are more Lewis acidic and more volatile. The complex Eufod, Eu(OCC(CH3)3CHCOC3F7)3, features an elaborate partially fluorinated ligand. This complex is a Lewis acid, forming adducts with a variety of hard bases.
One or both oxygen centers in acetylacetonate can be replaced by RN groups, giving rise to Nacac and Nacnac ligands.
C-bonded acetylacetonates:
C5H7O2− in some cases also binds to metals through the central carbon atom (C3); this bonding mode is more common for the third-row transition metals such as platinum(II) and iridium(III). The complexes Ir(acac)3 and corresponding Lewis-base adducts Ir(acac)3L (L = an amine) contain one carbon-bonded acac ligand. The IR spectra of O-bonded acetylacetonates are characterized by relatively low-energy νCO bands of 1535 cm−1, whereas in carbon-bonded acetylacetonates the carbonyl vibration occurs closer to the normal range for ketonic C=O, i.e. 1655 cm−1.
**Von Neumann architecture**
Von Neumann architecture:
The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on a 1945 description by John von Neumann and others in the First Draft of a Report on the EDVAC. The document describes a design architecture for an electronic digital computer with these components: a processing unit with both an arithmetic logic unit and processor registers; a control unit that includes an instruction register and a program counter; memory that stores data and instructions; external mass storage; and input and output mechanisms. The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system. The design of a von Neumann architecture machine is simpler than that of a Harvard architecture machine—which is also a stored-program system, yet has one dedicated set of address and data buses for reading and writing to memory, and another set of address and data buses to fetch instructions.
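To make the shared-memory principle concrete, here is a minimal sketch in Python of a stored-program loop; the three-instruction machine and its encoding are invented for illustration, and every instruction fetch and data access goes to the same memory:

```python
# Minimal von Neumann-style machine: one memory holds code AND data.
# The instruction set (LOADI, ADD, STORE, HALT) is invented for illustration.
MEM = [
    ("LOADI", 5),       # acc = 5
    ("ADD",   7),       # acc += MEM[7]
    ("STORE", 8),       # MEM[8] = acc
    ("HALT",  0),
    0, 0, 0,            # unused cells
    37,                 # MEM[7]: a data word, in the same memory as the code
    0,                  # MEM[8]: result goes here
]

pc, acc = 0, 0
while True:
    op, arg = MEM[pc]        # instruction fetch: one memory access
    pc += 1
    if op == "LOADI":
        acc = arg
    elif op == "ADD":
        acc += MEM[arg]      # data fetch: a second access to the SAME memory
    elif op == "STORE":
        MEM[arg] = acc
    elif op == "HALT":
        break

print(MEM[8])  # 42
```

Because the ADD step needs both an instruction fetch and a data read from the same memory, hardware built this way can service only one of the two per bus cycle, which is exactly the bottleneck described above.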
Von Neumann architecture:
A stored-program computer uses the same underlying mechanism to encode both program instructions and data as opposed to designs which use a mechanism such as discrete plugboard wiring or fixed control circuitry for instruction implementation. Stored-program computers were an advancement over the manually reconfigured or fixed function computers of the 1940s, such as the Colossus and the ENIAC. These were programmed by setting switches and inserting patch cables to route data and control signals between various functional units.
Von Neumann architecture:
The vast majority of modern computers use the same hardware mechanism to encode and store both data and program instructions, but have caches between the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instruction and data fetches use separate buses (split cache architecture).
History:
The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot run a word processor or games. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as "designed" for a particular task. "Reprogramming"—when possible at all—was a laborious process that started with flowcharts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up and debug a program on ENIAC. With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, an instruction set, and can store in memory a set of instructions (a program) that details the computation.
History:
A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which operators had to do manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing.
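A sketch of that historical idiom, on the same kind of invented accumulator machine (illustrative only): the program walks an array by patching the address field of its own ADD instruction, with the host for-loop standing in for a branch instruction:

```python
# Self-modifying code sketch: instructions are ordinary memory words, so a
# program can rewrite the address field of one of its own instructions.
# The machine and encoding are invented for illustration.
MEM = [
    ("ADD", 4),          # 0: acc += MEM[4]; the address field gets patched
    ("HALT", 0),         # 1:
    0, 0,                # 2..3: unused
    10, 20, 30,          # 4..6: the array to be summed
]

acc = 0
for _ in range(3):
    op, addr = MEM[0]
    acc += MEM[addr]          # execute the ADD at address 0
    MEM[0] = (op, addr + 1)   # patch the instruction's own address field
print(acc)  # 60
```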
Capabilities:
On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. It makes "programs that write programs" possible. This has made a sophisticated self-hosting computing ecosystem flourish around von Neumann architecture machines.
Some high level languages leverage the von Neumann architecture by providing an abstract, machine-independent way to manipulate executable code at runtime (e.g., LISP), or by using runtime information to tune just-in-time compilation (e.g. languages hosted on the Java virtual machine, or languages embedded in web browsers).
On a smaller scale, some repetitive operations such as BITBLT or pixel and vertex shaders can be accelerated on general purpose processors with just-in-time compilation techniques. This is one use of self-modifying code that has remained popular.
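A small illustration of treating code as data, with Python source standing in for machine code (a real JIT emits native instructions; the function names here are invented):

```python
# Treating code as data: build, compile, and run a specialized function at
# runtime. Real JIT compilers emit machine code; Python source stands in here.
def make_scaler(factor):
    src = f"def scale(pixels):\n    return [p * {factor} for p in pixels]\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["scale"]

scale_by_3 = make_scaler(3)   # the constant 3 is baked into the generated code
print(scale_by_3([1, 2, 4]))  # [3, 6, 12]
```

Baking the constant into the generated code is analogous to the specialization used in JIT-compiled BITBLT routines: the inner loop no longer has to fetch the parameter at run time.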
Development of the stored-program concept:
The mathematician Alan Turing, who had been alerted to a problem of mathematical logic by the lectures of Max Newman at the University of Cambridge, wrote a paper in 1936 entitled On Computable Numbers, with an Application to the Entscheidungsproblem, which was published in the Proceedings of the London Mathematical Society. In it he described a hypothetical machine he called a universal computing machine, now known as the "Universal Turing machine". The hypothetical machine had an infinite store (memory in today's terminology) that contained both instructions and data. John von Neumann became acquainted with Turing while he was a visiting professor at Cambridge in 1935, and also during Turing's PhD year at the Institute for Advanced Study in Princeton, New Jersey during 1936–1937. Whether he knew of Turing's paper of 1936 at that time is not clear.
Development of the stored-program concept:
In 1936, Konrad Zuse also anticipated, in two patent applications, that machine instructions could be stored in the same storage used for data. Independently, J. Presper Eckert and John Mauchly, who were developing the ENIAC at the Moore School of Electrical Engineering of the University of Pennsylvania, wrote about the stored-program concept in December 1943.
In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay-line memory. This was the first time the construction of a practical stored-program machine was proposed. At that time, he and Mauchly were not aware of Turing's work.
Development of the stored-program concept:
Von Neumann was involved in the Manhattan Project at the Los Alamos National Laboratory. It required huge amounts of calculation, and thus drew him to the ENIAC project, during the summer of 1944. There he joined the ongoing discussions on the design of this stored-program computer, the EDVAC. As part of that group, he wrote up a description titled First Draft of a Report on the EDVAC based on the work of Eckert and Mauchly. It was unfinished when his colleague Herman Goldstine circulated it, and bore only von Neumann's name (to the consternation of Eckert and Mauchly). The paper was read by dozens of von Neumann's colleagues in America and Europe, and influenced the next round of computer designs.
Development of the stored-program concept:
Jack Copeland considers that it is "historically inappropriate to refer to electronic stored-program digital computers as 'von Neumann machines'". His Los Alamos colleague Stan Frankel said of von Neumann's regard for Turing's ideas: I know that in or about 1943 or '44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936….
Development of the stored-program concept:
Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as the "father of the computer" (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—in so far as not anticipated by Babbage…. Both Turing and von Neumann, of course, also made substantial contributions to the "reduction to practice" of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities.
Development of the stored-program concept:
At the time that the "First Draft" report was circulated, Turing was producing a report entitled Proposed Electronic Calculator. It described, in engineering and programming detail, his idea of a machine he called the Automatic Computing Engine (ACE). He presented this to the executive committee of the British National Physical Laboratory on February 19, 1946. Although Turing knew from his wartime experience at Bletchley Park that what he proposed was feasible, the secrecy surrounding Colossus, which was subsequently maintained for several decades, prevented him from saying so. Various successful implementations of the ACE design were produced.
Development of the stored-program concept:
Both von Neumann's and Turing's papers described stored-program computers, but von Neumann's earlier paper achieved greater circulation and the computer architecture it outlined became known as the "von Neumann architecture". In the 1953 publication Faster than Thought: A Symposium on Digital Computing Machines (edited by B. V. Bowden), a section in the chapter on Computers in America reads as follows: The Machine of the Institute For Advanced Studies, Princeton: In 1945, Professor J. von Neumann, who was then working at the Moore School of Engineering in Philadelphia, where the E.N.I.A.C. had been built, issued on behalf of a group of his co-workers, a report on the logical design of digital computers. The report contained a detailed proposal for the design of the machine that has since become known as the E.D.V.A.C. (electronic discrete variable automatic computer). This machine has only recently been completed in America, but the von Neumann report inspired the construction of the E.D.S.A.C. (electronic delay-storage automatic calculator) in Cambridge (see p. 130).
Development of the stored-program concept:
In 1947, Burks, Goldstine and von Neumann published another report that outlined the design of another type of machine (a parallel machine this time) that would be exceedingly fast, capable perhaps of 20,000 operations per second. They pointed out that the outstanding problem in constructing such a machine was the development of suitable memory with instantaneously accessible contents. At first they suggested using a special vacuum tube—called the "Selectron"—which the Princeton Laboratories of RCA had invented. These tubes were expensive and difficult to make, so von Neumann subsequently decided to build a machine based on the Williams memory. This machine—completed in June, 1952 in Princeton—has become popularly known as the Maniac. The design of this machine inspired at least half a dozen machines now being built in America, all known affectionately as "Johniacs".
Development of the stored-program concept:
In the same book, the first two paragraphs of a chapter on ACE read as follows: Automatic Computation at the National Physical Laboratory: One of the most modern digital computers which embodies developments and improvements in the technique of automatic electronic computing was recently demonstrated at the National Physical Laboratory, Teddington, where it has been designed and built by a small team of mathematicians and electronics research engineers on the staff of the Laboratory, assisted by a number of production engineers from the English Electric Company, Limited. The equipment so far erected at the Laboratory is only the pilot model of a much larger installation which will be known as the Automatic Computing Engine, but although comparatively small in bulk and containing only about 800 thermionic valves, as can be judged from Plates XII, XIII and XIV, it is an extremely rapid and versatile calculating machine.
Development of the stored-program concept:
The basic concepts and abstract principles of computation by a machine were formulated by Dr. A. M. Turing, F.R.S., in a paper read before the London Mathematical Society in 1936, but work on such machines in Britain was delayed by the war. In 1945, however, an examination of the problems was made at the National Physical Laboratory by Mr. J. R. Womersley, then superintendent of the Mathematics Division of the Laboratory. He was joined by Dr. Turing and a small staff of specialists, and, by 1947, the preliminary planning was sufficiently advanced to warrant the establishment of the special group already mentioned. In April, 1948, the latter became the Electronics Section of the Laboratory, under the charge of Mr. F. M. Colebrook.
Early von Neumann-architecture computers:
The First Draft described a design that was used by many universities and corporations to construct their computers. Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets.
ARC2 (Birkbeck, University of London) officially came online on May 12, 1948.
Manchester Baby (Victoria University of Manchester, England) made its first successful run of a stored program on June 21, 1948.
Early von Neumann-architecture computers:
EDSAC (University of Cambridge, England), the first practical stored-program electronic computer (May 1949)
Manchester Mark 1 (University of Manchester, England), developed from the Baby (June 1949)
CSIRAC (Council for Scientific and Industrial Research), Australia (November 1949)
MESM at the Kiev Institute of Electrotechnology in Kiev, Ukrainian SSR (November 1950)
EDVAC (Ballistic Research Laboratory, Computing Laboratory at Aberdeen Proving Ground, 1951)
ORDVAC (U-Illinois) at Aberdeen Proving Ground, Maryland (completed November 1951)
IAS machine at Princeton University (January 1952)
MANIAC I at Los Alamos Scientific Laboratory (March 1952)
ILLIAC at the University of Illinois (September 1952)
BESM-1 in Moscow (1952)
AVIDAC at Argonne National Laboratory (1953)
ORACLE at Oak Ridge National Laboratory (June 1953)
BESK in Stockholm (1953)
JOHNNIAC at RAND Corporation (January 1954)
DASK in Denmark (1955)
WEIZAC at the Weizmann Institute of Science in Rehovot, Israel (1955)
PERM in Munich (1956)
SILLIAC in Sydney (1956)
Early stored-program computers:
The date information in the following chronology is difficult to put into proper order. Some dates are for first running a test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation.
The IBM SSEC had the ability to treat instructions as data, and was publicly demonstrated on January 27, 1948. This ability was claimed in a US patent. However, it was partially electromechanical, not fully electronic. In practice, instructions were read from paper tape due to its limited memory.
The ARC2 developed by Andrew Booth and Kathleen Booth at Birkbeck, University of London officially came online on May 12, 1948. It featured the first rotating drum storage device.
The Manchester Baby was the first fully electronic computer to run a stored program. It ran a factoring program for 52 minutes on June 21, 1948, after running a simple division program and a program to show that two numbers were relatively prime.
The ENIAC was modified to run as a primitive read-only stored-program computer (using the Function Tables for program ROM) and was demonstrated as such on September 16, 1948, running a program by Adele Goldstine for von Neumann.
The BINAC ran some test programs in February, March, and April 1949, although was not completed until September 1949.
The Manchester Mark 1 developed from the Baby project. An intermediate version of the Mark 1 was available to run programs in April 1949, but was not completed until October 1949.
The EDSAC ran its first program on May 6, 1949.
The EDVAC was delivered in August 1949, but it had problems that kept it from being put into regular operation until 1951.
The CSIR Mk I ran its first program in November 1949.
The SEAC was demonstrated in April 1950.
The Pilot ACE ran its first program on May 10, 1950, and was demonstrated in December 1950.
The SWAC was completed in July 1950.
The Whirlwind was completed in December 1950 and was in actual use in April 1951.
The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was installed in December 1950.
Evolution:
Through the decades of the 1960s and 1970s computers generally became both smaller and faster, which led to evolutions in their architecture. For example, memory-mapped I/O lets input and output devices be treated the same as memory. A single system bus could be used to provide a modular system with lower cost. This is sometimes called a "streamlining" of the architecture.
Evolution:
In subsequent decades, simple microcontrollers would sometimes omit features of the model to lower cost and size, while larger computers added features for higher performance.
Design limitations:
Von Neumann bottleneck The shared bus between the program memory and data memory leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between the central processing unit (CPU) and memory compared to the amount of memory. Because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continually forced to wait for needed data to move to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every new generation of CPU.
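Rough arithmetic shows why the bottleneck worsens; all figures in this sketch are assumed round numbers, not measurements of any particular system:

```python
# Back-of-the-envelope von Neumann bottleneck arithmetic (assumed figures).
clock_hz        = 3e9     # 3 GHz CPU, one instruction per cycle (assumed)
bytes_per_instr = 4       # each instruction fetch moves 4 bytes
bytes_per_data  = 8       # assume every instruction also touches 8 data bytes

demand = clock_hz * (bytes_per_instr + bytes_per_data)   # bytes/second wanted
bus_bw = 25.6e9                                          # bytes/second offered

print(f"demand: {demand/1e9:.0f} GB/s, bus: {bus_bw/1e9:.1f} GB/s")
print(f"CPU utilization limit: {min(1.0, bus_bw/demand):.0%}")  # ~71%
```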
Design limitations:
The von Neumann bottleneck was described by John Backus in his 1977 ACM Turing Award lecture. According to Backus: Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.
Design limitations:
Mitigations There are several known methods for mitigating the von Neumann performance bottleneck. For example, the following can all improve performance: providing a cache between the CPU and the main memory; providing separate caches or separate access paths for data and instructions (the so-called modified Harvard architecture); using branch predictor algorithms and logic; providing a limited CPU stack or other on-chip scratchpad memory to reduce memory accesses; and implementing the CPU and the memory hierarchy as a system on chip, providing greater locality of reference and thus reducing latency and increasing throughput between processor registers and main memory. The problem can also be sidestepped somewhat by using parallel computing, for example the non-uniform memory access (NUMA) architecture; this approach is commonly employed by supercomputers. It is less clear whether the intellectual bottleneck that Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence. Modern functional programming and object-oriented programming are much less geared towards "pushing vast numbers of words back and forth" than earlier languages like FORTRAN were, but internally, that is still what computers spend much of their time doing, even highly parallel supercomputers.
Design limitations:
As of 1996, a database benchmark study found that three out of four CPU cycles were spent waiting for memory. Researchers expect that increasing the number of simultaneous instruction streams with multithreading or single-chip multiprocessing will make this bottleneck even worse. In the context of multi-core processors, additional overhead is required to maintain cache coherence between processors and threads.
Design limitations:
Self-modifying code Aside from the von Neumann bottleneck, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a computer crash. Memory protection and other forms of access control can usually protect against both accidental and malicious program changes.
**Transmeta Efficeon**
Transmeta Efficeon:
The Efficeon (stylized as efficēon) processor is Transmeta's second-generation 256-bit VLIW design, released in 2004, which employs a software engine, Code Morphing Software (CMS), to convert code written for x86 processors to the native instruction set of the chip. Like its predecessor, the Transmeta Crusoe (a 128-bit VLIW architecture), Efficeon stresses computational efficiency, low power consumption, and a low thermal footprint.
Processor:
Efficeon most closely mirrors the feature set of Intel Pentium 4 processors, although, like AMD Opteron processors, it supports a fully integrated memory controller, a HyperTransport IO bus, and the NX bit, or no-execute x86 extension to PAE mode. NX bit support is available starting with CMS version 6.0.4.
Efficeon's computational performance relative to mobile CPUs like the Intel Pentium M is thought to be lower, although little appears to be published about the relative performance of these competing processors.
Efficeon came in two package types: a 783- and a 592-contact ball grid array (BGA). Its power consumption is moderate (with some consuming as little as 3 watts at 1 GHz and 7 watts at 1.5 GHz), so it can be passively cooled.
Two generations of this chip were produced. The first generation (TM8600) was manufactured using a TSMC 0.13 micrometre process and produced at speeds up to 1.2 GHz. The second generation (TM8800 and TM8820) was manufactured using a Fujitsu 90 nm process and produced at speeds ranging from 1 GHz to 1.7 GHz.
Internally, the Efficeon has two arithmetic logic units, two load/store/add units, two execute units, two floating-point/MMX/SSE/SSE2 units, one branch prediction unit, one alias unit, and one control unit. The VLIW core can execute one 256-bit VLIW instruction, called a molecule, per cycle; each molecule has room for eight 32-bit instructions (called atoms).
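The molecule/atom arithmetic is simply 8 × 32 bits = 256 bits. The following sketch packs and unpacks such a bundle; the field layout (atom 0 in the low-order bits) is an assumption for illustration, not Transmeta's documented encoding:

```python
# Pack eight 32-bit "atoms" into one 256-bit "molecule" and unpack them.
# The layout (atom 0 in the low-order bits) is assumed for illustration.
def pack_molecule(atoms):
    assert len(atoms) == 8 and all(a < (1 << 32) for a in atoms)
    molecule = 0
    for i, atom in enumerate(atoms):
        molecule |= atom << (32 * i)
    return molecule  # a 256-bit integer

def unpack_molecule(molecule):
    return [(molecule >> (32 * i)) & 0xFFFFFFFF for i in range(8)]

m = pack_molecule([0xDEADBEEF] + [0] * 7)
print(unpack_molecule(m)[0] == 0xDEADBEEF)  # True
```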
The Efficeon has a 128 KB L1 instruction cache, a 64 KB L1 data cache and a 1 MB L2 cache. All caches are on die.
Additionally, the Efficeon CMS (code morphing software) reserves a small portion of main memory (typically 32 MB) for its translation cache of dynamically translated x86 instructions.
Products:
Elitegroup A532
Microsoft FlexGo Computer (first generation)
Orion Multisystem Cluster Workstation
Sharp Actius MM20, MP30, MP70G
Sharp Mebius Muramasa PC-MM2, PC-CV50F
**Fear of crossing streets**
Fear of crossing streets:
The fear of crossing streets, termed dromophobia or agyrophobia, is a specific phobia that affects a person's ability to cross a street or roadway where cars or other vehicles may be present. The term dromophobia comes from the Greek dromos, meaning racetrack.
Causes of dromophobia:
Dromophobia may result from experiencing a road accident and thus may be classified as a subtype of panic disorder with agoraphobia (PDA). As such, dromophobia, especially the fear of crossing streets alone, may be a component of accident-related posttraumatic stress disorder, as a reaction to a situation reminiscent of the past traumatic event. Sometimes this behavior may be misinterpreted during PTSD symptom assessment as caution (i.e., a normal learning behavior) rather than fear (an abnormal avoidant behavior). Fear of crossing streets may also result from anticipatory anxiety related to a person's limited mobility. For example, a person with stiff-person syndrome may experience attacks of increasing stiffness or spasms while crossing the street. Dromophobia may also be present in people, especially children, with autism or other neurological conditions that impact the ability to judge the speed of an approaching car.
**Train game**
Train game:
A train game or railway game is a board game that represents the construction and operation of railways. Train games are often highly involved hobby games that take several hours to play. Like wargames, train games represent a relatively small niche in the games market. A very popular example of a train game is Ticket to Ride.
Not every game with a train in it is a "train game". For example, the domino game Mexican Train and the board game Monopoly are not usually considered train games because they do not represent railway operations. Empire Builder and 1830 are examples of train games.
Varieties:
Classic train games generally fall into two broad categories: 18XX games and "crayon rail" games. 18XX games originated in 1974 with the publication of Francis Tresham's 1829 and continued with such titles as 1830, 1856, and 1870. These games involve buying and selling stock in railway companies, laying track, and running locomotives to generate a profit. Most are hex map games in which cardboard tiles are laid to build sequences of railway track. Many 18XX games can be further divided into "1829 style games," which emphasize company development, and "1830 style games," which emphasize robber baron stock market manipulation. Crayon rail games are more streamlined and do not contain a stock market component. They focus on laying track, delivering goods, and making profits. Instead of the hex map system found in 18XX games, railway tracks are drawn with crayons or dry erase markers. The first mass-market crayon style game was Darwin Bromley and Bill Fawcett's Empire Builder, released in 1980 by Mayfair Games; however, Railway Rivals by David Watts had been popular, especially amongst postal gamers, for nearly 10 years before that. Other games in the Empire Builder series include British Rails, Eurorails, India Rails, and North American Rails, to name a few. Some of these are even set in a fantasy or science fiction world, such as Iron Dragon and Lunar Rails.
Varieties:
Another type of train game is Silverton, a Mayfair game that uses wooden blocks instead of crayons to represent increasing completion of rail networks (the pieces also block competitors, in a mechanic similar to the station tokens in 18XX games). Mayfair republished the original Two Wolf Games Silverton, including the expansion map as part of the basic game.
The deck-building board game Trains also includes rail laying and station building elements.
The mechanics in Friedemann Friese's Power Grid were taken from crayon rail games. Its predecessor Funkenschlag even used crayons to denote power lines. In this sense, Power Grid is more of a "train game" than such train-themed games as Ticket to Ride, Union Pacific, and TransAmerica, which do not involve as many train-related mechanics.
Tournaments:
Several competitions for train gamers are held at major game conventions by the Train Gamers Association. Their largest event is the Puffing Billy Tournament, but other competitions include Iron Man, the 18XX Championship, and the Empire Builder International. The Puffing Billy Tournament was named in honor of the world's oldest surviving steam locomotive, Puffing Billy, which was built in 1814.
Tournaments:
Another important event for train gamers is the Chattanooga Rail Gaming Challenge, which has been held in Chattanooga, Tennessee since 1997. In 2007, the competition grew to over 60 participants. Train games are also popular at the annual World Boardgaming Championships, which routinely attracts over 1,000 participants, over 100 of whom play titles such as Rail Baron, Empire Builder and 18XX.
**Octatropine methylbromide**
Octatropine methylbromide:
Octatropine methylbromide (INN) or anisotropine methylbromide (USAN), trade names Valpin, Endovalpin, Lytispasm and others, is a muscarinic antagonist and antispasmodic. It was introduced to the U.S. market in 1963 as an adjunct in the treatment of peptic ulcer, and promoted as being more specific to the gastrointestinal tract than other anticholinergics, although its selectivity was questioned in later studies. Octatropine has been superseded by more effective agents in the treatment of peptic ulcer disease and is no longer used. It is still sold in some countries in combination with other drugs, such as phenobarbital and metamizole.
**GPR143**
GPR143:
G-protein coupled receptor 143, also known as Ocular albinism type 1 (OA1) in humans, is a conserved integral membrane protein with seven transmembrane domains and similarities to G protein-coupled receptors (GPCRs) that is expressed in the eye and in epidermal melanocytes. The protein is encoded by the GPR143 gene, variants of which can lead to ocular albinism type 1. The GPR143 gene is regulated by the microphthalmia-associated transcription factor. L-DOPA is an endogenous ligand for OA1.
Interactions:
GPR143 has been shown to interact with GNAI1.
**3,4-Methylenedioxyphenethylamine**
3,4-Methylenedioxyphenethylamine:
3,4-Methylenedioxyphenethylamine, also known as 3,4-MDPEA, MDPEA, and homopiperonylamine, is a substituted phenethylamine formed by adding a methylenedioxy group to phenethylamine. It is structurally similar to MDA, but without the methyl group at the alpha position.
3,4-Methylenedioxyphenethylamine:
According to Alexander Shulgin in his book PiHKAL, MDPEA appears to be biologically inactive. This is likely because of extensive first-pass metabolism by the enzyme monoamine oxidase. However, if MDPEA were either used at high enough doses (e.g., 1–2 grams) or combined with a monoamine oxidase inhibitor (MAOI), it would probably become sufficiently active, though it would likely have a relatively short duration of action. This idea is similar in concept to the use of selective MAOA inhibitors and selective MAOB inhibitors in augmentation of dimethyltryptamine (DMT) and phenethylamine (PEA), respectively.
**Waste Atlas**
Waste Atlas:
Waste Atlas is an interactive waste management map that visualises global solid waste management data for comparison and benchmarking purposes. The Waste Atlas partnership is a non-commercial initiative supported by a broad range of non-profit organizations, including D-Waste, ISWA, WtERT, SWEEP-Net, SWAPI, and the University of Leeds. Currently, Waste Atlas hosts waste data for 164 countries, more than 1,800 cities from all over the world, and approximately 2,500 waste management facilities (1,626 sanitary landfills, 716 WtE, 129 MBT, 78 BT and 89 of the world's biggest dumpsites).
Global Correlation Charts and Global Waste Maps:
Global Correlation Charts is a set of global charts that correlate waste indicators, such as waste generation per capita and collection coverage, with socio-economic indicators, such as income and the Human Development Index. Global Waste Maps is a set of global maps that visualise waste management indicators such as waste collection coverage, waste generation per capita, etc.
Waste Atlas Report:
1st Annual report: The 2013 Waste Atlas report is dedicated to global solid waste management assessment and is based on data from 162 countries and 1,773 cities. According to the outcomes of the report, current annual municipal solid waste generation is estimated at about 1.9 billion tonnes, almost 30% of which remains uncollected. More than half of the world's population does not have access to regular refuse collection services. Of the waste that is collected, 70% goes to landfills and dumpsites for disposal, 14.5% is recycled or recovered in formal systems, and 11% goes to thermal treatment facilities. It is assessed that 3.5 billion people lack access to even the most elementary form of waste management.
Waste Atlas Report:
2nd Annual report: The 2014 Waste Atlas report is dedicated to unsound waste disposal, particularly in dumpsites. The 50 biggest dumpsites around the world are listed, with the most important information relating to their operation visualized in a unified way. Data are presented on the amount and type of waste disposed in place, the size, the waste concentration, the number of informal waste pickers, the population and natural resources within a radius of 10 km, and the distance to the nearest settlements. The research relied on crowd-sourcing 59,000 files from 25 countries. The results of the report highlight the health and environmental impacts of dumpsites and show that the 50 biggest active dumpsites daily affect the lives of 64 million people, a figure almost equal to the population of France; their total waste volume is 0.6–0.8 billion m3, almost 200–300 times the volume of the Great Pyramid of Giza. The statistical analysis showed that a typical waste dumpsite covers an area of 24 ha, equal to around 29 big international football fields.
**Exodeoxyribonuclease I**
Exodeoxyribonuclease I:
Exodeoxyribonuclease I (EC 3.1.11.1, Escherichia coli exonuclease I, E. coli exonuclease I, exonuclease I) is an enzyme that catalyses exonucleolytic cleavage in the 3′- to 5′-direction to yield nucleoside 5′-phosphates, with a preference for single-stranded DNA. The Escherichia coli enzyme hydrolyses glucosylated DNA.
**Gluten challenge test**
Gluten challenge test:
The gluten challenge test is a medical test in which gluten-containing foods are consumed and the (re-)occurrence of symptoms is observed afterwards to determine whether and how much a person reacts to these foods. The test may be performed in people with suspected gluten-related disorders on very specific occasions and under medical supervision, for example in people who had started a gluten-free diet without undergoing duodenal biopsy. Gluten challenge is discouraged before the age of 5 years and during pubertal growth. Gluten challenge protocols have significant limitations because a symptomatic relapse generally precedes the onset of a serological and histological relapse; they are therefore unacceptable for most patients.
History:
Before serological and biopsy-based diagnosis of coeliac disease was available, a gluten challenge test was a prerequisite for diagnosing coeliac disease. Today, with serological testing (determination of coeliac disease-specific antibodies in the blood) and duodenal biopsy with histological testing being available for diagnosing coeliac disease, patients with suspected coeliac disease are strongly advised to undergo both serological and biopsy testing before undertaking a gluten-free diet. People who present minor damage of the small intestine often have negative blood antibody titers, and many patients with coeliac disease are missed when a duodenal biopsy is not performed. Serologic tests have a high capacity to detect coeliac disease only in patients with total villous atrophy and have very low capacity to detect cases with partial villous atrophy or minor intestinal lesions with normal villi. Currently, gluten challenge is no longer required to confirm the diagnosis in patients with intestinal lesions compatible with coeliac disease and a positive response to a gluten-free diet. Nevertheless, in some cases, a gluten challenge with a subsequent biopsy may be useful to support the diagnosis, for example in people with a high suspicion for coeliac disease, without a biopsy confirmation, who have negative blood antibodies and are already on a gluten-free diet. Gluten challenge is discouraged before the age of 5 years and during pubertal growth. European guidelines suggest that in children and adolescents with symptoms compatible with coeliac disease, the diagnosis can be made without the need for an intestinal biopsy if anti-tTG antibody titres are very high (10 times the upper limit of normal). A recently proposed criterion for the diagnosis of non-coeliac gluten sensitivity concludes that an improvement of gastrointestinal symptoms and extra-intestinal manifestations higher than 50% with a gluten-free diet, assessed through a rating scale, may confirm the clinical diagnosis of non-coeliac gluten sensitivity. Nevertheless, this rating scale is not yet applied worldwide. To exclude a placebo effect, a double-blind placebo-controlled gluten challenge is a useful tool, although it is expensive and complicated in routine clinical practice, and is therefore only performed in research studies.
Testing:
The test is also frequently used in clinical trials, for example for assessing the efficacy of novel drugs for patients with coeliac disease. Medical guidelines for performing a gluten challenge vary in terms of the recommended dose and duration of the test.
Preparation In order to be able to assess the results of the gluten challenge, the patient needs to have been on a gluten-free diet beforehand, with symptoms having disappeared sufficiently for allowing for a subsequent re-appearance of symptoms under gluten challenge to be observed.
Testing:
Procedure It remains unclear what daily intake of gluten is adequate and how long the gluten challenge should last. Some protocols recommend eating a maximum of 10 g of gluten per day for 6 weeks. Nevertheless, recent studies have shown that a 2-week challenge of 3 g of gluten per day may induce histological and serological abnormalities in most adults with proven coeliac disease. This newly proposed protocol has shown higher tolerability and compliance, and it has been calculated that its application in secondary-care gastrointestinal practice would identify coeliac disease in 7% of patients referred for suspected non-coeliac gluten sensitivity, while confirming non-coeliac gluten sensitivity in the remaining 93%; it is not yet universally adopted. A double-blind placebo-controlled gluten challenge can be performed by means of capsules containing gluten powder (or wheat powder) or a placebo, respectively, although it is expensive and complicated in routine clinical practice, and is therefore only performed in research studies. There are indications that patients with non-coeliac gluten sensitivity show a reappearance of symptoms in far shorter time than is the case for coeliac disease: in non-coeliac gluten sensitivity, symptoms usually relapse within a few hours or days of gluten challenge. In cases of suspected coeliac disease, a gastrointestinal biopsy is performed at the end of the gluten challenge. For an alternative diagnosis of non-coeliac gluten sensitivity, the reappearance of symptoms is assessed. However, there is no agreement so far as to how to perform a non-coeliac gluten sensitivity symptom evaluation after a gluten challenge. For people eating a gluten-free diet who are unable to perform an oral gluten challenge, an alternative to identify possible coeliac disease is an in vitro gliadin challenge of small bowel biopsies, but this test is available only at selected specialized tertiary-care centers.
Variations:
For determining whether certain foods such as oats can be tolerated by certain patients, a gradual challenge may be performed.
**Classic RISC pipeline**
Classic RISC pipeline:
In the history of computer hardware, some early reduced instruction set computer central processing units (RISC CPUs) used a very similar architectural solution, now called a classic RISC pipeline. Those CPUs were: MIPS, SPARC, Motorola 88000, and later the notional CPU DLX invented for education.
Classic RISC pipeline:
Each of these classic scalar RISC designs fetches and tries to execute one instruction per cycle. The main common concept of each design is a five-stage execution instruction pipeline. During operation, each pipeline stage works on one instruction at a time. Each of these stages consists of a set of flip-flops to hold state, and combinational logic that operates on the outputs of those flip-flops.
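The overall structure can be sketched in a few lines of Python, with the pipeline latches modeled as a list and the per-stage logic stubbed out; this shows only the lock-step advance, not any real CPU's behavior:

```python
# Skeleton of a five-stage pipeline: latches between stages, all advancing
# once per cycle. Stage behavior is stubbed out; only the structure matters.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def run(program, cycles):
    latches = [None] * 5               # one instruction (or None) per stage
    fetched = iter(program)
    for cycle in range(cycles):
        # Shift right-to-left so each instruction moves one stage per cycle.
        for i in reversed(range(1, 5)):
            latches[i] = latches[i - 1]
        latches[0] = next(fetched, None)   # IF pulls in the next instruction
        occupancy = [f"{s}:{latches[i] or '-'}" for i, s in enumerate(STAGES)]
        print(f"cycle {cycle}: " + "  ".join(occupancy))

run(["add", "sub", "lw", "and"], 8)
```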
The classic five stage RISC pipeline:
Instruction fetch The instructions reside in memory that takes one cycle to read. This memory can be dedicated SRAM or an instruction cache. The term "latency" is often used in computer science and means the time from when an operation starts until it completes. Thus, instruction fetch has a latency of one clock cycle (if using single-cycle SRAM or if the instruction was in the cache). During the Instruction Fetch stage, a 32-bit instruction is fetched from the instruction memory.
The classic five stage RISC pipeline:
The Program Counter, or PC, is a register that holds the address that is presented to the instruction memory. The address is presented to instruction memory at the start of a cycle. Then, during the cycle, the instruction is read out of instruction memory, and at the same time a calculation is done to determine the next PC. The next PC is calculated by incrementing the PC by 4, and by choosing whether to take that as the next PC or to take the result of a branch/jump calculation as the next PC. Note that in classic RISC, all instructions have the same length. (This is one thing that separates RISC from CISC.) In the original RISC designs, the size of an instruction is 4 bytes, so 4 is always added to the instruction address; PC + 4 is not used in the case of a taken branch, jump, or exception (see delayed branches, below). (Note that some modern machines use more complicated algorithms (branch prediction and branch target prediction) to guess the next instruction address.) Instruction decode Another thing that separates the first RISC machines from earlier CISC machines is that RISC has no microcode. Once fetched from the instruction cache, the instruction bits are shifted down the pipeline, so that simple combinational logic in each pipeline stage produces control signals for the datapath directly from the instruction bits. As a result, very little decoding is done in the stage traditionally called the decode stage. A consequence of this lack of decoding is that more instruction bits have to be used to specify what the instruction does, which leaves fewer bits for things like register indices.
The classic five stage RISC pipeline:
All MIPS, SPARC, and DLX instructions have at most two register inputs. During the decode stage, the indexes of these two registers are identified within the instruction and presented to the register file as addresses. Thus the two registers named are read from the register file. In the MIPS design, the register file had 32 entries.
The classic five stage RISC pipeline:
At the same time the register file is read, instruction issue logic in this stage determines whether the pipeline is ready to execute the instruction in this stage. If not, the issue logic causes both the Instruction Fetch stage and the Decode stage to stall. On a stall cycle, the input flip-flops do not accept new bits, so no new calculations take place during that cycle.
The classic five stage RISC pipeline:
If the instruction decoded is a branch or jump, the target address of the branch or jump is computed in parallel with reading the register file. The branch condition is computed in the following cycle (after the register file is read), and if the branch is taken, or if the instruction is a jump, the PC in the first stage is assigned the branch target rather than the incremented PC that has been computed. Some architectures instead made use of the arithmetic logic unit (ALU) in the Execute stage, at the cost of slightly decreased instruction throughput.
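A sketch of the next-PC selection just described, assuming MIPS-like 4-byte instructions and a 16-bit branch offset (values are illustrative):

```python
# Next-PC logic: default to PC + 4; take the branch/jump target when the
# decode stage resolves a taken branch. Values here are illustrative.
def branch_target(pc, imm16):
    # Sign-extend the 16-bit offset, scale it by 4, and add it to PC + 4.
    offset = imm16 - 0x10000 if imm16 & 0x8000 else imm16
    return pc + 4 + (offset << 2)

def next_pc(pc, branch_taken, target):
    return target if branch_taken else pc + 4

pc = 0x1000
target = branch_target(pc, 0x0010)                  # 0x1004 + 0x40 = 0x1044
pc = next_pc(pc, branch_taken=True, target=target)
print(hex(pc))  # 0x1044
```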
The classic five stage RISC pipeline:
The decode stage ended up with quite a lot of hardware: MIPS has the possibility of branching if two registers are equal, so a 32-bit-wide AND tree runs in series after the register file read, making a very long critical path through this stage (which means fewer cycles per second). Also, the branch target computation generally required a 16-bit add and a 14-bit incrementer. Resolving the branch in the decode stage made it possible to have just a single-cycle branch mis-predict penalty. Since branches were very often taken (and thus mis-predicted), it was very important to keep this penalty low.
The classic five stage RISC pipeline:
Execute The Execute stage is where the actual computation occurs. Typically this stage consists of an ALU, and also a bit shifter. It may also include a multiple cycle multiplier and divider.
The ALU is responsible for performing boolean operations (and, or, not, nand, nor, xor, xnor) and also for performing integer addition and subtraction. Besides the result, the ALU typically provides status bits such as whether or not the result was 0, or if an overflow occurred.
The bit shifter is responsible for shift and rotations.
Instructions on these simple RISC machines can be divided into three latency classes according to the type of the operation: Register-Register Operation (Single-cycle latency): Add, subtract, compare, and logical operations. During the execute stage, the two arguments were fed to a simple ALU, which generated the result by the end of the execute stage.
Memory Reference (Two-cycle latency). All loads from memory. During the execute stage, the ALU added the two arguments (a register and a constant offset) to produce a virtual address by the end of the cycle.
The classic five stage RISC pipeline:
Multi-cycle Instructions (Many-cycle latency). Integer multiply and divide and all floating-point operations. During the execute stage, the operands to these operations were fed to the multi-cycle multiply/divide unit. The rest of the pipeline was free to continue execution while the multiply/divide unit did its work. To avoid complicating the writeback stage and issue logic, multicycle instructions wrote their results to a separate set of registers.
The classic five stage RISC pipeline:
Memory access If data memory needs to be accessed, it is done in this stage.
During this stage, single cycle latency instructions simply have their results forwarded to the next stage. This forwarding ensures that both one and two cycle instructions always write their results in the same stage of the pipeline so that just one write port to the register file can be used, and it is always available.
For direct-mapped and virtually tagged data caching, the simplest by far of the numerous data cache organizations, two SRAMs are used, one storing data and the other storing tags.
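A behavioral sketch of that organization, with two Python lists standing in for the tag and data SRAMs; the geometry (64 lines of 16 bytes) is an assumption for illustration:

```python
# Direct-mapped cache lookup: the address is split into offset, index, and
# tag; the index selects one line in both the tag SRAM and the data SRAM.
# Geometry (64 lines of 16 bytes) is assumed for illustration.
NUM_LINES, LINE_BYTES = 64, 16

tag_sram  = [None] * NUM_LINES            # stored tags (None = invalid line)
data_sram = [b"\x00" * LINE_BYTES] * NUM_LINES

def lookup(addr):
    offset = addr % LINE_BYTES
    index  = (addr // LINE_BYTES) % NUM_LINES
    tag    = addr // (LINE_BYTES * NUM_LINES)
    if tag_sram[index] == tag:            # both SRAMs are read in parallel
        return data_sram[index][offset]   # hit: deliver the byte
    return None                           # miss: go to the next memory level

tag_sram[1] = 0x48                        # the block holding 0x12013 maps here
data_sram[1] = bytes(range(16))
print(lookup(0x12013))                    # 3 (hit)
print(lookup(0x99013))                    # None (same index, different tag)
```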
Writeback During this stage, both single cycle and two cycle instructions write their results into the register file.
The classic five stage RISC pipeline:
Note that two different stages are accessing the register file at the same time: the decode stage is reading two source registers at the same time that the writeback stage is writing a previous instruction's destination register. On real silicon, this can be a hazard (see below for more on hazards). That is because one of the source registers being read in decode might be the same as the destination register being written in writeback. When that happens, the same memory cells in the register file are being both read and written at the same time. On silicon, many implementations of memory cells will not operate correctly when read and written at the same time.
Hazards:
Hennessy and Patterson coined the term hazard for situations where instructions in a pipeline would produce wrong answers.
Hazards:
Structural hazards Structural hazards occur when two instructions might attempt to use the same resources at the same time. Classic RISC pipelines avoided these hazards by replicating hardware. In particular, branch instructions could have used the ALU to compute the target address of the branch. If the ALU were used in the decode stage for that purpose, an ALU instruction followed by a branch would have seen both instructions attempt to use the ALU simultaneously. It is simple to resolve this conflict by designing a specialized branch target adder into the decode stage.
Hazards:
Data hazards Data hazards occur when an instruction, scheduled blindly, would attempt to use data before the data is available in the register file.
In the classic RISC pipeline, data hazards are avoided in one of two ways: Solution A: Bypassing. Bypassing is also known as operand forwarding.
Hazards:
Suppose the CPU is executing a SUB instruction that writes r10, immediately followed by an AND instruction that reads r10 (for example, SUB r3,r4 → r10 then AND r10,r3 → r11; the operand notation is illustrative). The instruction fetch and decode stages send the second instruction one cycle after the first, so the two flow down the pipeline offset by one stage. In a naive pipeline, without hazard consideration, the data hazard progresses as follows: in cycle 3, the SUB instruction calculates the new value for r10. In the same cycle, the AND operation is decoded, and the value of r10 is fetched from the register file. However, the SUB instruction has not yet written its result to r10; that write-back normally occurs in cycle 5. Therefore, the value read from the register file and passed to the ALU in the execute stage of the AND operation is incorrect.
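A minimal Python sketch of this failure mode, assuming a simplified five-stage model; register names and starting values are illustrative, not from the source:

```python
# Naive pipeline behaviour on SUB followed by AND, with no forwarding:
# registers are read in decode and written in writeback, so AND reads stale r10.
regs = {"r3": 7, "r4": 5, "r10": 999, "r11": 0}   # r10 still holds a stale value

# Cycle 3: SUB executes while AND is decoded and reads its operands.
sub_result = regs["r3"] - regs["r4"]   # 2, ready at the end of cycle 3
and_operand = regs["r10"]              # 999: stale read from the register file

# Cycle 4: AND executes with the stale operand.
and_result = and_operand & regs["r3"]  # wrong answer

# Cycle 5: SUB finally writes back -- one cycle too late for AND.
regs["r10"] = sub_result

print(and_result)                  # what the naive pipeline computes
print(regs["r10"] & regs["r3"])    # what AND should have computed (2 & 7 = 2)
```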
Hazards:
Instead, the data computed by SUB must be passed back to the execute stage of the AND operation before it would normally be written back. The solution to this problem is a pair of bypass multiplexers. These multiplexers sit at the end of the decode stage, and their flopped outputs are the inputs to the ALU. Each multiplexer selects between: a register file read port (i.e. the output of the decode stage, as in the naive pipeline); the current register pipeline of the ALU (to bypass by one stage); and the current register pipeline of the access stage, which holds either a loaded value or a forwarded ALU result (this provides bypassing of two stages). Note that a loaded value cannot reach an instruction already in execute, as that would require the data to be passed backwards in time by one cycle; in that case a bubble must be inserted to stall the AND operation until the data is ready. Decode-stage logic compares the registers written by instructions in the execute and access stages of the pipeline to the registers read by the instruction in the decode stage, and causes the multiplexers to select the most recent data. These bypass multiplexers make it possible for the pipeline to execute simple instructions with just the latency of the ALU, the multiplexer, and a flip-flop. Without the multiplexers, the latency of writing and then reading the register file would have to be included in the latency of these instructions.
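A sketch of the decode-stage comparison that drives the bypass multiplexers, in Python; the dict-based representation of the pipeline latches is an assumption for illustration:

```python
def select_operand(src_reg, regfile_value, execute_latch, access_latch):
    """Choose the most recent value of src_reg for the ALU input.
    execute_latch/access_latch model the pipeline registers of the two
    downstream stages as dicts with 'dest' and 'result' keys, or None."""
    if execute_latch and execute_latch["dest"] == src_reg:
        return execute_latch["result"]   # bypass by one stage (newest value)
    if access_latch and access_latch["dest"] == src_reg:
        return access_latch["result"]    # bypass by two stages
    return regfile_value                 # no in-flight writer: use the read port

# SUB's result sits in the execute-stage latch when AND reaches decode:
execute_latch = {"dest": "r10", "result": 2}
print(select_operand("r10", 999, execute_latch, None))  # -> 2, not the stale 999
```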
Hazards:
Note that the data can only be passed forward in time - the data cannot be bypassed back to an earlier stage if it has not been processed yet. In the case above, the data is passed forward (by the time the AND is ready for the register in the ALU, the SUB has already computed it).
Hazards:
Solution B: Pipeline interlock. However, consider a load followed immediately by a dependent instruction, for example LD adr → r10 then AND r10,r3 → r11 (the notation is illustrative). The data read from the address adr is not present in the data cache until after the memory access stage of the LD instruction. By this time, the AND instruction is already through the ALU. To resolve this would require the data from memory to be passed backwards in time to the input to the ALU. This is not possible. The solution is to delay the AND instruction by one cycle. The data hazard is detected in the decode stage, and the fetch and decode stages are stalled - they are prevented from flopping their inputs and so stay in the same state for a cycle. The execute, access, and write-back stages downstream see an extra no-operation instruction (NOP) inserted between the LD and AND instructions.
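The interlock check itself is a simple comparison in the decode stage; a hedged Python sketch, with an illustrative instruction encoding:

```python
def load_use_stall(decode_instr, execute_instr):
    """True when the instruction in execute is a load whose destination is a
    source of the instruction in decode: the loaded value cannot be bypassed
    in time, so fetch and decode hold their state for one cycle while a
    bubble (NOP) is sent down to execute."""
    return (execute_instr is not None
            and execute_instr["op"] == "LD"
            and execute_instr["dest"] in decode_instr["srcs"])

ld_instr = {"op": "LD", "dest": "r10", "srcs": []}
and_instr = {"op": "AND", "dest": "r11", "srcs": ["r10", "r3"]}
print(load_use_stall(and_instr, ld_instr))   # True: insert a pipeline bubble
```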
Hazards:
This NOP is termed a pipeline bubble since it floats in the pipeline, like an air bubble in a water pipe, occupying resources but not producing useful results. The hardware to detect a data hazard and stall the pipeline until the hazard is cleared is called a pipeline interlock.
Hazards:
A pipeline interlock does not have to be used together with any data forwarding, however. The first example (SUB followed by AND) and the second example (LD followed by AND) can both be solved by stalling the first stage by three cycles until write-back is achieved and the data in the register file is correct, so that the correct register value is fetched by the AND's decode stage. This causes quite a performance hit, as the processor spends a lot of time processing nothing, but clock speeds can be increased as there is less forwarding logic to wait for.
Hazards:
This data hazard can be detected quite easily when the program's machine code is written by the compiler. The Stanford MIPS machine relied on the compiler to add the NOP instructions in this case, rather than having the circuitry to detect and (more taxingly) stall the first two pipeline stages. Hence the name MIPS: Microprocessor without Interlocked Pipeline Stages. It turned out that the extra NOP instructions added by the compiler expanded the program binaries enough that the instruction cache hit rate was reduced. The stall hardware, although expensive, was put back into later designs to improve instruction cache hit rate, at which point the acronym no longer made sense.
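The compiler-side alternative can be sketched as a single pass over the instruction stream; this is only an illustration of the idea, not the actual Stanford MIPS code scheduler:

```python
NOP = {"op": "NOP", "dest": None, "srcs": []}

def insert_load_nops(program):
    """Insert a NOP after every load whose result is consumed by the very
    next instruction, so an interlock-free pipeline never reads stale data."""
    out = []
    for i, instr in enumerate(program):
        out.append(instr)
        nxt = program[i + 1] if i + 1 < len(program) else None
        if instr["op"] == "LD" and nxt is not None and instr["dest"] in nxt["srcs"]:
            out.append(NOP)   # this padding is what hurt the I-cache hit rate
    return out
```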
Hazards:
Control hazards Control hazards are caused by conditional and unconditional branching. The classic RISC pipeline resolves branches in the decode stage, which means the branch resolution recurrence is two cycles long. There are three implications: The branch resolution recurrence goes through quite a bit of circuitry: the instruction cache read, register file read, branch condition compute (which involves a 32-bit compare on the MIPS CPUs), and the next instruction address multiplexer.
Hazards:
Because branch and jump targets are calculated in parallel to the register read, RISC ISAs typically do not have instructions that branch to a register+offset address. Jump to register is supported.
Hazards:
On any branch taken, the instruction immediately after the branch is always fetched from the instruction cache. If this instruction is ignored, there is a one cycle per taken branch IPC penalty, which is quite large. There are four schemes to solve this performance problem with branches: Predict Not Taken: Always fetch the instruction after the branch from the instruction cache, but only execute it if the branch is not taken. If the branch is not taken, the pipeline stays full. If the branch is taken, the instruction is flushed (marked as if it were a NOP), and one cycle's opportunity to finish an instruction is lost.
Hazards:
Branch Likely: Always fetch the instruction after the branch from the instruction cache, but only execute it if the branch was taken. The compiler can always fill the branch delay slot on such a branch, and since branches are more often taken than not, such branches have a smaller IPC penalty than the previous kind.
Hazards:
Branch Delay Slot: Always fetch the instruction after the branch from the instruction cache, and always execute it, even if the branch is taken. Instead of taking an IPC penalty for some fraction of branches either taken (perhaps 60%) or not taken (perhaps 40%), branch delay slots take an IPC penalty for those branches into which the compiler could not schedule the branch delay slot. The SPARC, MIPS, and MC88K designers designed a branch delay slot into their ISAs.
Hazards:
Branch Prediction: In parallel with fetching each instruction, guess if the instruction is a branch or jump, and if so, guess the target. On the cycle after a branch or jump, fetch the instruction at the guessed target. When the guess is wrong, flush the incorrectly fetched target. Delayed branches were controversial, first, because their semantics are complicated. A delayed branch specifies that the jump to a new location happens after the next instruction. That next instruction is the one unavoidably loaded by the instruction cache after the branch.
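The relative cost of the four schemes can be compared with simple expected-value arithmetic; the 60%/40% taken/not-taken split comes from the text above, while the delay-slot fill rate and predictor accuracy are assumed figures for illustration:

```python
taken, not_taken = 0.60, 0.40      # branch mix suggested in the text

penalty_not_taken_scheme = taken * 1      # flush on every taken branch
penalty_branch_likely = not_taken * 1     # flush on every not-taken branch

fill_rate = 0.70                   # assumed: slots the compiler can fill
penalty_delay_slot = (1 - fill_rate) * 1  # NOP only in unfilled slots

accuracy = 0.85                    # assumed predictor accuracy
penalty_prediction = (1 - accuracy) * 1   # flush only on mispredictions

for name, p in [("predict not taken", penalty_not_taken_scheme),
                ("branch likely", penalty_branch_likely),
                ("delay slot", penalty_delay_slot),
                ("prediction", penalty_prediction)]:
    print(f"{name}: {p:.2f} lost cycles per branch")
```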
Hazards:
Delayed branches have been criticized as a poor short-term choice in ISA design: Compilers typically have some difficulty finding logically independent instructions to place after the branch (the instruction after the branch is called the delay slot), and so must insert NOPs into the delay slots.
Superscalar processors, which fetch multiple instructions per cycle and must have some form of branch prediction, do not benefit from delayed branches. The Alpha ISA left out delayed branches, as it was intended for superscalar processors.
Hazards:
The most serious drawback to delayed branches is the additional control complexity they entail. If the delay slot instruction takes an exception, the processor has to be restarted on the branch, rather than that next instruction. Exceptions then have essentially two addresses, the exception address and the restart address, and generating and distinguishing between the two correctly in all cases has been a source of bugs for later designs.
Exceptions:
Suppose a 32-bit RISC processes an ADD instruction that adds two large numbers, and the result does not fit in 32 bits.
Exceptions:
The simplest solution, provided by most architectures, is wrapping arithmetic. Numbers greater than the maximum possible encoded value have their most significant bits chopped off until they fit. In the usual integer number system, 3000000000+3000000000=6000000000. With unsigned 32 bit wrapping arithmetic, 3000000000+3000000000=1705032704 (6000000000 mod 2^32). This may not seem terribly useful. The largest benefit of wrapping arithmetic is that every operation has a well defined result.
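The wrapped result quoted above can be checked directly, e.g. in Python:

```python
# Unsigned 32-bit wrapping addition keeps only the low 32 bits of the sum.
MASK32 = 2**32 - 1
a = b = 3_000_000_000
print((a + b) & MASK32)        # 1705032704
print((a + b) % 2**32)         # same thing: 6000000000 mod 2**32
```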
Exceptions:
But the programmer, especially if programming in a language supporting large integers (e.g. Lisp or Scheme), may not want wrapping arithmetic. Some architectures (e.g. MIPS) define special addition operations that branch to special locations on overflow, rather than wrapping the result. Software at the target location is responsible for fixing the problem. This special branch is called an exception. Exceptions differ from regular branches in that the target address is not specified by the instruction itself, and the branch decision is dependent on the outcome of the instruction.
Exceptions:
The most common kind of software-visible exception on one of the classic RISC machines is a TLB miss.
Exceptions:
Exceptions are different from branches and jumps, because those other control flow changes are resolved in the decode stage. Exceptions are resolved in the writeback stage. When an exception is detected, the following instructions (earlier in the pipeline) are marked as invalid, and as they flow to the end of the pipe their results are discarded. The program counter is set to the address of a special exception handler, and special registers are written with the exception location and cause.
Exceptions:
To make it easy (and fast) for the software to fix the problem and restart the program, the CPU must take a precise exception. A precise exception means that all instructions up to the excepting instruction have been executed, and the excepting instruction and everything afterwards have not been executed.
Exceptions:
To take precise exceptions, the CPU must commit changes to the software-visible state in program order. This in-order commit happens very naturally in the classic RISC pipeline. Most instructions write their results to the register file in the writeback stage, and so those writes automatically happen in program order. Store instructions, however, write their results to the Store Data Queue in the access stage. If the store instruction takes an exception, the Store Data Queue entry is invalidated so that it is not written to the cache data SRAM later.
Cache miss handling:
Occasionally, either the data or instruction cache does not contain a required datum or instruction. In these cases, the CPU must suspend operation until the cache can be filled with the necessary data, and then must resume execution. The problem of filling the cache with the required data (and potentially writing back to memory the evicted cache line) is not specific to the pipeline organization, and is not discussed here.
Cache miss handling:
There are two strategies to handle the suspend/resume problem. The first is a global stall signal. This signal, when activated, prevents instructions from advancing down the pipeline, generally by gating off the clock to the flip-flops at the start of each stage. The disadvantage of this strategy is that there are a large number of flip-flops, so the global stall signal takes a long time to propagate. Since the machine generally has to stall in the same cycle that it identifies the condition requiring the stall, the stall signal becomes a speed-limiting critical path.
Cache miss handling:
Another strategy to handle suspend/resume is to reuse the exception logic. The machine takes an exception on the offending instruction, and all further instructions are invalidated. When the cache has been filled with the necessary data, the instruction that caused the cache miss restarts. To expedite data cache miss handling, the instruction can be restarted so that its access cycle happens one cycle after the data cache is filled.
**Black match**
Black match:
In pyrotechnics, black match is a type of crude fuse, constructed of cotton string fibers intimately coated with a dried black powder slurry.
When black match is confined in a paper tube, called quick match or piped match, the flame front propagates much more quickly, many feet per second.
Quick match is often used in model rockets in the United Kingdom to ignite multiple engines/motors; it is, however, largely unavailable in the USA due to ambiguous explosives laws.
**COPIM**
COPIM:
COPIM (Community-led Open Publication Infrastructures for Monographs) is an international partnership and project funded by Research England and Arcadia Fund between 11/2019 and 04/2023. Following the principle of 'Scaling Small', the project has developed a set of proofs of concept for not-for-profit, community-owned open infrastructures to enable open access book publishing to prosper. COPIM has been named as a Supporting Action in UKRI's 2020 Open Access Review Consultation.
Work Packages:
In seven distinct Work Packages, the project has explored: how to scope and build support for an integration of open access books in libraries; how to build a collective of librarians, publishers and researchers invested in sustainable OA through a not-for-profit, community-governed OA book revenue management and information exchange platform; how to establish funding models that enable a transition of legacy publishers' existing business models to non-BPC OA; research on, and implementation of, robust governance models for not-for-profit, community-owned digital infrastructures such as those being developed in other work packages; channels of OA book discovery and dissemination, culminating in the development of an open-source OA book metadata creation and dissemination system and service; and how to establish more robust technical and legal ways to tackle the impediments to a more streamlined process of archiving and preservation of OA books.
Open Book Collective:
An output of COPIM's Revenue Work Package, the Open Book Collective is a nonprofit platform and community of OA book publishers, infrastructure providers, and libraries that are collaborating to bring about a future for OA book publishing free from inequitable Book Processing Charges. The Open Book Collective was registered as a CLG (company limited by guarantee) in the UK in 2022.
Opening the Future:
Opening the Future, a revenue model developed in COPIM's Business Models Work Package, is a collective subscription model through which subscribing libraries can get unlimited access to a selection of a chosen publisher's backlist, with perpetual access after three years. The generated membership revenue is used by the publisher solely to produce new open access monographs. The model is currently being piloted in collaboration with CEU Press and Liverpool University Press under the remit of COPIM.
Thoth:
Thoth is a nonprofit Open source metadata management and distribution platform developed by COPIM's Dissemination Work Package. Thoth is specifically tailored to tackle issues of getting Open access (OA) works into the book supply chain. It is being built with openness in mind: its source code is open, its data is exposed via open APIs and all of the generated metadata outputs are released under a CC0 license.
Thoth:
Thoth’s main goals are: To lower the entry barrier to good metadata management and practices for small/medium OA publishers who are currently struggling to produce their metadata to the various and different specifications that each distributing platform requires; To help distribute open access books, which have been systematically excluded from a book supply chain that was created primarily with closed, priced books in mind; To expose quality and first-hand metadata, using industry standards, publicly for anyone to consume.
**Police diving**
Police diving:
Police diving is a branch of professional diving carried out by police services. Police divers are usually professional police officers, and may either be employed full-time as divers or as general water police officers, or be volunteers who usually serve in other units but are called in if their diving services are required.
The duties carried out by police divers include rescue diving for underwater casualties, under the general classification of public safety diving, and forensic diving, which is search and recovery diving for evidence and bodies.
Scope:
Police diving includes forensic diving – the recovery of evidence from underwater – and public safety diving. Police diving work may include: underwater searches; evidence recovery; submerged body recoveries (from accidents, suicides, or crimes); anti-narcotics operations (inspecting ships' hulls, etc.); anti-terrorism operations (explosive ordnance disposal, defense against swimmer incursions); search and rescue operations; and other maritime law enforcement. Forensic diving Forensic diving is professional diving work related to the gathering of evidence for use in investigations and legal cases. Police divers may be called in to investigate and recover evidence in plane crashes, submerged vehicles, boating accidents, suicides, homicides, swimming fatalities and other incidents and crimes. Forensic divers may face a number of environmental hazards from underwater structures and infrastructure, debris, industrial pollution, medical waste, organic hazards from various sources, shifting currents, poor visibility, hypothermia and hyperthermia, for which special equipment may be required to mitigate the risk. Other specialised equipment could include locating devices, access equipment, and transportation. Underwater recovery efforts may include the use of trained dogs, which can detect human remains underwater at depths as great as 150 metres (490 ft) in ideal conditions. Qualifications and training for forensic divers are additional to departmental physical and psychological requirements. Training may include instruction in stress management, media relations and teamwork. Submerged evidence can have similar forensic value to evidence found above the water. Items recovered from immersion have been used as evidence in many cases where they have provided identifiable blood traces, fingerprints, hair and fibers, and other trace evidence. There are advantages to having a regional underwater investigation team available, but doing it well requires planning, administration, an adequate budget and due consideration of occupational health and safety issues. The working environment for underwater investigation includes a range of contaminated and inhospitable sites. Depending on the location and local procedural requirements, the teams may contain volunteers, firefighting and rescue personnel, or law enforcement personnel, and in some cases a collaboration of all of these. It is preferable that all members of an agency dive team be full-time, trained members of that agency for reasons of liability, training, policy and procedures. In some jurisdictions the required minimum certification is a recreational certification; in others an occupational qualification and registration may be stipulated. All members should be medically fit to dive, properly trained and competent to perform the tasks they may be assigned, and trained in matters of crime scene documentation and evidence handling and processing in an underwater environment.
Scope:
Public safety diving "Public safety diving" is a term coined by Steven J Linton in the 1970s to describe underwater rescue, underwater recovery and underwater investigation conducted by divers working for or under the authority of municipal, state or federal agencies. These divers are typically members of police departments, sheriff's offices, fire rescue agencies, search and rescue teams or providers of emergency medical services. Public safety divers (PSDs) can be paid by the agencies employing them, or be non-paid volunteers.
Scope:
Conditions Due to the conditions in which accidents may happen, or where criminals may choose to dispose of evidence or their victims, police divers might need to dive under hostile environmental conditions, which can include: canals and rivers with strong flow; harbours and shipping lanes; intake pipes, sewers, culverts, and other enclosed spaces; bodies of water requiring high-angle rope access; water towers with potable water; sludge, mud, debris or thick vegetation; under ice and in frigid water; at night or with low visibility; in rough seas and weather; and in water contaminated with toxins or parasites. As these dives may have to be done at short notice, department diving supervisors should be aware of the conditions and local hazards of the likely sites within their areas of operation, so that appropriate measures can be available when their team is called out.
Qualifications and training:
For professional police diving, the diver would in most cases be expected to be trained as a professional public safety diver, with specialised training in handling underwater forensic work. All the principles of land-based law enforcement work for preserving and collecting evidence apply underwater. More specialised training, depending on local requirements, may include airborne deployment of divers and gear, climbing and rappelling, cold water and ice diving, firearms training, night diving, operation of a recompression chamber, search management, surface-supplied air diving and diving voice communication systems, hazmat diving, and penetration diving.
Qualifications and training:
United States In the US, diving training agencies such as Emergency Response Diving International (ERDI), the National Academy of Police Diving (NAPD), Team Lifeguard Systems, and Underwater Criminal Investigators have courses to train divers in public safety diving. UCI (Underwater Criminal Investigators) was founded in 1987 to provide professional underwater criminal investigations training to the public safety diving community. The National Academy of Police Diving (NAPD) was formed in 1988 by a group of police divers to create a national standard for police and public safety diver training and certification in the US. It has helped provide training for police officers, fire departments, military divers, and environmental investigators in North America, Central America, Russia, Australia, and the Caribbean.
Qualifications and training:
South Africa In South Africa, public safety diving and police diving fall under the Diving Regulations of the Occupational Health and Safety Act, 1993, and such divers are required to be registered as commercial divers by the Department of Employment and Labour. Their basic diver training must be done through registered commercial diving schools. As they are professional police officers, they are also trained in police procedures.
Qualifications and training:
Australia The Australian Diver Accreditation Scheme (ADAS) includes accreditation for police diver training.
Equipment:
Some items of diving equipment have been designed or modified specifically for public safety diving, such as buoyancy compensator harnesses modified for helicopter lifts and swiftwater work, and for chemical resistance and HAZMAT conditions. Most equipment is standard scuba and surface-supplied diving equipment suitable for the conditions in which it is to be used.
History:
In Britain, in the early years of the British Sub-Aqua Club (BSAC), police often called on BSAC branches to dive to find submerged bodies, before the police started their own diving branches.
**Shiplap**
Shiplap:
Shiplap is a type of wooden board used commonly as exterior siding in the construction of residences, barns, sheds, and outbuildings.
Exterior walls:
Shiplap is either rough-sawn 25 mm (1 in) or milled 19 mm (3⁄4 in) pine or similarly inexpensive wood between 76 and 254 mm (3 and 10 in) wide with a 9.5–12.7 mm (3⁄8–1⁄2 in) rabbet on opposite sides of each edge. The rabbet allows the boards to overlap in this area. The profile of each board partially overlaps that of the board next to it creating a channel that gives shadow line effects, provides excellent weather protection and allows for dimensional movement.
Exterior walls:
Useful for its strength as a supporting member, and its ability to form a relatively tight seal when lapped, shiplap is usually used as a type of siding for buildings that do not require extensive maintenance and must withstand cold and aggressive climates. Rough-sawn shiplap is attached vertically in post and beam construction, usually with 51–65 mm (6d–8d) common nails, while milled versions, providing a tighter seal, are more commonly placed horizontally, more suited to two-by-four frame construction.
Exterior walls:
Small doors and shutters such as those found in barns and sheds are often constructed of shiplap cut directly from the walls, with only thin members framing or crossing the back for support. Shiplap is also used indoors for the rough or rustic look that it creates when used as paneling or a covering for a wall or ceiling. Shiplap is often used to describe any rabbeted siding material that overlaps in a similar fashion.
Interior design:
In interior design, shiplap is a style of wooden wall siding characterized by long planks, normally painted white, that are mounted horizontally with a slight gap between them in a manner that evokes exterior shiplap walls. A disadvantage of the style is that the gaps are prone to accumulating dust. Installing shiplap horizontally in a room can help carry the eye around the space, making it feel larger. Installing it vertically helps emphasize the height of the room, making it feel taller. Rectangular shiplap pieces can be placed in a staggered zig-zag layout to add texture and enhance the size of the room. Shiplap can also be installed on the ceiling, to draw the eye upwards.
**Polyurea**
Polyurea:
Polyurea is a type of elastomer that is derived from the reaction product of an isocyanate component and an amine component. The isocyanate can be aromatic or aliphatic in nature. It can be a monomer, a polymer, or any variant reaction product of isocyanates; it can also be a quasi-prepolymer or a prepolymer. The prepolymer, or quasi-prepolymer, can be made of an amine-terminated polymer resin or a hydroxyl-terminated polymer resin. The resin blend may be made up of amine-terminated polymer resins and/or amine-terminated chain extenders. The amine-terminated polymer resins do not have any intentional hydroxyl moieties; any hydroxyls are the result of incomplete conversion to the amine-terminated polymer resins. The resin blend may also contain additives or non-primary components. These additives may contain hydroxyls, such as pre-dispersed pigments in a polyol carrier. Normally, the resin blend does not contain a catalyst.
Polymer structure:
The word polyurea is derived from the Greek πολυ- (poly-), meaning "many", and οὖρον (oûron), meaning "urine" (referring to the substance urea, found in urine). Urea or carbamide is an organic compound with the chemical formula (NH2)2CO. The molecule has two amine groups (–NH2) joined by a carbonyl functional group (C=O). In a polyurea, alternating monomer units of isocyanates and amines react with each other to form urea linkages. Ureas can also be formed from the reaction of isocyanates and water, which forms a carbamic acid intermediate. This acid quickly decomposes by splitting off carbon dioxide and leaving behind an amine. This amine then reacts with another isocyanate group to form the polyurea linkage. This two-step reaction is used to make what is commonly but improperly called polyurethane foam. The carbon dioxide that is liberated in this reaction is the primary blowing (foaming) agent, especially in many polyurethane foams, which more precisely should be called polyurethane/urea foams.
Uses:
Polyurea and polyurethane are copolymers used in the manufacture of spandex, which was invented in 1959.
Uses:
Polyurea was originally developed for automotive applications in the 1980s, but other applications such as protecting tabletop edges followed. Its fast reactivity and relative moisture insensitivity made it useful for coatings on large-surface-area projects, such as secondary containment, manhole and tunnel coatings, tank liners, and truck bed liners. Excellent adhesion to concrete and steel is obtained with the proper primer and surface treatment. Polyureas can also be used for spray molding and armor. Some polyureas reach tensile strengths of 40 MPa (6,000 psi) and over 500% elongation, making them tough coatings. The quick cure time allows many coats to be built up quickly.
Uses:
In 2014, a polyurea elastomer-based material was shown to be self-healing, melding together after being cut in half. The material also includes inexpensive, commercially available compounds. The elastomer molecules were tweaked, making the bonds between them longer. The resulting molecules are easier to pull apart from one another and better able to rebond at room temperature with almost the same strength. The rebonding can be repeated. Elastic, self-healing paints and other coatings recently took a step closer to common use, thanks to research conducted at the University of Illinois. Scientists there have used "off-the-shelf" components to create a polymer that melds back together after being cut in half, without the addition of other chemicals. Polyurea has become a preferred long-term solution for narrowboats. The traditional coating with bitumen, known as "blacking", is being replaced with polyurea coatings. The clearest advantage is that it is not necessary to reapply a coat every 3–4 years; it is thought that polyurea coatings last 25–30 years. Commercial trademarks for polyurea include Line-X, GLS 100R, and Pentens SPU-1000, to name a few. There are multiple possible polyurea formulations. The Polyurea Development Association is a trade association that represents the interests of polyurea coating manufacturers.
**Oxidative coupling of methane**
Oxidative coupling of methane:
The oxidative coupling of methane (OCM) is a chemical reaction, studied in the 1980s, as a potential route for the direct conversion of natural gas, primarily consisting of methane, into value-added chemicals. Although the reaction would have strong economics if practicable, no effective catalysts are known, and thermodynamic arguments suggest none can exist.
Ethylene production:
The principal desired product of OCM is ethylene, the world's largest commodity chemical and the chemical industry's fundamental building block. While converting methane to ethylene would offer enormous economic benefits, it is a major scientific challenge. Thirty years of research failed to produce a commercial OCM catalyst, preventing this process from commercial applications.
Ethylene derivatives are found in food packaging, eyeglasses, cars, medical devices, lubricants, engine coolants and liquid crystal displays. Ethylene production by steam cracking consumes large amounts of energy and uses oil and natural gas fractions such as naphtha and ethane.
Ethylene production:
The oxidative coupling of methane to ethylene is written below: 2CH4 + O2 → C2H4 + 2H2O. The reaction is exothermic (∆H = -280 kJ/mol) and occurs at high temperatures (750–950 ˚C). In the reaction, methane (CH4) is activated heterogeneously on the catalyst surface, forming methyl free radicals, which then couple in the gas phase to form ethane (C2H6). The ethane subsequently undergoes dehydrogenation to form ethylene (C2H4). The yield of the desired C2 products is reduced by non-selective reactions of methyl radicals with the surface and with oxygen in the gas phase, which produce (undesirable) carbon monoxide and carbon dioxide.
Catalysis:
Direct conversion of methane into other useful products is one of the most challenging subjects studied in heterogeneous catalysis. Methane activation is difficult because of its thermodynamic stability, with a noble-gas-like electronic configuration. The tetrahedral arrangement of strong C–H bonds (435 kJ/mol) offers no functional group, magnetic moments or polar distributions to undergo chemical attack. This makes methane less reactive than nearly all of its conversion products, limiting efficient utilization of natural gas, the world's most abundant petrochemical resource.
Catalysis:
The economic promise of OCM has attracted significant industrial interest. In the 1980s and 1990s multiple research efforts were pursued by academic investigators and petrochemical companies. Hundreds of catalysts have been tested, and several promising candidates were extensively studied. Researchers were unable to achieve the required chemoselectivity for economic operation. Instead of producing ethylene, the majority of methane was non-selectively oxidized to carbon dioxide.
Catalysis:
The lack of selectivity was related to the poor C-H activation of known catalysts, requiring high reaction temperatures (750–950 ˚C) to activate the C-H bond. This high reaction temperature establishes a secondary gas-phase reaction mechanism pathway, whereby the desired coupling of methyl radicals to C2 products (leading to ethylene) strongly competes with COx side reactions. The high temperature also presents a challenge for the reaction engineering. Among the process engineering challenges are the requirements for expensive metallurgy, lack of industry experience with high-temperature catalytic processes and the potential need for new reactor designs to manage heat transfer efficiently. Labinger postulated an inherent limit to OCM selectivity, concluding that "expecting substantial improvements in the OCM performance might not be wise". Labinger's argument, later demonstrated experimentally by Mazanec et al., is based on the mechanism of methane activation, which is a radical mechanism, forming H and CH3 radicals by the homolytic cleavage of the C-H bond. Ethylene and ethane, the proposed products, have C-H bonds of similar strength. Thus, any catalyst that can activate methane can also activate the products. The yield of ethylene (and/or ethane) is limited by the relative rates of the methane and ethylene reactions, and these rates are very similar. Reactions of the products lead to higher homologues, and eventually to aromatics and coke. The same limitation applies to direct pyrolysis of methane, which is also a radical process. Nevertheless, some recent work has shown that the mechanism of the OCM could be initiated by a heterolytic cleavage of the C-H bond on magnesium oxide in the presence of an O2 atmosphere. Eventually, the inability to discover a selective catalyst led to a gradual loss of interest in OCM. Beginning in the mid-1990s, research activity in this area began to decline significantly, as evidenced by the decreasing number of patents filed and peer-reviewed publications. The research company Siluria attempted to develop a commercially viable OCM process, but did not succeed. The company sold its OCM technology to McDermott in 2019.
**DirectX plugin**
DirectX plugin:
In computer music and professional audio creation, a DirectX plugin is a software processing component that can be loaded into host applications to provide real-time processing, audio effects and mixing, or to act as a virtual synthesizer. DirectX plugins allow the replacement of traditional recording studio hardware and rack units used in professional studios with software-based counterparts that can be connected together in a modular way. This allows host manufacturers to focus on the usability and efficiency of their products while specialized manufacturers focus on the digital signal processing aspect. For example, there are plugins for effects boxes, such as reverbs and delays; effects pedals, like guitar distortion, flange and chorus; and mixing and mastering processors such as compressors, limiters, exciters, sub-bass enhancers, stereo imagers and many more.
Overview:
Similar to Virtual Studio Technology and, later, Audio Units in Apple Mac OS X, DirectX plugins have an open standard architecture for connecting audio synthesizers and effect plugins to audio editors and hard-disk recording systems. DirectX plugins are based on Microsoft's Component Object Model (COM), which allows plugins to be recognised and used by other applications via common interfaces. Plugins connect to applications and other plugins with pins, via which they can pass and process buffered streams of audio (or video) data. Architecturally, DirectX plugins are DirectShow filters.
Types and compatibility:
DirectX plugins are of two types: DirectX effect plugins (DX) and DirectX instrument plugins (DXi). Effect plugins are used to generate, process, receive, or otherwise manipulate streams of audio. Instrument plugins are MIDI-controllable DirectX plugins, generally used to synthesize sound or play back sampled audio using virtual synthesizers, samplers or drum machines. DirectX effect plugins were developed by Microsoft as part of DirectShow. DirectX instruments were developed by Cakewalk in co-operation with Microsoft and are available on Windows. Several wrapper plugins are available so that DirectX plugins can be used in applications which only support VST and vice versa. Chainer plugins are also available, which allow chaining multiple plugins together.
Programmability:
DirectX plugins can be developed in C++ using Microsoft's DirectX SDK, Sony's Audio Plug-In Development Kit or Cakewalk's DirectX Wizard. There is also a Delphi SDK available.
DirectX plugin hosts:
ACID Pro (version 3.0 or later) Adobe Audition (Formerly Cool Edit 2000 and Cool Edit Pro 1.0, 2.0) Cakewalk Sonar (version 2.0 or later) MAGIX Samplitude REAPER Sony Vegas Sound Forge Steinberg Wavelab Steinberg Nuendo Steinberg Cubase OpenMPT
Future:
DirectX plugins have been superseded by DMO-based signal processing filters and, more recently, by Media Foundation Transforms.
**SmartVision**
SmartVision:
Technicolor SmartVision is an update of the SmartVision service platform software, intended for use as a converged media service for set-top boxes, mobile TV, and PCs to access content in a time-delayed, on-demand, or linear manner. The content is network-agnostic but adapted to different terminals to enable a fully convergent user experience. The latest version deployed is SmartVision 2.6.3, released at the end of March 2008. Thomson SmartVision has been adopted by several tier-1 telecom operators, most notably France Telecom. With this deployment, SmartVision is the most widely deployed commercially available IPTV platform.
Overview:
Thomson SmartVision includes all the features of Thales SmartVision, including support for on-demand and live video, video recording and time shifting, and an interactive program guide with integrated search and scheduled recording. SmartVision allows operators to create and manage TV channel bouquets and pay-per-view services. The end-user can select a program or a channel through an electronic program guide or a mosaic and watch multimedia content on various devices like TV sets, PC, mobile phone and portable Video Players. SmartVision incorporates facilities to encrypt videos and interfaces with all significant Conditional Access Systems to manage users’ licenses for live and pay-per-view services.
Overview:
SmartVision can be interfaced with third-party platforms to propose interactive services such as weather forecasts, news, games, and other video applications. Viewers can interact with programs, select different views, participate in game shows, vote, or request more information on a commercial. SmartVision delivers a user interface that users can easily customize to match specific look-and-feel, branding, business models, or specific multimedia services to the homes.
Overview:
SmartVision is designed to launch integrated TriplePlay services. It can be deployed with leading VoIP platforms such as Thomson's Cirpack Multi node-B. Telephony is integrated into the TV portal to manage voice features such as musical ring back tones or call forwards with a user-friendly interface. Caller ID is shown on TV, and live programs automatically pause to let users answer calls. Voice mail and MMS are now accessed via the TV set.
Overview:
Thomson has also joined France Telecom and Sagem as a founding member of a joint venture named Soft at Home, aimed at standardizing the digital household.
**Body bag**
Body bag:
A body bag, also known as a cadaver pouch or human remains pouch (HRP), is a non-porous bag designed to contain a human body, used for the storage and transportation of shrouded corpses.
History:
In the United States, the apparent first documented bag for the purpose of transporting bodies was patented under the name "Improvement in Receptacles for Dead Bodies." The patent, United States Patent No. 39,291, was filed during the Civil War by Dr. Thomas Holmes. The purpose of the bag, as stated in the patent application dated July 21, 1863, was "... to facilitate the carrying of badly-wounded dead bodies hurriedly away that could not otherwise be quickly removed for the want of proper conveyances, or difficulty to procure boxes or coffins for removing the dead, as the boxes or coffins cannot be so easily transported or handled on the field of battle." He said that he had "invented a new and useful Elastic and Deodorizing Receptacle."
Uses:
Body bags can also be used for the storage of corpses within morgues. Before purpose-made body bags were available, cotton mattress covers were sometimes used, particularly in combat zones during the Second World War. If these were not available, other materials were used, such as bed sheets, blankets, shelter halves, ponchos, sleeping bag covers, tablecloths, curtains, parachute canopies, tarpaulins, or discarded canvas; being "sealed in a blanket" was the slang for this. However, the subsequent rubber (and now plastic) body bag designs are much superior, not least because they prevent leakage of body fluids, which often occurs after death. The dimensions of a body bag are generally around 36 inches by 90 inches (91 cm by 229 cm). Most have some form of carrying handles, usually webbing, at each corner and along the edges.
Uses:
In modern warfare, body bags have been used to contain the bodies of dead soldiers. Disaster agencies typically have reserves of body bags, both for anticipated wars and natural disasters. During the Cold War, vast reserves of body bags were built up in anticipation of millions of fatalities from nuclear war. This was the subject of Adrian Mitchell's protest poem "Fifteen Million Plastic Bags".
Uses:
Body bags are sometimes portrayed in films and television as being made of a heavy black plastic. Lightweight white body bags have since become popular because it is much easier to spot a piece of evidence that may have been jostled from the body in transit on a white background than on a black background. Even so, black body bags are still in general use. Other typical colors include orange, blue, or gray. Body bags used in the Vietnam War were heavy-duty black rubberized fabric. Regardless of their color, body bags are made of thick plastic and have a full-length zipper on them. Sometimes the zipper runs straight down the middle. Alternatively, the path of the zipper may be J-shaped or D-shaped. Depending on the design, there are sometimes handles (two on each side) to facilitate lifting. It is possible to write information on the plastic surface of a body bag using a marker pen, and this often happens—either in situ (particularly when many bodies are being collected) or at the mortuary, before being stored in refrigerated cabinets. Alternatively, some designs of body bags have transparent label pockets as an integral part of the design, into which a name-card can be inserted. In any case, a conventional toe tag can easily be tied to one of the lifting handles if required or used to bind two zippers to show a lack of tampering. Body bags are not designed to be washed and re-used. Aside from the obvious hygiene concerns, re-use of body bags could easily contaminate evidence in the case of a suspicious death. As a result, body bags are routinely discarded and incinerated after one use.
Uses:
Although body bags are most often used for the transport of human remains from their place of discovery to a funeral home or mortuary, they can also be used for temporary burials such as in a combat zone. In such situations, proper funerals are impossible because of imminent enemy attack. This was the situation during the Falklands War of 1982, during which British dead were placed in gray plastic body bags and then laid in mass graves. Some months after the conflict ended, all remains were exhumed from their temporary graves to receive a conventional funeral service with full military honors.
Uses:
During the Iraq and Afghanistan wars in the mid-2000s the military began using body bags as a rapid means of delivering ammunition, supplies, batteries, rations, water cans, and other items to small units in the field. The body bags with less than 100 pounds of supplies were loaded in helicopters. Upon landing they were quickly shoved out the doors and troops on the ground grabbed the carrying handles and dragged them to cover as the helicopters departed.
Uses:
The term "body bag" is sometimes used for fashion or other bags worn on the body (sling body bag or across body bag) and this sense has no connection with either of the two above senses.
White body bags were used to differentiate between the charred mannequins and the eight teenage victims of the Haunted Castle fire at Six Flags Great Adventure in 1984.
**Titanium tetraiodide**
Titanium tetraiodide:
Titanium tetraiodide is an inorganic compound with the formula TiI4. It is a black volatile solid, first reported by Rudolph Weber in 1863. It is an intermediate in the van Arkel–de Boer process for the purification of titanium.
Physical properties:
TiI4 is a rare molecular binary metal iodide, consisting of isolated molecules of tetrahedral Ti(IV) centers. The Ti-I distances are 261 pm. Reflecting its molecular character, TiI4 can be distilled without decomposition at one atmosphere; this property is the basis of its use in the van Arkel–de Boer process. The difference in melting point between TiCl4 (m.p. -24 °C) and TiI4 (m.p. 150 °C) is comparable to the difference between the melting points of CCl4 (m.p. -23 °C) and CI4 (m.p. 168 °C), reflecting the stronger intermolecular van der Waals bonding in the iodides.
Physical properties:
Two polymorphs of TiI4 exist, one of which is highly soluble in organic solvents. In the less soluble cubic form, the Ti-I distances are 261 pm.
Production:
Three methods are well known: 1) From the elements, typically using a tube furnace at 425 °C: Ti + 2 I2 → TiI4. This reaction can be reversed to produce highly pure films of Ti metal. 2) Exchange reaction from titanium tetrachloride and HI: TiCl4 + 4 HI → TiI4 + 4 HCl. 3) Oxide-iodide exchange from aluminium iodide: 3 TiO2 + 4 AlI3 → 3 TiI4 + 2 Al2O3.
Reactions:
Like TiCl4 and TiBr4, TiI4 forms adducts with Lewis bases, and it can also be reduced. When the reduction is conducted in the presence of Ti metal, one obtains polymeric Ti(III) and Ti(II) derivatives such as CsTi2I7 and the chain CsTiI3, respectively. TiI4 exhibits extensive reactivity toward alkenes and alkynes, resulting in organoiodine derivatives. It also effects pinacol couplings and other C-C bond-forming reactions.
**International Wittgenstein Symposium**
International Wittgenstein Symposium:
The International Wittgenstein Symposium is an international conference dedicated to the work of Ludwig Wittgenstein and its relationship to philosophy and science. It is sponsored by the Austrian Ludwig Wittgenstein Society.
History:
In 1976, the International Wittgenstein Symposium was founded by Elisabeth Leinfellner, Werner Leinfellner, Rudolf Haller, Paul Weingartner, and Adolf Hübner in Kirchberg am Wechsel, Lower Austria. The location was chosen because in the 1920s, Ludwig Wittgenstein taught at elementary schools in the area surrounding Kirchberg am Wechsel. On the 24th to the 25th of April, 1976 (just prior to the 25th anniversary of Wittgenstein's death), the first conference took place. Only four of the five founders gave talks on his philosophical work at the first meeting, but at the second, 120 speakers attended from around the world.
Philosophical topics:
The general topic of each symposium centers around the philosophy and philosophy of science of Wittgenstein, but the specific topics change from year to year. For example, the topic of the second International Wittgenstein Symposium was "Wittgenstein and his impact on contemporary thought" and the topic of the third symposium was "Wittgenstein, the Vienna Circle, and critical rationalism (including a seminar on Popper's The Open Society and Its Enemies)." A survey of topics is available from the site of the Austrian Ludwig Wittgenstein Society.
Proceedings:
Starting with the second symposium, the papers accepted for presentation have been published in edited proceedings. From 1978 to 2005 the proceedings of the International Wittgenstein Symposium were published by Hölder-Pichler-Tempsky and then by ontos verlag; the new series of ALWS publications is now published by De Gruyter. The proceedings published by ontos verlag are available Open Access online at a site prepared by the Wittgenstein Archives at the University of Bergen. The Wittgenstein Archives have also prepared a site that contains an Open Access selection of symposium papers from the period 2001-10.
Sponsorship:
The symposia are sponsored by the Austrian Ludwig Wittgenstein Society and they are largely funded by the government of Lower Austria and the Austrian Federal Ministry for Science and Research.
**Radiocarbon calibration**
Radiocarbon calibration:
Radiocarbon dating measurements produce ages in "radiocarbon years", which must be converted to calendar ages by a process called calibration. Calibration is needed because the atmospheric 14C:12C ratio, which is a key element in calculating radiocarbon ages, has not been constant historically. Willard Libby, the inventor of radiocarbon dating, pointed out as early as 1955 the possibility that the ratio might have varied over time. Discrepancies began to be noted between measured ages and known historical dates for artefacts, and it became clear that a correction would need to be applied to radiocarbon ages to obtain calendar dates. Uncalibrated dates may be stated as "radiocarbon years ago", abbreviated "14Cya".
Radiocarbon calibration:
The term Before Present (BP) is established for reporting dates derived from radiocarbon analysis, where "present" is 1950. Uncalibrated dates are stated as "uncal BP", and calibrated (corrected) dates as "cal BP". Used alone, the term BP is ambiguous.
Construction of a curve:
To produce a curve that can be used to relate calendar years to radiocarbon years, a sequence of securely dated samples is needed which can be tested to determine their radiocarbon age. Dendrochronology, or the study of tree rings, led to the first such sequence: tree rings from individual pieces of wood show characteristic sequences of rings that vary in thickness due to environmental factors such as the amount of rainfall in a given year. Those factors affect all trees in an area, so examining tree-ring sequences from old wood allows the identification of overlapping sequences. In that way, an uninterrupted sequence of tree rings can be extended far into the past. The first such published sequence, based on bristlecone pine tree rings, was created in the 1960s by Wesley Ferguson. Hans Suess used the data to publish the first calibration curve for radiocarbon dating in 1967. The curve showed two types of variation from the straight line: a long-term fluctuation with a period of about 9,000 years, and a shorter-term variation, often referred to as "wiggles", with a period of decades. Suess said that he drew the line showing the wiggles by "cosmic schwung", or freehand. It was unclear for some time whether the wiggles were real or not, but they are now well-established. The calibration method also assumes that the temporal variation in 14C level is global, such that a small number of samples from a specific year are sufficient for calibration; this was experimentally verified in the 1980s. Over the next 30 years, many calibration curves were published using a variety of methods and statistical approaches. They were superseded by the INTCAL series of curves, beginning with INTCAL98, published in 1998, and updated in 2004, 2009, 2013 and 2020. The improvements to these curves are based on new data gathered from tree rings, varves, coral, and other studies. Significant additions to the datasets used for INTCAL13 include non-varved marine foraminifera data and U-Th dated speleothems. The INTCAL13 data includes separate curves for the Northern and Southern Hemispheres, as they differ systematically because of the hemisphere effect; there is also a separate marine calibration curve. The calibration curve for the southern hemisphere is known as SHCal, as opposed to IntCal for the northern hemisphere; the most recent version was published in 2020. There is also a different curve for the period after 1955, known as the bomb calibration curve, because atomic bomb testing created higher levels of radiocarbon that vary with latitude.
Methods:
Probabilistic Modern methods of calibration take the original normal distribution of radiocarbon age ranges and use it to generate a histogram showing the relative probabilities for calendar ages. This has to be done by numerical methods rather than by a formula because the calibration curve is not describable as a formula. Programs to perform these calculations include OxCal and CALIB. These can be accessed online; they allow the user to enter a date range at one standard deviation confidence for the radiocarbon ages, select a calibration curve, and produce probabilistic output both as tabular data and in graphical form. In an example CALIB output, the input data is 1270 BP, with a standard deviation of 10 radiocarbon years, calibrated against the northern hemisphere INTCAL13 curve; the vertical width of the curve corresponds to the width of the standard error in the calibration curve at each point. The input data, in radiocarbon years, is a normal distribution: the central darker part of the normal curve is the range within one standard deviation of the mean, and the lighter grey area shows the range within two standard deviations of the mean. The output, along the calendar-year axis, is a trimodal graph, with peaks at around 710 AD, 740 AD, and 760 AD; again, the 1σ confidence ranges are in dark grey and the 2σ confidence ranges in light grey.
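A minimal numerical sketch of this procedure in Python, with a toy linear calibration curve standing in for real INTCAL data (OxCal and CALIB are far more refined, and real curves are not linear):

```python
import math

def calibrate(rc_age, rc_sigma, curve):
    """curve maps calendar year BP -> (radiocarbon age BP, curve error).
    Returns a normalized probability for each tabulated calendar year."""
    probs = {}
    for cal_year, (mu, sigma) in curve.items():
        var = rc_sigma**2 + sigma**2          # combine both error terms
        probs[cal_year] = math.exp(-((rc_age - mu) ** 2) / (2 * var))
    total = sum(probs.values())
    return {year: p / total for year, p in probs.items()}

# Toy curve tabulated every 5 calendar years (values are made up):
toy_curve = {1150 + step: (1200 + 0.8 * step, 8.0) for step in range(0, 200, 5)}
posterior = calibrate(1270, 10, toy_curve)
print(max(posterior, key=posterior.get))      # most probable calendar year BP
```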
Methods:
Intercept Before the widespread availability of personal computers made probabilistic calibration practical, a simpler "intercept" method was used. Once testing has produced a sample age in radiocarbon years with an associated error range of plus or minus one standard deviation (usually written as ±σ), the calibration curve can be used to derive a range of calendar ages for the sample. The calibration curve itself has an associated error term, which can be seen on the graph labelled "Calibration error and measurement error". This graph shows INTCAL13 data for the calendar years 3100 BP to 3500 BP. The solid line is the INTCAL13 calibration curve, and the dotted lines show the standard error range; as with the sample error, this is one standard deviation. Simply reading off the range of radiocarbon years against the dotted lines, as is shown for sample t2, in red, gives too large a range of calendar years. The correct error term is the root of the sum of the squares of the two errors: σtotal = √(σsample² + σcalib²). Example t1, in green on the graph, shows this procedure: the resulting error term, σtotal, is used for the range, and this range is read directly from the graph itself without reference to the lines showing the calibration error.
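For concreteness, the quadrature step looks like this, with hypothetical numbers standing in for the two error terms:

```python
import math

sigma_sample = 10.0  # sample measurement error, in 14C years (hypothetical)
sigma_calib = 12.0   # calibration-curve error at that point (hypothetical)

# Root of the sum of the squares of the two errors:
sigma_total = math.sqrt(sigma_sample**2 + sigma_calib**2)
print(round(sigma_total, 1))  # 15.6
```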
Methods:
Variations in the calibration curve can lead to very different calendar year ranges for samples with different radiocarbon ages. The graph to the right shows the part of the INTCAL13 calibration curve from 1000 BP to 1400 BP, a range in which there are significant departures from a linear relationship between radiocarbon age and calendar age. In places where the calibration curve is steep and does not change direction, as in example t1 in blue on the graph, the resulting calendar year range is quite narrow. Where the curve varies significantly both up and down, a single radiocarbon date range may produce two or more separate calendar year ranges. Example t2, in red on the graph, shows this situation: a radiocarbon age range of about 1260 BP to 1280 BP converts to three separate ranges between about 1190 BP and 1260 BP. A third possibility is that the curve is flat for some range of calendar dates; in this case, illustrated by t3, in green on the graph, a range of about 30 radiocarbon years, from 1180 BP to 1210 BP, results in a calendar year range of about a century, from 1080 BP to 1180 BP. The intercept method is based solely on the position of the intercepts on the graph. These are taken to be the boundaries of the 68% confidence range, or one standard deviation. However, this method does not make use of the assumption that the original radiocarbon age range is a normally distributed variable: not all dates in the radiocarbon age range are equally likely, and so not all dates in the resulting calendar year range are equally likely. Deriving a calendar year range by means of intercepts does not take this into account.
Methods:
Wiggle-matching For a set of samples with a known sequence and separation in time, such as a sequence of tree rings, the samples' radiocarbon ages trace out a small section of curve. This section can then be matched against the calibration curve by identifying where, in the range suggested by the radiocarbon dates, the wiggles in the calibration curve best match the wiggles in the curve of sample dates. This "wiggle-matching" technique can lead to more precise dating than is possible with individual radiocarbon dates. Since the data points on the calibration curve are five years or more apart, and since at least five points are required for a match, there must be at least a 25-year span of tree-ring (or similar) data for this match to be possible. Wiggle-matching can be used in places where there is a plateau on the calibration curve, and hence can provide a much more accurate date than the intercept or probability methods are able to produce; a sketch of the matching step follows below. The technique is not restricted to tree rings; for example, a stratified tephra sequence in New Zealand, known to predate human colonization of the islands, has been dated to 1314 AD ± 12 years by wiggle-matching.
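The following is a minimal sketch of that matching step, assuming the calibration curve and the ring series arrive as NumPy arrays sorted by ascending calendar age and scoring each candidate placement with a chi-squared statistic; real wiggle-matching software is considerably more sophisticated, and all names here are invented for the example.

```python
import numpy as np

def wiggle_match(ring_offsets, ring_c14, ring_sigma,
                 cal_ages, curve_c14, curve_sigma):
    """Slide a series of dated rings with known internal spacing along
    the calibration curve; return the best-fitting placement."""
    best_chi2, best_end = np.inf, None
    for end in cal_ages:  # candidate calendar age (BP) of the youngest ring
        positions = end + ring_offsets  # calendar age of each ring
        # np.interp clamps beyond the curve ends; a real tool would mask those.
        expected = np.interp(positions, cal_ages, curve_c14)
        sigma = np.sqrt(ring_sigma**2 +
                        np.interp(positions, cal_ages, curve_sigma)**2)
        chi2 = np.sum(((ring_c14 - expected) / sigma) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_end = chi2, end
    return best_end, best_chi2
```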
Combination of calibrated dates:
When several radiocarbon dates are obtained for samples which are known or suspected to be from the same object, it may be possible to combine the measurements to get a more accurate date. Unless the samples are definitely of the same age (for example, if they were both physically taken from a single item) a statistical test must be applied to determine whether the dates derive from the same object. This is done by calculating a combined error term for the radiocarbon dates for the samples in question, and then calculating a pooled mean age. It is then possible to apply a T-test to determine whether the samples have the same true mean. Once this is done, the error for the pooled mean age can be calculated, giving a final answer of a single date and range, with a narrower probability distribution (i.e., greater accuracy) as a result of the combined measurements. Bayesian statistical techniques can be applied when there are several radiocarbon dates to be calibrated. For example, if a series of radiocarbon dates is taken from different levels in a given stratigraphic sequence, Bayesian analysis can help determine if some of the dates should be discarded as anomalies, and can use the information to improve the output probability distributions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
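A sketch of the pooling arithmetic under the standard inverse-variance weighting, with a chi-squared statistic standing in for the significance test described above; the function name and the example dates are invented for illustration.

```python
import numpy as np
from scipy import stats

def pool_radiocarbon_dates(ages, sigmas):
    """Pooled mean and error for several 14C dates believed to share one
    true age, plus a p-value testing that belief."""
    ages = np.asarray(ages, dtype=float)
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    pooled_mean = np.sum(weights * ages) / np.sum(weights)
    pooled_sigma = np.sqrt(1.0 / np.sum(weights))  # narrower than any input
    chi2 = np.sum(weights * (ages - pooled_mean) ** 2)
    p_value = stats.chi2.sf(chi2, df=len(ages) - 1)  # small p: not one age
    return pooled_mean, pooled_sigma, p_value

print(pool_radiocarbon_dates([1265, 1278, 1271], [12, 15, 10]))
```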
**Straumann**
Straumann:
Straumann Group is a Swiss company, based in Basel, that manufactures dental implants and specializes in related technologies. The group researches, develops, manufactures and supplies dental implants, instruments, biomaterials, CADCAM prosthetics, digital equipment, software, and clear aligners for applications in replacement, restorative, orthodontic and preventative dentistry.
The Straumann Group also offers services to the dental profession worldwide, including training and education, which is provided in collaboration with the International Team for Implantology (ITI) and the Instituto Latino Americano de Pesquisa e Ensino Odontológico (ILAPEO). Its products and services are available in more than 100 countries through a broad network of distribution subsidiaries and partners.
Business areas:
Straumann is active in the business of replacing and restoring teeth and preventing tooth loss. Collaborating with clinics, research institutes and universities since its beginnings, the company develops implants, instruments, computer-aided design/manufacturing (CAD/CAM) prosthetics, 3D-printing and tissue-regeneration products. Straumann also sees opportunities in orthodontics, as 30-40% of implant patients need their teeth realigned before implant treatment. The company provides training and education for the dental profession around the globe in cooperation with the International Team for Implantology.
History:
The history of the Straumann Group has three distinct eras and spans more than half a century. It began in the village of Waldenburg, Switzerland in 1954 with the foundation of a research institute bearing the name of its founder, Dr. Ing. Reinhard Straumann.
History:
1954–1970: Between Watchmaking and Medtech Between 1954 and 1970, the company specialized in alloys used in timing instruments and in materials testing. Among Straumann's renowned inventions in this period were special alloys that are still used in watch springs today. A breakthrough in the use of non-corroding alloys for treating bone fractures prompted Dr. Fritz Straumann to enter the fields of orthopedics and dental implantology, which began the second phase of the company's history. 1954 In the small town of Waldenburg, at the foot of the Swiss Jura, Reinhard Straumann founds the "Dr. Ing. R. Straumann Research Institute AG".
History:
1960 The Swiss Association for the Study of Internal Fixation (AO/ASIF) is looking for a company that is capable of providing materials for internal fixation implants – Dr. H.C. Fritz Straumann, son of the company's founder, gets in touch.
History:
1970–1998: Establishment in Medtech and MBO of Stratec Between 1970 and 1990, Straumann became a leading manufacturer of osteosynthesis implants. A management buy-out of the osteosynthesis division in 1990 led to the creation of Stratec (subsequently Synthes) as a separate company. Thomas Straumann, grandson of the founder, headed the remaining part of the firm, which employed just 25 people focused exclusively on dental implants. 1990 thus marked the beginning of the Straumann Group as it is known today.
History:
Straumann established a partnership with the International Team for Implantology in 1980, and the 1980s also marked the company's geographic expansion, with subsidiaries in Germany (1980) and the US (1989). 1974 The first dental implants are developed at the Institut Straumann and undergo successful clinical testing at the University of Berne.
1980 Under the aegis of Dr. Fritz Straumann, Waldenburg, and Prof Schroeder, the University of Berne, the International Team for Implantology, the ITI, is founded.
1990 After a management buy-out of the internal fixation division, Thomas Straumann focuses the activities of the Institut Straumann AG on the area of implant dentistry.
1998–present In 1998, Straumann Holding AG became a publicly traded company on the Swiss exchange. Through the acquisition of Kuros Therapeutics (2002) and Biora (2003), Straumann entered the promising field of oral tissue regeneration.
1998 The Straumann Holding AG goes public and is listed on the Swiss stock exchange.
2000 With the opening of the production site in Villeret, located in the Bernese Jura, and the Technology Center in Waldenburg, new dimensions open up for the international Straumann group.
2002 Straumann acquires Kuros Therapeutics AG and extends its activities into the field of biomaterials.
2003 Straumann acquires the Swedish company Biora, a pioneer in the area of biologically based regeneration of dental tissue.
2004 Straumann moves into its new headquarters in Basel.
2011 Investment in Dental Wings, a developer and provider of CADCAM software and scanning technology, based in Canada.
2012 Straumann acquires Neodent from Brazil and extends its activities into the value market.
2013 Straumann invests in Medentika and Createch – both companies are active in prosthetics.
2016 Straumann acquires Equinox, in the fast-growing value segment in India.
The company also invests in the French implant maker Anthogyr to address the non-premium segment in China.
2017 Straumann takes a controlling interest in Medentika.
History:
2018 Straumann invests in Botiss Biomedical and fully acquires Createch. The Group also gains control of T-Plus in Taiwan. Additionally, the Straumann Group enters the orthodontics field and strengthens its digital capabilities through acquisitions and alliances: full acquisition of Dental Wings; acquisition of ClearCorrect (US-based provider of clear-aligner tooth correction orthodontic devices); investment in Geniova (based in Spain and specialized in developing hybrid aligner orthodontic devices); investment in Rapid Shape (3D-printing systems); increased investment in Rodo Medical; acquisition of Loop Digital Solutions; and a partnership with 3Shape (scanning and software solutions). 2019 The Group takes over the French implant manufacturer Anthogyr.
Organization:
Board of directors (as of 2022): Gilbert Achermann – Chairman of the Board; Thomas Straumann; Marco Gadola – Chair, Technology & Innovation Committee; Juan José Gonzalez; Petra Rumpf – Chair of the ESG Task Force; Beat Lüthi – Vice Chairman of the Board and Chair, Human Resources & Compensation Committee; Regula Wallimann – Chair, Audit & Risk Committee; Nadia Tarolli Schmidt.
Executive Management Board (as of 2022): Guillaume Daniellot – Chief Executive Officer; Wolfgang Becker – Head Distributor & Emerging Markets EMEA; Peter Hackel – Chief Financial Officer; Holger Haderer – Head Marketing & Education; Patrick Kok-Kien Loh – Head Sales Asia/Pacific; Matthias Schupp – Head Sales Latin America and CEO Neodent; Dirk Reznik – Head Digital Business Unit; Camila Finzi – Head Orthodontics Business Unit; Aurelio Sahagun – Head Sales North America; Rahma Samow – Head Dental Service Organizations; Jason Forbes – Chief Consumer Officer; Sébastien Roche – Chief Operations Officer; Christian Ullrich – Chief Information Officer.
Production facilities:
The Group's principal production sites for implant components and instruments are in Brazil, Germany, India, Switzerland and the US, while CADCAM prosthetics are milled in Brazil, China, Germany, Japan and the US. Biomaterials are produced in Sweden, digital equipment in Canada and Germany, and clear aligners in the US.
Production facilities:
Villeret (Switzerland) All major components of the Straumann Dental Implant System are currently manufactured at Straumann's factory in Villeret, Switzerland, which became operational in 2000. Continued global volume growth made it necessary to expand capacity, and a second production floor was fitted out in 2005. As a result, Villeret now operates two fully independent production lines, one producing surgical products (implants) and the other manufacturing components for the range of implant prosthetics (abutments). Villeret also houses the manufacturing unit for SLActive, Straumann's third-generation implant surface technology.
Production facilities:
North Andover (USA) The North American headquarters in Andover houses Straumann's first manufacturing unit outside Switzerland and produces implant-system components and instruments. The 7,400-square-meter (80,000-square-foot) production area complements Straumann's production unit in Villeret, Switzerland. It is also the home office of Neodent, the Brazilian value-segment implant brand that Straumann acquired in 2012 as part of its product portfolio.
Curitiba (Brazil) Headquarters and production facility of the Neodent non-premium portfolio.
Malmö (Sweden) Straumann's production unit in Malmö is devoted primarily to the specialized manufacture of regenerative products. In June 2003 Straumann acquired the Swedish company Biora, which specialized in protein-based products for tissue regeneration. The manufacture of Emdogain, a protein-based gel for use in periodontal disease, is centered in Malmö.
Montreal (Canada) Dental Wings headquarters and digital equipment production facility. Round Rock (USA) ClearCorrect headquarters and clear-aligner production facility. CADCAM facilities The sites in Markkleeberg (Germany), Arlington (Texas, USA), Narita (Japan) and Shenzhen (China) host Straumann's centralized CADCAM facilities (a.k.a. etkon) for tooth-restoration prosthetics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Heterochromatin protein 1**
Heterochromatin protein 1:
The family of heterochromatin protein 1 (HP1) ("Chromobox Homolog", CBX) consists of highly conserved proteins with important functions in the cell nucleus. These functions include gene repression by heterochromatin formation, transcriptional activation, regulation of the binding of cohesin complexes to centromeres, sequestration of genes to the nuclear periphery, transcriptional arrest, maintenance of heterochromatin integrity, gene repression at the single-nucleosome level, gene repression by heterochromatization of euchromatin, and DNA repair. HP1 proteins are fundamental units of heterochromatin packaging that are enriched at the centromeres and telomeres of nearly all eukaryotic chromosomes, with the notable exception of budding yeast, in which a yeast-specific silencing complex of SIR (silent information regulator) proteins serves a similar function. Members of the HP1 family are characterized by an N-terminal chromodomain and a C-terminal chromoshadow domain, separated by a hinge region. HP1 is also found at some euchromatic sites, where its binding can correlate with either gene repression or gene activation. HP1 was originally discovered by Tharappel C. James and Sarah Elgin in 1986 as a factor in the phenomenon known as position-effect variegation in Drosophila melanogaster.
Paralogs and orthologs:
Three different paralogs of HP1 are found in Drosophila melanogaster: HP1a, HP1b and HP1c. Subsequently, orthologs of HP1 were also discovered in S. pombe (Swi6), Xenopus (Xhp1α and Xhp1γ), chicken (CHCB1, CHCB2 and CHCB3), Tetrahymena (Pdd1p) and many other metazoans. In mammals, there are three paralogs: HP1α, HP1β and HP1γ. In Arabidopsis thaliana (a plant), there is one structural homolog: Like Heterochromatin Protein 1 (LHP1), also known as Terminal Flower 2 (TFL2).
Paralogs and orthologs:
HP1β in mammals HP1β interacts with the histone methyltransferase (HMTase) Suv(3-9)h1 and is a component of both pericentric and telomeric heterochromatin. HP1β is a dosage-dependent modifier of pericentric heterochromatin-induced silencing and silencing is thought to involve a dynamic association of the HP1β chromodomain with the tri-methylated histone H3 K9me3. The binding of the K9me3-modified H3 N-terminal tail by the chromodomain is a defining feature of HP1 proteins.
Interacting proteins:
HP1 interacts with numerous other proteins/molecules (in addition to H3K9me3) with different cellular functions in different organisms. Some of these HP1 interacting partners are: histone H1, histone H3, histone H4, histone methyltransferase, DNA methyltransferase, methyl CpG binding protein MeCP2, and the origin recognition complex protein ORC2.
Binding affinity and cooperativity:
HP1 has a versatile structure with three main components: a chromodomain, a chromoshadow domain, and a hinge region. The chromodomain is responsible for the specific binding affinity of HP1 to histone H3 tri-methylated at the lysine 9 residue. HP1's binding affinity to nucleosomes containing histone H3 methylated at lysine 9 is significantly higher than to those with unmethylated lysine 9. HP1 binds nucleosomes as a dimer and in principle can form multimeric complexes. Some studies have interpreted HP1 binding in terms of nearest-neighbor cooperative binding. However, analysis of available data on HP1 binding to nucleosomal arrays in vitro shows that experimental HP1 binding isotherms can be explained by a simple model without cooperative interactions between neighboring HP1 dimers; a sketch of such a non-cooperative model appears below. Nevertheless, favorable interactions between nearest neighbors of HP1 lead to limited spreading of HP1 and its marks along the nucleosome chain in vivo. The binding affinity of the HP1 chromodomain has also been implicated in the regulation of alternative splicing. HP1 can act as both an enhancer and a silencer of the splicing of alternative exons. The exact role it plays in regulation varies by gene and depends on the methylation patterns within the gene body. In humans, the role of HP1 in splicing has been characterized for alternative splicing of the EDA exon of the fibronectin gene. In this pathway HP1 acts as a mediator protein for repression of alternative splicing of the EDA exon. When the chromatin within the gene body is not methylated, HP1 does not bind and the EDA exon is transcribed. When the chromatin is methylated, HP1 binds the chromatin and recruits the splicing factor SRSF3, which binds HP1 and splices the EDA exon from the mature transcript. In this mechanism HP1 recognizes the H3K9me3-methylated chromatin and recruits a splicing factor to alternatively splice the mRNA, thereby excluding the EDA exon from the mature transcript.
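As a loose illustration only (the studies referred to above fit nucleosome-array-specific models, not this one), a non-cooperative binding isotherm of the simple Langmuir type can be sketched as follows; the dissociation constant and concentrations are hypothetical:

```python
import numpy as np

def langmuir_occupancy(free_hp1, k_d):
    """Fraction of nucleosome sites bound by HP1 dimers under a simple
    non-cooperative (Langmuir-type) model: occupancy = c / (Kd + c)."""
    return free_hp1 / (k_d + free_hp1)

# Hypothetical values purely for illustration (not measured constants):
concentrations = np.linspace(0.0, 10.0, 6)  # free HP1 dimer, arbitrary units
print(langmuir_occupancy(concentrations, k_d=1.0))
```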
Role in DNA repair:
All HP1 isoforms (HP1-alpha, HP1-beta, and HP1-gamma) are recruited to DNA at sites of UV-induced damage, oxidative damage, and DNA breaks, and they are required for the repair of these lesions. The presence of the HP1 isoforms at DNA damage sites assists with the recruitment of other proteins involved in subsequent DNA repair pathways. Recruitment of the HP1 isoforms to DNA damage is rapid, with half-maximum recruitment (t1/2) by 180 seconds in response to UV damage and a t1/2 of 85 seconds in response to double-strand breaks. This is somewhat slower than the recruitment of the very earliest proteins at sites of DNA damage, though HP1 recruitment is still one of the very early steps in DNA repair: the earliest proteins may be recruited with a t1/2 of 40 seconds for UV damage and a t1/2 of about 1 second in response to double-strand breaks (see DNA damage response). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
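If one assumes simple first-order recruitment kinetics (an illustrative assumption, not something the underlying studies necessarily report), the half-times above translate into accumulation curves like this:

```python
import math

def recruited_fraction(t_seconds, t_half):
    """Fraction of the eventual HP1 accumulation reached by time t,
    assuming first-order kinetics with rate k = ln(2) / t_half."""
    return 1.0 - math.exp(-math.log(2) / t_half * t_seconds)

print(recruited_fraction(180, 180))  # 0.5 at the UV-damage half-time
print(recruited_fraction(85, 85))    # 0.5 at the double-strand-break half-time
print(recruited_fraction(360, 180))  # 0.75 two half-times after UV damage
```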
**SmartKey**
SmartKey:
SmartKey was the first macro-processing program of its type, and the first terminate-and-stay-resident program for PCs and for their eight-bit predecessors, CP/M microcomputers. SmartKey's "keyboard definitions" were first used with the early word processing program WordStar to change the margins of screenplays. Thousands of other uses were found for the program.
SmartKey was written by Nick Hammond, an admiral in the Royal Australian Navy, and published by Software Research Technologies, founded by Stan Brin and Reid H. Griffin.
SmartKey received two Editor's Choice awards from PC Magazine due to its tight code and powerful features, but was never able to counter the marketing muscle of its largest competitor, SuperKey, a product of Borland International. The company folded in 1987 and the product disappeared from the market soon after. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Subaerial unconformity**
Subaerial unconformity:
In geology, a subaerial unconformity is a surface that displays evidence of erosion by processes that operate at the land surface. The processes generating a subaerial unconformity can include wind degradation, pedogenesis, dissolution processes such as karstification, and fluvial processes such as fluvial erosion, sediment bypass and river rejuvenation.
Role in sequence stratigraphy:
Subaerial unconformities are used as limiting surfaces that define sequences in sequence stratigraphy. In this context they are synonymous with the terms lowstand unconformity, regressive surface of fluvial erosion as well as fluvial entrenchment surface and incision surface. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Intonjutsu**
Intonjutsu:
Intonjutsu (隠遁術, literally "disappearing technique") is the ninja art of "disappearing", comprising many walking and stealth techniques. It also covers wilderness survival, fieldcraft, and shinobi-aruki (silent movement, steps and leaps). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |