**BBCOR** BBCOR: BBCOR (bat-ball coefficient of restitution) is a baseball bat performance standard created by the National Collegiate Athletic Association (NCAA) to certify the performance of composite baseball bats used in competition. From the standard: "To initiate the certification process for all baseball bats that are constructed with materials other than one-piece solid wood, an interested bat manufacturer must send one of the NCAA Certification Centers written notice of its intent to request certification testing on specific models it deems appropriate for testing." This standard went into effect on January 1, 2011, and all composite bats used in NCAA competition must meet the BBCOR standard. The standard is used to certify "all baseball bats that are constructed with materials other than one-piece solid wood". BBCOR: BBCOR certification matters a great deal to players and coaches, so checking that a bat carries BBCOR certification is important.
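For orientation, the quantity the standard is named after can be written in its general kinematic form. The following LaTeX sketch shows the generic coefficient of restitution for a bat-ball collision; it is an illustration only, not the exact laboratory test formula defined in the NCAA certification protocol, and the velocity symbols are assumptions introduced here.

```latex
% Generic bat-ball coefficient of restitution (illustrative; the NCAA protocol
% defines the exact certified quantity and how it is measured).
\[
  e \;=\; \frac{\left| v_{\text{ball,out}} - v_{\text{bat,out}} \right|}
               {\left| v_{\text{ball,in}}  - v_{\text{bat,in}}  \right|}
\]
```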
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crossbuck** Crossbuck: A crossbuck is a traffic sign used to indicate a level railway crossing. It is composed of two slats of wood or metal of equal length, fastened together on a pole in a saltire formation (resembling the letter X). Crossbucks are sometimes supplemented by electrical warnings of flashing lights, a bell, or a boom barrier that descends to block the road and prevent traffic from crossing the tracks. Vienna Convention: The Vienna Convention on Road Signs and Signals, a multilateral treaty of the United Nations with the intention of standardizing traffic signs around the world, prescribes several different regulations for the "crossbuck" sign. Vienna Convention: The sign should consist of two arms not less than 1.2 metres (3.9 ft) long, crossed in the form of an X. The first model may have a white or yellow ground with a thick red or black border. The second model may have a white or yellow ground with a thin black border and an inscription, such as "RAILWAY CROSSING". If lateral clearance obstructs the placement of the sign, it may be rotated 90° so that its points are directed vertically. If used at a level crossing with more than one set of tracks, a half cross or a supplementary plate stating the number of tracks may be added below. Vienna Convention: Most countries use one of these two models. Variants around the world: In the United States, the crossbuck carries the words "RAIL" and "ROAD" on one arm and "CROSSING" on the other ("RAIL" and "ROAD" are separated by the "CROSSING" arm), in black text on a white background. Older variants simply used black and white paint; newer installations use a reflective white material with non-reflective lettering. Some antique U.S. crossbucks were painted in other color schemes, and used glass "cat's eye" reflectors on the letters to make them stand out. Other countries, such as China, also use this layout, but with appropriately localized terms. Often, a supplemental sign below the crossbuck indicates the number of tracks at the crossing. Variants around the world: A special kind of crossing sign assembly was introduced on an experimental basis in Ohio in 1992, the "Buckeye Crossbuck". It included an enhanced crossbuck, reflective and with red lettering, and also a reflective plate reading "YIELD" below the crossbuck, whose sides are bent backwards in order to catch and reflect at a right angle the light of an approaching train. The experiment's final report gave the device a favorable review; however, the plate, R15-9 "Crossbuck Shield", was rejected for inclusion in the 2003 Manual on Uniform Traffic Control Devices. Variants around the world: In Canada, crossbucks have a red border and no lettering. These were installed in the 1980s shortly after English-French bilingualism was made official, replacing signs of a style similar to those used in the U.S., except the word "RAILWAY" was used instead of "RAILROAD" and in certain areas the words "TRAVERSE DE CHEMIN DE FER" were used. In Mexico, the crossbucks read "CRUCERO FERROCARRIL", a literal translation of its U.S. counterpart. Older designs read "CUIDADO CON EL TREN", meaning "beware of the train". In Argentina, the most common legend is "PELIGRO FERROCARRIL" ("danger: railroad"). Other crosses also read "CUIDADO CON LOS TRENES - PARE MIRE ESCUCHE" ("beware of the trains - stop, look, listen") for the Ferrocarril Belgrano, "PASO A NIVEL - FERRO CARRIL" for the Ferrocarril Mitre and "CUIDADO CON LOS TRENES" ("beware of the trains") for the Ferrocarril Roca.
In parts of Europe, the cross is white with red trimmings or ends, sometimes on a rectangular background; in Finland and Greece the cross is yellow, trimmed with red. Taiwan uses two crossbucks: a version with a yellow and black cross, and one with the cross in white with a red border. A special symbol in the center indicates an electric railroad crossing, cautioning road users about excessive-height cargo that may contact the electric wires. Variants around the world: In Australia, the crossbuck is a St Andrew's Cross as in Europe, but uses words and the same color as the American crossbuck. In contrast to the American "RAILROAD CROSSING", Australian signs say "RAILWAY CROSSING" or "TRAMWAY CROSSING". (Most cases where a tram in its own right-of-way crosses a road do not use a crossbuck and so are regular intersections rather than level crossings.) Different countries may classify the sign differently. For example, in Australia it is considered a regulatory sign, while in neighbouring New Zealand it is considered a warning sign. Some countries, such as Australia, France, New Zealand, Slovakia and Slovenia may place the crossbuck design on a "target board", while other countries quite often do not. In the United Kingdom, it is only used for crossings with no barriers or signal lights. Multiple tracks: Several countries use a sign to indicate that multiple tracks must be crossed at a level crossing. In Australia, Canada, New Zealand, and the U.S., a sign is mounted beneath the crossbuck (above the warning light assembly, if any) with the number of tracks. Many European countries use multiple crossbucks or additional chevrons ("half-crossbucks") below the first one. Taiwan also uses half-crossbucks below the regular crossbuck. Advance warning: Several countries include the crossbuck iconography in their warning signs for a railway crossing ahead.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Knight shift** Knight shift: The Knight shift is a shift in the nuclear magnetic resonance (NMR) frequency of a paramagnetic substance first published in 1949 by the UC Berkeley physicist Walter D. Knight. For an ensemble of N spins in a magnetic induction field \(\vec{B}\), the nuclear Hamiltonian for the Knight shift is expressed in Cartesian form by \(\mathcal{H}_{KS} = -\sum_{i}^{N} \gamma_i \, \hat{\vec{I}}_i \cdot \mathbb{K}_i \cdot \vec{B}\), where for the ith spin \(\gamma_i\) is the gyromagnetic ratio, \(\hat{\vec{I}}_i\) is a vector of the Cartesian nuclear angular momentum operators, and the matrix \(\mathbb{K}_i = \begin{pmatrix} K_{xx} & K_{xy} & K_{xz} \\ K_{yx} & K_{yy} & K_{yz} \\ K_{zx} & K_{zy} & K_{zz} \end{pmatrix}\) is a second-rank tensor similar to the chemical shift shielding tensor. Knight shift: The Knight shift refers to the relative shift K in NMR frequency for atoms in a metal (e.g. sodium) compared with the same atoms in a nonmetallic environment (e.g. sodium chloride). The observed shift reflects the local magnetic field produced at the sodium nucleus by the magnetization of the conduction electrons. The average local field in sodium augments the applied resonance field by approximately one part per 1000. In nonmetallic sodium chloride the local field is negligible in comparison. Knight shift: The Knight shift is due to the conduction electrons in metals. They introduce an "extra" effective field at the nuclear site, due to the spin orientations of the conduction electrons in the presence of an external field. This is responsible for the shift observed in the nuclear magnetic resonance. The shift comes from two sources, one is the Pauli paramagnetic spin susceptibility, the other is the s-component wavefunctions at the nucleus. Knight shift: Depending on the electronic structure, the Knight shift may be temperature dependent. However, in metals which normally have a broad featureless electronic density of states, Knight shifts are temperature independent.
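As a compact restatement of the two points above (the shift is a relative frequency shift, and it is driven by the Pauli spin susceptibility together with the s-electron density at the nucleus), the following LaTeX sketch gives the definition of K and one common CGS textbook form of the proportionality; the exact prefactor and the unit conventions vary by source and are an assumption here, not something stated in this article.

```latex
% Relative Knight shift: metal resonance frequency against a nonmetallic reference.
\[
  K \;=\; \frac{\omega_{\text{metal}} - \omega_{\text{ref}}}{\omega_{\text{ref}}}
\]
% Common CGS textbook proportionality: Pauli spin susceptibility times the average
% s-electron probability density at the nucleus for states at the Fermi level.
\[
  K \;\propto\; \chi_{s}^{\text{Pauli}}
      \left\langle \left| \psi_{s}(0) \right|^{2} \right\rangle_{E_{F}}
\]
```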
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Climate of Kosovo** Climate of Kosovo: Kosovo is a relatively small country. Because of its climatic position and the complicated structure of its relief, it has a variety of climate systems. Climate of Kosovo: Kosovo lies in the southern part of the middle latitudes of the northern hemisphere and is affected by the Mediterranean mild climate and the European continental climate. Important factors that affect Kosovo's climate are its position relative to Eurasia and Africa, hydrographic masses (the Atlantic Ocean and the Mediterranean Sea), atmospheric masses (tropical, arctic and continental) and others. Minor factors are relief, hydrography, plains and vegetation. Areas: The climatic area of the Ibar valley is influenced by continental air masses. For this reason, in this part of the region the winters are colder, with average temperatures above −10 °C (14 °F) but sometimes down to −26 °C (−15 °F). The summers are very hot, with average temperatures of 20 °C (68 °F), sometimes up to 37 °C (99 °F). This area is characterized by a dry climate and a total annual precipitation of approximately 600 mm per year. The climatic area of Dukagjini, which includes the watershed of the White Drin river, is influenced very much by the hot air masses that cross the Adriatic Sea. Average temperatures during winter range from 0.5 °C (32.9 °F), sometimes reaching 22.8 °C (73.0 °F). The average annual precipitation of this climatic area is about 700 mm (28 in) per year. The winter is characterized by heavy snowfalls. The climatic area of the mountains and forest parts is characterised by a typical forest climate, associated with heavy rainfall (900 to 1,300 mm (35 to 51 in) per year), summers that are very short and cold, and winters that are cold and with a lot of snow. Overall, the territory of Kosovo is characterised by a sunny climate with variable temperature and humidity conditions. Areas: General air flows and physical, geographical and topographical characteristics drive territorial and temporal changes in climatic elements. Air Temperature: Air temperature is the main climatic element; it indicates the degree of heating of the air in near-earth layers. In Kosovo there are thermal differences in both horizontal and vertical directions. The eastern side is colder than the western part. Air Temperature: The average annual temperature of Kosovo is 9.5 °C (49.1 °F). The warmest month is July with 28.3 °C (82.9 °F), the coldest is January with −18.7 °C (−1.7 °F). The highest average annual temperature is in Prizren (12 °C (54 °F)), the lowest in Podujevo (9 °C (48 °F)). Except for Prizren and Istok, all other meteorological stations have average January temperatures under 0 °C (32 °F). Air Temperature: Beyond average values, the thermal characteristics of Kosovo are better understood through analysis of extreme values. Maximum values at all meteorological stations are higher than 35 °C (95 °F), while the absolute lowest value was registered on June 6, 1963 in Gjilan, with a value of −32.5 °C (−26.5 °F). Based on these figures, the amplitude of the average values in Kosovo is 20.5 °C (68.9 °F). Year-to-year variations in air temperature are quite noticeable. In the lower parts of Kosovo there are usually around 30 tropical days per year. Precipitation: Precipitation is an important indicator of the climate in Kosovo and represents a meteorological element that varies significantly over time and territory.
Key features of the rainfall of each territory are the forms of precipitation, their distribution during the year, the annual amount, the pluviometric regime, the number of days with precipitation, and its intensity. All forms of precipitation occur in Kosovo. Of particular significance are the rainfall in the hills and valleys and the heavy snowfall in high mountain areas such as the Accursed Mountains and the Šar Mountains. The presence of hail is an unfortunate phenomenon for agriculture in Kosovo. This form of precipitation mostly occurs during July and August. Precipitation: Even though Kosovo covers a relatively small territory, there are noticeable differences in the amount of precipitation between areas. Kosovo is affected by middle-maritime and middle-continental precipitation regimes. In the west, the middle-maritime regime is more prevalent. This type of precipitation regime is known for heavy rainfall during the year (over 700 mm (28 in)), with the maximum amount during November and the minimum during summer. The eastern part is affected by the middle-continental type of precipitation, which is characterized by less rainfall during the year (just over 600 mm (24 in)), with the maximum amount during May and the minimum during winter. The largest amount of rainfall falls in the western part of the Accursed Mountains, with over 1,750 mm (69 in), while the lowest amount of rainfall is found in the east, around Kosovska Kamenica, with less than 600 mm (24 in). Snowfall is a common occurrence during the cold months of the year. In the low parts of Kosovo there are on average 26 days with snowfall, while the high parts have over 100 days. The number of days with snow and its thickness depend on the relief. Snowfall is important for keeping the surface moist, creating water reserves, tourism, etc. Winds: Winds are a variable meteorological phenomenon in Kosovo. The dominant wind direction usually has the greatest force and speed. The average wind speed in Kosovo ranges from 1.3 m/s in Peja to 2.4 m/s in Ferizaj. Extreme wind speeds of around 31 m/s occur during March and April and usually cause damage to houses. Insolation: Insolation is a measure of solar radiation energy received on a given surface area and recorded during a given time. Insolation is a climatic element that is important for various economic activities such as agriculture and tourism. The period of insolation depends on astronomical, meteorological and relief factors. Insolation is lower in narrow valleys, river valleys and mountain ranges as a result of increased cloud cover and elevation. Insolation: Kosovo has on average 2,066 hours of sunshine per year, or approximately 5.7 hours per day. The highest insolation value is in Pristina, with 2,140 hours per year, while Peja has the lowest value of 1,958 hours; intermediate values of 2,067 hours are recorded elsewhere, and Prizren has 2,099 hours. The maximum insolation in Kosovo occurs during July, while the lowest occurs in December. Climatic territorial differences: Territorial climatic differences depend on various factors, such as vegetation and the humidity of the territory. Climate classifications are usually based on wind exposure, mountain height and direction, and climatic effects on vegetation and watercourses. Climatic territorial differences: Of all the climatic classifications applied to Kosovo, the most accurate is the one provided by W. Köppen. According to this classification, Kosovo is part of the C and D types of climate.
In Kosovo's C-type climate, the average temperature during the hot summer months is under 22 °C (72 °F) and the average during the cold months is above −3 °C (27 °F). According to the thermal characteristics, the amount of rainfall and its regime in the low parts of Kosovo, two climatic subtypes can be distinguished: the Metohija subtype and the Kosovo subtype. The climatic subtype of the western part has mild winters and a large amount of rainfall. These characteristics are inherited from the Mediterranean mild climate. The climatic subtype of Kosovo has cold winters and a lack of rainfall, characteristics inherited from the continental type of climate. Even though Kosovo has these climatic characteristics most of the time, in certain years there can be major deviations. In 1992, Kosovo had a very hot and dry summer, which fits the B climatic type of Köppen. The D climatic type, which is characterized by average temperatures of −3 °C (27 °F) during the cold months and 15 °C (59 °F) during the hot months, can be found in the high parts of Kosovo, such as the Accursed Mountains, the Šar Mountains and Kopaonik. This type of climate is known as subalpine and alpine. Human Effect: Climate is important in human life and in vital activities such as agriculture, rest, recreation, tourism, transport, medicine, and sanitary-hygienic conditions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**F-Spot** F-Spot: F-Spot is a slowly maintained image organizer, designed to provide personal photo management for the GNOME desktop environment. The name is a play on the word F-Stop. History: The F-Spot project was started by Ettore Perazzoli and is maintained by Stephen Shaw. F-Spot is written in the C# programming language using Mono. Before its shutdown and discontinuation in 2017, F-Spot was the standard image tool for several GNOME-based distributions. Even before that, Fedora replaced F-Spot with Shotwell in Fedora 13. Ubuntu did the same as of 10.10 Maverick Meerkat. There is some new work to update F-Spot and potentially bring it to Windows and Mac. Features: F-Spot aimed to have an interface that is simple to use but that also facilitates advanced features such as tagging images, and displaying and exporting image metadata in Exif and XMP formats. Features: All major photographic image formats are supported, including JPEG, PNG, TIFF, DNG, GIF, SVG and PPM, as well as several vendor-specific RAW formats (CR2, PEF, ORF, SRF, CRW, MRW and RAF). As of 2008, the RAW formats were not editable with F-Spot. However, newer releases of F-Spot have the DevelopInUFRaw extension, which calls on UFRaw for the conversion work, and then re-imports the resulting JPEG back into F-Spot as a new version of the original RAW. Features: Photos can be imported directly from the camera. The driver support is provided by libgphoto2. The GNOME desktop environment can also optionally detect if a camera or a memory card has been attached, and import images to F-Spot automatically. Photo CDs can be created by selecting multiple photographs and selecting "Export to CD" from the main menu. Basic functions such as crop and rotate are available alongside more advanced features such as red-eye removal and versioning. The rotate function allows for movements in single-degree increments with autocrop, not just 90-degree adjustments. Color adjustments are supported with a histogram. They include an auto-improve function and individual brightness, contrast, hue, saturation and temperature controls. Features: Photos in the F-Spot library can be uploaded to a number of online photo storage sites. F-Spot supports two major gallery sites, Flickr and Picasa Web Albums, as well as stand-alone web gallery software, including Gallery and O.r.i.g.i.n.a.l. F-Spot can also generate static web gallery sites and export to Facebook. F-Spot automatically downsizes photos before exporting to Flickr, and though it describes this as "optional," there is no option to skip downsizing before export. Technical information: When images are imported into F-Spot, they are written to disk. The folder is /username/Pictures/Photos/[year]/[month]/[Day].
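The year/month/day layout of the import folder can be illustrated with a short sketch; this is a hedged, illustrative Python snippet based only on the path template quoted above (F-Spot itself is written in C#, and this is not its actual import code).

```python
from datetime import date
from pathlib import Path

def import_folder(base: Path, taken: date) -> Path:
    """Illustrative only: build the [year]/[month]/[day] layout described above."""
    return base / f"{taken.year:04d}" / f"{taken.month:02d}" / f"{taken.day:02d}"

# Example: a photo taken on 14 July 2009 would land in .../Pictures/Photos/2009/07/14
print(import_folder(Path.home() / "Pictures" / "Photos", date(2009, 7, 14)))
```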
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Boccette** Boccette: Boccette is a billiards-type game played in Italy. A variation of the game of five-pins, it is played on a pocketless (carom) billiard table with nine balls (typically four white, four red, and one blue). Cue sticks are not used; the balls are manipulated with the hands directly. The game is very popular in countries formerly colonized by Italy, especially Eritrea.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**First-class citizen** First-class citizen: In a given programming language design, a first-class citizen is an entity which supports all the operations generally available to other entities. These operations typically include being passed as an argument, returned from a function, and assigned to a variable. History: The concept of first- and second-class objects was introduced by Christopher Strachey in the 1960s. He did not actually define the term strictly, but contrasted real numbers and procedures in ALGOL: First and second class objects. In ALGOL, a real number may appear in an expression or be assigned to a variable, and either of them may appear as an actual parameter in a procedure call. A procedure, on the other hand, may only appear in another procedure call either as the operator (the most common case) or as one of the actual parameters. There are no other expressions involving procedures or whose results are procedures. Thus in a sense procedures in ALGOL are second class citizens—they always have to appear in person and can never be represented by a variable or expression (except in the case of a formal parameter)... History: Robin Popplestone gave the following definition: All items have certain fundamental rights. 1. All items can be the actual parameters of functions. 2. All items can be returned as results of functions. 3. All items can be the subject of assignment statements. 4. All items can be tested for equality. During the 1990s, Raphael Finkel proposed definitions of second and third class values, but these definitions have not been widely adopted. Examples: The simplest scalar data types, such as integer and floating-point numbers, are nearly always first-class. Examples: In many older languages, arrays and strings are not first-class: they cannot be assigned as objects or passed as parameters to a subroutine. For example, neither Fortran IV nor C supports array assignment, and when they are passed as parameters, only the position of their first element is actually passed—their size is lost. C appears to support assignment of array pointers, but in fact these are simply pointers to the array's first element, and again do not carry the array's size. Examples: In most languages, data types are not first-class objects, though in some object-oriented languages, classes are first-class objects and are instances of metaclasses. Languages in the functional programming family often also feature first-class types, in the form of, for example, generalized algebraic data types, or other metalanguage amenities enabling programs to implement extensions to their own implementation language. Few languages support continuations and GOTO-labels as objects at all, let alone as first-class objects. Functions: Many programming languages support passing and returning function values, which can be applied to arguments. Whether this suffices to call function values first-class is disputed. Some authors require it be possible to create new functions at runtime to call them 'first-class'. Under this definition, functions in C are not first-class objects; instead, they are sometimes called second-class objects, because they can still be manipulated in most of the above fashions (via function pointers). In Smalltalk, functions (methods) are first-class objects, just like Smalltalk classes. Since Smalltalk operators (+, -, etc.) are methods, they are also first-class objects.
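The Strachey and Popplestone criteria above are easy to demonstrate in a language with first-class functions. The following Python sketch (an illustration added here, not taken from any source the article cites) exercises each operation: assignment, argument passing, returning, runtime creation, and an equality test.

```python
# Illustrative Python sketch: functions satisfy the "first-class" operations
# listed above; they can be assigned to variables, passed as arguments,
# returned from other functions, and created at runtime.

def twice(f, x):
    return f(f(x))              # a function received as an argument

def make_adder(n):
    def add(x):                 # a new function created at runtime (a closure)
        return x + n
    return add                  # ...and returned as a result

inc = make_adder(1)             # assigned to a variable like any other value
print(twice(inc, 3))            # 5
print(inc == make_adder(1))     # False: distinct function objects compare unequal
```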
Reflection: Some languages, such as Java and PHP, have an explicit reflection subsystem which allows access to internal implementation structures even though they are not accessible or manipulable in the same way as ordinary objects. Reflection: In other languages, such as those in the Lisp family, reflection is a central feature of the language, rather than a special subsystem. Typically this takes the form of some set of the following features: syntactic macros or fexprs - which allow the user to write code which handles code as data and evaluates it by discretion, enabling, for example, programs to write programs (or rewrite themselves) inside of the compiler or interpreter; a meta-circular evaluator - which provides a definition of the language's evaluator as a compiled tautologisation of itself, facilitating straightforward modification of the language without requiring a metalanguage different from itself; a metaobject protocol - a special form of meta-circular evaluator for object-oriented programming, in which the object system implements itself recursively via a system of metaclasses and metaobjects, which are themselves classes and objects. These allow varying forms of first-class access to the language implementation, and are, in general, manipulable in the same way as, and fully indistinguishable from, ordinary language objects. Because of this, their usage generally comes with some (cultural) stipulations and advice, as untested modification of the core programming system by users can easily undermine performance optimisations made by language implementers.
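As a concrete (and deliberately minimal) illustration of reflection, the sketch below uses Python's own built-ins rather than the Java or PHP APIs mentioned above: a running program inspects its own classes and methods and even adds behaviour at runtime, all through ordinary objects.

```python
# Illustrative Python sketch of reflection (Python built-ins, not the Java or
# PHP reflection APIs): a program inspects and extends its own structures
# through ordinary objects.

class Greeter:
    def hello(self, name):
        return f"hello, {name}"

g = Greeter()
print(type(g).__name__)                               # 'Greeter': the class is itself an object
print([m for m in dir(g) if not m.startswith("_")])   # ['hello']

method = getattr(g, "hello")                          # look up a method by name at runtime
print(method("world"))                                # 'hello, world'

setattr(Greeter, "bye", lambda self, n: f"bye, {n}")  # add behaviour to the class at runtime
print(g.bye("world"))                                 # 'bye, world'
```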
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TiVo Media File System** TiVo Media File System: The MFS or Media File System is a proprietary file system used on TiVo hard drives for fault tolerant real-time recording of live TV. TiVo Media File System: Although MFS is still not particularly well understood by programmers unaffiliated with the TiVo corporation, enough is known about the file system to be able to do reads and limited writes. Applications exist to manipulate the file system and objects within it. Most of these applications are reverse engineered from software found on the TiVo itself, as many of the early TiVo programs were little more than specialized scripts that manipulated the data. TiVo Media File System: The MFS file system is organized more like a database, including transaction logging and rollback capabilities. It utilizes multiple partitions on the drive for a complete system. The partitions come in pairs, with one being the "Application" partition, and the other being the "Media" partition. The Media region is invariably quite large, and organized into long continuous blocks of data, with a variable block size with a minimum of 1 megabyte. This is because it is designed to store large sections of video. TiVo Media File System: Each object in the TiVo file system is assigned an ID, which is internally called the "FSID" (presumably, file system ID). There are (at least) 4 types of objects that MFS supports: Streams (recordings, audio or video), Directory, Database, and Files. All Stream objects are stored in the MFS media regions, while the other types are stored in "application" regions. TiVo Media File System: The file system itself is implemented entirely in the Linux userspace. The primary reason TiVo devised such a system is that they needed a way to store large continuous sections of data easily in a manner that lent itself well to streaming that data directly to the media decoders in the TiVo devices, without being CPU dependent. Thus, the CPU has very little involvement in playback and recording functionality, simply directing the encoder/decoder chips to stream data directly to the drives via direct memory access while mapping sections of virtual memory onto the drive. The main CPU then orchestrates the entire affair. The result of this is that data stored on the MFS media region is not formatted into normal files, as such, but is a direct data stream that is indexed by the database sections in the MFS application region.
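The object layout described above (paired application/media partitions and FSID-keyed objects of four types, with Streams in the media region and everything else in the application region) can be sketched as a toy data model. MFS is proprietary and only partially reverse engineered, so every name in this Python sketch is invented for illustration and nothing here reflects TiVo's real on-disk format.

```python
from dataclasses import dataclass, field
from enum import Enum

# Toy model of the layout described above; all names are hypothetical.

class MfsObjectType(Enum):
    STREAM = "stream"          # recordings; kept in the large media region
    DIRECTORY = "directory"    # the remaining types live in the application region
    DATABASE = "database"
    FILE = "file"

@dataclass
class MfsObject:
    fsid: int                  # "file system ID" assigned to every object
    kind: MfsObjectType

@dataclass
class PartitionPair:
    # MFS partitions come in application/media pairs; model each region as a
    # simple FSID-keyed dictionary.
    application: dict = field(default_factory=dict)
    media: dict = field(default_factory=dict)

    def add(self, obj: MfsObject) -> None:
        region = self.media if obj.kind is MfsObjectType.STREAM else self.application
        region[obj.fsid] = obj

pair = PartitionPair()
pair.add(MfsObject(fsid=1001, kind=MfsObjectType.STREAM))
pair.add(MfsObject(fsid=1002, kind=MfsObjectType.DATABASE))
print(sorted(pair.media), sorted(pair.application))    # [1001] [1002]
```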
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jamie A. Davies** Jamie A. Davies: Jamie A. Davies is a British scientist, Professor of Experimental Anatomy at the University of Edinburgh, and leader of a laboratory in its Centre for Integrative Physiology. He works in the fields of developmental biology, synthetic biology, and tissue engineering. He is also Principal Investigator for the IUPHAR/BPS Guide to Pharmacology database. Biography: Davies received his BA, MA, and, in 1989, DPhil, all at the University of Cambridge. He then took up postdoctoral fellowships first at the University of Manchester, and then at the University of Southampton before being appointed to Edinburgh. He was initially appointed in 1995 as a lecturer, rising to senior lecturer, reader, and finally professor. Biography: Davies was the founding editor of the journal Organogenesis and is on the editorial boards of Journal of Anatomy, and Nephron. He is a fellow of the Royal Society of Edinburgh, the Royal Society of Biology, the Royal Society of Medicine, a Principal Fellow of the Higher Education Academy and, through his 'other life' as a dance teacher, a Fellow of the Royal Society of Arts. He is also a member of the Institute of Electrical and Electronics Engineers. He served on the board of the National Centre for 3Rs from 2009 to 2014, and was deputy chair from 2012. Books: Davies J.A. (2004) Branching Morphogenesis. Springer. Davies J.A. (2004) Mechanisms of Morphogenesis. Elsevier/Academic Press. Davies J.A. (2012) Replacing animal models: a practical guide to creating and using culture-based biomimetic alternatives. Wiley-Blackwell. Davies J.A. (2012) Tissue Regeneration. InTech. Davies J.A. (2013) Mechanisms of Morphogenesis (2nd Edition). Elsevier/Academic Press. Davies J.A. (2014) Life Unfolding. Oxford University Press. Davies J.A. (2018) Synthetic Biology: A Very Short Introduction. Oxford University Press. Davies J.A. & Lawrence M.L. (2018) Organoids and Mini-organs. Academic Press. Davies J.A. (2020) Synthetic biology in mammals. Oxford University Press. Davies J.A. (2021) Human Physiology: A Very Short Introduction. Oxford University Press.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eprint** Eprint: In academic publishing, an eprint or e-print is a digital version of a research document (usually a journal article, but it may also be a thesis, conference paper, book chapter, or a book) that is accessible online, usually as green open access, whether from a local institutional or a central digital repository. When applied to journal articles, the term "eprints" covers both preprints (before peer review) and postprints (after peer review). Eprint: Digital versions of materials other than research documents are not usually called e-prints, but go by other names, such as e-books.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UFluids@Home** UFluids@Home: μFluids@Home is a computer simulation of two-phase flow behavior in microgravity and microfluidics problems at Purdue University, using the Surface Evolver program. About: The project's purpose is to develop better methods for the management of liquid rocket propellants in microgravity, and to investigate two-phase flow in microelectromechanical systems, taking into account factors like surface tension. Systems can then be designed that use electrowetting, channel geometry, and hydrophobic or hydrophilic coatings to allow the smooth passage of fluids. Such systems would include compact medical devices, biosensors, and fuel cells, to name a few. Computing platform: μFluids@Home uses the BOINC volunteer computing platform. Application notes: There is no screensaver. Work unit CPU times are generally less than 20 hours. Work units average around 500 kB in size. Many work units must be run to reach levels of credit comparable to the SETI@home or climateprediction.net BOINC projects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cantor distribution** Cantor distribution: The Cantor distribution is the probability distribution whose cumulative distribution function is the Cantor function. Cantor distribution: This distribution has neither a probability density function nor a probability mass function, since although its cumulative distribution function is a continuous function, the distribution is not absolutely continuous with respect to Lebesgue measure, nor does it have any point-masses. It is thus neither a discrete nor an absolutely continuous probability distribution, nor is it a mixture of these. Rather it is an example of a singular distribution. Cantor distribution: Its cumulative distribution function is continuous everywhere but horizontal almost everywhere, so is sometimes referred to as the Devil's staircase, although that term has a more general meaning. Characterization: The support of the Cantor distribution is the Cantor set, itself the intersection of the (countably infinitely many) sets obtained by repeatedly removing open middle thirds: \(C_0=[0,1]\), \(C_1=[0,\tfrac{1}{3}]\cup[\tfrac{2}{3},1]\), \(C_2=[0,\tfrac{1}{9}]\cup[\tfrac{2}{9},\tfrac{1}{3}]\cup[\tfrac{2}{3},\tfrac{7}{9}]\cup[\tfrac{8}{9},1]\), \(C_3=[0,\tfrac{1}{27}]\cup[\tfrac{2}{27},\tfrac{1}{9}]\cup[\tfrac{2}{9},\tfrac{7}{27}]\cup[\tfrac{8}{27},\tfrac{1}{3}]\cup[\tfrac{2}{3},\tfrac{19}{27}]\cup[\tfrac{20}{27},\tfrac{7}{9}]\cup[\tfrac{8}{9},\tfrac{25}{27}]\cup[\tfrac{26}{27},1]\), \(C_4=[0,\tfrac{1}{81}]\cup[\tfrac{2}{81},\tfrac{1}{27}]\cup[\tfrac{2}{27},\tfrac{7}{81}]\cup[\tfrac{8}{81},\tfrac{1}{9}]\cup[\tfrac{2}{9},\tfrac{19}{81}]\cup[\tfrac{20}{81},\tfrac{7}{27}]\cup[\tfrac{8}{27},\tfrac{25}{81}]\cup[\tfrac{26}{81},\tfrac{1}{3}]\cup[\tfrac{2}{3},\tfrac{55}{81}]\cup[\tfrac{56}{81},\tfrac{19}{27}]\cup[\tfrac{20}{27},\tfrac{61}{81}]\cup[\tfrac{62}{81},\tfrac{7}{9}]\cup[\tfrac{8}{9},\tfrac{73}{81}]\cup[\tfrac{74}{81},\tfrac{25}{27}]\cup[\tfrac{26}{27},\tfrac{79}{81}]\cup[\tfrac{80}{81},1]\), \(C_5=\cdots\). The Cantor distribution is the unique probability distribution for which, for any \(C_t\) (t ∈ { 0, 1, 2, 3, ... }), the probability of a particular interval in \(C_t\) containing the Cantor-distributed random variable is identically \(2^{-t}\) on each one of the \(2^t\) intervals. Moments: By symmetry and boundedness it is easy to see that for a random variable X having this distribution, its expected value is E(X) = 1/2, and that all odd central moments of X are 0. The law of total variance can be used to find the variance var(X), as follows. For the above set \(C_1\), let Y = 0 if X ∈ [0,1/3], and 1 if X ∈ [2/3,1]. Conditional on Y, X is a rescaled copy of the Cantor distribution supported on an interval of length 1/3, so \(\operatorname{var}(X\mid Y)=\tfrac{1}{9}\operatorname{var}(X)\). Then: \(\operatorname{var}(X)=\operatorname{E}(\operatorname{var}(X\mid Y))+\operatorname{var}(\operatorname{E}(X\mid Y))=\tfrac{1}{9}\operatorname{var}(X)+\operatorname{var}\{\tfrac{1}{6}\text{ with probability }\tfrac{1}{2};\ \tfrac{5}{6}\text{ with probability }\tfrac{1}{2}\}=\tfrac{1}{9}\operatorname{var}(X)+\tfrac{1}{9}\). From this we get \(\operatorname{var}(X)=\tfrac{1}{8}\). A closed-form expression for any even central moment can be found by first obtaining the even cumulants \(\kappa_{2n}=\dfrac{2^{2n-1}\,(2^{2n}-1)\,B_{2n}}{n\,(3^{2n}-1)}\), where \(B_{2n}\) is the 2nth Bernoulli number, and then expressing the moments as functions of the cumulants.
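The moment values above are easy to check numerically. The Python sketch below (an illustration added here, not from the article) generates Cantor-distributed samples digit by digit, using the fact that each ternary digit of such a value is 0 or 2 with probability 1/2 each, and compares the sample mean and variance with E(X) = 1/2 and var(X) = 1/8.

```python
import random

# Simulation sketch: a Cantor-distributed value is built digit by digit in base 3,
# choosing each digit as 0 or 2 with probability 1/2 (the removed middle thirds
# are never visited). The sample mean and variance should approach 0.5 and 0.125.

def cantor_sample(digits: int = 40) -> float:
    x, scale = 0.0, 1.0
    for _ in range(digits):
        scale /= 3.0
        x += random.choice((0, 2)) * scale
    return x

n = 200_000
samples = [cantor_sample() for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(round(mean, 3), round(var, 3))   # roughly 0.5 and 0.125
```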
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Swale (landform)** Swale (landform): A swale is a shady spot, or a sunken or marshy place. In US usage in particular, it is a shallow channel with gently sloping sides. Such a swale may be either natural or human-made. Artificial swales are often infiltration basins, designed to manage water runoff, filter pollutants, and increase rainwater infiltration. Bioswales are swales that specifically incorporate plants or vegetation in their construction. On land: The use of swales has been popularized as a rainwater-harvesting and soil-conservation strategy by Bill Mollison, David Holmgren, and other advocates of permaculture. In this context a swale is usually a water-harvesting ditch on contour, also called a contour bund. On land: Swales as used in permaculture are designed to slow and capture runoff by spreading it horizontally across the landscape (along an elevation contour line), facilitating runoff infiltration into the soil. This archetypal form of swale is a dug-out, sloped, often grassed or reeded "ditch" or "lull" in the landform. One option involves piling the spoil onto a new bank on the still lower slope, in which case a bund or berm is formed, mitigating the natural (and often hardscape-increased) risks to slopes below and to any linked watercourse from flash flooding. In arid and seasonally dry places, vegetation (existing or planted) in the swale benefits heavily from the concentration of runoff. Trees and shrubs along the swale can provide shade and mulch which decrease evaporation. On beaches: The term "swale" or "beach swale" is also used to describe long, narrow, usually shallow troughs between ridges or sandbars on a beach, that run parallel to the shoreline.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Complete Champion** Complete Champion: Complete Champion is a supplement for the 3.5 edition of the Dungeons & Dragons fantasy role-playing game. Contents: Somewhat of a sequel to Complete Divine, the book is geared toward characters who fight for a cause. Publication history: Complete Champion was written by Ed Stark, Chris Thomasson, Rhiannon Louve, Ari Marmell, and Gary Astleford, and was published in May 2007. Cover art was by Eric Polak, with interior art by Steve Argyle, Stephen Belledin, Miguel Coimbra, Thomas Denmark, Eric Deschamps, Wayne England, David Griffith, Fred Hooper, Ralph Horsley, Howard Lyon, Eva Widermann, and Sam Wood. Publication history: Thomasson defined the use of "champion" in the title to mean a "champion of faith", rather than in the more general sense of the term: "All characters have the potential to be champions; this book is focused on the divine, specifically divine magic and the religions of D&D -- the goal we had was to make those elements of the game more accessible to characters other than paladins, clerics and druids." Reception: Viktor Coble listed the entire Complete series - including Complete Adventurer, Complete Divine, Complete Warrior, Complete Arcane, Complete Champion, and Complete Mage - as #9 on CBR's 2021 "D&D: 10 Best Supplemental Handbooks" list, stating that "These books took a deep dive into specific class types. They expanded on what it meant to be that kind of class, gave informative prestige classes, extra abilities, and even new concepts for playing them."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Regummed stamp** Regummed stamp: In philately, a regummed stamp is any stamp without gum, or without full gum, that has had new gum applied to the back to increase its value. Unused stamps with full original gum (OG) on the back are worth more than stamps without gum or without complete gum, for instance those that have been mounted using a stamp hinge. Regummed stamp: Until the 1970s, it was common for stamps to be mounted using hinges and there was little difference in value between unmounted and previously hinged stamps. Since then, a significant price difference has developed between the two types of stamps and unscrupulous stamp collectors and dealers have been tempted to regum previously mounted stamps to make them appear as if they have full original gum. Regummed stamp: Regumming may take place just to remove traces of mounting or to disguise more serious faults or repairs, in which case the whole stamp may be given new gum. Regumming to hide repaired faults is not new but regumming to remove traces of mounting is a relatively recent development. Detecting regummed stamps: Such alterations are often easily detected with the naked eye due to differences in colour or texture between the old and new gum. In addition, a stamp where all or a large part of the gum is fresh may sometimes be detected by placing it in the palm of the hand, where warmth will cause the stamp to curl in a different direction to the same stamp with the original gum. Detecting regummed stamps: Another test is to use a magnifying glass to see if gum has gathered on the perforation edges of the stamp. This will not occur with an original gum stamp, as the normal method of stamp production is to perforate a whole sheet of stamps after the sheet is first gummed and then printed. This means the gum is evenly distributed on the back of all the stamps contained on the sheet before perforating. The perforating process does not perfectly cut the paper. Fine tears develop along the round areas of the perforation edges because of this process. Also, when the stamps are torn to separate them from the sheets, fine tears along the perforation tips develop. These fine tears from perforating and tearing leave tiny hairs descending from the edge of the stamp. Examination of these hairs will help detect a regummed stamp. With this test it is important not to look at the gum, but rather the perforation edges. The edges will be somewhat hairy on an original gummed stamp, but on a regummed stamp the hairs will be glued together in varying degrees. This gum may also gather unevenly at the perforation edges. On expert regumming jobs the person making the alteration may file the perforations to restore the original look to the stamp's edge, but they almost always leave an imperfection or two that may only be caught by an expert. Detecting regummed stamps: If none of the normal tests are definitive then collectors will often obtain an opinion about the stamp's status from an expert, who will examine the stamp and may even analyse it chemically. This is known as having the stamp expertised.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Old-growth forest** Old-growth forest: An old-growth forest, sometimes synonymous with primary forest, virgin forest, late seral forest, primeval forest, first-growth forest, or mature forest, is a forest that has attained great age without significant disturbance, and thereby exhibits unique ecological features, and might be classified as a climax community. The Food and Agriculture Organization of the United Nations defines primary forests as naturally regenerated forests of native tree species where there are no clearly visible indications of human activity and the ecological processes are not significantly disturbed. About one-third (34 percent) of the world's forests are primary forests. Old-growth features include diverse tree-related structures that provide diverse wildlife habitats and increase the biodiversity of the forested ecosystem. Virgin or first-growth forests are old-growth forests that have never been logged. The concept of diverse tree structure includes multi-layered canopies and canopy gaps, greatly varying tree heights and diameters, and diverse tree species and classes and sizes of woody debris. Old-growth forest: As of 2020, the world has 1.11 billion ha of primary forest remaining. Combined, three countries (Brazil, Canada, and Russia) host more than half (61 percent) of the world's primary forest. The area of primary forest has decreased by 81 million ha since 1990, but the rate of loss more than halved in 2010–2020 compared with the previous decade. Old-growth forests are valuable for economic reasons and for the ecosystem services they provide. This can be a point of contention when some in the logging industry desire to harvest valuable timber from the forests, destroying the forests in the process, to generate short-term profits, while environmentalists seek to preserve the forests in their pristine state for benefits such as water purification, flood control, weather stability, maintenance of biodiversity, and nutrient cycling. Moreover, old-growth forests are more efficient at sequestering carbon than newly planted forests and fast-growing timber plantations, so preserving these forests is important for climate change mitigation. Characteristics: Old-growth forests tend to have large trees and standing dead trees, multilayered canopies with gaps that result from the deaths of individual trees, and coarse woody debris on the forest floor. A forest regenerated after a severe disturbance, such as wildfire, insect infestation, or harvesting, is often called second-growth or 'regeneration' until enough time passes for the effects of the disturbance to be no longer evident. Depending on the forest, this may take from a century to several millennia. Hardwood forests of the eastern United States can develop old-growth characteristics in 150–500 years. In British Columbia, Canada, old growth is defined as 120 to 140 years of age in the interior of the province where fire is a frequent and natural occurrence. In British Columbia's coastal rainforests, old growth is defined as trees more than 250 years old, with some trees reaching more than 1,000 years of age. In Australia, eucalypt trees rarely exceed 350 years of age due to frequent fire disturbance. Forest types have very different development patterns, natural disturbances and appearances. A Douglas-fir stand may grow for centuries without disturbance while an old-growth ponderosa pine forest requires frequent surface fires to reduce the shade-tolerant species and regenerate the canopy species.
In the boreal forest of Canada, catastrophic disturbances like wildfires minimize opportunities for major accumulations of dead and downed woody material and other structural legacies associated with old growth conditions. Typical characteristics of old-growth forest include the presence of older trees, minimal signs of human disturbance, mixed-age stands, presence of canopy openings due to tree falls, pit-and-mound topography, down wood in various stages of decay, standing snags (dead trees), multilayered canopies, intact soils, a healthy fungal ecosystem, and presence of indicator species. Characteristics: Biodiversity Old-growth forests are often biologically diverse, and home to many rare species, threatened species, and endangered species of plants and animals, such as the northern spotted owl, marbled murrelet and fisher, making them ecologically significant. Levels of biodiversity may be higher or lower in old-growth forests compared to that in second-growth forests, depending on specific circumstances, environmental variables, and geographic variables. Logging in old-growth forests is a contentious issue in many parts of the world. Excessive logging reduces biodiversity, affecting not only the old-growth forest itself, but also indigenous species that rely upon old-growth forest habitat. Characteristics: Mixed age Some forests in the old-growth stage have a mix of tree ages, due to a distinct regeneration pattern for this stage. New trees regenerate at different times from each other, because each of them has a different spatial location relative to the main canopy, hence each one receives a different amount of light. The mixed age of the forest is an important criterion in ensuring that the forest is a relatively stable ecosystem in the long term. A climax stand that is uniformly aged becomes senescent and degrades within a relatively short time to result in a new cycle of forest succession. Thus, uniformly aged stands are less stable ecosystems. Boreal forests are more uniformly aged, as they are normally subject to frequent stand-replacing wildfires. Characteristics: Canopy openings Forest canopy gaps are essential in creating and maintaining mixed-age stands. Also, some herbaceous plants only become established in canopy openings, but persist beneath an understory. Openings are a result of tree death due to small impact disturbances such as wind, low-intensity fires, and tree diseases. Old-growth forests are unique, usually having multiple horizontal layers of vegetation representing a variety of tree species, age classes, and sizes, as well as "pit and mound" soil shape with well-established fungal nets. Because old-growth forest is structurally diverse, it provides higher-diversity habitat than forests in other stages. Thus, sometimes higher biological diversity can be sustained in old-growth forests, or at least a biodiversity that is different from other forest stages. Characteristics: Topography The characteristic topography of much old-growth forest consists of pits and mounds. Mounds are caused by decaying fallen trees, and pits (tree throws) by the roots pulled out of the ground when trees fall due to natural causes, including being pushed over by animals. Pits expose humus-poor, mineral-rich soil and often collect moisture and fallen leaves, forming a thick organic layer that is able to nurture certain types of organisms. Mounds provide a place free of leaf inundation and saturation, where other types of organisms thrive. 
Characteristics: Standing snags Standing snags provide food sources and habitat for many types of organisms. In particular, many species of dead-wood predators such as woodpeckers must have standing snags available for feeding. In North America, the spotted owl is well known for needing standing snags for nesting habitat. Characteristics: Decaying ground layer Fallen timber, or coarse woody debris, contributes carbon-rich organic matter directly to the soil, providing a substrate for mosses, fungi, and seedlings, and creating microhabitats by creating relief on the forest floor. In some ecosystems such as the temperate rain forest of the North American Pacific coast, fallen timber may become nurse logs, providing a substrate for seedling trees. Characteristics: Soil Intact soils harbor many life forms that rely on them. Intact soils generally have very well-defined horizons, or soil profiles. Different organisms may need certain well-defined soil horizons to live, while many trees need well-structured soils free of disturbance to thrive. Some herbaceous plants in northern hardwood forests must have thick duff layers (which are part of the soil profile). Fungal ecosystems are essential for efficient in-situ recycling of nutrients back into the entire ecosystem. Definitions: Ecological definitions Stand age definition Stand age can also be used to categorize a forest as old-growth. For any given geographical area, the average time since disturbance until a forest reaches the old growth stage can be determined. This method is useful, because it allows quick and objective determination of forest stage. However, this definition does not provide an explanation of forest function. It just gives a useful number to measure. So, some forests may be excluded from being categorized as old-growth even if they have old-growth attributes just because they are too young. Also, older forests can lack some old-growth attributes and be categorized as old-growth just because they are so old. The idea of using age is also problematic, because human activities can influence the forest in varied ways. For example, after the logging of 30% of the trees, less time is needed for old-growth to come back than after removal of 80% of the trees. Although depending on the species logged, the forest that comes back after a 30% harvest may consist of proportionately fewer hardwood trees than a forest logged at 80% in which the light competition by less important tree species does not inhibit the regrowth of vital hardwoods. Definitions: Forest dynamics definition From a forest dynamics perspective, old-growth forest is in a stage that follows understory reinitiation stage. A review of the stages helps to understand the concept: Stand-replacing: Disturbance hits the forest and kills most of the living trees. Stand-initiation: A population of new trees becomes established. Definitions: Stem-exclusion: Trees grow higher and enlarge their canopy, thus competing for the light with neighbors; light competition mortality kills slow-growing trees and reduces forest density, which allows surviving trees to increase in size. Eventually, the canopies of neighboring trees touch each other and drastically lower the amount of light that reaches lower layers. Due to that, the understory dies and only very shade-tolerant species survive. Definitions: Understory reinitiation: Trees die from low-level mortality, such as windthrow and diseases. Individual canopy gaps start to appear and more light can reach the forest floor. 
Hence, shade-tolerant species can establish in the understory. Definitions: Old-growth: Main canopy trees become older and more of them die, creating even more gaps. Since the gaps appear at different times, the understory trees are at different growth stages. Furthermore, the amount of light that reaches each understory tree depends on its position relative to the gap. Thus, each understory tree grows at a different rate. The differences in establishment timing and in growth rate create a population of understory trees that is variable in size. Eventually, some understory trees grow to become as tall as the main canopy trees, thereby filling the gap. This perpetuation process is typical for the old-growth stage. This, however, does not mean that the forest will be old-growth forever. Generally, three futures for an old-growth stage forest are possible: 1) the forest will be hit by a disturbance and most of the trees will die; 2) unfavorable conditions for new trees to regenerate will occur, in which case the old trees will die and smaller plants will create woodland; and 3) the regenerating understory trees are different species from the main canopy trees, in which case the forest will switch back to the stem-exclusion stage, but with shade-tolerant tree species. A forest in the old-growth stage can be stable for centuries, but the length of this stage depends on the forest's tree composition and the climate of the area. For example, frequent natural fires do not allow boreal forests to be as old as coastal forests of western North America. Importantly, while the stand switches from one tree community to another, it will not necessarily go through an old-growth stage between those stages. Some tree species have a relatively open canopy. That allows more shade-tolerant tree species to establish below even before the understory reinitiation stage. The shade-tolerant trees eventually outcompete the main canopy trees in the stem-exclusion stage. Therefore, the dominant tree species will change, but the forest will still be in the stem-exclusion stage until the shade-tolerant species reach the old-growth stage.
Definitions: Social and cultural definitions Common cultural definitions and common denominators regarding what comprises old-growth forest, and the variables that define, constitute and embody old-growth forests include: The forest habitat possesses relatively mature, old trees; The tree species present have long continuity on the same site; The forest itself is a remnant natural area that has not been subjected to significant disturbance by mankind, altering the appearance of the landscape and its ecosystems, has not been subjected to logging (or other types of development such as road networks or housing), and has inherently progressed per natural tendencies. Additionally, in mountainous, temperate landscapes (such as Western North America), and specifically in areas of high-quality soil and a moist, relatively mild climate, some old-growth trees have attained notable height and girth (DBH: diameter at breast height), accompanied by notable biodiversity in terms of the species supported. Therefore, for most people, the physical size of the trees is the most recognized hallmark of old-growth forests, even though the ecologically productive areas that support such large trees often comprise only a very small portion of the total area that has been mapped as old-growth forest. (In high-altitude, harsh climates, trees grow very slowly and thus remain at a small size. Such trees also qualify as old growth in terms of how they are mapped, but are rarely recognized by the general public as such.) The debate over old-growth definitions has been inextricably linked with a complex range of social perceptions about wilderness preservation, biodiversity, aesthetics, and spirituality, as well as economic or industrial values. Definitions: Economic definitions In logging terms, old-growth stands are past the economic optimum for harvesting, usually between 80 and 150 years depending on the species. Old-growth forests were often given harvesting priority because they had the most commercially valuable timber, they were considered to be at greater risk of deterioration through root rot or insect infestation, and they occupied land that could be used for more productive second-growth stands. In some regions, old growth is not the most commercially viable timber; in British Columbia, Canada, harvesting in the coastal region is moving to younger second-growth stands. Definitions: Other definitions A 2001 scientific symposium in Canada found that defining old growth in a scientifically meaningful, yet policy-relevant, manner presents some basic difficulties, especially if a simple, unambiguous, and rigorous scientific definition is sought.
Symposium participants identified some attributes of late-successional, temperate-zone, old-growth forest types that could be considered in developing an index of "old-growthness" and for defining old-growth forests. Structural features: uneven or multi-aged stand structure, or several identifiable age cohorts; average age of dominant species approaching half the maximum longevity for the species (about 150+ years for most shade-tolerant trees); some old trees at close to their maximum longevity (ages of 300+ years); presence of standing dead and dying trees in various stages of decay; fallen, coarse woody debris; natural regeneration of dominant tree species within canopy gaps or on decaying logs. Compositional features: long-lived, shade-tolerant tree species associations (e.g., sugar maple, American beech, yellow birch, red spruce, eastern hemlock, white pine). Process features: characterized by small-scale disturbances creating gaps in the forest canopy; a long natural rotation for catastrophic or stand-replacing disturbance (e.g., a period greater than the maximum longevity of the dominant tree species); minimal evidence of human disturbance; final stages of stand development before a relatively steady state is reached. Importance: Old-growth forests often contain rich communities of plants and animals within the habitat due to the long period of forest stability. These varied and sometimes rare species may depend on the unique environmental conditions created by these forests. Old-growth forests serve as a reservoir for species, which cannot thrive or easily regenerate in younger forests, so they can be used as a baseline for research. Plant species that are native to old-growth forests may someday prove to be invaluable towards curing various human ailments, as has been realized in numerous plants in tropical rainforests. Importance: Old-growth forests also store large amounts of carbon above and below the ground (either as humus, or in wet soils as peat). They collectively represent a very significant store of carbon. Destruction of these forests releases this carbon as greenhouse gases, and may increase the risk of global climate change. Although old-growth forests serve as a global carbon dioxide sink, they are not protected by international treaties, because it is generally thought that aging forests cease to accumulate carbon. However, in forests between 15 and 800 years of age, net ecosystem productivity (the net carbon balance of the forest including soils) is usually positive; old-growth forests accumulate carbon for centuries and contain large quantities of it. Ecosystem services: Old-growth forests provide ecosystem services that may be far more important to society than their use as a source of raw materials. These services include making breathable air, making pure water, carbon storage, regeneration of nutrients, maintenance of soils, pest control by insectivorous bats and insects, micro- and macro-climate control, and the storage of a wide variety of genes. Climatic impacts: The effects of old-growth forests in relation to global warming have been addressed in various studies and journals.
Climatic impacts: The Intergovernmental Panel on Climate Change said in its 2007 report: "In the long term, a sustainable forest management strategy aimed at maintaining or increasing forest carbon stocks, while producing an annual sustained yield of timber, fibre, or energy from the forest, will generate the largest sustained mitigation benefit." Old-growth forests are often perceived to be in equilibrium or in a state of decay. However, evidence from analysis of carbon stored above ground and in the soil has shown that old-growth forests are more productive at storing carbon than younger forests. Forest harvesting has little or no effect on the amount of carbon stored in the soil, but other research suggests that older forests with trees of many ages, multiple layers, and little disturbance have the highest capacity for carbon storage. As trees grow, they remove carbon from the atmosphere, and protecting these pools of carbon prevents emissions into the atmosphere. Proponents of harvesting the forest argue that the carbon stored in wood is available for use as biomass energy (displacing fossil fuel use), although using biomass as a fuel produces air pollution in the form of carbon monoxide, nitrogen oxides, volatile organic compounds, particulates, and other pollutants, in some cases at levels above those from traditional fuel sources such as coal or natural gas. Each forest has a different potential to store carbon. For example, this potential is particularly high in the Pacific Northwest, where forests are relatively productive, trees live a long time, decomposition is relatively slow, and fires are infrequent. The differences between forests must, therefore, be taken into consideration when determining how they should be managed to store carbon. Old-growth forests have the potential to affect climate change, but climate change is also affecting old-growth forests. As the effects of global warming grow more substantial, the ability of old-growth forests to sequester carbon is affected. Climate change has been shown to affect the mortality of some dominant tree species, as observed in the Korean pine, and to alter the composition of species when forests were surveyed over 10- and 20-year periods, which may disrupt the overall productivity of the forest.

Logging: According to the World Resources Institute, as of January 2009, only 21% of the original old-growth forests that once existed on Earth remain. An estimated one-half of Western Europe's forests were cleared before the Middle Ages, and 90% of the old-growth forests that existed in the contiguous United States in the 1600s have been cleared. The large trees in old-growth forests are economically valuable and have been subject to aggressive logging throughout the world, which has led to many conflicts between logging companies and environmental groups. From certain forestry perspectives, fully maintaining an old-growth forest is seen as extremely unproductive economically, as timber can only be collected from fallen trees, and potentially damaging to nearby managed groves by creating environments conducive to root rot; it may be more productive to cut the old growth down and replace the forest with a younger one. The island of Tasmania, just off the southeast coast of Australia, has the largest amount of temperate old-growth rainforest reserves in Australia, with around 1,239,000 hectares in total.
While the local Regional Forest Agreement (RFA) was originally designed to protect much of this natural wealth, many of the RFA old-growth forests protected in Tasmania consist of trees of little use to the timber industry. RFA old-growth and high-conservation-value forests that contain species highly desirable to the forestry industry have been poorly preserved. Only 22% of Tasmania's original tall-eucalypt forests managed by Forestry Tasmania have been reserved. Ten thousand hectares of tall-eucalypt RFA old-growth forest have been lost since 1996, predominantly as a result of industrial logging operations. In 2006, about 61,000 hectares of tall-eucalypt RFA old-growth forests remained unprotected. Recent logging attempts in the Upper Florentine Valley have sparked a series of protests and media attention over the arrests that have taken place in this area. Additionally, Gunns Limited, the primary forestry contractor in Tasmania, has recently come under criticism from political and environmental groups over its practice of woodchipping timber harvested from old-growth forests.

Management: Increased understanding of forest dynamics in the late 20th century led the scientific community to identify a need to inventory, understand, manage, and conserve representative examples of old-growth forests with their associated characteristics and values. The literature on old growth and its management is inconclusive about the best way to characterize the true essence of an old-growth stand. A better understanding of natural systems has resulted in new ideas about forest management, such as the idea that managed natural disturbances should be designed to achieve the landscape patterns and habitat conditions that are normally maintained in nature. This coarse-filter approach to biodiversity conservation recognizes ecological processes and provides for a dynamic distribution of old growth across the landscape.

Management: All seral stages (young, intermediate, and old) support forest biodiversity. Plants and animals rely on different forest ecosystem stages to meet their habitat needs. In Australia, the Regional Forest Agreement (RFA) attempted to prevent the clearfelling of defined "old-growth forests". This led to struggles over what constitutes "old growth". For example, in Western Australia, the timber industry tried to limit the area of old growth in the karri forests of the Southern Forests Region; this led to the creation of the Western Australian Forests Alliance, the splitting of the Liberal Government of Western Australia and the election of the Gallop Labor Government. Old-growth forests in this region have now been placed inside national parks. A small proportion of old-growth forest also exists in South-West Australia and is protected by federal laws from logging, which has not occurred there for more than 20 years. In British Columbia, Canada, old-growth forests must be maintained in each of the province's ecological units to meet biodiversity needs.

Locations of remaining tracts: In 2006, Greenpeace identified that the world's remaining intact forest landscapes are distributed among the continents as follows: 35% in South America, where the Amazon rainforest is mainly located in Brazil, which clears a larger area of forest annually than any other country in the world; 28% in North America, which harvests 10,000 km2 of ancient forests every year, and where many of the fragmented forests of southern Canada and the United States lack adequate animal travel corridors and functioning ecosystems for large mammals.
Most of the remaining old-growth forests in the contiguous United States and Alaska are on public land. 19% is in northern Asia, home to the largest boreal forest in the world. 8% is in Africa, which has lost most of its intact forest landscapes in the last 30 years; the timber industry and local governments are responsible for destroying huge areas of intact forest landscapes and continue to be the single largest threat to these areas. 7% is in South Asia Pacific, where the Paradise Forests are being destroyed faster than any other forest on Earth; much of the large, intact forest landscape has already been cut down, 72% in Indonesia and 60% in Papua New Guinea. Less than 3% is in Europe, where more than 150 km2 of intact forest landscapes are cleared every year and the last areas of the region's intact forest landscapes, in European Russia, are shrinking rapidly. In the United Kingdom, they are known as ancient woodlands.

Sources: This article incorporates text from free content works licensed under CC BY-SA 3.0: Global Forest Resources Assessment 2020: Key findings (FAO) and The State of the World's Forests 2020. In brief – Forests, biodiversity and people (FAO & UNEP).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anyone for tennis?** Anyone for tennis?: The phrase "Anyone for tennis?" (also given as "Tennis, anyone?") is an English-language idiom primarily of the 20th century. The phrase is used to invoke a stereotype of shallow, leisured, upper-class toffs (tennis was, particularly before the widespread advent of public courts in the later 20th century, seen as a posh game for the rich, with courts popular at country clubs and private estates). It is a stereotypical entrance or exit line given to a young man of this class in a superficial drawing-room comedy.

Usage: A close paraphrase of the saying was used in George Bernard Shaw's 1914 drawing-room comedy Misalliance, in which Johnny Tarleton asks "Anybody on for a game of tennis?" (An 1891 story in the satirical magazine Punch put a generally similar notion in the mouth of a similar type of character: "I'm going to see if there's anyone on the tennis-court, and get a game if I can. Ta-ta!".) "Anyone for tennis?" is particularly associated with the early career of Hollywood star Humphrey Bogart, and he is cited as the first person to use the phrase on stage. At the start of his career, in the 1920s and early 1930s, Bogart appeared in many Broadway plays in what Jeffrey Meyers characterized as "charming and fatuous roles – in [one of] which he is supposed to have said 'Tennis, anyone?'". If Bogart ever did speak the line, it would presumably have been in the 1925 play Hell's Bells, set at the Tanglewood Lodge in New Dauville, Connecticut. Bogart claimed that his line in the play was "It's forty-love outside. Anyone care to watch?", and that indeed is what is printed in the script. However, according to Darwin Porter, director John Hayden crossed out that line and replaced it with "Tennis anyone?" before opening night, and several observers, reportedly including Louella Parsons and Richard Watts Jr., have asserted that he did say it. Erskine Johnson, in a 1948 interview, reports Bogart as saying "I used to play juveniles on Broadway and came bouncing into drawing rooms with a tennis racket under my arm and the line: 'Tennis anybody?' It was a stage trick to get some of the characters off the set so the plot could continue." But Bogart's usual stance was denial of using that precise phrase ("The lines I had were corny enough, but I swear to you, never once did I have to say 'Tennis, anyone?'"), while averring that it did characterize some of his early roles generally.

Usage: Though [Bogart's] early parts were as juveniles, he sometimes called them 'Tennis, anyone?' parts and that is why he is given credit for bringing that phrase into the language. He explained juveniles this way: 'The playwright gets five or six characters into a scene and doesn't know how to get them offstage. So what does he do? He drags in the juvenile, who has been waiting in the wings for just such a chance. He comes in, tennis racquet under his arm, and says, "Tennis, anyone?" That, of course, solves the playwright's problem. The player whom the author wants to get rid of for the time being accepts the suggestion. The leading lady, who is due for a love scene with the leading man, declines. So the others exit and all is ready for the love scene between the leading lady and man. It doesn't always have to be tennis. Sometimes it's golf or riding, but tennis is better because it gives the young man a chance to look attractive in spotless white flannels.'
The phrase continued to drift through media in the 20th century and, to a diminished extent, into the 21st, often at random or simply because tennis generally is the subject, rather than specifically to invoke or mock vapid toffs. It appears in the lyric of the "Beautiful Girl Montage" in the classic 1952 musical movie Singin' in the Rain; in the Daffy Duck cartoons Rabbit Fire, Drip-Along Daffy and The Ducksters (1950–1951); and in the lyric and title of the 1968 song "Anyone for Tennis" by the British rock band Cream, which was the theme song of the film The Savage Seven. William Holden's shallow rich playboy character jokes "tennis, anyone?" when flirting with Joan Vohs's character in the 1954 film Sabrina (in which Bogart plays another character). The television series Anyone for Tennyson? (1976–1978) riffs on the name, as does the 1981 stage play Anyone for Denis? "Anyone for Tennis" is the title of the instrumental B-side of Men at Work's 1981 single Who Can It Be Now? The phrase also occurs in Monty Python's spoof sketch Sam Peckinpah's "Salad Days".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Carboxy-lyases** Carboxy-lyases: Carboxy-lyases, also known as decarboxylases, are carbon–carbon lyases that add or remove a carboxyl group from organic compounds. These enzymes catalyze the decarboxylation of amino acids, beta-keto acids and alpha-keto acids. Classification and nomenclature: Carboxy-lyases are categorized under EC number 4.1.1. Usually, they are named after the substrate whose decarboxylation they catalyze; for example, pyruvate decarboxylase catalyzes the decarboxylation of pyruvate. Examples: aromatic-L-amino-acid decarboxylase, glutamate decarboxylase, histidine decarboxylase, ornithine decarboxylase, phosphoenolpyruvate carboxylase, pyruvate decarboxylase, RuBisCO (the only carboxylase that leads to a net fixation of carbon dioxide), uridine monophosphate synthetase, uroporphyrinogen III decarboxylase, and enoyl-CoA carboxylases/reductases (ECRs).
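As an illustrative aside (not taken from the article above), the reaction catalyzed by pyruvate decarboxylase shows the general pattern of this enzyme class: the carboxyl group of the substrate is removed and released as carbon dioxide.

$$\mathrm{CH_3COCOO^-\ (pyruvate)} + \mathrm{H^+} \;\longrightarrow\; \mathrm{CH_3CHO\ (acetaldehyde)} + \mathrm{CO_2}$$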
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Frisk (confectionery)** Frisk (confectionery): Frisk is the name of a line of breath mint candies produced by Frisk International and distributed worldwide by Perfetti Van Melle. Frisk mints are small, pellet-like mint candies contained in a metal cartridge. History: Frisk was invented in 1986 by a Belgian entrepreneur who, in collaboration with a pharmaceutical company, developed the formula for a particularly strong mint-flavoured candy. Initially, the product was sold exclusively in pharmacies in Belgium; the market was then extended to the Netherlands, Canada and Japan. In the latter country especially, Frisk gained considerable success, to the point that in 1996 it became the first imported product to be recognized as the "Best Food Product of the Year" in Japan. Since 1995, the brand has been distributed by Perfetti Van Melle. There are several flavour varieties. The classic box changed in 2004 to a box with a sliding opening, but the biggest change in visual identity came in 2009, with the introduction of the metal box and the new triangular format of the pellets. Perfetti announced in 2014 that they had removed titanium dioxide, an additive whose safety has been questioned, from all their products, including Frisk tablets. In 2016, Perfetti Van Melle announced the closure of the original Frisk factory in Haasrode, which meant the loss of 38 jobs. In 2019, the brand signed a partnership with the audiovisual production company Myvisto to promote two of its new sugar-free products, Frisk Power Mints and Frisk Clean Breath, in advertisements for the internet; according to Perfetti Van Melle, the campaign was a success. In 2020, Frisk introduced Frisk White. Their products are now sold in Japan, France, the Netherlands, Italy, Denmark, Canada, Belgium and Norway. Their largest market is Japan. In popular culture: In the Japanese television show Downtown no Gaki no Tsukai ya Arahende!!, comedian Endō Shōzo performs a running gag during the show's "Absolutely Tasty" cooking segments, in which he uses Frisk candies in the hope of obtaining a sponsorship from Frisk. Instead of enhancing the dishes he creates, this often leads to comedic results.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mind projection fallacy** Mind projection fallacy: The mind projection fallacy is an informal fallacy first described by physicist and Bayesian philosopher E. T. Jaynes. In its first, "positive" form, it occurs when someone thinks that the way they see the world reflects the way the world really is, going as far as assuming the real existence of imagined objects. That is, someone's subjective judgments are "projected" to be inherent properties of an object, rather than being related to personal perception. One consequence is that others may be assumed to share the same perception, or to be irrational or misinformed if they do not. The second, "negative" form of the fallacy, as described by Jaynes, occurs when someone assumes that their own lack of knowledge about a phenomenon (a fact about their state of mind) means that the phenomenon is not or cannot be understood (a fact about reality; see also Map and territory). Jaynes used this concept to argue against the Copenhagen interpretation of quantum mechanics. He described the fallacy as follows: [I]n studying probability theory, it was vaguely troubling to see reference to "gaussian random variables", or "stochastic processes", or "stationary time series", or "disorder", as if the property of being gaussian, random, stochastic, stationary, or disorderly is a real property, like the property of possessing mass or length, existing in Nature. Indeed, some seek to develop statistical tests to determine the presence of these properties in their data...

Mind projection fallacy: Once one has grasped the idea, one sees the Mind Projection Fallacy everywhere; what we have been taught as deep wisdom, is stripped of its pretensions and seen to be instead a foolish non sequitur. The error occurs in two complementary forms, which we might indicate thus: (A) (My own imagination) → (Real property of Nature), [or] (B) (My own ignorance) → (Nature is indeterminate)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**All-silica fiber** All-silica fiber: All-silica fiber, or silica-silica fiber, is an optical fiber whose core and cladding are made of silica glass. The refractive index of the core glass is higher than that of the cladding. These fibers are typically step-index fibers. The cladding of an all-silica fiber should not be confused with the polymer overcoat of the fiber. All-silica fiber is usually used as a medium for transmitting optical signals. It is of technical interest in the fields of communications, broadcasting and television, due to its physical properties of low transmission loss, large bandwidth and light weight.

Applications: The practical application of optical fibers in various optical networks determines the requirements for their technical performance. For short-distance fiber-optic transmission networks, multi-mode optical fiber is suitable for laser transmission and wider bandwidths, so as to support a larger capacity of serial signal transmission. For long-distance submarine optical cable transmission systems, in order to reduce the number of expensive optical fiber amplifiers, it is important to consider using optical fibers with a large mode-field area and negative dispersion to increase the transmission distance. The focus of land-based long-distance transmission systems is to transmit as many wavelengths as possible, each at as high a rate as possible; even if the variation of fiber dispersion with wavelength is minimal, dispersion still needs to be managed. For local area networks, since the transmission distance is relatively short, the main consideration is the cost of the optical network rather than the cost of transmission; in other words, it is necessary to solve the add/drop multiplexing problem in the optical fiber transmission system while minimizing the cost per added or dropped wavelength.

Applications: Dispersion Compensating Fiber (DCF) Fiber dispersion is a problem that must be avoided in communication networks, and one that needs to be solved in long-distance transmission systems. In general, fiber dispersion includes two parts: material dispersion and waveguide (structure) dispersion. Material dispersion depends on the dispersion of the silica master batch and dopants used to make the fiber. Waveguide dispersion arises from the tendency of the effective refractive index of a mode to vary with wavelength. Dispersion compensating fiber is a technology used for dispersion management in transmission systems.

Applications: Non-Dispersion Shifted Fiber (USF) Non-dispersion-shifted fiber (USF) is dominated by positive material dispersion; combined with a small waveguide dispersion, it has zero dispersion near 1310 nm. Dispersion-shifted fiber (DSF) and non-zero dispersion-shifted fiber (NZDSF) deliberately design the refractive index profile of the fiber so that the waveguide dispersion offsets the material dispersion, moving the zero-dispersion wavelength of DSF to around 1550 nm once material and waveguide dispersion are added together. The 1550 nm wavelength is the most widely used wavelength in current communication networks.
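As an illustrative sketch (not part of the original article), the total chromatic dispersion at a wavelength can be written as the sum of the material and waveguide contributions, and a dispersion-compensating fiber is chosen so that the accumulated dispersion of a link roughly cancels:

$$D_{\mathrm{total}}(\lambda) = D_{\mathrm{mat}}(\lambda) + D_{\mathrm{wg}}(\lambda), \qquad D_{\mathrm{SMF}}\, L_{\mathrm{SMF}} + D_{\mathrm{DCF}}\, L_{\mathrm{DCF}} \approx 0$$

Using typical textbook values (assumptions, not measurements of any specific fiber), standard single-mode fiber near 1550 nm has D of roughly +17 ps/(nm·km), so an 80 km span accumulates about +1360 ps/nm, and a compensating fiber of about −100 ps/(nm·km) would need to be on the order of 80 × 17 / 100 ≈ 14 km long to cancel it.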
In submarine optical cable transmission systems, two kinds of optical fibers, with positive and negative dispersion, are combined to form a dispersion-managed transmission system. With the increase in the distance and capacity of transmission systems, a large number of wavelength division multiplexing (WDM) and dense wavelength division multiplexing (DWDM) systems have been put into use. In these systems, in order to perform dispersion compensation, double-clad and triple-clad DCF with refractive index distributions that can work in the C-band and L-band have been developed.

Applications: Amplification Fiber Amplification fibers, such as erbium-doped fiber (EDF) and thulium-doped fiber (TDF), can be made by doping rare earth elements into the core layer of silica fiber. Amplifying fiber is highly compatible with traditional quartz fiber and also has many advantages such as high output, wide bandwidth and low noise. Fiber amplifiers (such as the EDFA) made of amplifying fibers are the most widely used key components in today's transmission systems.

Applications: Polarization Maintaining Fiber Polarization-maintaining fiber was initially developed for coherent optical transmission and has later been used in fiber optic sensors such as fiber optic gyroscopes. In recent years, with the growing number of multiplexed wavelengths in DWDM transmission systems and the move to higher speeds, polarization-maintaining fiber has been used more widely. Currently, the most widely used type is panda optical fiber (PANDA). PANDA fiber is used for pigtails connected to other fiber optic devices and is used in the system as a whole.

Applications: Single-mode non-stripping optical fiber (SM-NSF) is a new type of optical fiber on which the NSP polyester layer remains on the surface of the fiber cladding, even after the outer coating is stripped, to protect the mechanical properties and high reliability of the optical fiber. SM-NSF fiber and conventional SM fiber have the same outer diameter, eccentricity, and degree of accuracy. It can be widely used as the optical fiber of transmission systems and is an ideal new type of distribution optical fiber.

Applications: Optical Fiber for Deep Ultraviolet (DUV) Light Transmission One of the current research topics for solid-state and gas lasers is laser oscillation technology in the deep ultraviolet region (around 250 nm). Deep ultraviolet light is widely used in the surface treatment of semiconductor substrates, in DNA analysis and testing in biochemistry, and in the treatment of myopia in medicine.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alfred P. Wolf** Alfred P. Wolf: Alfred P. Wolf (February 13, 1923 – December 17, 1998) was an American nuclear and organic chemist. Alfred P. Wolf: Wolf was chairman of the Chemistry Department at Brookhaven National Laboratory, a research professor in the Department of Psychiatry at New York University, and a member of the National Academy of Sciences. The Journal of Nuclear Medicine said that his "discoveries were instrumental in the development of positron emission tomography (PET)" and that he "made pioneering contributions over nearly 50 years in the field of organic radiochemistry". The New York Times said that Wolf "helped create some of today's most sophisticated diagnostic tools" and that he "advanced the field of organic radiochemistry, radiopharmacology and nuclear medicine" throughout his 50-year career. Alfred P. Wolf: The National Academy of Sciences said that "he pioneered the development of labeling techniques that used the reactions of hot atoms". Notable awards and distinctions: 1971, the Nuclear Chemistry Award of the American Chemical Society; 1981, the Society of Nuclear Medicine Paul Aebersold Award; 1983, an honorary doctorate from the Faculty of Mathematics and Science at Uppsala University, Sweden; 1988, elected to the National Academy of Sciences; 1991, the Hevesy Nuclear Medicine Pioneer Award; 1997, the Melvin Calvin Award of the International Isotope Society. Life and career: 1923: born in Manhattan on February 13, 1923. 1944: B.A., chemistry, Columbia University. 1948: M.A., chemistry, Columbia University. 1951: joined Brookhaven National Laboratory. 1952: Ph.D., chemistry, Columbia University. 1957: senior chemist, Brookhaven National Laboratory. 1982: head of the Chemistry Department, Brookhaven National Laboratory.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glucosamine-1-phosphate N-acetyltransferase** Glucosamine-1-phosphate N-acetyltransferase: In enzymology, a glucosamine-1-phosphate N-acetyltransferase (EC 2.3.1.157) is an enzyme that catalyzes the chemical reaction acetyl-CoA + alpha-D-glucosamine 1-phosphate ⇌ CoA + N-acetyl-alpha-D-glucosamine 1-phosphate. Thus, the two substrates of this enzyme are acetyl-CoA and alpha-D-glucosamine 1-phosphate, whereas its two products are CoA and N-acetyl-alpha-D-glucosamine 1-phosphate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:alpha-D-glucosamine-1-phosphate N-acetyltransferase. This enzyme participates in amino sugar metabolism. Structural studies: As of late 2007, three structures have been solved for this class of enzymes, with PDB accession codes 2OI5, 2OI6, and 2OI7.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Infrared Nanospectroscopy (AFM-IR)** Infrared Nanospectroscopy (AFM-IR): AFM-IR (atomic force microscope-infrared spectroscopy) or infrared nanospectroscopy is one of a family of techniques derived from a combination of two parent instrumental techniques. AFM-IR combines the chemical analysis power of infrared spectroscopy with the high spatial resolution of scanning probe microscopy (SPM). The term was first used to denote a method that combined a tuneable free electron laser with an atomic force microscope (AFM, a type of SPM) equipped with a sharp probe, which measured the local absorption of infrared light by a sample with nanoscale spatial resolution. Originally the technique required the sample to be deposited on an infrared-transparent prism and to be less than 1 μm thick. This early setup improved the spatial resolution and sensitivity of photothermal AFM-based techniques from microns to circa 100 nm. Subsequently, the use of modern pulsed optical parametric oscillators and quantum cascade lasers, in combination with top illumination, has made it possible to investigate samples on any substrate with increased sensitivity and spatial resolution. Most recently, AFM-IR has proved capable of acquiring chemical maps and nanoscale-resolved spectra at the single-molecule scale from macromolecular self-assemblies and biomolecules of circa 10 nm diameter, as well as of overcoming limitations of IR spectroscopy by measuring in aqueous liquid environments. By recording the amount of infrared absorption as a function of wavelength or wavenumber, AFM-IR creates an infrared absorption spectrum that can be used to chemically characterize and even identify unknown samples. Recording the infrared absorption as a function of position can be used to create chemical composition maps that show the spatial distribution of different chemical components. Novel extensions of the original AFM-IR technique and earlier techniques have enabled the development of bench-top devices capable of nanometer spatial resolution that do not require a prism and can work with thicker samples, thereby greatly improving ease of use and expanding the range of samples that can be analysed. AFM-IR has achieved lateral spatial resolutions of ca. 10 nm, with a sensitivity down to the scale of a molecular monolayer and of single protein molecules with molecular weight down to 400-600 kDa. AFM-IR is related to techniques such as tip-enhanced Raman spectroscopy (TERS), scanning near-field optical microscopy (SNOM), nano-FTIR and other methods of vibrational analysis with scanning probe microscopy.

History: Early history The earliest measurements combining AFM with infrared spectroscopy were performed in 1999 by Hammiche et al. at the University of Lancaster in the United Kingdom, in an EPSRC-funded project led by M Reading and H M Pollock. Separately, Anderson at the Jet Propulsion Laboratory in the United States made a related measurement in 2000. Both groups used a conventional Fourier transform infrared spectrometer (FTIR) equipped with a broadband thermal source, with the radiation focused near the tip of a probe that was in contact with a sample. The Lancaster group obtained spectra by detecting the absorption of infrared radiation using a temperature-sensitive thermal probe. Anderson took the different approach of using a conventional AFM probe to detect the thermal expansion. He reported an interferogram but not a spectrum; the first infrared spectrum obtained in this way was reported by Hammiche et al.
in 2004: this represented the first proof that spectral information about a sample could be obtained using this approach.

History: Both of these early experiments used a broadband source in conjunction with an interferometer; these techniques could, therefore, be referred to as AFM-FTIR, although Hammiche et al. coined the more general term photothermal microspectroscopy, or PTMS, in their first paper. PTMS has various subgroups, including techniques that measure temperature or measure thermal expansion; that use broadband sources or lasers; that excite the sample using evanescent waves or illuminate it directly from above; and different combinations of these. Fundamentally, they all exploit the photothermal effect. Different combinations of sources, methods of detection and methods of illumination have benefits for different applications. Care should be taken to ensure that it is clear which form of PTMS is being used in each case; currently there is no universally accepted nomenclature. The original technique dubbed AFM-IR, which induced resonant motion in the probe using a free electron laser, has since developed by exploiting the foregoing permutations, so that it has evolved into various forms.

History: The pioneering experiments of Hammiche et al. and Anderson had limited spatial resolution due to thermal diffusion, the spreading of heat away from the region where the infrared light was absorbed. The thermal diffusion length (the distance the heat spreads) is inversely proportional to the square root of the modulation frequency. Consequently, the spatial resolution achieved by the early AFM-IR approaches was around one micron or more, owing to the low modulation frequencies of the incident radiation created by the movement of the mirror in the interferometer. Also, the first thermal probes were Wollaston wire devices originally developed for microthermal analysis (in fact PTMS was originally considered to be one of a family of microthermal techniques). The comparatively large size of these probes also limited spatial resolution. Bozec et al. and Reading et al. used thermal probes with nanoscale dimensions and demonstrated higher spatial resolution. Ye et al. described a MEMS-type thermal probe giving sub-100 nm spatial resolution, which they used for nanothermal analysis. The exploration of laser sources began in 2001, when Hammiche et al. acquired the first spectrum using a tuneable laser (see Spatial resolution improvement with pulsed laser sources).

History: A significant development was the creation by Reading et al. in 2001 of a custom interface that allowed measurements to be made while illuminating the sample from above; this interface focused the infrared beam to a spot of circa 500 μm diameter, close to the theoretical maximum. The use of top-down or top-side illumination has the important benefit that samples of arbitrary thickness can be studied on arbitrary substrates, in many cases without any sample preparation. All subsequent experiments by Hammiche, Pollock, Reading and their co-workers were made using this type of interface, including the instrument constructed by Hill et al. for nanoscale imaging using a pulsed laser. The work of the University of Lancaster group in collaboration with workers from the University of East Anglia led to the formation of a company, Anasys Instruments, to exploit this and related technologies (see Commercialization).
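To make the frequency dependence mentioned above concrete, the following minimal sketch (not taken from the source) evaluates the standard thermal-wave diffusion length, which falls off as the inverse square root of the modulation frequency; the diffusivity value and the frequencies are illustrative assumptions for a typical polymer.

```python
import math

def thermal_diffusion_length(diffusivity_m2_s, modulation_hz):
    """Thermal-wave diffusion length mu = sqrt(D / (pi * f)).

    Heat generated by absorbed, modulated IR light spreads roughly this far,
    so it bounds the spatial resolution of a photothermal measurement.
    """
    return math.sqrt(diffusivity_m2_s / (math.pi * modulation_hz))

D_POLYMER = 1e-7  # m^2/s, a typical polymer thermal diffusivity (illustrative assumption)

for f_hz in (1e2, 1e6, 1e8):  # interferometer-like vs pulsed-laser modulation rates
    mu_nm = thermal_diffusion_length(D_POLYMER, f_hz) * 1e9
    print(f"f = {f_hz:8.0e} Hz  ->  diffusion length ~ {mu_nm:9.1f} nm")
```

At the slow modulation imposed by an interferometer mirror the length comes out in the tens of microns, while megahertz-scale and faster modulation from pulsed sources brings it down to hundreds of nanometres and below, consistent with the micron-scale early results and the 20 nm-30 nm estimate discussed in this section.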
History: Spatial resolution improvement with pulsed laser sources In the first paper on AFM-based infrared, Hammiche et al. outlined the relevant, well-established theoretical considerations that predict that high spatial resolution can be achieved using rapid modulation frequencies, because of the consequent reduction in the thermal diffusion length. They estimated that spatial resolutions in the range of 20 nm-30 nm should be achievable. The most readily available sources that can achieve high modulation frequencies are pulsed lasers: even when the pulse rate is not high, the square wave form of a pulse contains very high modulation frequencies in Fourier space. In 2001, Hammiche et al. used a type of bench-top tuneable, pulsed infrared laser known as an optical parametric oscillator, or OPO, and obtained the first probe-based infrared spectrum with a pulsed laser; however, they did not report any images. Nanoscale spatial resolution AFM-IR imaging using a pulsed laser was first demonstrated by Dazzi et al. at the University of Paris-Sud, France. Dazzi and his colleagues used a wavelength-tuneable free electron laser at the CLIO facility in Orsay, France, to provide an infrared source with short pulses. Like earlier workers, they used a conventional AFM probe to measure thermal expansion but introduced a novel optical configuration: the sample was mounted on an IR-transparent prism so that it could be excited by an evanescent wave. Absorption of short infrared laser pulses by the sample caused rapid thermal expansion that created a force impulse at the tip of the AFM cantilever. The thermal expansion pulse induced transient resonant oscillations of the AFM cantilever probe, which has led to the technique being dubbed photothermal induced resonance (PTIR) by some workers in the field. Some prefer the terms PTIR or PTMS to AFM-IR, as the technique is not necessarily restricted to infrared wavelengths. The amplitude of the cantilever oscillation is directly related to the amount of infrared radiation absorbed by the sample. By measuring the cantilever oscillation amplitude as a function of wavenumber, Dazzi's group was able to obtain absorption spectra from nanoscale regions of the sample. Compared to earlier work, this approach improved spatial resolution because the use of short laser pulses reduced the duration of the thermal expansion pulse to the point that the thermal diffusion lengths can be on the scale of nanometres rather than microns.

History: A key advantage of a tuneable laser source with a narrow wavelength range is the ability to rapidly map the locations of specific chemical components on the sample surface. To achieve this, Dazzi's group tuned their free electron laser source to a wavelength corresponding to the molecular vibration of the chemical of interest, then mapped the cantilever oscillation amplitude as a function of position across the sample. They demonstrated the ability to map chemical composition in E. coli bacteria. They could also visualize polyhydroxybutyrate (PHB) vesicles inside Rhodobacter capsulatus cells and monitor the efficiency of PHB production by the cells.

History: At the University of East Anglia in the UK, as part of an EPSRC-funded project led by M. Reading and S. Meech, Hill and his co-workers followed the earlier work of Reading et al. and Hammiche et al. and measured thermal expansion using an optical configuration that illuminated the sample from above, in contrast to Dazzi et al.
who excited the sample with an evanescent wave from below. Hill also made use of an optical parametric oscillator as the infrared source, in the manner of Hammiche et al. This novel combination of top-side illumination, an OPO source and the measurement of thermal expansion proved capable of nanoscale spatial resolution for infrared imaging and spectroscopy. The use by Hill and co-workers of illumination from above allowed a substantially wider range of samples to be studied than was possible using Dazzi's technique. By introducing the use of a bench-top IR source and top-down illumination, the work of Hammiche, Hill and their co-workers made possible the first commercially viable SPM-based infrared instrument (see Commercialization).

History: Broadband pulsed laser sources Reading et al. have explored the use of a broadband QCL combined with thermal expansion measurements. The inability of thermal broadband sources to achieve high spatial resolution is discussed above (see History): the modulation frequency is limited by the mirror speed of the interferometer, which in turn limits the lateral spatial resolution that can be achieved. When using a broadband QCL, the resolution is limited not by the mirror speed but by the modulation frequency of the laser pulses (or other waveforms). The benefit of using a broadband source is that an image can be acquired that comprises an entire spectrum, or part of a spectrum, for each pixel. This is much more powerful than acquiring images based on a single wavelength. The preliminary results of Reading et al. show that directing a broadband QCL through an interferometer can give an easily detectable response from a conventional AFM probe measuring thermal expansion.

History: Commercialization The AFM-IR technique based on a pulsed infrared laser source was commercialized by Anasys Instruments, a company founded by Reading, Hammiche and Pollock in the United Kingdom in 2004; a sister United States corporation was founded a year later. Anasys Instruments developed its product with support from the National Institute of Standards and Technology and the National Science Foundation. Since free electron lasers are rare and available only at select institutions, a key to enabling a commercial AFM-IR was to replace them with a more compact type of infrared source. Following the lead given by Hammiche et al. in 2001 and Hill et al. in 2008, Anasys Instruments introduced an AFM-IR product in early 2010, using a tabletop laser source based on a nanosecond optical parametric oscillator. The OPO source enabled nanoscale infrared spectroscopy over a tuning range of roughly 1000–4000 cm−1, or 2.5-10 μm.

History: The initial product required samples to be mounted on infrared-transparent prisms, with the infrared light being directed from below in the manner of Dazzi et al. For best operation, this illumination scheme required thin samples, with an optimal thickness of less than 1 μm, prepared on the surface of the prism. In 2013, Anasys released an AFM-IR instrument based on the work of Hill et al. that supported top-side illumination. By eliminating the need to prepare samples on infrared-transparent prisms and relaxing the restriction on sample thickness, the range of samples that could be studied was greatly expanded.
The CEO of Anasys Instruments recognised this achievement by calling it "an exciting major advance" in a letter written to the university and included in the final report of EPSRC project EP/C007751/1. The UEA technique went on to become Anasys Instruments' flagship product.

History: Comparison to related photothermal techniques It is worth noting that the first infrared spectrum obtained by measuring thermal expansion using an AFM was obtained by Hammiche and co-workers without inducing resonant motions in the probe cantilever. In this early example the modulation frequency was too low to achieve high spatial resolution, but there is nothing, in principle, preventing the measurement of thermal expansion at higher frequencies without analysing or inducing resonant behaviour. Possible options for measuring the displacement of the tip, rather than the subsequent propagation of waves along the cantilever, include: interferometry focused at the end of the cantilever where the tip is located; a torsional motion resulting from an offset probe (this would be influenced by the motions of the cantilever only as a second-order effect); and exploiting the fact that the signal from a heated thermal probe is strongly influenced by the position of the tip relative to the surface, which could provide a measurement of thermal expansion that was not strongly influenced by, or dependent upon, resonance. The advantage of a non-resonant method of detection is that any frequency of light modulation could be used, so depth information could be obtained in a controlled way (see below), whereas methods that rely on resonance are limited to harmonics. The thermal-probe-based method of Hammiche et al. has found a significant number of applications. A unique application made possible by top-down illumination combined with a thermal probe is localized depth profiling; this is not possible using either the Dazzi et al. configuration of AFM-IR or that of Hill et al., despite the fact that the latter uses top-down illumination. Obtaining linescans and images with thermal probes has been shown to be possible, sub-diffraction-limit spatial resolution can be achieved, and the resolution for delineating boundaries can be enhanced using chemometric techniques. In all of these examples a spectrum spanning the entire mid-IR range is acquired for each pixel; this is considerably more powerful than measuring the absorption of a single wavelength, as is the case for AFM-IR when using either the method of Dazzi et al. or that of Hill et al. Reading and his group demonstrated how, because thermal probes can be heated, localized thermal analysis can be combined with photothermal infrared spectroscopy using a single probe. In this way local chemical information can be complemented with local physical properties such as melting and glass transition temperatures. This in turn led to the concept of thermally assisted nanosampling, in which the heated tip performs a local thermal analysis experiment and the probe is then retracted, taking with it down to femtograms of softened material that adheres to the tip. This material can then be manipulated and/or analysed by photothermal infrared spectroscopy or other techniques. This considerably increases the analytical power of this type of SPM-based infrared instrument beyond anything that can be achieved with conventional AFM probes such as those used in AFM-IR, in either the Dazzi et al. or the Hill et al. version.
History: Thermal probe techniques have still not achieved the nanoscale spatial resolution that thermal expansion methods have attained, though this is theoretically possible. For this, a robust thermal probe and a high-intensity source are needed. Recently, the first images using a QCL and a thermal probe have been obtained by Reading et al. A good signal-to-noise ratio enabled rapid imaging, but sub-micron spatial resolution was not clearly demonstrated. Theory predicts that improvements in spatial resolution could be achieved by confining data analysis to the early part of the thermal response to a step-change increase in the intensity of the incident radiation. In this way pollution of the measurement from adjacent regions would be avoided, i.e. the measurement window could be confined to a suitable fraction of the time of flight of the thermal wave (a Fourier analysis of the response could provide a similar outcome by using the high-frequency components). This could be achieved by tapping the probe in synchrony with the laser. Similarly, lasers that provide very rapid modulation could further reduce thermal diffusion lengths.

History: Although most effort to date has been focused on thermal expansion measurements, this might change. Truly robust thermal probes have recently become available, as have affordable compact QCLs that are tuneable over a broad frequency range. Consequently, it may soon be the case that thermal probe techniques will become as widely used as those based on thermal expansion. Ultimately, instruments that can easily switch between modes, and even combine them using a single probe, may become available; for example, a single probe might eventually measure both temperature and thermal expansion.

Recent improvements and single-molecule sensitivity: The original commercial AFM-IR instruments required most samples to be thicker than 50 nm to achieve sufficient sensitivity. Sensitivity improvements were achieved using specialized cantilever probes with an internal resonator and by wavelet-based signal processing techniques. Sensitivity was further improved by Lu et al. by using quantum cascade laser (QCL) sources. The high repetition rate of the QCL allows absorbed infrared light to continuously excite the AFM tip at a "contact resonance" of the AFM cantilever. This resonance-enhanced AFM-IR, in combination with electric field enhancement from metallic tips and substrates, led to the demonstration of AFM-IR spectroscopy and compositional imaging of films as thin as single self-assembled monolayers. AFM-IR has also been integrated with other sources, including a picosecond OPO offering a tuning range of 1.55 μm to 16 μm (from 6450 cm−1 to 625 cm−1).

Recent improvements and single-molecule sensitivity: In its initial development, with samples deposited on transparent prisms and using OPO laser sources, the sensitivity of AFM-IR was limited to a minimum sample thickness of circa 50-100 nm, as mentioned above. The advent of quantum cascade lasers (QCLs) and the use of the electromagnetic field enhancement between metallic probes and substrates have improved the sensitivity and spatial resolution of AFM-IR down to the measurement of large (>0.3 μm) and flat (~2–10 nm) self-assembled monolayers, in which hundreds of molecules are still present. Ruggeri et al.
have recently developed off-resonance, low-power, short-pulse AFM-IR (ORS-nanoIR) to demonstrate the acquisition of infrared absorption spectra and chemical maps at the single-molecule level, in the case of macromolecular assemblies and large protein molecules, with a spatial resolution of ca. 10 nm.

Nanoscale chemical imaging and mapping: Nanoscale resolved chemical maps and spectra AFM-IR enables nanoscale infrared spectroscopy, i.e. the ability to obtain infrared absorption spectra from nanoscale regions of a sample.

Nanoscale chemical imaging and mapping: Chemical compositional mapping AFM-IR can also be used to perform chemical imaging or compositional mapping with a spatial resolution down to ~10-20 nm, limited only by the radius of the AFM tip. In this case, the tuneable infrared source emits a single wavelength, corresponding to a specific molecular resonance, i.e. a specific infrared absorption band. By mapping the AFM cantilever oscillation amplitude as a function of position, it is possible to map out the distribution of specific chemical components. Compositional maps can be made at different absorption bands to reveal the distribution of different chemical species (a simplified sketch of this acquisition loop follows the Applications section below).

Nanoscale chemical imaging and mapping: Complementary morphological and mechanical mapping The AFM-IR technique can simultaneously provide complementary measurements of the mechanical stiffness and dissipation of a sample surface. When infrared light is absorbed by the sample, the resulting rapid thermal expansion excites a "contact resonance" of the AFM cantilever, i.e. a coupled resonance resulting from the properties of both the cantilever and the stiffness and damping of the sample surface. Specifically, the resonance frequency shifts to higher frequencies for stiffer materials and to lower frequencies for softer materials. Additionally, the resonance becomes broader for materials with larger dissipation. These contact resonances have been studied extensively by the AFM community (see, for example, atomic force acoustic microscopy). Traditional contact resonance AFM requires an external actuator to excite the cantilever contact resonances; in AFM-IR these contact resonances are automatically excited every time an infrared pulse is absorbed by the sample. So the AFM-IR technique can measure the infrared absorption via the amplitude of the cantilever oscillation response, and the mechanical properties of the sample via the contact resonance frequency and quality factor.

Applications: Applications of AFM-IR include the characterisation of proteins, polymer composites, bacteria, cells, biominerals, pharmaceutical sciences, photonics/nanoantennas, fuel cells, fibers, skin, hair, metal organic frameworks, microdroplets, self-assembled monolayers, nanocrystals, and semiconductors. Polymers: polymer blends, composites, multilayer films and fibers. AFM-IR has been used to identify and map polymer components in blends, characterize interfaces in composites, and even reverse engineer multilayer films. Additionally, AFM-IR has been used to study chemical composition in poly(3,4-ethylenedioxythiophene) (PEDOT) conducting polymers and vapor infiltration into polyethylene terephthalate (PET) fibers.

Applications: Protein science The chemical and structural properties of proteins determine their interactions, and thus their functions, in a wide variety of biochemical processes. Since Ruggeri et al.'s
pioneering work on the aggregation pathways of the Josephin domain of ataxin-3, responsible for type-3 spinocerebellar ataxia, an inheritable protein-misfolding disease, AFM-IR has been used to characterize molecular conformations in a wide spectrum of applications in protein and life sciences. This approach has delivered new mechanistic insights into the behaviour of disease-related proteins and peptides, such as Aβ42, huntingtin and FUS, which are involved in the onset of Alzheimer's disease, Huntington's disease and amyotrophic lateral sclerosis (ALS). Similarly, AFM-IR has been applied to the study of protein-based functional biomaterials.

Applications: Life sciences AFM-IR has been used to characterise chromosomes, bacteria and cells spectroscopically, in detail and with nanoscale resolution: for example, in the infection of bacteria by viruses (bacteriophages), in the production of polyhydroxybutyrate (PHB) vesicles inside Rhodobacter capsulatus cells, and in the study of triglycerides in Streptomyces bacteria (for biofuel applications). AFM-IR has also been used to evaluate and map mineral content, crystallinity, collagen maturity and acid phosphate content in bone, via ratiometric analysis of various absorption bands, and to perform spectroscopy and chemical mapping of structural lipids in human skin, cells and hair. Fuel cells: AFM-IR has been used to study hydrated Nafion membranes used as separators in fuel cells; the measurements revealed the distribution of free and ionically bound water on the Nafion surface.

Applications: Photonic nanoantennas AFM-IR has been used to study the surface plasmon resonance in heavily silicon-doped indium arsenide microparticles. Gold split-ring resonators have been studied for use with surface-enhanced infrared absorption spectroscopy; in this case AFM-IR was used to measure the local field enhancement of the plasmonic structures (~30X) at 100 nm spatial resolution. Pharmaceutical sciences: AFM-IR has been used to study miscibility and phase separation in drug-polymer blends, the chemical analysis of nanocrystalline drug particles as small as 90 nm across, the interaction of chromosomes with chemotherapeutic drugs, and the interaction of amyloids with pharmacological approaches aimed at counteracting neurodegeneration.
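The simplified sketch referenced above: a minimal, purely illustrative Python loop for the two acquisition modes described in this article, point spectra and single-band compositional maps. The instrument hooks are hypothetical placeholders that simulate plausible signals so the sketch runs; a real AFM-IR controller exposes its own API and this is not it.

```python
import numpy as np

rng = np.random.default_rng(0)
_current_wavenumber = [1650.0]  # module-level state standing in for the laser setting

# Hypothetical instrument hooks (placeholders, not a real AFM-IR API).
def tune_laser(wavenumber_cm1):
    _current_wavenumber[0] = wavenumber_cm1

def move_tip(x_nm, y_nm):
    pass  # a real controller would reposition the AFM tip here

def ringdown_amplitude():
    # Simulated cantilever ring-down amplitude: a fake absorption band at 1650 cm^-1
    w = _current_wavenumber[0]
    return np.exp(-((w - 1650.0) / 30.0) ** 2) + 0.02 * rng.random()

def local_spectrum(wavenumbers_cm1):
    """Hold the tip in place, step the laser wavenumber, record the amplitude."""
    out = []
    for w in wavenumbers_cm1:
        tune_laser(w)
        out.append(ringdown_amplitude())
    return np.array(out)

def chemical_map(xs_nm, ys_nm, band_cm1):
    """Fix the laser on one absorption band and raster the tip to map composition."""
    tune_laser(band_cm1)
    image = np.zeros((len(ys_nm), len(xs_nm)))
    for i, y in enumerate(ys_nm):
        for j, x in enumerate(xs_nm):
            move_tip(x, y)
            image[i, j] = ringdown_amplitude()
    return image

spectrum = local_spectrum(np.arange(1500, 1801, 4))           # e.g. a protein amide I scan
amide_map = chemical_map(np.arange(0, 500, 20), np.arange(0, 500, 20), 1650)
print(spectrum.argmax(), amide_map.shape)                     # band index and map size
```

The point of the sketch is only the structure of the two loops: spectra come from sweeping the wavenumber at one position, while maps come from fixing the wavenumber and sweeping position, with the cantilever oscillation amplitude as the measured quantity in both cases.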
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sepiapterin reductase (L-threo-7,8-dihydrobiopterin forming)** Sepiapterin reductase (L-threo-7,8-dihydrobiopterin forming): Sepiapterin reductase (L-threo-7,8-dihydrobiopterin forming) (EC 1.1.1.325) is an enzyme with the systematic name L-threo-7,8-dihydrobiopterin:NADP+ oxidoreductase. This enzyme catalyses the following chemical reactions: (1) L-threo-7,8-dihydrobiopterin + NADP+ ⇌ sepiapterin + NADPH + H+; (2) L-threo-tetrahydrobiopterin + 2 NADP+ ⇌ 6-pyruvoyl-5,6,7,8-tetrahydropterin + 2 NADPH + 2 H+. This bacterial (Chlorobium tepidum) enzyme catalyses the final step in the de novo synthesis of tetrahydrobiopterin from GTP.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**VS ribozyme** VS ribozyme: The Varkud satellite (VS) ribozyme is an RNA enzyme that carries out the cleavage of a phosphodiester bond.

Introduction: The Varkud satellite (VS) ribozyme is the largest known nucleolytic ribozyme and is embedded in VS RNA. VS RNA is a long non-coding RNA that exists as a satellite RNA and is found in the mitochondria of the Varkud-1C and a few other strains of Neurospora. The VS ribozyme contains features of both catalytic RNAs and group I introns. It has both cleavage and ligation activity and can perform both reactions efficiently in the absence of proteins. VS RNA undergoes horizontal gene transfer between Neurospora strains. VS ribozymes otherwise have little in common with other nucleolytic ribozymes.

Introduction: VS RNA has a unique primary, secondary, and tertiary structure. The secondary structure of the VS ribozyme consists of six helical domains. Stem-loop I forms the substrate domain, while stem-loops II-VI form the catalytic domain. When these two domains are synthesized separately in vitro, they can perform the self-cleavage reaction in trans. The substrate binds into a cleft formed by two helices. The likely active site of the ribozyme centres on a very important nucleotide, A756. The A730 loop and the A756 nucleotide are critical to its function, since they participate in the phosphoryl transfer chemistry of the ribozyme.

The Origin: VS RNA is transcribed as a multimeric transcript from VS DNA. VS DNA contains a region coding for the reverse transcriptase necessary for replication of the VS RNA. Once transcribed, VS RNA undergoes a site-specific cleavage: it self-cleaves at a specific phosphodiester bond to produce a monomeric and a few multimeric transcripts. These transcripts then undergo self-ligation and form a circular VS RNA, which is the predominant form of VS found in Neurospora. The VS ribozyme is a small catalytic motif embedded within this circular VS RNA. The majority of VS RNA is made up of 881 nucleotides.

Structure of the Ribozyme: In the natural state, a VS ribozyme motif contains 154 nucleotides that fold into six helices. Its RNA contains a self-cleavage element which is thought to act in the processing of intermediates made during replication. The H-shaped structure of the ribozyme is organized by two three-way junctions, which determine the overall fold of the ribozyme. A notable feature of the structure is that even if the majority of helix IV and the distal end of helix VI are deleted, there is no significant loss of activity; however, if the lengths of helices III and V are changed, there is a major loss of activity. The base bulges of the ribozyme in helices II and IV have important structural, rather than sequence-specific, roles, since replacing them with other nucleotides does not affect activity. In short, the VS ribozyme's activity depends strongly on the local sequence of the two three-way junctions. The three-way junction present in the VS ribozyme is very similar to one seen in the 23S rRNA of the large ribosomal subunit.

The Active Site of Ribozyme: The active sites of the ribozyme can be found in the helical junctions, the bulges, and the lengths of the critical helices III and V. One important area, found in the internal loop of helix VI, is called A730; a single base change in this loop leads to a marked loss of cleavage activity but no significant change in the folding of the ribozyme.
Other modifications that affect the activity of the ribozyme include methylation and substitutions at the A730 site whose inhibitory effect can be suppressed by thiophilic manganese ions. Possible Catalytic Mechanism: The A730 loop is very important in the catalytic activity of the ribozyme. The ribozyme functions like a docking station: it docks the substrate into the cleft between helices II and VI to bring the cleavage site into contact with the A730 loop. This interaction creates an environment in which catalysis can proceed, in a way similar to the interactions seen in the hairpin ribozyme. Within the A730 loop, substitution of A756 by G, C or U leads to a 300-fold loss of cleavage and ligation activity. Possible Catalytic Mechanism: The evidence that the A730 loop is the active site of the VS ribozyme, and that A756 plays an important role in its activity, is strong. The cleavage reaction works by an SN2 mechanism: nucleophilic attack of the 2'-oxygen on the adjacent 3'-phosphate creates a cyclic 2',3'-phosphate, with the 5'-oxygen as the leaving group. The ligation reaction occurs in reverse, in which the 5'-oxygen attacks the phosphate of the cyclic 2',3'-phosphate. Both of these reactions are facilitated by general acid-base catalysis, which activates the oxygen nucleophile by removing its bonded proton and stabilizes the oxyanion leaving group through protonation. It is also important to add that if a group behaves as a base in the cleavage reaction, it must act as an acid in the ligation reaction. Solvated metal ions can take part in general acid-base catalysis, and metal ions might also act as Lewis acids that polarize phosphate oxygen atoms. Another important factor is the pH dependence of the ligation rate, which corresponds to a pKa of 5.6 and is not seen in the cleavage reaction. This particular dependence requires a protonated base at position A756 of the ribozyme. Possible Catalytic Mechanism: Another proposed catalytic strategy is stabilization of the pentavalent phosphate in the reaction's transition state. This mechanism would probably involve the formation of hydrogen bonds, as seen in the hairpin ribozyme. Furthermore, the proximity of the active-site groups to each other and their orientation in space would contribute to the catalytic mechanism, bringing the transition state and the substrate closer together for the ligation reaction to occur. Catalysts: Very high concentrations of divalent and monovalent cations increase the efficiency of the cleavage reaction. These cations facilitate base pairing of the ribozyme with the substrate. The VS cleavage rate can be accelerated by high cation concentration as well as by increasing RNA concentration, so a low concentration of either is rate-limiting. The cations' role is considered to be charge neutralization during the folding of the RNA rather than direct catalysis. Hypothesis For Evolution of VS Ribozyme: 1. A molecular fossil of the RNA world which has retained both cleavage and ligation functions. 2. The VS ribozyme later acquired one or more of its enzymatic activities. Hypothesis For Evolution of VS Ribozyme: RNA-mediated cleavage and ligation are found in group 1 and group 2 self-splicing RNAs. VS RNA shares many conserved sequence characteristics with group 1 introns. However, the VS ribozyme splice site is different from the group 1 intron splice site, and the VS ribozyme self-cleavage site lies outside the core of the group 1 intron.
In the cleavage reaction the VS ribozyme produces a 2′,3′-cyclic phosphate, whereas group 1 introns produce a 3′-hydroxyl. This functional similarity to group 1 introns, combined with the mechanistic differences from them, supports the hypothesis that the VS ribozyme is a chimera formed by insertion of a novel catalytic RNA into a group 1 intron.
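To make the pH dependence noted above concrete, the sketch below applies the Henderson–Hasselbalch relation to estimate what fraction of a group with an apparent pKa of 5.6 (the value reported for the ligation rate) is protonated at a given pH. Only the pKa of 5.6 comes from the text; the chosen pH values and the assumption that the ligation rate simply tracks the protonated fraction are illustrative.

```python
def protonated_fraction(ph: float, pka: float = 5.6) -> float:
    """Fraction of an acid/base group that is protonated at a given pH.

    Henderson-Hasselbalch: [HA] / ([HA] + [A-]) = 1 / (1 + 10**(pH - pKa)).
    The default pKa of 5.6 is the apparent value quoted for the VS ribozyme
    ligation rate; everything else here is purely illustrative.
    """
    return 1.0 / (1.0 + 10 ** (ph - pka))

if __name__ == "__main__":
    for ph in (4.6, 5.6, 6.6, 7.6):
        # If ligation simply tracked the protonated state of A756 (an
        # assumption, not a claim from the article), the rate would fall
        # roughly tenfold per pH unit above the pKa.
        print(f"pH {ph:.1f}: protonated fraction = {protonated_fraction(ph):.3f}")
```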
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Number (music)** Number (music): In music, number refers to an individual song, dance, or instrumental piece which is part of a larger work of musical theatre, opera, or oratorio. It can also refer either to an individual song in a published collection or an individual song or dance in a performance of several unrelated musical pieces, as in concerts and revues. Both meanings of the term have been used in American English since the second half of the 19th century. Musical theatre and related genres: In musical theatre, the lyrics of the individual song numbers are integrated with the narrative of the libretto (or "book"). As early as 1917, Jerome Kern wrote that "musical numbers should carry on the action of the play, and should be representative of the personalities of the characters who sing them." The lyricist Oscar Hammerstein, another proponent of this view, even refused to list the numbers in Rose-Marie because he thought it would detract from what he viewed as the close integration between the book and the lyrics. However, both David Horn and Scott McMillin have proposed that full integration is not completely possible. For McMillin, the start of a musical number creates a noticeably different "feel" in which the singer becomes a "performer", not simply a character. For Horn, the individual numbers can serve not only to advance the narrative but also to directly address and engage the audience in an experience which stands apart from the dramatic context of the work, and this latter function had its roots in vaudeville entertainments. In revues, a type of multi-act popular theatrical entertainment that combines music, dance and sketches, there is no overall narrative, but rather a sequence of unrelated (often lavish) musical numbers. However, as Rick Altman points out, some of the numbers in these types of shows, such as "This Heart of Mine" in the film Ziegfeld Follies, can be narratives in miniature. That number, according to Altman, "is not just musical—its dream-like dance grows out of the Bremer/Astaire mimed narrative which opens the selection." Opera and oratorio: Opera numbers may be arias, but also ensemble pieces, such as duets, trios, quartets, quintets, sextets or choruses. They may also be ballets and instrumental pieces, such as marches, sinfonias, or intermezzi. Until the mid-19th century most operas were structured as a series of discrete numbers connected by recitative or spoken dialogue. Oratorios followed a similar model. However, as the century progressed, numbers were increasingly unified into larger musical segments with no clear break between them. Early examples of this trend include Carl Maria von Weber's opera Euryanthe and Robert Schumann's secular oratorio Das Paradies und die Peri.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Regenerative shock absorber** Regenerative shock absorber: A regenerative shock absorber is a type of shock absorber that converts parasitic intermittent linear motion and vibration into useful energy, such as electricity. Conventional shock absorbers simply dissipate this energy as heat. Regenerative shock absorber: When used in an electric vehicle or hybrid electric vehicle, the electricity generated by the shock absorber can be diverted to its powertrain to increase battery life. In non-electric vehicles the electricity can be used to power accessories such as air conditioning. Several different systems have been developed recently, though they remain in development and are not installed on production vehicles. Electromagnetic: A patent for such a device was filed in 2005. This type of system uses a linear motor/generator consisting of a stack of permanent magnets and coils to generate electricity. This system was further developed at Tufts University and has been licensed to Electric Truck, LLC. Preliminary data suggests 20% to 70% of the energy normally lost in the suspension can be recaptured with this system. A system developed at Swinburne University of Technology used DC electromagnetic machines as damping elements to generate energy for storage in conventional batteries. This system utilized a device based on principles similar to those of a 'step-up' (boost) DC-DC converter. The design made it possible to optimize the energy conversion efficiency of the system and also to control the damping coefficient of the damper, such that the system could act as a semi-active damper. Hydraulic: A system developed at MIT uses hydraulic pistons to force fluid through a turbine coupled to a generator. The system is controlled by active electronics which optimize damping, which the inventors claim also results in a smoother ride compared to a conventional suspension. They calculate that a large company like Walmart could save $13 million annually by converting their trucks. Another system has been developed by a team at New York State University.
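As a rough sense of scale for the "20% to 70% of the energy normally lost" figure quoted above, the sketch below estimates recoverable power from an idealized viscous-damper model. The damping coefficient, the suspension velocity, and the choice of a per-damper calculation are all illustrative assumptions of this sketch, not values taken from the systems described in the article; only the 20%–70% recovery range comes from the text.

```python
def damper_dissipation_w(c_ns_per_m: float, rms_velocity_m_s: float) -> float:
    """Average power dissipated by an ideal viscous damper: P = c * v_rms**2."""
    return c_ns_per_m * rms_velocity_m_s ** 2

if __name__ == "__main__":
    # Illustrative assumptions (not from the article): a passenger-car damper
    # with c ~ 1500 N*s/m and an RMS suspension velocity of ~0.1 m/s on a
    # moderately rough road.
    c = 1500.0      # N*s/m, assumed damping coefficient
    v_rms = 0.1     # m/s, assumed RMS relative velocity across the damper
    lost = damper_dissipation_w(c, v_rms)   # power normally turned into heat
    for eta in (0.2, 0.7):                  # the 20%-70% range quoted above
        print(f"recovery {eta:.0%}: ~{eta * lost:.1f} W per damper "
              f"(of ~{lost:.1f} W dissipated)")
```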
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stocking (forestry)** Stocking (forestry): Stocking is a quantitative measure of the area occupied by trees, usually measured in terms of well-spaced trees or basal area per hectare, relative to an optimum or desired level of density. It can also be seen as a measure of the growth potential of a site, which may be affected by other vegetation in the area as well as by neighbouring trees. Stocking can be expressed as a ratio of the current stand density to the stand density of a maximally occupied site. Stocking measures account for three things: the cover type and species mixture in the stand, the basal area per acre, and the number of trees per acre. Stocking allows stands with diverse ecology to be compared. Stocking is a major part of forest management and of controlling the growth of trees in particular areas, and different approaches have been developed for stands of different ages and growing regions. When managing forests, foresters generally aim to maximize growth and volume across their stands. A desirable level of stocking is often considered to be that which maximizes timber production, or other management objectives. Stocking (forestry): Stand density is not the same as stocking. See stand density index for the difference. Stocking charts: Once the stands have been measured, they are classified as overstocked, 50% stocked, or understocked. When an area is overstocked, it has too many trees for the space available, and they suppress one another's growth. When an area is understocked, the site is not reaching its full potential for tree growth, and more trees should be planted to maximize growth on the site. Stocking charts or guides help determine whether a stand is overstocked, 50% stocked, or understocked; they also allow two stands with similar basal areas but different numbers of trees to be compared. These charts include two reference lines, A and B, which show whether an area is overstocked, understocked, or fully stocked. The A-line represents the limit for an uncut forest; a stand above the A-line is considered overstocked. The B-line represents the best number of trees to grow in an area given the space in the stand; below the B-line the stand is understocked and not at its full growth potential. If a planted stand lies between the A- and B-lines, foresters will thin it down toward the B-line to get the maximum growth out of the remaining trees. Foresters usually aim to keep a stand near the B-line, because it gives the maximum growth from the fewest trees. This is also a difficult decision, however, because cutting a stand down to the B-line can affect the trees that are left, and they may not grow as the forester expected. When a stand is overstocked or understocked, it is important to make the right decision about how to realize its growth potential. When a stand is right on the edge of being understocked, it may not be worthwhile to plant more trees, since doing so may push the stand toward being overstocked and away from its maximum growth potential. For a very understocked stand, there are two options for helping the stand.
The first is to plant new trees in the stand as an underplant; the second is to clearcut the stand and restart by planting all new trees. A fully stocked stand still needs attention, because modifications may be needed to keep it within the stocked region of the chart. It is also worth consulting a professional about whether to keep the stand as it is or to reduce it in order to obtain the maximum potential growth. Measurement types: Basal area per acre. Stocking is assessed partly from the trees' basal area, the cross-sectional area of each stem measured about 4.5 feet above the ground. Basal area is measured in square feet per tree in the given stand. The equation for calculating the basal area of a tree is Basal Area = 0.005454 × DBH², where DBH is the diameter of the tree in inches measured at 4.5 feet above the ground surface (diameter at breast height). Larger stands require more basal-area measurements, while smaller stands require fewer; taking about 20–25 measurements gives a good estimate for most stands. When calculating basal area, foresters use a special prism or gauge that helps produce precise estimates. Because stands are rarely uniform, several measurements should be taken at different points so that the estimate for the stand is reliable. Measurement types: Trees per acre. When looking at stocking, it is also important to account for the trees per acre in the stand. This follows the same principle as basal area: make several estimates across the stand and take the average to obtain the most reliable figure.
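As a minimal illustration of the basal-area arithmetic above, the sketch below converts a set of DBH measurements from a fixed-area sample plot into basal area per acre and trees per acre. The DBH values and the 0.1-acre plot size are made-up example numbers; only the 0.005454 × DBH² formula comes from the text.

```python
def tree_basal_area_ft2(dbh_inches: float) -> float:
    """Basal area of one tree in square feet: 0.005454 * DBH**2 (DBH in inches)."""
    return 0.005454 * dbh_inches ** 2

def plot_summary(dbh_list, plot_acres):
    """Expand a fixed-area plot tally to per-acre basal area and tree count."""
    expansion = 1.0 / plot_acres                 # acres represented per tallied tree
    ba_per_acre = sum(tree_basal_area_ft2(d) for d in dbh_list) * expansion
    trees_per_acre = len(dbh_list) * expansion
    return ba_per_acre, trees_per_acre

if __name__ == "__main__":
    # Hypothetical tally from a single 0.1-acre plot (DBH in inches).
    dbh_sample = [6.2, 8.5, 10.1, 11.4, 12.0, 9.3, 7.8, 14.2]
    ba, tpa = plot_summary(dbh_sample, plot_acres=0.1)
    print(f"basal area ~{ba:.1f} ft^2/acre, ~{tpa:.0f} trees/acre")
```

In practice such per-acre figures, averaged over several plots, are what get plotted against the A- and B-lines of a stocking chart.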
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bloom syndrome** Bloom syndrome: Bloom syndrome (often abbreviated as BS in literature) is a rare autosomal recessive genetic disorder characterized by short stature, predisposition to the development of cancer, and genomic instability. BS is caused by mutations in the BLM gene which is a member of the RecQ DNA helicase family. Mutations in other members of this family, namely WRN and RECQL4, are associated with the clinical entities Werner syndrome and Rothmund–Thomson syndrome, respectively. More broadly, Bloom syndrome is a member of a class of clinical entities that are characterized by chromosomal instability, genomic instability, or both and by cancer predisposition. Bloom syndrome: Cells from a person with Bloom syndrome exhibit a striking genomic instability that includes excessive crossovers between homologous chromosomes and sister chromatid exchanges (SCEs). The condition was discovered and first described by New York dermatologist Dr. David Bloom in 1954.Bloom syndrome has also appeared in the older literature as Bloom–Torre–Machacek syndrome. Presentation: The most prominent feature of Bloom syndrome is proportional small size. The small size is apparent in utero. At birth, neonates exhibit rostral to caudal lengths, head circumferences, and birth weights that are typically below the third percentile.The second most commonly noted feature is a rash on the face that develops early in life as a result of sun exposure. The facial rash appears most prominently on the cheeks, nose, and around the lips. It is described as erythematous, that is red and inflamed, and telangiectatic, that is characterized by dilated blood vessels at the skin's surface. The rash commonly also affects the backs of the hands and neck, and it can develop on any other sun-exposed areas of the skin. The rash is variably expressed, being present in a majority but not all persons with Bloom syndrome, and it is on average less severe in females than in males. Moreover, the sun sensitivity can resolve in adulthood. There are other dermatologic changes, including hypo-pigmented and hyper-pigmented areas, cafe-au-lait spots, and telangiectasias, which can appear on the face and on the ocular surface.There is a characteristic facial appearance that includes a long, narrow face; prominent nose, cheeks, and ears; and micrognathism or undersized jaw. The voice is high-pitched and squeaky. Presentation: There are a variety of other features that are commonly associated with Bloom syndrome. There is a moderate immune deficiency, characterized by deficiency in certain immunoglobulin classes and a generalized proliferative defect of B and T cells. The immune deficiency is thought to be the cause of recurrent pneumonia and middle ear infections in persons with the syndrome. Infants can exhibit frequent gastrointestinal upsets, with reflux, vomiting, and diarrhea, and there is a remarkable lack in interest in food. There are endocrine disturbances, particularly abnormalities of carbohydrate metabolism, insulin resistance and susceptibility to type 2 diabetes, dyslipidemia, and compensated hypothyroidism. Persons with Bloom syndrome exhibit a paucity of subcutaneous fat. There is reduced fertility, characterized by a failure in males to produce sperm (azoospermia) and premature cessation of menses (premature menopause) in females. 
Despite these reductions, several women with Bloom syndrome have had children, and there is a single report of a male with Bloom syndrome fathering children. Although some persons with Bloom syndrome can struggle in school with subjects that require abstract thought, there is no evidence that intellectual disability is more common in persons with Bloom syndrome than in the general population. The most serious and frequent complication of Bloom syndrome is cancer. In the 281 persons followed by the Bloom Syndrome Registry, 145 persons (51.6%) have been diagnosed with a malignant neoplasm, and there have been 227 malignancies. The types of cancer and the anatomic sites at which they develop resemble the cancers that affect persons in the general population. The age of diagnosis for these cancers is earlier than for the same cancer in normal persons, and many persons with Bloom syndrome have been diagnosed with multiple cancers. The average life span is approximately 27 years. The most common cause of death in Bloom syndrome is cancer. Other complications of the disorder include chronic obstructive lung disease and type 2 diabetes. There are a variety of excellent sources for more detailed clinical information about Bloom syndrome. There is a closely related entity now referred to as Bloom-syndrome-like disorder (BSLD), which is caused by mutations in components of the same protein complex to which the BLM gene product belongs, including TOP3A (which encodes the type I topoisomerase, topoisomerase 3 alpha), RMI1, and RMI2. The features of BSLD include small size and dermatologic findings, such as cafe-au-lait spots, and the presence of the once pathognomonic elevated SCEs is reported for persons with mutations in TOP3A and RMI1. Bloom syndrome shares some features with Fanconi anemia, possibly because there is overlap in the function of the proteins mutated in this related disorder. Genetics: Bloom syndrome is an autosomal recessive disorder, caused by mutations in the maternally- and paternally-derived copies of the gene BLM. As in other autosomal recessive conditions, the parents of an individual with Bloom syndrome do not necessarily exhibit any features of the syndrome. The mutations in BLM associated with Bloom syndrome are null mutations and missense mutations that render the protein catalytically inactive. The cells from persons with Bloom syndrome exhibit a striking genomic instability that is characterized by hyper-recombination and hyper-mutation. Human cells lacking functional BLM are sensitive to DNA-damaging agents such as UV and methyl methanesulfonate, indicating deficient repair capability. At the level of the chromosomes, the rate of sister chromatid exchange in Bloom syndrome is approximately 10-fold higher than normal, and quadriradial figures, which are the cytologic manifestation of crossing-over between homologous chromosomes, are highly elevated. Other chromosome manifestations include chromatid breaks and gaps, telomere associations, and fragmented chromosomes. The hyper-recombination can also be detected by molecular assays. The BLM gene is a member of the protein family referred to as RecQ helicases. The diffusion of BLM has been measured at 1.34 μm²/s in the nucleoplasm and 0.13 μm²/s in nucleoli. DNA helicases are enzymes that attach to DNA and temporarily unravel the double helix of the DNA molecule. DNA helicases function in DNA replication and DNA repair.
BLM very likely functions in DNA replication, as cells from persons with Bloom syndrome exhibit multiple defects in DNA replication, and they are sensitive to agents that obstruct DNA replication. The BLM helicase is a member of a protein complex with topoisomerase III alpha, RMI1 and RMI2, also known as BTRR, the Bloom syndrome complex, or the dissolvasome. Disruption of the proper assembly of the Bloom syndrome complex leads to genome instability, genetic dependence on the cellular nucleases GEN1 and MUS81, and loss of normal cell growth. Bloom-like phenotypes have been associated with mutations in the topoisomerase III alpha, RMI1 and RMI2 genes. Genetics: Relationship to cancer and aging As noted above, there is a greatly elevated rate of mutation in Bloom syndrome, and the genomic instability is associated with a high risk of cancer in affected individuals. The cancer predisposition is characterized by 1) broad spectrum, including leukemias, lymphomas, and carcinomas, 2) early age of onset relative to the same cancer in the general population, and 3) multiplicity, that is, synchronous or metachronous cancers. There is at least one person with Bloom syndrome who had five independent primary cancers. Persons with Bloom syndrome may develop cancer at any age. The average age of cancer diagnosis in the cohort is approximately 26 years. Pathophysiology: When a cell prepares to divide to form two cells, the chromosomes are duplicated so that each new cell will get a complete set of chromosomes. The duplication process is called DNA replication. Errors made during DNA replication can lead to mutations. The BLM protein is important in maintaining the stability of the DNA during the replication process. Lack of BLM protein or protein activity leads to an increase in mutations; however, the molecular mechanism(s) by which BLM maintains stability of the chromosomes is still a very active area of research. Persons with Bloom syndrome have an enormous increase in exchange events between homologous chromosomes or sister chromatids (the two DNA molecules that are produced by the DNA replication process), and there are increases in chromosome breakage and rearrangements compared to persons who do not have Bloom syndrome. Direct connections between the molecular processes in which BLM operates and the chromosomes themselves are under investigation. The relationships between molecular defects in Bloom syndrome cells, the chromosome mutations that accumulate in somatic cells (the cells of the body), and the many clinical features seen in Bloom syndrome are also areas of intense research. Diagnosis: Bloom syndrome is diagnosed using any of three tests: the presence of quadriradial figures (Qr, four-armed chromatid interchanges) in cultured blood lymphocytes, elevated levels of sister chromatid exchange in cells of any type, and/or mutation in the BLM gene. The US Food and Drug Administration (FDA) announced on February 19, 2015 that it had authorized marketing of a direct-to-consumer genetic test from 23andMe. The test is designed to identify healthy individuals who carry a gene that could cause Bloom syndrome in their offspring. Treatment: Bloom syndrome has no specific treatment; however, avoiding sun exposure and using sunscreens can help prevent some of the cutaneous changes associated with photosensitivity. Efforts to minimize exposure to other known environmental mutagens are also advisable.
Epidemiology: Bloom syndrome is an extremely rare disorder, and its frequency has not been measured in most populations. However, the disorder is relatively more common amongst people of Central and Eastern European Ashkenazi Jewish background. Approximately 1 in 48,000 Ashkenazi Jews are affected by Bloom syndrome; they account for about one-third of affected individuals worldwide. Epidemiology: Bloom's Syndrome Registry The Bloom's Syndrome Registry lists 265 individuals reported to have this rare disorder (as of 2009), collected from the time it was first recognized in 1954. The registry was developed as a surveillance mechanism to observe the effects of cancer in these patients; it has shown that 122 individuals have been diagnosed with cancer. It also serves as a report of current findings and data on all aspects of the disorder.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CDY1** CDY1: Testis-specific chromodomain protein Y 1 is a protein that in humans is encoded by the CDY1 gene. This gene encodes a protein containing a chromodomain and a histone acetyltransferase catalytic domain. Chromodomain proteins are components of heterochromatin-like complexes and can act as gene repressors. This protein is localized to the nucleus of late spermatids, where histone hyperacetylation takes place. Histone hyperacetylation is thought to facilitate the transition in which protamines replace histones as the major DNA-packaging protein. The human chromosome Y has two identical copies of this gene within a palindromic region; this record represents the more telomeric copy. Chromosome Y also contains a pair of closely related genes in another more telomeric palindrome, as well as several related pseudogenes. Two protein isoforms are encoded by transcript variants of this gene. Additional transcript variants have been described, but their full-length nature has not been determined. The gene is thought to be related to high-altitude adaptation in humans.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sulfinyl nitrene** Sulfinyl nitrene: A sulfinyl nitrene is a chemical compound with generic formula R-S(O)N, with oxygen and nitrogen both bonded to the sulfur atom. Preparation: Sulfinyl nitrenes can be generated from the unstable sulfinyl azides (R-SON3). However, this route is hazardous, easily resulting in explosions and unpredictable outcomes. Sulfinyl nitrenes can also be prepared from a sulfinylhydroxylamine, R–O–N=S=O, by reaction with an organometallic compound M–R1 to yield R1SON. Properties: Sulfinyl nitrenes are electrophilic. Sulfinyl nitrenes have resonance structures between sulfur in a +6 oxidation state with a triple bond to nitrogen, sulfur in a +4 state with nitrogen in a +1 state and a single bond, and a positive charge on sulfur with a negative charge on nitrogen and a double bond. The dominant state is the singlet state with charge separation and a double bond. Reactions: Sulfinyl nitrenes are unstable and react with themselves to yield sulfonyl nitrenes or disulfides, or polymerize to trioxotrithiatriazines. The reaction with water yields a sulfonamide (R-SO2NH2). The reaction of a sulfinyl nitrene with sulfoxides results in a sulfonyl sulfoximide, with the oxygen joining the sulfinyl sulfur and its bond replaced by a double bond to nitrogen. Examples: Trifluoromethyl sulfinyl nitrene (CF3S(O)N) has been produced as a gas and isolated in a noble gas matrix. Methoxysulfinyl nitrene (CH3OS(O)N) was also produced by decomposing the azide. Related: Sulfinyl nitrenes are distinct from sulfonyl nitrenes, which have two oxygen atoms attached to the sulfur atom. Thiazate or thionylimide ([NSO]−) is an anion that exists as alkali metal salts. These salts can be formed by reaction of a metal tert-butoxide with a trimethylsilyl compound: KOtBu + Me3SiNSO → K[NSO] + Me3SiOtBu. NaNSO, KNSO, RbNSO, CsNSO, and the tris(dimethylamino)sulfonium salt (Me2N)3S+NSO− ([(Me2N)3S][NSO]) are known.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**10th edition of Systema Naturae** 10th edition of Systema Naturae: The 10th edition of Systema Naturae is a book written by Swedish naturalist Carl Linnaeus and published in two volumes in 1758 and 1759, which marks the starting point of zoological nomenclature. In it, Linnaeus introduced binomial nomenclature for animals, something he had already done for plants in his 1753 publication of Species Plantarum. Starting point: Before 1758, most biological catalogues had used polynomial names for the taxa included, including earlier editions of Systema Naturae. The first work to consistently apply binomial nomenclature across the animal kingdom was the 10th edition of Systema Naturae. The International Commission on Zoological Nomenclature therefore chose 1 January 1758 as the "starting point" for zoological nomenclature, and asserted that the 10th edition of Systema Naturae was to be treated as if published on that date. Names published before that date are unavailable, even if they would otherwise satisfy the rules. The only work which takes priority over the 10th edition is Carl Alexander Clerck's Svenska Spindlar or Aranei Suecici, which was published in 1757, but is also to be treated as if published on January 1, 1758. Revisions: During Linnaeus' lifetime, Systema Naturae was under continuous revision. Progress was incorporated into new and ever-expanding editions; for example, in his 1st edition (1735), whales and manatees were originally classified as species of fish (as was thought to be the case then). In the 10th edition, they were both moved into the mammal class. Animals: The animal kingdom (as described by Linnaeus): "Animals enjoy sensation by means of a living organization, animated by a medullary substance; perception by nerves; and motion by the exertion of the will. They have members for the different purposes of life; organs for their different senses; and faculties (or powers) for the application of their different perceptions. They all originate from an egg. Their external and internal structure; their comparative anatomy, habits, instincts, and various relations to each other, are detailed in authors who professedly treat on their subjects." The list has been broken down into the original six classes Linnaeus described for animals: Mammalia, Aves, Amphibia, Pisces, Insecta, and Vermes. These classes were ultimately created by studying the internal anatomy, as seen in his key:
Heart with two auricles, two ventricles; warm, red blood. Viviparous: Mammalia. Oviparous: Aves.
Heart with one auricle, one ventricle; cold, red blood. Lungs voluntary: Amphibia. External gills: Pisces.
Heart with one auricle, no ventricles; cold, pus-like blood. Have antennae: Insecta. Have tentacles: Vermes.
By current standards Pisces and Vermes are informal groupings, Insecta also contained arachnids and crustaceans, and one order of Amphibia comprised sharks, lampreys, and sturgeons.
The largest, though fewest in number, inhabit the ocean."Linnaeus divided the mammals based upon the number, situation, and structure of their teeth, into the following orders and genera: Primates: Homo (humans), Simia (monkeys & apes), Lemur (lemurs & colugos) & Vespertilio (bats) Bruta: Elephas (elephants), Trichechus (manatees), Bradypus (sloths), Myrmecophaga (anteaters) & Manis (pangolins) Ferae: Phoca (seals), Canis (dogs & hyenas), Felis (cats), Viverra (mongooses & civets), Mustela (weasels & kin) & Ursus (bears) Bestiae: Sus (pigs), Dasypus (armadillos), Erinaceus (hedgehogs), Talpa (moles), Sorex (shrews) & Didelphis (opossums) Glires: Rhinoceros (rhinoceroses), Hystrix (porcupines), Lepus (rabbits & hares), Castor (beavers), Mus (mice & kin) & Sciurus (squirrels) Pecora: Camelus (camels), Moschus (musk deer), Cervus (deer & giraffes), Capra (goats & antelope), Ovis (sheep) & Bos (cattle) Belluae: Equus (horses) & Hippopotamus (hippopotamuses) Cete: Monodon (narwhals), Balaena (rorquals), Physeter (sperm whales) & Delphinus (dolphins & porpoises) Aves Linnaeus described birds as: "A beautiful and cheerful portion of created nature consisting of animals having a body covered with feathers and down; protracted and naked jaws (the beak), two wings formed for flight, and two feet. They are areal, vocal, swift and light, and destitute of external ears, lips, teeth, scrotum, womb, bladder, epiglottis, corpus callosum and its arch, and diaphragm."Linnaeus divided the birds based upon the characters of the bill and feet, into the following 6 orders and 63 genera: Accipitres: Vultur (vultures & condors), Falco (falcons, eagles, & kin), Strix (owls) & Lanius (shrikes) Picae: Psittacus (parrots), Ramphastos (toucans), Buceros (hornbills), Crotophaga (anis), Corvus (crows & ravens), Coracias (rollers & orioles), Gracula (mynas), Paradisea (birds-of-paradise), Cuculus (cuckoos), Jynx (wrynecks), Picus (woodpeckers), Sitta (nuthatches), Alcedo (kingfishers), Merops (bee-eaters), Upupa (hoopoes), Certhia (treecreepers) & Trochilus (hummingbirds) Anseres: Anas (ducks, geese, & swans), Mergus (mergansers), Alca (auks & puffins), Procellaria (petrels), Diomedea (albatrosses & penguins), Pelecanus (pelicans & kin), Phaethon (tropicbirds), Colymbus (grebes & loons), Larus (gulls), Sterna (terns) & Rhyncops (skimmers) Grallae: Phoenicopterus (flamingoes), Platalea (spoonbills), Mycteria & Tantulus (storks), Ardea (herons, cranes, & kin), Scolopax (godwits, ibises, & kin), Tringa (phalaropes and sandpipers), Charadrius (plovers), Recurvirostra (avocets), Haematopus (oystercatchers), Fulica (coots & kin), Rallus (rails), Psophia (trumpeters), Otis (bustards) & Struthio (ostriches) Gallinae: Pavo (peafowl), Meleagris (turkeys), Crax (curassows), Phasianus (pheasants & chickens) & Tetrao (grouse & kin) Passeres: Columba (pigeons & doves), Alauda (larks & pipits), Sturnus (starlings), Turdus (thrushes), Loxia (cardinals, bullfinches, & kin), Emberiza (buntings), Fringilla (finches), Motacilla (wagtails), Parus (tits & chickadees), Hirundo (swallows & swifts) & Caprimulgus (nightjars) Amphibia Linnaeus described his "Amphibia" (comprising reptiles and amphibians) as: "Animals that are distinguished by a body cold and generally naked; stern and expressive countenance; harsh voice; mostly lurid color; filthy odor; a few are furnished with a horrid poison; all have cartilaginous bones, slow circulation, exquisite sight and hearing, large pulmonary vessels, lobate liver, oblong thick stomach, and cystic, 
hepatic, and pancreatic ducts: they are deficient in diaphragm, do not transpire (sweat), can live a long time without food, are tenacious of life, and have the power of reproducing parts which have been destroyed or lost; some undergo a metamorphosis; some cast (shed) their skin; some appear to live promiscuously on land or in the water, and some are torpid during the winter."Linnaeus divided the amphibians based upon the limb structures and the way they breathed, into the following orders and genera: Reptiles: Testudo (turtles & tortoises), Draco (gliding lizards), Lacerta (terrestrial lizards, salamanders, & crocodilians) & Rana (frogs & toads) Serpentes: Crotalus (rattlesnakes), Boa (boas), Coluber (racers, cobras, & typical snakes), Anguis (slowworms & worm snakes), Amphisbaena (worm lizards) & Coecilia (caecilians) Nantes: Petromyzon (lampreys), Raja (rays), Squalus (sharks), Chimaera (ratfishes), Lophius (anglerfishes) & Acipenser (sturgeons) Pisces Linnaeus described fish as: "Always inhabiting the waters; are swift in their motion and voracious in their appetites. They breathe by means of gills, which are generally united by a bony arch; swim by means of radiate fins, and are mostly covered over with cartilaginous scales. Besides they parts they have in common with other animals, they are furnished with a nictitant membrane, and most of them with a swim-bladder, by the contraction or dilatation of which, they can raise or sink themselves in their element at pleasure."Linnaeus divided the fishes based upon the position of the ventral and pectoral fins, into the following orders and genera: Apodes: Muraena (eels), Gymnotus (electric knifefishes), Trichiurus (cutlassfishes), Anarhichas (wolffishes), Ammodytes (sand eels), Stromateus (butterfishes) & Xiphias (swordfishes) Jugulares: Callionymus (dragonets), Uranoscopus (stargazers), Trachinus (weevers), Gadus (cod & kin) & Ophidion (cusk-wels) Thoracici: Cyclopterus (lumpfishes), Echeneis (remoras), Coryphaena (dolphinfishes), Gobius (gobies), Cottus (sculpins), Scorpaena (scorpionfishes), Zeus (john dories), Pleuronectes (flatfishes), Chaetodon (butterflyfishes), Sparus (breams & porgies), Labrus (wrasses), Sciaena (snappers), Perca (perch), Gasterosteus (sticklebacks), Scomber (mackerel & tuna), Mullus (goatfishes) & Trigla (sea robins) Abdominales: Cobitis (loaches), Silurus (catfishes), Loricaria (suckermouth catfishes), Salmo (salmon & trout), Fistularia (cornetfishes), Esox (pike), Argentina (herring smelts), Atherina (silversides), Mugil (mullet), Exocoetus (flying fishes), Polynemus (threadfins), Clupea (herring) & Cyprinus (carp) Branchiostegi: Mormyrus (elephantfishes), Balistes (triggerfishes), Ostracion (boxfishes), Tetrodon (pufferfishes), Diodon (porcupinefishes), Centriscus (snipefishes), Syngnathus (pipefishes & seahorses) & Pegasus (seamoths) Insecta Linnaeus described his "Insecta" (comprising all arthropods, including insects, crustaceans, arachnids and others) as: "A very numerous and various class consisting of small animals, breathing through lateral spiracles, armed on all sides with a bony skin, or covered with hair; furnished with many feet, and moveable antennae (or horns), which project from the head, and are the probable instruments of sensation."Linnaeus divided the insects based upon the form of the wings, into the following orders and genera: Coleoptera: Scarabaeus (scarab beetles), Dermestes (larder beetles), Hister (clown beetles), Attelabus (leaf-rolling weevils), Curculio (true weevils), Silpha 
(carrion beetles), Coccinella (ladybirds or ladybugs), Cassida (tortoise beetles), Chrysomela (leaf beetles), Meloe (blister beetles), Tenebrio (darkling beetles), Mordella (tumbling flower beetles), Staphylinus (rove beetles), Cerambyx (longhorn beetles), Cantharis (soldier beetles), Elater (click beetles), Cicindela (ground beetles), Buprestis (jewel beetles), Dytiscus (Dytiscidae), Carabus (Carabus species), Necydalis (necydaline beetles), Forficula (earwigs), Blatta (cockroaches) & Gryllus (other orthopteroid insects) Hemiptera: Cicada (cicadas), Notonecta (backswimmers), Nepa (water scorpions), Cimex (bedbugs), Aphis (aphids), Chermes (woolly aphids), Coccus (scale insects) & Thrips (thrips) Lepidoptera: Papilio (butterflies), Sphinx (hawk moths), Phalaena (moths) Neuroptera: Libellula (dragonflies & damselflies), Ephemera (mayflies), Phryganea (caddisflies), Hemerobius (lacewings), Panorpa (scorpionflies) & Raphidia (snakeflies) Hymenoptera: Cynips (Gall wasps), Tenthredo (sawflies), Ichneumon (ichneumon wasps), Sphex (digger wasps), Vespa (hornets), Apis (bees), Formica (ants) & Mutilla (velvet ants) Diptera: Oestrus (botflies), Tipula (crane flies), Musca (house flies), Tabanus (horse flies), Culex (mosquitoes), Empis (dance flies), Conops (thick-headed flies), Asilus (robber flies), Bombylius (bee flies) & Hippobosca (louse flies) Aptera: Lepisma (silverfish), Podura (springtails), Termes (termites), Pediculus (lice), Pulex (fleas), Acarus (mites & ticks), Phalangium (harvestmen), Aranea (spiders), Scorpio (scorpions), Cancer (crabs, lobsters and kin), Monoculus (water fleas & kin), Oniscus (woodlice), Scolopendra (centipedes) & Julus (millipedes) Vermes Linnaeus described his "Vermes" as: "Animals of slow motion, soft substance, able to increase their bulk and restore parts which have been destroyed, extremely tenacious of life, and the inhabitants of moist places. Many of them are without a distinct head, and most of them without feet. They are principally distinguished by their tentacles (or feelers). 
By the Ancients they were not improperly called imperfect animals, as being destitute of ears, nose, head, eyes and legs; and are therefore totally distinct from Insects."Linnaeus divided the "Vermes" based upon the structure of the body, into the following orders and genera: Intestina: Gordius (horsehair worms), Furia, Lumbricus (earthworms), Ascaris (giant intestinal roundworms), Fasciola (liver flukes), Hirudo (leeches), Myxine (hagfishes), Teredo (shipworms) Mollusca: Limax (terrestrial slugs), Doris (dorid nudibranchs), Tethys (tethydid sea slugs), Nereis (polychaete worms), Aphrodita (sea mice), Lernaea (anchor worms), Priapus (priapulid worms & sea anemones), Scyllaea (scyllaeid sea slugs), Holothuria (salps & Portuguese Man o' War), Triton (triton shells), Sepia (octopuses, squids, & cuttlefishes), Medusa (jellyfishes), Asterias (starfishes), Echinus (sea urchins) Testacea: Chiton (chitons), Lepas (barnacles), Pholas (piddocks & angelwings), Myes (soft-shell clams), Solen (saltwater clams), Tellina (tellinid shellfishes), Cardium (cockles), Donax (wedge shells), Venus (Venus clams), Spondylus (thorny oysters), Chama (jewel box shells), Arca (ark clams), Ostrea (true oysters), Anomia (saddle oysters), Mytilus (saltwater mussels), Pinna (pen shells), Argonauta (paper nautiluses), Nautilus (nautiluses), Conus (cone snails), Cypraea (cowries), Bulla (bubble shells), Voluta (volutes), Buccinum (true whelks), Strombus (true conches), Murex (murex snails), Trochus (top snails), Turbo (turban snails), Helix (terrestrial snails), Neritha (nerites), Haliotis (abalones), Patella (true limpets and brachiopods), Dentalium (tusk shells), Serpula (serpulid worms) Lithophyta: Tubipora (organ pipe corals), Millepora (fire corals), Madrepora (stone corals) Zoophyta: Isis (soft corals), Gorgonia (sea fans), Alcyonium (tunicates), Tubularia (Tubularia), Eschara (Bryozoa), Corallina (coralline algae), Sertularia (Bryozoa), Hydra, Pennatula (sea pens), Taenia (tapeworms), Volvox Plants: The second volume, published in 1759, detailed the kingdom Plantae, in which Linnaeus included true plants, as well as fungi, algae and lichens. In addition to repeating the species he had previously listed in his Species Plantarum (1753), and those published in the intervening period, Linnaeus described several hundred new plant species. The species from Species Plantarum were numbered sequentially, while the new species were labelled with letters. Many were sent to Linnaeus by his correspondents overseas, including Johannes Burman and David de Gorter in South Africa, Patrick Browne, Philip Miller and John Ellis in America, Jean-François Séguier, Carlo Allioni and Casimir Christoph Schmidel in the Alps, Gorter and Johann Ernst Hebenstreit in the Orient, and François Boissier de Sauvages de Lacroix, Gerard and Barnadet Gabriel across Europe.New plant species described in the 10th edition of Systema Naturae include:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Charades** Charades: Charades is a parlor or party word guessing game. Originally, the game was a dramatic form of literary charades: a single person would act out each syllable of a word or phrase in order, followed by the whole phrase together, while the rest of the group guessed. A variant was to have teams who acted scenes out together while the others guessed. Today, it is common to require the actors to mime their hints without using any spoken words, which requires some conventional gestures. Puns and visual puns were and remain common. History: Literary charades A charade was a form of literary riddle popularized in France in the 18th century where each syllable of the answer was described enigmatically as a separate word before the word as a whole was similarly described. The term charade was borrowed into English from French in the second half of the eighteenth century, denoting a "kind of riddle in which each syllable of a word, or a complete word or phrase, is enigmatically described or dramatically represented". Written forms of charade appeared in magazines and books, and on the folding fans of the Regency. The answers were sometimes printed on the reverse of the fan, suggesting that they were a flirting device, used by a young woman to tease her beau. One charade composed by Jane Austen has "hem-lock" as its answer. History: William Mackworth Praed's poetic charades became famous. Later examples omitted direct references to individual syllables; one such riddle, said to be a favorite of Theodore Roosevelt, has "an actor" as its answer. History: In the early 20th century, the 11th edition of the Encyclopædia Britannica offered these two prose charades as "perhaps as good as could be selected": "My first, with the most rooted antipathy to a Frenchman, prides himself, whenever they meet, upon sticking close to his jacket; my second has many virtues, nor is its least that it gives its name to my first; my whole may I never catch!". History: and "My first is company; my second shuns company; my third collects company; and my whole amuses company", with the answers being tartar and conundrum. History: Acted charades In the early 19th century, the French began performing "acting" or "acted charades"—with the written description replaced by dramatic performances as a parlor game—and this was brought over to Britain by the English aristocracy. Thus the term gradually became more popularly used to refer to acted charades, examples of which are described in William Thackeray's Vanity Fair and in Charlotte Brontë's Jane Eyre. Thackeray snarked that charades were enjoyed for "enabling the many ladies amongst us who had beauty to display their charms, and the fewer number who had cleverness, to exhibit their wit". In his Vanity Fair, the height of Rebecca Sharp's social success is brought on by her performances of acting charades before the Prince Regent. The first scene—"first two syllables"—displays a Turkish lord dealing with a slaver and his odalisque before being garroted by the sultan's chief black eunuch; the second—"last two syllables"—finds a Turk, his consort, and his black slave praying at sunrise when an enormous Egyptian head enters and begins singing. The answer—Agamemnon—is then acted out by Becky's husband, while she makes her (first) appearance as Clytemnestra.
After refreshments, another round begins, partially in pantomime: the first scene shows a household yawningly finishing a game of cribbage and preparing for bed; the second opens on the household bustling with activity as daybreak prompts bells ringing, arguments over receipts, collection of the chamber pots, calls for carriages, and greetings to new guests; the third closes with a ship's crew and passengers tossed about by a storm with strong winds. The answer—nightingale—is then (somewhat mistakenly) acted out by Becky in the role of a singing French marquise, recalling both Lacoste's 1705 tragic opera Philomèle and an arriviste lover and wife of Louis XIV. Apart from its importance in the book, the scenes were subsequently considered models of the genre.By the time of the First World War, "acting charades" had become the most popular form and, as written charades were forgotten, it adopted its present, terser name. Thackeray's scenes—even those said to be "in pantomime"—included dialogue from the actors but truly "dumb" or "mime charades" gradually became more popular as well and similarly dropped their descriptive adjectives. The amateurish acting involved in charades led to the word's use to describe any obvious or inept deception, but over time "a charade" became used more broadly for any put-on (even highly competent and successful ones) and its original association with the parlor game has largely been lost.The acted form of charades has been repeatedly made into television game shows, including the American Play the Game, Movietown, RSVP, Pantomime Quiz, Stump the Stars, Celebrity Charades, Showoffs and Body Language; the British Give Us a Clue; the Canadian Party Game and Acting Crazy; and the Australian Celebrity Game. On Britain's BBC Radio 4, I'm Sorry I Haven't a Clue performs a variant of the old written and spoken form of the game as Sound Charades. History: In the 1939 movie The Mystery of Mr. Wong, the game is called "Indications". Rules: As a long-lived and informal game, charades' rules can vary widely. Common features of the game include holding up a number of fingers to indicate the number of syllables in the answer, silently replying to questions, and making a "come on" gesture once the guesses become close; some forms of the games, however, forbid anything except physically acting out the answer. In a mixed setting, it is therefore advisable to clarify the rules before play begins. Rules: Common features of the modern game include: Players are not allowed to play people or actors etc. Players divided into two or more exclusive teams. Rules: A notebook or scraps of paper, used for one team to write the answer(s) to be performed by a member of the other side. The answer(s) may be restricted to dictionary words, titles of artistic works, etc. to limit the difficulty. Words which cannot be explained other than by spelling (e.g., the or of) may be excluded from play except within larger phrases. Rules: A silent performance by the player to his or her teammates. To enforce a focus on physical acting out of the clues, silent mouthing of the words for lipreading, spelling, and pointing are generally banned. Humming, clapping, and other noises may be banned as well. A clock, timer, hourglass, etc. to limit the teams' guesses. A scoreboard or sheet to tally the teams' points: one for every correctly guessed answer and one for every answer the opposing team failed to guess within the allotted time. Alternation of teams until every player has acted at least once. 
Common signals: The following gestures are commonly used in the game:
A number of fingers at the beginning of play gives the number of words in the answer.
Holding the number on the opposite inside elbow denotes the number of syllables in a particular word.
Pointing at or tugging on an earlobe means "sounds like".
Moving hands or fingers closer together without touching means "shorter".
Holding the hands or fingers close together without touching indicates a short word such as "if" or "of" that is difficult to act out on its own.
A "T" gesture, like "time out", means "the".
Moving hands or fingers farther apart means "more", which is to encourage answering a longer form of the same word.
"Come on", "close", or "keep guessing" may be indicated by any "come here" gesture or by holding one's hands toward each other and spinning them in circles.
"More" or "add a suffix" may be indicated by similar movements or by miming the act of stretching out a rubber band.
"I" may be signed either by gesturing to one's chest or eye.
"Yes, correct", in addition to more general signs such as nodding, is often expressed in charades by pointing at or touching the nose with one hand while pointing at the correct guesser with the other, signifying "on the nose".
In India, thumbs up means the English language, thumbs down means Hindi, and a horizontal thumb means a state language such as Marathi, Gujarati, or Kannada.
The "OK sign" can mean 3, 0, or the middle finger (in Portuguese).
Some of these signs may be banned from some forms of the game.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kostant partition function** Kostant partition function: In representation theory, a branch of mathematics, the Kostant partition function, introduced by Bertram Kostant (1958, 1959), of a root system Δ is the number of ways one can represent a vector (weight) as a non-negative integer linear combination of the positive roots Δ+ ⊂ Δ. Kostant used it to rewrite the Weyl character formula as a formula (the Kostant multiplicity formula) for the multiplicity of a weight of an irreducible representation of a semisimple Lie algebra. An alternative formula, that is more computationally efficient in some cases, is Freudenthal's formula. Kostant partition function: The Kostant partition function can also be defined for Kac–Moody algebras and has similar properties. Examples: A2 Consider the A2 root system, with positive roots α1, α2, and α3 := α1 + α2. If an element μ can be expressed as a non-negative integer linear combination of α1, α2, and α3, then since α3 = α1 + α2, it can also be expressed as a non-negative integer linear combination of the positive simple roots α1 and α2: μ = n1α1 + n2α2, with n1 and n2 being non-negative integers. This expression gives one way to write μ as a non-negative integer combination of positive roots; other expressions can be obtained by replacing α1 + α2 with α3 some number of times. We can do the replacement k times, where 0 ≤ k ≤ min(n1, n2). Thus, if the Kostant partition function is denoted by p, we obtain the formula p(n1α1 + n2α2) = 1 + min(n1, n2). If an element μ is not of the form μ = n1α1 + n2α2 with n1, n2 non-negative, then p(μ) = 0. B2 The partition function for the other rank-2 root systems is more complicated but known explicitly. For B2, the positive simple roots are α1 = (1,0), α2 = (0,1), and the positive roots are the simple roots together with α3 = (1,1) and α4 = (2,1). The partition function can be viewed as a function of two non-negative integers n1 and n2, which represent the element n1α1 + n2α2. Then the partition function P(n1, n2) can be defined piecewise with the help of two auxiliary functions. Examples: If n1 ≤ n2, then P(n1, n2) = b(n1). If n2 ≤ n1 ≤ 2n2, then P(n1, n2) = q2(n2) − b(2n2 − n1 − 1) = b(n1) − q2(n1 − n2 − 1). If 2n2 ≤ n1, then P(n1, n2) = q2(n2). The auxiliary functions are defined for n ≥ 1 and are given by q2(n) = (n+1)(n+2)/2 and b(n) = (n+2)²/4 for n even, (n+1)(n+3)/4 for n odd. G2 For G2, the positive roots are (1,0), (0,1), (1,1), (2,1), (3,1) and (3,2), with (1,0) denoting the short simple root and (0,1) denoting the long simple root. The partition function is defined piecewise with the domain divided into five regions, with the help of two auxiliary functions. Relation to the Weyl character formula: Inverting the Weyl denominator For each root α and each H ∈ h, we can formally apply the formula for the sum of a geometric series to obtain 1/(1 − e^{−α(H)}) = 1 + e^{−α(H)} + e^{−2α(H)} + ⋯, where we do not worry about convergence—that is, the equality is understood at the level of formal power series. Using Weyl's denominator formula ∑_{w∈W} (−1)^{ℓ(w)} e^{(w⋅ρ)(H)} = e^{ρ(H)} ∏_{α>0} (1 − e^{−α(H)}), we obtain a formal expression for the reciprocal of the Weyl denominator: 1 / ∑_{w∈W} (−1)^{ℓ(w)} e^{(w⋅ρ)(H)} = e^{−ρ(H)} ∏_{α>0} (1 + e^{−α(H)} + e^{−2α(H)} + e^{−3α(H)} + ⋯) = e^{−ρ(H)} ∑_μ p(μ) e^{−μ(H)}. Here, the first equality is obtained by taking a product over the positive roots of the geometric series formula, and the second equality is obtained by counting all the ways a given exponential e^{−μ(H)} can occur in the product. The function ℓ(w) is the length of the Weyl group element w, so that (−1)^{ℓ(w)} is the determinant of w: it is 1 if w is a rotation and −1 if w is a reflection.
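The closed form p(n1α1 + n2α2) = 1 + min(n1, n2) for A2 can be checked directly by brute force. The sketch below (an illustrative script, not part of the article) counts the non-negative integer solutions of k1·α1 + k2·α2 + k3·(α1 + α2) = n1α1 + n2α2 and compares the count with the closed form.

```python
def kostant_p_A2_closed(n1: int, n2: int) -> int:
    """Closed form for the A2 Kostant partition function: 1 + min(n1, n2)."""
    if n1 < 0 or n2 < 0:
        return 0
    return 1 + min(n1, n2)

def kostant_p_A2_brute(n1: int, n2: int) -> int:
    """Count ways to write n1*a1 + n2*a2 as non-negative multiples of the
    positive roots a1, a2 and a3 = a1 + a2."""
    if n1 < 0 or n2 < 0:
        return 0
    count = 0
    for k3 in range(min(n1, n2) + 1):      # copies of a3 = a1 + a2
        k1, k2 = n1 - k3, n2 - k3          # remaining copies of a1 and a2
        if k1 >= 0 and k2 >= 0:
            count += 1
    return count

if __name__ == "__main__":
    for n1 in range(6):
        for n2 in range(6):
            assert kostant_p_A2_brute(n1, n2) == kostant_p_A2_closed(n1, n2)
    print("closed form 1 + min(n1, n2) verified for 0 <= n1, n2 <= 5")
```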
Relation to the Weyl character formula: Rewriting the character formula This argument shows that we can convert the Weyl character formula for the irreducible representation with highest weight λ, ch(V) = (∑_{w∈W} (−1)^{ℓ(w)} e^{(w⋅(λ+ρ))(H)}) / (∑_{w∈W} (−1)^{ℓ(w)} e^{(w⋅ρ)(H)}), from a quotient to a product: ch(V) = (∑_{w∈W} (−1)^{ℓ(w)} e^{(w⋅(λ+ρ))(H)}) (e^{−ρ(H)} ∑_μ p(μ) e^{−μ(H)}). Relation to the Weyl character formula: The multiplicity formula Using the preceding rewriting of the character formula, it is relatively easy to write the character as a sum of exponentials. The coefficients of these exponentials are the multiplicities of the corresponding weights. We thus obtain a formula for the multiplicity of a given weight μ in the irreducible representation with highest weight λ: mult(μ) = ∑_{w∈W} (−1)^{ℓ(w)} p(w⋅(λ+ρ) − (μ+ρ)). This result is the Kostant multiplicity formula. The dominant term in this formula is the term w = 1; the contribution of this term is p(λ − μ), which is just the multiplicity of μ in the Verma module with highest weight λ. If λ is sufficiently far inside the fundamental Weyl chamber and μ is sufficiently close to λ, it may happen that all other terms in the formula are zero. Specifically, unless w⋅(λ+ρ) is higher than μ+ρ, the value of the Kostant partition function on w⋅(λ+ρ) − (μ+ρ) will be zero. Thus, although the sum is nominally over the whole Weyl group, in most cases the number of nonzero terms is smaller than the order of the Weyl group.
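The multiplicity formula can also be exercised on the A2 example from the previous section. The sketch below is illustrative only: it uses the standard facts that the Weyl group of A2 has six elements generated by the two simple reflections and that ρ = α1 + α2 in the root basis, and it picks the adjoint representation of sl(3) (highest weight α1 + α2) as a test case, for which the zero weight should have multiplicity 2.

```python
def p_A2(v):
    """Kostant partition function for A2; v = (n1, n2) in the simple-root basis."""
    n1, n2 = v
    return 1 + min(n1, n2) if n1 >= 0 and n2 >= 0 else 0

# Simple reflections acting on root-basis coordinates (n1, n2):
#   s1 sends a1 -> -a1 and a2 -> a1 + a2, so (n1, n2) -> (n2 - n1, n2)
#   s2 sends a2 -> -a2 and a1 -> a1 + a2, so (n1, n2) -> (n1, n1 - n2)
def s1(v): return (v[1] - v[0], v[1])
def s2(v): return (v[0], v[0] - v[1])

def weyl_group_A2():
    """The 6 Weyl group elements, keyed by the images of (a1, a2), with det as value."""
    elements, frontier, seen = {}, [((1, 0), (0, 1), +1)], set()
    while frontier:
        a1_img, a2_img, sign = frontier.pop()
        key = (a1_img, a2_img)
        if key in seen:
            continue
        seen.add(key)
        elements[key] = sign
        for s in (s1, s2):  # compose with a simple reflection; determinant flips
            frontier.append((s(a1_img), s(a2_img), -sign))
    return elements

def apply(images, v):
    """Apply the element with the given images of a1, a2 to v = n1*a1 + n2*a2."""
    a1_img, a2_img = images
    return (v[0] * a1_img[0] + v[1] * a2_img[0],
            v[0] * a1_img[1] + v[1] * a2_img[1])

def multiplicity(lam, mu, rho=(1, 1)):
    """Kostant multiplicity formula for A2, weights written in the root basis."""
    lam_rho = (lam[0] + rho[0], lam[1] + rho[1])
    mu_rho = (mu[0] + rho[0], mu[1] + rho[1])
    total = 0
    for images, sign in weyl_group_A2().items():
        w_lam_rho = apply(images, lam_rho)
        total += sign * p_A2((w_lam_rho[0] - mu_rho[0], w_lam_rho[1] - mu_rho[1]))
    return total

if __name__ == "__main__":
    # Adjoint representation of sl(3): highest weight a1 + a2; the zero weight
    # (spanned by the Cartan subalgebra) has multiplicity 2.
    print(multiplicity(lam=(1, 1), mu=(0, 0)))  # -> 2
```

As the text notes, only the identity term contributes here for weights close to the highest weight; the other five Weyl group terms involve the partition function at vectors with a negative coordinate and therefore vanish.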
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Suprameatal spine** Suprameatal spine: The inner end of the external acoustic meatus is closed, in the recent state, by the tympanic membrane; the upper limit of its outer orifice is formed by the posterior root of the zygomatic process, immediately below which there is sometimes seen a small spine, the suprameatal spine, also called the spine of Henle, situated at the upper and posterior part of the orifice.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**E♭ (musical note)** E♭ (musical note): E♭ (E-flat) or mi bémol is the fourth semitone of the solfège. It lies a diatonic semitone above D and a chromatic semitone below E, thus being enharmonic to D♯ (D-sharp) or re dièse. In equal temperament it is also enharmonic with F𝄫 (F-double flat). However, in some temperaments, D♯ is not the same as E♭. E♭ is a perfect fourth above B♭, whereas D♯ is a major third above B. When calculated in equal temperament with a reference of A above middle C as 440 Hz, the frequency of the E♭ above middle C (or E♭4) is approximately 311.127 Hz. See pitch (music) for a discussion of historical variations in frequency. In German nomenclature, it is known as Es, sometimes (especially in the context of musical motifs, e.g. DSCH motif) abbreviated to S. Scales: Common scales beginning on E♭
E♭ major: E♭ F G A♭ B♭ C D E♭
E♭ natural minor: E♭ F G♭ A♭ B♭ C♭ D♭ E♭
E♭ harmonic minor: E♭ F G♭ A♭ B♭ C♭ D E♭
E♭ melodic minor ascending: E♭ F G♭ A♭ B♭ C D E♭
E♭ melodic minor descending: E♭ D♭ C♭ B♭ A♭ G♭ F E♭
Diatonic scales
E♭ Ionian: E♭ F G A♭ B♭ C D E♭
E♭ Dorian: E♭ F G♭ A♭ B♭ C D♭ E♭
E♭ Phrygian: E♭ F♭ G♭ A♭ B♭ C♭ D♭ E♭
E♭ Lydian: E♭ F G A B♭ C D E♭
E♭ Mixolydian: E♭ F G A♭ B♭ C D♭ E♭
E♭ Aeolian: E♭ F G♭ A♭ B♭ C♭ D♭ E♭
E♭ Locrian: E♭ F♭ G♭ A♭ B𝄫 C♭ D♭ E♭
Jazz melodic minor
E♭ ascending melodic minor: E♭ F G♭ A♭ B♭ C D E♭
E♭ Dorian ♭2: E♭ F♭ G♭ A♭ B♭ C D♭ E♭
E♭ Lydian augmented: E♭ F G A B C D E♭
E♭ Lydian dominant: E♭ F G A B♭ C D♭ E♭
E♭ Mixolydian ♭6: E♭ F G A♭ B♭ C♭ D♭ E♭
E♭ Locrian ♮2: E♭ F G♭ A♭ B𝄫 C♭ D♭ E♭
E♭ altered: E♭ F♭ G♭ A𝄫 B𝄫 C♭ D♭ E♭
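The 311.127 Hz figure quoted above follows directly from twelve-tone equal temperament, since E♭4 sits six semitones below the A4 = 440 Hz reference. A minimal sketch (illustrative, not from the source):

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12).
A4 = 440.0                 # reference pitch, Hz
semitones_below_a4 = 6     # A4 -> A♭4 -> G4 -> G♭4 -> F4 -> E4 -> E♭4
eflat4 = A4 * 2 ** (-semitones_below_a4 / 12)
print(f"E♭4 ≈ {eflat4:.3f} Hz")   # ≈ 311.127 Hz
```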
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hard-paste porcelain** Hard-paste porcelain: Hard-paste porcelain, sometimes called "true porcelain", is a ceramic material that was originally made from a compound of the feldspathic rock petuntse and kaolin fired at a very high temperature, usually around 1400 °C. It was first made in China around the 7th or 8th century and has remained the most common type of Chinese porcelain. From the Middle Ages onwards, it was very widely exported and admired by other cultures and fetched huge prices on foreign markets. Eventually Korean porcelain developed in the 14th century and Japanese porcelain in the 17th, but other cultures were unable to learn or reproduce the secret of its formula in terms of materials and firing temperature until it was worked out in Europe in the early 18th century and suitable mineral deposits of kaolin, feldspar, and quartz were discovered. This soon led to a large production in factories across Europe by the end of the 18th century. Despite the huge influence of Chinese porcelain decoration on Islamic pottery, historic production in the Islamic world was all in earthenware or fritware, the latter having some of the properties of hard-paste porcelain. Europeans also developed soft-paste porcelain, fired at lower temperatures (around 1200 °C), while trying to copy the Chinese, and later bone china, which in modern times has somewhat replaced hard-paste around the world, even in China. History: Chinese porcelain began to be exported to Europe by the Portuguese and later by the Dutch from the middle of the 16th century, creating vast demand for the material. The discovery in Europe of the secret of its manufacture has conventionally been credited to Johann Friedrich Böttger of Meissen, Germany in 1708, but it has also been claimed that English manufacturers or Ehrenfried Walther von Tschirnhaus produced porcelain first. Certainly, the Meissen porcelain factory, established in 1710, was the first to produce porcelain in Europe in large quantities, and since the recipe was kept a trade secret by Böttger for his company, experiments continued elsewhere throughout Europe. Vienna porcelain became the second European manufacturer in 1718, followed by Vezzi porcelain in Venice in 1720. History: In 1712, the French Jesuit François Xavier d'Entrecolles described the Chinese process of manufacturing porcelain in his letters to Europe. In 1771, the comte de Milly published L'art de la porcelaine, a detailed account of the processes of creating hard-paste porcelain, ending its prestige as a rare and valuable material. Hard-paste now chiefly refers to formulations prepared from mixtures of kaolin, feldspar and quartz. Other raw materials can also be used and these include pottery stones, which historically were known as petuntse, although this name has long fallen out of use. Characteristics: Hard-paste porcelain is now differentiated from soft-paste porcelain mainly by the firing temperature, with the former being higher, to around 1400 °C, and the latter to around 1200 °C. Depending on the raw materials and firing methods used, hard-paste porcelain can also resemble stoneware or earthenware. Hard-paste porcelain can also be used for unglazed biscuit porcelain. It is a translucent and bright, white ceramic. Hard-paste has the advantage over soft-paste that it is less likely to crack when exposed to hot liquids, but the higher firing temperature of hard-paste may necessitate a second "glost" firing for the decoration.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shot silk** Shot silk: Shot silk (also called changeant, changeable silk, changeable taffeta, cross-color, changeable fabric, or "dhoop chaon" ("sunshine shade")) is a fabric which is made up of silk woven from warp and weft yarns of two or more colours, producing an iridescent appearance. A "shot" is a single throw of the bobbin that carries the weft thread through the warp, and shot silk colours can be described as "[warp colour] shot with [weft colour]." The weaving technique can also be applied to other fibres such as cotton, linen, and synthetics. History: A shot silk vestment of purple and yellow dating from about 698 is described in detail in a document written in about 1170, showing that the technique has existed since at least the 7th century. An argument has been made that shot silk was also described as purpura at this time, the Latin word mainly applied to purple, although there are multiple references to purpura being red, green and black-and-red, as well as "varied". Purpura is also used to mean iridescence and the play of light, and contemporary descriptions exist indicating that the textile purpura was a type of silk distinct from other silks in assorted colours. It has also been suggested that illuminations in the Lindisfarne Gospels of c.700 show garments of shot silk being worn by the Four Evangelists. Shot silks were popular in the 18th and 19th centuries, including warp printing, where the warp was printed before weaving to create chiné or "Pompadour taffeta". Current use: Shot silks are used today to make neckties and other garments. Notably, some forms of academic dress use shot silks, such as those of the University of Wales and the University of Cambridge. For example, the robes of a Cambridge Doctor of Divinity are faced with "dove" silk, which is turquoise shot with rose-pink to create an overall grey effect.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Buchholz hydra** Buchholz hydra: In mathematical logic, especially in graph theory and number theory, the Buchholz hydra game is a type of hydra game, a single-player game based on the idea of chopping pieces off of a mathematical tree. The hydra game can be used to generate a rapidly growing function, BH(n), which eventually dominates all recursive functions that are provably total in $\mathsf{ID}_\nu$, and the termination of all hydra games is not provable in $(\Pi^1_1\text{-}\mathsf{CA}) + \mathsf{BI}$. Rules: The game is played on a hydra, a finite, rooted, connected tree A with the following properties: The root of A has a special label, usually denoted +. Any other node of A has a label ν ≤ ω. All nodes directly above the root of A have the label 0. If the player decides to remove the top node σ of A, the hydra will then choose an arbitrary n ∈ ℕ, where n is the current turn number, and then transform itself into a new hydra A(σ,n) as follows. Let τ represent the parent of σ, and let A− represent the part of the hydra which remains after σ has been removed. The definition of A(σ,n) depends on the label of σ: If the label of σ is 0 and τ is the root of A, then A(σ,n) = A−. If the label of σ is 0 but τ is not the root of A, we make n copies of τ and all its children and add edges between them and τ's parent. This new tree is A(σ,n). If the label of σ is u for some u ∈ ℕ, we label the first node below σ with label v < u as ε. B is then the subtree obtained by starting with Aε and replacing the label of ε with u−1 and the label of σ with 0. A(σ,n) is then obtained by taking A and replacing σ with B. In this case, the value of n does not matter. Rules: If the label of σ is ω, A(σ,n) is obtained by replacing the label of σ with n+1. If σ is the rightmost head of A, we write A(n). A series of moves is called a strategy, and a strategy is called a winning strategy if, after a finite number of moves, the hydra reduces to its root. It has been proven that the game always terminates, even though the hydra can get taller by massive amounts. Hydra theorem: Buchholz's paper in 1987 showed that the canonical correspondence between a hydra and an infinitary well-founded tree (or the corresponding term in the notation system T associated to Buchholz's function, which does not necessarily belong to the ordinal notation system OT ⊂ T) preserves fundamental sequences: the strategy of choosing the rightmost leaves corresponds to the (n) operation on an infinitary well-founded tree, or to the [n] operation on the corresponding term in T. The hydra theorem for the Buchholz hydra, stating that there are no losing strategies for any hydra, is unprovable in $\Pi^1_1\text{-}\mathsf{CA} + \mathsf{BI}$. BH(n): Suppose a tree consists of just one branch with x nodes, labelled +, 0, ω, ..., ω. Call such a tree $R_x$. It cannot be proven in $\Pi^1_1\text{-}\mathsf{CA} + \mathsf{BI}$ that for all x, there exists k such that $R_x(1)(2)(3)\ldots(k)$ is a winning strategy. (The latter expression means taking the tree $R_x$, then transforming it with n = 1, then n = 2, then n = 3, etc. up to n = k.) Define BH(x) as the smallest k such that $R_x(1)(2)(3)\ldots(k)$ as defined above is a winning strategy. By the hydra theorem, this function is well-defined, but its totality cannot be proven in $\Pi^1_1\text{-}\mathsf{CA} + \mathsf{BI}$. Hydras grow extremely fast, because the number of turns required to kill $R_x(1)(2)$ is larger than Graham's number or even the number of turns required to kill a Kirby–Paris hydra; and $R_x(1)(2)(3)(4)(5)(6)$ has an entire Kirby–Paris hydra as its branch.
To be precise, its rate of growth is believed to be comparable to $f_{\psi_0(\varepsilon_{\Omega_\omega+1})}(x)$ with respect to an unspecified system of fundamental sequences, though without a proof. Here, $\psi_0$ denotes Buchholz's function, and $\psi_0(\varepsilon_{\Omega_\omega+1})$ is the Takeuti–Feferman–Buchholz ordinal, which measures the strength of $\Pi^1_1\text{-}\mathsf{CA} + \mathsf{BI}$. The first two values of the BH function are virtually degenerate: BH(1) = 0 and BH(2) = 1. Similarly to the weak tree function, BH(3) is very large, but not extremely so. The Buchholz hydra function eventually surpasses TREE(n) and SCG(n), yet it is likely weaker than Loader's number as well as numbers from finite promise games. Analysis: It is possible to make a one-to-one correspondence between some hydras and ordinals. To convert a tree or subtree to an ordinal: Inductively convert all the immediate children of the node to ordinals. Add up those child ordinals. If there were no children, this will be 0. If the label of the node is not +, apply $\psi_\alpha$, where α is the label of the node, and ψ is Buchholz's function. The resulting ordinal expression is only useful if it is in normal form.
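To make the chopping rules above concrete, here is a minimal Python sketch (not from the source; the tree representation and function names are illustrative). It implements only the two label-0 cases and the ω case; the case of a finite positive label, which grafts a relabelled copy of a subtree, is omitted for brevity.

```python
import copy

# A hydra node is a two-element list [label, children]; labels are "+"
# (root only), a non-negative integer, or "w" (standing in for ω).
def node(label, children=None):
    return [label, children if children is not None else []]

def chop(hydra, path, n):
    """Remove/transform the head reached by `path` (a list of child indices
    starting from the root), with the hydra answering with turn number n.
    Only the two label-0 cases and the ω case from the rules above are handled."""
    hydra = copy.deepcopy(hydra)
    nodes = [hydra]
    for i in path:                       # walk down to sigma
        nodes.append(nodes[-1][1][i])
    sigma, tau = nodes[-1], nodes[-2]

    if sigma[0] == "w":
        # ω rule: relabel sigma with n + 1; nothing is removed.
        sigma[0] = n + 1
    elif sigma[0] == 0 and tau[0] == "+":
        # Label 0, parent is the root: simply delete sigma.
        tau[1] = [c for c in tau[1] if c is not sigma]
    elif sigma[0] == 0:
        # Label 0, parent tau is not the root: delete sigma, then attach
        # n extra copies of tau (with its remaining children) to tau's parent.
        grandparent = nodes[-3]
        tau[1] = [c for c in tau[1] if c is not sigma]
        grandparent[1].extend(copy.deepcopy(tau) for _ in range(n))
    else:
        raise NotImplementedError("finite positive labels are not handled here")
    return hydra

# Example: R_3 is the single branch + -- 0 -- ω; chopping its ω head on
# turn 1 relabels that head with 2.
r3 = node("+", [node(0, [node("w")])])
print(chop(r3, [0, 0], 1))   # ['+', [[0, [[2, []]]]]]
```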
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Buy a Shotgun** Buy a Shotgun: "Buy a Shotgun" is a phrase spoken by then Vice-President of the United States Joe Biden during a video question and answer session hosted by Parents Magazine in 2013. During the session, Biden questioned the utility of a semi-automatic rifle as a home defense weapon, suggesting a shotgun was more appropriate. He went on to explain that he owned two shotguns and had advised the Second Lady of the United States, Jill Biden, to use one of them to "fire two blasts" should she feel threatened by someone or something. The advice later became a subject of interest in social media and popular culture. Background: Early 2013 comments on shotguns In January 2013, during an interactive chat session on the social networking site Google+, then vice-president of the United States Joe Biden responded to a question about personal protection in the wake of a natural disaster by explaining the prudence in buying "some shotgun shells" to repel looters. Background: "You know, it's harder to use an assault weapon to hit something than it is a shotgun, OK. So if you want to keep people away in an earthquake, buy some shotgun shells." The following month, Biden was interviewed by Field & Stream, during which he said ownership of semi-automatic rifles was unnecessary for persons who owned shotguns since they would be able "to keep someone away" from their home by firing "the shotgun through the door". The interview was published on February 25, though conducted earlier in the month. Background: "Buy a Shotgun" Later, on February 19, Biden hosted a question and answer session with Parents Magazine on the social networking site Facebook. During the session, the topic of gun control was raised. Biden noted that he personally owned two shotguns and had advised Jill Biden that "if there's ever a problem" to walk outside their home and "fire two blasts". Biden also explained that he felt shotguns were more appropriate for personal security than an AR-15 which, he said, was more difficult to aim and use. He concluded by stating, "Buy a shotgun! Buy a shotgun!" Later Biden comments on shotguns In 2020, responding to criticism from a Detroit autoworker who confronted him about gun control policies, Biden explained his two shotguns were in calibers 12-gauge and 20-gauge. Reaction: Social media response Biden's remarks during his Facebook session, according to CNN, "unleashed a torrent of online reaction" on social media. Reaction: Legal analysis The Wilmington, Delaware police department – in whose jurisdiction it is believed Biden's shotguns were stored – advised that it was illegal for residents to discharge firearms on their property unless "you really feel that your life is being threatened". Former Delaware deputy attorney general John Garey also advised residents not to follow Biden's advice to "fire two blasts" due to Delaware self-defense statutes which required a person have a reasonable fear of "imminent death" before resorting to deadly force. Some gun rights advocates also opined that they felt Biden's advice was legally reckless.Kathleen Jennings, a prosecutor at the Delaware Department of Justice, disagreed with assessments that Biden's advice was unsound noting that "in Delaware, a person can legally fire a weapon to protect themselves and others from someone intruding onto her dwelling". 
Questioned by The Washington Post as to whether her reading of the law was colored by the fact that Biden's son, Beau Biden, was at the time the head of the Delaware Department of Justice, Jennings rejected the suggestion and noted she had spent 32 years as both a prosecutor and a criminal defense attorney. Reaction: Security assessment Regarding the efficacy of a shotgun as a personal security device, Jeff Johnston wrote in American Hunter that "Biden had it partially right when he said shotguns are good for self-defense," but objected to Biden's specific advice to use a double-barreled shotgun. Reaction: "Joe Biden defense" A Washington state man was put on trial in November 2013 for illegal discharge of a firearm after he fired a shotgun blast to deter car thieves on his property the previous summer. During his trial, the man pleaded in his defense that "I did what Joe Biden told me to do" but was convicted in a jury trial nonetheless. The defense claim was later referred to by some media as the "Joe Biden defense". In popular culture: Biden's comments were remixed by The Gregory Brothers with Darren Criss into a song titled "Buy a Shotgun", which was released on YouTube in August 2013. The use of the so-called "Joe Biden defense" in Washington state became the subject of a two-minute comedy bit on The Daily Show with Jon Stewart.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Steroid use in Australia** Steroid use in Australia: Anabolic/androgenic steroids are drugs derived from the male hormone testosterone. Anabolic steroids are used for muscle-building and strength gain for cosmetic reasons as well as for performance-enhancement in athletics and bodybuilding. Anabolic steroids work in many ways, by increasing protein synthesis in the muscles and by reducing the catabolic process (the process of breaking down skeletal muscle for energy). It is common for teens and adults to use steroids as they stimulate and encourage muscle growth much more rapidly than natural bodybuilding. Statistics: In Australia, many people are encouraged to use steroids due to the body image expectations created by society. In secondary schools, 3.2% of boys and 1.2% of girls are using steroids. Many Australian bodybuilders visit Bangkok and Pattaya in Thailand because the pharmacies there sell some steroid brands ten times cheaper than they are available on the Australian black market. Australians were also purchasing their steroids in other countries to avoid a possible criminal record at home. Australian Crime Commission statistics have shown a 106% increase in border detections of "performance and image-enhancing drugs" in the last financial year, with 5,561 detections. Notable events: In the first 3 months of 2008, 300 AAS seizures were reported by the Australian Customs and Border Protection Service.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Brown sugar** Brown sugar: Brown sugar is a sucrose sugar product with a distinctive brown color due to the presence of molasses. It is by tradition an unrefined or partially refined soft sugar consisting of sugar crystals with some residual molasses content (natural brown sugar), but is now often produced by the addition of molasses to refined white sugar (commercial brown sugar). Characteristics: The Codex Alimentarius requires brown sugar to contain at least 88% sucrose plus invert sugar. Commercial brown sugar contains from 3.5% molasses (light brown sugar) to 6.5% molasses (dark brown sugar) based on its total volume. Based on total weight, regular commercial brown sugar contains up to 10% molasses. The product is naturally moist from the hygroscopic nature of the molasses and is often labeled "soft." The product may undergo processing to make it flow better for industrial handling. The addition of dyes or other chemicals may be permitted in some areas or for industrial products. Characteristics: Particle size is variable but generally smaller than that of granulated white sugar. Products for industrial use (e.g., the industrial production of cakes) may be based on caster sugar, which has crystals of approximately 0.35 mm. History: From a type of raw sugar to a consumer product The meaning of the term 'brown sugar' has changed over time. In the 19th century, American works referred to 'refining brown sugar'. Americans also referred to the 'Brown sugar of Commerce', which could be refined with a yield of 70% of white sugar. In the United Kingdom it was the same. There were two kinds of raw sugar. The most common kind was muscovado a.k.a. brown sugar, and was processed by British sugar refineries. The other kind of raw sugar was brown sugar which had been clayed and was known as clayed sugar. It was used for domestic purposes, but this usage was diminishing. In the 19th century United States the same meaning of the words raw sugar, brown sugar and muscovado was also noted: "Raw sugar, commonly called muscovado or brown sugar, not advanced beyond its raw state by claying, boiling, clarifying or other process".In the mid 20th century United States, 'brown sugar' could refer to two products. It could be a raw sugar which had been centrifuged to a purity of about 97% pure sugar and that was offered as brown sugar in health food shops. However, in most cases it was white sugar to which molasses had been added. For the latter, a consumer magazine stated that: "Contrary to opinion, this brown sugar is a product of the refinery." The most important consideration is that the term 'brown sugar' now came to refer to a product for consumers, instead of referring to a type of sugar that was processed by sugar refineries. History: Smear campaign In the late 19th century, the newly consolidated refined white sugar industry, which did not have full control over brown sugar production, mounted a smear campaign against brown sugar, reproducing microscopic photographs of harmless but repulsive-looking microbes living in brown sugar. The effort was so successful that by 1900, a best-selling cookbook warned that brown sugar was of inferior quality and was susceptible to infestation by "a minute insect". 
This campaign of disinformation was also felt in other sectors using raw or brown sugar such as brewing; Raw sugars are all more or less liable to be contaminated with decomposing nitrogenous matters, fermentative germs, and other living organisms, both animal and vegetable....For this reason, raw sugars must always be considered dangerous brewing materials. Production: Brown sugar is often produced by adding sugarcane molasses to completely refined white sugar crystals to more carefully control the ratio of molasses to sugar crystals and to reduce manufacturing costs. Brown sugar prepared in this manner is often much coarser than its unrefined equivalent and its molasses may be easily separated from the crystals by simply washing to reveal the underlying white sugar crystals; in contrast, with unrefined brown sugar, washing will reveal underlying crystals which are off-white due to the inclusion of molasses. Production: The molasses usually used for food is obtained from sugar cane, because the flavor is generally preferred over beet sugar molasses, although in some areas, especially in Belgium and the Netherlands, sugar beet molasses is used. The white sugar used can be from either beet or cane, as the chemical composition, nutritional value, color, and taste of fully refined white sugar is for practical purposes the same, no matter from what plant it originates. Even with less-than-perfect refining, the small differences in color, odor, and taste of the white sugar will be masked by the molasses. Natural brown sugar: Definition Natural brown sugar, raw sugar or whole cane sugar is sugar that retains some amount of the molasses from the mother liquor (the partially evaporated sugar cane juice). The term 'Natural brown sugar' can be traced back to at least the 1940s, when it was noted that the sugar refiners had pushed the brown sugar from the plantation owner out of the consumer market. Natural brown sugar was: 'The raw sugar, not the brown sugar most easily obtained, which usually is white sugar artificially colored.' So the term 'Natural brown sugar' came up to distinguish brown sugar that still contained part of its molasses from brown sugar that was really white sugar to which molasses had been added. Natural brown sugar: Modern types of natural brown sugar Some natural brown sugars have particular names and characteristics, and are sold as turbinado, demerara or raw sugar. These have been centrifuged, and therefore can be said to have been refined to a large degree. Muscovado is darkest of the modern types of natural brown sugar. Turbinado sugar is made from crystallized, partially evaporated sugar cane juice which has been spun in a centrifuge to remove almost all of the molasses. The sugar crystals are large and golden-coloured. This sugar can be sold as is or sent to the refinery to produce white sugar. Demerara sugar is now 97-99% pure sucrose and has also been centrifuged. What is now sold to the United States consumer as 'raw sugar' is also a centrifuged product. If it were raw sugar in the generally accepted meaning of an unrefined product, the Food and Drug Administration would take action. Some say that for consumers, raw sugar means that the sugar is highly refined, but has been crystallized only once.Modern muscovado sugar sold to consumers is different from traditional Muscovado. It is made by refining sugar with lime, but not centrifuging it. This means that impurities like dirt and ash are removed, but the molasses remains. 
Natural brown sugar: Traditional types of natural brown sugar Brown sugars that have been only mildly centrifuged or unrefined (non-centrifuged) retain a much higher degree of molasses than products sold as natural brown sugar to consumers in developed nations. These traditional brown sugars are called various names across the globe often depending on their country of origin: e.g. muscovado, panela, rapadura, jaggery, piloncillo, etc. Natural brown sugar: Muscovado from the Portuguese açúcar mascavado, was the most common type of raw sugar and was also called brown sugar. In the 19th century, this was the sugar that based upon weight yielded about 70% white sugar when fully refined.Muscovado, panela, piloncillo, chancaca, jaggery and other natural dark brown sugars have been minimally centrifuged or not at all. Typically these sugars are made in smaller factories or "cottage industries" in developing nations, where they are produced with traditional practices that do not make use of industrialized vacuum evaporators or centrifuges. They are commonly boiled in open pans upon wood-fired stoves until the sugar cane juice reaches approximately 30% of the former volume and sucrose crystallization begins. They are then poured into molds to solidify or onto cooling pans where they are beaten or worked vigorously to produce a granulated brown sugar. In some countries, such as Mauritius or the Philippines, a natural brown sugar called muscovado is produced by partially centrifuging the evaporated and crystallizing cane juice to create a sugar-crystal rich mush, which is allowed to drain under gravity to produce varying degrees of molasses content in the final product. This process approximates a slightly modernized practice introduced in the 19th century to generate a better quality of natural brown sugar.A similar Japanese version of uncentrifuged natural cane sugar is called kokuto (Japanese: 黒糖 kokutō). This is a regional specialty of Okinawa and is often sold in the form of large lumps. It is sometimes used to make shochu. Okinawan brown sugar is sometimes referred to as 'black sugar' for its darker colour compared to other types of unrefined sugar, although when broken up into smaller pieces its colour becomes lighter. Kokuto is commonly used as a flavouring for drinks and desserts, but can also be eaten raw as it has a taste similar to caramel. The sugar is also thought to be rich in nutrients removed during the refinement process of other sugars, such as potassium and iron. Culinary & Health considerations: Brown sugar adds flavor to desserts and baked goods. It can be substituted for maple sugar, and maple sugar can be substituted for it in recipes. Brown sugar caramelizes much more readily than refined sugar, and this effect can be used to make glazes and gravies brown while cooking. Culinary & Health considerations: For domestic purposes one can create the equivalent of brown sugar by mixing white sugar with molasses. Suitable proportions are about one tablespoon of molasses to each cup of sugar (one-sixteenth of the total volume). Molasses comprises about 10% of brown sugar's total weight, which is about one ninth of the white sugar weight. Due to varying qualities and colors of molasses products, for lighter or darker sugar, reduce or increase its proportion according to taste. 
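As a rough sanity check on these proportions (assumed kitchen figures, not from the source: about 200 g of granulated sugar per cup and about 20 g of molasses per tablespoon), the numbers land close to the "about 10% of total weight, roughly one ninth of the white sugar weight" figures above:

```python
sugar_g = 200.0      # assumed weight of one cup of white sugar
molasses_g = 20.0    # assumed weight of one tablespoon of molasses

print(f"molasses as a share of volume: 1/16 = {1/16:.1%}")                             # ≈ 6.3%
print(f"molasses relative to the white sugar weight: {molasses_g / sugar_g:.0%}")      # ≈ 10%
print(f"molasses share of the finished sugar: {molasses_g / (sugar_g + molasses_g):.0%}")  # ≈ 9%
```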
Culinary & Health considerations: In following a modern recipe that specifies "brown sugar", one usually may assume that the intended meaning is light brown sugar, but how dark or light one prefers one's sugar is largely a matter of taste. Even in recipes such as cakes, where the overall moisture content might be critical, the amount of water contained in brown sugar is too small to matter. Much more significant than its water content is the fact that darker brown sugar or more molasses will impart a stronger flavor, with more of a suggestion of caramel. Culinary & Health considerations: Brown sugar that has hardened can be made soft again by adding a new source of moisture for the molasses, or by heating and remelting the molasses. Storing brown sugar in a freezer will prevent moisture from escaping and molasses from crystallizing, allowing for a much longer shelf life. Although brown sugar has been touted as having health benefits ranging from soothing menstrual cramps to serving as an anti-aging skin treatment, brown sugar is no better for health than refined sugar, despite the minerals it contains (the amounts are negligible). Nutritional value: One hundred grams of brown sugar contains 377 Calories, as opposed to 387 Calories in white sugar. However, brown sugar packs more densely than white sugar due to the smaller crystal size and may have more calories when measured by volume. Any minerals present in brown sugar come from the molasses added to the white sugar. In a 100-gram reference amount, brown sugar contains 15% of the Daily Value for iron, with no other vitamins or minerals in significant content.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thai zig zag scam** Thai zig zag scam: The Thai zig zag scam is a confidence trick where one is falsely accused of shoplifting, and then held by police, or those claiming to be police, until "bail" is paid for the alleged theft. At times those fleeced are shown faked closed-circuit television footage as corroboration. In several cases in Thailand, this confidence trick has occurred at the airport, and thus is sometimes called the "Thai airport scam". Most reports of this scam are dated. Cases: According to the BBC, police in Bangkok's Suvarnabhumi Airport have participated in a series of these scams, robbing tourists of thousands of dollars each time. An English couple was charged with stealing a wallet. The Thai embassy in Singapore published a rebuttal to the BBC article, stating that all legal proceedings in the case of the English were conducted "...in accordance with the law." An Irish woman was charged with stealing eye-liner. An Australian man was charged with stealing a doughnut.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Supercapacitor** Supercapacitor: A supercapacitor (SC), also called an ultracapacitor, is a high-capacity capacitor, with a capacitance value much higher than other capacitors but with lower voltage limits. It bridges the gap between electrolytic capacitors and rechargeable batteries. It typically stores 10 to 100 times more energy per unit volume or mass than electrolytic capacitors, can accept and deliver charge much faster than batteries, and tolerates many more charge and discharge cycles than rechargeable batteries.Supercapacitors are used in applications requiring many rapid charge/discharge cycles, rather than long-term compact energy storage — in automobiles, buses, trains, cranes and elevators, where they are used for regenerative braking, short-term energy storage, or burst-mode power delivery. Smaller units are used as power backup for static random-access memory (SRAM). Supercapacitor: Unlike ordinary capacitors, supercapacitors do not use the conventional solid dielectric, but rather, they use electrostatic double-layer capacitance and electrochemical pseudocapacitance, both of which contribute to the total capacitance of the capacitor, with a few differences: Electrostatic double-layer capacitors (EDLCs) use carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance, achieving separation of charge in a Helmholtz double layer at the interface between the surface of a conductive electrode and an electrolyte. The separation of charge is of the order of a few ångströms (0.3–0.8 nm), much smaller than in a conventional capacitor. Supercapacitor: Electrochemical pseudocapacitors use metal oxide or conducting polymer electrodes with a high amount of electrochemical pseudocapacitance additional to the double-layer capacitance. Pseudocapacitance is achieved by Faradaic electron charge-transfer with redox reactions, intercalation or electrosorption. Supercapacitor: Hybrid capacitors, such as the lithium-ion capacitor, use electrodes with differing characteristics: one exhibiting mostly electrostatic capacitance and the other mostly electrochemical capacitance.The electrolyte forms an ionic conductive connection between the two electrodes which distinguishes them from conventional electrolytic capacitors where a dielectric layer always exists, and the so-called electrolyte, e.g., MnO2 or conducting polymer, is in fact part of the second electrode (the cathode, or more correctly the positive electrode). Supercapacitors are polarized by design with asymmetric electrodes, or, for symmetric electrodes, by a potential applied during manufacturing. History: Development of the double layer and pseudocapacitance models (see Double layer (interfacial)). History: Evolution of components In the early 1950s, General Electric engineers began experimenting with porous carbon electrodes in the design of capacitors, from the design of fuel cells and rechargeable batteries. Activated charcoal is an electrical conductor that is an extremely porous "spongy" form of carbon with a high specific surface area. In 1957 H. Becker developed a "Low voltage electrolytic capacitor with porous carbon electrodes". He believed that the energy was stored as a charge in the carbon pores as in the pores of the etched foils of electrolytic capacitors. 
Because the double layer mechanism was not known by him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." General Electric did not immediately pursue this work. In 1966 researchers at Standard Oil of Ohio (SOHIO) developed another version of the component as "electrical energy storage apparatus", while working on experimental fuel cell designs. The nature of electrochemical energy storage was not described in this patent. Even in 1970, the electrochemical capacitor patented by Donald L. Boos was registered as an electrolytic capacitor with activated carbon electrodes.Early electrochemical capacitors used two aluminum foils covered with activated carbon — the electrodes — that were soaked in an electrolyte and separated by a thin porous insulator. This design gave a capacitor with a capacitance on the order of one farad, significantly higher than electrolytic capacitors of the same dimensions. This basic mechanical design remains the basis of most electrochemical capacitors. History: SOHIO did not commercialize their invention, licensing the technology to NEC, who finally marketed the results as "supercapacitors" in 1978, to provide backup power for computer memory.Between 1975 and 1980 Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991 he described the difference between "supercapacitor" and "battery" behaviour in electrochemical energy storage. In 1999 he defined the term "supercapacitor" to make reference to the increase in observed capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions. His "supercapacitor" stored electrical charge partially in the Helmholtz double-layer and partially as result of faradaic reactions with "pseudocapacitance" charge transfer of electrons and protons between electrode and electrolyte. The working mechanisms of pseudocapacitors are redox reactions, intercalation and electrosorption (adsorption onto a surface). With his research, Conway greatly expanded the knowledge of electrochemical capacitors. History: The market expanded slowly. That changed around 1978 as Panasonic marketed its Goldcaps brand. This product became a successful energy source for memory backup applications. Competition started only years later. In 1987 ELNA "Dynacap"s entered the market. First generation EDLC's had relatively high internal resistance that limited the discharge current. They were used for low current applications such as powering SRAM chips or for data backup. History: At the end of the 1980s, improved electrode materials increased capacitance values. At the same time, the development of electrolytes with better conductivity lowered the equivalent series resistance (ESR) increasing charge/discharge currents. The first supercapacitor with low internal resistance was developed in 1982 for military applications through the Pinnacle Research Institute (PRI), and were marketed under the brand name "PRI Ultracapacitor". In 1992, Maxwell Laboratories (later Maxwell Technologies) took over this development. Maxwell adopted the term Ultracapacitor from PRI and called them "Boost Caps" to underline their use for power applications. History: Since capacitors' energy content increases with the square of the voltage, researchers were looking for a way to increase the electrolyte's breakdown voltage. 
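The quadratic dependence mentioned above is just the standard capacitor energy relation E = ½·C·V². A small illustration (the numbers are arbitrary, not from the source):

```python
def stored_energy_joules(capacitance_farads, voltage_volts):
    """E = 1/2 * C * V**2 for an ideal capacitor."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

print(stored_energy_joules(100, 2.7))   # 100 F at 2.7 V  -> 364.5 J
print(stored_energy_joules(100, 5.4))   # doubling V      -> 1458.0 J (4x)
```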
In 1994, using the anode of a 200 V high-voltage tantalum electrolytic capacitor, David A. Evans developed an "Electrolytic-Hybrid Electrochemical Capacitor". These capacitors combine features of electrolytic and electrochemical capacitors. They combine the high dielectric strength of an anode from an electrolytic capacitor with the high capacitance of a pseudocapacitive metal oxide (ruthenium (IV) oxide) cathode from an electrochemical capacitor, yielding a hybrid electrochemical capacitor. Evans' capacitors, coined Capattery, had an energy content about a factor of 5 higher than a comparable tantalum electrolytic capacitor of the same size. Their high costs limited them to specific military applications. History: Recent developments include lithium-ion capacitors. These hybrid capacitors were pioneered by Fujitsu's FDK in 2007. They combine an electrostatic carbon electrode with a pre-doped lithium-ion electrochemical electrode. This combination increases the capacitance value. Additionally, the pre-doping process lowers the anode potential and results in a high cell output voltage, further increasing specific energy. Research departments active in many companies and universities are working to improve characteristics such as specific energy, specific power, and cycle stability and to reduce production costs. Design: Basic design Electrochemical capacitors (supercapacitors) consist of two electrodes separated by an ion-permeable membrane (separator), and an electrolyte ionically connecting both electrodes. When the electrodes are polarized by an applied voltage, ions in the electrolyte form electric double layers of opposite polarity to the electrode's polarity. For example, positively polarized electrodes will have a layer of negative ions at the electrode/electrolyte interface along with a charge-balancing layer of positive ions adsorbing onto the negative layer. The opposite is true for the negatively polarized electrode. Design: Additionally, depending on electrode material and surface shape, some ions may permeate the double layer becoming specifically adsorbed ions and contribute with pseudocapacitance to the total capacitance of the supercapacitor. Design: Capacitance distribution The two electrodes form a series circuit of two individual capacitors C1 and C2. The total capacitance Ctotal is given by the formula Ctotal = (C1 · C2) / (C1 + C2). Supercapacitors may have either symmetric or asymmetric electrodes. Symmetry implies that both electrodes have the same capacitance value, yielding a total capacitance of half the value of each single electrode (if C1 = C2, then Ctotal = ½ C1). For asymmetric capacitors, the total capacitance can be taken as that of the electrode with the smaller capacitance (if C1 >> C2, then Ctotal ≈ C2). Design: Storage principles Electrochemical capacitors use the double-layer effect to store electric energy; however, this double-layer has no conventional solid dielectric to separate the charges. There are two storage principles in the electric double-layer of the electrodes that contribute to the total capacitance of an electrochemical capacitor: Double-layer capacitance, electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer. Design: Pseudocapacitance, electrochemical storage of the electrical energy achieved by faradaic redox reactions with charge-transfer. Both capacitances are only separable by measurement techniques.
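A minimal numeric illustration of the series-combination formula above (the capacitance values, in farads, are arbitrary):

```python
def total_capacitance(c1, c2):
    """Two electrode capacitances in series: C_total = C1*C2 / (C1 + C2)."""
    return (c1 * c2) / (c1 + c2)

print(total_capacitance(100, 100))   # symmetric electrodes  -> 50.0 (half of one electrode)
print(total_capacitance(1000, 10))   # strongly asymmetric   -> ~9.9 (dominated by the smaller)
```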
The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size, although the amount of capacitance of each storage principle can vary extremely. Design: Electrical double-layer capacitance Every electrochemical capacitor has two electrodes, mechanically separated by a separator, which are ionically connected to each other via the electrolyte. The electrolyte is a mixture of positive and negative ions dissolved in a solvent such as water. At each of the two electrode surfaces originates an area in which the liquid electrolyte contacts the conductive metallic surface of the electrode. This interface forms a common boundary between two different phases of matter, such as an insoluble solid electrode surface and an adjacent liquid electrolyte. At this interface occurs a very special phenomenon, the double layer effect. Applying a voltage to an electrochemical capacitor causes both electrodes in the capacitor to generate electrical double-layers. These double-layers consist of two layers of charges: one electronic layer is in the surface lattice structure of the electrode, and the other, with opposite polarity, emerges from dissolved and solvated ions in the electrolyte. The two layers are separated by a monolayer of solvent molecules, e.g., for water as solvent by water molecules, called the inner Helmholtz plane (IHP). Solvent molecules adhere by physical adsorption on the surface of the electrode and separate the oppositely polarized ions from each other, and can be idealised as a molecular dielectric. In the process, there is no transfer of charge between electrode and electrolyte, so the forces that cause the adhesion are not chemical bonds, but physical forces, e.g., electrostatic forces. The adsorbed molecules are polarized, but, due to the lack of transfer of charge between electrolyte and electrode, suffer no chemical changes. Design: The amount of charge in the electrode is matched by the magnitude of counter-charges in the outer Helmholtz plane (OHP). This double-layer phenomenon stores electrical charges as in a conventional capacitor. The double-layer charge forms a static electric field in the molecular layer of the solvent molecules in the IHP that corresponds to the strength of the applied voltage. Design: The double-layer serves approximately as the dielectric layer in a conventional capacitor, albeit with the thickness of a single molecule. Thus, the standard formula for conventional plate capacitors can be used to calculate their capacitance: C = ε·A/d. Accordingly, capacitance C is greatest in capacitors made from materials with a high permittivity ε, large electrode plate surface areas A and small distance between plates d. As a result, double-layer capacitors have much higher capacitance values than conventional capacitors, arising from the extremely large surface area of activated carbon electrodes and the extremely thin double-layer distance on the order of a few ångströms (0.3–0.8 nm), of the order of the Debye length. Assuming that the minimum distance between the electrode and the charge-accumulating region cannot be less than the typical distance between negative and positive charges in atoms of ~0.05 nm, a general capacitance upper limit of ~18 µF/cm2 has been predicted for non-faradaic capacitors. The main drawback of carbon electrodes of double-layer SCs is the small value of quantum capacitance, which acts in series with the capacitance of the ionic space charge.
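A back-of-the-envelope estimate using the plate-capacitor formula above (all numbers are assumptions for illustration: double-layer thickness d ≈ 0.5 nm, a reduced interfacial relative permittivity ε_r ≈ 6, and 1000 m² of surface per gram of electrode). The result lands near the ~10 µF/cm² and ~100 F/g figures quoted later for activated carbon electrodes.

```python
EPS0 = 8.854e-12    # vacuum permittivity, F/m
eps_r = 6           # assumed relative permittivity of the interfacial solvent layer
d = 0.5e-9          # assumed double-layer thickness, m

areal = EPS0 * eps_r / d                      # capacitance per m^2 of electrode surface
print(f"areal capacitance ≈ {areal * 1e6 / 1e4:.1f} µF/cm²")          # ≈ 10.6 µF/cm²

specific_surface = 1000.0                     # assumed m² of surface per gram
print(f"specific capacitance ≈ {areal * specific_surface:.0f} F/g")   # ≈ 106 F/g
```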
Because this quantum capacitance acts in series with the ionic space-charge capacitance, a further increase of the capacitance density of SCs is therefore tied to increasing the quantum capacitance of the carbon electrode nanostructures. The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size. The electrostatic storage of energy in the double-layers is linear with respect to the stored charge, and corresponds to the concentration of the adsorbed ions. Also, while charge in conventional capacitors is transferred via electrons, capacitance in double-layer capacitors is related to the limited moving speed of ions in the electrolyte and the resistive porous structure of the electrodes. Since no chemical changes take place within the electrode or electrolyte, charging and discharging electric double-layers is in principle unlimited. Real supercapacitors' lifetimes are only limited by electrolyte evaporation effects. Design: Electrochemical pseudocapacitance Applying a voltage at the electrochemical capacitor terminals moves electrolyte ions to the oppositely polarized electrode and forms a double-layer in which a single layer of solvent molecules acts as separator. Pseudocapacitance can originate when specifically adsorbed ions out of the electrolyte pervade the double-layer. This pseudocapacitance stores electrical energy by means of reversible faradaic redox reactions on the surface of suitable electrodes in an electrochemical capacitor with an electric double-layer. Pseudocapacitance is accompanied by an electron charge-transfer between electrolyte and electrode coming from a de-solvated and adsorbed ion whereby only one electron per charge unit is participating. This faradaic charge transfer originates from a very fast sequence of reversible redox, intercalation or electrosorption processes. The adsorbed ion has no chemical reaction with the atoms of the electrode (no chemical bonds arise) since only a charge-transfer takes place.
Materials exhibiting redox behavior for use as electrodes in pseudocapacitors are transition-metal oxides like RuO2, IrO2, or MnO2 inserted by doping in the conductive electrode material such as active carbon, as well as conducting polymers such as polyaniline or derivatives of polythiophene covering the electrode material. Design: The amount of electric charge stored in a pseudocapacitance is linearly proportional to the applied voltage. The unit of pseudocapacitance is farad. Design: Potential distribution Conventional capacitors (also known as electrostatic capacitors), such as ceramic capacitors and film capacitors, consist of two electrodes separated by a dielectric material. When charged, the energy is stored in a static electric field that permeates the dielectric between the electrodes. The total energy increases with the amount of stored charge, which in turn correlates linearly with the potential (voltage) between the plates. The maximum potential difference between the plates (the maximal voltage) is limited by the dielectric's breakdown field strength. The same static storage also applies for electrolytic capacitors in which most of the potential decreases over the anode's thin oxide layer. The somewhat resistive liquid electrolyte (cathode) accounts for a small decrease of potential for "wet" electrolytic capacitors, while electrolytic capacitors with solid conductive polymer electrolyte this voltage drop is negligible. Design: In contrast, electrochemical capacitors (supercapacitors) consists of two electrodes separated by an ion-permeable membrane (separator) and electrically connected via an electrolyte. Energy storage occurs within the double-layers of both electrodes as a mixture of a double-layer capacitance and pseudocapacitance. When both electrodes have approximately the same resistance (internal resistance), the potential of the capacitor decreases symmetrically over both double-layers, whereby a voltage drop across the equivalent series resistance (ESR) of the electrolyte is achieved. For asymmetrical supercapacitors like hybrid capacitors the voltage drop between the electrodes could be asymmetrical. The maximum potential across the capacitor (the maximal voltage) is limited by the electrolyte decomposition voltage. Design: Both electrostatic and electrochemical energy storage in supercapacitors are linear with respect to the stored charge, just as in conventional capacitors. The voltage between the capacitor terminals is linear with respect to the amount of stored energy. Such linear voltage gradient differs from rechargeable electrochemical batteries, in which the voltage between the terminals remains independent of the amount of stored energy, providing a relatively constant voltage. Design: Comparison with other storage technologies Supercapacitors compete with electrolytic capacitors and rechargeable batteries, especially lithium-ion batteries. The following table compares the major parameters of the three main supercapacitor families with electrolytic capacitors and batteries. Electrolytic capacitors feature nearly unlimited charge/discharge cycles, high dielectric strength (up to 550 V) and good frequency response as alternating current (AC) reactance in the lower frequency range. Supercapacitors can store 10 to 100 times more energy than electrolytic capacitors, but they do not support AC applications. 
Compared with rechargeable batteries, supercapacitors feature higher peak currents, low cost per cycle, no danger of overcharging, good reversibility, non-corrosive electrolyte and low material toxicity. Batteries offer lower purchase cost and stable voltage under discharge, but require complex electronic control and switching equipment, with consequent energy loss and spark hazard given a short. Styles: Supercapacitors are made in different styles, such as flat with a single pair of electrodes, wound in a cylindrical case, or stacked in a rectangular case. Because they cover a broad range of capacitance values, the size of the cases can vary. Styles: Construction details Supercapacitors are constructed with two metal foils (current collectors), each coated with an electrode material such as activated carbon, which serve as the power connection between the electrode material and the external terminals of the capacitor. A key property of the electrode material is its very large surface area. In such electrodes the activated carbon is electrochemically etched, so that the surface area of the material is about 100,000 times greater than that of a smooth surface. The electrodes are kept apart by an ion-permeable membrane (separator) used as an insulator to protect the electrodes against short circuits. This construction is subsequently rolled or folded into a cylindrical or rectangular shape and can be stacked in an aluminum can or an adaptable rectangular housing. The cell is then impregnated with a liquid or viscous electrolyte of organic or aqueous type. The electrolyte, an ionic conductor, enters the pores of the electrodes and serves as the conductive connection between the electrodes across the separator. Finally, the housing is hermetically sealed to ensure stable behavior over the specified lifetime. Types: Electrical energy is stored in supercapacitors via two storage principles, static double-layer capacitance and electrochemical pseudocapacitance; and the distribution of the two types of capacitance depends on the material and structure of the electrodes. There are three types of supercapacitors based on storage principle: Double-layer capacitors (EDLCs) — with activated carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance Pseudocapacitors — with transition metal oxide or conducting polymer electrodes with a high electrochemical pseudocapacitance Hybrid capacitors — with asymmetric electrodes, one of which exhibits mostly electrostatic and the other mostly electrochemical capacitance, such as lithium-ion capacitors. Because double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of an electrochemical capacitor, a correct description of these capacitors can only be given under this generic term. The concepts of supercapattery and supercabattery have been recently proposed to better represent those hybrid devices that behave more like the supercapacitor and the rechargeable battery, respectively. The capacitance value of a supercapacitor is determined by two storage principles: Double-layer capacitance – electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer at the interface between the surface of a conductive electrode and an electrolytic solution (the electrolyte).
The separation of charge distance in a double-layer is on the order of a few ångströms (0.3–0.8 nm) and is static in origin. Types: Pseudocapacitance – Electrochemical storage of the electrical energy, achieved by redox reactions, electrosorption or intercalation on the surface of the electrode by specifically adsorbed ions, that results in a reversible faradaic charge-transfer on the electrode.Double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of a supercapacitor. However, the ratio of the two can vary greatly, depending on the design of the electrodes and the composition of the electrolyte. Pseudocapacitance can increase the capacitance value by as much as a factor of ten over that of the double-layer by itself.Electric double-layer capacitors (EDLC) are electrochemical capacitors in which energy storage predominantly is achieved by double-layer capacitance. In the past, all electrochemical capacitors were called "double-layer capacitors". Contemporary usage sees double-layer capacitors, together with pseudocapacitors, as part of a larger family of electrochemical capacitors called supercapacitors. They are also known as ultracapacitors. Materials: The properties of supercapacitors come from the interaction of their internal materials. Especially, the combination of electrode material and type of electrolyte determine the functionality and thermal and electrical characteristics of the capacitors. Electrodes Supercapacitor electrodes are generally thin coatings applied and electrically connected to a conductive, metallic current collector. Electrodes must have good conductivity, high temperature stability, long-term chemical stability (inertness), high corrosion resistance and high surface areas per unit volume and mass. Other requirements include environmental friendliness and low cost. Materials: The amount of double-layer as well as pseudocapacitance stored per unit voltage in a supercapacitor is predominantly a function of the electrode surface area. Therefore, supercapacitor electrodes are typically made of porous, spongy material with an extraordinarily high specific surface area, such as activated carbon. Additionally, the ability of the electrode material to perform faradaic charge transfers enhances the total capacitance. Materials: Generally the smaller the electrode's pores, the greater the capacitance and specific energy. However, smaller pores increase equivalent series resistance (ESR) and decrease specific power. Applications with high peak currents require larger pores and low internal losses, while applications requiring high specific energy need small pores. Materials: Electrodes for EDLCs The most commonly used electrode material for supercapacitors is carbon in various manifestations such as activated carbon (AC), carbon fibre-cloth (AFC), carbide-derived carbon (CDC), carbon aerogel, graphite (graphene), graphane and carbon nanotubes (CNTs).Carbon-based electrodes exhibit predominantly static double-layer capacitance, even though a small amount of pseudocapacitance may also be present depending on the pore size distribution. Pore sizes in carbons typically range from micropores (less than 2 nm) to mesopores (2-50 nm), but only micropores (<2 nm) contribute to pseudocapacitance. As pore size approaches the solvation shell size, solvent molecules are excluded and only unsolvated ions fill the pores (even for large ions), increasing ionic packing density and storage capability by faradaic H2 intercalation. 
Materials: Activated carbon Activated carbon was the first material chosen for EDLC electrodes. Even though its electrical conductivity is approximately 0.003% that of metals (1,250 to 2,000 S/m), it is sufficient for supercapacitors. Activated carbon is an extremely porous form of carbon with a high specific surface area; a common approximation is that 1 gram (0.035 oz) (a pencil-eraser-sized amount) has a surface area of roughly 1,000 to 3,000 square metres (11,000 to 32,000 sq ft), about the size of 4 to 12 tennis courts. The bulk form used in electrodes is low-density with many pores, giving high double-layer capacitance. Materials: Solid activated carbon, also termed consolidated amorphous carbon (CAC), is the most used electrode material for supercapacitors and may be cheaper than other carbon derivatives. It is produced from activated carbon powder pressed into the desired shape, forming a block with a wide distribution of pore sizes. An electrode with a surface area of about 1000 m2/g results in a typical double-layer capacitance of about 10 μF/cm2 and a specific capacitance of 100 F/g. Materials: As of 2010 virtually all commercial supercapacitors use powdered activated carbon made from coconut shells. Coconut shells produce activated carbon with more micropores than does charcoal made from wood. Materials: Activated carbon fibres Activated carbon fibres (ACF) are produced from activated carbon and have a typical diameter of 10 µm. They can have micropores with a very narrow pore-size distribution that can be readily controlled. The surface area of ACF woven into a textile is about 2500 m2/g. Advantages of ACF electrodes include low electrical resistance along the fibre axis and good contact to the collector. As with activated carbon, ACF electrodes exhibit predominantly double-layer capacitance with a small amount of pseudocapacitance due to their micropores. Materials: Carbon aerogel Carbon aerogel is a highly porous, synthetic, ultralight material derived from an organic gel in which the liquid component of the gel has been replaced with a gas. Aerogel electrodes are made via pyrolysis of resorcinol-formaldehyde aerogels and are more conductive than most activated carbons. They enable thin and mechanically stable electrodes with a thickness in the range of several hundred micrometres (µm) and with uniform pore size. Aerogel electrodes also provide mechanical and vibration stability for supercapacitors used in high-vibration environments. Researchers have created a carbon aerogel electrode with gravimetric densities of about 400–1200 m2/g and a volumetric capacitance of 104 F/cm3, yielding a specific energy of 325 kJ/kg (90 Wh/kg) and a specific power of 20 W/g. Standard aerogel electrodes exhibit predominantly double-layer capacitance. Aerogel electrodes that incorporate composite material can add a high amount of pseudocapacitance. Materials: Carbide-derived carbon Carbide-derived carbon (CDC), also known as tunable nanoporous carbon, is a family of carbon materials derived from carbide precursors, such as binary silicon carbide and titanium carbide, that are transformed into pure carbon via physical (e.g., thermal decomposition) or chemical (e.g., halogenation) processes. Carbide-derived carbons can exhibit high surface area and tunable pore diameters (from micropores to mesopores) to maximize ion confinement, increasing pseudocapacitance by faradaic H2 adsorption treatment.
CDC electrodes with tailored pore design offer as much as 75% greater specific energy than conventional activated carbons. Materials: As of 2015, a CDC supercapacitor offered a specific energy of 10.1 Wh/kg, 3,500 F capacitance and over one million charge-discharge cycles. Materials: Graphene Graphene is a one-atom-thick sheet of graphite, with atoms arranged in a regular hexagonal pattern, also called "nanocomposite paper". Graphene has a theoretical specific surface area of 2630 m2/g, which can theoretically lead to a capacitance of 550 F/g. In addition, an advantage of graphene over activated carbon is its higher electrical conductivity. As of 2012 a new development used graphene sheets directly as electrodes without collectors for portable applications. In one embodiment, a graphene-based supercapacitor uses curved graphene sheets that do not stack face-to-face, forming mesopores that are accessible to and wettable by ionic electrolytes at voltages up to 4 V. A specific energy of 85.6 Wh/kg (308 kJ/kg) is obtained at room temperature, equaling that of a conventional nickel-metal hydride battery, but with 100-1000 times greater specific power. The two-dimensional structure of graphene improves charging and discharging. Charge carriers in vertically oriented sheets can quickly migrate into or out of the deeper structures of the electrode, thus increasing currents. Such capacitors may be suitable for 100/120 Hz filter applications, which are unreachable for supercapacitors using other carbon materials. Materials: Carbon nanotubes Carbon nanotubes (CNTs), also called buckytubes, are carbon molecules with a cylindrical nanostructure. They have a hollow structure with walls formed by one-atom-thick sheets of graphite. These sheets are rolled at specific and discrete ("chiral") angles, and the combination of chiral angle and radius controls properties such as electrical conductivity, electrolyte wettability and ion access. Nanotubes are categorized as single-walled nanotubes (SWNTs) or multi-walled nanotubes (MWNTs). The latter have one or more outer tubes successively enveloping a SWNT, much like Russian matryoshka dolls. SWNTs have diameters ranging between 1 and 3 nm. MWNTs have thicker coaxial walls, separated by a spacing (0.34 nm) that is close to graphene's interlayer distance. Materials: Nanotubes can grow vertically on the collector substrate, such as a silicon wafer. Typical lengths are 20 to 100 µm. Carbon nanotubes can greatly improve capacitor performance, due to the highly wettable surface area and high conductivity. A SWNT-based supercapacitor with aqueous electrolyte was systematically studied at the University of Delaware in Prof. Bingqing Wei's group. Li et al., for the first time, discovered that the ion-size effect and the electrode-electrolyte wettability are the dominant factors affecting the electrochemical behavior of flexible SWCNT supercapacitors in different 1-molar aqueous electrolytes with different anions and cations. The experimental results also showed that, for flexible supercapacitors, applying sufficient pressure between the two electrodes improves the performance of aqueous-electrolyte CNT supercapacitors. CNTs can store about the same charge as activated carbon per unit surface area, but the nanotubes' surface is arranged in a regular pattern, providing greater wettability.
SWNTs have a high theoretical specific surface area of 1315 m2/g, while that for MWNTs is lower and is determined by the diameter of the tubes and the degree of nesting, compared with about 3,000 m2/g for activated carbons. Nevertheless, CNTs have higher capacitance than activated carbon electrodes, e.g., 102 F/g for MWNTs and 180 F/g for SWNTs. MWNTs have mesopores that allow easy access of ions at the electrode–electrolyte interface. As the pore size approaches the size of the ion solvation shell, the solvent molecules are partially stripped, resulting in larger ionic packing density and increased faradaic storage capability. However, the considerable volume change during repeated intercalation and depletion decreases their mechanical stability. For this reason, research into increasing surface area, mechanical strength, electrical conductivity and chemical stability is ongoing. Materials: Electrodes for pseudocapacitors MnO2 and RuO2 are typical materials used as electrodes for pseudocapacitors, since they have the electrochemical signature of a capacitive electrode (a linear current-versus-voltage dependence) as well as exhibiting faradaic behavior. Additionally, the charge storage originates from electron-transfer mechanisms rather than accumulation of ions in the electrochemical double layer. Pseudocapacitance is created through faradaic redox reactions that occur within the active electrode materials. Research has focused more on transition-metal oxides such as MnO2, since transition-metal oxides have a lower cost than noble metal oxides such as RuO2. Moreover, the charge storage mechanisms of transition-metal oxides are based predominantly on pseudocapacitance. Two mechanisms of MnO2 charge storage have been proposed. The first mechanism involves the intercalation of protons (H+) or alkali metal cations (C+) in the bulk of the material upon reduction, followed by deintercalation upon oxidation. Materials: MnO2 + H+ (C+) + e− ⇌ MnOOH(C) The second mechanism is based on the surface adsorption of electrolyte cations on MnO2: (MnO2)surface + C+ + e− ⇌ (MnO2− C+)surface. Not every material that exhibits faradaic behavior can be used as an electrode for pseudocapacitors; Ni(OH)2, for example, is a battery-type electrode (a non-linear current-versus-voltage dependence). Materials: Metal oxides Brian Evans Conway's research described electrodes of transition metal oxides that exhibited high amounts of pseudocapacitance. Oxides of transition metals including ruthenium (RuO2), iridium (IrO2), iron (Fe3O4) and manganese (MnO2), or sulfides such as titanium sulfide (TiS2), alone or in combination generate strong faradaic electron-transferring reactions combined with low resistance. Ruthenium dioxide in combination with an H2SO4 electrolyte provides a specific capacitance of 720 F/g and a high specific energy of 26.7 Wh/kg (96.12 kJ/kg). Charge/discharge takes place over a window of about 1.2 V per electrode. This pseudocapacitance of about 720 F/g is roughly 100 times higher than the double-layer capacitance of activated carbon electrodes. These transition metal electrodes offer excellent reversibility, with several hundred-thousand cycles. However, ruthenium is expensive, and the 2.4 V voltage window limits these capacitors to military and space applications.
Materials: Das et al. reported the highest capacitance value (1715 F/g) for a ruthenium-oxide-based supercapacitor with ruthenium oxide electrodeposited onto a porous single-wall carbon nanotube film electrode; this closely approaches the predicted theoretical maximum RuO2 capacitance of 2000 F/g. Materials: In 2014, a RuO2 supercapacitor anchored on a graphene foam electrode delivered a specific capacitance of 502.78 F/g and an areal capacitance of 1.11 F/cm2, leading to a specific energy of 39.28 Wh/kg and a specific power of 128.01 kW/kg over 8,000 cycles with constant performance. The device was a three-dimensional (3D) sub-5 nm hydrous ruthenium-anchored graphene and carbon nanotube (CNT) hybrid foam (RGM) architecture. The graphene foam was conformally covered with hybrid networks of RuO2 nanoparticles and anchored CNTs. Less expensive oxides of iron, vanadium, nickel and cobalt have been tested in aqueous electrolytes, but none has been investigated as much as manganese dioxide (MnO2). However, none of these oxides are in commercial use. Materials: Conductive polymers Another approach uses electron-conducting polymers as the pseudocapacitive material. Although mechanically weak, conductive polymers have high conductivity, resulting in a low ESR and a relatively high capacitance. Such conducting polymers include polyaniline, polythiophene, polypyrrole and polyacetylene. Such electrodes also employ electrochemical doping or dedoping of the polymers with anions and cations. Electrodes made from, or coated with, conductive polymers have costs comparable to carbon electrodes. Materials: Conducting polymer electrodes generally suffer from limited cycling stability. However, polyacene electrodes provide up to 10,000 cycles, much better than batteries. Materials: Electrodes for hybrid capacitors All commercial hybrid supercapacitors are asymmetric. They combine an electrode with a high amount of pseudocapacitance with an electrode with a high amount of double-layer capacitance. In such systems the faradaic pseudocapacitance electrode, with its higher capacitance, provides high specific energy, while the non-faradaic EDLC electrode enables high specific power. An advantage of hybrid-type supercapacitors compared with symmetrical EDLCs is their higher specific capacitance value as well as their higher rated voltage and, correspondingly, their higher specific energy. Materials: Composite electrodes Composite electrodes for hybrid-type supercapacitors are constructed from carbon-based material with incorporated or deposited pseudocapacitive active materials like metal oxides and conducting polymers. As of 2013 most research for supercapacitors explores composite electrodes. Materials: CNTs give a backbone for a homogeneous distribution of metal oxide or electrically conducting polymers (ECPs), producing good pseudocapacitance and good double-layer capacitance. These electrodes achieve higher capacitances than either pure carbon or pure metal oxide or polymer-based electrodes. This is attributed to the accessibility of the nanotubes' tangled mat structure, which allows a uniform coating of pseudocapacitive materials and three-dimensional charge distribution. The process to anchor pseudocapacitive materials usually uses a hydrothermal process.
However, Li et al. at the University of Delaware reported a facile and scalable approach for precipitating MnO2 onto an SWNT film to make an organic-electrolyte-based supercapacitor. Another way to enhance CNT electrodes is by doping with a pseudocapacitive dopant, as in lithium-ion capacitors. In this case the relatively small lithium atoms intercalate between the layers of carbon. The anode is made of lithium-doped carbon, which enables a lower negative potential with a cathode made of activated carbon. This results in a larger voltage of 3.8-4 V that prevents electrolyte oxidation. As of 2007, such capacitors had achieved a capacitance of 550 F/g and a specific energy of up to 14 Wh/kg (50.4 kJ/kg). Materials: Battery-type electrodes Rechargeable battery electrodes have influenced the development of electrodes for new hybrid-type supercapacitors, such as lithium-ion capacitors. Together with a carbon EDLC electrode in an asymmetric construction, this configuration offers higher specific energy than typical supercapacitors, along with higher specific power, longer cycle life and faster charging and recharging times than batteries. Asymmetric electrodes (pseudo/EDLC) Recently, some asymmetric hybrid supercapacitors were developed in which the positive electrode is based on a true pseudocapacitive metal oxide electrode (not a composite electrode) and the negative electrode on an EDLC activated carbon electrode. Materials: Asymmetric supercapacitors (ASCs) are promising candidates for high-performance supercapacitors because of their wide operating potential window, which markedly enhances capacitive behavior. An advantage of this type of supercapacitor is its higher voltage and, correspondingly, its higher specific energy (up to 10-20 Wh/kg, or 36-72 kJ/kg), together with good cycling stability. For example, researchers have used novel skutterudite Ni–CoP3 nanosheets as the positive electrode with activated carbon (AC) as the negative electrode to fabricate an ASC. It exhibits a high energy density of 89.6 Wh/kg at 796 W/kg and a retention of 93% after 10,000 cycles, making it a promising next-generation electrode candidate. Carbon nanofibers/poly(3,4-ethylenedioxythiophene)/manganese oxide (f-CNFs/PEDOT/MnO2) composites have also been used as positive electrodes with AC negative electrodes, giving a high specific energy of 49.4 Wh/kg and good cycling stability (81.06% after 8,000 cycles). Many other kinds of nanocomposite, such as NiCo2S4@NiO and MgCo2O4@MnO2, are being studied as electrodes. For example, an Fe-SnO2@CeO2 nanocomposite electrode can provide a specific energy of 32.2 Wh/kg and a specific power of 747 W/kg, with a capacitance retention of 85.05% over 5,000 cycles of operation. As far as is known, no supercapacitors with this kind of asymmetric electrode are commercially available. Materials: Electrolytes Electrolytes consist of a solvent and dissolved chemicals that dissociate into positive cations and negative anions, making the electrolyte electrically conductive. The more ions the electrolyte contains, the better its conductivity. In supercapacitors, electrolytes are the electrically conductive connection between the two electrodes. Additionally, in supercapacitors the electrolyte provides the molecules for the separating monolayer in the Helmholtz double-layer and delivers the ions for pseudocapacitance.
Materials: The electrolyte determines the capacitor's characteristics: its operating voltage, temperature range, ESR and capacitance. With the same activated carbon electrode, an aqueous electrolyte achieves capacitance values of 160 F/g, while an organic electrolyte achieves only 100 F/g. The electrolyte must be chemically inert and must not chemically attack the other materials in the capacitor, to ensure stable long-term behavior of the capacitor's electrical parameters. The electrolyte's viscosity must be low enough to wet the porous, sponge-like structure of the electrodes. An ideal electrolyte does not exist, forcing a compromise between performance and other requirements. Materials: Aqueous Water is a relatively good solvent for inorganic chemicals. Treated with acids such as sulfuric acid (H2SO4), alkalis such as potassium hydroxide (KOH), or salts such as quaternary phosphonium salts, sodium perchlorate (NaClO4), lithium perchlorate (LiClO4) or lithium hexafluoroarsenate (LiAsF6), water offers relatively high conductivity values of about 100 to 1000 mS/cm. Aqueous electrolytes have a dissociation voltage of 1.15 V per electrode (2.3 V capacitor voltage) and a relatively low operating temperature range. They are used in supercapacitors with low specific energy and high specific power. Materials: Organic Electrolytes with organic solvents such as acetonitrile, propylene carbonate, tetrahydrofuran, diethyl carbonate or γ-butyrolactone, and solutions with quaternary ammonium salts or alkyl ammonium salts such as tetraethylammonium tetrafluoroborate (N(Et)4BF4) or triethyl(methyl)ammonium tetrafluoroborate (NMe(Et)3BF4), are more expensive than aqueous electrolytes, but they have a higher dissociation voltage of typically 1.35 V per electrode (2.7 V capacitor voltage) and a higher temperature range. The lower electrical conductivity of organic solvents (10 to 60 mS/cm) leads to a lower specific power, but since the specific energy increases with the square of the voltage, a higher specific energy. Materials: Ionic Ionic electrolytes consist of liquid salts that can be stable in a wider electrochemical window, enabling capacitor voltages above 3.5 V. Ionic electrolytes typically have an ionic conductivity of a few mS/cm, lower than aqueous or organic electrolytes. Materials: Separators Separators have to physically separate the two electrodes to prevent a short circuit by direct contact. The separator can be very thin (a few hundredths of a millimeter) and must be very porous to the conducting ions to minimize ESR. Furthermore, separators must be chemically inert to protect the electrolyte's stability and conductivity. Inexpensive components use open capacitor papers. More sophisticated designs use nonwoven porous polymeric films like polyacrylonitrile or Kapton, woven glass fibers or porous woven ceramic fibres. Materials: Collectors and housing Current collectors connect the electrodes to the capacitor's terminals. The collector is either sprayed onto the electrode or is a metal foil. Collectors must be able to distribute peak currents of up to 100 A. If the housing is made of a metal (typically aluminum), the collectors should be made from the same material to avoid forming a corrosive galvanic cell. Electrical parameters: Capacitance Capacitance values for commercial capacitors are specified as the "rated capacitance CR". This is the value for which the capacitor has been designed. The value for an actual component must be within the limits given by the specified tolerance.
Typical values are in the range of farads (F), three to six orders of magnitude larger than those of electrolytic capacitors. Electrical parameters: The capacitance value results from the energy W (expressed in joules) of a capacitor charged via a DC voltage VDC: CDC = 2·W / VDC². This value is also called the "DC capacitance". Electrical parameters: Measurement Conventional capacitors are normally measured with a small AC voltage (0.5 V) and a frequency of 100 Hz or 1 kHz, depending on the capacitor type. The AC capacitance measurement offers fast results, important for industrial production lines. The capacitance value of a supercapacitor depends strongly on the measurement frequency, which is related to the porous electrode structure and the limited ion mobility of the electrolyte. Even at a low frequency of 10 Hz, the measured capacitance value drops to about 20 percent of the DC capacitance value. Electrical parameters: This extraordinarily strong frequency dependence can be explained by the different distances the ions have to move in the electrode's pores. The area at the beginning of the pores can be easily accessed by the ions; this short distance is accompanied by low electrical resistance. The greater the distance the ions have to cover, the higher the resistance. This phenomenon can be described with a series circuit of cascaded RC (resistor/capacitor) elements with serial RC time constants. These result in delayed current flow, reducing the total electrode surface area that can be covered with ions if the polarity changes – capacitance decreases with increasing AC frequency. Thus, the total capacitance is achieved only after longer measuring times. Electrical parameters: Because of this very strong frequency dependence of the capacitance, this electrical parameter has to be measured with a special constant-current charge and discharge measurement, defined in IEC standards 62391-1 and -2. Electrical parameters: Measurement starts with charging the capacitor. The voltage is applied and, after the constant-current/constant-voltage power supply has reached the rated voltage, the capacitor is charged for 30 minutes. Next, the capacitor is discharged with a constant discharge current Idischarge. The times t1 and t2 at which the voltage has dropped to 80% (V1) and 40% (V2) of the rated voltage are measured. The capacitance value is calculated as: Ctotal = Idischarge · (t2 − t1) / (V1 − V2). The value of the discharge current is determined by the application. The IEC standard defines four classes: (1) memory backup, discharge current in mA = 1 · C(F); (2) energy storage, discharge current in mA = 0.4 · C(F) · V(V); (3) power, discharge current in mA = 4 · C(F) · V(V); (4) instantaneous power, discharge current in mA = 40 · C(F) · V(V). The measurement methods employed by individual manufacturers are mainly comparable to the standardized methods.
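The constant-current procedure above can be summarized in a short sketch; the helper names and the example values (a nominal 100 F, 2.7 V cell) are invented for illustration and do not come from the standard.

```python
# Sketch of the IEC 62391-1 constant-current capacitance calculation described above.

def capacitance_from_discharge(i_discharge_A: float, t1_s: float, t2_s: float,
                               v1: float, v2: float) -> float:
    """C_total = I * (t2 - t1) / (V1 - V2), with V1 = 80% and V2 = 40% of rated voltage."""
    return i_discharge_A * (t2_s - t1_s) / (v1 - v2)

# Example: a 2.7 V cell discharged at 1 A falls from 2.16 V (80%) to 1.08 V (40%) in 108 s,
# giving roughly 100 F.
print(capacitance_from_discharge(1.0, t1_s=0.0, t2_s=108.0, v1=0.8 * 2.7, v2=0.4 * 2.7))

def discharge_current_mA(c_F: float, v: float, application_class: int) -> float:
    """IEC 62391-1 discharge-current classes (in mA), as listed above."""
    return {1: 1 * c_F,            # memory backup
            2: 0.4 * c_F * v,      # energy storage
            3: 4 * c_F * v,        # power
            4: 40 * c_F * v}[application_class]  # instantaneous power
```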
The standardized measuring method is too time-consuming for manufacturers to use during production for each individual component. For industrially produced capacitors, the capacitance value is instead measured with a faster, low-frequency AC voltage, and a correlation factor is used to compute the rated capacitance. Electrical parameters: This frequency dependence affects capacitor operation. Rapid charge and discharge cycles mean that neither the rated capacitance value nor the specific energy are fully available. In this case the rated capacitance value is recalculated for each application condition. Operating voltage: Supercapacitors are low-voltage components. Safe operation requires that the voltage remain within specified limits. The rated voltage UR is the maximum DC voltage or peak pulse voltage that may be applied continuously and remain within the specified temperature range. Capacitors should never be subjected to voltages continuously in excess of the rated voltage. Electrical parameters: The rated voltage includes a safety margin against the electrolyte's breakdown voltage, at which the electrolyte decomposes. The breakdown voltage decomposes the separating solvent molecules in the Helmholtz double-layer; water, for example, splits into hydrogen and oxygen. The solvent molecules then cannot separate the electrical charges from each other. Voltages higher than the rated voltage cause hydrogen gas formation or a short circuit. Electrical parameters: Standard supercapacitors with aqueous electrolyte are normally specified with a rated voltage of 2.1 to 2.3 V, and capacitors with organic solvents with 2.5 to 2.7 V. Lithium-ion capacitors with doped electrodes may reach a rated voltage of 3.8 to 4 V, but have a lower voltage limit of about 2.2 V. Supercapacitors with ionic electrolytes can exceed an operating voltage of 3.5 V. Operating supercapacitors below the rated voltage improves the long-term behavior of the electrical parameters: capacitance values and internal resistance during cycling are more stable, and lifetime and the number of charge/discharge cycles may be extended. Higher application voltages require connecting cells in series. Since each component has a slight difference in capacitance value and ESR, it is necessary to actively or passively balance them to stabilize the applied voltage. Passive balancing employs resistors in parallel with the supercapacitors. Active balancing may include electronic voltage management above a threshold that varies the current. Electrical parameters: Internal resistance Charging/discharging a supercapacitor involves the movement of charge carriers (ions) in the electrolyte across the separator to the electrodes and into their porous structure. Losses occur during this movement that can be measured as the internal DC resistance. Electrical parameters: With the electrical model of cascaded, series-connected RC (resistor/capacitor) elements in the electrode pores, the internal resistance increases with increasing penetration depth of the charge carriers into the pores. The internal DC resistance is time-dependent and increases during charge/discharge. In applications, often only the switch-on and switch-off ranges are of interest. The internal resistance Ri can be calculated from the voltage drop ΔV2 at the start of a discharge with a constant discharge current Idischarge; the drop is obtained from the intersection of the auxiliary line extended from the straight part of the discharge curve with the time base at the start of discharge. The resistance can be calculated as: Ri = ΔV2 / Idischarge. The discharge current Idischarge for the measurement of internal resistance can be taken from the classification according to IEC 62391-1. Electrical parameters: This internal DC resistance Ri should not be confused with the internal AC resistance, called the equivalent series resistance (ESR), normally specified for capacitors. The ESR is measured at 1 kHz and is much smaller than the DC resistance. ESR is not relevant for calculating supercapacitor inrush currents or other peak currents. Electrical parameters: Ri determines several supercapacitor properties. It limits the charge and discharge peak currents as well as the charge/discharge times.
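A minimal sketch of this internal-resistance calculation, with illustrative numbers that are not taken from any datasheet:

```python
# Ri = dV2 / I_discharge, where dV2 is the voltage drop at the start of a constant-current
# discharge (extrapolated from the straight part of the discharge curve back to the start).

def internal_dc_resistance(delta_v2_V: float, i_discharge_A: float) -> float:
    return delta_v2_V / i_discharge_A

# A 30 mV drop at a 1 A discharge corresponds to Ri = 30 mOhm.
print(internal_dc_resistance(0.030, 1.0))   # 0.03 Ohm
```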
Ri and the capacitance C result in the time constant τ = Ri · C. This time constant determines the charge/discharge time. A 100 F capacitor with an internal resistance of 30 mΩ, for example, has a time constant of 0.03 · 100 = 3 s. After 3 seconds of charging with a current limited only by the internal resistance, the capacitor has reached 63.2% of full charge (or has discharged to 36.8% of full charge). Electrical parameters: Standard capacitors with constant internal resistance fully charge within about 5 τ. Since internal resistance increases during charge/discharge, actual times cannot be calculated with this formula; charge/discharge time therefore depends on specific construction details. Current load and cycle stability: Because supercapacitors operate without forming chemical bonds, current loads, including charge, discharge and peak currents, are not limited by reaction constraints. Current load and cycle stability can be much higher than for rechargeable batteries. Current loads are limited only by internal resistance, which may be substantially lower than for batteries. The internal resistance Ri and the charge/discharge or peak currents I generate internal heat losses Ploss according to: Ploss = Ri · I². This heat must be released and distributed to the ambient environment to maintain operating temperatures below the specified maximum temperature. Heat generally defines capacitor lifetime through electrolyte diffusion. The temperature rise caused by current loads should be kept below 5 to 10 K at maximum ambient temperature (where it has only a minor influence on expected lifetime). For that reason, the specified charge and discharge currents for frequent cycling are determined by the internal resistance. Electrical parameters: The specified cycle parameters under maximal conditions include charge and discharge current, pulse duration and frequency. They are specified for a defined temperature range and over the full voltage range for a defined lifetime. They can differ enormously depending on the combination of electrode porosity, pore size and electrolyte. Generally, a lower current load increases capacitor life and increases the number of cycles. This can be achieved either by a lower voltage range or by slower charging and discharging. Supercapacitors (except those with polymer electrodes) can potentially support more than one million charge/discharge cycles without substantial capacity drops or internal resistance increases. Alongside the higher current load, this is the second great advantage of supercapacitors over batteries. The stability results from the dual electrostatic and electrochemical storage principles. Electrical parameters: The specified charge and discharge currents can be significantly exceeded by lowering the frequency or by single pulses. Heat generated by a single pulse may be spread over the time until the next pulse occurs to ensure a relatively small average heat increase. Such a "peak power current" for power applications can, for supercapacitors of more than 1000 F, reach a maximum peak current of about 1000 A. Such high currents generate high thermal stress and high electromagnetic forces that can damage the electrode-collector connection, requiring robust design and construction of the capacitors.
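The time-constant and heat-loss relations above can be combined in a brief sketch; the 100 F / 30 mΩ values repeat the example in the text, while the 50 A load is an invented illustration.

```python
import math

# tau = Ri * C for the charge/discharge time constant, P_loss = Ri * I^2 for internal heating.

def time_constant_s(ri_ohm: float, c_F: float) -> float:
    return ri_ohm * c_F

def charge_fraction(t_s: float, tau_s: float) -> float:
    """Fraction of full charge after charging for t seconds through a constant resistance."""
    return 1.0 - math.exp(-t_s / tau_s)

def heat_loss_W(ri_ohm: float, i_A: float) -> float:
    return ri_ohm * i_A ** 2

tau = time_constant_s(0.030, 100.0)        # 3 s, as in the example above
print(tau, charge_fraction(tau, tau))      # ~0.632 after one time constant
print(charge_fraction(5 * tau, tau))       # ~0.993 after about 5 tau
print(heat_loss_W(0.030, 50.0))            # 75 W of heat at a sustained 50 A load
```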
Electrical parameters: Device capacitance and resistance dependence on operating voltage and temperature Device parameters such as the capacitance, initial resistance and steady-state resistance are not constant, but vary with the device's operating voltage. Device capacitance increases measurably as the operating voltage increases. For example, a 100 F device can vary by 26% from its maximum capacitance over its entire operational voltage range. Similar dependence on operating voltage is seen in the steady-state resistance (Rss) and the initial resistance (Ri). Electrical parameters: Device properties also depend on device temperature. As the temperature of the device changes, either through operation or through varying ambient temperature, internal properties such as capacitance and resistance vary as well. Device capacitance increases as the operating temperature increases. Electrical parameters: Energy capacity Supercapacitors occupy the gap between high-power/low-energy electrolytic capacitors and low-power/high-energy rechargeable batteries. The energy Wmax (expressed in joules) that can be stored in a capacitor is given by the formula Wmax = ½ · Ctotal · Vcharge². This formula describes the amount of energy stored and is often used to describe new research successes. However, only part of the stored energy is available to applications, because the voltage drop and the time constant over the internal resistance mean that some of the stored charge is inaccessible. The effectively realized amount of energy Weff is reduced by the usable voltage difference between Vmax and Vmin and can be represented as: Weff = ½ · C · (Vmax² − Vmin²). This formula also covers components with asymmetric voltage windows, such as lithium-ion capacitors. Electrical parameters: Specific energy and specific power The amount of energy that can be stored in a capacitor per unit mass is called its specific energy. Specific energy is measured gravimetrically (per unit of mass) in watt-hours per kilogram (Wh/kg). The amount of energy that can be stored in a capacitor per unit volume is called its energy density (also called volumetric specific energy in some literature). Energy density is measured volumetrically (per unit of volume) in watt-hours per litre (Wh/L). Units of litres and dm3 can be used interchangeably. Electrical parameters: As of 2013, commercial energy densities vary widely, but in general range from around 5 to 8 Wh/L. In comparison, petrol fuel has an energy density of 32.4 MJ/L or 9,000 Wh/L. Commercial specific energies range from around 0.5 to 15 Wh/kg. For comparison, an aluminum electrolytic capacitor typically stores 0.01 to 0.3 Wh/kg, while a conventional lead-acid battery typically stores 30 to 40 Wh/kg and modern lithium-ion batteries 100 to 265 Wh/kg. Supercapacitors can therefore store 10 to 100 times more energy than electrolytic capacitors, but only about one tenth as much as batteries. For reference, petrol fuel has a specific energy of 44.4 MJ/kg or 12,300 Wh/kg.
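A short sketch evaluating the energy formulas above; the 3000 F, 2.7 V, 0.5 kg cell is an invented example chosen only to show how such figures land within the commercial range quoted above.

```python
# W_max = 1/2 * C * V^2 and W_eff = 1/2 * C * (V_max^2 - V_min^2), converted to specific energy.

def stored_energy_J(c_F: float, v: float) -> float:
    return 0.5 * c_F * v ** 2

def effective_energy_J(c_F: float, v_max: float, v_min: float) -> float:
    return 0.5 * c_F * (v_max ** 2 - v_min ** 2)

def specific_energy_Wh_per_kg(energy_J: float, mass_kg: float) -> float:
    return energy_J / 3600.0 / mass_kg

w_max = stored_energy_J(3000, 2.7)                 # ~10.9 kJ stored at full charge
w_eff = effective_energy_J(3000, 2.7, 1.35)        # ~8.2 kJ usable down to half voltage
print(w_max, w_eff)
print(specific_energy_Wh_per_kg(w_max, 0.5))       # ~6.1 Wh/kg, within the commercial range above
```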
Electrical parameters: Although the specific energy of supercapacitors compares unfavorably with that of batteries, capacitors have the important advantage of high specific power. Specific power describes the speed at which energy can be delivered to the load (or, when charging the device, absorbed from the generator). The maximum power Pmax specifies the power of a theoretical rectangular single maximum current peak at a given voltage. In real circuits the current peak is not rectangular and the voltage is smaller because of the voltage drop, so IEC 62391-2 established a more realistic effective power Peff for supercapacitors for power applications, which is half the maximum and is given by: Peff = V² / (8 · Ri) and Pmax = V² / (4 · Ri), with V the applied voltage and Ri the internal DC resistance of the capacitor. Electrical parameters: Just like specific energy, specific power is measured either gravimetrically in kilowatts per kilogram (kW/kg, specific power) or volumetrically in kilowatts per litre (kW/L, power density). Supercapacitor specific power is typically 10 to 100 times greater than for batteries and can reach values up to 15 kW/kg. Ragone charts relate energy to power and are a valuable tool for characterizing and visualizing energy storage components. With such a diagram, the position in specific power and specific energy of different storage technologies is easy to compare. Electrical parameters: Lifetime Since supercapacitors do not rely on chemical changes in the electrodes (except for those with polymer electrodes), lifetimes depend mostly on the rate of evaporation of the liquid electrolyte. This evaporation is generally a function of temperature, current load, current cycle frequency and voltage. Current load and cycle frequency generate internal heat, so the evaporation-determining temperature is the sum of ambient and internal heat. This temperature is measurable as the core temperature in the center of a capacitor body. The higher the core temperature, the faster the evaporation and the shorter the lifetime. Electrical parameters: Evaporation generally results in decreasing capacitance and increasing internal resistance. According to IEC/EN 62391-2, capacitance reductions of over 30%, or internal resistance exceeding four times its datasheet specification, are considered "wear-out failures", implying that the component has reached end-of-life. The capacitors remain operable, but with reduced capabilities. Whether the deviation of the parameters has any influence on proper functionality depends on the application of the capacitors. Electrical parameters: Such large changes of electrical parameters as specified in IEC/EN 62391-2 are usually unacceptable for high-current-load applications. Components that support high current loads use much smaller limits, e.g., 20% loss of capacitance or a doubling of the internal resistance. The narrower definition is important for such applications, since heat increases linearly with increasing internal resistance and the maximum temperature should not be exceeded. Temperatures higher than specified can destroy the capacitor. Electrical parameters: The real application lifetime of supercapacitors, also called "service life", "life expectancy" or "load life", can reach 10 to 15 years or more at room temperature. Such long periods cannot be tested by manufacturers. Hence, they specify the expected capacitor lifetime at the maximum temperature and voltage conditions. The results are specified in datasheets using the notation "tested time (hours)/max. temperature (°C)", such as "5000 h/65 °C". With this value, and expressions derived from historical data, lifetimes can be estimated for lower temperature conditions. Electrical parameters: Datasheet lifetime specifications are tested by the manufacturers using an accelerated aging test called an "endurance test", with maximum temperature and voltage applied over a specified time. For a "zero defect" product policy, no wear-out or total failure may occur during this test. The lifetime specification from datasheets can be used to estimate the expected lifetime for a given design. The "10-degrees rule" used for electrolytic capacitors with non-solid electrolyte is used in these estimations and can be applied to supercapacitors. This rule employs the Arrhenius equation, a simple formula for the temperature dependence of reaction rates: for every 10 °C reduction in operating temperature, the estimated life doubles, i.e. Lx = L0 · 2^((T0 − Tx)/10), with Lx = estimated lifetime, L0 = specified lifetime, T0 = upper specified capacitor temperature and Tx = actual operating temperature of the capacitor cell. Calculated with this formula, capacitors specified with 5000 h at 65 °C have an estimated lifetime of 20,000 h at 45 °C.
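A minimal sketch of this estimate, reproducing the worked example above (5,000 h at 65 °C giving about 20,000 h at 45 °C); the room-temperature extrapolation is an illustration only and, as noted above, the rule is only an approximation.

```python
# "10-degrees rule": L_x = L_0 * 2**((T_0 - T_x) / 10)

def estimated_lifetime_h(l0_h: float, t0_degC: float, tx_degC: float) -> float:
    return l0_h * 2 ** ((t0_degC - tx_degC) / 10.0)

print(estimated_lifetime_h(5000, 65, 45))   # 20000.0 h, the example above
print(estimated_lifetime_h(5000, 65, 25))   # 80000.0 h at room temperature (optimistic extrapolation)
```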
For a "zero defect" product policy, no wear out or total failure may occur during this test. Electrical parameters: The lifetime specification from datasheets can be used to estimate the expected lifetime for a given design. The "10-degrees-rule" used for electrolytic capacitors with non-solid electrolyte is used in those estimations, and can be used for supercapacitors. This rule employs the Arrhenius equation: a simple formula for the temperature dependence of reaction rates. For every 10 °C reduction in operating temperature, the estimated life doubles. Electrical parameters: 10 With Lx = estimated lifetime L0 = specified lifetime T0 = upper specified capacitor temperature Tx = actual operating temperature of the capacitor cellCalculated with this formula, capacitors specified with 5000 h at 65 °C, have an estimated lifetime of 20,000 h at 45 °C. Lifetimes are also dependent on the operating voltage, because the development of gas in the liquid electrolyte depends on the voltage. The lower the voltage, the smaller the gas development, and the longer the lifetime. No general formula relates voltage to lifetime. The voltage dependent curves shown from the picture are an empirical result from one manufacturer. Life expectancy for power applications may be also limited by current load or number of cycles. This limitation has to be specified by the relevant manufacturer and is strongly type dependent. Electrical parameters: Self-discharge Storing electrical energy in the double-layer separates the charge carriers within the pores by distances in the range of molecules. Irregularities can occur over this short distance, leading to a small exchange of charge carriers and gradual discharge. This self-discharge is called leakage current. Leakage depends on capacitance, voltage, temperature, and the chemical stability of the electrode/electrolyte combination. At room temperature, leakage is so low that it is specified as time to self-discharge in hours, days, or weeks. As an example, a 5.5 V/F Panasonic "Goldcapacitor" specifies a voltage drop at 20 °C from 5.5 V to 3 V in 600 hours (25 days or 3.6 weeks) for a double cell capacitor. Electrical parameters: Post charge voltage relaxation It has been noticed that after the EDLC experiences a charge or discharge, the voltage will drift over time, relaxing toward its previous voltage level. The observed relaxation can occur over several hours and is likely due to long diffusion time constants of the porous electrodes within the EDLC. Electrical parameters: Polarity Since the positive and negative electrodes (or simply positrode and negatrode, respectively) of symmetric supercapacitors consist of the same material, theoretically supercapacitors have no true polarity and catastrophic failure does not normally occur. However reverse-charging a supercapacitor lowers its capacity, so it is recommended practice to maintain the polarity resulting from the formation of the electrodes during production. Asymmetric supercapacitors are inherently polar. Electrical parameters: Pseudocapacitor and hybrid supercapacitors which have electrochemical charge properties may not be operated with reverse polarity, precluding their use in AC operation. However, this limitation does not apply to EDLC supercapacitors A bar in the insulating sleeve identifies the negative terminal in a polarized component. Electrical parameters: In some literature, the terms "anode" and "cathode" are used in place of negative electrode and positive electrode. 
Using anode and cathode to describe the electrodes in supercapacitors (and also in rechargeable batteries, including lithium-ion batteries) can lead to confusion, because the polarity changes depending on whether a component is considered a generator or a consumer of current. In electrochemistry, cathode and anode are related to reduction and oxidation reactions, respectively. However, in supercapacitors based on electric double-layer capacitance, there are no oxidation or reduction reactions on either of the two electrodes. Therefore, the concepts of cathode and anode do not apply. Electrical parameters: Comparison of selected commercial supercapacitors The range of electrodes and electrolytes available yields a variety of components suitable for diverse applications. The development of low-ohmic electrolyte systems, in combination with electrodes of high pseudocapacitance, enables many more technical solutions. The following table shows differences among capacitors of various manufacturers in capacitance range, cell voltage, internal resistance (ESR, DC or AC value) and volumetric and gravimetric specific energy. Electrical parameters: In the table, ESR refers to the component with the largest capacitance value of the respective manufacturer. Roughly, they divide supercapacitors into two groups. The first group offers greater ESR values of about 20 milliohms and relatively small capacitances of 0.1 to 470 F. These are "double-layer capacitors" for memory back-up or similar applications. The second group offers 100 to 10,000 F with a significantly lower ESR value, under 1 milliohm. These components are suitable for power applications. A correlation of some supercapacitor series of different manufacturers to the various construction features is provided in Pandolfo and Hollenkamp. In commercial double-layer capacitors, or, more specifically, EDLCs in which energy storage is predominantly achieved by double-layer capacitance, energy is stored by forming an electrical double layer of electrolyte ions on the surface of conductive electrodes. Since EDLCs are not limited by the electrochemical charge-transfer kinetics of batteries, they can charge and discharge at a much higher rate, with lifetimes of more than 1 million cycles. The EDLC energy density is determined by the operating voltage and the specific capacitance (farad/gram or farad/cm3) of the electrode/electrolyte system. The specific capacitance is related to the specific surface area (SSA) accessible by the electrolyte, its interfacial double-layer capacitance and the electrode material density. Electrical parameters: Commercial EDLCs are based on two symmetric electrodes impregnated with electrolytes comprising tetraethylammonium tetrafluoroborate salts in organic solvents. Current EDLCs containing organic electrolytes operate at 2.7 V and reach energy densities around 5-8 Wh/kg and 7 to 10 Wh/L. Graphene-based platelets with a mesoporous spacer material are a promising structure for increasing the SSA of the electrolyte. Standards: Supercapacitors vary sufficiently that they are rarely interchangeable, especially those with higher specific energy.
Applications range from low to high peak currents, requiring standardized test protocols. Test specifications and parameter requirements are specified in the generic specification IEC/EN 62391-1, Fixed electric double-layer capacitors for use in electronic equipment. The standard defines four application classes, according to discharge current levels: (1) memory backup; (2) energy storage, mainly used for driving motors that require short-time operation; (3) power, for higher power demand over long-time operation; and (4) instantaneous power, for applications that require relatively high current units or peak currents of up to several hundred amperes even with a short operating time. Three further standards describe special applications: IEC 62391-2, Fixed electric double-layer capacitors for use in electronic equipment - Blank detail specification - Electric double-layer capacitors for power application; IEC 62576, Electric double-layer capacitors for use in hybrid electric vehicles - Test methods for electrical characteristics; and BS/EN 61881-3, Railway applications - Rolling stock equipment - Capacitors for power electronics - Electric double-layer capacitors. Applications: Supercapacitors do not support alternating current (AC) applications. Applications: Supercapacitors have advantages in applications where a large amount of power is needed for a relatively short time, or where a very high number of charge/discharge cycles or a longer lifetime is required. Typical applications range from milliamp currents or milliwatts of power for up to a few minutes, to several amps of current or several hundred kilowatts of power for much shorter periods. Applications: The time t for which a supercapacitor can deliver a constant current I can be calculated as t = C · (Ucharge − Umin) / I, as the capacitor voltage decreases from Ucharge down to Umin. If the application needs a constant power P for a certain time t, this can be calculated as t = C · (Ucharge² − Umin²) / (2 · P), wherein the capacitor voltage likewise decreases from Ucharge down to Umin.
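These sizing relations can be evaluated with a short sketch; the 100 F backup capacitor and the voltage window used here are invented, illustrative values rather than figures from the source.

```python
# Run time at constant current: t = C * (U_charge - U_min) / I
# Run time at constant power:   t = C * (U_charge^2 - U_min^2) / (2 * P)

def runtime_constant_current_s(c_F: float, u_charge: float, u_min: float, i_A: float) -> float:
    return c_F * (u_charge - u_min) / i_A

def runtime_constant_power_s(c_F: float, u_charge: float, u_min: float, p_W: float) -> float:
    return c_F * (u_charge ** 2 - u_min ** 2) / (2.0 * p_W)

# Example: a 100 F cell charged to 2.7 V and usable down to 1.5 V.
print(runtime_constant_current_s(100, 2.7, 1.5, i_A=0.5))   # 240 s at a constant 0.5 A
print(runtime_constant_power_s(100, 2.7, 1.5, p_W=1.0))     # ~252 s at a constant 1 W
```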
General Consumer electronics In applications with fluctuating loads, such as laptop computers, PDAs, GPS units, portable media players, hand-held devices and photovoltaic systems, supercapacitors can stabilize the power supply. Supercapacitors deliver power for photographic flashes in digital cameras and for LED flashlights, which can be charged in much shorter periods of time, e.g., 90 seconds. Some portable speakers are powered by supercapacitors. Tools A cordless electric screwdriver with supercapacitors for energy storage has about half the run time of a comparable battery model, but can be fully charged in 90 seconds. It retains 85% of its charge after three months left idle. Applications: Grid power buffer Numerous non-linear loads, such as EV chargers, HEVs, air-conditioning systems and advanced power conversion systems, cause current fluctuations and harmonics. These current differences create unwanted voltage fluctuations and therefore power oscillations on the grid. Power oscillations not only reduce the efficiency of the grid, but can cause voltage drops in the common coupling bus and considerable frequency fluctuations throughout the entire system. To overcome this problem, supercapacitors can be implemented as an interface between the load and the grid, acting as a buffer between the grid and the high pulse power drawn from the charging station. Applications: Low-power equipment power buffer Supercapacitors provide backup or emergency shutdown power to low-power equipment such as RAM, SRAM, microcontrollers and PC Cards. They are the sole power source for low-energy applications such as automated meter reading (AMR) equipment or for event notification in industrial electronics. Supercapacitors buffer power to and from rechargeable batteries, mitigating the effects of short power interruptions and high current peaks. Batteries kick in only during extended interruptions, e.g., if the mains power or a fuel cell fails, which lengthens battery life. Uninterruptible power supplies (UPS) may be powered by supercapacitors, which can replace much larger banks of electrolytic capacitors. This combination reduces the cost per cycle, saves on replacement and maintenance costs, enables the battery to be downsized and extends battery life. Supercapacitors provide backup power for actuators in wind turbine pitch systems, so that blade pitch can be adjusted even if the main supply fails. Voltage stabilizer Supercapacitors can stabilize voltage fluctuations for power lines by acting as dampers. Wind and photovoltaic systems exhibit fluctuating supply caused by gusting or clouds, which supercapacitors can buffer within milliseconds. Applications: Micro grids Micro grids are usually powered by clean and renewable energy. Most of this energy generation, however, is not constant throughout the day and does not usually match demand. Supercapacitors can be used for micro grid storage to instantaneously inject power when demand is high and production dips momentarily, and to store energy in the reverse conditions. They are useful in this scenario because micro grids increasingly produce power in DC, and capacitors can be utilized in both DC and AC applications. Supercapacitors work best in conjunction with chemical batteries. They provide an immediate voltage buffer to compensate for quickly changing power loads, owing to their high charge and discharge rate, through an active control system. Once the voltage is buffered, it is put through an inverter to supply AC power to the grid. Supercapacitors cannot, however, provide frequency correction in this form directly in the AC grid. Applications: Energy harvesting Supercapacitors are suitable temporary energy storage devices for energy-harvesting systems. In energy-harvesting systems, the energy is collected from ambient or renewable sources, e.g., mechanical movement, light or electromagnetic fields, and converted to electrical energy in an energy storage device. For example, it was demonstrated that energy collected from RF (radio frequency) fields (using an RF antenna and an appropriate rectifier circuit) can be stored in a printed supercapacitor. The harvested energy was then used to power an application-specific integrated circuit (ASIC) for over 10 hours. Applications: Incorporation into batteries The UltraBattery is a hybrid rechargeable lead-acid battery and supercapacitor. Its cell construction contains a standard lead-acid battery positive electrode, standard sulphuric acid electrolyte and a specially prepared negative carbon-based electrode that stores electrical energy with double-layer capacitance. The presence of the supercapacitor electrode alters the chemistry of the battery and affords it significant protection from sulfation in high-rate partial state-of-charge use, which is the typical failure mode of valve-regulated lead-acid cells used this way. The resulting cell performs with characteristics beyond either a lead-acid cell or a supercapacitor, with charge and discharge rates, cycle life, efficiency and performance all enhanced.
Applications: Medical Supercapacitors are used in defibrillators, where they can deliver 500 joules to shock the heart back into sinus rhythm. Transport Aviation In 2005, aerospace systems and controls company Diehl Luftfahrt Elektronik GmbH chose supercapacitors to power emergency actuators for doors and evacuation slides used in airliners, including the Airbus A380. Military Supercapacitors' low internal resistance supports applications that require short-term high currents. Among the earliest uses were motor startup (cold engine starts, particularly with diesels) for large engines in tanks and submarines. Supercapacitors buffer the battery, handling short current peaks, reducing cycling and extending battery life. Further military applications that require high specific power are phased-array radar antennae, laser power supplies, military radio communications, avionics displays and instrumentation, backup power for airbag deployment, and GPS-guided missiles and projectiles. Applications: Automotive Toyota's Yaris Hybrid-R concept car uses a supercapacitor to provide bursts of power. PSA Peugeot Citroën has started using supercapacitors as part of its stop-start fuel-saving system, which permits faster initial acceleration. Mazda's i-ELOOP system stores energy in a supercapacitor during deceleration and uses it to power on-board electrical systems while the engine is stopped by the stop-start system. Applications: Bus/tram Maxwell Technologies, an American supercapacitor maker, claimed that more than 20,000 hybrid buses use the devices to increase acceleration, particularly in China. In 2014, Guangzhou, China began using trams powered by supercapacitors that are recharged in 30 seconds by a device positioned between the rails, storing power to run the tram for up to 4 km, more than enough to reach the next stop, where the cycle can be repeated. CAF also offers supercapacitors on their Urbos 3 trams in the form of their ACR system. Applications: Energy recovery A primary challenge of all transport is reducing energy consumption and reducing CO2 emissions. Recovery of braking energy (recuperation or regenerative braking) helps with both. This requires components that can quickly store and release energy over long periods with a high cycle rate. Supercapacitors fulfill these requirements and are therefore used in various applications in transportation. Applications: Railway Supercapacitors can be used to supplement batteries in starter systems in diesel railroad locomotives with diesel-electric transmission. The capacitors capture the braking energy of a full stop and deliver the peak current for starting the diesel engine and accelerating the train, and they ensure the stabilization of line voltage. Depending on the driving mode, up to 30% energy saving is possible by recovery of braking energy. Low maintenance and environmentally friendly materials encouraged the choice of supercapacitors. Applications: Cranes, forklifts and tractors Mobile hybrid diesel-electric rubber-tyred gantry cranes move and stack containers within a terminal. Lifting the boxes requires large amounts of energy. Some of the energy can be recaptured while lowering the load, resulting in improved efficiency. A triple-hybrid forklift truck uses fuel cells and batteries as primary energy storage and supercapacitors to buffer power peaks by storing braking energy. They provide the forklift with peak power over 30 kW.
The triple-hybrid system offers over 50% energy savings compared with diesel or fuel-cell systems. Supercapacitor-powered terminal tractors transport containers to warehouses. They provide an economical, quiet and pollution-free alternative to diesel terminal tractors. Applications: Light rail and trams Supercapacitors make it possible not only to reduce energy consumption, but also to replace overhead lines in historical city areas, preserving the city's architectural heritage. This approach may allow many new light-rail city lines to dispense with overhead wires that would be too expensive to route in full. Applications: In 2003 Mannheim adopted a prototype light-rail vehicle (LRV) using the MITRAC Energy Saver system from Bombardier Transportation to store mechanical braking energy with a roof-mounted supercapacitor unit. It contains several units, each made of 192 capacitors with 2700 F / 2.7 V interconnected in three parallel lines. This circuit results in a 518 V system with an energy content of 1.5 kWh. For acceleration when starting, this "on-board system" can provide the LRV with 600 kW and can drive the vehicle up to 1 km without overhead line supply, thus better integrating the LRV into the urban environment. Compared with conventional LRVs or metro vehicles that return energy into the grid, onboard energy storage saves up to 30% of energy and reduces peak grid demand by up to 50%. Applications: In 2009 supercapacitors enabled LRVs to operate in the historical city area of Heidelberg without overhead wires, thus preserving the city's architectural heritage. The supercapacitor equipment cost an additional €270,000 per vehicle, which was expected to be recovered over the first 15 years of operation. The supercapacitors are charged at stop-over stations when the vehicle is at a scheduled stop. In April 2011 the German regional transport operator Rhein-Neckar, responsible for Heidelberg, ordered a further 11 units. In 2009, Alstom and RATP equipped a Citadis tram with an experimental energy recovery system called "STEEM". The system is fitted with 48 roof-mounted supercapacitors to store braking energy, which provides tramways with a high level of energy autonomy by enabling them to run without overhead power lines on parts of the route, recharging while traveling at powered stop-over stations. During the tests, which took place between the Porte d'Italie and Porte de Choisy stops on line T3 of the tramway network in Paris, the tramset used an average of approximately 16% less energy. Applications: In 2012 tram operator Geneva Public Transport began tests of an LRV equipped with a prototype roof-mounted supercapacitor unit to recover braking energy. Siemens is delivering supercapacitor-enhanced light-rail transport systems that include mobile storage. Hong Kong's South Island metro line is to be equipped with two 2 MW energy storage units that are expected to reduce energy consumption by 10%. In August 2012 the CSR Zhuzhou Electric Locomotive corporation of China presented a prototype two-car light metro train equipped with a roof-mounted supercapacitor unit. The train can travel up to 2 km without wires, recharging in 30 seconds at stations via a ground-mounted pickup. The supplier claimed the trains could be used in 100 small and medium-sized Chinese cities. Seven trams (streetcars) powered by supercapacitors were scheduled to go into operation in 2014 in Guangzhou, China. The supercapacitors are recharged in 30 seconds by a device positioned between the rails, which powers the tram for up to 4 kilometres (2.5 mi).
Applications: As of 2017, Zhuzhou's supercapacitor vehicles are also used on the new Nanjing streetcar system and are undergoing trials in Wuhan. In 2012, in Lyon (France), SYTRAL (the Lyon public transportation administration) started experiments with a "wayside regeneration" system built by Adetel Group, which has developed its own energy saver, named "NeoGreen", for LRVs, LRTs and metros. In 2015, Alstom announced SRS, an energy storage system that charges supercapacitors on board a tram by means of ground-level conductor rails located at tram stops. This allows trams to operate without overhead lines for short distances. The system has been touted as an alternative to the company's ground-level power supply (APS) system, or can be used in conjunction with it, as in the case of the VLT network in Rio de Janeiro, Brazil, which opened in 2016. Applications: Buses. The first hybrid bus with supercapacitors in Europe came in 2001 in Nuremberg, Germany. It was MAN's so-called "Ultracapbus", and it was tested in real operation in 2001/2002. The test vehicle was equipped with a diesel-electric drive in combination with supercapacitors. The system was supplied with 8 Ultracap modules of 80 V, each containing 36 components. The system worked at 640 V and could be charged/discharged at 400 A. Its energy content was 0.4 kWh with a weight of 400 kg. Applications: The supercapacitors recaptured braking energy and delivered starting energy. Fuel consumption was reduced by 10 to 15% compared with conventional diesel vehicles. Other advantages included reduction of CO2 emissions, quiet and emission-free engine starts, lower vibration and reduced maintenance costs. Applications: As of 2002, an electric bus fleet called TOHYCO-Rider was tested in Luzern, Switzerland. The supercapacitors could be recharged via an inductive, contactless high-speed power charger after every transportation cycle, within 3 to 4 minutes. In early 2005 Shanghai tested a new form of electric bus, called capabus, that runs without power lines (catenary-free operation) using large onboard supercapacitors that partially recharge whenever the bus is at a stop (under so-called electric umbrellas) and fully charge at the terminus. In 2006, two commercial bus routes began to use the capabuses; one of them is route 11 in Shanghai. It was estimated that the supercapacitor bus was cheaper than a lithium-ion battery bus, and one of its buses had one-tenth the energy cost of a diesel bus, with lifetime fuel savings of $200,000. A hybrid electric bus called tribrid was unveiled in 2008 by the University of Glamorgan, Wales, for use as student transport. It is powered by hydrogen fuel or solar cells, batteries and ultracapacitors. Applications: Motor racing. The FIA, a governing body for motor racing events, proposed in the Power-Train Regulation Framework for Formula 1 (version 1.3, 23 May 2007) that a new set of power-train regulations be issued that includes a hybrid drive of up to 200 kW input and output power using "superbatteries" made with batteries and supercapacitors connected in parallel (KERS). About 20% tank-to-wheel efficiency could be reached using the KERS system. Applications: The Toyota TS030 Hybrid LMP1 car, a racing car developed under Le Mans Prototype rules, uses a hybrid drivetrain with supercapacitors. In the 2012 24 Hours of Le Mans race a TS030 qualified with a fastest lap only 1.055 seconds slower (3:24.842 versus 3:23.787) than the fastest car, an Audi R18 e-tron quattro with flywheel energy storage.
The supercapacitor and flywheel components, whose rapid charge-discharge capabilities help in both braking and acceleration, made the Audi and Toyota hybrids the fastest cars in the race. In the 2012 Le Mans race the two competing TS030s, one of which was in the lead for part of the race, both retired for reasons unrelated to the supercapacitors. The TS030 won three of the eight races in the 2012 FIA World Endurance Championship season. In 2014 the Toyota TS040 Hybrid used a supercapacitor to add 480 horsepower from two electric motors. Applications: Hybrid electric vehicles. Supercapacitor/battery combinations in electric vehicles (EVs) and hybrid electric vehicles (HEVs) have been well investigated. A 20 to 60% fuel reduction has been claimed by recovering braking energy in EVs or HEVs. The ability of supercapacitors to charge much faster than batteries, their stable electrical properties, broader temperature range and longer lifetime make them well suited, but weight, volume and especially cost offset those advantages. Applications: Supercapacitors' lower specific energy makes them unsuitable as a stand-alone energy source for long-distance driving. The fuel-economy improvement of a capacitor solution over a battery solution is about 20% and is available only for shorter trips; for long-distance driving the advantage decreases to about 6%. Vehicles combining capacitors and batteries have so far run only as experimental vehicles. As of 2013 all automotive manufacturers of EVs or HEVs had developed prototypes that use supercapacitors instead of batteries to store braking energy in order to improve driveline efficiency. The Mazda 6 is the only production car that uses supercapacitors to recover braking energy. Branded as i-ELOOP, the regenerative braking is claimed to reduce fuel consumption by about 10%. The Russian Yo-cars Ё-mobile series was a concept and crossover hybrid vehicle working with a gasoline-driven rotary vane engine and an electric generator for driving the traction motors. A supercapacitor with relatively low capacitance recovers braking energy to power the electric motor when accelerating from a stop. Toyota's Yaris Hybrid-R concept car uses a supercapacitor to provide quick bursts of power. PSA Peugeot Citroën fits supercapacitors to some of its cars as part of its stop-start fuel-saving system, as this permits faster start-ups when the traffic lights turn green. Applications: Gondolas. In Zell am See, Austria, an aerial lift connects the city with the Schmittenhöhe mountain. The gondolas sometimes run 24 hours per day, using electricity for lights, door opening and communication. The only time available for recharging batteries at the stations is during the brief intervals of guest loading and unloading, which would be too short for batteries. Supercapacitors offer fast charging, a higher number of cycles and a longer lifetime than batteries. Applications: Emirates Air Line (cable car), also known as the Thames cable car, is a 1-kilometre (0.62 mi) gondola line in London, UK, that crosses the Thames from the Greenwich Peninsula to the Royal Docks. The cabins are equipped with a modern infotainment system, which is powered by supercapacitors. Developments: As of 2013 commercially available lithium-ion supercapacitors offered the highest gravimetric specific energy to date, reaching 15 Wh/kg (54 kJ/kg). Research focuses on improving specific energy, reducing internal resistance, expanding the temperature range, increasing lifetimes and reducing costs.
Projects include tailored-pore-size electrodes, pseudocapacitive coating or doping materials, and improved electrolytes. Research into electrode materials requires measurement of individual components, such as an electrode or half-cell. By using a counterelectrode that does not affect the measurements, the characteristics of only the electrode of interest can be revealed. The specific energy and power of complete supercapacitors reach only roughly one third of the values measured for the electrode alone. Market: As of 2016, worldwide sales of supercapacitors were about US$400 million. The market for batteries (estimated by Frost & Sullivan) grew from US$47.5 billion (76.4%, or US$36.3 billion, of which was rechargeable batteries) to US$95 billion. The market for supercapacitors is still a small niche market that is not keeping pace with its larger rival. In 2016, IDTechEx forecast sales to grow from $240 million to $2 billion by 2026, an annual increase of about 24%. Supercapacitor costs in 2006 were US$0.01 per farad or US$2.85 per kilojoule, moving in 2008 below US$0.01 per farad, and were expected to drop further in the medium term. Trade or series names: Unusually for electronic components such as capacitors, supercapacitors are sold under many different trade or series names, such as APowerCap, BestCap, BoostCap, CAP-XX, C-SECH, DLCAP, EneCapTen, EVerCAP, DynaCap, Faradcap, GreenCap, Goldcap, HY-CAP, Kapton capacitor, Super capacitor, SuperCap, PAS Capacitor, PowerStor, PseudoCap and Ultracapacitor, making it difficult for users to classify these capacitors.
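The cost, specific-energy and market-growth figures quoted above can be cross-checked with a few lines of arithmetic. The Python sketch below is purely illustrative; the nominal cell voltage of 2.7 V is an assumption (not stated in the source), chosen only to show how a per-farad price maps to a per-kilojoule price.

```python
# Illustrative arithmetic behind the figures quoted in the Developments and Market paragraphs.

# Cost: $0.01 per farad, assuming a nominal ~2.7 V cell (assumed value, not from the source).
cost_per_farad = 0.01                      # US$ per farad
cell_voltage = 2.7                         # volts (assumption)
joules_per_farad = 0.5 * cell_voltage**2   # energy stored per farad at that voltage
cost_per_kj = cost_per_farad / joules_per_farad * 1000.0
print(f"Implied cost: ${cost_per_kj:.2f} per kJ")   # ~2.74, same range as the quoted $2.85/kJ

# Specific energy: 15 Wh/kg quoted for lithium-ion supercapacitors.
print(f"15 Wh/kg = {15 * 3.6:.0f} kJ/kg")            # 54 kJ/kg, as quoted

# Market growth: $240 million (2016) to a forecast $2 billion (2026).
start, end, years = 240e6, 2e9, 10
cagr = (end / start) ** (1.0 / years) - 1.0
print(f"Implied annual growth: {cagr:.1%}")           # ~23.6%, quoted as about 24%
```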
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interpersonal adaptation theory** Interpersonal adaptation theory: Interpersonal (Interaction) adaptation theory (IAT) is often referred to as a theory of theories. Several theories have been developed to provide frameworks as explanations of social interactions. After reviewing and examining various communication theories and previous empirical evidence pertaining to interpersonal communication, a need to address the ways in which individuals adapt to one another in interactions became apparent. The importance of observing both sides of a dyadic interaction led to the development of the interpersonal adaptation theory. The theory states that individuals enter interactions with expectations, requirements, and desires, which combined establish an interaction position. Once the interaction begins, the difference between the interaction position and the other party's actual behavior determines whether the individual will adapt and continue the communication positively or not. Background: In 1995, Judee K. Burgoon, Lesa Stern, and Leesa Dillman published a book titled Interpersonal Adaptation: Dyadic Interaction Patterns, in which they described their findings on a "new" theory which drew from the results of previous theories. Burgoon and her team examined fifteen previous models and considered the most important conclusions from the previous empirical research. They reviewed theories based in biological, arousal and affect, approach and avoidance, compensation and reciprocity, communication and cognitive, and social norms models. The conclusion after consideration of a multitude of theories and models stated, "while most theories predict a mix of patterns rather than committing to a single dominant pattern, they conflict over which patterns are likely under a given set of conditions." The theories, and the models from which they are derived, include but are not limited to:
Biologically based models: interacting individuals will exhibit similar behaviors to one another; patterns are presumed to be innate, based on basic needs in bonding, safety, and social organization. Examples: Motor Mimicry – describes how an interactant will mimic another, usually out of empathy or perceived empathy; Interactional Synchrony; Mirroring.
Arousal-based and affect-based models: internal and emotional states are driving forces in people's decisions to approach or avoid others. Examples: Affiliative Conflict Theory (ACT) – Argyle & Dean (1965) – individuals have needs for both affiliation and autonomy; Discrepancy Arousal Theory (DAT) – Cappella & Green (1982) – predicts that discrepancies from expected behavior patterns produce arousal change.
Approach and avoidance models: reciprocity and compensation. Example: Arousal-Labeling Theory – Patterson (1976) – external factors influence how an individual will react in any given interaction.
Social-norm models: out of felt social obligation, individuals will reciprocate the behaviors they receive from others. Examples: Norm of Reciprocity – out of social obligation, an individual will respond in the same manner as another; Communication Accommodation Theory (CAT) – Gallois et al.
(1991) and Giles (1973) – considers the way an individual interacts with another based on the context of the interaction; Social Exchange Theory; Resource Exchange Theory.
Communication and cognitive based models: communication-related cognitions and behaviors, analyzing interaction patterns and the meaning behavioral patterns convey. Example: Sequential Functional Model – explains the stability of interaction and how each interactant accommodates the other.
Models combining elements of the preceding models: Expectancy Violations Theory (EVT) – Burgoon (1978) – an interaction can be described positively or negatively based on an individual's expectations and the actual behavior of the other person; Cognitive Valence Theory (CVT) – Andersen (1985) – describes and explains the process of intimacy exchange within a dyadic relationship.
These previous theories, combined with empirical evidence resulting from Burgoon's and her colleagues' own studies, gave rise to the interpersonal adaptation theory.
Definitions: Requirements – an interactant's basic human needs and drives, i.e. survival, safety, comfort, autonomy, affiliation. Expectations – what is anticipated based on social norms, social prescriptions, and individuated knowledge of the other's behavior, i.e. self-presentation and demands. Desires – highly personalized; one's goals, likes, and dislikes. Interaction Position – a net assessment of what is needed, anticipated, and preferred as the dyadic interaction pattern in a situation. Actual behavior – the partner's actual performed communicative behavior in an interaction. Convergence – the act of becoming more alike as a relationship progresses; if one interactant identifies with and wants to be integrated with another, the first interactant will converge toward the communication behaviors of the other, adapting to rate of speech, volume, pauses, utterances, vocabulary, posture, and/or mode of dress.
Definitions: Divergence – the opposite of convergence; becoming more dissimilar. Divergence occurs when interactants try to accentuate communicative differences between themselves and another interactant. Mirroring – an individual's behavior becomes identical to the other party's behavior; also referred to as matching. Compensation – an individual reacts dissimilarly to another individual's response. Reciprocity – an individual reacts in a similar way to another individual's reaction. Maintenance – an individual's communicative behaviors and patterns attempt to maintain stability throughout an interaction.
Basics: As previously stated, individuals enter interactions with a combination of expectations, requirements, and desires. The individual's expectations refer to how they anticipate the other party will respond in the given interaction. The individual's requirements are based on their biological basic needs. Lastly, the individual's desires are driven by their personalized likes and dislikes.
Basics: For example, when the wife of an airman comes to her husband after he has hurt her emotionally, because he has not been spending enough time with her before he deploys, she may expect him to behave defensively, need him not to get mad and thus spend even less time with her, and want him to understand her feelings. The wife's requirements, expectations, and desires are a combination of biological needs (unconsciously presumed or performed) and socially learned behaviors. Expectations are typically based on previously experienced social interactions or social norms.
Requirements, such as the need for safety, may be more prominently based on a biological need for survival. In the above example, according to IAT, if the husband responds in a manner that meets his wife's requirements and desires, she will reciprocate and posture to mirror his behavior in the interaction. The theory explains that reciprocity occurs because a positive and stable interaction is most preferred.
Basics: If, in the given example, the husband meets his wife's expectations and behaves unfavorably toward her, her response behavior will diverge in order to deescalate the situation. Compensation is the most common behavioral response to occur in this interaction. In divergence, the wife may assume the role of the "fire extinguisher" and find herself frequently putting out fires, or deescalating negative interactions, in the relationship. Another example of interpersonal adaptation theory may be observed in an international business exchange. Consider the following example: in the United States, business meetings are conducted in a direct, forward, and opinionated way. American business people engage in meetings with an agenda and openly voice their ideas and opinions. In contrast, Japanese business culture is formal, polite and conducted at an elevated level of etiquette. In events in which the two cultures engage in business together, the Japanese businessman may expect the American businessman to be direct and opinionated but prefer politeness. If instead the American displays tact and decorum, the interaction will be more positive than had the expectation of the Japanese businessman been met. In this cross-cultural exchange, the interaction will likely adapt in convergence and reciprocity.
Theory: The review of past theories, empirical evidence, and considerations of their own investigations led Burgoon (1995) and her colleagues to propose nine principles meant to guide the new interaction adaptation model:
1. There may be an innate pressure to adapt interaction patterns – an unconscious, inborn need to adapt interaction styles.
2. At the biological level, the inherent pressures are toward entrainment and synchrony, with the exception of compensatory adjustments that ensure physical safety and comfort – it is advantageous for survival to converge and synchronize, except in situations where divergence is essential to deescalate a situation.
3. Approach or avoidance drives are not fixed or constant but cyclical, due to satiation at a given pole.
4. At the social level, the pressure is also toward reciprocity and matching.
5. At the communication level, both reciprocity and compensation may occur.
6. Despite predispositions to adapt, the degree of strategic, conscious adaptation present in any situation will be limited due to: a) individual consistency in behavioral style, b) internal causes of adjustments, c) poor self-monitoring or monitoring of the partner, d) inability to adjust performance, and e) cultural differences in communication practices and expectations.
7. The combined biological, psychological, and social forces set up boundaries within which most interaction patterns will oscillate, producing largely matching, synchrony, and reciprocity.
8. Many variables may be salient moderators of interaction adaptation.
9.
Predictions about functional complexes of behaviors should be more useful and accurate than predictions about particular behaviors viewed in isolation from the function they serve.
Based on the foundation set by the proposed nine guiding principles and the recognized importance of observing both sides of an interaction, the dyadic model of the interaction adaptation theory was created. The interaction adaptation model is derived from five key concepts.
Theory: The first three of the five concepts, which govern behavior, are requirements, expectations, and desires. Individuals engaging in an interaction begin with a combination of the three.
Theory: Requirements (R) – based on an individual's basic human needs, or what they feel is needed at the time of an interaction; requirement factors occur below conscious awareness. Expectations (E) – based in social factors, influenced by social norms, social prescriptions, and knowledge of the typical behavior of the other interactant; expectations are anticipated from the context of the interaction. Desires (D) – highly personalized, based on personal goals, likes, and dislikes; desires are influenced by an individual's personality, personal social experiences, and culture. R, E, and D are interrelated and not independent.
Theory: The fourth concept, interaction position, is a product of an individual's requirements, desires, and expectations.
Theory: Interaction Position (IP) – a derivative behavioral predisposition. Burgoon and her colleagues presented the first four concepts in mathematical formula format: R + E + D = IP.
Understanding R, E, D and IP:
– R, E, and D prescribe certain response options.
– R, E, and D are hierarchically ordered – the IP is typically driven by required needs first, and so on.
– Rs do not result in a single interaction pattern.
– Es predominate in the equation and lead to a strong inclination to match and reciprocate another's behavior.
– Ds are less likely to be more significant than R or E but also lead to matching and reciprocation.
The fifth concept is actual behavior, which is used as a comparison point against the interaction position.
Theory: Actual Behavior (A) – the partner's actual performed behavior in an interaction.
Understanding IP and A:
– IP and A can be placed on a continuum; the discrepancy assessed between the two determines the interaction outcome.
– Large discrepancies between IP and A should activate a) behavioral change, b) cognitive change, or c) both.
– The goal is to minimize the gap between IP and A and to align both individuals' behavior with the IP by prompting a reciprocal response from the partner.
– If an interactant's IP matches a partner's A, the interactant will be inclined to match or reciprocate the partner's behavior.
– If IP equals A for both parties, a stable exchange should progress – unless and until the IP changes for either party.
– If A is more positively valenced than the IP (a positive situation), the inclination will be toward convergence and reciprocity.
– If IP is more positively valenced than A (a negative circumstance), the inclination will be toward nonaccommodation, divergence or compensation.
As an alternative explanation of the relationship between the interaction position and actual behavior, a preferred stable interaction is described as one in which IP and A are equal. IAT predicts that if, at any point, either interactant wants the interaction to remain stable and IP does not equal A, one of the interactants must change their IP. This change minimizes the discrepancy gap between IP and A.
By changing their IP, the interactant hopes their partner will acknowledge the adjustment by matching the behavior, thus changing A. Burgoon et al. describe this as a "Follow the Leader" entrainment principle. This is a strategic adaptation that was introduced by Ickes et al. in 1982. In summation, interpersonal adaptation theory explains the dyadic interaction as follows: prior to an individual entering an interaction with another individual, they are predisposed with certain expectations, desires, and requirements, which together form an interaction position. Once the communication begins, the difference between the interaction position and the other party's actual behavior determines whether the individual will adapt and continue the communication positively or not.
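The IP-versus-A comparison described above can be sketched as a toy decision rule. The short Python example below is only an illustration: the single numeric valence scale, the weighting of R, E and D, the tolerance threshold and the function names are all invented for the example and are not part of Burgoon and colleagues' formulation.

```python
# Toy illustration of the IP-versus-A comparison described above.
# The valence scale, weights, threshold and names are invented for this sketch.

def interaction_position(required, expected, desired):
    """Combine R, E and D into one interaction-position valence.

    IAT treats R, E and D as interrelated and hierarchically ordered;
    a simple weighted sum is used here purely for illustration.
    """
    return 0.5 * required + 0.3 * expected + 0.2 * desired

def predicted_response(ip, actual, tolerance=0.1):
    """Predict the adaptation pattern from the discrepancy between IP and A."""
    if abs(actual - ip) <= tolerance:
        return "match / reciprocate (stable exchange)"
    if actual > ip:   # partner's behavior more positively valenced than anticipated
        return "converge / reciprocate"
    return "diverge / compensate (or change IP to close the gap)"

# Example: the partner behaves more positively than the interaction position anticipated.
ip = interaction_position(required=0.2, expected=0.4, desired=0.6)
print(predicted_response(ip, actual=0.8))
```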
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SEMA5A** SEMA5A: Semaphorin-5A is a protein that in humans is encoded by the SEMA5A gene.Members of the semaphorin protein family, such as SEMA5A, are involved in axonal guidance during neural development.Semaphorin 5A also plays a role in autism, reducing the ability of neurons to form connections with other neurons in certain brain regions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2 mm scale** 2 mm scale: 2 mm scale, often called 2 mm finescale, is a specification used for railway modelling, largely for modelling British railway prototypes. It uses a scale of 2 mm on the model to 1 foot on the prototype, which scales out to 1:152. The track gauge used to represent prototype standard gauge (4 feet 8+1⁄2 inches) is 9.42 mm (0.371 in). Track and wheels are closer to dead-scale replicas than commercial British N. Standard: The 2 mm standards were proposed by Mr H. H. Groves in the early 1960s and revised to their current specification in November 1963 by Geoffrey Jones. It is similar in size to the slightly larger British N scale at 1:148 and the slightly smaller European/American N scale at 1:160, though it predates both. Since 2 mm scale is very close to the 1:148 British N scale, a hybrid specification can be modelled by rewheeling proprietary British N-scale models to the 9.42 mm track gauge. This hybrid specification results in a track gauge equivalent to 4 feet 6+7⁄8 inches (1,394 mm), slightly narrower than the prototype 4 feet 8+1⁄2 inches. There is, however, an advantage in the narrower gauge, as it allows more room for the outside motion of outside-cylindered steam locomotives, which must be overscale in order to function correctly. This approach is often recommended for beginners. However, 2 mm-scale and hybrid-scale models do not usually sit well together due to the larger size of the latter. Standard: Supplementary standards. Like Protofour, the 2 mm standards have been extended to several other prototypes of both wider and narrower gauge with the same tolerances, such as Brunel's 7 ft 1⁄4 in (2,140 mm) gauge, Japan Rail's 1,067 mm (3 ft 6 in) narrow gauge and so on. FiNescale standard. The FiNescale standard in use for European prototypes is identical to 2mmFS, with the exception of a to-scale rail gauge of 9 mm (0.354 in). Appreciation. One major effect of the standard is to improve the appearance of the track as opposed to N scale, where it is overly tall. Linking carriages with three-link chains has been successfully achieved using the standard. Support: No ready-to-run models are available in 2 mm scale, and although there is some availability of kits and components, some model-making skill is normally required. There is an active association, The 2mm Scale Association, for modellers in this scale, which supplies components, tools and jigs, publishes a bi-monthly magazine, organises local groups, and promotes modelling in the scale. Exhibition layouts: An early example of a 2 mm layout was Rydes Vale, which was created in the 1960s by H. H. Grove and his son. The development of the Kineton exhibition layout by the Leamington & Warwick Model Railway Society was featured in a series in the British Railway Modelling magazine running from February 2016.
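The scale and gauge figures quoted above follow from simple arithmetic; the short Python sketch below reproduces them and is purely illustrative.

```python
# Illustrative check of the 2 mm scale and gauge arithmetic quoted above.

MM_PER_FOOT = 304.8
MM_PER_INCH = 25.4

scale_2mm = MM_PER_FOOT / 2.0          # 2 mm to 1 ft -> 1:152.4, quoted as 1:152
print(f"2 mm scale ratio: 1:{scale_2mm:.1f}")

# Prototype gauge represented by 9.42 mm track at 2 mm scale
gauge_2mm = 9.42 * scale_2mm
print(f"9.42 mm gauge at 1:{scale_2mm:.0f} -> {gauge_2mm:.0f} mm "
      f"({gauge_2mm / MM_PER_INCH:.1f} in; standard gauge is 1435 mm)")

# Hybrid: British N-scale (1:148) bodies rewheeled to 9.42 mm gauge
gauge_hybrid = 9.42 * 148
print(f"9.42 mm gauge at 1:148 -> {gauge_hybrid:.0f} mm "
      f"(about 4 ft 6 7/8 in, i.e. 1,394 mm)")
```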
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Graphics device interface** Graphics device interface: A graphics device interface is a subsystem that most operating systems use for representing graphical objects and transmitting them to output devices such as monitors and printers. In most cases, the graphics device interface is only able to draw 2D graphics and simple 3D graphics; to make use of more advanced graphics while keeping performance, an API such as DirectX or OpenGL is used instead. Graphics device interface: In Microsoft Windows, the GDI functionality resides in gdi.exe on 16-bit Windows and in gdi32.dll on 32-bit Windows.
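As an illustration only, the following minimal Python sketch (Windows-only, using ctypes) calls into user32.dll and gdi32.dll to set a few pixels on the screen device context. It is a generic example of calling GDI functions, not anything specific to this article, and it assumes a Windows environment where those DLLs are available.

```python
# Minimal, Windows-only sketch of calling GDI from Python through ctypes.
# GetDC/ReleaseDC are exported by user32.dll; the drawing call (SetPixel) lives in gdi32.dll.
import ctypes
from ctypes import wintypes

user32 = ctypes.WinDLL("user32")
gdi32 = ctypes.WinDLL("gdi32")

user32.GetDC.restype = wintypes.HDC
user32.GetDC.argtypes = [wintypes.HWND]
user32.ReleaseDC.argtypes = [wintypes.HWND, wintypes.HDC]
gdi32.SetPixel.argtypes = [wintypes.HDC, ctypes.c_int, ctypes.c_int, wintypes.COLORREF]

hdc = user32.GetDC(None)          # device context for the whole screen
try:
    # COLORREF is laid out as 0x00BBGGRR: draw a short red line, pixel by pixel.
    for x in range(100, 200):
        gdi32.SetPixel(hdc, x, 100, 0x0000FF)
finally:
    user32.ReleaseDC(None, hdc)
```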
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Urban decay** Urban decay: Urban decay (also known as urban rot, urban death or urban blight) is the sociological process by which a previously functioning city, or part of a city, falls into disrepair and decrepitude. There is no single process that leads to urban decay. Urban decay can include the following aspects: Industrialization Deindustrialization Gentrification Population decline or overpopulation Counterurbanization Economic Restructuring Multiculturalism Abandoned buildings or infrastructure High local unemployment Increased poverty Fragmented families Low overall living standards or quality of life Political disenfranchisement Crime (e.g., gang activity, corruption, and drug-related crime) Large and/or less regulated populations of urban wildlife (e.g., abandoned pets, feral animals, and semi-feral animals) Elevated levels of pollution (e.g., air pollution, noise pollution, water pollution, and light pollution) Desolate cityscape known as greyfield land or urban prairieSince the 1970s and 1980s, urban decay has been a phenomenon associated with some Western cities, especially in North America and parts of Europe. Cities have experienced population flights to the suburbs and exurb commuter towns; often in the form of white flight. Another characteristic of urban decay is blight – the visual, psychological, and physical effects of living among empty lots, buildings, and condemned houses. Urban decay: Urban decay is often the result of inter-related socio-economic issues, including urban planning decisions, economic deprivation of the local populace, the construction of freeways and railroad lines that bypass or run through the area, depopulation by suburbanization of peripheral lands, real estate neighborhood redlining, and immigration restrictions. Causes: During the Industrial Revolution, many people moved from rural areas to cities for employment in the manufacturing industry, thus causing urban populations to boom. Subsequent economic change left many cities economically vulnerable. Studies such as the Urban Task Force (DETR 1999), the Urban White Paper (DETR 2000), and a study of Scottish cities (2003) hypothesize areas suffering from industrial decline, high unemployment, poverty, and a decaying physical environment (sometimes including contaminated land and obsolete infrastructure)—prove "highly resistant to improvement".Changes in transportation from public to private, (specifically the private motor car) eliminated some of the cities' public transport service advantages, e.g., fixed-route buses and trains. In particular, at the end of World War II, many political decisions favored suburban development and encouraged suburbanization through financial incentives like government supported FHA loans and VA mortgage aid. This allowed many veterans of World War II and their families to afford comfortable single family housing in suburbs.The manufacturing industry has historically been a base for the prosperity of major cities. When these industries relocate to larger, less urban environments, some cities have experienced population loss with associated urban decay, and even riots. Cutbacks on police and fire services may result, while lobbying for government funded housing may increase. Increased city taxes encourage residents to move out. Libertarian economists argue that rent control contributes to urban blight by reducing new construction and investment in housing and discouraging maintenance. 
Countries: United States Historically in the United States, the white middle class gradually left the cities for suburban areas due to African-American migration north toward cities after World War I. American cities often declare blighted status once it is determined that urban renewal strategies are the most appropriate means to encourage the private investment for reversing deteriorating downtown conditions.Some historians differentiate between the first Great Migration (1910–1930), numbering about 1.6 million African-American migrants who left mostly Southern rural areas to migrate to northern and Midwestern industrial cities, and, after a lull during the Great Depression, a Second Great Migration (1940–1970), in which 5 million or more African-Americans moved, including many to California and various western cities.Between 1910 and 1970, African-Americans moved from southern States, especially Alabama, Louisiana, Mississippi, and Texas to other regions of the United States, many of them townspeople with urban skills. By the end of the Second Great Migration, African-Americans had become an urbanized population, with more than 80% of Black Americans living in cities. A majority of 53 percent remained in the South, while 40 percent lived in the Northeast and Midwest and 7 percent in the West.From the 1930s until 1977, African-Americans seeking borrowed capital for housing and businesses were discriminated against via the federal-government–legislated discriminatory lending practices for the Federal Housing Administration (FHA) via redlining. In 1977, the US Congress passed the Community Reinvestment Act, designed to encourage commercial banks and savings associations to help meet the needs of borrowers in all segments of their communities, including low- and moderate-income neighborhoods.Later urban centers were drained further through the advent of mass car ownership, the marketing of suburbia as a location to move to, and the building of the Interstate Highway System. In North America, this shift manifested itself in strip malls, suburban retail and employment centers, and low-density housing estates. Large areas of many northern cities in the United States experienced population decreases and a degradation of urban areas.Inner-city property values declined, and economically disadvantaged populations moved in. In the U.S., the new inner-city poor were often African-Americans that migrated from the South in the 1920s and 1930s. As they moved into traditional white neighborhoods, ethnic frictions served to accelerate flight to the suburbs. Countries: United Kingdom Like many industrial nations before the Second World War, the United Kingdom carried out extensive slum clearances. These efforts continued after the war, however in many of these slums, depopulation became common, producing compounding decay. The UK is unlike much of Europe in having high overall population density, but low urban population density outside of London. In London, many former slum neighbourhoods like in Islington became "highly prized," however this was the exception to the rule, and much of the north of England remains deprived. Countries: The Joseph Rowntree Foundation in the 1980s and 1990s undertook extensive studies culminating with a 1991 report which analyzed the 20 most difficult council estates. 
Many of the most unpopular estates were in East London, Newcastle upon Tyne, Greater Manchester, Glasgow, the South Wales valleys, and Liverpool, their unpopularity driven by a variety of causes from the loss of key industries, population decline, and counterurbanization.Population decline in particular was noted to be faster in inner city areas than in outer ones, however a decline was noted throughout the 1970s, through the 1990s in both inner and outer city areas. Jobs declined between 1984 and 1991 (a decline observed particularly among men), while outer areas saw job growth (particularly among women). The UK also saw urban areas become more ethnically diverse, however urban decline was not limited to areas which saw population changes. Manchester in 1991 had a non-white population 7.5% higher than the national average, but Newcastle had a 1% smaller non-white population. Countries: Features of British urban decay analyzed by the Foundation included empty houses; widespread demolitions; declining property values; and low demand for all property types, neighborhoods, and tenures.Urban decay has been found by the Foundation to be "more extreme and therefore more visible" in the north of the United Kingdom. This trend of northern decline has been observed not just in the United Kingdom but also in much of Europe. Some seaside resort towns have also experienced urban decay towards the end of the 20th century. The UK's period of urban decay was exemplified by The Specials' 1981 hit single "Ghost Town". Countries: France Large French cities are often surrounded by areas of urban decay. While city centers tend to be occupied mainly by upper-class residents, cities are often surrounded by public housing developments, with many tenants being of North African origin (from Morocco, Algeria and Tunisia), and recent immigrants. From the 1950s to the 1970s, publicly funded housing projects resulted in large areas of mid- to high-rise buildings. These modern "grands ensembles" were welcomed at the time, as they replaced shanty towns and raised living standards, but these areas were heavily affected by economic depression in the 1980s. Countries: The banlieues of large cities like Lyon, especially the northern Parisian banlieues, are criticized by the country's territorial spatial planning administration. They have been ostracized since the French Commune government of 1871, considered as "lawless" or "outside the law", even "outside the Republic", as opposed to "deep France" or "authentic France", which is associated with the countryside.In November 2005, the French suburbs were the scene of riots sparked by the accidental electrocution of two teenagers in the northern suburbs of Paris, and fueled in part by the substandard living conditions in these areas. Many deprived suburbs of French cities were the scenes of clashes between youngsters and the police, with violence and numerous car burnings resulting in media coverage. Countries: Today the situation remains generally unchanged; however, there is a level of disparity. Some areas are experiencing increased drug trafficking, while some northern suburbs of Paris and areas like Vaulx-en-Velin are undergoing refurbishment and re-development. Some previously mono-industrial towns in France are experiencing increasing crime, decay, and decreasing population. The issue remains a divisive issue in French public politics. 
Countries: Italy. In Italy, a well-known case of urban decay is represented by the Vele di Scampia, a large public housing estate built between 1962 and 1975 in the Scampia neighborhood of Naples. The idea behind the project was to provide urban housing where hundreds of families could socialize and create a community. The design included a public transportation rail station and a park area between the two buildings. The planners wanted to create a small city model with parks, playing fields, and other facilities. Countries: However, various events, starting with the 1980 earthquake in Irpinia, led to urban decay inside this project and in the surrounding areas. Many families left homeless by the earthquake squatted inside the Vele. The lack of police presence led to a rise in the Camorra drug trade, as well as other gang and illicit activity. Countries: South Africa. In South Africa, the most prominent case of urban decay is Hillbrow, an inner-city neighborhood of Johannesburg which was formerly affluent. At the end of apartheid in 1994, many middle-class white residents moved out and were replaced by mainly low-income workers and unemployed people, including many refugees and undocumented immigrants from neighboring countries. Many businesses that operated in the area followed their customers to the suburbs, and some apartment buildings were "hi-jacked" by gangs who collected rentals from residents but failed to pay the utility bills, leading to termination of municipal services and a refusal by the legal owners to invest in maintenance or cleaning. The area today is occupied by low-income residents and immigrants and is overcrowded; crime, drugs, illegal businesses, and the decay of properties have become prevalent. Countries: Germany. Many east German towns, such as Hoyerswerda, have faced population loss and urban shrinkage since the reunification of Germany in 1990. Hoyerswerda's population has dropped about 40% since its peak, and there is a significant lack of teenagers and twenty- to forty-year-olds due to the declining birthrates during the uncertainty of reunification. Part of the blight in east Germany is due to the construction and preservation practices of the socialist government under the German Democratic Republic (GDR). To fill the housing needs, the GDR quickly built many prefabricated apartment buildings. In addition, historic preservation of pre-war buildings varied; in some cases, the rubble of buildings destroyed by the war was simply left in place, while in other cases the debris was removed and an empty lot remained. Other standing historical structures were left to decay in the early GDR as they did not represent the socialist ideals of the country. Policy responses to urban decay: The main responses to urban decay have been through positive public intervention and policy, through a plethora of initiatives, funding streams, and agencies, using the principles of New Urbanism (or of Urban Renaissance, its UK/European equivalent). Gentrification has also had a significant effect, and remains the primary form of natural remedy. Policy responses to urban decay: United States. In the United States, early government policies included "urban renewal" and the building of large-scale housing projects for the poor. Urban renewal demolished entire neighborhoods in many inner cities; it was as much a cause of urban decay as a remedy.
These government efforts are now thought by many to have been misguided. For multiple reasons, including increased demand for urban amenities, some cities have rebounded from these policy mistakes. Meanwhile, some of the inner suburbs built in the 1950s and 1960s are beginning the process of decay, as those who are living in the inner city are pushed out due to gentrification. Policy responses to urban decay: Europe. In Western Europe, where undeveloped land is scarce and urban areas are generally recognized as the drivers of the new information and service economies, urban renewal has become an industry in itself, with hundreds of agencies and charities set up to tackle the issue. European cities have the benefit of historical, organic development patterns that are already consistent with the New Urbanist model, and although derelict, most cities have attractive historical quarters and buildings ripe for redevelopment. Policy responses to urban decay: In the inner-city estates and suburban cities, the solution is often more drastic, with 1960s and 1970s state housing projects being demolished and rebuilt in a more traditional European urban style, with a mix of housing types, sizes, prices, and tenures, as well as a mix of other uses such as retail or commercial. One of the best examples of this is in Hulme, Manchester, which was cleared of 19th-century housing in the 1950s to make way for a large estate of high-rise flats. During the 1990s, it was cleared again to make way for new development built along new urbanist lines.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coronary CT angiography** Coronary CT angiography: Coronary CT angiography (CTA or CCTA) is the use of computed tomography (CT) angiography to assess the coronary arteries of the heart. The patient receives an intravenous injection of radiocontrast and then the heart is scanned using a high-speed CT scanner, allowing physicians to assess the extent of occlusion in the coronary arteries, usually in order to diagnose coronary artery disease. Coronary CT angiography: CTA is superior to the coronary CT calcium scan in determining the risk of Major Adverse Cardiac Events (MACE). Medical uses: Faster CT machines, due to multidetector capabilities, have made imaging of the heart and circulatory system very practical in a number of clinical settings. The faster scanning allows imaging of the heart with minimal involuntary motion, which would otherwise create motion blur on the image, and has a number of practical applications. It may be useful in the diagnosis of suspected coronary heart disease, for follow-up of a coronary artery bypass, for the evaluation of valvular heart disease and for the evaluation of cardiac masses. It is uncertain whether this modality will replace invasive coronary catheterization. At present, it appears that the greatest utility of cardiac CT lies in ruling out coronary artery disease rather than ruling it in. This is because the test is highly sensitive (over 90% detection rate), so a negative test result largely rules out coronary artery disease (i.e. the test has a high negative predictive value). The test is somewhat less specific, however, so a positive result is less conclusive and may need to be confirmed by subsequent invasive angiography. Medical uses: The positive predictive value of cardiac CTA is approximately 82% and the negative predictive value is around 93%. This means that for every 100 patients who appear to have coronary artery disease after CT angiography, 18 of them actually won't have it, and that for every 100 patients who have a negative CT angiography result (i.e. the test says they do not have coronary artery disease), 7 will actually have the disease as defined by the reference standard of invasive coronary angiography via cardiac catheterization. Both coronary CT angiography and invasive angiography via cardiac catheterization yield similar diagnostic accuracy when both are compared to a third reference standard such as intravascular ultrasound or fractional flow reserve. In addition to its diagnostic abilities, cardiac CTA holds important prognostic information. Stenosis severity and extent of coronary artery disease are important prognostic indicators. However, one of the unique features of cardiac CTA is that it enables visualization of the vessel wall in a non-invasive manner. The technique is therefore able to identify characteristics of coronary artery disease that are associated with the development of acute coronary syndrome. Side effects: Because the heart is effectively imaged more than once (as described below), cardiac CT angiography can result in a relatively high radiation exposure (around 12 millisievert), although newer acquisition protocols have recently been developed which drastically reduce this exposure to around 1 mSv (cf. Pavone, Fioranelli, Dowe: Computed Tomography of Coronary Arteries, Springer 2009). By comparison, a chest X-ray carries a dose of approximately 0.02-0.2 mSv and natural background radiation exposure is around 2.3 mSv/year.
Thus, each cardiac CT scan carried out with current protocols (dose approximately 1 mSv) is equivalent to approximately 5-50 chest X-rays or less than 1 year of background radiation. Methods are available to decrease this exposure, however, such as prospectively decreasing radiation output based on the concurrently acquired ECG (i.e. tube current modulation). This can result in a significant decrease in radiation exposure, at the risk of compromising image quality if there is any arrhythmia during the acquisition. The significance of the low radiation doses used in diagnostic imaging is unknown, although the possibility of increasing cancer incidence across a population is of significant concern. This potential risk must be weighed against the competing risk of not diagnosing a significant health problem in a particular individual, such as coronary artery disease. Side effects: Contraindications. Pregnancy is considered a relative contraindication, similarly to many forms of medical imaging in pregnancy. The potential harms to a fetus include the application of X-rays in addition to radiocontrast. Since an iodine-containing contrast agent is used, severe contrast agent allergy, uncontrolled hyperthyroidism or renal function impairment are also relative contraindications. Cardiac arrhythmias, coronary artery stents and tachycardia may result in reduced image quality. Improved resolution: With the advent of subsecond rotation combined with multi-slice CT (up to 320 slices), high resolution and high speed can be obtained at the same time, allowing excellent imaging of the coronary arteries (cardiac CT angiography). Images with even higher temporal resolution can be obtained using multi-cycle (also called multi-segmental) image reconstruction. In this technique, a portion of the heart is imaged during one heart cycle while an ECG trace is recorded. During the next heart cycle, the next portion of the heart is scanned, for up to 5 total cycles, until the entire heart is imaged. The reconstruction algorithm then combines the images from these different cycles to generate one complete image. The advantage of this method is that each image segment is acquired in less time as compared to acquiring the entire heart in one heart cycle, thus improving temporal resolution. The disadvantages are 1) the potential for image artifacts from fusing the image segments and 2) the requirement of additional X-ray radiation for image acquisition. Improved resolution: Dual-source CT scanners, introduced in 2005, allow higher temporal resolution by acquiring a full CT slice in only half a rotation, thus reducing motion blurring at high heart rates and potentially allowing for shorter breath-hold time. This is particularly useful for ill patients having difficulty holding their breath or unable to take heart-rate-lowering medication. The speed advantages of 64-slice MSCT have rapidly established it as the minimum standard for newly installed CT scanners intended for cardiac scanning. Manufacturers have developed 320-slice and true 'volumetric' scanners, primarily for their improved cardiac scanning performance. Introduction of a CT scanner with a 160 mm detector in 2014 allows for imaging of the whole heart in a single beat without motion of the coronary arteries, regardless of patient heart rate. The latest MSCT scanners acquire images only at 70-80% of the R-R interval (late diastole).
This prospective gating can reduce effective dose from 10 to 15 mSv to as little as 1.2 mSv in follow-up patients acquiring at 75% of the R-R interval. Effective dose using MSCT coronary imaging can average less than the dose in conventional coronary angiography.
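The rule-out reasoning and the chest X-ray equivalence quoted earlier in this article follow from simple arithmetic. The Python sketch below is illustrative only: the sensitivity and dose figures echo the text, while the specificity and prevalence values are assumptions chosen purely to show how predictive values behave.

```python
# Illustrative predictive-value arithmetic for a diagnostic test such as coronary CTA.
# Sensitivity (>90%) echoes the text; the specificity and prevalence below are assumed
# values used only to demonstrate the calculation.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied to a population with the given prevalence."""
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.95, specificity=0.80, prevalence=0.30)
print(f"PPV: {ppv:.0%}  (fraction of positives who truly have disease)")
print(f"NPV: {npv:.0%}  (fraction of negatives who are truly disease-free)")
# High sensitivity keeps false negatives rare, which is why a negative CTA largely
# rules out coronary artery disease even when the PPV is more modest.

# Chest X-ray equivalence of a ~1 mSv cardiac CT protocol (figures quoted above).
ct_dose = 1.0                      # mSv
xray_low, xray_high = 0.02, 0.2    # mSv per chest X-ray
print(f"Roughly {ct_dose / xray_high:.0f}-{ct_dose / xray_low:.0f} chest X-rays")
```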
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Will call** Will call: Will call refers to a method of delivery for purchased items where the customer picks up the items at the seller's place of business, primarily in United States commerce. It may also refer to the department within a business where goods are staged for customer pick up. An equivalent service for goods which are paid for in installments then retrieved once fully paid, which was common before credit cards became available, is called layaway, and is still used among those without access to credit cards. Will call: The word "call" is a shortened form of "call for", which means "to come and get", so "will call" literally means "(the customer) will call for (come and get) the goods." In a linguistic process similar to initial-stress derived nominalization, the first syllable of the noun phrase is usually stressed ("will call") rather than the second syllable in the verb phrase ("will call"). Will call: The term is most commonly used in relation to admission tickets for events. North America: As of 2022, Will Call is still in wide use for ticket sales at a box office where patrons of entertainment venues go to pick up pre-purchased tickets for an event, such as a play, sporting event, museums, or concerts, either just before the event or in advance. At large venues such as stadiums or theme parks, "Will Call" pickup windows may be designated specifically for this purpose. North America: In the wholesale and retail trade industry, a will call memo is given to wholesale delivery drivers as an instruction to pick up items at the address stated on the memo. Great Britain: Normally in the UK the term used is 'at the door' or 'on the door'; such as 'tickets can be collected at the door'. The acronym COBO, for "Care of Box Office", is used internally by ticket offices and not common with the public. A similar term 'buyer collects' is used on online auction sites, to imply that the customer must collect the goods from the vendor after sale, usually implying that they will not post. For goods purchased remotely and collected by the buyer, the usual term used is click and collect.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CNS Drugs (journal)** CNS Drugs (journal): CNS Drugs is a monthly peer-reviewed medical journal published by Adis International (Springer Nature) that covers drug treatment of psychiatric and neurological disorders. Abstracting and indexing: The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.0.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Christian interpolation** Christian interpolation: In textual criticism, Christian interpolation generally refers to textual insertion and textual damage to Jewish and pagan source texts during Christian scribal transmission. Old Testament pseudepigrapha: Notable examples among the body of texts known as Old Testament pseudepigrapha include the disputed authenticity of the Similitudes of Enoch and 4 Ezra, which in the form transmitted by Christian scribal traditions contain an arguably later Christian understanding of terms such as Son of Man. Other texts with significant Christian interpolation include the Testaments of the Twelve Patriarchs and the Sibylline Oracles. Josephus: Notable disputed examples in the works of Josephus include Josephus' sections on John the Baptist and James the Just, which are widely accepted as authentic, and the Testimonium Flavianum, which is widely regarded as at best damaged.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vacuole** Vacuole: A vacuole () is a membrane-bound organelle which is present in plant and fungal cells and some protist, animal, and bacterial cells. Vacuoles are essentially enclosed compartments which are filled with water containing inorganic and organic molecules including enzymes in solution, though in certain cases they may contain solids which have been engulfed. Vacuoles are formed by the fusion of multiple membrane vesicles and are effectively just larger forms of these. The organelle has no basic shape or size; its structure varies according to the requirements of the cell. Discovery: Contractile vacuoles ("stars") were first observed by Spallanzani (1776) in protozoa, although mistaken for respiratory organs. Dujardin (1841) named these "stars" as vacuoles. In 1842, Schleiden applied the term for plant cells, to distinguish the structure with cell sap from the rest of the protoplasm.In 1885, de Vries named the vacuole membrane as tonoplast. Function: The function and significance of vacuoles varies greatly according to the type of cell in which they are present, having much greater prominence in the cells of plants, fungi and certain protists than those of animals and bacteria. In general, the functions of the vacuole include: Isolating materials that might be harmful or a threat to the cell Containing waste products Containing water in plant cells Maintaining internal hydrostatic pressure or turgor within the cell Maintaining an acidic internal pH Containing small molecules Exporting unwanted substances from the cell Allows plants to support structures such as leaves and flowers due to the pressure of the central vacuole By increasing in size, allows the germinating plant or its organs (such as leaves) to grow very quickly and using up mostly just water. Function: In seeds, stored proteins needed for germination are kept in 'protein bodies', which are modified vacuoles.Vacuoles also play a major role in autophagy, maintaining a balance between biogenesis (production) and degradation (or turnover), of many substances and cell structures in certain organisms. They also aid in the lysis and recycling of misfolded proteins that have begun to build up within the cell. Thomas Boller and others proposed that the vacuole participates in the destruction of invading bacteria and Robert B. Mellor proposed organ-specific forms have a role in 'housing' symbiotic bacteria. In protists, vacuoles have the additional function of storing food which has been absorbed by the organism and assisting in the digestive and waste management process for the cell.In animal cells, vacuoles perform mostly subordinate roles, assisting in larger processes of exocytosis and endocytosis. Function: Animal vacuoles are smaller than their plant counterparts but also usually greater in number. There are also animal cells that do not have any vacuoles.Exocytosis is the extrusion process of proteins and lipids from the cell. These materials are absorbed into secretory granules within the Golgi apparatus before being transported to the cell membrane and secreted into the extracellular environment. In this capacity, vacuoles are simply storage vesicles which allow for the containment, transport and disposal of selected proteins and lipids to the extracellular environment of the cell. Function: Endocytosis is the reverse of exocytosis and can occur in a variety of forms. 
Phagocytosis ("cell eating") is the process by which bacteria, dead tissue, or other bits of material visible under the microscope are engulfed by cells. The material makes contact with the cell membrane, which then invaginates. The invagination is pinched off, leaving the engulfed material in the membrane-enclosed vacuole and the cell membrane intact. Pinocytosis ("cell drinking") is essentially the same process, the difference being that the substances ingested are in solution and not visible under the microscope. Phagocytosis and pinocytosis are both undertaken in association with lysosomes which complete the breakdown of the material which has been engulfed. Salmonella is able to survive and reproduce in the vacuoles of several mammal species after being engulfed. The vacuole probably evolved several times independently, even within the Viridiplantae. Vacuole types: Gas vacuoles Gas vesicles, also known as gas vacuoles, are nanocompartments which are freely permeable to gas, and occur mainly in Cyanobacteria, but are also found in other bacteria species and some archaea. Gas vesicles allow the bacteria to control their buoyancy. They are formed when small biconical structures grow to form spindles. The vesicle walls are composed of a hydrophobic gas vesicle protein A (GvpA) which forms a hollow, cylindrical proteinaceous structure that fills with gas. Small variances in the amino acid sequence produce changes in the morphology of the gas vesicle; GvpC, for example, is a larger protein. Vacuole types: Central vacuoles Most mature plant cells have one large vacuole that typically occupies more than 30% of the cell's volume, and that can occupy as much as 80% of the volume for certain cell types and conditions. Strands of cytoplasm often run through the vacuole. Vacuole types: A vacuole is surrounded by a membrane called the tonoplast (word origin: Gk tón(os) + -o-, meaning “stretching”, “tension”, “tone” + comb. form repr. Gk plastós formed, molded) and filled with cell sap. Also called the vacuolar membrane, the tonoplast is the cytoplasmic membrane surrounding a vacuole, separating the vacuolar contents from the cell's cytoplasm. As a membrane, it is mainly involved in regulating the movements of ions around the cell, and isolating materials that might be harmful or a threat to the cell. Transport of protons from the cytosol to the vacuole stabilizes cytoplasmic pH, while making the vacuolar interior more acidic, creating a proton motive force which the cell can use to transport nutrients into or out of the vacuole. The low pH of the vacuole also allows degradative enzymes to act. Although single large vacuoles are most common, the size and number of vacuoles may vary in different tissues and stages of development. For example, developing cells in the meristems contain small provacuoles and cells of the vascular cambium have many small vacuoles in the winter and one large one in the summer. Vacuole types: Aside from storage, the main role of the central vacuole is to maintain turgor pressure against the cell wall. Proteins found in the tonoplast (aquaporins) control the flow of water into and out of the vacuole through active transport, pumping potassium (K+) ions into and out of the vacuolar interior. Due to osmosis, water will diffuse into the vacuole, placing pressure on the cell wall. If water loss leads to a significant decline in turgor pressure, the cell will plasmolyze.
Turgor pressure exerted by vacuoles is also required for cellular elongation: as the cell wall is partially degraded by the action of expansins, the less rigid wall is expanded by the pressure coming from within the vacuole. Turgor pressure exerted by the vacuole is also essential in supporting plants in an upright position. Another function of a central vacuole is that it pushes all contents of the cell's cytoplasm against the cellular membrane, and thus keeps the chloroplasts closer to light. Most plants store chemicals in the vacuole that react with chemicals in the cytosol. If the cell is broken, for example by a herbivore, then the two chemicals can react, forming toxic chemicals. In garlic, alliin and the enzyme alliinase are normally separated but form allicin if the vacuole is broken. A similar reaction is responsible for the production of syn-propanethial-S-oxide when onions are cut. Vacuoles in fungal cells perform similar functions to those in plants and there can be more than one vacuole per cell. In yeast cells the vacuole (Vac7) is a dynamic structure that can rapidly modify its morphology. They are involved in many processes including the homeostasis of cell pH and the concentration of ions, osmoregulation, storing amino acids and polyphosphate and degradative processes. Toxic ions, such as strontium (Sr2+), cobalt(II) (Co2+), and lead(II) (Pb2+) are transported into the vacuole to isolate them from the rest of the cell. Vacuole types: Contractile vacuoles The contractile vacuole is a specialized osmoregulatory organelle that is present in many free-living protists. The contractile vacuole is part of the contractile vacuole complex which includes radial arms and a spongiome. The contractile vacuole complex works periodically to remove excess water and ions from the cell, balancing water flow into the cell. As the contractile vacuole slowly takes in water it enlarges; this is called diastole. When it reaches its threshold, the contractile vacuole contracts (systole) to release the water. Vacuole types: Food vacuoles Food vacuoles (also called digestive vacuoles) are organelles found in ciliates and in Plasmodium falciparum, a protozoan parasite that causes malaria. Histopathology: In histopathology, vacuolization is the formation of vacuoles or vacuole-like structures within or adjacent to cells. It is a nonspecific sign of disease.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Decision table** Decision table: Decision tables are a concise visual representation for specifying which actions to perform depending on given conditions. They are algorithms whose output is a set of actions. The information expressed in decision tables could also be represented as decision trees or in a programming language as a series of if-then-else and switch-case statements. Overview: Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. Overview: To make them more concise, many decision tables include in their condition alternatives a don't care symbol. This can be a hyphen or blank, although using a blank is discouraged as it may merely indicate that the decision table has not been finished. One of the uses of decision tables is to reveal conditions under which certain input factors are irrelevant to the actions to be taken, allowing these input tests to be skipped and thereby streamlining decision-making procedures. Overview: Aside from the basic four quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented. Some decision tables use simple true/false values to represent the alternatives to a condition (similar to if-then-else), other tables may use numbered alternatives (similar to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives. In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform), or in more advanced decision tables, the sequencing of actions to perform (number the actions to perform). Overview: A decision table is considered balanced or complete if it includes every possible combination of input variables. In other words, balanced decision tables prescribe an action in every situation where the input variables are provided. Example: The limited-entry decision table is the simplest to describe. The condition alternatives are simple Boolean values, and the action entries are check-marks, representing which of the actions in a given column are to be performed. The following balanced decision table is an example in which a technical support company writes a decision table to enable technical support employees to efficiently diagnose printer problems based upon symptoms described to them over the phone from their clients. This is just a simple example, and it does not necessarily correspond to the reality of printer troubleshooting. Even so, it demonstrates how decision tables can scale to several conditions with many possibilities. Software engineering benefits: Decision tables, especially when coupled with the use of a domain-specific language, allow developers and policy experts to work from the same information, the decision tables themselves. Tools to render nested if statements from traditional programming languages into decision tables can also be used as debugging tools. Decision tables have proven to be easier to understand and review than code, and have been used extensively and successfully to produce specifications for complex systems. History: In the 1960s and 1970s a range of "decision table based" languages such as Filetab were popular for business programming.
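Since the printer-troubleshooting table itself is not reproduced above, the following is a minimal sketch of how such a balanced, limited-entry decision table can be written down directly as data. The specific conditions, rules, and actions are illustrative assumptions, not the original table; each key of the rule dictionary corresponds to one column of condition alternatives, and its value is the set of checked actions.

```python
# Illustrative limited-entry decision table in the spirit of the printer-
# troubleshooting scenario described above (conditions and actions are assumed).

# Conditions, in order: printer does not print, a red light is flashing,
# printer is unrecognised by the computer.
RULES = {
    (True,  True,  True):  {"check the power cable", "check the printer-computer cable", "check/replace ink"},
    (True,  True,  False): {"check/replace ink", "check for paper jam"},
    (True,  False, True):  {"check the power cable", "check the printer-computer cable", "ensure printer software is installed"},
    (True,  False, False): {"check for paper jam"},
    (False, True,  True):  {"check/replace ink", "ensure printer software is installed"},
    (False, True,  False): {"check/replace ink"},
    (False, False, True):  {"ensure printer software is installed"},
    (False, False, False): set(),  # no symptoms, no action
}

def diagnose(answers: tuple[bool, bool, bool]) -> set[str]:
    """Look up the actions prescribed by the decision table for one set of answers."""
    return RULES[answers]

if __name__ == "__main__":
    # Example: printer does not print and is unrecognised, but no red light.
    print(diagnose((True, False, True)))
```

Because every one of the eight possible combinations of condition values appears exactly once, this table is balanced (complete) in the sense described above.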
Program embedded decision tables: Decision tables can be, and often are, embedded within computer programs and used to "drive" the logic of the program. A simple example might be a lookup table containing a range of possible input values and a function pointer to the section of code to process that input. Control tables: Multiple conditions can be coded for in similar manner to encapsulate the entire program logic in the form of an "executable" decision table or control table. There may be several such tables in practice, operating at different levels and often linked to each other (either by pointers or an index value). Implementations: Filetab, originally from the NCC; DETAB/65 (1965) from the ACM; FORTAB from Rand (1962), designed to be embedded in FORTRAN; and a Ruby implementation using MapReduce to find the correct actions based on specific input values.
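The "lookup table plus function pointer" idea mentioned above can be sketched as follows. The input ranges, handler names, and messages are made up for illustration; the point is only that control flow is driven by a data table rather than by a chain of nested if statements.

```python
# Minimal sketch of a program-embedded decision table: input ranges are mapped
# to handler functions, so changing behaviour means editing the table,
# not rewriting branching logic. Ranges and handlers are illustrative only.

def handle_low(value):
    return f"{value}: below normal range, raise alarm"

def handle_normal(value):
    return f"{value}: within normal operating range"

def handle_high(value):
    return f"{value}: above normal range, shut down"

# Each row: (inclusive lower bound, exclusive upper bound, handler).
DISPATCH_TABLE = [
    (0,   10,  handle_low),
    (10,  90,  handle_normal),
    (90,  101, handle_high),
]

def process(value):
    """Drive the program logic from the table instead of hard-coded branches."""
    for low, high, handler in DISPATCH_TABLE:
        if low <= value < high:
            return handler(value)
    raise ValueError(f"no rule covers input {value}")

if __name__ == "__main__":
    for reading in (5, 42, 95):
        print(process(reading))
```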
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Extraterrestrial Sample Curation Center** Extraterrestrial Sample Curation Center: The Planetary Material Sample Curation Facility (Japanese: 惑星物質試料受入れ設備) (PMSCF), commonly known as the Extraterrestrial Sample Curation Center (ESCuC, 地球外試料キュレーションセンター), is the facility where the Japan Aerospace Exploration Agency (JAXA) conducts the curation work on extraterrestrial materials retrieved by some sample-return missions. It works closely with Japan's Astromaterials Science Research Group. Its objectives include documentation, preservation, preparation, and distribution of samples. All samples collected are made available for international distribution upon request. Overview: The conceptual studies for JAXA's curation facility began in 2005, the specifications were decided in 2007, and the facility was completed in 2008 in time to receive the asteroid samples retrieved by the Hayabusa mission. The facility is composed of several cleanrooms rated from ISO7 (for gowning and cleaning rooms) to the cleanest ISO5 (for sample handling and storage). The key feature of JAXA's ESCuC curation facility is the ability to observe, take out a portion of, and preserve a precious returned sample without it being exposed to the terrestrial atmosphere and other contaminants. Due to the nature of the Hayabusa returned samples, the facility developed the capability to handle particles as small as 10 μm by using a system based on electrostatic micromanipulation within a clean chamber in contact with either vacuum or an inert gas (usually nitrogen). The facility also features a wide variety of laboratories and analyzers, including XCT/XRD, TEM/STEM, EPMA, SIMS, FTIR, Raman, NAA, noble-gas-MS, and ToF-SIMS. The "Monitoring and Meeting Room" has recently been retrofitted to host the samples returned by the Hayabusa2 mission from the carbonaceous asteroid Ryugu. Catalogue: Samples include: material from asteroid 25143 Itokawa, retrieved by the Hayabusa mission; meteorites and standard samples; samples collected by the Tanpopo orbital experiment; and material from asteroid 162173 Ryugu, retrieved by Hayabusa2 and returned to Earth in December 2020. Future samples expected include material from asteroid 101955 Bennu, to be retrieved by the NASA OSIRIS-REx mission, with an expected return to Earth in September 2023. Similar facilities: Other facilities dedicated to the curation of pristine returned extraterrestrial samples are the NASA Johnson Space Center Astromaterials Acquisition and Curation Office, the Russian Vernadsky Institute of Geochemistry and Analytical Chemistry in Moscow for Luna samples, and the CNSA curatorial facility for Chang'e 5 lunar samples. There is currently no curatorial facility for pristine returned samples in Europe, even though preparatory studies have been conducted in the recent past.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sedan (automobile)** Sedan (automobile): A sedan or saloon (British English) is a passenger car in a three-box configuration with separate compartments for an engine, passengers, and cargo. The first recorded use of sedan in reference to an automobile body occurred in 1912. The name derives from the 17th-century litter known as a sedan chair, a one-person enclosed box with windows and carried by porters. Variations of the sedan style include the close-coupled sedan, club sedan, convertible sedan, fastback sedan, hardtop sedan, notchback sedan, and sedanet/sedanette. Definition: A sedan () is a car with a closed body (i.e. a fixed metal roof) with the engine, passengers, and cargo in separate compartments. This broad definition does not differentiate sedans from various other car body styles, but in practice, the typical characteristics of sedans are: a B-pillar (between the front and rear windows) that supports the roof; two rows of seats;: 134  a three-box design with the engine at the front and the cargo area at the rear; a less steeply sloping roofline than a coupé, which results in increased headroom for rear passengers and a less sporting appearance; and a rear interior volume of at least 33 cu ft (0.93 m3). Definition: It is sometimes suggested that sedans must have four doors (to provide a simple distinction between sedans and two-door coupés); others state that a sedan can have four or two doors.: 134  While the sloping rear roofline defined the coupe, the design element has become common on many body styles with manufacturers increasingly "cross-pollinating" the style so that terms such as sedan and coupé have been loosely interpreted as "'four-door coupes' - an inherent contradiction in terms."When a manufacturer produces two-door sedan and four-door sedan versions of the same model, the shape and position of the greenhouse on both versions may be identical, with only the B-pillar positioned further back to accommodate the longer doors on the two-door versions. Etymology: A sedan chair, a sophisticated litter, was an enclosed box with windows used to transport one seated person. Porters at the front and rear carried the chair with horizontal poles. Litters date back to long before ancient Egypt, India, and China. Sedan chairs were developed in the 1630s. Etymologists suggest the name of the chair very probably came through varieties of Italian from the Latin sedere, meaning "to sit". Etymology: The first recorded use of sedan for an automobile body occurred in 1912 when the Studebaker Four and Studebaker Six models were marketed as sedans. There were fully enclosed automobile bodies before 1912. Long before that time, the same fully enclosed but horse-drawn carriages were known as a brougham in the United Kingdom, berline in France, and berlina in Italy; the latter two have become the terms for sedans in these countries. It is sometimes stated that the 1899 Renault Voiturette Type B (a 2-seat car with an extra external seat for a footman/mechanic) was the first sedan, since it is the first known car to be produced with a roof. A one-off instance of a similar coachwork is also known in a 1900 De Dion-Bouton Type D.A sedan is typically considered to be a fixed-roof car with at least four seats. Based on this definition, the earliest sedan was the 1911 Speedwell, which was manufactured in the United States.: 87 International terminology: In American English, Latin American Spanish, and Brazilian Portuguese, the term sedan is used (accented as sedán in Spanish). 
In British English, a car of this configuration is called a saloon. Hatchback sedans are known simply as hatchbacks (not hatchback saloons); long-wheelbase luxury saloons with a division between the driver and passengers are limousines. An equivalent term for sports sedan in the United Kingdom is super saloon. In Australia and New Zealand, sedan is now the predominant term; such cars were previously referred to simply as cars. In the 21st century, saloon is still found in the long-established names of particular motor races. In other languages, sedans are known as berline (French), berlina (European Spanish, European Portuguese, Romanian, and Italian), though they may include hatchbacks. These names, like the sedan, all come from forms of passenger transport used before the advent of automobiles. In German, a sedan is called Limousine and a limousine is a Stretch-Limousine. In the United States, two-door sedan models were marketed as Tudor in the Ford Model A (1927–1931) series. Automakers use different terms to differentiate their products and for Ford's sedan body styles "the tudor (2-door) and fordor (4-door) were marketing terms designed to stick in the minds of the public." Ford continued to use the Tudor name for 5-window coupes, 2-door convertibles, and roadsters since all had two doors. The Tudor name was also used to describe the Škoda 1101/1102 introduced in 1946. The name was popularized by the public for a two-door model and was then applied by the automaker to the entire line that included a four-door sedan and station wagon versions. Standard styles: Notchback sedans In the United States, the notchback sedan distinguishes models with a horizontal trunk lid. The term is generally only referred to in marketing when it is necessary to distinguish between two sedan body styles (e.g. notchback and fastback) of the same model range. Standard styles: Liftback sedans Several sedans have a fastback profile, but have a hatchback-style tailgate which is hinged at the roof. Examples include the Peugeot 309, Škoda Octavia, Hyundai Elantra XD, Chevrolet Malibu Maxx, BMW 4 Series Grand Coupe, Audi A5 Sportback, and Tesla Model S. The names hatchback and sedan are often used to differentiate between body styles of the same model. To avoid confusion, the term hatchback sedan is not often used. Standard styles: Fastback sedans There have been many sedans with a fastback style. Standard styles: Hardtop sedans Hardtop sedans were a popular body style in the United States from the 1950s to the 1970s. Hardtops are manufactured without a B-pillar, leaving uninterrupted open space or, when the windows are closed, glass along the side of the car. The top was intended to look like a convertible's top but it was fixed and made of hard material that did not fold. All manufacturers in the United States from the early 1950s into the 1970s provided at least a 2-door hardtop model in their range and a 4-door hardtop as well. The lack of side bracing demanded a particularly strong and heavy chassis frame to combat unavoidable flexing. The pillarless design was also available in four-door models using unibody construction.
For example, Chrysler moved to unibody designs for most of its models in 1960 and American Motors Corporation offered four-door sedans, as well as a four-door station wagon, from 1958 until 1960 in the Rambler and Ambassador series. In 1973, the US government passed Federal Motor Vehicle Safety Standard 216, creating a standard roof strength test to measure the integrity of roof structures in motor vehicles, which came into effect some years later. Production of the hardtop sedan body style ended with the 1978 Chrysler Newport. For a time, roofs were covered with vinyl and B-pillars were minimized by using styling methods like matt black finishes. Stylists and engineers soon developed more subtle solutions. Mid-20th century variations: Close-coupled sedans A close-coupled sedan is a body style produced in the United States during the 1920s and 1930s. Their boxy two-box styling made these sedans more like crossover vehicles than traditional three-box sedans. Like other close-coupled body styles, the rear seats are located further forward than in a regular sedan.: 43  This reduced the length of the body; close-coupled sedans, also known as town sedans, were the shortest of the sedan models offered. Models of close-coupled sedans include the Chrysler Imperial, Duesenberg Model A, and Packard 745. Mid-20th century variations: Coach sedans A two-door sedan for four or five passengers but with less room for passengers than a standard sedan. A Coach body has no external trunk for luggage. Haajanen says it can be difficult to tell the difference between a Club and a Brougham and a Coach body, as if manufacturers were more concerned with marketing their product than adhering to strict body style definitions. Mid-20th century variations: Close-coupled saloons Close-coupled saloons originated as four-door thoroughbred sporting horse-drawn carriages with little room for the feet of rear passengers. In automotive use, manufacturers in the United Kingdom used the term for the development of the chummy body, where passengers were forced to be friendly because they were tightly packed. They provided weather protection for extra passengers in what would otherwise be a two-seater car. Two-door versions would be described in the United States and France as coach bodies. A postwar example is the Rover 3 Litre Coupé. Mid-20th century variations: Club sedans Produced in the United States from the mid-1920s to the mid-1950s, the name club sedan was used for highly appointed models using the sedan chassis.: 44  Some people describe a club sedan as a two-door vehicle with a body style otherwise identical to the sedan models in the range. Others describe a club sedan as having either two or four doors and a shorter roof and therefore less interior space than the other sedan models in the range.: 44  Club sedan originates from the club carriage (e.g. the lounge or parlour carriage) in a railroad train.: 44  Mid-20th century variations: Sedanets From the 1910s to the 1950s, several United States manufacturers named models either Sedanet or Sedanette. The term originated as a smaller version of the sedan; however, it has also been used for convertibles and fastback coupes. Models that have been called Sedanet or Sedanette include the 1917 Dort Sedanet, King, 1919 Lexington, 1930s Cadillac Fleetwood Sedanette, 1949 Cadillac Series 62 Sedanette, 1942-1951 Buick Super Sedanet, and 1956 Studebaker.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rotator cuff** Rotator cuff: The rotator cuff is a group of muscles and their tendons that act to stabilize the human shoulder and allow for its extensive range of motion. Of the seven scapulohumeral muscles, four make up the rotator cuff. The four muscles are the supraspinatus, infraspinatus, teres minor, and subscapularis muscles. Structure: Muscles composing rotator cuff The supraspinatus muscle spreads out in a horizontal band to insert on the superior facet of the greater tubercle of the humerus. The greater tubercle projects as the most lateral structure of the humeral head. Medial to this, in turn, is the lesser tubercle of the humeral head. The subscapularis muscle origin is divided from the remainder of the rotator cuff origins as it is deep to the scapula. Structure: The four tendons of these muscles converge to form the rotator cuff tendon. These tendinous insertions, along with the articular capsule, the coracohumeral ligament, and the glenohumeral ligament complex, blend into a confluent sheet before insertion into the humeral tuberosities (i.e. greater and lesser tubercle). The infraspinatus and teres minor fuse near their musculotendinous junctions, while the supraspinatus and subscapularis tendons join as a sheath that surrounds the biceps tendon at the entrance of the bicipital groove. The supraspinatus is most commonly involved in a rotator cuff tear. Function: The rotator cuff muscles are important in shoulder movements and in maintaining glenohumeral joint (shoulder joint) stability. These muscles arise from the scapula and connect to the head of the humerus, forming a cuff at the shoulder joint. They hold the head of the humerus in the small and shallow glenoid fossa of the scapula. The glenohumeral joint has been analogously described as a golf ball (head of the humerus) sitting on a golf tee (glenoid fossa). During abduction of the arm, moving it outward and away from the trunk (torso), the rotator cuff compresses the glenohumeral joint, an action known as concavity compression, in order to allow the large deltoid muscle to further elevate the arm. In other words, without the rotator cuff, the humeral head would ride up partially out of the glenoid fossa, lessening the efficiency of the deltoid muscle. The anterior and posterior directions of the glenoid fossa are more susceptible to shear force perturbations as the glenoid fossa is not as deep relative to the superior and inferior directions. The rotator cuff's contributions to concavity compression and stability vary according to their stiffness and the direction of the force they apply upon the joint. Function: In addition to stabilizing the glenohumeral joint and controlling humeral head translation, the rotator cuff muscles also perform multiple functions, including abduction, internal rotation, and external rotation of the shoulder. The infraspinatus and subscapularis have significant roles in scapular plane shoulder abduction (scaption), generating forces that are two to three times greater than the force produced by the supraspinatus muscle. However, the supraspinatus is more effective for general shoulder abduction because of its moment arm. The anterior portion of the supraspinatus tendon is subjected to a significantly greater load and stress, and performs its main functional role. Clinical significance: Tear The tendons at the ends of the rotator cuff muscles can become torn, leading to pain and restricted movement of the arm.
A torn rotator cuff can occur following trauma to the shoulder or it can occur through the "wear and tear" on tendons, most commonly the supraspinatus tendon found under the acromion. Clinical significance: Rotator cuff injuries are commonly associated with motions that require repeated overhead motions or forceful pulling motions. Such injuries are frequently sustained by athletes whose actions include making repetitive throws, athletes such as baseball pitchers, softball pitchers, American football players (especially quarterbacks), firefighters, cheerleaders, weightlifters (especially powerlifters due to extreme weights used in the bench press), rugby players, volleyball players (due to their swinging motions), water polo players, rodeo team ropers, shot put throwers, swimmers, boxers, kayakers, martial artists, fast bowlers in cricket, tennis players (due to their service motion) and tenpin bowlers due to the repetitive swinging motion of the arm with the weight of a bowling ball. This type of injury also commonly affects orchestra conductors, choral conductors, and drummers (due, again, to swinging motions). Clinical significance: As progression increases after 4–6 weeks, active exercises are now implemented into the rehabilitation process. Active exercises allow an increase in strength and further range of motion by permitting the movement of the shoulder joint without the support of a physical therapist. Active exercises include the Pendulum exercise, which is used to strengthen the Supraspinatus, Infraspinatus, and Subscapularis. External rotation of the shoulder with the arm at a 90-degree angle is an additional exercise done to increase control and range of motion of the Infraspinatus and Teres minor muscles. Various active exercises are done for an additional 3–6 weeks as progress is based on an individual case-by-case basis. At 8–12 weeks, strength training intensity will increase as free-weights and resistance bands will be implemented within the exercise prescription. Clinical significance: Impingement The accuracy of the physical examination is low. The Hawkins-Kennedy test has a sensitivity of approximately 80% to 90% for detecting impingement. The infraspinatus and supraspinatus tests have a specificity of 80% to 90%.A common cause of shoulder pain in rotator cuff impingement syndrome is tendinosis, which is an age-related and most often self-limiting condition.Studies show that there is moderate evidence that hypothermia (cold therapy) and exercise therapy used together are more effective than simply waiting for surgery and they suggest the best outcome for non-surgical treatment of subacromial impingement syndrome. The group of patients who participated in the exercise group were found to use significantly lower amounts of non-steroidal anti-inflammatory drugs (NSAIDS) and analgesics than the control group with no intervention. Clinical significance: Inflammation and fibrosis The rotator interval is a triangular space in the shoulder that is functionally reinforced externally by the coracohumeral ligament and internally by the superior glenohumeral ligament, and traversed by the intra-articular biceps tendon. On imaging, it is defined by the coracoid process at its base, the supraspinatus tendon superiorly and the subscapularis tendon inferiorly. Changes of adhesive capsulitis can be seen at this interval as edema and fibrosis. Pathology at the interval is also associated with glenohumeral and biceps instability. 
Adhesive capsulitis or "frozen shoulder" is often secondary to rotator cuff injury due to post-surgical immobilization. Available treatment options include intra-articular corticosteroid injections to relieve pain in the short-term and electrotherapy, mobilizations, and home exercise programs for long-term pain relief. Pain management: Treatment for a rotator cuff tear can include rest, ice, physical therapy, and/or surgery. A review of manual therapy and exercise treatments found inconclusive evidence as to whether these treatments were any better than placebo; however, "High quality evidence from one trial suggested that manual therapy and exercise improved function only slightly more than placebo at 22 weeks, was little or no different to placebo in terms of other patient-important outcomes (e.g. overall pain), and was associated with relatively more frequent but mild adverse events." The rotator cuff includes muscles such as the supraspinatus muscle, the infraspinatus muscle, the teres minor muscle and the subscapularis muscle. The upper arm consists of the deltoids, biceps, as well as the triceps. Steps must be taken and precautions need to be observed in order for the rotator cuff to heal properly following surgery while still maintaining function to prevent any deteriorating effects on the muscles. In the immediate postoperative period (within one week following surgery), pain can be treated with a standard ice wrap. There are also commercial devices available which not only cool the shoulder but also exert pressure on the shoulder ("compressive cryotherapy"). However, one study has shown no significant difference in postoperative pain when comparing these devices to a standard ice wrap. Pain management: Continuous passive motion Physiotherapy can help manage the pain, but utilizing a program that involves continuous passive motion will reduce the pain even further. Assisted passive motion at a low intensity allows the tissues to be stretched slightly without damaging them. Continuous passive motion improves the shoulder range and enables the subject to expand their range of motion without experiencing additional pain. Easing into the motions will allow the person to continue working those muscles to keep them from undergoing atrophy, while also still maintaining the minimum level of function needed for daily activities. Doing these exercises will also prevent tears in the muscles that would impair daily function further. Since injuries of the rotator cuff often tend to inhibit motion without the person first experiencing discomfort and pain, other methods can be used to help accommodate that. Pain management: Manual therapy A systematic review and meta-analysis showed that manual therapy may help to reduce pain for patients with rotator cuff tendinopathy, based on low- to moderate-quality evidence. However, there is no strong evidence that it also improves function. Pain management: Surgery Surgical approaches include acromioplasty (a part of the bone is removed to decrease pressure placed on the rotator cuff tendons), removal of a bursa that is inflamed or swollen, and subacromial decompression (the removal of tissue or bone that is damaged in order to allow more space for the tendons). Surgery may be recommended for patients with an acute, traumatic rotator cuff tear resulting in substantial weakness. Surgery can be performed open or arthroscopically, although the arthroscopic approach has become much more popular.
If a surgical option is selected, the rehabilitation of the rotator cuff is necessary in order to regain maximum strength and range of motion within the shoulder joint. Physical therapy progresses through four stages, increasing movement throughout each phase. The tempo and intensity of the stages are solely reliant on the extent of the injury and the patient's activity requirements. The first stage requires immobilization of the shoulder joint. The shoulder that is injured is placed in a sling and shoulder flexion or abduction of the arm is avoided for 4 to 6 weeks after surgery (Brewster, 1993). Avoiding movement of the shoulder joint allows the torn tendon to fully heal. Once the tendon is entirely recovered, passive exercises can be implemented. Passive exercises of the shoulder are movements in which a physical therapist maintains the arm in a particular position, manipulating the rotator cuff without any effort by the patient. These exercises are used to increase stability, strength and range of motion of the Subscapularis, Supraspinatus, Infraspinatus, and Teres minor muscles within the rotator cuff. Passive exercises include internal and external rotation of the shoulder joint, as well as flexion and extension of the shoulder. A 2019 Cochrane systematic review found with a high degree of certainty that subacromial decompression surgery does not improve pain, function, or quality of life compared with a placebo surgery. Pain management: Orthotherapy exercises Patients that suffer from pain in the rotator cuff may consider incorporating orthotherapy into their daily lives. Orthotherapy is an exercise program that aims to restore the motion and strength of the shoulder muscles. Patients can go through the three phases of orthotherapy to help manage pain and also recover their full range of motion in the rotator cuff. The first phase involves gentle stretches and passive all-around movements, and people are advised not to go above 70 degrees of elevation to prevent any kind of further pain. The second phase of this regimen requires patients to implement exercises to strengthen the muscles surrounding the rotator cuff muscles, combined with the passive exercises done in the first phase to keep on stretching the tissues without overexerting them. Exercises include pushups and shoulder shrugs, and after a couple of weeks of this, daily activities are gradually added to the patient's routine. This program does not require any sort of medication or surgery and can serve as a good alternative. Pain management: The rotator cuff and the upper muscles are responsible for many daily tasks that people do in their lives. A proper recovery needs to be achieved and maintained to prevent limiting movement, and this can be done through simple movements.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**3-Hydroxyphenazepam** 3-Hydroxyphenazepam: 3-Hydroxyphenazepam is a benzodiazepine with hypnotic, sedative, anxiolytic, and anticonvulsant properties. It is an active metabolite of phenazepam, as well as the active metabolite of the benzodiazepine prodrug cinazepam. Relative to phenazepam, 3-hydroxyphenazepam has diminished myorelaxant properties, but is about equivalent in most other regards. Like other benzodiazepines, 3-hydroxyphenazepam behaves as a positive allosteric modulator of the benzodiazepine site of the GABAA receptor with an EC50 value of 10.3 nM. It has been sold online as a designer drug.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Black propaganda** Black propaganda: Black propaganda is a form of propaganda intended to create the impression that it was created by those it is supposed to discredit. Black propaganda contrasts with gray propaganda, which does not identify its source, as well as white propaganda, which does not disguise its origins at all. It is typically used to vilify or embarrass the enemy through misrepresentation.The major characteristic of black propaganda is that the audience are not aware that someone is influencing them, and do not feel that they are being pushed in a certain direction. Black propaganda purports to emanate from a source other than the true source. This type of propaganda is associated with covert psychological operations. Sometimes the source is concealed or credited to a false authority and spreads lies, fabrications, and deceptions. Black propaganda is the "big lie", including all types of creative deceit. Black propaganda relies on the willingness of the receiver to accept the credibility of the source. If the creators or senders of the black propaganda message do not adequately understand their intended audience, the message may be misunderstood, seem suspicious, or fail altogether.Governments conduct black propaganda for a few reasons. By disguising their direct involvement, a government may be more likely to succeed in convincing an otherwise unbelieving target audience. There are also diplomatic reasons behind the use of black propaganda. Black propaganda is necessary to obfuscate a government's involvement in activities that may be detrimental to its foreign policies. In the American Revolution: Benjamin Franklin created and circulated a fake supplement to a Boston newspaper that included letters on Indian atrocities and the treatment of American prisoners. In World War II: British In the United Kingdom, the Political Warfare Executive operated a number of black propaganda radio stations. Gustav Siegfried Eins (GS1) was one of the first such stations—purporting to be a clandestine German station. The speaker, "Der Chef", purported to be a Nazi extremist, accusing Adolf Hitler and his henchmen of going soft. The station focused on alleged corruption and sexual improprieties of Nazi Party members. In World War II: Another example was the British radio station Soldatensender Calais, which purported to be a radio station for the Wehrmacht. Under the direction of Sefton Delmer, a British journalist who spoke perfect Berliner German, Soldatensender Calais and its associated shortwave station, Kurzwellensender Atlantik, broadcast music, up-to-date sports scores, speeches of Adolf Hitler for "cover" and subtle propaganda. In World War II: Radio Deutschland was another radio station employed by the British during the war aimed and designed to undermine German morale and create tensions that would ultimately disrupt the German war effort. The station was broadcast on a frequency close on the radio dial to an actual German station. During the war most Germans actually believed that this station was in fact a German radio station and it even gained the recognition of Germany's propaganda chief Joseph Goebbels. In World War II: There were British black propaganda radio stations in most of the languages of occupied Europe as well as German and Italian. Most of these were based in the area around Bletchley Park and Woburn Abbey in Buckinghamshire and Bedfordshire respectively. 
In World War II: Another possible example was a rumour that there had been a German attempt to land on British shores at Shingle Street, but it had been repulsed with large German casualties. This was reported in the American press, and in William L. Shirer's Berlin Diary but was officially denied. British papers, declassified in 1993, have suggested this was a successful example of British black propaganda to bolster morale in the UK, US and occupied Europe.Author James Hayward has proposed that the rumours, which were widely reported in the American press, were a successfully engineered example of black propaganda with an aim of ensuring American co-operation and securing lend lease resources by showing that the United Kingdom was capable of successfully resisting the might of the German Army.David Hare's play Licking Hitler provides a fictionalised account based on the British black propaganda efforts in World War II. In World War II: German German black propaganda usually took advantage of European racism and anti-Communism. For example, on the night of April 27, 1944, German aircraft under cover of darkness (and possibly carrying fake Royal Air Force markings) dropped propaganda leaflets on occupied Denmark. These leaflets used the title of Frihedsposten, a genuine Danish underground newspaper, and claimed that the "hour of liberation" was approaching. They instructed Danes to accept "occupation by Russian or specially trained American Negro soldiers" until the first disorders resulting from military operations were over.The German Büro Concordia organisation operated several black propaganda radio stations (many of which pretended to broadcast illegally from within the countries they targeted). One of these stations was Workers' Challenge which purported to be a British communist radio station and encouraged British workers to go on strike against their "capitalist" bosses. In World War II: Pacific Theatre The Tanaka Memorial was a document that described a Japanese plan for world conquest, beginning with the conquest of China. Most historians now believe it was a forgery. In World War II: The following message was distributed in black propaganda leaflets dropped by the Japanese over the Philippines in World War II. It was designed to turn Filipinos against the United States: GUARD AGAINST VENEREAL DISEASES Lately there has been a great increase in the number of venereal diseases among our officers and men owing to prolific contacts with Filipino women of dubious character. In World War II: Due to hard times and stricken conditions brought about by the Japanese occupation of the islands, Filipino women are willing to offer themselves for a small amount of foodstuffs. It is advisable in such cases to take full protective measures by use of condoms, protective medicines, etc.; better still to hold intercourse only with wives, virgins, or women of respective [sic] character. In World War II: Furthermore, in view of the increase in pro-American leanings, many Filipino women are more than willing to offer themselves to American soldiers, and due to the fact that Filipinos have no knowledge of hygiene, disease carriers are rampant and due care must be taken. Cold War black propaganda of the Soviet Union: Prior to, and during the Cold War, the Soviet Union used disinformation on multiple occasions. It also employed the technique during the Iranian hostage crisis that took place from 1979 until 1981. 
For strictly political purposes, and to show support for the hostages, Soviet diplomats at the United Nations vocally criticized the taking of the hostages. At this same time, Soviet "black" radio stations within Iran called the National Voice of Iran openly broadcast strong support for the hostage-takers in an effort to increase anti-American sentiment inside Iran. This was a clear use of black propaganda to make anti-American broadcasts appear as if they were originating from Iranian sources. Cold War black propaganda of the Soviet Union: Throughout the Cold War, the Soviet Union effectively used the KGB's Service A of the First Chief Directorate in order to conduct its covert, or "black", "active measures". It was Service A that was responsible for clandestine campaigns that were targeted at foreign governments, public populations, as well as to influence individuals and specific groups that were hostile towards the Soviet government and its policies. The majority of their operations was actually conducted by other elements and directorates of the KGB. As a result, it was the First Chief Directorate that was ultimately responsible for the production of Soviet black propaganda operations. Cold War black propaganda of the Soviet Union: By the 1980s, Service A consisted of nearly 120 officers whose responsibilities consisted of covert media placements, and controlled media to covertly introduce carefully manufactured information, disinformation, and slogans into the areas such as government, media, and religion of their targeted countries, namely the United States. Because both the Soviet Union and the KGB's involvements were not acknowledged and intentionally disguised, these operations are therefore classified as a form of black propaganda. The activities of Service A greatly increased during the period of the 1980s through the early 1990s presumably as the Soviet government fought to maintain control during the declining period of the Cold War. UK: The British government ran a secret “black propaganda” campaign for decades, targeting Africa, the Middle East and parts of Asia with leaflets and reports from fake sources aimed at destabilising cold war enemies by encouraging racial tensions, sowing chaos, inciting violence and reinforcing anti-communist ideas, newly declassified documents have revealed. United States: Following the September 11 attacks against the United States, the U.S. Department of Defense organized and implemented the Office of Strategic Influence in an effort to improve public support abroad, mainly in Muslim countries. The head of OSI was an appointed general, Pete Worden who maintained a mission described by The New York Times as "circulating classified proposals calling for aggressive campaigns that use[d] not only the foreign media and the Internet, but also covert operations." Worden, as well as then Defense Secretary Donald Rumsfeld planned for what Pentagon officials said was 'a broad mission ranging from 'black' campaigns that use[d] disinformation and other covert activities to 'white' public affairs that rely on truthful news releases.' Therefore, OSI's operations could include black activities. United States: OSI's operations were to do more than public relations work, but included contacting and emailing media, journalists, and foreign community leaders with information that would counter foreign governments and organizations that are hostile to the United States. 
In doing so, the emails would be masked by using addresses ending with .com as opposed to using the standard Pentagon address of .mil, and hide any involvement of the US government and the Pentagon. The Pentagon is forbidden to conduct black propaganda operations within the American media, but is not prohibited from conducting these operations against foreign media outlets. The thought of conducting black propaganda operations and utilizing disinformation resulted in harsh criticism for the program that resulted in its closure in 2002. In domestic politics: Australian media In the run-up to the 2007 federal election in Australia, flyers were circulated around Sydney under the name of a fake organisation called the Islamic Australia Federation. The flyers thanked the Australian Labor Party for supporting terrorism, Islamic fundamentalists, and the Bali bombing suspects. A group of Sydney-based Liberal Party members were implicated in the incident. In domestic politics: British media In November 1995, a Sunday Telegraph newspaper article alleged Libya's Saif al-Islam Gaddafi (Muammar Gaddafi's son) was connected to currency counterfeiting. The story's author, Con Coughlin, falsely attributed the claim to a "British banking official", but his information actually came from MI6 agents. This fact, and the fact that Coughlin had no other sources for the story, only came to light when Saif Gaddafi later sued the newspaper for libel.The Zinoviev letter was a fake letter published in 1924 in the British newspaper the Daily Mail. It claimed to be a letter from the Comintern president Grigory Zinoviev to the Communist Party of Great Britain. It called on Communists to mobilise "sympathetic forces" in the Labour Party and talked of creating dissent in the British Armed Forces. The Zinoviev letter was instrumental in the Conservative victory in the 1924 general election. The letter seemed authentic at the time, but historians now believe it was a forgery. Historians now agree that the letter had little impact on the Labour vote—which held up in 1924. However, it aided the Conservative Party in hastening the collapse of the Liberal Party that led to the Conservative landslide. In domestic politics: United States media In the "Roorback forgery" of 1844 the Chronicle of Ithaca, New York ran a story, supposedly by a German tourist called Baron von Roorback, that James K. Polk, standing for re-election as a Democrat to the United States House of Representatives, branded his slaves before selling them at auction to distinguish them from the others on sale. Polk actually benefited from the ploy, as it reflected badly on his opponents when the lie was found out. Afterwards the term "Roorback" was coined for political dirty tricks. In domestic politics: During the 1972 U.S. presidential election, Donald H. Segretti, a political operative for President Richard Nixon's reelection campaign, released a faked letter, on Senator Edmund Muskie's letterhead, falsely alleging that Senator Henry "Scoop" Jackson, against whom Muskie was running for the Democratic Party's nomination, had had an illegitimate child with a seventeen-year-old. Muskie, who had been considered the frontrunner, lost the nomination to George McGovern, and Nixon was reelected. The letter was part of a campaign of so-called "dirty tricks", directed by Segretti, and uncovered as part of the Watergate Scandal. Segretti went to prison in 1974 after pleading guilty to three misdemeanor counts of distributing illegal campaign literature. 
Another of his dirty tricks was the "Canuck letter", although this was libel of Muskie and not a black propaganda piece. In domestic politics: United States Government The Federal Bureau of Investigation's Counter-intelligence program "COINTELPRO", was intended to, according to the FBI, "expose, disrupt, misdirect, discredit, or otherwise neutralize the activities of black nationalists, hate-type organizations and groups, their leadership, membership, and supporters." Black propaganda was used on Communists and the Black Panther Party. It was also used against opposition to the U.S. involvement in the Vietnam War, labor leaders, and Native Americans. The FBI's strategy was captured in a 1968 memo: "Consider the use of cartoons, photographs, and anonymous letters which will have the effect of ridiculing the New Left. Ridicule is one of the most potent weapons which we can use against it." "The Penkovsky Papers" are an example of a black propaganda effort conducted by the United States' Central Intelligence Agency during the 1960s. The "Penkovsky Papers" were alleged to have been written by a Soviet GRU defector, Colonel Oleg Penkovsky, but were in fact produced by the CIA in an effort to diminish the Soviet Union's credibility at a pivotal time during the Cold War. In domestic politics: Religious black propaganda In 1955, the Church of Scientology published the book Brain-Washing, which was allegedly written by the Soviet secret police chief Lavrentiy Beria. In fact, the book describes all of the practices Scientology opposes (brain surgery, psychiatric drugs, psychology, child labor laws, and income tax) as Communist conspiracies directed by Moscow, and it describes the greatest threat to "Communism" as being "The Church of Scientology" (the Catholic Church is barely mentioned as a threat to the Soviet Union, and the Eastern Orthodox Church, the dominant religion of the Soviet Union, is not mentioned at all). Additionally, “Beria” uses precise phrases that L. Ron Hubbard (creator of scientology) has coined, such as "pain-drug hypnosis" and "thinkingness". In domestic politics: The Church of Scientology, under the leadership of L. Ron Hubbard, is alleged to have advocated the usage of "black propaganda" to "destroy reputation or public belief in persons, companies or nations" as a practice of "fair game" against suppressive persons. After the author Paulette Cooper wrote The Scandal of Scientology, the Church of Scientology ran a false flag operation that stole stationery from her in order to fabricate bomb threats. In domestic politics: Environmentalist black propaganda The "Let's Go! Shell in the Arctic" website was designed to look like an official website by Royal Dutch Shell, but was in fact a fake produced by Greenpeace.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tel Hashomer camptodactyly syndrome** Tel Hashomer camptodactyly syndrome: Tel Hashomer camptodactyly syndrome is a rare genetic disorder which is characterized by camptodactyly (a condition where one or more fingers or toes are permanently bent), facial dysmorphisms, and fingerprint, skeletal and muscular abnormalities. This disorder is thought to be inherited in an autosomal recessive fashion. Presentation: This disorder has symptoms that affect the feet, hands, muscles, fingerprints, skeleton, heart and back; these include: talipes equinovarus (clubfeet), thenar/hypothenar hypoplasia, abnormalities of the palmar crease and the fingerprints, hypertelorism, long philtrum, spina bifida, and mitral valve prolapse. Etymology: This disorder was discovered in the late 1960s to mid-1970s by Richard M. Goodman, a US-born geneticist working in Tel Aviv, Israel. As of 2016, only 23 cases of this disorder have been reported in the medical literature. Cases: The following is a list of every case report of the disorder. Goodman et al. describes Tel-Hashomer camptodactyly syndrome for the first time in history in two siblings that came from non-consanguineous parents. Goodman et al. observes two additional cases of the disorder. Gollop and Colleto et al. describe members from two consanguineous Brazilian families. Cases: Patton et al. shows that the muscle weakness in the disorder is caused by abnormal muscle histology. Tylki-Szymanska reports two people with the disorder whose parents were first cousins. Pagnan et al. describes two siblings from a Brazilian family. Toriello et al. describes two Latin American siblings with the disorder, both of them showing mitral valve prolapse. Zareen and Rashmi describe two Indian sisters with the disorder who came from a non-consanguineous family; both of them presented with hirsutism, a feature not seen before in Tel Hashomer camptodactyly.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Deployment diagram** Deployment diagram: A deployment diagram in the Unified Modeling Language models the physical deployment of artifacts on nodes. To describe a web site, for example, a deployment diagram would show what hardware components ("nodes") exist (e.g., a web server, an application server, and a database server), what software components ("artifacts") run on each node (e.g., web application, database), and how the different pieces are connected (e.g. JDBC, REST, RMI). Deployment diagram: The nodes appear as boxes, and the artifacts allocated to each node appear as rectangles within the boxes. Nodes may have subnodes, which appear as nested boxes. A single node in a deployment diagram may conceptually represent multiple physical nodes, such as a cluster of database servers. Deployment diagram: There are two types of nodes: device nodes and execution environment nodes. Device nodes are physical computing resources with processing, memory, and services to execute software, such as typical computers or mobile phones. An execution environment node (EEN) is a software computing resource that runs within an outer node and which itself provides a service to host and execute other executable software elements.
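The information a deployment diagram captures can also be written down as plain data. The sketch below records the web-site example above in Python (nodes, the artifacts allocated to them, and labelled communication paths); it is only an illustration of that information content under assumed names, not a UML notation or tool.

```python
# Plain-data sketch of the example deployment described above: three nodes,
# the artifacts deployed on each, and the connections between them.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                                       # device or execution environment node
    artifacts: list = field(default_factory=list)   # software deployed on this node
    subnodes: list = field(default_factory=list)    # nested (execution environment) nodes

web = Node("web server", artifacts=["web application"])
app = Node("application server", artifacts=["business services"])
db  = Node("database server", artifacts=["database"])

# Communication paths between nodes, labelled with the protocol used.
connections = [
    (web, app, "REST"),
    (app, db,  "JDBC"),
]

for a, b, protocol in connections:
    print(f"{a.name} --{protocol}--> {b.name}")
```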
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EIA RF Connectors** EIA RF Connectors: EIA RF Connectors are used to connect two items of high power radio frequency rigid or semi-rigid (flexline) coaxial transmission line. Typically these are only required in very high power transmitting installations (above 3 kW at VHF to MW) where the feedline diameters may be several inches. The connectors are always female, requiring a male coupling element or bullet to make the connection. The EIA, under the Electronic Components Industry Association (http://www.ecianow.org/), is responsible for a number of standard imperial connector sizes. Dimensions: The flange design and the inner and outer conductor dimensions are standardized by the EIA in the RS-225 (50 Ω) and RS-259 (75 Ω) standards. They are commonly referred to by the inner diameter of the outer conductor in fractional inches. Sizes covered under these two standards range from 3/8 to 6 1/8 inch outside diameter (OD) for 50 Ω and 3/8 to 3 1/8 inch OD for 75 Ω. Dimensions: Peak pulse power handling, driven by voltage breakdown, is more or less frequency independent for any given size (and can be deduced by assuming ~300 V RMS per mm of inner-to-outer spacing), but the average power, limited by losses heating the centre conductors, increases approximately with the square root of the operating frequency. Commonly the limit is quoted as the dissipation that will raise the inner temperature to 100 °C when the outer is maintained constant at +40 °C. Field failures can occur at power levels well below this if the central bullet connections are not making uniform positive contact and free of contamination. Conversely, the average power ratings can be significantly exceeded if there is forced air flow either through the inner conductor or through the void between the inner and outer conductors. Many years ago, the two RS standards were considered obsolete by the EIA; only recently (as of 2007) has there been an effort by manufacturers in the US to update these standards. Dimensions: The 7/8" is the smallest EIA size in common use. Below this, other types such as the DIN 7/16 are more popular. International standards: The corresponding international standards are published by the International Electrotechnical Commission: IEC 60339-1 and IEC 60339-2. These standards are more complete, as they include many additional sizes that are missing from the EIA standards. Interchangeability: Many of these sizes are also interchangeable with RF connectors defined by the US military in MIL-DTL-24044. Manufacturers: Dielectric, Exir Broadcasting, Radio Frequency Systems, Alan Dick, Andrew, Shively Labs, JACAL conectores coaxil, Myat Industries, Electronics Research, SPINNER GmbH, SIRA srl, SILEX SYSTEM TELECOM, www.brackemfg.com
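As a rough illustration of the breakdown-limited rating mentioned above, the sketch below applies the ~300 V RMS per mm rule of thumb to estimate a peak power figure for a matched 50 Ω line. The gap value is an illustrative assumption for a hypothetical line size, and the P = V²/Z relation is the usual matched-line approximation; neither figure is taken from the RS-225 standard itself.

```python
# Rough estimate of breakdown-limited power for an air-dielectric coaxial line,
# using the ~300 V RMS per mm of inner-to-outer spacing rule of thumb quoted above.

def breakdown_limited_power(gap_mm: float, impedance_ohms: float = 50.0,
                            volts_per_mm: float = 300.0) -> float:
    """Return an approximate RMS power limit (watts) set by voltage breakdown."""
    v_max = volts_per_mm * gap_mm          # maximum RMS voltage across the gap
    return v_max ** 2 / impedance_ohms     # P = V^2 / Z for a matched line

# Hypothetical example: a line with a 25 mm radial gap between inner and outer conductor.
gap = 25.0
print(f"~{breakdown_limited_power(gap) / 1e6:.1f} MW breakdown-limited for a {gap} mm gap")
```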
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oil sludge** Oil sludge: Oil sludge or black sludge is a gel-like or semi-solid deposit inside an internal combustion engine that can build up with catastrophic results. It is often the result of contaminated engine oil and occurs when moisture and/or high heat is introduced to engine oil. Causes: Oil sludge may occur due to a variety of different factors. Some of the most common causes are: a defective crankcase ventilation system, oil/coolant contamination, neglected oil changes, a low oil level, and poor engine design. Precautions: Oil sludge is generally preventable through frequent oil changes at manufacturer-specified intervals; however, while uncommon, some engines do have a tendency to build up more sludge than others.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HLSW** HLSW: Half-Life Server Watch (HLSW) is a game server browser and administration tool written by Timo Stripf. HLSW was originally designed with a focus on administrating and joining Half-Life servers (pre-Steam), hence the name Half-Life Server Watch. However, over the years, HLSW has added support for other games and modifications. HLSW abbreviation: The expansion of the HLSW abbreviation, Half-Life Server Watch, is no longer considered relevant by the developers of HLSW because Half-Life is no longer the primary focus, as many games are now supported. Related projects: PocketHLSW PocketHLSW was initially developed as a side project by Andrew Collins; it was later adopted by HLSW but is still fully maintained by its original author. PocketHLSW is a Google Gadget that depends entirely on XML feeds from HLSW and is the first example to make use of the feeds. With PocketHLSW users are able to quickly find a list of servers matching their search criteria, view their HLSW buddy lists and see the latest HLSW news. Related projects: XML feeds HLSW has recently started opening up many areas of its databases by providing XML feeds that are free to be used by developers for any purpose. Currently little documentation exists on the feeds, but bits of information can be found in the official forums. PocketHLSW is an example of the capabilities of the HLSW XML feeds.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HERC1** HERC1: Probable E3 ubiquitin-protein ligase HERC1 is an enzyme that in humans is encoded by the HERC1 gene. The protein encoded by this gene stimulates guanine nucleotide exchange on ARF1 and Rab proteins. This protein is thought to be involved in membrane transport processes. Knowledge of the gene is facilitated by the discovery of a mouse mutation. The tambaleante (tbl) mutation arose spontaneously on the DW/J-Pas genetic background; it is a recessive mutation of the Herc1 gene located on mouse chromosome 9 that increases Herc1 protein levels. This protein is largely expressed in many tissues (Sanchez-Tena et al., 2016; https://www.proteinatlas.org/ENSG00000103657-HERC1/tissue) and multiple brain regions including the cerebellum (https://www.proteinatlas.org/ENSG00000103657-HERC1/brain). Herc1-tbl (tambaleante) mutant mice are characterized by Purkinje cell loss. In addition to the cerebellum, Herc1-tbl mutants had lower dendritic spine widths in CA1 pyramidal neurons. Herc1-tbl mutant mice are also characterized by cerebellar ataxia, an unstable gait, and a limb-flexion reflex triggered by tail lifting seen in other cerebellar mutants, the reverse of the normal limb extensor reflex. Relative to wild-type mice, Herc1-tbl mutant mice fell sooner and more often from a rotarod, fell sooner from a vertical pole, slipped more often and took more time to reach the end of a stationary beam, and had weaker forelimb grip strength measured by a grip strength meter. The rotarod deficit was rescued when Herc1-tbl mutants were bred with transgenic mice expressing normal human HERC1. Herc1-tbl mutants were also less adept at landing correctly on all four legs when released in the air. Biallelic HERC1 mutations were reported in two siblings with facial dysmorphism, macrocephaly, motor development delay, ataxic gait, hypotonia, and intellectual disability. Likewise, a nonsense HERC1 variant was reported in one subject with an autosomal recessive condition consisting of facial dysmorphism, macrocephaly, epilepsy, motor development delay, cerebellar atrophy, and intellectual disability. Facial dysmorphism, macrocephaly, and intellectual disability but without cerebellar ataxia were also reported in two siblings with a HERC1 splice variant mutation. The lack of cerebellar involvement was ascribed either to the nature of the mutation or to the influence of modifier genes. Another patient with a frameshift HERC1 mutation predicted to truncate the protein displayed facial dysmorphism, macrocephaly, epileptiform discharges, hypotonia, intellectual disability, and autistic features.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fibre Channel frame** Fibre Channel frame: In computer networking, a Fibre Channel frame is the frame of the Fibre Channel protocol. The basic building blocks of an FC connection are the frames. They contain the information to be transmitted (payload), the addresses of the source and destination ports and link control information. Frames are broadly categorized as data frames and link control frames. Data frames may be used as Link_Data frames and Device_Data frames; link control frames are classified as Acknowledge (ACK) and Link_Response (Busy and Reject) frames. The primary function of the Fabric is to receive the frames from the source port and route them to the destination port. It is the FC-2 layer's responsibility to break the data to be transmitted into frame-sized pieces and to reassemble the frames. Fibre Channel frame: Each frame begins and ends with a frame delimiter. The frame header immediately follows the Start of Frame (SOF) delimiter. The frame header is used to control link applications, control device protocol transfers, and detect missing or out-of-order frames. Optional headers may contain further link control information. A field of at most 2048 bytes (the payload) contains the information to be transferred from a source N_Port to a destination N_Port. The 4-byte Cyclic Redundancy Check (CRC) precedes the End of Frame (EOF) delimiter. The CRC is used to detect transmission errors. The maximum total frame length is 2148 bytes. Fibre Channel frame: Between successive frames a sequence of (at least) six primitives must be transmitted, sometimes called the interframe gap.
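The frame layout described above can be summarized numerically. The sketch below is a minimal illustration using the sizes given in the text (2048-byte maximum payload, 4-byte CRC, 2148-byte maximum total frame); the 4-byte SOF/EOF delimiters, the 24-byte frame header, and the 64-byte allowance for optional headers are assumptions used to make the arithmetic add up and are not spelled out in the passage.

```python
# Approximate Fibre Channel frame budget (sizes in bytes).
# SOF/EOF delimiter widths, header size, and optional-header allowance are assumed.
SOF_DELIMITER   = 4
FRAME_HEADER    = 24
OPTIONAL_HDRS   = 64     # assumed maximum for optional headers
MAX_PAYLOAD     = 2048   # from the text
CRC             = 4      # from the text
EOF_DELIMITER   = 4

def frame_length(payload_bytes: int) -> int:
    """Total on-the-wire length of a single frame carrying `payload_bytes` of data."""
    if not 0 <= payload_bytes <= MAX_PAYLOAD:
        raise ValueError("payload must fit in a single frame")
    return (SOF_DELIMITER + FRAME_HEADER + OPTIONAL_HDRS
            + payload_bytes + CRC + EOF_DELIMITER)

print(frame_length(MAX_PAYLOAD))   # 2148, matching the maximum total frame length above
```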
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lung bud** Lung bud: The lung bud sometimes referred to as the respiratory bud forms from the respiratory diverticulum, an embryological endodermal structure that develops into the respiratory tract organs such as the larynx, trachea, bronchi and lungs. It arises from part of the laryngotracheal tube. Early stage: In the fourth week of development, the respiratory diverticulum, starts to grow from the ventral (front) side of the foregut into the mesoderm that surrounds it, forming the lung bud. Around the 28th day, during the separation of the lung bud from the foregut it forms the trachea and splits into two bronchial buds, one on each side. Early stage: Molecular signaling The molecular signaling involved in the specification of the respiratory bud starts with the expression of the Nkx2-1 gene, which determines the respiratory field – the area where the respiratory bud will begin to grow from. The signaling that makes the growth of the respiratory bud possible is complex and involves a number of interactions between the mesoderm and the respiratory bud epithelium, in which members of the Fgf and Fgfr family of genes express. Early stage: Separation of trachea and esophagus At first, the posterior part of the trachea is open to the esophagus, but as the bud elongates two longitudinal mesodermal ridges known as the laryngotracheal folds, begin to form and grow until they join, forming a wall between the two organs. An incomplete separation of the organs leads to a congenital abnormality known as a tracheoesophageal fistula. Early stage: Larynx development The epithelium of the larynx is of endodermal origin, but the laryngeal cartilages, unlike the rest of the respiratory bud connective tissue, come from the mesenchyme of the fourth and sixth pharyngeal arches. The fourth pharyngeal arch, adjacent to what will be the root of the tongue, will become the epiglottis. The sixth pharyngeal arch, located around the laryngeal orifice, will become the thyroid, cricoid and arytenoid cartilages. These structures are formed in a process in which the lining cells of the primitive larynx proliferate and occlude it. Later, it recanalizes leaving two membrane-like structures: the vocal folds and the vestibular folds. In between, an enlarged space, the ventricle, remains. Failure in this process leads to a serious but rare condition called congenital atresia of the larynx. Later development: After the lung buds have formed, they begin to grow and branch forming a primitive version of the bronchial tree, determining how the lobes of the lung will be arranged in the mature organ. The first stage of alveolar development, spanning between the fifth and the 16th week of development, is called the pseudoglandular stage. It is so called because of the histological appearance of the primitive alveoli, which resemble glandular tissue. After the pseudoglandular stage, the lung enters the canalicular and saccular phases. During these stages, the terminal tubes narrow and give rise to small saccules, which become increasingly associated with capillaries as to make gas exchange possible. The alveolar epithelium begins to differentiate into two distinct types of cells: type I pneumocytes and type II pneumocytes, as well as the respiratory epithelium of the trachea and bronchial tree.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tony Robinson (speech recognition)** Tony Robinson (speech recognition): Tony Robinson is a researcher in the application of recurrent neural networks to speech recognition, and was one of the first to discover the practical capabilities of deep neural networks and their application to speech recognition. He first published on the topic while studying for his PhD at Cambridge University in the 1980s. He has published over a hundred widely cited research papers on automatic speech recognition (ASR) in the years since. In 1995, Robinson formed SoftSound Ltd, a speech technology company which was acquired by Autonomy with a view to using the technology to make unstructured video and voice data easily searchable. Robinson helped build the fastest large vocabulary speech recognition system available at the time, operating in more languages than any other model and based on recurrent neural networks. From 2008 to 2010, Robinson was the Director of the Advanced Speech Group at SpinVox, a provider of speech-to-text conversion services for carrier markets, including wireless, VoIP and cable. Their Automatic Speech Recognition (ASR) system was, for a time, used more than one million times per day, and SpinVox was subsequently acquired by global speech technology company Nuance. Tony Robinson (speech recognition): Tony Robinson was also the founder of Speechmatics, which launched its cloud-based speech recognition services in 2012. Speechmatics subsequently announced a new technology in accelerated new language modeling late in 2017. Robinson continues to publish papers on speech recognition technology, especially in the area of statistical language modelling.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**M-Xylene** M-Xylene: m-Xylene (meta-xylene) is an aromatic hydrocarbon. It is one of the three isomers of dimethylbenzene known collectively as xylenes. The m- stands for meta-, indicating that the two methyl groups in m-xylene occupy positions 1 and 3 on a benzene ring. It is in the positions of the two methyl groups, their arene substitution pattern, that it differs from the other isomers, o-xylene and p-xylene. All have the same chemical formula C6H4(CH3)2. All xylene isomers are colorless and highly flammable. Production and use: Petroleum contains about 1 weight percent xylenes. The meta isomer can be isolated from a mix of xylenes by the partial sulfonation (to which other isomers are less prone) followed by removal of unsulfonated oils and steam distillation of the sulfonated product. Production and use: The major use of meta-xylene is in the production of isophthalic acid, which is used as a copolymerizing monomer to alter the properties of polyethylene terephthalate. The conversion m-xylene to isophthalic acid entails catalytic oxidation. meta-Xylene is also used as a raw material in the manufacture of 2,4- and 2,6-xylidine as well as a range of smaller-volume chemicals. Ammoxidation gives isophthalonitrile. Toxicity and exposure: Xylenes are not acutely toxic, for example the LD50 (rat, oral) is 4300 mg/kg. Effects vary with animal and xylene isomer. Concerns with xylenes focus on narcotic effects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CoreExpress** CoreExpress: CoreExpress modules are computer-on-modules (COMs): complete, highly integrated, compact computers that can be used in an embedded computer board design, much like an integrated circuit component. COMs integrate the CPU, memory, graphics, BIOS, and common I/O interfaces. The interfaces are modern, using only digital buses such as PCI Express, Serial ATA, Ethernet, USB, and HD audio (Intel High Definition Audio). All signals are accessible on a high-density, high-speed, 220-pin connector. Although most implementations use Intel processors, the specification is open to different CPU modules. CoreExpress modules are mounted on a custom carrier board containing the peripherals required for the specific application. In this way, small but highly specialized computer systems can be built. CoreExpress: The CoreExpress form factor was originally developed by LiPPERT Embedded Computers and standardized by the Small Form Factor Special Interest Group in March 2010. Size and mechanics: The specification defines a board size of 58 mm × 65 mm, slightly smaller than a credit card and small enough to allow a carrier board in standard PC/104-Plus format. The module can be embedded into a heat spreader, which distributes the component-generated heat onto a larger surface area. In low power applications, this distribution may be enough for complete thermal dissipation. In higher power applications, the heat spreader presents a thermal interface for mating to additional heat dissipating components such as finned heat sinks. Heat spreaders are simpler and more rugged to connect to than the heat generating components underneath. This simplifies mechanical design for the system builder, but can be less efficient than a complete purpose-built thermal solution. In a complete system, heat spreaders can be part of the electromagnetic interference containment design. Specification: The specification is hosted by the Small Form Factors Special Interest Group and is available on their website. Revision 2.1 was released on February 23, 2010.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital planar holography** Digital planar holography: Digital planar holography (DPH) is a method for designing and fabricating miniature components for integrated optics. It was invented by Vladimir Yankov and first published in 2003. The essence of the DPH technology is embedding computer-designed digital holograms inside a planar waveguide. Light propagates through the plane of the hologram instead of perpendicularly to it, allowing for a long interaction path. The benefits of a long interaction path have long been exploited by volume, or thick, holograms. The planar configuration of the hologram provides easier access to the embedded hologram, aiding in its manufacture. Digital planar holography: Light can be confined in waveguides by a refractive index gradient. Light propagates in a core layer, surrounded by cladding layer(s), which should be selected such that the core refractive index Ncore is greater than that of the cladding Nclad: Ncore > Nclad. Cylindrical waveguides (optical fibers) allow for one-dimensional light propagation along the axis. Planar waveguides, fabricated by sequentially depositing flat layers of transparent materials with a proper refractive index gradient on a standard wafer, confine light in one direction (z axis) and permit free propagation in the two others (x and y axes). Digital planar holography: Light waves propagating in the core infiltrate both cladding layers to a small degree. If the refractive index is modulated in the wave path, light of each given wavelength can be directed to the desired point. Digital planar holography: The DPH technology, or Yankov hologram, comprises the design and fabrication of holographic nano-structures inside a planar waveguide, providing light processing and control. There are many ways of modulating the core refractive index, the simplest of which is engraving the required pattern by nanolithography. The modulation is created by embedding a digital hologram on the lower or upper core surface, or on both of them. According to a statement by Nano-Optic Devices, LLC (NOD), standard lithographical processes can be used, making mass production straightforward and inexpensive. Nanoimprinting could be another viable method of fabricating DPH patterns. Digital planar holography: Each DPH pattern is customized for a given application and computer-generated. It consists of numerous nano-grooves, each about 100 nm wide, positioned in a way that provides maximum efficiency for a specific application. Digital planar holography: The devices are fabricated on standard wafers; one typical device is presented below (from the NOD web site). While the total number of nano-grooves is huge (≥10^6), a typical DPH device size is on the millimeter scale. The small footprint of the DPH makes it possible to combine it with other elements of photonic integrated circuits, such as coarse demultiplexers and interferometers. Nano-Optic Devices, LLC (NOD) developed the DPH technology and applied it to commercializing nano-spectrometers. There are numerous additional applications for DPH in integrated optics. Digital planar holography: The pictures below from the NOD website demonstrate a DPH structure (left) and a nano-spectrometer hologram for the visible band (right).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Supply network** Supply network: A supply network is a pattern of temporal and spatial processes carried out at facility nodes and over distribution links, which adds value for customers through the manufacturing and delivery of products. It comprises the general state of business affairs in which all kinds of material (work-in-process material as well as finished products) are transformed and moved between various value-added points to maximize the value added for customers. In the semiconductor industry, for example, work-in-process moves from fabrication to assembly, and then to the test house. The term "supply network" refers to the high-tech phenomenon of contract manufacturing, where the brand owner does not touch the product. Instead, the brand owner coordinates with contract manufacturers and component suppliers who ship components to it. This business practice requires the brand owner to stay in touch with multiple parties or "network" at once. Supply network: A supply chain is a special instance of a supply network in which raw materials, intermediate materials and finished goods are procured exclusively as products through a chain of processes that supply one another. Resilient supply networks: A resilient supply network effectively aligns its strategy, operations, management systems, governance structure, and decision-support capabilities so that it can uncover and adjust to continually changing risks, endure disruptions to its primary earnings drivers, and create advantages over less adaptive competitors. Moreover, it has the capability to respond rapidly to unforeseen changes, even chaotic disruption. The resilience of a supply network is the ability to bounce back – and, in fact, to bounce forward with speed, determination and precision. In recent studies, resilience is regarded as the next phase in the evolution of traditional, place-centric enterprise structures to highly virtualized, customer-centric structures that enable people to work anytime, anywhere. Resilient supply networks should align their strategy and operations to adapt to risks that affect their capacities. There are four levels of supply chain resilience: reactive supply chain management; internal supply chain integration with planned buffers; collaboration across extended supply chain networks; and dynamic supply chain adaptation and flexibility. Resilient supply networks: Strategic resilience From the strategic resilience viewpoint, a supply network must dynamically reinvent business models and strategies as circumstances change. It is not about responding to a one-time crisis, or just having a flexible supply chain. It is about continuously anticipating and adjusting to discontinuities that can permanently impair the value proposition of a core business, with a focus on delivering customer satisfaction. Strategic resilience requires continuous innovation with respect to product structures and processes, but also corporate behaviour. Renewal can be regarded as the natural consequence of a supply network's innate strategic resilience. Resilient supply networks: Operational resilience In terms of operational resilience, the supply network must respond to the ups and downs of the business cycle or quickly rebalance the product-service mix, processes, and supply chain, by bolstering enterprise agility, flexibility and robustness in the face of changing environments.
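The semiconductor example above (fabrication, then assembly, then the test house) is essentially a small directed graph of facility nodes and distribution links. The sketch below is a minimal, illustrative Python model of such a network; the node names and lead times are hypothetical and not taken from the text.

```python
# A tiny supply-network model: facility nodes connected by distribution links.
# Node names and lead times are illustrative only.
from collections import defaultdict

links = [                      # (from_node, to_node, lead_time_days)
    ("wafer fab", "assembly", 5),
    ("assembly", "test house", 2),
    ("test house", "brand owner", 3),
]

graph = defaultdict(list)
for src, dst, days in links:
    graph[src].append((dst, days))

def total_lead_time(start: str, end: str) -> int:
    """Follow the (single) downstream path from start to end and sum lead times."""
    node, total = start, 0
    while node != end:
        node, days = graph[node][0]
        total += days
    return total

print(total_lead_time("wafer fab", "brand owner"))  # 10 days end to end
```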
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2-Deoxystreptamine N-acetyl-D-glucosaminyltransferase** 2-Deoxystreptamine N-acetyl-D-glucosaminyltransferase: 2-deoxystreptamine N-acetyl-D-glucosaminyltransferase (EC 2.4.1.283, btrM (gene), neoD (gene), kanF (gene)) is an enzyme with systematic name UDP-N-acetyl-alpha-D-glucosamine:2-deoxystreptamine N-acetyl-D-glucosaminyltransferase. This enzyme catalyses the following chemical reaction: UDP-N-acetyl-alpha-D-glucosamine + 2-deoxystreptamine ⇌ UDP + 2'-N-acetylparomamine. The enzyme is involved in the biosynthetic pathways of several clinically important aminocyclitol antibiotics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**(24952) 1997 QJ4** (24952) 1997 QJ4: (24952) 1997 QJ4, also written as 1997 QJ4, is a plutino and as such is trapped in a 2:3 mean-motion resonance with Neptune. It was discovered on 28 August 1997 by Jane X. Luu, Chad Trujillo, David C. Jewitt and K. Berney. This object has a perihelion (closest approach to the Sun) at 30.463 AU and an aphelion (farthest distance from the Sun) at 48.038 AU, so it moves in a relatively eccentric orbit (eccentricity 0.224). It has an estimated diameter of 139 km; therefore, it is unlikely to be classified as a dwarf planet. Sources: List of Trans-Neptunian Objects, Minor Planet Center; another list of TNOs at johnstonsarchive
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Liquid junction potential** Liquid junction potential: Liquid junction potential (abbreviated LJP) occurs when two solutions of electrolytes of different concentrations are in contact with each other. The more concentrated solution will have a tendency to diffuse into the comparatively less concentrated one. The rate of diffusion of each ion will be roughly proportional to its speed in an electric field, or its ion mobility. If the anions diffuse more rapidly than the cations, they will diffuse ahead into the dilute solution, leaving the latter negatively charged and the concentrated solution positively charged. This will result in an electrical double layer of positive and negative charges at the junction of the two solutions. Thus at the point of junction, a potential difference will develop because of the ionic transfer. This potential is called the liquid junction potential or diffusion potential, and it is a non-equilibrium potential. The magnitude of the potential depends on the relative speeds of the ions' movement. Calculation: The liquid junction potential cannot be measured directly but must be calculated. The electromotive force (EMF) of a concentration cell with transference includes the liquid junction potential. The EMF of a concentration cell without transference is: $E = \frac{RT}{F}\ln\frac{a_2}{a_1}$ where $a_1$ and $a_2$ are activities of HCl in the two solutions, R is the universal gas constant, T is the temperature and F is the Faraday constant. The EMF of a concentration cell with transference (including the ion transport number) is: $E_T = 2t_M\frac{RT}{F}\ln\frac{a_2}{a_1}$ where $a_2$ and $a_1$ are activities of the HCl solutions of the right- and left-hand electrodes, respectively, and $t_M$ is the transport number of Cl−. The liquid junction potential is the difference between the two EMFs of the two concentration cells, with and without ionic transference: $E_J = E_T - E = (2t_M - 1)\frac{RT}{F}\ln\frac{a_2}{a_1}$ Elimination: The liquid junction potential interferes with the exact measurement of the electromotive force of a chemical cell, so its effect should be minimized as much as possible for accurate measurement. The most common method of eliminating the liquid junction potential is to place a salt bridge consisting of a saturated solution of potassium chloride (KCl) and ammonium nitrate (NH4NO3) with lithium acetate (CH3COOLi) between the two solutions constituting the junction. When such a bridge is used, the ions in the bridge are present in large excess at the junction and they carry almost the whole of the current across the boundary. The efficiency of KCl/NH4NO3 is connected with the fact that in these salts, the transport numbers of anions and cations are the same.
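As a worked illustration of the junction-potential expression above, the short sketch below evaluates E_J = (2 t_M − 1)(RT/F) ln(a₂/a₁) at 25 °C for an assumed pair of HCl activities and an assumed Cl⁻ transport number of about 0.17; the numerical inputs are illustrative only and are not taken from the text.

```python
import math

R = 8.314        # J/(mol*K), gas constant
F = 96485.0      # C/mol, Faraday constant
T = 298.15       # K (25 degrees C)

def junction_potential(a1: float, a2: float, t_cl: float) -> float:
    """Liquid junction potential E_J = (2*t_Cl - 1) * (RT/F) * ln(a2/a1), in volts."""
    return (2.0 * t_cl - 1.0) * (R * T / F) * math.log(a2 / a1)

# Illustrative values: HCl activities 0.01 and 0.1, Cl- transport number ~0.17.
print(f"E_J = {junction_potential(0.01, 0.1, 0.17) * 1000:.1f} mV")
```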
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RIPK1** RIPK1: Receptor-interacting serine/threonine-protein kinase 1 (RIPK1) functions in a variety of cellular pathways related to both cell survival and death. In terms of cell death, RIPK1 plays a role in apoptosis and necroptosis. Some of the cell survival pathways RIPK1 participates in include NF-κB, Akt, and JNK. RIPK1 is an enzyme that in humans is encoded by the RIPK1 gene, which is located on chromosome 6. This protein belongs to the receptor-interacting protein (RIP) kinase family, which consists of 7 members, RIPK1 being the first member of the family. Structure: RIPK1 protein is composed of 671 amino acids and has a molecular weight of about 76 kDa. It contains a serine/threonine kinase domain (KD) in the 300 aa N-terminus, a death domain (DD) in the 112 aa C-terminus, and a central region between the KD and DD called the intermediate domain (ID). The kinase domain plays different roles in cell survival and is important in necroptosis induction. RIP interacts with TRAF2 via the kinase domain. The KD can also interact with necrostatin-1, which is an allosteric inhibitor of RIPK1 kinase activity. Overexpression of RIP lacking kinase activity can activate NF-kB. Structure: The death domain is homologous to the DD of other receptors such as Fas, TRAILR2 (DR5), TNFR1 and TRAILR1 (DR4), so it can bind to these receptors, as well as to TRADD and FADD in the TNFR1 signalling complex. Overexpression of RIP can induce apoptosis and can activate NF-kB, but overexpression of the RIP death domain can block NF-kB activation by TNF-R1. Structure: The intermediate domain is important for NF-kB activation and (RHIM)-dependent signalling. Via the intermediate domain, RIP can interact with TRAF2, NEMO, RIPK3, ZBP1, OPTN and other small molecules and proteins, depending on cellular context. Function: Although RIPK1 has been primarily studied in the context of TNFR signaling, it is also activated in response to diverse stimuli. The kinase domain, while important for necroptotic (programmed necrotic) functions, appears dispensable for pro-survival roles. Kinase activity of RIPK1 is also required for RIPK1-dependent apoptosis in conditions of IAP1/2 depletion, TAK1 inhibition/depletion, RIPK3 depletion or MLKL depletion. Also, proteolytic processing of RIPK1, through both caspase-dependent and -independent mechanisms, triggers lethality that is dependent on the generation of one or more specific C-terminal cleavage product(s) of RIPK1 upon stress. Function: Role in cell survival It has been shown that cell survival can be regulated through different RIPK1-mediated pathways that ultimately result in the expression of NF-kB, a protein complex known to regulate transcription of DNA and thus related to survival processes. Function: Receptor-mediated signalling The best-known pathway of NF-kB activation is that mediated by the death receptor TNFR1, which starts, as in the necroptosis pathway, with the assembly of TRADD, RIPK1, TRAF2 and cIAP1 in the lipid rafts of the plasma membrane (complex I is formed). In survival signalling, RIPK1 is then polyubiquitinated, allowing NEMO (NF-kB essential modulator) to bind to the IkB kinase or IKK complex. To activate IKK, the TAB2 and TAB3 adaptor proteins recruit TAK1 or MEKK3, which phosphorylate the complex. This results in the phosphorylation of the NF-kB inhibitors by the activated IKK complex, which in turn triggers their polyubiquitination and subsequent degradation in the 26S proteasome.
Function: As a result, NF-kB can now migrate to the nucleus, where it will control DNA transcription by binding to the promoters of specific genes. Some of those genes are thought to have anti-apoptotic properties as well as to promote proteasomal degradation of RIPK1, resulting in a self-regulatory cycle. Function: While part of complex I, RIPK1 has also been shown to play a role in the activation of MAP (mitogen-activated protein) kinases such as JNK, ERK and p38. In particular, JNK can be found in both cell death and survival pathways, with its role in the cell death process being suppressed by activated NF-kB. Cell survival signalling can also be mediated by TLR-3 and TLR-4 (toll-like receptors). Here, RIPK1 is recruited to the receptors, where it is phosphorylated and polyubiquitinated. This results in the recruitment of the IKK complex-activating proteins (TAK1, TAB1 and TAB2), so that NF-kB can eventually migrate to the nucleus in this pathway as well. RIPK2 is involved in this TLR-mediated signalling, which suggests that there might be a regulation of cell survival or death (the two possible outcomes) through the mutual interaction between the two RIPK family members. Function: Genotoxic stress-mediated activation Upon DNA damage, RIPK1 mediates another NF-kB activation pathway in which two simultaneous and exclusive processes occur. A pro-apoptotic complex is created while RIPK1 also mediates the interaction between PIDD, NEMO and the IKK subunits that will eventually result in IKK complex activation after interaction with ATM kinase (a protein stimulated by DNA double-strand breaks). The interaction between RIPK1 and PIDD through their death domains is thought to promote cell survival by neutralizing this pro-apoptotic complex. Function: Others It has been observed that RIPK1 may also interact with IGF-1R (insulin-like growth factor 1 receptor) to activate JNK (c-Jun N-terminal kinases), it may be related to epidermal growth factor receptor signalling, and it is largely expressed in glioblastoma cells, suggesting that RIPK1 is indeed involved in cell survival and proliferation processes. Function: Role in cell death Necroptosis Necroptosis is a programmed form of necrosis which starts with the binding of the TNF (tumor necrosis factor) ligand to its membrane receptor, the TNFR (tumor necrosis factor receptor). Once activated, the intracellular domain of TNFR starts the recruitment of the adaptor TNFR-1-associated death domain protein TRADD, which recruits RIPK1 and two ubiquitin ligases: TRAF2 and cIAP1. This complex is called the TNFR-1 complex I. Complex I is then modified by the IAPs (Inhibitor of Apoptosis Proteins) and the LUBAC (Linear Ubiquitination Assembly Complex), which generate linear ubiquitin linkages. The ubiquitination of complex I leads to the activation of NF-κB, which in turn activates the expression of the FLICE-like inhibitory protein FLIP. FLIP then binds to caspase-8, forming a caspase-8 FLIP heterodimer in the cytosol that disrupts the activity of caspase-8 and prevents caspase-8-mediated apoptosis from taking place. The assembly of complex II-b then starts in the cytosol. This new complex contains the caspase-8 FLIP heterodimer as well as RIPK1 and RIPK3. Caspase inhibition within this complex allows RIPK1 and RIPK3 to auto- and trans-phosphorylate each other, forming another complex called the necrosome. The necrosome starts recruiting MLKL (mixed lineage kinase domain-like protein), which is phosphorylated by RIPK3 and immediately translocates to lipid rafts inside the plasma membrane.
This leads to the formation of pores in the membrane, allowing the sodium influx to increase (and consequently the osmotic pressure), which eventually causes cell membrane rupture. Function: Apoptosis The apoptotic extrinsic pathway starts with the formation of the TNFR-1 complex I, which contains TRADD, RIPK1, and two ubiquitin ligases: TRAF2 and cIAP1. Unlike the necroptotic pathway, this pathway does not include the inhibition of caspase-8. Thus, in the absence of NF-κB function, FLIP is not produced, and therefore active caspase-8 assembles with FADD, RIPK1 and RIPK3 in the cytosol, forming what is known as complex IIa. Caspase-8 activates Bid, a protein that binds to the mitochondrial membrane, allowing the release of intermembrane mitochondrial molecules such as cytochrome c. Cytochrome c then assembles with Apaf-1 and ATP molecules, forming a complex called the apoptosome. The activation of caspases 3 and 9 by the apoptosome starts a proteolytic cascade that eventually leads to the degradation of organelles and proteins, and the fragmentation of the DNA, inducing apoptotic cell death. Neurodegenerative diseases: Alzheimer's disease Patients with Alzheimer's disease, a neurodegenerative disease characterized by cognitive deterioration and behavioural disorder, experience chronic brain inflammation which leads to the atrophy of several brain regions. A sign of this inflammation is an increased number of microglia, a type of glial cell located in the brain and the spinal cord. RIPK1 is known to appear in larger quantities in the brains of those affected with AD. This enzyme regulates not only necroptosis but cell inflammation as well, and as a result it is involved in the regulation of microglial functions, especially those associated with the appearance and development of neurodegenerative diseases such as AD. Neurodegenerative diseases: Amyotrophic lateral sclerosis Amyotrophic lateral sclerosis (ALS) is characterized by the degeneration of motor neurons, which leads to the progressive loss of mobility. Consequently, patients are unable to do any physical activity due to the atrophy of their muscles. The optineurin gene (OPTN) and its mutation are known to be involved in ALS. When the organism loses OPTN, the dysmyelination of axons and their degeneration begin. The degeneration of the axons is produced by several components of the central nervous system (CNS), including RIPK1 and another enzyme from the receptor-interacting protein kinase family, RIPK3, as well as other proteins such as MLKL. Once RIPK1, RIPK3 and MLKL have contributed to the dysmyelination and the consequent degeneration of axons, the nerve impulse cannot travel from one neuron to another due to the lack of myelin, which leads to the consequent mobility problems as the nerve impulse does not arrive at its final destination. Autoinflammatory disease: An autoinflammatory disease characterised by recurrent fevers and lymphadenopathy has been associated with mutations in this gene. CRIA syndrome (cleavage-resistant RIPK1-induced autoinflammatory syndrome) is a disorder caused by specific mutations of the RIPK1 gene. Symptoms include "fevers, swollen lymph nodes, severe abdominal pain, gastrointestinal problems, headaches and, in some cases, abnormally enlarged spleen and liver". Interactions: RIPK1 has been shown to interact with:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lock and key** Lock and key: A lock is a mechanical or electronic fastening device that is released by a physical object (such as a key, keycard, fingerprint, RFID card, security token or coin), by supplying secret information (such as a number or letter permutation or password), by a combination thereof, or it may only be able to be opened from one side, such as a door chain. Lock and key: A key is a device that is used to operate a lock (to lock or unlock it). A typical key is a small piece of metal consisting of two parts: the bit or blade, which slides into the keyway of the lock and distinguishes between different keys, and the bow, which is left protruding so that torque can be applied by the user. In its simplest implementation, a key operates one lock or set of locks that are keyed alike, a lock/key system where each similarly keyed lock requires the same, unique key. The key serves as a security token for access to the locked area; locks are meant to only allow persons having the correct key to open it and gain access. In more complex mechanical lock/key systems, two different keys, one of which is known as the master key, serve to open the lock. Common metals include brass, plated brass, nickel silver, and steel. History: Premodern history Locks have been in use for over 6000 years, with one early example discovered in the ruins of Nineveh, the capital of ancient Assyria. Locks such as this were developed into the Egyptian wooden pin lock, which consisted of a bolt, door fixture or attachment, and key. When the key was inserted, pins within the fixture were lifted out of drilled holes within the bolt, allowing it to move. When the key was removed, the pins fell part-way into the bolt, preventing movement.The warded lock was also present from antiquity and remains the most recognizable lock and key design in the Western world. The first all-metal locks appeared between the years 870 and 900, and are attributed to English craftsmen. It is also said that the key was invented by Theodorus of Samos in the 6th century BC.'The Romans invented metal locks and keys and the system of security provided by wards.'Affluent Romans often kept their valuables in secure locked boxes within their households, and wore the keys as rings on their fingers. The practice had two benefits: It kept the key handy at all times, while signaling that the wearer was wealthy and important enough to have money and jewellery worth securing. History: A special type of lock, dating back to the 17th-18th century, although potentially older as similar locks date back to the 14th century, can be found in the Beguinage of the Belgian city Lier. These locks are most likely Gothic locks, that were decorated with foliage, often in a V-shape surrounding the keyhole. They are often called drunk man's lock, however the reference to being drunk may be erroneous as these locks were, according to certain sources, designed in such a way a person can still find the keyhole in the dark, although this might not be the case as the ornaments might have been purely aesthetic. In more recent times similar locks have been designed. History: Modern locks With the onset of the Industrial Revolution in the late 18th century and the concomitant development of precision engineering and component standardization, locks and keys were manufactured with increasing complexity and sophistication.The lever tumbler lock, which uses a set of levers to prevent the bolt from moving in the lock, was invented by Robert Barron in 1778. 
His double acting lever lock required the lever to be lifted to a certain height by having a slot cut in the lever, so lifting the lever too far was as bad as not lifting the lever far enough. This type of lock is still used today. History: The lever tumbler lock was greatly improved by Jeremiah Chubb in 1818. A burglary in Portsmouth Dockyard prompted the British Government to announce a competition to produce a lock that could be opened only with its own key. Chubb developed the Chubb detector lock, which incorporated an integral security feature that could frustrate unauthorized access attempts and would indicate to the lock's owner if it had been interfered with. Chubb was awarded £100 after a trained lock-picker failed to break the lock after 3 months.In 1820, Jeremiah joined his brother Charles in starting their own lock company, Chubb. Chubb made various improvements to his lock: his 1824 improved design did not require a special regulator key to reset the lock; by 1847 his keys used six levers rather than four; and he later introduced a disc that allowed the key to pass but narrowed the field of view, hiding the levers from anybody attempting to pick the lock. The Chubb brothers also received a patent for the first burglar-resisting safe and began production in 1835. History: The designs of Barron and Chubb were based on the use of movable levers, but Joseph Bramah, a prolific inventor, developed an alternative method in 1784. His lock used a cylindrical key with precise notches along the surface; these moved the metal slides that impeded the turning of the bolt into an exact alignment, allowing the lock to open. The lock was at the limits of the precision manufacturing capabilities of the time and was said by its inventor to be unpickable. In the same year Bramah started the Bramah Locks company at 124 Piccadilly, and displayed the "Challenge Lock" in the window of his shop from 1790, challenging "...the artist who can make an instrument that will pick or open this lock" for the reward of £200. The challenge stood for over 67 years until, at the Great Exhibition of 1851, the American locksmith Alfred Charles Hobbs was able to open the lock and, following some argument about the circumstances under which he had opened it, was awarded the prize. Hobbs' attempt required some 51 hours, spread over 16 days. History: The earliest patent for a double-acting pin tumbler lock was granted to American physician Abraham O. Stansbury in England in 1805, but the modern version, still in use today, was invented by American Linus Yale Sr. in 1848. This lock design used pins of varying lengths to prevent the lock from opening without the correct key. In 1861, Linus Yale Jr. was inspired by the original 1840s pin-tumbler lock designed by his father, thus inventing and patenting a smaller flat key with serrated edges as well as pins of varying lengths within the lock itself, the same design of the pin-tumbler lock which still remains in use today. The modern Yale lock is essentially a more developed version of the Egyptian lock. History: Despite some improvement in key design since, the majority of locks today are still variants of the designs invented by Bramah, Chubb and Yale. Types of lock: With physical keys A warded lock uses a set of obstructions, or wards, to prevent the lock from opening unless the correct key is inserted. The key has notches or slots that correspond to the obstructions in the lock, allowing it to rotate freely inside the lock. 
Warded locks are typically reserved for low-security applications as a well-designed skeleton key can successfully open a wide variety of warded locks. Types of lock: The pin tumbler lock uses a set of pins to prevent the lock from opening unless the correct key is inserted. The key has a series of grooves on either side of the key's blade that limit the type of lock the key can slide into. As the key slides into the lock, the horizontal grooves on the blade align with the wards in the keyway allowing or denying entry to the cylinder. A series of pointed teeth and notches on the blade, called bittings, then allow pins to move up and down until they are in line with the shear line of the inner and outer cylinder, allowing the cylinder or cam to rotate freely and the lock to open. An additional pin called the master pin is present between the key and driver pins in locks that accept master keys, to allow the plug to rotate at multiple pin elevations. Types of lock: A wafer tumbler lock is similar to the pin tumbler lock and works on a similar principle. However, unlike the pin lock (where each pin consists of two or more pieces) each wafer is a single piece. The wafer tumbler lock is often incorrectly referred to as a disc tumbler lock, which uses an entirely different mechanism. The wafer lock is relatively inexpensive to produce and is often used in automobiles and cabinetry. Types of lock: The disc tumbler lock or Abloy lock is composed of slotted rotating detainer discs. The lever tumbler lock uses a set of levers to prevent the bolt from moving in the lock. In its simplest form, lifting the tumbler above a certain height will allow the bolt to slide past. Lever locks are commonly recessed inside wooden doors or on some older forms of padlocks, including fire brigade padlocks. A magnetic keyed lock is a locking mechanism whereby the key utilizes magnets as part of the locking and unlocking mechanism. A magnetic key would use from one to many small magnets oriented so that the North and South poles would equate to a combination to push or pull the lock's internal tumblers thus releasing the lock. Types of lock: With electronic keys An electronic lock works by means of an electric current and is usually connected to an access control system. In addition to the pin and tumbler used in standard locks, electronic locks connect the bolt or cylinder to a motor within the door using a part called an actuator. Types of electronic locks include the following: A keycard lock operates with a flat card of similar dimensions as a credit card. In order to open the door, one needs to successfully match the signature within the keycard. Types of lock: The lock in a typical remote keyless system operates with a smart key radio transmitter. The lock typically accepts a particular valid code only once, and the smart key transmits a different rolling code every time the button is pressed. Generally the car door can be opened with either a valid code by radio transmission, or with a (non-electronic) pin tumbler key. The ignition switch may require a transponder car key to both open a pin tumbler lock and also transmit a valid code by radio transmission. Types of lock: A smart lock is an electromechanics lock that gets instructions to lock and unlock the door from an authorized device using a cryptographic key and wireless protocol. Smart locks have begun to be used more commonly in residential areas, often controlled with smartphones. 
Smart locks are used in coworking spaces and offices to enable keyless office entry. In addition, electronic locks cannot be picked with conventional tools. Locksmithing: Locksmithing is a traditional trade, and in most countries requires completion of an apprenticeship. The level of formal education required varies from country to country, from no qualifications required at all in the UK, to a simple training certificate awarded by an employer, to a full diploma from an engineering college. Locksmiths may be commercial (working out of a storefront), mobile (working out of a vehicle), institutional, or investigational (forensic locksmiths). They may specialize in one aspect of the skill, such as an automotive lock specialist, a master key system specialist or a safe technician. Many also act as security consultants, but not all security consultants have the skills and knowledge of a locksmith.Historically, locksmiths constructed or repaired an entire lock, including its constituent parts. The rise of cheap mass production has made this less common; the vast majority of locks are repaired through like-for-like replacements, high-security safes and strongboxes being the most common exception. Many locksmiths also work on any existing door hardware, including door closers, hinges, electric strikes, and frame repairs, or service electronic locks by making keys for transponder-equipped vehicles and implementing access control systems. Locksmithing: Although the fitting and replacement of keys remains an important part of locksmithing, modern locksmiths are primarily involved in the installation of high quality lock-sets and the design, implementation, and management of keying and key control systems. Locksmiths are frequently required to determine the level of risk to an individual or institution and then recommend and implement appropriate combinations of equipment and policies to create a "security layer" that exceeds the reasonable gain of an intruder. Locksmithing: Key duplication Traditional key cutting is the primary method of key duplication. It is a subtractive process named after the metalworking process of cutting, where a flat blank key is ground down to form the same shape as the template (original) key. The process roughly follows these stages: The original key is fitted into a vise in a machine, with a blank attached to a parallel vise which is mechanically linked. Locksmithing: The original key is moved along a guide in a movement which follows the key's shape, while the blank is moved in the same pattern against a cutting wheel by the mechanical linkage between the vices. Locksmithing: After cutting, the new key is deburred by scrubbing it with a metal brush to remove particles of metal which could be dangerously sharp and foul locks.Modern key cutting replaces the mechanical key following aspect with a process in which the original key is scanned electronically, processed by software, stored, then used to guide a cutting wheel when a key is produced. The capability to store electronic copies of the key's shape allows for key shapes to be stored for key cutting by any party that has access to the key image. Locksmithing: Different key cutting machines are more or less automated, using different milling or grinding equipment, and follow the design of early 20th century key duplicators. Key duplication is available in many retail hardware stores and as a service of the specialized locksmith, though the correct key blank may not be available. 
More recently, online services for duplicating keys have become available. Keyhole: A keyhole (or keyway) is a hole or aperture (as in a door or lock) for receiving a key. Lock keyway shapes vary widely with lock manufacturer, and many manufacturers have a number of unique profiles requiring a specifically milled key blank to engage the lock's tumblers. Symbolism: Heraldry Keys appear in various symbols and coats of arms, the best-known being that of the Holy See: derived from the phrase in Matthew 16:19 which promises Saint Peter, in Roman Catholic tradition the first pope, the Keys of Heaven. But this is by no means the only case. Many examples are given on Commons. Artwork Some works of art associate keys with the Greek goddess of witchcraft known as Hecate. Symbolism: Palestinian key The Palestinian key is the Palestinian collective symbol of their homes lost in the Nakba, when more than half of the population of Mandatory Palestine was expelled or fled violence in 1948 and subsequently refused the right to return. Since 2016, a Palestinian restaurant in Doha, Qatar, holds the Guinness World Record for the world's largest key – 2.7 tonnes and 7.8 x 3 meters.
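As a toy illustration of the pin tumbler principle described in the "Types of lock" section above, the sketch below models each pin stack as a key pin plus a driver pin and checks whether a given key bitting lifts every cut exactly to the shear line. All numbers and names are hypothetical, and real locks involve manufacturing tolerances, master pins, and security pins that this deliberately ignores.

```python
# Toy model of a pin tumbler lock: the plug turns only when every key pin is
# lifted so that the key-pin/driver-pin boundary sits exactly at the shear line.
from typing import List

SHEAR_LINE = 10  # arbitrary units from the bottom of the plug

def plug_turns(key_pin_lengths: List[int], key_bitting: List[int]) -> bool:
    """Return True if the key's bitting aligns every pin stack with the shear line."""
    if len(key_pin_lengths) != len(key_bitting):
        return False
    return all(pin + cut == SHEAR_LINE
               for pin, cut in zip(key_pin_lengths, key_bitting))

lock_pins = [4, 6, 3, 7, 5]          # key pin lengths installed in this lock
correct_key = [6, 4, 7, 3, 5]        # bitting depths that complement the pins
wrong_key   = [5, 5, 5, 5, 5]

print(plug_turns(lock_pins, correct_key))  # True  -> cylinder rotates
print(plug_turns(lock_pins, wrong_key))    # False -> driver pins block the shear line
```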
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solar eclipse of November 3, 2032** Solar eclipse of November 3, 2032: A partial solar eclipse will occur on November 3, 2032. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. Related eclipses: Solar eclipses 2029–2032 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. Note: Partial solar eclipses on January 14, 2029 and July 11, 2029 occur in the previous lunar year eclipse set. Metonic series The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days). All eclipses in this table occur at the Moon's ascending node.
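The Metonic-cycle figures quoted above can be checked with a couple of lines of arithmetic. The sketch below assumes the standard identification of the Metonic cycle with 235 synodic months and a mean synodic month of about 29.530589 days; neither assumption is stated in the passage itself.

```python
# Check of the Metonic-cycle figures quoted above.
synodic_month = 29.530589          # mean synodic month in days (assumed value)
metonic = 235 * synodic_month      # 235 lunations ~= 6939.69 days ~= 19 years
octon = metonic / 5                # 1/5 of a Metonic cycle ~= 1387.94 days ~= 3.8 years
print(f"Metonic cycle ~ {metonic:.2f} days, octon ~ {octon:.2f} days")
```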
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SRX expansion board** SRX expansion board: The SRX are a series of expansion boards produced by Roland Corporation. First introduced in 2000, they are small boards of electronic circuitry with 64MB ROMs containing patches (timbres) and rhythm sets (drum kits). They are used to expand certain models of Roland synthesizers, music workstations, keyboards, and sound modules. SRX expansion board: Predecessor formats include the 15 SN-U110 PCM cards (U-110, U-20, U-220, D-70, CM-64 and CM-32P), 8 SL-JD80 PCM card/preset RAM card (JD-only) sets and 8 SO-PCM1 1-2 MB cards (both JD-800, JD-990, JV-80, JV-880, JV-90, JV-1000 and JV-1080), 22 SR-JV80 expansion boards (JD-990, JV-880, JV-1010, JV-1080, JV-2080, XV-3080, XV-5080, JV-80, JV-90, JV-1000, XP-30, XP-50, XP-60, XP-80, Fantom FA76, XV-88) and others. Expansion boards: SRX-01 Dynamic Drum Kits SRX-02 Concert Piano SRX-03 Studio SRX SRX-04 Symphonique Strings SRX-05 Supreme Dance SRX-06 Complete Orchestra SRX-07 Ultimate Keys SRX-08 Platinum Trax SRX-09 World Collection SRX-10 Big Brass Ensemble SRX-11 Complete Piano SRX-12 Classic EPs SRX-96 World Collection and Legendary XP Essentials (special SRX board 2008) SRX-97 Jon Lord's Rock Organ (special SRX board 2007) SRX-98 Analog Essentials (special SRX board 2006) SRX-99 Special Wave Expansion (promo released mid-2004) Compatible hosts: According to Roland, the following products accept SRX expansion boards. The number in parenthesis indicates the number of SRX boards each unit can accept. Compatible hosts: Fantom workstation (2) Fantom-S series (4) Fantom-X series (4) Fantom-XR rack unit (6) Juno-G (1) Juno-Stage (2) RD-700, RD-700SX, and RD-700GX (2) V-Combo (2) G-70 (1) E-80 (2) Roland MC-909 (1) SonicCell module (2) XV-88 (2) XV-5050, XV-3080, and XV-2020 modules (2) XV-5080 (4) V-Studio 700 (1)Some later SRX cards, for example the SRX96 and 97 do not work in the XV3080 host synthesizer module nor in the XV-88 keyboard synthesizer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IGEPAL CA-630** IGEPAL CA-630: IGEPAL CA-630 is a nonionic, non-denaturing detergent. Its official IUPAC name is octylphenoxypolyethoxyethanol. IGEPAL is a registered trademark of Rhodia. IGEPAL CA-630: IGEPAL CA-630 is sold by Sigma-Aldrich and is claimed to be a "chemically indistinguishable" substitute for Nonidet P-40 (a trademark of Shell Chemical Company), which is no longer manufactured. However, a 2017 publication reported that IGEPAL 630 was ten-fold more potent than Nonidet P-40 in a tubulin polymerisation assay. All IGEPAL CA surfactants are derived from octylphenol, which serves as the hydrophobic part. Different amounts of ethylene oxide are combined with this part to get a balance of hydrophobic/hydrophilic character (measured by HLB). This balance has an important impact on wetting, detergency, foam, solubility, and emulsification. IGEPAL CA-630 has an HLB of 13.4, similar to that of Triton X-100 (13.4), and thus belongs to the detergent range (HLB 13-15); this is significantly less than the 17.8 of Tergitol NP-40 or the 16.7 of Polysorbate 20 (also known as Tween 20), which both belong in the solubilizer range (HLB 15-18). CA-630 is completely miscible with water. Human Toxicity: CA-630 is neither a primary skin irritant nor a sensitizer. However, it is a severe eye irritant.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Necrobiotic xanthogranuloma** Necrobiotic xanthogranuloma: Necrobiotic xanthogranuloma (also known as "necrobiotic xanthogranuloma with paraproteinemia") is a multisystem disease that affects older adults, and is characterized by prominent skin findings.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Evaluation of machine translation** Evaluation of machine translation: Various methods for the evaluation of machine translation have been employed. This article focuses on the evaluation of the output of machine translation, rather than on performance or usability evaluation. Round-trip translation: A typical way for lay people to assess machine translation quality is to translate from a source language to a target language and back to the source language with the same engine. Though intuitively this may seem like a good method of evaluation, it has been shown that round-trip translation is a "poor predictor of quality". The reason why it is such a poor predictor of quality is reasonably intuitive. A round-trip translation is not testing one system, but two systems: the language pair of the engine for translating into the target language, and the language pair translating back from the target language. Round-trip translation: Consider the following examples of round-trip translation performed from English to Italian and Portuguese from Somers (2005): In the first example, where the text is translated into Italian and then back into English, the English text is significantly garbled, but the Italian is a serviceable translation. In the second example, the text translated back into English is perfect, but the Portuguese translation is meaningless; the program thought "tit" was a reference to a tit (bird), which was intended for a "tat", a word it did not understand. Round-trip translation: While round-trip translation may be useful to generate a "surplus of fun," the methodology is deficient for serious study of machine translation quality. Human evaluation: This section covers two of the large scale evaluation studies that have had significant impact on the field—the ALPAC 1966 study and the ARPA study. Human evaluation: Automatic Language Processing Advisory Committee (ALPAC) One of the constituent parts of the ALPAC report was a study comparing different levels of human translation with machine translation output, using human subjects as judges. The human judges were specially trained for the purpose. The evaluation study compared an MT system translating from Russian into English with human translators, on two variables. Human evaluation: The variables studied were "intelligibility" and "fidelity". Intelligibility was a measure of how "understandable" the sentence was, and was measured on a scale of 1–9. Fidelity was a measure of how much information the translated sentence retained compared to the original, and was measured on a scale of 0–9. Each point on the scale was associated with a textual description. For example, 3 on the intelligibility scale was described as "Generally unintelligible; it tends to read like nonsense but, with a considerable amount of reflection and study, one can at least hypothesize the idea intended by the sentence". Intelligibility was measured without reference to the original, while fidelity was measured indirectly. The translated sentence was presented, and after reading it and absorbing the content, the original sentence was presented. The judges were asked to rate the original sentence on informativeness. So, the more informative the original sentence, the lower the quality of the translation. Human evaluation: The study showed that the variables were highly correlated when the human judgment was averaged per sentence. The variation among raters was small, but the researchers recommended that at the very least, three or four raters should be used.
The evaluation methodology managed to separate translations by humans from translations by machines with ease. The study concluded that, "highly reliable assessments can be made of the quality of human and machine translations". Human evaluation: Advanced Research Projects Agency (ARPA) As part of the Human Language Technologies Program, the Advanced Research Projects Agency (ARPA) created a methodology to evaluate machine translation systems, and continues to perform evaluations based on this methodology. The evaluation programme was instigated in 1991, and continues to this day. Details of the programme can be found in White et al. (1994) and White (1995). Human evaluation: The evaluation programme involved testing several systems based on different theoretical approaches: statistical, rule-based and human-assisted. A number of methods for the evaluation of the output from these systems were tested in 1992 and the most recent suitable methods were selected for inclusion in the programmes for subsequent years. The methods were: comprehension evaluation, quality panel evaluation, and evaluation based on adequacy and fluency. Human evaluation: Comprehension evaluation aimed to directly compare systems based on the results from multiple choice comprehension tests, as in Church et al. (1993). The texts chosen were a set of articles in English on the subject of financial news. These articles were translated by professional translators into a series of language pairs, and then translated back into English using the machine translation systems. It was decided that this was not adequate as a standalone method of comparing systems, and as such it was abandoned due to issues with the modification of meaning in the process of translating from English. Human evaluation: The idea of quality panel evaluation was to submit translations to a panel of expert native English speakers who were professional translators and get them to evaluate them. The evaluations were done on the basis of a metric, modelled on a standard US government metric used to rate human translations. This was good from the point of view that the metric was "externally motivated", since it was not specifically developed for machine translation. However, the quality panel evaluation was very difficult to set up logistically, as it necessitated having a number of experts together in one place for a week or more, and furthermore for them to reach consensus. This method was also abandoned. Human evaluation: Along with a modified form of the comprehension evaluation (re-styled as informativeness evaluation), the most popular method was to obtain ratings from monolingual judges for segments of a document. The judges were presented with a segment, and asked to rate it for two variables, adequacy and fluency. Adequacy is a rating of how much information is transferred between the original and the translation, and fluency is a rating of how good the English is. This technique was found to cover the relevant parts of the quality panel evaluation, while at the same time being easier to deploy, as it did not require expert judgment. Human evaluation: Measuring systems based on adequacy and fluency, along with informativeness, is now the standard methodology for the ARPA evaluation program. Automatic evaluation: In the context of this article, a metric is a measurement. A metric that evaluates machine translation output represents the quality of the output. The quality of a translation is inherently subjective; there is no objective or quantifiable "good."
Therefore, any metric must assign quality scores so they correlate with the human judgment of quality. That is, a metric should give high scores to translations that humans score highly, and low scores to those that humans score poorly. Human judgment is the benchmark for assessing automatic metrics, as humans are the end-users of any translation output. Automatic evaluation: The measure of evaluation for metrics is correlation with human judgment. This is generally done at two levels: at the sentence level, where scores are calculated by the metric for a set of translated sentences and then correlated against human judgment for the same sentences; and at the corpus level, where scores over the sentences are aggregated for both human judgments and metric judgments, and these aggregate scores are then correlated. Figures for correlation at the sentence level are rarely reported, although Banerjee et al. (2005) do give correlation figures that show that, at least for their metric, sentence-level correlation is substantially worse than corpus-level correlation. Automatic evaluation: While not widely reported, it has been noted that the genre, or domain, of a text has an effect on the correlation obtained when using metrics. Coughlin (2003) reports that comparing the candidate text against a single reference translation does not adversely affect the correlation of metrics when working with restricted-domain text. Automatic evaluation: Even if a metric correlates well with human judgment in one study on one corpus, this successful correlation may not carry over to another corpus. Good metric performance, across text types or domains, is important for the reusability of the metric. A metric that only works for text in a specific domain is useful, but less useful than one that works across many domains—because creating a new metric for every new evaluation or domain is undesirable. Automatic evaluation: Another important factor in the usefulness of an evaluation metric is to have a good correlation, even when working with small amounts of data, that is, candidate sentences and reference translations. Turian et al. (2003) point out that, "Any MT evaluation measure is less reliable on shorter translations", and show that increasing the amount of data improves the reliability of a metric. However, they add that "... reliability on shorter texts, as short as one sentence or even one phrase, is highly desirable because a reliable MT evaluation measure can greatly accelerate exploratory data analysis". Banerjee et al. (2005) highlight five attributes that a good automatic metric must possess: correlation, sensitivity, consistency, reliability and generality. Any good metric must correlate highly with human judgment; it must be consistent, giving similar results to the same MT system on similar text. It must be sensitive to differences between MT systems, and reliable in that MT systems that score similarly should be expected to perform similarly. Finally, the metric must be general, that is, it should work with different text domains, in a wide range of scenarios and MT tasks. Automatic evaluation: The aim of this subsection is to give an overview of the state of the art in automatic metrics for evaluating machine translation. Automatic evaluation: BLEU BLEU was one of the first metrics to report a high correlation with human judgments of quality. The metric is currently one of the most popular in the field.
The central idea behind the metric is that "the closer a machine translation is to a professional human translation, the better it is". The metric calculates scores for individual segments, generally sentences, and then averages these scores over the whole corpus for a final score. It has been shown to correlate highly with human judgments of quality at the corpus level. BLEU uses a modified form of precision to compare a candidate translation against multiple reference translations. The metric modifies simple precision since machine translation systems have been known to generate more words than appear in a reference text. No other machine translation metric has yet been shown to significantly outperform BLEU with respect to correlation with human judgment across language pairs. Automatic evaluation: NIST The NIST metric is based on the BLEU metric, but with some alterations. Where BLEU simply calculates n-gram precision adding equal weight to each one, NIST also calculates how informative a particular n-gram is. That is to say, when a correct n-gram is found, the rarer that n-gram is, the more weight it is given. For example, if the bigram "on the" correctly matches, it receives lower weight than the correct matching of the bigram "interesting calculations," as this is less likely to occur. NIST also differs from BLEU in its calculation of the brevity penalty, insofar as small variations in translation length do not impact the overall score as much. Automatic evaluation: Word error rate The Word error rate (WER) is a metric based on the Levenshtein distance: whereas the Levenshtein distance works at the character level, WER works at the word level. It was originally used for measuring the performance of speech recognition systems but is also used in the evaluation of machine translation. The metric is based on the calculation of the number of words that differ between a piece of machine-translated text and a reference translation. Automatic evaluation: A related metric is the Position-independent word error rate (PER), which allows for the re-ordering of words and sequences of words between a translated text and a reference translation. Automatic evaluation: METEOR The METEOR metric is designed to address some of the deficiencies inherent in the BLEU metric. The metric is based on the weighted harmonic mean of unigram precision and unigram recall. The metric was designed after research by Lavie (2004) into the significance of recall in evaluation metrics. Their research showed that metrics based on recall consistently achieved higher correlation than those based on precision alone, cf. BLEU and NIST. METEOR also includes some other features not found in other metrics, such as synonymy matching, where instead of matching only on the exact word form, the metric also matches on synonyms. For example, the word "good" in the reference rendered as "well" in the translation counts as a match. The metric also includes a stemmer, which lemmatises words and matches on the lemmatised forms. The implementation of the metric is modular insofar as the algorithms that match words are implemented as modules, and new modules that implement different matching strategies may easily be added. Automatic evaluation: LEPOR A new MT evaluation metric LEPOR was proposed as the combination of many evaluation factors including existing ones (precision, recall) and modified ones (sentence-length penalty and n-gram based word order penalty).
The metric was tested on eight language pairs from ACL-WMT2011, English to other languages (Spanish, French, German, and Czech) and the inverse, and the experiments showed that LEPOR yielded higher system-level correlation with human judgments than several existing metrics such as BLEU, Meteor-1.3, TER, AMBER and MP4IBM1. An enhanced version of the LEPOR metric, hLEPOR, is introduced in the paper. hLEPOR uses the harmonic mean to combine the sub-factors of the designed metric. Furthermore, the authors designed a set of parameters to tune the weights of the sub-factors according to different language pairs. The ACL-WMT13 Metrics shared task results show that hLEPOR yields the highest Pearson correlation score with human judgment on the English-to-Russian language pair, in addition to the highest average score over five language pairs (English to German, French, Spanish, Czech, and Russian). The detailed results of the WMT13 Metrics Task are presented in the paper. Overviews on Human and Automatic Evaluation Methodologies: Several survey works on machine translation evaluation describe in more detail the kinds of human evaluation methods used and how they work, such as intelligibility, fidelity, fluency, adequacy, comprehension, and informativeness. For automatic evaluation, they also give clear classifications, such as lexical similarity methods and the application of linguistic features, and the subfields of these two aspects: lexical similarity covers edit distance, precision, recall and word order, while linguistic features are divided into syntactic and semantic features. Some state-of-the-art overviews of both manual and automatic translation evaluation cover recently developed translation quality assessment (TQA) methodologies, such as crowd-sourced intelligence using Amazon Mechanical Turk, statistical significance testing, re-visiting traditional criteria with newly designed strategies, as well as MT quality estimation (QE) shared tasks from the annual workshop on MT (WMT) and corresponding models that do not rely on human-provided reference translations. Software for Automated Evaluation: Asia Online Language Studio (supports BLEU, TER, F-Measure, METEOR), BLEU, F-Measure, NIST, METEOR, TER, TERP, LEPOR, hLEPOR, KantanAnalytics (segment-level MT quality estimation).
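As a rough illustration of two of the ideas described above, the sketch below is a minimal, illustrative Python implementation rather than the reference BLEU or WER tooling: it computes the clipped n-gram precision on which BLEU is built, and the word-level edit distance on which WER is built (real BLEU additionally applies a brevity penalty and a geometric average over n-gram orders at corpus level).

```python
from collections import Counter

def modified_ngram_precision(candidate, references, n=1):
    """Clipped n-gram precision: each candidate n-gram is credited at most
    as many times as it appears in any single reference translation."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate, n)
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def word_error_rate(hypothesis, reference):
    """WER = word-level Levenshtein distance divided by reference length."""
    d = [[0] * (len(reference) + 1) for _ in range(len(hypothesis) + 1)]
    for i in range(len(hypothesis) + 1):
        d[i][0] = i
    for j in range(len(reference) + 1):
        d[0][j] = j
    for i in range(1, len(hypothesis) + 1):
        for j in range(1, len(reference) + 1):
            cost = 0 if hypothesis[i - 1] == reference[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(hypothesis)][len(reference)] / len(reference)

cand = "the cat sat on mat".split()
refs = ["the cat sat on the mat".split()]
print(modified_ngram_precision(cand, refs, n=1))  # 1.0
print(modified_ngram_precision(cand, refs, n=2))  # 0.75
print(word_error_rate(cand, refs[0]))             # ~0.17
```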
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Index mineral** Index mineral: An index mineral is used in geology to determine the degree of metamorphism a rock has experienced. Depending on the original composition of, and the pressure and temperature experienced by, the protolith (parent rock), chemical reactions between minerals in the solid state produce new minerals. When an index mineral is found in a metamorphosed rock, it indicates the minimum pressure and temperature the protolith must have achieved in order for that mineral to form. The higher the pressure and temperature in which the rock formed, the higher the grade of the rock. Index mineral: The concept traces its roots to 1912, when G. M. Barrow mapped zones of metamorphism in southern Scotland. Each zone is named for the index mineral that appears in it. For example, the chlorite zone is named for chlorite. Mineralogic zones: Mudrock, a fine-grained sedimentary rock often containing aluminium-rich minerals, produces these minerals after being metamorphosed, from low to high grade: Chlorite zone: quartz, chlorite, muscovite, albite Biotite zone: quartz, muscovite, biotite, chlorite, albite Garnet zone: quartz, muscovite, biotite, garnet, sodic plagioclase Staurolite zone: quartz, muscovite, biotite, garnet, staurolite, plagioclase Kyanite zone: quartz, muscovite, biotite, garnet, kyanite, plagioclase, +/- staurolite Sillimanite zone: quartz, muscovite, biotite, garnet, sillimanite, plagioclase
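A minimal lookup sketch of the Barrovian zone sequence listed above (purely illustrative Python; the zone order and assemblages are taken from the list above, and the grade logic is simply the "highest-grade index mineral observed" rule described in the article):

```python
# Barrovian zones for metamorphosed mudrock, lowest to highest grade,
# with the assemblages listed above (illustrative only).
ZONES = [
    ("chlorite",    ["quartz", "chlorite", "muscovite", "albite"]),
    ("biotite",     ["quartz", "muscovite", "biotite", "chlorite", "albite"]),
    ("garnet",      ["quartz", "muscovite", "biotite", "garnet", "sodic plagioclase"]),
    ("staurolite",  ["quartz", "muscovite", "biotite", "garnet", "staurolite", "plagioclase"]),
    ("kyanite",     ["quartz", "muscovite", "biotite", "garnet", "kyanite", "plagioclase"]),
    ("sillimanite", ["quartz", "muscovite", "biotite", "garnet", "sillimanite", "plagioclase"]),
]

def minimum_grade(observed_minerals):
    """Return the highest-grade zone whose index mineral was observed,
    i.e. the minimum pressure-temperature conditions the rock reached."""
    best = None
    for index_mineral, _assemblage in ZONES:
        if index_mineral in observed_minerals:
            best = index_mineral
    return best

print(minimum_grade({"quartz", "muscovite", "biotite", "garnet"}))  # garnet
```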
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bathometer** Bathometer: A bathometer (also bathymeter) is an instrument for measuring water depth. It was previously used mainly in oceanographical studies, but is rarely employed nowadays. The term originates from Greek βαθύς (bathys), "deep" and μέτρον (métron), "measure". History: The earliest idea for a bathometer is due to Leon Battista Alberti (1404–1472), who sank a hollow sphere attached to some ballast with a hook. When the ball reached the bottom it detached from the ballast and resurfaced. The depth was determined (rather inaccurately) by the time it took to surface. Jacob Perkins (1766–1849) proposed a bathometer based on the compressibility of water. In this instrument the movement of a piston compressing a body of water enclosed in its cylinder is dependent on the pressure of the water outside the cylinder, and hence its depth. The amount the piston moved can be measured when it is returned to the surface. A bathometer that did not need to be submerged was invented in 1876 by William Siemens, stimulated by the needs of the telegraph industry. Siemens' instrument was the first to come into widespread use and is so different and so much more practical than anything that had gone before that he is often credited as the inventor of the bathometer. His instrument consisted of a tube of mercury and worked similarly to a barometer. The pressure of the mercury acting under the force of gravity pushed down on, and deformed, a thin steel sheet. The height of the mercury in the column was thus proportional to the strength of the Earth's gravity field. The theory of the instrument was that the greater the depth of water under the ship, the lower the gravitational force would be. This is because water has a much lower density than the rocks of the Earth's crust. Starting in the mid-nineteenth century, submarine telegraph cables were being laid around the world. Accurate knowledge of the depth of the ocean bed was important for this work. Previously, depth was determined by taking soundings with a lead line, a time-consuming and difficult method.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Harrison-Meldola Memorial Prizes** Harrison-Meldola Memorial Prizes: The Harrison-Meldola Memorial Prizes are annual prizes awarded by the Royal Society of Chemistry to chemists in Britain who are 34 years of age or below. The prizes are given to scientists who have carried out the most meritorious and promising original investigations in chemistry and published the results of those investigations. Three prizes are given every year, each winner receiving £5,000 and a medal. Candidates are not permitted to nominate themselves. Harrison-Meldola Memorial Prizes: They were begun in 2008 when two previous awards, the Meldola Medal and Prize and the Edward Harrison Memorial Prize, were joined together. They commemorate Raphael Meldola and Edward Harrison. Winners of the Harrison-Meldola Memorial Prizes: Source: Royal Society of Chemistry 2022 Volker Deringer, University of Oxford Marina Freitag, Newcastle University Paul McGonigal, Durham University 2021 Nicholas Chilton, University of Manchester Fernanda Duarte, University of Oxford Ceri Hammond, Imperial College London 2020 Thomas Bennett, University of Cambridge Anthony Green, University of Manchester Sihai Yang, University of Manchester 2019 Rebecca Melen, Cardiff University Robert Phipps, University of Cambridge Mathew Powner, University College London 2018 Kim Jelfs, Imperial College London Daniele Leonori, University of Manchester David Mills, University of Manchester 2017 Matthew Baker, University of Strathclyde Mark Crimmin, Imperial College London Elaine O'Reilly, The University of Nottingham 2016 Gonçalo Bernardes, University of Cambridge Susan Perkin, University of Oxford Sarah Staniland, The University of Sheffield 2015 Adrian Chaplin, University of Warwick David Scanlon, University College London Robert Paton, University of Oxford 2014 David Glowacki, University of Bristol Erwin Reisner, University of Cambridge Matthew Fuchter, Imperial College London 2013 Andrew Baldwin, University of Oxford John Bower, University of Bristol Aron Walsh, University of Bath 2012 Michael Ingleson, University of Manchester Tuomas Knowles, University of Cambridge Marina Kuimova, Imperial College London 2011 Craig Banks, Manchester Metropolitan University Tomislav Friscic, University of Cambridge Philipp Kukura, University of Oxford 2010 Scott Dalgarno, Heriot-Watt University Andrew Goodwin, University of Oxford Nathan S Lawrence, Schlumberger Cambridge Research 2009 Eva Hevia, University of Strathclyde Petra Cameron, University of Bath Oren Scherman, University of Cambridge Previous winners of the Meldola Medal and Prize: The Meldola Medal and Prize commemorated Raphael Meldola, President of the Maccabaeans and the Institute of Chemistry. The last winners of the prize in 2007 were Hon Lam from the University of Edinburgh, and Rachel O'Reilly of the University of Cambridge. Previous winners of the Edward Harrison Memorial Prize: The Edward Harrison Memorial Prize commemorated the work of Edward Harrison, who was credited with producing the first serviceable gas mask. The last winner of the prize was Katherine Holt of University College London.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alder carr** Alder carr: An alder carr is a particular type of carr, i.e. waterlogged wooded terrain populated with alder trees. Examples: Alder Carr, Hildersham Alderfen Broad Fawley Ford on the Beaulieu River Biebrza National Park Fen Alder Carr Harston Wood Holywells Park, Ipswich: Pond 5 is known as Alder Carr and is a biodiversity action plan habitat. Historically there was another Alder Carr in the Cobbold family estate in what is now the northern edge of the Landseer Park. Jackson's Coppice and Marsh Loynton Moss
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Seismic intensity scales** Seismic intensity scales: Seismic intensity scales categorize the intensity or severity of ground shaking (quaking) at a given location, such as that resulting from an earthquake. They are distinguished from seismic magnitude scales, which measure the magnitude or overall strength of an earthquake, which may, or perhaps may not, cause perceptible shaking. Seismic intensity scales: Intensity scales are based on the observed effects of the shaking, such as the degree to which people or animals were alarmed, and the extent and severity of damage to different kinds of structures or natural features. The maximal intensity observed, and the extent of the area where shaking was felt (see isoseismal map, below), can be used to estimate the location and magnitude of the source earthquake; this is especially useful for historical earthquakes where there is no instrumental record. Ground shaking: Ground shaking can be caused in various ways (volcanic tremors, avalanches, large explosions, etc.), but shaking intense enough to cause damage is usually due to rupturing of the earth's crust, known as earthquakes. The intensity of shaking depends on several factors: The "size" or strength of the source event, such as measured by various seismic magnitude scales. The type of seismic wave generated, and its orientation. The depth of the event. The distance from the source event. Site response due to local geology. Site response is especially important, as certain conditions, such as unconsolidated sediments in a basin, can amplify ground motions as much as ten times. Ground shaking: Where an earthquake is not recorded on seismographs, an isoseismal map showing the intensities felt at different areas can be used to estimate the location and magnitude of the quake. Such maps are also useful for estimating the shaking intensity, and thereby the likely level of damage, to be expected from a future earthquake of similar magnitude. In Japan this kind of information is used when an earthquake occurs to anticipate the severity of damage to be expected in different areas. The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source, while sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was nearly 100 km from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area. History: The first simple classification of earthquake intensity was devised by Domenico Pignataro in the 1780s. The first recognisable intensity scale in the modern sense of the word was drawn up by P.N.G. Egen in 1828. However, the first modern mapping of earthquake intensity was made by Robert Mallet, an Irish engineer who was sent by Imperial College, London, to research the December 1857 Basilicata earthquake, also known as The Great Neapolitan Earthquake of 1857. The first widely adopted intensity scale, the Rossi–Forel scale, was introduced in the late 19th century as a 10-grade scale.
In 1902, the Italian seismologist Giuseppe Mercalli created the Mercalli scale, a new 12-grade scale. A very significant improvement was achieved, mainly by Charles Francis Richter during the 1950s, when (1) a correlation was found between seismic intensity and peak ground acceleration (PGA; see the equation that Richter found for California, sketched below), and (2) a definition of the strength of buildings, and a subdivision into groups (called types of buildings), was made. The evaluation of seismic intensity was then based upon the damage grade to a given type of structure. That gave the Mercalli scale, as well as the later European MSK-64 scale, a quantitative element representing the vulnerability of the building type. Since then, the scale has been called the Modified Mercalli intensity scale (MMS), and evaluations of seismic intensity have become more reliable. In addition, more intensity scales have been developed and are used in different parts of the world.
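A hedged sketch of the kind of intensity-acceleration correlation referred to above. The specific coefficients below are the form commonly attributed to Richter's California work, log10(a) = I/3 − 1/2 with a in cm/s²; the article itself only states that such a correlation exists, so treat the exact numbers as an assumption used purely for illustration:

```python
import math

# Commonly quoted Richter-style California relation (assumed here, not stated
# in the article): log10(a) = I/3 - 1/2, with a = peak ground acceleration
# in cm/s^2 and I = Modified Mercalli intensity.
def pga_from_intensity(intensity):
    """Approximate PGA in cm/s^2 for a given Mercalli intensity."""
    return 10 ** (intensity / 3.0 - 0.5)

def intensity_from_pga(pga_cm_s2):
    """Approximate Mercalli intensity for a given PGA in cm/s^2."""
    return 3.0 * (math.log10(pga_cm_s2) + 0.5)

for mmi in (4, 6, 8, 10):
    print(mmi, round(pga_from_intensity(mmi), 1), "cm/s^2")
```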
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**5Beta-Scymnol** 5Beta-Scymnol: 5Beta-Scymnol, also known simply as scymnol, is a synthetic INCI-listed skin conditioning ingredient. The molecule is a steroid derivative that behaves as a hydroxyl radical scavenger and is used for the treatment of skin blemishes such as blocked pores and acne. History: The molecule was identified and isolated from shark tissues by Professor Takuo Kosuge, Shizuoka College of Pharmacy, Shizuoka, Japan during the 1980s. Based on usage as a traditional folk remedy, it was hypothesised the ingredient may be effective for the treatment of scalds, blemishes and acne. Traits: 5Beta-Scymnol is a hydroxyl (OH) free radical scavenger. Scymnol's role in quenching free radicals may play a role in inhibiting acne.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lighttpd** Lighttpd: lighttpd (prescribed pronunciation: "lighty") is an open-source web server optimized for speed-critical environments while remaining standards-compliant, secure and flexible. It was originally written by Jan Kneschke as a proof-of-concept solution to the c10k problem (how to handle 10,000 connections in parallel on one server), but it has since gained worldwide popularity. Its name is a portmanteau of "light" and "httpd". Premise: The low memory footprint (compared to other web servers), small CPU load and speed optimizations make lighttpd suitable for servers that are suffering load problems, or for serving static media separately from dynamic content. lighttpd is free and open-source software and is distributed under the BSD license. It runs natively on Unix-like operating systems, as well as Microsoft Windows. Application support: lighttpd supports the FastCGI, SCGI and CGI interfaces to external programs, allowing web applications written in any programming language to be used with the server. Because PHP is a particularly popular language, its performance has received special attention: lighttpd's FastCGI can be configured to support PHP with opcode caches (like APC) properly and efficiently. Additionally, lighttpd has received attention due to its popularity within the Python, Perl, Ruby and Lua communities. Lighttpd also supports WebDNA, the resilient in-memory database system designed to build database-driven websites. It is a popular web server for the Catalyst and Ruby on Rails web frameworks. Lighttpd does not support ISAPI. Features: Load balancing, CGI, FastCGI, SCGI, HTTP proxy, Servlet AJP, WebSocket tunnel support chroot support Web server event mechanism performance – select(), poll(), and epoll() Support for more efficient event notification schemes like kqueue and epoll Conditional URL rewriting (mod_rewrite) TLS/SSL with SNI support, via OpenSSL, GnuTLS, Mbed TLS, NSS, WolfSSL. Features: Authentication against an LDAP or DBI server RRDtool statistics Rule-based downloading with possibility of a script handling only authentication Server Side Includes support (but not server-side CGI from SSI) Flexible virtual hosting Modules support Lua programming language scripts via mod_magnet WebDAV support HTTP compression using mod_deflate (zlib, brotli, zstd) Light-weight (less than 1 MB) Single-process design with only a few threads. No processes or threads started per connection. Features: HTTP/2 support since lighttpd 1.4.56 HTTP/2 WebSocket support since lighttpd 1.4.65 Limitations: Versions below 1.4.40 do not officially support sending large files from CGI, FastCGI, or proxies unless X-Sendfile is used. This limitation was removed in lighttpd 1.4.40. No HTTP/3 support Usage: Lighttpd was used in the past by several high-traffic websites, including Bloglines, xkcd, Meebo, and YouTube. The Wikimedia Foundation also once ran Lighttpd servers. Due to its relatively small size, it is often used in embedded devices such as GL.iNet and Turris Omnia. It is also used by Git as an HTTP server daemon.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Imidazolidinyl urea** Imidazolidinyl urea: Imidazolidinyl urea is an antimicrobial preservative used in cosmetics. It is chemically related to diazolidinyl urea, which is used in the same way. Imidazolidinyl urea acts as a formaldehyde releaser. Safety: Some people have a contact allergy to imidazolidinyl urea causing dermatitis. Such people are often also allergic to diazolidinyl urea. Chemistry: Imidazolidinyl urea was poorly characterized until recently and the single Chemical Abstracts Service structure assigned to it is probably not the major one in the commercial material. Instead, new data indicate that the hydroxymethyl functional group of each imidazolidine ring is attached to the carbon rather than to the nitrogen atom. Synthesis Imidazolidinyl urea is produced by the chemical reaction of allantoin and formaldehyde in the presence of sodium hydroxide solution and heat. The reaction mixture is then neutralized with hydrochloric acid and evaporated: 2 allantoin + 3 H2C=O → imidazolidinyl urea. Commercial imidazolidinyl urea is a mixture of different formaldehyde addition products including polymers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**VER-3323** VER-3323: VER-3323 is a drug which acts as a selective agonist for both the 5-HT2B and 5-HT2C serotonin receptor subtypes, with moderate selectivity for 5-HT2C, but relatively low affinity for 5-HT2A. It has potent anorectic effects in animal studies.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lanthanum hexaboride** Lanthanum hexaboride: Lanthanum hexaboride (LaB6, also called lanthanum boride and LaB) is an inorganic chemical, a boride of lanthanum. It is a refractory ceramic material that has a melting point of 2210 °C, and is insoluble in water and hydrochloric acid. It is extremely hard, with a Mohs hardness of 9.5. It has a low work function and one of the highest electron emissivities known, and is stable in vacuum. Stoichiometric samples are colored an intense purple-violet, while boron-rich ones (above LaB6.07) are blue. Ion bombardment changes its color from purple to emerald green. LaB6 is a superconductor with a relatively low transition temperature of 0.45 K. Uses: The principal use of lanthanum hexaboride is in hot cathodes, either as a single crystal or as a coating deposited by physical vapor deposition. Hexaborides, such as lanthanum hexaboride (LaB6) and cerium hexaboride (CeB6), have low work functions, around 2.5 eV. They are also somewhat resistant to cathode poisoning. Cerium hexaboride cathodes have a lower evaporation rate at 1700 K than lanthanum hexaboride, but they become equal at temperatures above 1850 K. Cerium hexaboride cathodes have one and a half times the lifetime of lanthanum hexaboride, due to the former's higher resistance to carbon contamination. Hexaboride cathodes are about ten times "brighter" than tungsten cathodes, and have a 10–15 times longer lifetime. Devices and techniques in which hexaboride cathodes are used include electron microscopes, microwave tubes, electron lithography, electron beam welding, X-ray tubes, free electron lasers and several types of electric propulsion technologies. Lanthanum hexaboride slowly evaporates from the heated cathodes and forms deposits on the Wehnelt cylinders and apertures. LaB6 is also used as a size/strain standard in X-ray powder diffraction to calibrate instrumental broadening of diffraction peaks.
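The practical advantage of the low work function can be illustrated with the standard Richardson-Dushman law for thermionic emission, J = A·T²·exp(−W/kT). This law, the tungsten comparison value (~4.5 eV) and the operating temperature are textbook assumptions added here for illustration rather than figures stated in the article; only the ~2.5 eV hexaboride work function is quoted above.

```python
import math

# Richardson-Dushman thermionic emission (illustrative assumption; only the
# ~2.5 eV hexaboride work function comes from the article above).
A = 1.2e6        # ideal Richardson constant, A m^-2 K^-2
K_B = 8.617e-5   # Boltzmann constant, eV/K

def current_density(work_function_eV, temperature_K):
    """Emission current density J = A * T^2 * exp(-W / (k_B * T))."""
    return A * temperature_K**2 * math.exp(-work_function_eV / (K_B * temperature_K))

T = 1800.0  # K, an assumed round-number cathode operating temperature
print("Hexaboride (2.5 eV): %.1e A/m^2" % current_density(2.5, T))
print("Tungsten  (4.5 eV): %.1e A/m^2" % current_density(4.5, T))
```

At the same temperature the lower work function yields several orders of magnitude more emission, which is the qualitative reason hexaboride cathodes are described above as much "brighter" than tungsten ones.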
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bunt (sail)** Bunt (sail): The bunt of a sail is the middle part of it, which is purposely formed into a kind of curved bag, or cavity, so that the sail might receive more wind. It is chiefly used in topsails, for courses are for the most part cut square, or at least with a small allowance, for bunt or compass. Sailors would say, "the bunt holds much leeward wind", meaning that the bunt hangs too much to leeward. The buntlines are small lines fastened to the bottom of the sails, in the middle part of the bolt rope, to the cringle; and so are passed through a small block, seized to the yard. Their use is to trice up the bunt of the sail, to better furl it up. This article incorporates text from a publication now in the public domain: Chambers, Ephraim, ed. (1728). "Bunt". Cyclopædia, or an Universal Dictionary of Arts and Sciences (1st ed.). James and John Knapton, et al.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Goulac** Goulac: Goulac, also known as Glutrin, is a core binder developed from wood pulping. It is made from lignin pitch. The material has a dark colour and is soluble in water. Goulac water was used to make Gallagher sharp sand. It was trademarked in the 1940s. It prevents a chemical reaction between lead arsenate and lime sulphur. When used to make mold cores from sand, it results in a very hard surface after baking; however, the sand can absorb moisture if the core is not used soon after being prepared. Use of Goulac allows the cores to be baked at a lower temperature compared to other types of binders. Glutrin was used in road paving in the early 20th century.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Logical matrix** Logical matrix: A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science. Matrix representation of a relation: If R is a binary relation between the finite indexed sets X and Y (so R ⊆ X × Y), then R can be represented by the logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of M are defined by m_{i,j} = 1 if (x_i, y_j) ∈ R, and m_{i,j} = 0 if (x_i, y_j) ∉ R. In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers: i ranges from 1 to the cardinality (size) of X, and j ranges from 1 to the cardinality of Y. See the article on indexed sets for more detail. Matrix representation of a relation: Example The binary relation R on the set {1, 2, 3, 4} is defined so that aRb holds if and only if a divides b evenly, with no remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because when 3 divides 4, there is a remainder of 1. The following set is the set of pairs for which the relation R holds. {(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}. The corresponding representation as a logical matrix is
1 1 1 1
0 1 0 1
0 0 1 0
0 0 0 1
which includes a diagonal of ones, since each number divides itself. Other examples: A permutation matrix is a (0, 1)-matrix, all of whose columns and rows each have exactly one nonzero element. A Costas array is a special case of a permutation matrix. An incidence matrix in combinatorics and finite geometry has ones to indicate incidence between points (or vertices) and lines of a geometry, blocks of a block design, or edges of a graph. A design matrix in analysis of variance is a (0, 1)-matrix with constant row sums. A logical matrix may represent an adjacency matrix in graph theory: non-symmetric matrices correspond to directed graphs, symmetric matrices to ordinary graphs, and a 1 on the diagonal corresponds to a loop at the corresponding vertex. The biadjacency matrix of a simple, undirected bipartite graph is a (0, 1)-matrix, and any (0, 1)-matrix arises in this way. The prime factors of a list of m square-free, n-smooth numbers can be described as an m × π(n) (0, 1)-matrix, where π is the prime-counting function, and a_{ij} is 1 if and only if the jth prime divides the ith number. This representation is useful in the quadratic sieve factoring algorithm. A bitmap image containing pixels in only two colors can be represented as a (0, 1)-matrix in which the zeros represent pixels of one color and the ones represent pixels of the other color. A binary matrix can be used to check the game rules in the game of Go. The four-valued logic of two bits, transformed by 2 × 2 logical matrices, forms a finite state machine. Some properties: The matrix representation of the equality relation on a finite set is the identity matrix I, that is, the matrix whose entries on the diagonal are all 1, while the others are all 0. More generally, if relation R satisfies I ⊆ R, then R is a reflexive relation. If the Boolean domain is viewed as a semiring, where addition corresponds to logical OR and multiplication to logical AND, the matrix representation of the composition of two relations is equal to the matrix product of the matrix representations of these relations.
This product can be computed in expected time O(n²). Frequently, operations on binary matrices are defined in terms of modular arithmetic mod 2—that is, the elements are treated as elements of the Galois field GF(2) = ℤ2. They arise in a variety of representations and have a number of more restricted special forms. They are applied e.g. in XOR-satisfiability. The number of distinct m-by-n binary matrices is equal to 2^(mn), and is thus finite. Lattice: Let n and m be given and let U denote the set of all logical m × n matrices. Then U has a partial order given by A ≤ B when A_{i,j} ≤ B_{i,j} for all i, j. In fact, U forms a Boolean algebra with the operations AND and OR between two matrices applied component-wise. The complement of a logical matrix is obtained by swapping all zeros and ones for their opposite. Every logical matrix A = (A_{ij}) has a transpose A^T = (A_{ji}). Suppose A is a logical matrix with no columns or rows identically zero. Then the matrix product, using Boolean arithmetic, A^T A contains the n × n identity matrix, and the product A A^T contains the m × m identity. As a mathematical structure, the Boolean algebra U forms a lattice ordered by inclusion; additionally it is a multiplicative lattice due to matrix multiplication. Every logical matrix in U corresponds to a binary relation. These listed operations on U, and ordering, correspond to a calculus of relations, where the matrix multiplication represents composition of relations. Logical vectors: If m or n equals one, then the m × n logical matrix (m_{ij}) is a logical vector or bit string. If m = 1, the vector is a row vector, and if n = 1, it is a column vector. In either case the index equaling 1 is dropped from denotation of the vector. Suppose (P_i), i = 1, 2, …, m, and (Q_j), j = 1, 2, …, n, are two logical vectors. The outer product of P and Q results in an m × n rectangular relation m_{ij} = P_i ∧ Q_j. Logical vectors: A reordering of the rows and columns of such a matrix can assemble all the ones into a rectangular part of the matrix. Let h be the vector of all ones. Then if v is an arbitrary logical vector, the relation R = v h^T has constant rows determined by v. In the calculus of relations such an R is called a vector. A particular instance is the universal relation hh^T. For a given relation R, a maximal rectangular relation contained in R is called a concept in R. Relations may be studied by decomposing into concepts, and then noting the induced concept lattice. Logical vectors: Consider the table of group-like structures, where "unneeded" can be denoted 0, and "required" denoted by 1, forming a logical matrix R. Logical vectors: To calculate elements of RR^T, it is necessary to use the logical inner product of pairs of logical vectors in rows of this matrix. If this inner product is 0, then the rows are orthogonal. In fact, small category is orthogonal to quasigroup, and groupoid is orthogonal to magma. Consequently there are zeros in RR^T, and it fails to be a universal relation. Row and column sums: Adding up all the ones in a logical matrix may be accomplished in two ways: first summing the rows or first summing the columns. When the row sums are added, the sum is the same as when the column sums are added. In incidence geometry, the matrix is interpreted as an incidence matrix with the rows corresponding to "points" and the columns as "blocks" (generalizing lines made of points). A row sum is called its point degree, and a column sum is the block degree.
The sum of point degrees equals the sum of block degrees. An early problem in the area was "to find necessary and sufficient conditions for the existence of an incidence structure with given point degrees and block degrees; or in matrix language, for the existence of a (0, 1)-matrix of type v × b with given row and column sums". This problem is solved by the Gale–Ryser theorem.
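A small, hedged sketch (not from the article) of the divisibility example above and of composition of relations as a Boolean matrix product, using NumPy purely for convenience:

```python
import numpy as np

X = [1, 2, 3, 4]

def relation_matrix(pairs, rows, cols):
    """Build the logical matrix of a relation given as a set of pairs."""
    M = np.zeros((len(rows), len(cols)), dtype=bool)
    for a, b in pairs:
        M[rows.index(a), cols.index(b)] = True
    return M

# "a divides b" on {1, 2, 3, 4}, as in the example above.
divides = {(a, b) for a in X for b in X if b % a == 0}
R = relation_matrix(divides, X, X)
print(R.astype(int))

# Composition R;R corresponds to the Boolean matrix product: integer matrix
# multiplication followed by thresholding plays the role of OR/AND arithmetic.
composition = (R.astype(int) @ R.astype(int)) > 0
print(composition.astype(int))   # equals R here, since divisibility is transitive

# The transpose gives the converse relation ("b is divisible by a").
print(R.T.astype(int))
```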
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**H&E stain** H&E stain: Hematoxylin and eosin stain (or haematoxylin and eosin stain or hematoxylin-eosin stain; often abbreviated as H&E stain or HE stain) is one of the principal tissue stains used in histology. It is the most widely used stain in medical diagnosis and is often the gold standard. For example, when a pathologist looks at a biopsy of a suspected cancer, the histological section is likely to be stained with H&E. H&E stain: H&E is the combination of two histological stains: hematoxylin and eosin. The hematoxylin stains cell nuclei a purplish blue, and eosin stains the extracellular matrix and cytoplasm pink, with other structures taking on different shades, hues, and combinations of these colors. Hence a pathologist can easily differentiate between the nuclear and cytoplasmic parts of a cell, and additionally, the overall patterns of coloration from the stain show the general layout and distribution of cells and provide a general overview of a tissue sample's structure. Thus, pattern recognition, both by expert humans themselves and by software that aids those experts (in digital pathology), provides histologic information. H&E stain: This stain combination was first introduced in 1877 by the chemist N. Wissozky at the Kasan Imperial University in Russia. Uses: The H&E staining procedure is the principal stain in histology in part because it can be done quickly, is not expensive, and stains tissues in such a way that a considerable amount of microscopic anatomy is revealed, and can be used to diagnose a wide range of histopathologic conditions. The results from H&E staining are not overly dependent on the chemical used to fix the tissue or slight inconsistencies in laboratory protocol, and these factors contribute to its routine use in histology. H&E staining does not always provide enough contrast to differentiate all tissues, cellular structures, or the distribution of chemical substances, and in these cases more specific stains and methods are used. Method of application: There are many ways to prepare the hematoxylin solutions (formulations) used in the H&E procedure; in addition, there are many laboratory protocols for producing H&E stained slides, some of which may be specific to a certain laboratory. Although there is no standard procedure, the results by convention are reasonably consistent in that cell nuclei are stained blue and the cytoplasm and extracellular matrix are stained pink. Histology laboratories may also adjust the amount or type of staining for a particular pathologist. After tissues have been collected (often as biopsies) and fixed, they are typically dehydrated and embedded in melted paraffin wax; the resulting block is mounted on a microtome and cut into thin slices. The slices are affixed to microscope slides, at which point the wax is removed with a solvent, and the tissue slices attached to the slides are rehydrated and are ready for staining. Alternatively, H&E stain is the most used stain in Mohs surgery, in which tissues are typically frozen, cut on a cryostat (a microtome that cuts frozen tissue), fixed in alcohol, and then stained. The H&E staining method involves application of haematoxylin mixed with a metallic salt, or mordant, often followed by a rinse in a weak acid solution to remove excess staining (differentiation), followed by bluing in mildly alkaline water. After the application of haematoxylin, the tissue is counterstained with eosin (most commonly eosin Y).
Results: Hematoxylin principally colors the nuclei of cells blue or dark-purple, along with a few other tissues, such as keratohyalin granules and calcified material. Eosin stains the cytoplasm and some other structures, including extracellular matrix such as collagen, in up to five shades of pink. The eosinophilic (substances that are stained by eosin) structures are generally composed of intracellular or extracellular proteins. The Lewy bodies and Mallory bodies are examples of eosinophilic structures. Most of the cytoplasm is eosinophilic and is rendered pink. Red blood cells are stained intensely red. Mode of action: Although hematein, an oxidized form of hematoxylin, is the active colorant (when combined with a mordant), the stain is still referred to as hematoxylin. Hematoxylin, when combined with a mordant (most commonly aluminum alum), is often considered to "resemble" a basic, positively charged, or cationic stain. Eosin is an anionic (negatively charged) and acidic stain. The staining of nuclei by hemalum (a combination of aluminum ions and hematein) is ordinarily due to binding of the dye-metal complex to DNA, but nuclear staining can be obtained after extraction of DNA from tissue sections. The mechanism is different from that of nuclear staining by basic (cationic) dyes such as thionine or toluidine blue. Staining by basic dyes occurs only from solutions that are less acidic than hemalum, and it is prevented by prior chemical or enzymatic extraction of nucleic acids. There is evidence to indicate that co-ordinate bonds, similar to those that hold aluminium and hematein together, bind the hemalum complex to DNA and to carboxy groups of proteins in the nuclear chromatin. The structures do not have to be acidic or basic to be called basophilic and eosinophilic; the terminology is based on the affinity of cellular components for the dyes. Other colors, e.g. yellow and brown, can be present in the sample; they are caused by intrinsic pigments such as melanin. Basal laminae need to be stained by PAS stain or some silver stains if they are to be clearly visible. Reticular fibers also require silver stain. Hydrophobic structures also tend to remain clear; these are usually rich in fats, e.g. adipocytes, myelin around neuron axons, and Golgi apparatus membranes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trichloroacetic acid** Trichloroacetic acid: Trichloroacetic acid (TCA; TCAA; also known as trichloroethanoic acid) is an analogue of acetic acid in which the three hydrogen atoms of the methyl group have all been replaced by chlorine atoms. Salts and esters of trichloroacetic acid are called trichloroacetates. Synthesis: It is prepared by the reaction of chlorine with acetic acid in the presence of a suitable catalyst such as red phosphorus. This reaction is Hell–Volhard–Zelinsky halogenation. CH3COOH + 3 Cl2 → CCl3COOH + 3 HClAnother route to trichloroacetic acid is the oxidation of trichloroacetaldehyde. Use: It is widely used in biochemistry for the precipitation of macromolecules, such as proteins, DNA, and RNA. TCA and DCA are both used in cosmetic treatments (such as chemical peels and tattoo removal) and as topical medication for chemoablation of warts, including genital warts. It can kill normal cells as well. It is considered safe for use for this purpose during pregnancy. Use: The sodium salt (sodium trichloroacetate) was used as an herbicide starting in the 1950s but regulators removed it from the market in the late 1980s and early 1990s. Environmental and health concerns: According to the European Chemicals Agency, "This substance causes severe skin burns and eye damage, is very toxic to aquatic life and has long lasting toxic effects."Trichloroacetic acid was placed on the California Proposition 65 List in 2013 "as a chemical known to the state to cause cancer". History: The discovery of trichloroacetic acid by Jean-Baptiste Dumas in 1839 delivered a striking example to the slowly evolving theory of organic radicals and valences. The theory was contrary to the beliefs of Jöns Jakob Berzelius, starting a long dispute between Dumas and Berzelius. Popular culture: In the 1958 film The Blob, a bottle of trichloroacetic acid is tossed at the Blob in a futile attempt to fend it off.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tantalum diselenide** Tantalum diselenide: Tantalum diselenide is a compound made of tantalum and selenium atoms, with chemical formula TaSe2, which belongs to the family of transition metal dichalcogenides. In contrast to molybdenum disulfide (MoS2) or rhenium disulfide (ReS2), tantalum diselenide does not occur spontaneously in nature, but it can be synthesized. Depending on the growth parameters, different types of crystal structures can be stabilized. Tantalum diselenide: In the 2010s, interest in this compound rose due to its ability to show a charge density wave (CDW), which depends on the crystal structure, at temperatures up to 600 K, while other transition metal dichalcogenides normally need to be cooled down to a few hundred kelvin, or even below, to show the same behaviour. Structure: Like other TMDs, TaSe2 is a layered compound, with a central hexagonal tantalum lattice sandwiched between two layers of selenium atoms, also with a hexagonal structure. Unlike other 2D materials such as graphene, which is atomically thin, TMDs are composed of trilayers of atoms strongly bonded to each other, stacked above other trilayers and held together by van der Waals forces. TMDs can be easily exfoliated. Structure: The most studied crystal structures of TaSe2 are the 1T and 2H phases, which feature, respectively, octahedral and trigonal prismatic symmetries. However, it is also possible to synthesize the 3R phase or the 1H phase. 1T Phase In the 1T phase, selenium atoms show an octahedral symmetry and the relative orientation of the selenium atoms in the topmost and bottommost layers is opposed. On a macroscopic scale, the sample shows a gold colour. The lattice parameters are a = b = 3.48 Å, while c = 0.627 nm. Structure: Depending on the temperature, it shows different types of charge density waves (CDW): an incommensurate CDW (ICDW) between 600 K and 473 K and a commensurate CDW (CCDW) below 473 K. In the commensurate CDW, the resulting superlattice shows a √13 × √13 reconstruction, often referred to as the star of David (SOD), with respect to the lattice parameter (a = b) of undistorted TaSe2 (above 600 K). Film thickness can also influence the CDW transition temperature: the thinner the film, the lower the transition temperature from ICDW to CCDW. In the 1T phase the single trilayers are always stacked in the same geometry, as shown in the corresponding image. Structure: 2H Phase The 2H phase is based on a configuration of selenium atoms characterized by a trigonal prismatic symmetry and an equal relative orientation in the topmost and bottommost layers. The lattice parameters are a = b = 3.43 Å, while c = 1.27 nm. Depending on the temperature, it shows different types of charge density wave: an incommensurate CDW (ICDW) between 122 K and 90 K and a commensurate CDW (CCDW) below 90 K. The lattice distortion below 90 K gives rise to a CCDW that makes a 3 × 3 reconstruction with respect to the non-distorted lattice parameter (a = b) of 2H TaSe2 (above 122 K). Structure: In the 2H phase successive trilayers are stacked with opposed orientations, as shown in the corresponding image. Through molecular beam epitaxy it is possible to grow one single trilayer of 2H TaSe2, also known as the 1H phase. Essentially, the 2H phase can be seen as a stacking of 1H layers with opposed relative orientations. In the 1H phase the ICDW transition temperature is raised to 130 K.
Properties: Electric and magnetic: TaSe2 exhibits different properties depending on the polytype (2H or 1T), even though the chemical composition is unchanged. Properties: 1T phase: At low temperature the resistivity is metal-like, but it starts decreasing at higher temperatures; a peak appears at approximately 473 K, resembling semiconducting behaviour. The resistivity of the 1T phase is almost two orders of magnitude higher than that of the 2H phase. The magnetic susceptibility of the 1T phase has no peaks at low temperature and remains nearly constant up to 473 K (the ICDW transition temperature), where it jumps to slightly higher values. The 1T phase is diamagnetic. Properties: 2H phase: The resistivity depends linearly on temperature above 110 K; below this threshold it shows non-linear behaviour. This abrupt change in R(T) at 110 K might be related to the onset of some kind of magnetic ordering in TaSe2: ordered spins scatter electrons less efficiently, which increases the electron mobility and yields a faster drop in resistivity than a linear trend would give. Properties: The magnetic susceptibility of the 2H polytype depends only slightly on temperature and peaks in the range 110–120 K, rising linearly below 110 K and falling above it. This maximum in the 2H phase is related to the onset of the CDW at about 120 K. The 2H phase is Pauli paramagnetic. The Hall coefficient RH is almost independent of temperature above 120 K; below this threshold it starts to drop, reaching zero at 90 K. Between 4 and 90 K the coefficient RH is negative, with a minimum at approximately 35 K. Electronic: 1T phase: Bulk 1T TaSe2 is metallic, while the single monolayer (an Se–Ta–Se trilayer with octahedral symmetry) is observed to be insulating with a band gap of 0.2 eV, in contrast with theoretical calculations, which predict it to be metallic like the bulk. 2H phase: Bulk 2H TaSe2 is metallic, as is the single monolayer (an Se–Ta–Se trilayer with trigonal prismatic symmetry), also known as the 1H phase. Properties: Optical: The nonlinear refractive index of tantalum diselenide can be investigated by preparing atomically thin flakes of TaSe2 with the liquid-phase exfoliation method. Since this technique leaves the flakes dispersed in alcohol, the refractive index is retrieved through the optical Kerr relation n = n0 + n2·I, where n0 = 1.37 is the linear refractive index of ethanol, n2 is the nonlinear refractive index of TaSe2 and I is the incident intensity of the laser beam. Using different wavelengths, in particular λ = 532 nm and λ = 671 nm, it is possible to measure both n2 and χ(3), the third-order nonlinear susceptibility. Both quantities depend on I, because the higher the laser intensity, the more the sample heats up, which changes the refractive index. At 532 nm, n2 ≈ 10−7 cm2/W and χ(3) ≈ 1.37 × 10−7 e.s.u.; at 671 nm, n2 ≈ 3.3 × 10−7 cm2/W and χ(3) ≈ 1.58 × 10−7 e.s.u.
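A minimal sketch of the Kerr relation above, using the n2 values quoted in the text; the incident intensity chosen here is a hypothetical example value, not one taken from the source:

```python
# Optical Kerr relation n = n0 + n2 * I for TaSe2 flakes dispersed in ethanol.
# n0: linear index of ethanol; n2 values quoted in the text (cm^2/W).
# The intensity below is an arbitrary illustrative value, not from the source.

N0_ETHANOL = 1.37

n2_values = {
    "532 nm": 1.0e-7,  # cm^2/W (order of magnitude quoted in the text)
    "671 nm": 3.3e-7,  # cm^2/W
}

def effective_index(n2_cm2_per_W: float, intensity_W_per_cm2: float) -> float:
    """Intensity-dependent refractive index via the Kerr relation."""
    return N0_ETHANOL + n2_cm2_per_W * intensity_W_per_cm2

I = 1e5  # W/cm^2, hypothetical example intensity
for wavelength, n2 in n2_values.items():
    print(f"{wavelength}: n = {effective_index(n2, I):.3f}")
# 532 nm: n = 1.380 ; 671 nm: n = 1.403  (for this illustrative intensity)
```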
Properties: Superconductivity: Bulk 2H TaSe2 has been shown to be superconducting below 0.14 K, while in the single monolayer (1H phase) the critical temperature can be enhanced by up to about 1 K. Although the 1T phase typically does not show any superconducting behaviour, a TaSe2−xTex compound can be formed by doping with tellurium atoms. The superconducting character of this compound depends on the tellurium fraction (x can vary in the range 0 < x < 2). The superconducting state arises when the Te fraction lies between roughly 0.5 and 1.3; the optimal composition is x ≈ 0.6, corresponding to a critical temperature of 1.6 K. In this optimal configuration, the CDW is totally suppressed by the presence of tellurium. Properties: Lubricant: Unlike MoS2, which is widely employed as a lubricant in many different mechanical applications, TaSe2 has not shown the same properties, with an average friction coefficient of 0.15. Under friction tests, such as the Barker pendulum, it shows an initial friction coefficient of 0.2–0.3, which quickly increases as the number of oscillations of the pendulum increases (whereas for MoS2 it remains almost constant throughout the oscillations). Synthesis: There are different methods for synthesizing tantalum diselenide; depending on the growth parameters, different polytypes can be stabilized. Synthesis: Chemical vapor transport: In general, TMDs can be synthesized through a chemical vapor transport technique according to the following chemical equation (a small numerical sketch of this stoichiometry follows the synthesis overview below): ((n − 1)/n) M + (1/n) MCl5 + 2 X → MX2 + (5/(2n)) Cl2, where M is the chosen transition metal (Ta, Mo, etc.) and X the chosen chalcogen (Se, Te, S, etc.). The parameter n, which governs the crystal growth, can vary between 3 and 50 and is selected so that the growth is optimized. During the growth, which may last 2–7 days, the temperature is first raised to Th = 600–900 °C and then lowered to Tc = 530–800 °C. After the growth is complete, the crystals are cooled to room temperature. Depending on the value of Tc, either the 2H or the 1T phase can be stabilized: in particular, using tantalum and selenium with Tc < 800 °C, only the 2H phase is obtained, while the 1T phase requires a higher Tc. This allows the desired phase of the chosen TMD to be grown selectively. Synthesis: Chemical vapor deposition: Using TaCl5 and Se powders as precursors and a gold substrate, the 2H phase can be stabilized. The gold substrate is heated to 930 °C, while the TaCl5 and Se sources are heated to 650 °C and 300 °C, respectively. Argon and hydrogen are used as carrier gases. Once the growth is complete, the sample is cooled to room temperature. Synthesis: Mechanical exfoliation: Since the individual trilayers are held together only by weak van der Waals forces, atomically thin layers of tantalum diselenide can easily be peeled off bulk TaSe2 crystals using adhesive (scotch) or carbon tape. With this method it is possible to isolate a few layers, or even a single layer, of TaSe2, which can then be deposited onto other substrates, such as SiO2, for further characterization. Synthesis: Molecular beam epitaxy: Pure tantalum is sublimated directly onto a graphene bilayer in a selenium atmosphere. Depending on the temperature Ts of the substrate (the graphene bilayer), the 1T or the 2H phase can be stabilized: at about 450 °C the 2H phase is favoured, while at about 560 °C the 1T phase is stabilized. This growth method is suitable only for atomically thin or few-layer films, not for bulk crystals. Synthesis: Liquid phase exfoliation: Bulk crystals of TaSe2 (or any other TMD) are placed in pure alcohol and the mixture is sonicated in an ultrasonic device at a power of at least 450 W for 15 hours. In this way the van der Waals forces that hold the individual monolayers of TaSe2 together are overcome, resulting in atomically thin flakes of tantalum diselenide.
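As anticipated above, here is a minimal sketch of the chemical vapor transport stoichiometry for M = Ta and X = Se; the value of n and the target amount of TaSe2 are arbitrary illustrative choices:

```python
# Chemical vapor transport stoichiometry (from the relation above):
#   ((n-1)/n) M + (1/n) MCl5 + 2 X  ->  MX2 + (5/(2n)) Cl2
# Illustrative sketch for M = Ta, X = Se; n and the target amount are arbitrary.

def cvt_charge(n: float, n_TaSe2_mol: float) -> dict:
    """Moles of Ta, TaCl5 and Se needed to form a given amount of TaSe2."""
    return {
        "Ta_mol":    (n - 1) / n * n_TaSe2_mol,
        "TaCl5_mol": 1 / n * n_TaSe2_mol,
        "Se_mol":    2 * n_TaSe2_mol,
        "Cl2_mol":   5 / (2 * n) * n_TaSe2_mol,  # released chlorine transport agent
    }

# Example: n = 10 (within the quoted 3-50 range), targeting 0.05 mol of TaSe2
print(cvt_charge(n=10, n_TaSe2_mol=0.05))
# {'Ta_mol': 0.045, 'TaCl5_mol': 0.005, 'Se_mol': 0.1, 'Cl2_mol': 0.0125}
```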
Research: Optoelectronics: Since 2H TaSe2 shows very large optical absorption and emission of light at approximately 532 nm, it might be used for the development of new devices. In particular, the possibility of transferring energy between TaSe2 and other TMDs, especially MoS2, has been demonstrated. This process can be accomplished in a non-radiative, resonant way by exploiting the strong coupling between the TaSe2 emission and the excitonic absorption of the other TMD. Moreover, TaSe2 is a promising material for the injection of hot carriers into semiconducting materials and other non-metallic TMDs, thanks to the long lifetime of the generated photoelectrons. Research: All-optical switch and transfer of information: Exploiting the dependence of the nonlinear effects of TaSe2 on the intensity I of the incident laser beam, it is possible to build an all-optical switch using two lasers that operate at different wavelengths and intensities (a toy numerical sketch of this scheme is given at the end of this section). In particular, a high-intensity laser at 671 nm is used to modulate a low-intensity signal at 532 nm. Since a minimum value of I is required to trigger the nonlinear effects, the low-intensity signal cannot trigger them on its own. When the high-intensity beam (λ1) is coupled with the low-intensity signal (λ2), however, nonlinear effects arise at both λ1 and λ2, so the nonlinear response of the low-intensity signal (λ2) can be switched by acting on the high-intensity one (λ1). Research: Exploiting the coupling between λ1 and λ2 makes it possible to transfer information from the high-intensity beam to the low-intensity one; with this method, the delay for transferring information from λ1 to λ2 is around 0.6 s. Research: Spin-orbit torque devices: Spin-orbit torque and spin-to-charge conversion devices are usually built by interfacing a ferromagnetic layer with a bulk heavy transition metal, such as platinum. However, these effects take place mainly at the interface rather than in the platinum bulk, which introduces heat dissipation due to ohmic losses. Theoretical and DFT simulations suggest that interfacing a 1T TaSe2 monolayer with cobalt might lead to better performance than the usual platinum-based devices. Recent experiments showed that the spin-orbit scattering length of TaSe2 is around 17 nm, comparable with that of platinum (12 nm). This suggests that tantalum diselenide could be used in new 2D spintronic devices based on the spin Hall effect. Research: Hydrogen evolution reaction (HER): DFT and AIMD simulations suggest that a disordered stacking of TaSe2 and TaS2 flakes could be used to develop a new, efficient and cheaper cathode for the extraction of H2 from other chemical compounds.
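As a toy illustration of the all-optical switching scheme described above (the intensity threshold and beam intensities are hypothetical placeholders, not values from the source), the probe beam experiences the nonlinear response only when the pump is on:

```python
# Toy model of the pump-probe all-optical switch described above.
# Idea: the Kerr-type nonlinearity only turns on when the *total* incident
# intensity exceeds a threshold, so a strong 671 nm pump gates the response
# seen by a weak 532 nm probe. All numbers here are hypothetical placeholders.

I_THRESHOLD = 1.0e5   # W/cm^2, assumed minimum intensity for nonlinear effects
I_PROBE_532 = 1.0e3   # W/cm^2, weak signal beam (below threshold on its own)
I_PUMP_671  = 5.0e5   # W/cm^2, strong modulating beam

def probe_is_modulated(pump_on: bool) -> bool:
    """The probe sees the nonlinear response only if the combined intensity
    of probe + pump exceeds the activation threshold."""
    total = I_PROBE_532 + (I_PUMP_671 if pump_on else 0.0)
    return total >= I_THRESHOLD

for pump_on in (False, True):
    state = "modulated" if probe_is_modulated(pump_on) else "unaffected"
    print(f"pump {'on ' if pump_on else 'off'} -> 532 nm probe {state}")
# pump off -> probe unaffected; pump on -> probe modulated (switch closed)
```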